Dan Sutherland, a senior adviser to The Chertoff Group's cybersecurity business, said organizations developing and deploying artificial intelligence tools should establish governance foundations now as regulatory frameworks continue to evolve.

As organizations work to navigate a rapidly evolving AI regulatory landscape, governance and risk management remain central considerations for industry leaders. These broader questions surrounding AI oversight and responsible deployment continue to shape conversations across government and the contracting community.
In an article published Tuesday on the company’s website, Sutherland said companies should prepare transparency reports outlining data sources and AI usage; perform security and safety risk assessments; develop policies to manage AI development and deployment; and establish executive governance forums that can make rapid, risk-based decisions. He described the approach as “Fast GRC,” a governance, risk and compliance model designed to keep pace with the speed of AI innovation.
What Is the ‘Fast GRC’ Approach?
Sutherland said traditional legal and compliance processes often take months, which may not align with the pace of AI development and deployment. Under the Fast GRC model, companies can accelerate governance decisions through reusable templates, cross-functional review groups and executive support.
He noted that this structure enables leadership teams to quickly evaluate risks and benefits, even as AI products evolve during development.
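The article stays at the conceptual level, but the mechanics are easy to picture as policy-as-code. The Python sketch below is a hypothetical illustration of a fast-track review built on reusable templates; every identifier in it (ReviewTemplate, AIUseCase, fast_track_review, Decision) is invented here and does not come from Sutherland or The Chertoff Group.

```python
# Hypothetical sketch of a "Fast GRC" review flow; class and function
# names are invented for illustration and do not come from the article.
from dataclasses import dataclass, field
from enum import Enum


class Decision(Enum):
    APPROVE = "approve"
    APPROVE_WITH_CONDITIONS = "approve_with_conditions"
    ESCALATE = "escalate_to_executive_forum"


@dataclass
class ReviewTemplate:
    """Reusable checklist so each review starts from precedent, not from scratch."""
    name: str
    required_checks: list[str] = field(default_factory=list)


@dataclass
class AIUseCase:
    description: str
    completed_checks: set[str] = field(default_factory=set)
    novel_risk_flags: list[str] = field(default_factory=list)


def fast_track_review(use_case: AIUseCase, template: ReviewTemplate) -> Decision:
    """Render a rapid, risk-based decision from a reusable template.

    Anything the template cannot answer routes to the executive governance
    forum instead of stalling in a months-long sequential review.
    """
    if use_case.novel_risk_flags:
        return Decision.ESCALATE  # genuinely new risk: the forum decides, quickly
    missing = [c for c in template.required_checks
               if c not in use_case.completed_checks]
    if missing:
        return Decision.APPROVE_WITH_CONDITIONS  # proceed once known checks close
    return Decision.APPROVE


template = ReviewTemplate("genai-customer-tools",
                          ["data-source-transparency", "security-assessment"])
case = AIUseCase("chatbot for claims triage",
                 completed_checks={"security-assessment"})
print(fast_track_review(case, template))  # Decision.APPROVE_WITH_CONDITIONS
```

The design choice worth noting is that escalation is itself a fast path: novelty routes to the executive forum for a decision rather than restarting a sequential legal review.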
Sutherland also described a guardrail-based governance model in which business units can operate freely within established safety and security boundaries.
“Under this approach, when the business attempts something new, outside of the existing guardrails, the GRC, legal and other teams can quickly perform the necessary assessments that will then result in modifying or updating the existing guardrails,” he said.
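Purely as an illustration of that quoted mechanism, and again with invented identifiers rather than anything The Chertoff Group has published, the guardrail model can be sketched as a boundary check whose escalation path feeds updated guardrails back into the boundary:

```python
# Hypothetical guardrail-governance sketch; identifiers are invented
# for illustration and are not The Chertoff Group's.
from dataclasses import dataclass


@dataclass
class Guardrails:
    """Safety and security boundaries the business may operate within freely."""
    approved_data_classes: set[str]
    approved_model_uses: set[str]


def grc_assessment_passes(data_class: str, model_use: str) -> bool:
    """Stub for the rapid cross-functional assessment the quote describes."""
    return False  # placeholder: a real version would run safety/security checks


def attempt_activity(data_class: str, model_use: str, g: Guardrails) -> str:
    # Inside the boundaries: the business proceeds with no new review.
    if data_class in g.approved_data_classes and model_use in g.approved_model_uses:
        return "proceed"
    # Outside the boundaries: GRC and legal assess quickly, and if the
    # activity is acceptable, the guardrails themselves are updated so the
    # next similar attempt no longer triggers a review.
    if grc_assessment_passes(data_class, model_use):
        g.approved_data_classes.add(data_class)
        g.approved_model_uses.add(model_use)
        return "proceed_under_updated_guardrails"
    return "blocked_pending_further_review"
```

The point the quote emphasizes, and the sketch mirrors, is that a successful assessment does not merely approve one activity; it widens the guardrails so the next similar attempt proceeds without review.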
What Are the Emerging AI Regulatory Approaches?
In this piece, Sutherland outlined three broad approaches shaping AI regulation: restrictionist, pro-innovation and guardrails.
He noted that the restrictionist approach supports heavy oversight to address safety and security risks, while the pro-innovation approach favors minimal regulation to accelerate technological development. The guardrails approach calls for targeted transparency and sector-specific rules without broadly constraining innovation.
How Is AI Regulation Taking Shape?
Sutherland said the AI regulatory landscape remains fragmented, with states such as California and New York advancing differing laws.
He also cited President Trump’s executive order, which he said could result in federal action to address restrictive state laws.
In the short term, he said companies must navigate multiple jurisdictions and quasi-regulatory pressures, including emerging industry standards and litigation outcomes.