COMPEL Certification Body of Knowledge — Module 1.5: Governance, Risk, and Compliance for AI
Article 2 of 10
The regulatory environment for artificial intelligence (AI) is no longer emerging — it is arriving. The European Union (EU) AI Act is being enforced. The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) is being adopted across industries. Sector-specific regulators in financial services, healthcare, and the public sector are publishing AI-specific guidance with increasing frequency and specificity. Transformation leaders who wait for the regulatory landscape to "settle" before building governance will find themselves perpetually behind.
This article maps the current regulatory landscape, identifies the trajectory of regulatory development, and provides the knowledge foundation that transformation leaders need to build governance frameworks that are both compliant today and adaptable to tomorrow's requirements. As established in Article 1: The AI Governance Imperative, governance enables innovation — and understanding what regulations require is the first step toward building governance that works.
The EU AI Act: The Global Standard-Setter
The EU AI Act, formally adopted in 2024, is the most comprehensive AI-specific legislation in the world. Its influence extends far beyond the EU's borders through the "Brussels Effect" — the tendency of EU regulation to set de facto global standards because multinational organizations find it more efficient to adopt a single, stringent standard than to maintain different practices for different jurisdictions.
Risk-Based Classification
The Act establishes a four-tier risk classification system:
Prohibited AI Practices include social scoring of individuals, real-time remote biometric identification in public spaces (with narrow law enforcement exceptions), AI systems that exploit vulnerable groups, and subliminal manipulation techniques. Organizations deploying AI in this category face the highest penalties: up to 35 million euros or 7 percent of global annual turnover, whichever is higher.
High-Risk AI Systems are subject to the most detailed governance requirements. These include AI used in:
- Critical infrastructure management (energy, water, transport)
- Education and vocational training (admissions, assessments)
- Employment (recruitment, performance evaluation, termination decisions)
- Essential services access (credit scoring, insurance pricing, emergency services dispatch)
- Law enforcement (risk assessment, evidence evaluation)
- Migration and asylum (application processing, border control)
- Administration of justice (sentencing support, legal research AI)
High-risk systems must satisfy requirements for risk management systems, data governance, technical documentation, record-keeping, transparency and user information, human oversight, accuracy and robustness, and cybersecurity.
Limited Risk AI Systems — primarily chatbots and AI-generated content — face transparency obligations. Users must be informed when they are interacting with an AI system, and AI-generated content must be labeled as such.
Minimal Risk AI Systems — such as spam filters or AI-powered video games — face no specific regulatory requirements, though voluntary codes of conduct are encouraged.
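To make the classification concrete, here is a minimal Python sketch of the four tiers and a triage lookup. The tier assignments and example use cases are illustrative simplifications of the Act's annexes, not a legal determination; any real classification requires counsel review.

```python
from enum import Enum

class RiskTier(Enum):
    """EU AI Act risk tiers, ordered from most to least restricted."""
    PROHIBITED = "prohibited"   # banned outright
    HIGH = "high"               # full governance obligations
    LIMITED = "limited"         # transparency obligations
    MINIMAL = "minimal"         # voluntary codes of conduct

# Illustrative use-case triage table -- a simplification of the Act's
# annexes, not a substitute for formal legal classification.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "recruitment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the presumptive tier. Unknown use cases default to HIGH
    pending formal review -- a conservative, fail-safe posture."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(triage("credit_scoring"))   # RiskTier.HIGH
print(triage("new_use_case"))     # RiskTier.HIGH (conservative default)
```

The conservative default matters: treating an unclassified use case as high-risk until reviewed is cheaper than discovering a misclassification during a conformity assessment.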
Conformity Assessment and Documentation
High-risk AI systems require conformity assessment before deployment, which may be conducted internally or by a third-party notified body, depending on the specific use case. The documentation requirements are extensive: technical documentation must describe the system's intended purpose, design specifications, training data characteristics, testing and validation results, performance metrics, and risk management measures.
These documentation requirements have direct implications for governance frameworks. Organizations that establish documentation standards early — as part of their governance architecture — will produce conformity assessment evidence as a byproduct of normal operations. Organizations that bolt documentation on after the fact will find it expensive, error-prone, and perpetually incomplete.
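One way to treat documentation as a byproduct of operations is to capture the Act's required themes in a structured record from day one. The sketch below is a minimal, hypothetical schema: the field names are ours, not regulatory terms of art, and a production registry would add versioning, review workflow, and access control.

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    """Minimal record mirroring the high-risk documentation themes.
    Field names are illustrative, not taken from the Act's text."""
    system_name: str
    intended_purpose: str
    design_specifications: str
    training_data_characteristics: str  # provenance, coverage, known gaps
    testing_results: dict = field(default_factory=dict)
    performance_metrics: dict = field(default_factory=dict)
    risk_management_measures: list = field(default_factory=list)

    def missing_fields(self) -> list:
        """List empty sections -- a simple completeness gate to run
        before attempting a conformity assessment."""
        return [name for name, value in vars(self).items() if not value]

doc = TechnicalDocumentation(
    system_name="credit-risk-scorer",
    intended_purpose="Pre-screen consumer credit applications",
    design_specifications="",  # still to be written
    training_data_characteristics="2019-2024 application data, EU only",
)
print(doc.missing_fields())
# ['design_specifications', 'testing_results',
#  'performance_metrics', 'risk_management_measures']
```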
Timeline and Enforcement
The Act's provisions phase in over a staged timeline. Prohibitions on unacceptable AI practices took effect on February 2, 2025. Requirements for general-purpose AI (GPAI) models apply from August 2, 2025. Obligations for high-risk AI systems take effect August 2, 2026, with certain Annex I high-risk systems given until August 2, 2027. The European AI Office coordinates enforcement, with national market surveillance authorities responsible for implementation within member states.
The Act also establishes specific obligations for providers of GPAI models, including technical documentation, a copyright compliance policy, and a publicly available summary of the content used for model training. GPAI models classified as presenting systemic risk face additional obligations, including model evaluation, adversarial testing, serious incident reporting, and cybersecurity protections.
Implications for Non-EU Organizations
The AI Act applies to any organization that places AI systems on the EU market or whose AI system outputs are used within the EU, regardless of where the organization is headquartered. This extraterritorial reach means that most multinational enterprises must comply with the Act's requirements, even if they have no physical presence in the EU. The practical implication: the AI Act is a global regulation for any organization operating at scale.
The NIST AI Risk Management Framework
While the EU AI Act is regulation — binding and enforceable — the NIST AI RMF, published in January 2023, is a voluntary framework. Its influence, however, is substantial. Federal agencies increasingly reference it in procurement requirements, industry associations adopt it as a baseline, and organizations use it as the architectural foundation for their internal governance programs.
The Four Core Functions
The NIST AI RMF organizes AI risk management into four functions:
Govern establishes and maintains the organizational structures, policies, processes, and accountability mechanisms for managing AI risk. This function addresses culture, risk appetite, roles and responsibilities, and stakeholder engagement. It is the foundation upon which the other three functions rest.
Map identifies the context in which AI systems operate — their intended uses, stakeholders, potential impacts, and the specific risks they may introduce. Mapping is the analytical work of understanding what could go wrong and for whom.
Measure employs quantitative and qualitative methods to analyze, assess, and track identified AI risks. This includes testing for bias, evaluating performance across demographic groups, and monitoring for model drift.
Manage allocates resources and implements actions to address mapped and measured risks. This includes mitigation strategies, risk transfer, risk acceptance, and incident response.
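The four functions are best read as a continuous loop rather than a one-time sequence. The sketch below is our own schematic illustration of that loop, using invented record types and thresholds; NIST does not prescribe any particular data model.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """A risk item moving through the Map -> Measure -> Manage loop,
    under policies set by the Govern function."""
    description: str
    affected_group: str           # Map: who could be harmed
    measured_score: float = 0.0   # Measure: e.g. disparity ratio, drift metric
    treatment: str = "untriaged"  # Manage: mitigate / transfer / accept

# Govern: an organization-wide policy, here a risk-acceptance threshold.
# The value is illustrative only.
ACCEPTANCE_THRESHOLD = 0.2

def manage(risk: AIRisk) -> AIRisk:
    """Apply the Govern-set threshold to decide treatment."""
    if risk.measured_score <= ACCEPTANCE_THRESHOLD:
        risk.treatment = "accept"
    else:
        risk.treatment = "mitigate"
    return risk

r = AIRisk(description="Approval-rate disparity across age bands",
           affected_group="applicants over 60")
r.measured_score = 0.35  # Measure: output of a bias test
print(manage(r).treatment)  # 'mitigate' -- exceeds the governed threshold
```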
NIST AI RMF Profiles and Use Cases
The framework supports the development of organizational profiles — prioritized implementations of the framework's subcategories tailored to specific organizational contexts, use cases, or regulatory environments. These profiles allow organizations to adapt the framework to their size, sector, and risk tolerance rather than implementing every element uniformly.
NIST has supplemented the core framework with companion resources including the AI RMF Playbook, which provides detailed implementation guidance for each subcategory, and Crosswalk documents that map the AI RMF to other standards and frameworks.
For COMPEL practitioners, the NIST AI RMF provides an excellent complementary structure to the governance architecture described in Article 3: Building an AI Governance Framework. The framework's Govern function maps directly to COMPEL's strategic governance tier, while Map, Measure, and Manage align with operational and project-level governance activities.
Sector-Specific Regulation
Beyond horizontal AI regulations, transformation leaders must navigate sector-specific requirements that add significant governance obligations.
Financial Services
Financial services is the most mature sector for AI governance regulation, building on decades of model risk management practice.
SR 11-7: Supervisory Guidance on Model Risk Management, issued jointly by the Board of Governors of the Federal Reserve System and the Office of the Comptroller of the Currency (OCC) in 2011, remains the foundational model risk management standard for U.S. banking organizations. It establishes requirements for model development, validation, and use that apply directly to AI and machine learning (ML) models. SR 11-7 requires validation covering conceptual soundness, ongoing monitoring, and outcomes analysis, along with effective challenge: critical analysis by qualified parties who are independent of model development.
The Basel Committee on Banking Supervision has published reports on the implications of AI and ML for banking supervision, emphasizing the need for governance frameworks that address the specific risks of ML models, including explainability, data quality, and bias.
The European Banking Authority (EBA) has issued guidelines on internal governance that specifically address AI, requiring institutions to have robust risk management frameworks for AI-driven decision-making.
Fair lending regulations in the United States — including the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act — create additional governance requirements for AI systems used in credit decisions. Adverse action notices must explain why a decision was made, creating explainability requirements that are challenging for complex ML models.
Healthcare
The Food and Drug Administration (FDA) regulates AI/ML-based Software as a Medical Device (SaMD) through an evolving framework that accounts for the iterative nature of ML models. The FDA's approach includes a total product lifecycle framework that permits algorithm modifications pre-specified in a predetermined change control plan, provided organizations maintain appropriate governance over the change process.
The Health Insurance Portability and Accountability Act (HIPAA) governs the use of protected health information (PHI) in AI systems, requiring governance controls around data access, use limitations, and de-identification standards.
Clinical decision support systems powered by AI face additional governance requirements around validation, clinical efficacy, and liability allocation that go beyond standard software governance.
Public Sector
Government AI use is increasingly subject to specific governance mandates. In the United States, Executive Orders on AI have established requirements for federal agencies including AI impact assessments, algorithmic transparency, and public reporting.
The Algorithmic Accountability Act, a proposed U.S. federal law introduced in multiple congressional sessions without advancing to a vote, would require impact assessments for automated decision systems, public reporting on AI use, and mechanisms for affected individuals to contest automated decisions. Although the Act has not been enacted, its repeated introduction signals sustained legislative interest in algorithmic accountability and offers a useful reference point for the direction of future U.S. federal AI regulation.
Municipal and state-level AI governance requirements are proliferating. New York City's Local Law 144, which requires bias audits for automated employment decision tools, exemplifies the trend toward jurisdiction-specific AI governance mandates.
Emerging National Frameworks
The regulatory landscape extends beyond the EU and the United States:
China has implemented multiple AI-specific regulations, including the Algorithmic Recommendation Management Provisions, the Deep Synthesis Provisions (governing deepfakes), and the Generative AI Measures. China's approach is notable for its specificity — regulating particular AI applications rather than AI broadly.
The United Kingdom has adopted a sector-specific, principles-based approach through existing regulators rather than creating a single AI law. The UK's five AI principles — safety, security, and robustness; transparency and explainability; fairness; accountability and governance; contestability and redress — guide sector regulators in developing AI-specific guidance within their existing mandates.
Canada's proposed Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27, would have established requirements for high-impact AI systems, including risk assessment, monitoring, and public transparency. Although the bill did not advance to enactment before Parliament was prorogued in early 2025, its core principles are expected to inform future Canadian AI legislation and reflect the direction of regulatory thinking in the jurisdiction.
Brazil, India, Japan, South Korea, Singapore, and Australia have all published AI governance frameworks, guidelines, or proposed legislation. While approaches vary — from Singapore's voluntary Model AI Governance Framework to Brazil's proposed AI regulation — the trajectory is consistent: more governance requirements, not fewer.
The Convergence Trajectory
Despite different regulatory approaches, a convergence is emerging around several core principles:
- Risk-based classification — governance requirements scaled to the risk level of the AI application
- Transparency and explainability — requirements to disclose AI use and explain AI decisions
- Fairness and non-discrimination — requirements to test for and mitigate bias
- Accountability — clear allocation of responsibility for AI outcomes
- Human oversight — requirements for meaningful human involvement in high-stakes AI decisions
- Data governance — requirements for quality, consent, and privacy in AI training data
- Documentation and auditability — requirements to maintain records sufficient for regulatory review
Organizations that build governance frameworks around these converging principles will be positioned to comply with regulations across multiple jurisdictions — a significant advantage over organizations that take a jurisdiction-by-jurisdiction approach.
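A practical consequence: a single control set keyed to these principles can be checked against each jurisdiction's requirements. The sketch below illustrates the idea with invented, abbreviated requirement sets; a real crosswalk would cite specific articles and be maintained with counsel.

```python
# Hypothetical mapping of jurisdictions to the converging principles
# each one emphasizes. Illustrative only -- not a legal analysis.
JURISDICTION_REQUIREMENTS = {
    "eu_ai_act": {"risk_classification", "transparency", "human_oversight",
                  "data_governance", "documentation"},
    "nyc_ll144": {"fairness", "transparency"},
    "uk_principles": {"transparency", "fairness", "accountability"},
}

# Principles the organization's governance framework already covers.
IMPLEMENTED = {"risk_classification", "transparency", "fairness",
               "accountability", "documentation"}

# Gap report: for each jurisdiction, the principles not yet covered.
for jurisdiction, required in JURISDICTION_REQUIREMENTS.items():
    gaps = required - IMPLEMENTED
    status = "baseline covered" if not gaps else sorted(gaps)
    print(f"{jurisdiction}: {status}")
# eu_ai_act: ['data_governance', 'human_oversight']
# nyc_ll144: baseline covered
# uk_principles: baseline covered
```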
International Standards
Complementing regulation, international standards bodies have published AI-specific standards that provide detailed implementation guidance:
ISO/IEC 42001:2023 — Artificial Intelligence Management System — establishes requirements for an AI management system (AIMS), providing a structured approach to managing AI development and deployment. It is rapidly becoming the governance certification standard of choice.
ISO/IEC 23894:2023 — Guidance on AI Risk Management — provides practical guidance for managing risks associated with AI, aligned with ISO 31000 risk management principles.
IEEE 7000-2021 — Model Process for Addressing Ethical Concerns During System Design — provides a process for the ethical design of autonomous and intelligent systems, operationalizing ethical principles into engineering practice.
These standards provide the detailed specifications that regulations often reference but do not fully define. Organizations pursuing governance maturity will find them essential for translating regulatory principles into operational practices.
What Transformation Leaders Must Know
The regulatory landscape has several implications for transformation leaders managing AI programs:
Build for the Highest Applicable Standard
Rather than building the minimum governance required by each jurisdiction, build for the highest applicable standard — typically the EU AI Act for multinational organizations. This approach is more efficient than maintaining multiple governance tracks and positions the organization for new regulations that will likely converge toward the highest existing standard.
Invest in Documentation Infrastructure
Every regulatory framework emphasizes documentation. The organizations that invest in documentation infrastructure early — model registries, automated documentation tools, standardized templates, audit trail systems — will produce compliance evidence as a natural byproduct of operations. This is far more efficient and reliable than retrospective documentation efforts.
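As a minimal illustration of compliance evidence produced as a byproduct, the sketch below appends an audit-trail entry on every registry write. The registry and its fields are hypothetical; a production system would add authentication, immutable storage, and retention policies.

```python
import datetime
import json

class ModelRegistry:
    """Toy model registry: every registration is also an audit event,
    so evidence accumulates as a side effect of normal operations."""

    def __init__(self):
        self.models = {}
        self.audit_log = []  # append-only in spirit; immutable store in practice

    def register(self, name: str, version: str, owner: str, risk_tier: str):
        self.models[(name, version)] = {"owner": owner, "risk_tier": risk_tier}
        self.audit_log.append({
            "event": "model_registered",
            "model": name,
            "version": version,
            "owner": owner,
            "risk_tier": risk_tier,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })

reg = ModelRegistry()
reg.register("credit-risk-scorer", "2.1.0", "risk-analytics-team", "high")
print(json.dumps(reg.audit_log, indent=2))  # ready-made audit evidence
```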
Monitor the Regulatory Trajectory
The regulatory trajectory is toward more requirements, more specificity, and more enforcement. Organizations should design governance frameworks that accommodate new requirements without architectural redesign. The three-tier governance architecture described in Article 3: Building an AI Governance Framework provides this flexibility.
Engage with Regulators
Proactive engagement with regulators — through industry associations, public comment periods, regulatory sandboxes, and direct dialogue — provides early insight into regulatory direction and an opportunity to shape practical implementation approaches. Organizations that engage proactively have a governance advantage over those that wait for final rules.
Connect Regulatory Requirements to Business Value
As emphasized in Module 1.1, Article 7: The Business Value Chain of AI Transformation, governance activities must connect to business outcomes. Regulatory compliance is not merely a cost — it is a market access requirement, a customer trust enabler, and a competitive differentiator. Framing compliance in business value terms ensures sustained executive investment in governance capabilities.
The Compliance Foundation for AI Innovation
The regulatory landscape may appear daunting, but it reflects a maturing recognition that AI systems require structured governance. For organizations with strong governance frameworks, regulation validates their approach and creates barriers to entry for less-governed competitors. For organizations beginning their governance journey, regulation provides a clear mandate for investment and a framework for prioritization.
The COMPEL framework's Calibrate phase (Module 1.2, Article 1) includes a regulatory assessment that maps applicable regulations to the organization's AI portfolio, identifies compliance gaps, and prioritizes governance investments based on regulatory risk. This assessment transforms the regulatory landscape from an abstract list of requirements into a concrete, prioritized action plan.
Looking Ahead
With the regulatory landscape mapped, the next article turns to the practical work of building an AI governance framework — the architecture of policies, standards, and procedures that translates regulatory requirements and organizational risk appetite into operational governance that works at scale.
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.