Model — The M in COMPEL

Design the policy architecture and risk frameworks that govern every AI system

What This Stage Is

Model is the design and policy architecture stage of COMPEL. Before any AI system is built or deployed, the Model stage requires that its governance context is fully defined: what policies apply, what risks exist, how humans interact with the system, and what data it depends on. Retrofitting governance onto AI systems after deployment is substantially more expensive and less effective than building it in from the start — studies consistently show that governance remediation costs 5 to 10 times more than design-time governance integration.

The Model stage produces the comprehensive policy library that governs how AI systems are proposed, evaluated, approved, and retired within the organization. This includes AI use case classification schemas, risk tiering frameworks, ethical guardrails, decision flow documentation, bias testing protocols, data governance policies, incident response procedures, and alignment mappings to applicable regulations.

Every AI initiative must pass Gate M — the Design Approval gate — before any production investment begins. This gate verifies that the solution architecture is sound, data readiness is confirmed, human-AI collaboration points are explicitly defined, and the policy framework is in place. Organizations that skip Model consistently produce AI systems that fail audits, accumulate technical and ethical debt, and require costly remediation.
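The four Gate M checks named above can be sketched as a simple pass/fail record. This is an illustrative model only — the class and field names are assumptions, not part of the COMPEL specification.

```python
from dataclasses import dataclass


@dataclass
class GateMReview:
    """Hypothetical record of the four Gate M (Design Approval) checks."""
    system_name: str
    architecture_sound: bool = False
    data_readiness_confirmed: bool = False
    collaboration_points_defined: bool = False
    policy_framework_in_place: bool = False

    def missing_checks(self) -> list[str]:
        checks = {
            "architecture_sound": self.architecture_sound,
            "data_readiness_confirmed": self.data_readiness_confirmed,
            "collaboration_points_defined": self.collaboration_points_defined,
            "policy_framework_in_place": self.policy_framework_in_place,
        }
        return [name for name, passed in checks.items() if not passed]

    def approved(self) -> bool:
        # Gate M passes only when every check passes — partial
        # completion blocks production investment.
        return not self.missing_checks()


review = GateMReview("claims-triage-assistant",
                     architecture_sound=True,
                     data_readiness_confirmed=True)
print(review.approved())        # False until all four checks pass
print(review.missing_checks())
```

The all-or-nothing `approved()` check mirrors the text: no production investment begins while any of the four verifications is outstanding.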

Why This Stage Matters

Model enforces a design-first discipline that separates governed AI programs from ad-hoc experimentation. The policies and frameworks designed in this stage are not bureaucratic overhead — they are the decision-support infrastructure that enables teams to move faster with confidence. When a product team knows exactly what risk classification their system falls into, what documentation is required, and what approval path to follow, they spend less time navigating ambiguity and more time building. Well-executed Model work makes the Produce stage implementation deterministic rather than improvised.

It also creates the structural foundation for regulatory compliance: ISO 42001 requires a documented AI management system, the EU AI Act mandates risk management systems for high-risk AI, and NIST AI RMF expects organizations to MAP risks before managing them. Model produces exactly these artifacts. Without Model, organizations face a recurring pattern: each new AI system reinvents its own governance approach, creating inconsistency, duplication, and gaps that auditors and regulators will identify.

Inputs

Key Activities

Outputs & Deliverables

Controls

Evidence Artifacts

Metrics & KPIs

Risks If Skipped

Standards Alignment

| Standard | Clause | Description |
| --- | --- | --- |
| ISO/IEC 42001:2023 | Clauses 6.1-6.2, Annex A.4-A.7 | Planning for risks and opportunities, AI management system objectives, AI policy, AI system impact assessment, AI system lifecycle processes |
| NIST AI RMF 1.0 | MAP 1.1-1.6, MAP 2.1-2.3, MAP 3.1-3.5 | Context established, AI risks identified and documented, AI risks prioritized, stakeholder impact assessed |
| EU AI Act 2024/1689 | Articles 9(2-8), 13, 14, 17 | Risk management system design, transparency requirements, human oversight design, quality management system |
| IEEE 7000-2021 | Clauses 8.1-8.4 | Ethical requirements specification, value-sensitive design, impact assessment, and design constraints documentation |

References

  1. ISO/IEC 42001:2023 — Clause 6 (Planning) and Annex A (AI-specific controls)
  2. NIST AI Risk Management Framework 1.0 (2023) — MAP function subcategories
  3. EU AI Act 2024/1689 — Articles 9, 13, 14, 17 (Risk management, transparency, human oversight, quality)
  4. IEEE 7000-2021 — Model Process for Addressing Ethical Concerns During System Design
  5. OECD AI Principles (2024 update) — Transparency and explainability requirements
  6. Anthropic, "Responsible AI Policy Development Guide" (2024)
  7. COMPEL Policy Framework Template Library v2.0 — FlowRidge, 2025

Frequently Asked Questions

How detailed should AI policies be at the Model stage?
Policies should be detailed enough to be actionable but abstract enough to apply across AI system types. COMPEL recommends a three-tier policy architecture: enterprise-level AI principles (5-10 statements), domain-specific policies (acceptable use, data governance, risk management), and system-class-specific procedures (for high-risk vs. low-risk systems). The Model stage produces the first two tiers; system-specific procedures are finalized in Produce.
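The three-tier architecture described above can be sketched as a small data model. Tier numbering, policy names, and the `produced_in` field are illustrative assumptions, not COMPEL-prescribed identifiers.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Policy:
    """One entry in a hypothetical three-tier policy library."""
    tier: int         # 1 = enterprise principles, 2 = domain policy,
                      # 3 = system-class-specific procedure
    name: str
    produced_in: str  # COMPEL stage where this tier is finalized


library = [
    Policy(1, "AI principles (5-10 statements)", "Model"),
    Policy(2, "Acceptable use", "Model"),
    Policy(2, "Data governance", "Model"),
    Policy(2, "Risk management", "Model"),
    Policy(3, "High-risk system procedures", "Produce"),
]

# Per the FAQ answer: Model produces tiers 1 and 2; tier 3 is
# finalized later, in the Produce stage.
model_scope = [p.name for p in library if p.produced_in == "Model"]
```

Separating the tiers this way makes the Model/Produce hand-off explicit: a library query shows exactly which policies must exist before Gate M.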
What is Gate M and who approves it?
Gate M (Design Approval) is a formal checkpoint that every AI system must pass before production investment begins. It verifies that the system architecture, risk classification, data readiness, and human oversight design are complete and compliant. Approval authority typically rests with the AI Risk Committee or CoE Director, depending on the system risk classification.
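The routing rule in this answer — approval authority depends on risk classification — can be sketched as a lookup. The specific tier names and the exact mapping are assumptions for illustration.

```python
# Hypothetical mapping of risk classification to Gate M approval
# authority; an organization would substitute its own tiers.
APPROVAL_AUTHORITY = {
    "high": "AI Risk Committee",
    "limited": "CoE Director",
    "minimal": "CoE Director",
}


def gate_m_approver(risk_class: str) -> str:
    """Return who signs off on Gate M for a given risk classification."""
    try:
        return APPROVAL_AUTHORITY[risk_class]
    except KeyError:
        # Unclassified systems escalate rather than defaulting to the
        # lighter-weight approval path.
        raise ValueError(
            f"Unclassified risk tier: {risk_class!r}; "
            "classify the system before Gate M review"
        )
```

The explicit failure on an unknown classification reflects the gate's intent: no system reaches an approver until its risk tier has been assigned.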
How do we handle existing AI systems that were deployed before Model?
Existing systems should undergo a retrospective Model assessment. Prioritize systems by risk classification — high-risk systems first. Create condensed Design Approval packages that document current state, identify gaps against the policy framework, and define remediation plans. This is a common challenge for organizations adopting COMPEL after prior AI deployment.
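The prioritization rule above — high-risk systems first — amounts to ordering the existing portfolio by risk tier. A minimal sketch, with illustrative system names and tiers:

```python
# Lower number = assessed earlier; tier names are assumptions.
RISK_ORDER = {"high": 0, "limited": 1, "minimal": 2}

existing_systems = [
    {"name": "support-chatbot", "risk": "limited"},
    {"name": "credit-scoring", "risk": "high"},
    {"name": "doc-search", "risk": "minimal"},
]

# Retrospective Model assessment backlog: high-risk first.
backlog = sorted(existing_systems, key=lambda s: RISK_ORDER[s["risk"]])
```

Each system in the ordered backlog then receives a condensed Design Approval package documenting current state, gaps, and remediation plan, as described above.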
Should the risk framework align with our existing enterprise risk management?
Yes. The AI risk taxonomy should extend your existing enterprise risk framework rather than creating a parallel system. COMPEL recommends adding AI-specific risk categories (model drift, data poisoning, algorithmic bias, automation complacency) to your existing taxonomy, using consistent severity scales and escalation procedures. This enables integrated risk reporting to the board.
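Extending an existing taxonomy rather than building a parallel one can be sketched as follows. The enterprise categories and the shared severity scale are illustrative assumptions; the four AI-specific categories come from the answer above.

```python
# A pre-existing enterprise risk taxonomy (illustrative).
enterprise_taxonomy = {
    "operational": {"process failure", "vendor outage"},
    "compliance": {"regulatory breach"},
}

# AI-specific categories extend the same taxonomy — no parallel system.
enterprise_taxonomy["ai"] = {
    "model drift", "data poisoning",
    "algorithmic bias", "automation complacency",
}

# One severity scale shared across all categories, so AI risks roll up
# into the same board-level reporting as everything else.
SEVERITY_SCALE = ("low", "medium", "high", "critical")


def register_risk(category: str, risk: str, severity: str) -> dict:
    if category not in enterprise_taxonomy:
        raise ValueError(f"unknown category: {category}")
    if severity not in SEVERITY_SCALE:
        raise ValueError(f"off-scale severity: {severity}")
    return {"category": category, "risk": risk, "severity": severity}
```

Because AI risks live in the same structure with the same scale, integrated reporting is a query over one taxonomy rather than a reconciliation between two.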
How does Model address agentic AI systems?
Agentic AI systems require enhanced Model stage attention because they operate with greater autonomy. The Human-AI Collaboration Blueprints must define explicit boundaries for autonomous action, mandatory human-in-the-loop checkpoints, kill-switch mechanisms, and audit trail requirements for every consequential decision the agent makes. COMPEL treats agentic systems as high-risk by default.
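The agentic controls listed above — explicit autonomy boundaries, human-in-the-loop checkpoints, a kill switch, and an audit trail for every consequential decision — can be sketched together. All class and method names here are assumptions for illustration, not a COMPEL API.

```python
class AgentGuardrails:
    """Hypothetical runtime guardrails for an agentic AI system."""

    def __init__(self, allowed_actions: set[str], hitl_actions: set[str]):
        self.allowed = allowed_actions   # explicit autonomy boundary
        self.hitl = hitl_actions         # mandatory human checkpoints
        self.killed = False
        self.audit_log: list[dict] = []

    def kill(self) -> None:
        # Kill switch: hard stop for all further autonomous action.
        self.killed = True

    def authorize(self, action: str, human_approved: bool = False) -> bool:
        decision = (
            not self.killed
            and action in self.allowed
            and (action not in self.hitl or human_approved)
        )
        # Every consequential decision is logged, allowed or not.
        self.audit_log.append({"action": action, "allowed": decision})
        return decision


g = AgentGuardrails(allowed_actions={"draft_reply", "issue_refund"},
                    hitl_actions={"issue_refund"})
g.authorize("draft_reply")                        # autonomous: allowed
g.authorize("issue_refund")                       # blocked without a human
g.authorize("issue_refund", human_approved=True)  # allowed with sign-off
```

Logging denials as well as approvals matters for the audit-trail requirement: reviewers can see what the agent attempted, not only what it did.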

Abdelalim, T. (2025). “Model — The M in COMPEL.” COMPEL by FlowRidge. https://www.compel.one/methodology/model