Model — The M in COMPEL
Design the policy architecture and risk frameworks that govern every AI system
What This Stage Is
Model is the design and policy architecture stage of COMPEL. Before any AI system is built or deployed, the Model stage requires that its governance context is fully defined: what policies apply, what risks exist, how humans interact with the system, and what data it depends on. Retrofitting governance onto AI systems after deployment is substantially more expensive and less effective than building it in from the start; post-deployment remediation is commonly estimated to cost five to ten times more than design-time governance integration.

The Model stage produces the comprehensive policy library that governs how AI systems are proposed, evaluated, approved, and retired within the organization. This includes AI use case classification schemas, risk tiering frameworks, ethical guardrails, decision flow documentation, bias testing protocols, data governance policies, incident response procedures, and alignment mappings to applicable regulations.

Every AI initiative must pass Gate M — the Design Approval gate — before any production investment begins. This gate verifies that the solution architecture is sound, data readiness is confirmed, human-AI collaboration points are explicitly defined, and the policy framework is in place. Organizations that skip Model consistently produce AI systems that fail audits, accumulate technical and ethical debt, and require costly remediation.
Why This Stage Matters
Model enforces a design-first discipline that separates governed AI programs from ad-hoc experimentation. The policies and frameworks designed in this stage are not bureaucratic overhead — they are the decision-support infrastructure that enables teams to move faster with confidence. When a product team knows exactly what risk classification their system falls into, what documentation is required, and what approval path to follow, they spend less time navigating ambiguity and more time building. Well-executed Model work makes the Produce stage implementation deterministic rather than improvised. It also creates the structural foundation for regulatory compliance: ISO 42001 requires a documented AI management system, the EU AI Act mandates risk management systems for high-risk AI, and NIST AI RMF expects organizations to MAP risks before managing them. Model produces exactly these artifacts. Without Model, organizations face a recurring pattern: each new AI system reinvents its own governance approach, creating inconsistency, duplication, and gaps that auditors and regulators will identify.
Inputs
- CoE charter and team structure from Organize — defining who owns and maintains the policy framework
- Training plan from Organize — ensuring policy designers have adequate governance knowledge
- Gap analysis from Calibrate — identifying which policy areas require the most investment
- Regulatory Exposure Register from Calibrate — specifying which regulations must be mapped into the policy framework
Key Activities
- AI Policy Framework authoring — drafting organization-wide policies on acceptable use, data handling, human oversight, and system retirement
- System Registry architecture — designing the AI system registry schema, lifecycle states, documentation requirements, and metadata standards (a schema sketch follows this list)
- Risk Framework design — building the risk taxonomy, scoring methodology, escalation criteria, and risk appetite statement for AI systems (a scoring sketch follows this list)
- Human-AI collaboration modeling — defining interaction patterns, override mechanisms, human-in-the-loop requirements, and autonomy boundaries
- Data readiness validation — assessing data quality, lineage, access controls, and bias potential for each planned AI system
- Decision flow documentation — mapping the decision chains that AI systems will influence or automate with accountability at each node
- Bias testing and red teaming protocol design — defining testing methodologies, thresholds, and remediation procedures
- Incident response procedure design — creating escalation paths, communication templates, and recovery playbooks for AI-related incidents
- Vendor and third-party AI evaluation — assessing external AI tools, APIs, and services against organizational governance standards and risk appetite
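To make the registry architecture activity concrete, below is a minimal sketch of what a registry record schema could look like. The lifecycle states, risk tiers, and field names are illustrative assumptions, not part of the COMPEL specification.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class LifecycleState(Enum):
    # Illustrative lifecycle states; an organization would adapt these.
    PROPOSED = "proposed"
    GATE_M_APPROVED = "gate_m_approved"
    IN_DEVELOPMENT = "in_development"
    DEPLOYED = "deployed"
    RETIRED = "retired"


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class RegistryRecord:
    """One AI system's entry in the registry (illustrative fields only)."""
    system_id: str
    name: str
    owner: str                      # accountable individual or team
    risk_tier: RiskTier
    state: LifecycleState
    data_sources: list[str] = field(default_factory=list)
    applicable_policies: list[str] = field(default_factory=list)
    last_reviewed: date | None = None
```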
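Similarly, the scoring methodology in the Risk Framework activity could be as simple as a likelihood-times-impact rubric with escalation thresholds. The scales and thresholds below are hypothetical; real values come from the organization's risk appetite statement.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Score a risk on 1-5 likelihood and 1-5 impact scales (hypothetical rubric)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    return likelihood * impact


def escalation_level(score: int) -> str:
    # Hypothetical thresholds; each organization sets its own cut-offs.
    if score >= 15:
        return "escalate_to_ai_risk_committee"
    if score >= 8:
        return "coe_review_required"
    return "standard_monitoring"
```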
Outputs & Deliverables
- AI Acceptable Use Policy — governing document for permitted AI applications, prohibited uses, and exception procedures
- AI System Registry Schema — data model, lifecycle states, documentation templates, and metadata standards
- Enterprise Risk Taxonomy for AI — risk categories, severity criteria, escalation thresholds, and risk appetite statement
- Human-AI Collaboration Blueprints — interaction models per system class with override specifications and audit trail requirements
- Data Readiness Reports — structured assessment per AI system with gap remediation plans and timeline commitments
- Decision Log Templates — standardized formats for capturing AI-influenced decision chains with accountability at each node (an illustrative record structure follows this list)
- Vendor Risk Assessment Criteria — evaluation framework for third-party AI tools and services against governance standards
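To make the Decision Log Template deliverable concrete, here is one possible record structure. The field names are illustrative assumptions rather than a mandated format.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class DecisionLogEntry:
    """One node in an AI-influenced decision chain (illustrative format)."""
    decision_id: str
    system_id: str            # links the decision back to the AI system registry
    timestamp: datetime
    ai_recommendation: str    # what the system proposed
    human_action: str         # e.g. accepted, overridden, deferred
    accountable_party: str    # named individual accountable at this node
    rationale: str            # required when the recommendation is overridden
```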
Controls
- Every AI system design must pass Gate M (Design Approval) before any production development or procurement investment begins
- Risk taxonomy must cover all four COMPEL pillars (People, Process, Technology, Governance) — not just technical risks
- Human-AI collaboration blueprints must specify explicit override mechanisms for every autonomous or semi-autonomous AI action
- Policy documents must include version control, ownership assignment, and a scheduled review cadence (minimum annual); see the metadata sketch after this list
- Bias testing protocols must define statistical thresholds for protected characteristics aligned with applicable anti-discrimination law
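One way to operationalize the version control and review-cadence control above is to attach machine-readable metadata to each policy document. A minimal sketch follows; the field names and the overdue check are assumptions, not a COMPEL-defined schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class PolicyMetadata:
    policy_id: str
    version: str
    owner: str                       # named owner, per the ownership control
    last_reviewed: date
    review_cadence_days: int = 365   # minimum annual review


def is_review_overdue(meta: PolicyMetadata, today: date) -> bool:
    """Flag a policy that has exceeded its scheduled review cadence."""
    return today > meta.last_reviewed + timedelta(days=meta.review_cadence_days)
```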
Evidence Artifacts
- Published AI Acceptable Use Policy with executive sign-off and distribution records
- AI System Registry Schema documentation with data dictionary and lifecycle state definitions
- Enterprise Risk Taxonomy document with scoring rubric and worked examples
- Human-AI Collaboration Blueprints for each system class with override mechanism specifications
- Gate M Decision Records for each AI system reviewed with approval, conditional, or reject outcomes
- Bias Testing Protocol document with statistical methodology and threshold definitions
Metrics & KPIs
- Policy coverage — percentage of identified AI risk categories with published governing policies (target: 100%)
- Gate M pass rate — percentage of AI systems that pass Design Approval on first submission (benchmark: 60-70%)
- Registry completeness — percentage of fields populated in AI system registry records (target: 95%+)
- Time to Gate M decision — average days from submission to formal approval or rejection (target: under 15 business days)
- Policy review currency — percentage of policies reviewed within their scheduled review cycle (target: 100%)
- Stakeholder policy awareness — percentage of AI practitioners who can locate and cite applicable policies (target: 85%+)
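As a worked example, the first two KPIs above reduce to simple ratios over registry and gate data. The input shapes below are assumptions about how an organization might store this data.

```python
def policy_coverage(risk_categories: set[str], covered: set[str]) -> float:
    """Percentage of identified AI risk categories with a published policy."""
    if not risk_categories:
        return 100.0
    return 100.0 * len(risk_categories & covered) / len(risk_categories)


def gate_m_first_pass_rate(outcomes: list[dict]) -> float:
    """Percentage of systems that passed Gate M on their first submission.

    Each outcome is assumed to look like:
    {"system_id": "sys-001", "submission": 1, "result": "approved"}
    """
    first = [o for o in outcomes if o["submission"] == 1]
    if not first:
        return 0.0
    approved = sum(1 for o in first if o["result"] == "approved")
    return 100.0 * approved / len(first)
```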
Risks If Skipped
- Each AI system reinvents its own governance approach, creating inconsistency and gaps that auditors will identify
- Risk frameworks are absent or ad-hoc, making it impossible to compare risk across the AI portfolio or prioritize remediation
- Human oversight mechanisms are designed after deployment, when changing system behavior is most expensive
- Regulatory compliance gaps are discovered during external audits rather than during design, increasing remediation cost and reputational risk
- Bias testing is reactive rather than proactive, leading to discriminatory outcomes that could have been prevented by design
Standards Alignment
| Standard | Clause | Description |
|---|---|---|
| ISO/IEC 42001:2023 | Clause 6.1-6.2, Annex A.4-A.7 | Planning for risks and opportunities, AI management system objectives, AI policy, AI system impact assessment, AI system lifecycle processes |
| NIST AI RMF 1.0 | MAP 1.1-1.6, MAP 2.1-2.3, MAP 3.1-3.5 | Context established, AI risks identified and documented, AI risks prioritized, stakeholder impact assessed |
| EU AI Act 2024/1689 | Articles 9(2)-(8), 13, 14, 17 | Risk management system design, transparency requirements, human oversight design, quality management system |
| IEEE 7000-2021 | Clause 8.1-8.4 | Ethical requirements specification, value-sensitive design, impact assessment, and design constraints documentation |
References
- [1] ISO/IEC 42001:2023 — Clause 6 (Planning) and Annex A (AI-specific controls)
- [2] NIST AI Risk Management Framework 1.0 (2023) — MAP function subcategories
- [3] EU AI Act 2024/1689 — Articles 9, 13, 14, 17 (Risk management, transparency, human oversight, quality)
- [4] IEEE 7000-2021 — Model Process for Addressing Ethical Concerns During System Design
- [5] OECD AI Principles (2024 update) — Transparency and explainability requirements
- [6] Anthropic, "Responsible AI Policy Development Guide" (2024)
- [7] COMPEL Policy Framework Template Library v2.0 — FlowRidge, 2025
Frequently Asked Questions
- How detailed should AI policies be at the Model stage?
- Policies should be detailed enough to be actionable but abstract enough to apply across AI system types. COMPEL recommends a three-tier policy architecture: enterprise-level AI principles (5-10 statements), domain-specific policies (acceptable use, data governance, risk management), and system-class-specific procedures (for high-risk vs. low-risk systems). The Model stage produces the first two tiers; system-specific procedures are finalized in Produce.
- What is Gate M and who approves it?
- Gate M (Design Approval) is a formal checkpoint that every AI system must pass before production investment begins. It verifies that the system architecture, risk classification, data readiness, and human oversight design are complete and compliant. Approval authority typically rests with the AI Risk Committee or CoE Director, depending on the system risk classification (a routing sketch follows these FAQs).
- How do we handle existing AI systems that were deployed before Model?
- Existing systems should undergo a retrospective Model assessment. Prioritize systems by risk classification — high-risk systems first. Create condensed Design Approval packages that document current state, identify gaps against the policy framework, and define remediation plans. This is a common challenge for organizations adopting COMPEL after prior AI deployment.
- Should the risk framework align with our existing enterprise risk management?
- Yes. The AI risk taxonomy should extend your existing enterprise risk framework rather than creating a parallel system. COMPEL recommends adding AI-specific risk categories (model drift, data poisoning, algorithmic bias, automation complacency) to your existing taxonomy, using consistent severity scales and escalation procedures. This enables integrated risk reporting to the board.
- How does Model address agentic AI systems?
- Agentic AI systems require enhanced Model stage attention because they operate with greater autonomy. The Human-AI Collaboration Blueprints must define explicit boundaries for autonomous action, mandatory human-in-the-loop checkpoints, kill-switch mechanisms, and audit trail requirements for every consequential decision the agent makes. COMPEL treats agentic systems as high-risk by default (a checkpoint sketch also follows these FAQs).
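To illustrate how the Gate M approval routing described above might be automated, here is a hypothetical sketch. The authority names mirror the FAQ, but the routing logic and function signature are assumptions, not a COMPEL-specified rule set.

```python
def gate_m_approver(risk_tier: str, is_agentic: bool) -> str:
    """Route a Gate M package to an approval authority (hypothetical rules)."""
    # Agentic systems are treated as high-risk by default (see the FAQ above).
    if is_agentic or risk_tier == "high":
        return "AI Risk Committee"
    return "CoE Director"
```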
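And a minimal sketch of the human-in-the-loop checkpoint, kill-switch, and audit trail pattern for agentic systems; every name here (`execute_agent_action`, `approve`, the action fields) is illustrative rather than a prescribed interface.

```python
class KillSwitchEngaged(Exception):
    """Raised to halt all autonomous action immediately."""


def execute_agent_action(action: dict, autonomy_boundary: set[str],
                         kill_switch_on: bool, approve, audit_log: list) -> str:
    """Run one agent action under explicit autonomy boundaries (illustrative).

    `approve` is a callback to a human reviewer; it and the field names
    are assumed integration points, not COMPEL-specified APIs.
    """
    if kill_switch_on:
        raise KillSwitchEngaged("operator halted the agent")
    if action["type"] not in autonomy_boundary:
        # Outside the autonomy boundary: mandatory human-in-the-loop checkpoint.
        if not approve(action):
            audit_log.append({**action, "outcome": "rejected_by_human"})
            return "rejected_by_human"
    # Audit trail entry for every consequential action the agent takes.
    audit_log.append({**action, "outcome": "executed"})
    return "executed"
```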
Abdelalim, T. (2025). “Model — The M in COMPEL.” COMPEL by FlowRidge. https://www.compel.one/methodology/model