The COMPEL Operating Cycle — 6-Stage AI Transformation Methodology
A structured, repeatable 6-stage operating cycle that transforms AI from a series of technology projects into a compounding organizational capability — measurable, governable, and continuously improving.
Stage 1: Calibrate
Calibrate is the diagnostic and orientation stage. Organizations begin here regardless of prior AI investment, using structured assessment instruments to build an honest, evidence-based picture of current AI capability.
Many organizations significantly overestimate their AI readiness because they conflate technology access with organizational capability. Calibrate addresses this gap by surveying all 18 domains independently, surfacing shadow AI usage, quantifying the skills gap, and establishing the baseline that every subsequent stage is measured against.
Calibrate Activities
- AI maturity assessment across all 18 domains using the COMPEL 5-level scoring rubric
- Shadow AI discovery survey — identifying unapproved tools and use cases already in production
- Use case inventory — cataloging proposed and existing AI initiatives by business function
- Executive readiness interviews — assessing sponsorship depth and governance appetite
- Data landscape mapping — identifying critical data assets and access constraints
- Regulatory exposure mapping — cataloging applicable obligations by jurisdiction and AI system type
Calibrate Outputs
- COMPEL Baseline Maturity Report (domain scores across all 18 dimensions)
- Shadow AI Registry — inventory of unauthorized AI tools in active use
- Use Case Opportunity Map — prioritized pipeline of AI initiatives by value and feasibility
- Executive Alignment Summary — documented sponsorship commitments and governance mandates
- Data Readiness Assessment — structured gap analysis across data infrastructure domains
- Regulatory Exposure Register — mapped obligations per system type and jurisdiction
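The domain scoring behind the Baseline Maturity Report can be sketched as a simple aggregation. The pillar groupings (D1–D18) follow the Four Pillars section later in this document; the unweighted-average aggregation, function shape, and sample scores are illustrative assumptions, not the normative COMPEL rubric.

```python
# Hypothetical sketch of COMPEL baseline maturity scoring: 18 domains grouped
# into four pillars (per the Four Pillars section), each scored 1-5 on the
# rubric. Aggregation by unweighted average is an assumption for illustration.

PILLARS = {
    "People":     ["D1", "D2", "D3", "D4"],
    "Process":    ["D5", "D6", "D7", "D8", "D9"],
    "Technology": ["D10", "D11", "D12", "D13"],
    "Governance": ["D14", "D15", "D16", "D17", "D18"],
}

def baseline_report(scores: dict[str, int]) -> dict:
    """Aggregate per-domain rubric scores (1-5) into pillar and overall averages."""
    all_domains = {d for ds in PILLARS.values() for d in ds}
    missing = all_domains - scores.keys()
    if missing:
        raise ValueError(f"unscored domains: {sorted(missing)}")
    if any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("rubric scores must be between 1 and 5")
    pillar_avg = {p: round(sum(scores[d] for d in ds) / len(ds), 2)
                  for p, ds in PILLARS.items()}
    overall = round(sum(scores.values()) / len(scores), 2)
    return {"pillars": pillar_avg, "overall": overall}
```

The validation step matters in practice: an assessment that silently skips a domain would bias the baseline that every later stage is measured against.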
Stage 2: Organize
Organize establishes the human infrastructure that makes AI transformation durable. Without deliberate organizational design, AI initiatives fragment into departmental experiments that cannot scale.
Organize Activities
- Center of Excellence design — defining structure, headcount, reporting lines, and operating model
- Role matrix development — creating AI-specific role definitions across leadership, practitioner, and support tiers
- Skills gap analysis — comparing current workforce capabilities against role matrix requirements
- Training program design — building role-tiered curricula aligned to COMPEL certification pathways
- Oversight body formation — establishing AI Ethics Board, Risk Committee, and CoE governance council
- RACI definition — assigning responsibility, accountability, consultation, and information rights for AI decisions
Organize Outputs
- Center of Excellence Charter — mandate, structure, operating procedures, and success metrics
- AI Role Matrix — defined roles with responsibilities, authority levels, and qualification requirements
- Training Roadmap — phased learning plan with COMPEL certification targets by role tier
- Oversight Body Terms of Reference — operating charter for each governance body
- RACI for AI Decisions — accountability map for AI system registration, approval, and monitoring
- Organizational Change Management Plan — communication and adoption strategy
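A RACI for AI decisions is easiest to keep honest when it is machine-checkable. The sketch below enforces the standard RACI invariants (exactly one accountable role and at least one responsible role per decision); the decision names, role names, and assignments are hypothetical examples, not COMPEL-prescribed values.

```python
# Hypothetical RACI map validator for AI decisions. Decision and role names
# below are illustrative assumptions.
RACI_CODES = {"R", "A", "C", "I"}

def validate_raci(raci: dict[str, dict[str, str]]) -> list[str]:
    """Return a list of problems; an empty list means the map is well-formed."""
    problems = []
    for decision, assignments in raci.items():
        codes = list(assignments.values())
        unknown = set(codes) - RACI_CODES
        if unknown:
            problems.append(f"{decision}: unknown codes {sorted(unknown)}")
        if codes.count("A") != 1:
            problems.append(f"{decision}: needs exactly one accountable (A) role")
        if "R" not in codes:
            problems.append(f"{decision}: needs at least one responsible (R) role")
    return problems

EXAMPLE = {
    "system_registration": {"System Owner": "R", "CoE Lead": "A",
                            "Risk Committee": "C", "Ethics Board": "I"},
    "production_approval": {"System Owner": "R", "Risk Committee": "A",
                            "CoE Lead": "C"},
}
```

Running the validator as part of governance onboarding catches the most common RACI failure mode: two bodies each believing the other is accountable.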
Stage 3: Model
Model is the design and policy architecture stage. Before any AI system is built or deployed, Model requires that its governance context is fully defined: what policies apply, what risks exist, how humans interact with the system, and what data it depends on.
Retrofitting governance onto AI systems after deployment is substantially more expensive and less effective than building it in from the start. Every AI initiative must pass Gate M — the Design Approval gate — before any production investment begins.
Model Activities
- AI Policy Framework authoring — organization-wide policies on acceptable use, data handling, and human oversight
- System Registry architecture — designing the AI system registry schema, lifecycle states, and documentation requirements
- Risk Framework design — building the risk taxonomy, scoring methodology, and escalation criteria
- Human-AI collaboration modeling — defining interaction patterns, override mechanisms, and human-in-the-loop requirements
- Data readiness validation — assessing data quality, lineage, access controls, and bias potential
- Decision flow documentation — mapping the decision chains that AI systems will influence or automate
Model Outputs
- AI Acceptable Use Policy — governing document for permitted AI applications and prohibited uses
- AI System Registry Schema — data model, lifecycle states, and documentation templates
- Enterprise Risk Taxonomy for AI — risk categories, severity criteria, and escalation thresholds
- Human-AI Collaboration Blueprints — interaction models with override and audit trail specifications
- Data Readiness Reports — structured assessment per AI system with gap remediation plans
- Decision Log Templates — standardized formats for capturing AI-influenced decision chains
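One way to sketch the registry schema and its lifecycle states is as a record type with an explicit state machine, so that illegal transitions (e.g. entering production without passing Gate M) are rejected at write time. The state names, transition rules, and record fields here are assumptions for illustration; COMPEL does not prescribe this exact data model in this section.

```python
# Illustrative AI System Registry record with lifecycle states. State names,
# allowed transitions, and fields are assumptions, not the normative schema.
from dataclasses import dataclass
from enum import Enum

class LifecycleState(Enum):
    PROPOSED = "proposed"
    DESIGN_APPROVED = "design_approved"   # assumed to mean: cleared Gate M
    IN_PRODUCTION = "in_production"       # assumed to mean: cleared Gate E
    RETIRED = "retired"

ALLOWED = {
    LifecycleState.PROPOSED:        {LifecycleState.DESIGN_APPROVED, LifecycleState.RETIRED},
    LifecycleState.DESIGN_APPROVED: {LifecycleState.IN_PRODUCTION, LifecycleState.RETIRED},
    LifecycleState.IN_PRODUCTION:   {LifecycleState.RETIRED},
    LifecycleState.RETIRED:         set(),
}

@dataclass
class AISystemRecord:
    system_id: str
    owner: str
    risk_tier: str   # e.g. a category from the enterprise risk taxonomy
    state: LifecycleState = LifecycleState.PROPOSED

    def transition(self, new_state: LifecycleState) -> None:
        """Move the record to a new lifecycle state, rejecting illegal jumps."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state.value} -> {new_state.value} not allowed")
        self.state = new_state
```

Encoding the gate sequence in the schema itself is one way to make "governance built in from the start" operational rather than aspirational.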
Stage 4: Produce
Produce is where the governance architecture designed in Model is built, implemented, and operationalized. Controls are deployed, policies are enforced, workflows are configured, and audit evidence is generated at every step.
Produce Activities
- AI System Registry deployment — implementing the registry, populating system records, and configuring workflow integrations
- Control implementation — deploying technical and procedural controls defined in the risk framework
- Policy operationalization — translating policy documents into enforced workflows and decision gates
- Monitoring infrastructure build — configuring KPI dashboards, alert thresholds, and model drift detection
- Audit evidence pack assembly — gathering documentation for each AI system in scope
- MLOps pipeline integration — connecting AI development pipelines to governance controls and registry
Produce Outputs
- Deployed AI System Registry with complete system records
- Active control library — documented and tested controls mapped to risk taxonomy
- Monitoring dashboard suite — real-time KPIs, alerts, and governance scorecards
- Audit Evidence Packs — complete documentation sets for each AI system, ready for Gate E review
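Evidence pack assembly benefits from an automated completeness check before a pack is submitted for Gate E review. The required artifact names below are assumptions loosely drawn from the Model and Produce outputs above, not a normative COMPEL checklist.

```python
# Illustrative completeness check for an audit evidence pack. The required
# artifact names are assumptions based on the outputs listed in this document.
REQUIRED_ARTIFACTS = {
    "registry_record",
    "risk_assessment",
    "data_readiness_report",
    "collaboration_blueprint",
    "decision_log",
}

def evidence_gaps(pack: set[str]) -> list[str]:
    """Artifacts still missing before the pack can go to Gate E review."""
    return sorted(REQUIRED_ARTIFACTS - pack)
```

A non-empty gap list becomes a remediation task list; an empty one is the signal to schedule the Gate E review.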
Stage 5: Evaluate
Evaluate is the formal validation stage. It verifies that every AI system meets both its business value promise and its responsible AI obligations before production deployment — and on an ongoing basis thereafter.
Evaluate Activities
- Gate E review execution — formal validation of audit evidence packs against criteria
- Bias and fairness testing — structured assessment of model outputs against protected characteristics
- Business value validation — measuring actual outcomes against success criteria
- Regulatory conformity assessment — checking each system against applicable obligations
- Governance scorecard assessment — scoring organizational AI governance maturity
Evaluate Outputs
- Gate E Decision Record — formal pass/fail determination with remediation requirements
- Bias and Fairness Testing Report — documented results with remediation actions
- Business Value Validation Report — actual vs. projected outcomes
- COMPEL Governance Scorecard — current maturity scores across all 18 domains
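Bias and fairness testing can start with a simple group-rate comparison such as the demographic parity gap. This metric is one common choice shown for illustration; the document does not mandate a specific fairness metric, and real assessments typically apply several metrics plus qualitative review.

```python
# Demographic parity gap: the largest difference in positive-outcome rate
# between any two groups. One common fairness metric, used here as a sketch.
def demographic_parity_gap(outcomes: dict[str, list[int]]) -> float:
    """outcomes maps a group label to binary model outcomes (1 = favorable).

    Returns a value in [0, 1]; 0 means identical favorable-outcome rates
    across all groups.
    """
    rates = [sum(v) / len(v) for v in outcomes.values()]
    return max(rates) - min(rates)
```

A threshold on the gap (what counts as acceptable disparity) is a policy decision that belongs in the risk framework defined in Model, not in the testing code.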
Stage 6: Learn
Learn is the continuous improvement stage and the mechanism through which the cycle compounds. It monitors production AI systems, captures operational insights, identifies improvement opportunities, and feeds structured findings back into the next Calibrate cycle.
Learn Activities
- KPI monitoring — tracking performance metrics, model accuracy, and business outcomes
- Model drift detection — identifying statistically significant changes in model behavior
- Incident analysis — structured review of AI-related incidents and near-misses
- Improvement opportunity identification — analyzing performance data and audit findings
- Calibrate cycle feed — packaging Learn outputs as inputs for the next baseline assessment
Learn Outputs
- AI Performance Dashboard — ongoing KPI reporting for all production systems
- Model Drift Monitoring Reports — automated alerts and trend analysis
- Continuous Improvement Register — prioritized improvements for the next cycle
- Next-Cycle Calibrate Inputs — structured baseline updates reflecting operational learnings
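Model drift detection of the kind Learn calls for is often implemented with the population stability index (PSI), which compares a production score distribution against the baseline one. This is one common technique, sketched here with stdlib only; the binning scheme, smoothing, and the rule-of-thumb thresholds in the docstring are conventional assumptions, not COMPEL-prescribed values.

```python
# Population stability index (PSI) between a baseline score distribution and
# a production one. A common drift metric; parameters here are assumptions.
import math

def population_stability_index(expected: list[float], actual: list[float],
                               bins: int = 10) -> float:
    """Rule of thumb (conventional, not a COMPEL threshold):
    < 0.1 stable, 0.1-0.25 investigate, > 0.25 significant drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0   # degenerate case: all values equal

    def fractions(data: list[float]) -> list[float]:
        counts = [0] * bins
        for x in data:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Smooth empty bins so the log term stays defined.
        return [max(c, 0.5) / len(data) for c in counts]

    return sum((a - e) * math.log(a / e)
               for e, a in zip(fractions(expected), fractions(actual)))
```

A scheduled PSI check per production model, with alerts wired to the thresholds above, is a minimal concrete form of the "automated alerts and trend analysis" in the Model Drift Monitoring Reports.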
Four Pillars
People
Executive commitment, talent development, organization-wide literacy, and managed adoption. Domains: D1 Leadership Sponsorship, D2 Talent Strategy, D3 AI Literacy, D4 Change Management.
Process
How AI work is done: use case management, data governance, MLOps, project delivery, and continuous improvement. Domains: D5–D9.
Technology
Data platforms, AI/ML platforms, integration architecture, and security controls. Domains: D10–D13.
Governance
Strategic alignment, ethics and fairness, regulatory compliance, risk management, and governance structures. Domains: D14–D18.
Regulatory Alignment
ISO/IEC 42001:2023
COMPEL operationalizes every clause: Calibrate maps to Clause 4 (Context) and Clause 6 (Planning); Organize maps to Clause 5 (Leadership) and Clause 7 (Support); Model maps to Clause 6 and Annex A; Produce maps to Clause 8 (Operation); Evaluate maps to Clause 9 (Performance evaluation); Learn maps to Clause 10 (Improvement).
NIST AI RMF 1.0
COMPEL maps to all four NIST AI RMF functions: GOVERN (Calibrate, Organize), MAP (Calibrate, Model), MEASURE (Evaluate), and MANAGE (Produce, Learn).
EU AI Act 2024/1689
Risk management in Calibrate (Article 9), transparency and human oversight in Model (Articles 13–14), record-keeping in Produce (Article 12), conformity assessment in Evaluate (Article 43), and post-market monitoring in Learn (Article 72).
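The stage-to-framework mapping above is naturally a lookup table. The sketch below encodes it as data so compliance tooling can answer "what must this stage evidence under framework X?"; the dict shape and helper function are illustrative, while the mappings themselves are taken from the Regulatory Alignment section.

```python
# Stage-to-framework crosswalk, transcribed from the Regulatory Alignment
# section. The data structure and lookup helper are illustrative assumptions.
CROSSWALK = {
    "Calibrate": {"ISO/IEC 42001": ["Clause 4", "Clause 6"],
                  "NIST AI RMF":   ["GOVERN", "MAP"],
                  "EU AI Act":     ["Article 9"]},
    "Organize":  {"ISO/IEC 42001": ["Clause 5", "Clause 7"],
                  "NIST AI RMF":   ["GOVERN"]},
    "Model":     {"ISO/IEC 42001": ["Clause 6", "Annex A"],
                  "NIST AI RMF":   ["MAP"],
                  "EU AI Act":     ["Article 13", "Article 14"]},
    "Produce":   {"ISO/IEC 42001": ["Clause 8"],
                  "NIST AI RMF":   ["MANAGE"],
                  "EU AI Act":     ["Article 12"]},
    "Evaluate":  {"ISO/IEC 42001": ["Clause 9"],
                  "NIST AI RMF":   ["MEASURE"],
                  "EU AI Act":     ["Article 43"]},
    "Learn":     {"ISO/IEC 42001": ["Clause 10"],
                  "NIST AI RMF":   ["MANAGE"],
                  "EU AI Act":     ["Article 72"]},
}

def obligations(stage: str, framework: str) -> list[str]:
    """References a stage must evidence under a framework ([] if none mapped)."""
    return CROSSWALK.get(stage, {}).get(framework, [])
```

Keeping the crosswalk in data rather than prose lets audit evidence packs and Gate E checklists be generated from the same source of truth.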