Calibrate — The C in COMPEL
Establish an evidence-based AI maturity baseline across all 18 COMPEL domains
What This Stage Is
Calibrate is the diagnostic and orientation stage of the COMPEL operating cycle. Every organization begins here, regardless of prior AI investment, using structured assessment instruments to build an honest, evidence-based picture of current AI capability. Many organizations significantly overestimate their AI readiness because they conflate technology access with organizational capability: a company may have access to GPT-4, Bedrock, or SageMaker yet lack the governance structures, role definitions, data quality processes, or risk management frameworks needed to deploy AI responsibly at scale.

Calibrate closes this gap by assessing all 18 COMPEL domains independently, surfacing shadow AI usage, quantifying the skills gap, and establishing the quantitative baseline against which every subsequent stage is measured. The stage typically requires 4 to 8 weeks, depending on organizational complexity and the number of business units in scope, and produces the Calibration Report, the authoritative starting point for all subsequent transformation work. Without Calibrate, organizations risk investing in governance controls that do not address their actual weakest domains, leading to misallocated resources and persistent blind spots.
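As a minimal sketch of how the domain baseline might be recorded, the snippet below models one score per domain on the 5-level rubric with its evidence trail, then surfaces the weakest domains. The domain names, level labels, and fields are illustrative assumptions, not the normative COMPEL schema.

```python
from dataclasses import dataclass, field

@dataclass
class DomainScore:
    """One COMPEL domain score on the 5-level rubric, with its evidence trail."""
    domain: str
    score: int                                         # 1 (lowest) .. 5 (highest)
    evidence: list[str] = field(default_factory=list)  # citations backing the score

    def __post_init__(self):
        if not 1 <= self.score <= 5:
            raise ValueError(f"{self.domain}: score must be 1-5, got {self.score}")

# Illustrative subset of the 18 domains (names assumed, not taken from the spec)
baseline = [
    DomainScore("Data Governance", 2, ["DQ audit 2025-03", "CDO interview"]),
    DomainScore("Leadership Sponsorship", 4, ["Board minutes Q1", "CEO memo"]),
    DomainScore("Regulatory Compliance", 1, ["Gap analysis v0.9"]),
]

# Surface the weakest domains first; these drive prioritization in Organize
for d in sorted(baseline, key=lambda d: d.score):
    print(f"level {d.score}  {d.domain} ({len(d.evidence)} evidence items)")
```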
Why This Stage Matters
Without a rigorous, evidence-based baseline, AI transformation efforts are built on assumptions rather than facts. The outputs of Calibrate drive the sequencing and prioritization decisions in Organize, making this stage the foundation upon which the entire COMPEL cycle rests. Organizations that skip or rush Calibrate consistently make poor prioritization decisions in later stages — investing in policy frameworks when they lack basic data governance, or standing up ethics boards before leadership alignment is secured. The discipline of honest self-assessment prevents the most common failure mode in enterprise AI programs: solving the wrong problems first. Calibrate also establishes the measurement system that makes improvement visible. By recording domain-level scores at the start of each cycle, organizations can demonstrate quantitative progress to boards, regulators, and auditors — converting AI governance from a cost center narrative into a measurable capability-building investment.
Inputs
- Improvement recommendations and updated baselines from a prior Learn stage (if not the first cycle)
- Executive mandate or strategic initiative to pursue AI transformation
- Access to existing AI tool inventories, IT asset registers, and organizational charts
- Prior audit reports, risk assessments, or regulatory correspondence related to AI
- Stakeholder availability for structured interviews and assessment workshops
Key Activities
- AI maturity assessment across all 18 domains using the COMPEL 5-level scoring rubric with documented evidence for each score
- Shadow AI discovery survey — identifying unapproved tools and use cases already in production across business units
- Use case inventory — cataloging proposed and existing AI initiatives by business function, risk level, and strategic alignment (a scoring sketch follows this list)
- Executive readiness interviews — assessing sponsorship depth, governance appetite, and investment commitment from senior leadership
- Data landscape mapping — identifying critical data assets, quality baselines, access constraints, and cross-border considerations
- Regulatory exposure mapping — cataloging applicable obligations by jurisdiction, AI system type, and risk classification tier
- Stakeholder landscape mapping — identifying key stakeholders, their influence on AI transformation, and engagement requirements
- Cultural readiness evaluation — assessing organizational change capacity and resistance patterns relevant to AI adoption
- Self-assessment questionnaires — structured diagnostic tools enabling business units to evaluate their own AI readiness across COMPEL domains
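The use case inventory scores each initiative by business value, feasibility, and risk. One plausible way to combine those into a single priority rank is sketched below; the 1-to-5 scales, the weights, and the inverted risk term are assumptions for illustration, not a COMPEL-mandated formula.

```python
def priority_score(value: float, feasibility: float, risk: float,
                   w_value: float = 0.5, w_feas: float = 0.3, w_risk: float = 0.2) -> float:
    """Composite rank on 1-5 inputs: value and feasibility raise the score,
    risk lowers it. Weights and formula shape are illustrative, not normative."""
    return w_value * value + w_feas * feasibility + w_risk * (5 - risk)

pipeline = {
    "Invoice triage copilot": priority_score(value=4, feasibility=5, risk=2),
    "Credit scoring model":   priority_score(value=5, feasibility=3, risk=5),
    "HR resume screener":     priority_score(value=3, feasibility=4, risk=4),
}
for name, score in sorted(pipeline.items(), key=lambda kv: -kv[1]):
    print(f"{score:.1f}  {name}")
```

In practice the weights would be agreed with executive sponsors during Calibrate so the ranking reflects the organization's actual risk appetite rather than a default.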
Outputs & Deliverables
- COMPEL Baseline Maturity Report — domain scores across all 18 dimensions with documented evidence and scoring rationale
- Shadow AI Registry — inventory of unauthorized AI tools in active use with risk classifications and remediation recommendations
- Use Case Opportunity Map — prioritized pipeline of AI initiatives scored by business value, feasibility, and risk
- Executive Alignment Summary — documented sponsorship commitments, governance mandates, and investment authorizations
- Data Readiness Assessment — structured gap analysis across data infrastructure, quality, and governance dimensions
- Regulatory Exposure Register — mapped obligations per AI system type and jurisdiction with compliance gap indicators
- Transformation Readiness Baseline — composite assessment of organizational change capacity, skills readiness, and infrastructure preparedness
- Stakeholder Engagement Plan — documented strategy for engaging key stakeholders throughout the transformation program
- Transformation Success Criteria — measurable objectives and KPIs that define success for the current COMPEL cycle
Controls
- Assessment scoring must use documented evidence — no self-reported scores without corroboration from at least two independent sources
- Shadow AI discovery must cover all business units, not just IT-managed systems, including SaaS AI features embedded in existing tools
- Executive interviews must include at minimum the CTO, CISO, Chief Data Officer, and one business unit leader sponsoring an AI initiative
- Maturity scores must be independently validated by a second assessor for any domain scoring above level 3 (see the control-check sketch after this list)
- Use case inventory must capture both approved and unapproved AI initiatives to prevent governance blind spots
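Two of these controls are mechanically checkable: the two-source corroboration rule and second-assessor validation for any domain scored above level 3. A minimal sketch of such a check, with field names and sample data assumed for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Assessment:
    domain: str
    score: int                             # 1-5 on the COMPEL rubric
    evidence_sources: int                  # independent corroborating sources
    second_assessor: Optional[str] = None  # validator, if assigned

def control_findings(a: Assessment) -> list[str]:
    """Flag violations of the corroboration and second-assessor controls."""
    findings = []
    if a.evidence_sources < 2:
        findings.append(f"{a.domain}: needs >= 2 independent sources, has {a.evidence_sources}")
    if a.score > 3 and a.second_assessor is None:
        findings.append(f"{a.domain}: score {a.score} requires second-assessor validation")
    return findings

for a in [Assessment("AI Ethics", 4, 2),                      # missing validator
          Assessment("Change Management", 2, 1),              # under-evidenced
          Assessment("Data Governance", 4, 3, "J. Rivera")]:  # passes both controls
    for finding in control_findings(a):
        print(finding)
```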
Evidence Artifacts
- Completed COMPEL Maturity Assessment Workbook with domain-level scores and evidence citations
- Shadow AI Discovery Report with system inventory, risk ratings, and recommended disposition (approve, remediate, retire); a registry entry sketch follows this list
- Executive Interview Transcripts or structured summaries with sign-off from interviewees
- Data Landscape Map documenting critical data assets, quality scores, and access control matrices
- Regulatory Exposure Matrix mapping each in-scope AI system to applicable regulatory obligations
- Stakeholder Register with influence mapping and engagement plan for the transformation program
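A rough sketch of what one Shadow AI Registry entry could look like, modeling the three disposition options named above; the field names and risk scale are assumptions rather than a prescribed format.

```python
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    APPROVE = "approve"      # bring under governance as-is
    REMEDIATE = "remediate"  # permitted once controls are added
    RETIRE = "retire"        # decommission and migrate users

@dataclass
class ShadowAIEntry:
    system: str
    business_unit: str
    risk_rating: str         # "low" / "medium" / "high" scale is an assumption
    disposition: Disposition
    notes: str = ""

registry = [
    ShadowAIEntry("Browser LLM extension", "Sales", "high", Disposition.RETIRE,
                  "Pastes customer PII into an unvetted third-party tool"),
    ShadowAIEntry("SaaS CRM AI summarizer", "Marketing", "medium", Disposition.REMEDIATE,
                  "Embedded vendor feature; needs contract and opt-out review"),
]
for e in registry:
    print(f"[{e.risk_rating:>6}] {e.system} ({e.business_unit}) -> {e.disposition.value}")
```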
Metrics & KPIs
- Percentage of the 18 domains assessed with documented evidence (target: 100%)
- Number of shadow AI systems discovered and risk-classified
- Use case pipeline size — total initiatives cataloged with value and risk scores
- Executive sponsorship coverage — percentage of required leadership roles formally committed
- Assessment completion time — weeks from kickoff to final Calibration Report delivery
- Inter-rater reliability score — agreement rate between independent assessors on domain scores
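The inter-rater reliability KPI can be computed as simple percent agreement or, more robustly, as Cohen's kappa, which corrects for agreement expected by chance. A worked sketch on hypothetical scores from two assessors:

```python
from collections import Counter

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    # agreement expected if both raters scored independently with these marginals
    p_e = sum((ca[k] / n) * (cb[k] / n) for k in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical scores from two independent assessors across ten domains
primary   = [2, 3, 1, 4, 2, 3, 3, 2, 4, 1]
validator = [2, 3, 2, 4, 2, 3, 2, 2, 4, 1]
print(f"kappa = {cohens_kappa(primary, validator):.2f}")  # 0.73 for these scores
```

A kappa above roughly 0.6 is conventionally read as substantial agreement; lower values suggest the rubric's level definitions need tightening before baseline scores are locked.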
Risks If Skipped
- Organizations invest in governance controls that do not address their actual weakest domains, leading to misallocated resources
- Shadow AI systems remain undetected, creating unmanaged risk exposure across the enterprise
- Transformation programs lack quantitative baselines, making it impossible to demonstrate progress to boards and regulators
- Prioritization decisions in Organize and Model are based on assumptions rather than evidence, reducing program effectiveness
- Regulatory exposure remains unmapped, increasing the probability of non-compliance findings during external audits
Standards Alignment
| Standard | Clause | Description |
|---|---|---|
| ISO/IEC 42001:2023 | Clause 4.1, 4.2, 6.1 | Understanding the organization and its context, understanding needs and expectations of interested parties, actions to address risks and opportunities |
| NIST AI RMF 1.0 | GOVERN 1.1, MAP 1.1-1.6 | Legal and regulatory requirements identified; context established; AI systems categorized; risks identified |
| EU AI Act 2024/1689 | Article 9(1-2), Annex III | Risk management system establishment, risk identification and analysis, risk classification per high-risk categories |
| IEEE 7000-2021 | Clause 6.3, 6.4 | Stakeholder identification and ethical value elicitation for AI systems |
References
- [1] ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system, Clauses 4-6
- [2] NIST AI Risk Management Framework 1.0 (2023) — GOVERN and MAP functions
- [3] EU Artificial Intelligence Act 2024/1689 — Articles 6, 9, Annex III (Risk classification)
- [4] IEEE 7000-2021 — Model Process for Addressing Ethical Concerns During System Design
- [5] COMPEL Maturity Model Specification v2.1 — FlowRidge, 2025
- [6] Gartner, "How to Assess AI Maturity in Your Organization" (2024)
- [7] McKinsey Global Institute, "The State of AI in 2024" — organizational readiness benchmarks
Frequently Asked Questions
- How long does the Calibrate stage typically take?
- Calibrate typically requires 4 to 8 weeks depending on organizational complexity, the number of business units in scope, and stakeholder availability. Smaller organizations with a single business unit may complete Calibrate in 3 weeks, while large enterprises with multiple jurisdictions and dozens of AI systems may require 10 to 12 weeks.
- Can we skip Calibrate if we already have an AI strategy?
- No. Having an AI strategy does not mean your organization has an accurate picture of its current AI maturity across all 18 COMPEL domains. Calibrate is designed to surface gaps between strategy and execution — including shadow AI, skills deficits, and regulatory exposure — that strategic documents typically do not capture.
- What tools are used during the Calibrate assessment?
- COMPEL provides a standardized Maturity Assessment Workbook covering all 18 domains with a 5-level scoring rubric. Assessors use structured interview guides, shadow AI discovery surveys, and data landscape mapping templates. The COMPEL platform can automate parts of the assessment through integration with IT asset management and HR systems.
- Who should lead the Calibrate stage?
- Calibrate should be led by an individual certified as an AIT Practitioner or AIT Governance Professional, typically working within or alongside the AI Center of Excellence. External COMPEL-certified assessors are recommended for the first cycle to ensure objectivity and benchmarking accuracy.
- How does Calibrate differ from a traditional IT maturity assessment?
- Traditional IT maturity assessments focus on technology infrastructure and operational processes. Calibrate assesses 18 domains spanning People, Process, Technology, and Governance pillars — including leadership sponsorship, AI ethics, change management, and regulatory compliance. This holistic scope is essential because AI transformation failures are more often organizational than technical.
Abdelalim, T. (2025). “Calibrate — The C in COMPEL.” COMPEL by FlowRidge. https://www.compel.one/methodology/calibrate