COMPEL Certification Body of Knowledge — Module 1.2: The COMPEL Six-Stage Lifecycle
Article 18 of 22
The transition from Organize to Model is not automatic. An organization that has invested weeks or months establishing its AI governance infrastructure — defining roles, standing up the Center of Excellence, aligning the operating model — does not automatically possess the capability to take on the more technically demanding governance work of the Model stage. The Readiness Assessment Report is the artifact that determines whether the transition is warranted.
It is, in effect, a gate. And like all effective gates in governance systems, it is not designed to block progress indefinitely. It is designed to ensure that organizations do not advance before they have the foundational capabilities on which subsequent stages depend. An organization that enters the Model stage without adequate data governance processes will find that every risk assessment it produces is undermined by poor data quality. An organization that lacks the technical staff to implement controls will find that its Control Requirements Matrix is aspirational rather than operational. The Readiness Assessment Report surfaces these gaps before they become expensive failures.
This article provides a complete guide to producing the Readiness Assessment Report: its purpose and structure, the assessment dimensions it covers, the scoring methodology, how to interpret results, and how to convert findings into actionable plans.
Purpose and Scope
The Readiness Assessment Report (TMPL-O-005) is a mandatory Organize-stage artifact owned by the CoE Lead. It is produced near the end of the Organize stage, after the AI Operating Model Blueprint has been approved and governance bodies have been established, and it serves as the primary evidence document for the Organize-to-Model gate review.
The Report answers a single overarching question: has the organization completed the foundational governance work required to govern AI systems at the technical and operational depth the Model stage demands? This question has many dimensions — organizational, technical, cultural, regulatory — and the Report must address each of them with specificity and evidence.
The scope of the Report extends across the entire organization, not just the CoE or the specific business units actively pursuing AI deployment. Governance readiness is an enterprise capability. A gap in one function — say, the Legal team's capacity to review AI procurement contracts — can create bottlenecks that affect the entire transformation program.
Assessment Dimensions
The Readiness Assessment Report evaluates six dimensions of organizational readiness. Each dimension is assessed independently and scored, and the six scores are combined into an aggregate readiness profile.
Dimension 1: Governance Structure Readiness
This dimension assesses whether the governance structures defined in the AI Operating Model Blueprint are operational, not merely documented. Documentation and operationalization are distinct states. A governance body that appears in the Blueprint but has not held its inaugural meeting, has not received its charter, and has not oriented its members to their responsibilities is not operationally ready.
Key evidence indicators include: governance body inaugural meetings completed and minuted, role profiles communicated to all appointees, escalation protocols tested with at least one tabletop exercise, and the decision rights matrix circulated to all decision authorities and acknowledged in writing.
Dimension 2: Data Governance Readiness
The Model stage requires organizations to apply governance to specific AI systems, which in turn requires reliable knowledge of the data those systems use. This dimension assesses the maturity of the organization's data governance capabilities: data inventory completeness, data quality standards, data lineage documentation, and data access controls.
A minimum readiness threshold for this dimension requires: a data inventory covering all data assets used in AI systems currently in scope, documented data quality standards for those assets, and a data access governance process that ensures AI development teams have appropriate access to training and evaluation data without circumventing privacy or security controls.
Dimension 3: Technical Infrastructure Readiness
AI governance at the Model stage requires technical infrastructure: model registries, experiment tracking systems, audit logging capabilities, monitoring infrastructure, and the tooling to implement the controls that will be defined in the Control Requirements Matrix. This dimension assesses whether that infrastructure exists, is operational, and is accessible to the teams who will use it.
Organizations that lack mature MLOps infrastructure should not be penalized for gaps that reflect the early stage of their AI maturity — but those gaps must be explicitly identified, with remediation plans and timelines, so that the Model stage can proceed with a realistic understanding of what is and is not yet possible.
Dimension 4: Talent and Capability Readiness
Governance is performed by people. This dimension assesses whether the organization has the human capability — in sufficient quantity and of sufficient quality — to execute the Model-stage governance work. It covers: the CoE's capacity relative to the pipeline of AI systems requiring governance, business unit AI leads' governance training completion, the Ethics Review Board's access to subject matter expertise in relevant risk domains, and the availability of external advisory support for gaps that cannot be filled internally.
Dimension 5: Regulatory Mapping Readiness
The Model stage will require the organization to classify AI systems by risk tier and map those classifications to regulatory requirements. This is impossible without a clear understanding of which regulations apply. This dimension assesses whether the organization has completed its regulatory mapping: identifying the jurisdictions in which its AI systems operate, the sector-specific regulations that apply, the current and anticipated regulatory requirements under each, and the legal opinion on how those requirements apply to the organization's specific use cases.
Dimension 6: Cultural and Change Readiness
Technical and structural readiness is necessary but not sufficient. Organizations also need cultural readiness — a workforce that understands why AI governance matters, that has internalized the organization's AI values, and that will engage with governance processes in good faith rather than treating them as compliance theater to be minimized.
This dimension assesses: awareness training completion rates across the employee population with AI responsibilities, leadership communication activity on AI governance themes, the degree to which AI governance has been integrated into performance management expectations, and qualitative evidence from the CoE's interactions with business units about practitioner attitudes toward governance requirements.
Scoring Methodology
Each dimension is scored on a five-point readiness scale:
Level 1 — Initial: The capability does not exist or exists only in fragmentary, ad hoc form. Significant investment is required before Model-stage governance is feasible.
Level 2 — Developing: Basic capability exists but is incomplete, inconsistently applied, or inadequately resourced. Progress is visible but substantial work remains.
Level 3 — Defined: The capability is documented, consistently applied, and adequately resourced for current needs. The organization can proceed to the Model stage with this capability, acknowledging that further maturation will occur during the Model stage and beyond.
Level 4 — Managed: The capability is mature, metrics-driven, and continuously improving. This level exceeds the minimum threshold and represents best-practice governance for this dimension.
Level 5 — Optimizing: The capability is industry-leading, actively benchmarked against peers, and being used to advance the field. Few organizations achieve Level 5 across all dimensions at the Organize stage.
The minimum threshold to pass the Organize-to-Model gate review is a score of Level 3 or above on all six dimensions. Dimensions scoring below Level 3 require remediation plans with specific milestones before the gate review can approve the transition. The gate review authority may grant a conditional approval — permitting limited Model-stage activities to proceed — while remediation is underway, provided the gaps do not affect the specific AI systems entering the Model stage pipeline.
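The threshold rule above can be expressed as a simple check: the gate passes outright only if every one of the six dimensions scores Level 3 or above, and any dimension below that level must carry a remediation plan. The following sketch illustrates the rule; the function and variable names are illustrative, not part of the COMPEL specification.

```python
# Sketch of the Organize-to-Model gate threshold rule: all six dimensions
# must score Level 3 ("Defined") or above to pass without conditions.
# Dimension names follow the article; everything else is illustrative.

PASS_THRESHOLD = 3  # Level 3 — "Defined"

def gate_review(scores: dict) -> tuple:
    """Return (passes, dimensions_requiring_remediation_plans)."""
    gaps = [dim for dim, level in scores.items() if level < PASS_THRESHOLD]
    return (not gaps, gaps)

profile = {
    "Governance Structure": 3,
    "Data Governance": 2,        # below threshold — remediation plan required
    "Technical Infrastructure": 3,
    "Talent and Capability": 4,
    "Regulatory Mapping": 3,
    "Cultural and Change": 3,
}

passes, gaps = gate_review(profile)
print(passes)  # False
print(gaps)    # ['Data Governance']
```

In this hypothetical profile the gate cannot grant full approval; the review authority would either require a remediation plan for Data Governance before approving, or issue a conditional approval if the data gap does not affect the specific systems entering the Model-stage pipeline.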
Interpretation Guide
A readiness profile in which all six dimensions score Level 3 or above represents a baseline-ready organization. The organization has the foundation to begin Model-stage governance work and should proceed.
A profile with one or two dimensions below Level 3 typically indicates a focused remediation requirement. The CoE should identify the root cause of the gap — is it a resource constraint, a process design issue, or an organizational resistance issue? — and develop a targeted remediation plan. The gate review should be rescheduled for no more than 60 days after the initial assessment, allowing time for focused remediation without allowing momentum to stall.
A profile with three or more dimensions below Level 3 indicates that the Organize stage has not yet achieved its objectives. This is not a failure state — it is diagnostic information. The CoE Lead should treat this result as evidence that the Organize stage requires additional time and resource investment. Advancing to the Model stage in this condition would undermine both the quality of the governance work and the credibility of the governance program.
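The three interpretation outcomes above turn on a single count: how many of the six dimensions fall below Level 3. A minimal sketch of that decision logic, with illustrative labels:

```python
# Sketch of the interpretation guide: the recommended path depends on the
# number of dimensions scoring below Level 3. Labels are illustrative
# summaries of the article's guidance, not official COMPEL terminology.

def interpret(scores: dict) -> str:
    below = sum(1 for level in scores.values() if level < 3)
    if below == 0:
        return "baseline-ready: proceed to the Model stage"
    if below <= 2:
        return "focused remediation: reschedule gate review within 60 days"
    return "Organize stage incomplete: extend time and resource investment"

print(interpret({"Governance Structure": 3, "Data Governance": 2,
                 "Technical Infrastructure": 3, "Talent and Capability": 4,
                 "Regulatory Mapping": 3, "Cultural and Change": 3}))
# focused remediation: reschedule gate review within 60 days
```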
Action Planning
The Readiness Assessment Report is not complete until it includes an Action Plan for every dimension scoring below Level 4. The Action Plan for each gap must specify: the specific capability improvements required, the owner responsible for delivering each improvement, the resources required (budget, headcount, tooling), the timeline for completion, and the evidence that will confirm completion.
Action Plans for pre-threshold gaps (below Level 3) are prerequisites for the gate review. Action Plans for above-threshold improvements (from Level 3 to Level 4 or 5) are continuous improvement commitments that the CoE tracks through the Model stage and beyond.
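The distinction between pre-threshold and above-threshold Action Plans can be captured as a small data structure and classification rule. The field names here are illustrative stand-ins for the required Action Plan elements listed above:

```python
# Sketch of the Action Plan partition rule: dimensions below Level 3 are
# gate prerequisites; dimensions at Level 3 but below Level 4 become
# continuous-improvement commitments. Field names are illustrative.

from dataclasses import dataclass

@dataclass
class ActionPlan:
    dimension: str
    current_level: int
    owner: str      # who is responsible for delivering the improvement
    resources: str  # budget, headcount, tooling
    timeline: str   # target completion date
    evidence: str   # what will confirm completion

def classify(plan: ActionPlan) -> str:
    if plan.current_level < 3:
        return "gate prerequisite"
    if plan.current_level < 4:
        return "continuous improvement commitment"
    return "no action plan required"

plan = ActionPlan("Data Governance", 2, "CoE Lead",
                  "2 FTE + cataloging tool", "Q3",
                  "data inventory covers all in-scope AI assets")
print(classify(plan))  # gate prerequisite
```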
The Action Plan section of the Report transforms the assessment from an evaluation exercise into a governance planning tool. It is the bridge between the organization's current readiness state and the state it needs to achieve to govern AI at scale.
Cross-References
- Article 2: Organize — Structuring for Governance — Organize stage objectives and deliverables
- Article 3: The Enterprise AI Maturity Spectrum — maturity model used as reference for readiness levels
- Article 14: Mandatory Artifacts and Evidence Management — artifact lifecycle and evidence chain requirements
- Article 17: Creating the AI Operating Model Blueprint — the Blueprint is a prerequisite for the Readiness Assessment
- Article 19: Building the Control Requirements Matrix — the first mandatory artifact of the Model stage, for which this Report serves as the gate