Entry and Exit Criteria: Stage Gate Readiness

Level 1: AI Transformation Foundations · Module M1.2: The COMPEL Six-Stage Lifecycle · Article 16 of 16 · 27 min read · Version 1.0 · Last reviewed: 2025-01-15 · Open Access

COMPEL Certification Body of Knowledge — Module 1.2: The COMPEL Six-Stage Lifecycle

Article 16 of 16


Introduction

The COMPEL Six-Stage Lifecycle derives its rigor not from the stages themselves but from the boundaries between them. Every transition from one stage to the next represents a decision point — a gate through which work products, stakeholder commitments, and risk assessments must pass before the organization advances. Without formally defined entry criteria, exit criteria, and failure conditions, lifecycle execution degrades into a sequence of loosely connected activities rather than a disciplined governance process.

This article provides the definitive reference for stage gate readiness across all six COMPEL stages: Calibrate, Organize, Model, Produce, Evaluate, and Learn. For each stage, it specifies the preconditions that must be satisfied before work begins (entry criteria), the deliverables and evidence that must be produced before the stage can close (exit criteria), and the observable conditions under which the stage must be considered failed and remediation initiated (failure conditions). It also describes the four named quality gates that punctuate the lifecycle, the recommended cycle duration and its contextual range, and the procedural relationship between failure conditions and stage rollback.

Readers should treat this article as a companion to M1.2-Art07: Stage-Gate Decision Framework, which addresses the governance mechanics of gate reviews — the composition of review boards, escalation protocols, and decision recording requirements. Where Art07 answers "who decides and how," this article answers "what must be true before, during, and after each stage."


The Architecture of Stage Boundaries

Entry Criteria, Exit Criteria, and Failure Conditions Distinguished

These three concepts serve fundamentally different governance purposes, and conflating them is one of the most common errors in lifecycle implementation.

Entry criteria are preconditions. They describe the state of the organization, its commitments, and its information assets that must be verified before a stage commences. Entry criteria are binary: either they are met or they are not. If they are not met, the stage does not begin. Entry criteria exist to prevent premature work — the expenditure of resources on activities for which the organization is not yet prepared.

Exit criteria are completion conditions. They describe the work products, approvals, and evidence that the stage must produce before it can close. Exit criteria are also binary, but they are assessed retrospectively: the stage has been running, and the question is whether it has produced everything it was supposed to produce. Exit criteria exist to prevent premature advancement — the transition to a subsequent stage before the current stage has delivered its required outputs.

Failure conditions are fundamentally different from both. A failure condition is an observable state during stage execution that indicates the stage cannot succeed under current circumstances. Failure conditions are not the absence of exit criteria (that is simply "not done yet"); they are the presence of blocking factors that make completion impossible or inadvisable without intervention. Failure conditions trigger remediation procedures, which may include stage rollback, scope reduction, stakeholder escalation, or cycle termination.

The critical distinction: an unmet exit criterion means "keep working." A triggered failure condition means "stop and escalate."
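The decision rule above can be sketched as a small triage function. This is a minimal illustration of the distinction, not COMPEL-mandated tooling; all names (`StageState`, `Action`, the criterion keys) are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class Action(Enum):
    PROCEED = "advance to next stage"
    CONTINUE = "keep working"
    ESCALATE = "stop and escalate"


@dataclass
class StageState:
    """Snapshot of a stage's gate-relevant state (names are illustrative)."""
    exit_criteria: dict[str, bool]  # criterion -> satisfied?
    failure_conditions: list[str] = field(default_factory=list)  # triggered conditions


def assess(stage: StageState) -> Action:
    # A triggered failure condition always dominates: "stop and escalate."
    if stage.failure_conditions:
        return Action.ESCALATE
    # An unmet exit criterion just means the stage is not done: "keep working."
    if not all(stage.exit_criteria.values()):
        return Action.CONTINUE
    return Action.PROCEED
```

Note that the failure-condition check comes first: a stage with every exit criterion satisfied but an open failure condition still escalates rather than advances.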

The Chain of Continuity

A well-designed lifecycle ensures that the exit criteria of stage N are a superset of the entry criteria of stage N+1. This is the chain of continuity principle. When Calibrate's exit criteria are satisfied, Organize's entry criteria are automatically met — because Organize's entry criteria were derived from Calibrate's outputs. If there is a gap between one stage's exit criteria and the next stage's entry criteria, the lifecycle has a structural defect: it is possible to complete a stage and still not be ready for what comes next.

Throughout this article, the chain of continuity is made explicit. Each stage's entry criteria reference the specific exit criteria of the prior stage on which they depend.
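Assuming criteria are tracked as identifiers in a registry, a structural continuity check might look like the following sketch. The stage and criterion names are illustrative abbreviations of criteria defined later in this article, not a normative schema.

```python
# Hypothetical criterion identifiers; a real registry would carry full criterion text.
LIFECYCLE = [
    ("Calibrate", {"entry": {"executive_commitment", "sponsor_named"},
                   "exit": {"maturity_baseline", "shadow_ai_inventory",
                            "use_case_backlog", "executive_alignment"}}),
    ("Organize",  {"entry": {"maturity_baseline", "shadow_ai_inventory",
                             "use_case_backlog", "executive_alignment"},
                   "exit": {"operating_model", "policy_framework",
                            "engagement_plan", "resource_plan"}}),
]


def continuity_gaps(lifecycle):
    """Return entry criteria of stage N+1 not covered by stage N's exit criteria."""
    gaps = {}
    for (prev_name, prev), (name, cur) in zip(lifecycle, lifecycle[1:]):
        missing = cur["entry"] - prev["exit"]
        if missing:
            gaps[f"{prev_name} -> {name}"] = missing
    return gaps
```

An empty result means the chain of continuity holds; any non-empty entry names a structural defect at that boundary.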


Stage 1: Calibrate

Entry Criteria

The Calibrate stage is the lifecycle's origin point, and its entry criteria therefore describe organizational readiness rather than the outputs of a prior stage.

  1. Executive commitment to AI transformation. The board of directors or C-suite leadership must have issued a formal directive, memorandum, or strategic plan that acknowledges the organization's intent to pursue AI-enabled transformation. This need not be a detailed strategy; it must be an unambiguous statement of intent with sufficient authority to mobilize resources.
  2. Designated executive sponsor with decision-making authority. A named individual at the senior vice president level or above must be identified as the executive sponsor for the COMPEL cycle. This individual must have the authority to allocate budget, assign personnel, and make binding decisions on behalf of the organization regarding AI governance scope and priorities. The sponsor's authority must be documented and communicated to all business units that will participate in the assessment.
  3. Budget allocation for assessment and discovery activities. Sufficient funding must be committed — not merely projected — to cover the cost of the Calibrate stage's activities: maturity assessments, shadow AI discovery, stakeholder interviews, and use-case identification workshops. The budget need not cover the full lifecycle; it must cover Calibrate in its entirety.
  4. Availability of cross-functional assessment participants. Representatives from IT, legal, compliance, operations, human resources, and at least two line-of-business units must be identified and committed to participate in assessment activities. Their managers must have approved their time allocation.

Exit Criteria

  1. Maturity baseline completed across all 18 governance domains. The organization's current AI governance maturity must be assessed and scored across all 18 domains defined in the COMPEL maturity model (see M1.3-Art02: COMPEL Maturity Model Domains). Each domain must have a numeric score, supporting evidence, and an identified domain owner.
  2. Shadow AI inventory documented and risk-classified. All AI systems, models, and automated decision-making tools operating outside formal IT governance must be identified, catalogued, and classified by risk tier. The inventory must include system owner, data sources, affected populations, and current controls (or lack thereof). Risk classification must use the organization's adopted risk taxonomy.
  3. Use-case backlog prioritized with value theses. A minimum of ten candidate AI use cases must be identified, each with a value thesis articulating the expected business outcome, estimated effort, governance complexity, and strategic alignment score. The backlog must be rank-ordered using a consistent prioritization methodology.
  4. Executive alignment documented and signed. The executive sponsor and all participating C-suite stakeholders must sign an alignment document that confirms agreement on: the maturity baseline findings, the risk classification of shadow AI systems, the prioritized use-case backlog, and the scope of the next stage (Organize). This is not a formality; it is the binding commitment that authorizes the organization to proceed.

Failure Conditions

  • No executive sponsor identified after two weeks. If two calendar weeks elapse from the formal initiation of the Calibrate stage without a named executive sponsor who has accepted the role in writing, the stage must be suspended. Without sponsorship, assessment activities lack authority and their findings will not be actionable.
  • Assessment data collection blocked by more than three business units. If more than three business units refuse to participate in maturity assessment or shadow AI discovery, the resulting baseline will have material gaps that undermine the validity of all subsequent planning. The stage must be paused and the blocking units escalated to the executive sponsor.
  • Shadow AI discovery reveals critical unmitigated risks requiring immediate escalation. If the shadow AI inventory identifies systems that pose immediate legal, safety, or regulatory risk — for example, an unmonitored model making consequential decisions about individuals without any human oversight — the Calibrate stage must pause its normal activities and initiate an emergency risk escalation. The lifecycle does not proceed until the critical risks are either mitigated or formally accepted at the board level.

Stage 2: Organize

Entry Criteria

  1. Calibrate exit criteria fully satisfied. All four Calibrate exit criteria must be verified as complete. The maturity baseline, shadow AI inventory, use-case backlog, and signed executive alignment document must all be available and current.
  2. Governance structure mandate confirmed. The executive sponsor must confirm that the organization is prepared to establish or modify its AI governance structure — including committee charters, role definitions, and reporting lines — based on the Calibrate findings.
  3. Resource commitment for governance design activities. Personnel and budget for the Organize stage must be committed, including time from legal counsel, compliance officers, HR leadership, and technology architecture teams.

Exit Criteria

  1. Governance operating model formally defined. The AI governance committee structure, decision-rights matrix, escalation pathways, and reporting cadences must be documented and approved by the executive sponsor. Role definitions must include accountability assignments for each of the 18 maturity domains.
  2. Policy framework drafted and reviewed. Core AI governance policies — including acceptable use, model risk management, data governance, and human oversight requirements — must be drafted, reviewed by legal counsel, and approved for pilot deployment. Policies need not be final; they must be sufficient to govern the Model and Produce stages.
  3. Stakeholder engagement plan approved. A comprehensive stakeholder map and engagement plan must be completed, identifying all internal and external stakeholders affected by the prioritized use cases, their influence and interest levels, and the communication and involvement strategies for each group (see M1.2-Art09: Stakeholder Engagement and Communication Planning).
  4. Resource allocation plan for remaining lifecycle stages. A staffing plan, budget forecast, and timeline for the Model, Produce, Evaluate, and Learn stages must be produced and approved. This plan becomes the baseline against which execution is tracked.

Failure Conditions

  • Governance committee cannot be constituted within three weeks. If the organization cannot identify and secure commitment from the required committee members within three weeks of Organize commencement, the governance structure is unlikely to function effectively. The stage must escalate to the executive sponsor for intervention.
  • Legal review identifies regulatory blockers with no remediation path. If legal counsel determines that the organization's current regulatory posture prevents the deployment of AI systems in the prioritized use cases and no remediation is feasible within the cycle timeline, the use-case backlog must be revised. If fewer than three viable use cases remain after revision, the cycle should be reconsidered.
  • Irreconcilable stakeholder conflicts at the executive level. If two or more executive stakeholders hold fundamentally opposed positions on governance scope, risk appetite, or use-case priority, and mediation by the executive sponsor fails to resolve the conflict within one week, the Organize stage must pause. Proceeding without executive consensus produces governance structures that will be undermined during execution.

Stage 3: Model

Entry Criteria

  1. Organize exit criteria fully satisfied. The governance operating model, policy framework, stakeholder engagement plan, and resource allocation plan must all be complete and approved.
  2. Design team assembled and briefed. The technical and governance personnel responsible for designing AI solutions must be identified, onboarded, and briefed on the prioritized use cases, governance policies, and stakeholder requirements.
  3. Technical infrastructure access confirmed. The design team must have access to the development environments, data catalogues, model registries, and collaboration tools required for design work.

Exit Criteria

  1. Solution architectures completed for all prioritized use cases. Each use case in the approved backlog must have a detailed solution architecture covering data pipelines, model selection rationale, integration points, human oversight mechanisms, and governance controls. Architectures must be reviewed against the policy framework established in Organize.
  2. Risk assessments completed for each solution design. Every solution architecture must have an accompanying risk assessment that identifies potential failure modes, bias vectors, data quality risks, and regulatory compliance gaps. Risk assessments must include proposed mitigations and residual risk ratings.
  3. Ethical review completed and documented. An ethical review board or equivalent body must have reviewed each solution design for fairness, transparency, accountability, and societal impact. Review findings and any required design modifications must be documented.
  4. Gate M: Design Approved. The formal quality gate for the Model stage. The governance committee must convene a gate review and formally approve all solution designs for advancement to the Produce stage. Gate M approval requires documented consensus that each design is technically feasible, ethically reviewed, risk-assessed, and aligned with organizational policies. Gate M is the first of the four named quality gates in the COMPEL lifecycle and represents the organization's formal commitment to build what has been designed (see M1.2-Art07: Stage-Gate Decision Framework for gate review procedures).

Failure Conditions

  • Solution design fails ethical review with no feasible remediation. If the ethical review board determines that a proposed solution cannot be made to comply with the organization's ethical principles and no redesign is feasible, the use case must be removed from the backlog. If this reduces the backlog below the minimum viable scope, the cycle may need to return to Calibrate for re-prioritization.
  • Data readiness assessment reveals critical gaps. If the data required for a prioritized use case is unavailable, of insufficient quality, or legally restricted, and remediation cannot be completed within the cycle timeline, the affected use case must be descoped or the cycle extended.
  • Technical architecture review identifies fundamental platform limitations. If the organization's technology infrastructure cannot support the proposed designs without capital investment exceeding the approved budget, the designs must be revised or the budget renegotiated. This failure condition often triggers a partial rollback to Organize for resource reallocation.

Stage 4: Produce

Entry Criteria

  1. Gate M: Design Approved passed. All solution designs must have received formal Gate M approval from the governance committee.
  2. Development environments provisioned and validated. All technical environments required for building, testing, and staging AI solutions must be provisioned, security-hardened, and validated for readiness.
  3. Development team capacity confirmed. The personnel required to build the approved designs must be available, with no competing commitments that would reduce their allocation below the planned level.

Exit Criteria

  1. AI solutions built and unit-tested. All approved solution designs must be implemented as functional systems with passing unit tests, integration tests, and code quality checks. Build artifacts must be versioned and stored in the organization's artifact repository.
  2. Governance controls implemented and verified. Every governance control specified in the solution architecture — logging, audit trails, human override mechanisms, access controls, bias monitoring hooks — must be implemented and verified through testing. Controls are not optional enhancements; they are core deliverables.
  3. Documentation completed. Technical documentation, operational runbooks, and user-facing documentation must be completed for each solution. Documentation must include model cards, data dictionaries, and governance control descriptions.
  4. Gate P: Build Complete. The formal quality gate for the Produce stage. The governance committee must verify that all solutions are built to specification, governance controls are operational, and documentation is complete. Gate P approval authorizes advancement to the Evaluate stage. Gate P is the second named quality gate and represents the organization's assertion that the solutions are ready for formal validation (see M1.2-Art07).

Failure Conditions

  • Governance controls cannot be implemented as designed. If technical constraints prevent the implementation of required governance controls — for example, if the chosen platform does not support the specified audit logging granularity — the solution must be redesigned or the platform changed. This failure condition triggers a rollback to Model for design revision.
  • Build quality metrics fall below acceptable thresholds. If code quality, test coverage, or performance benchmarks fall below the thresholds defined in the organization's engineering standards and cannot be remediated within the stage timeline, the solution must not advance. The stage must be extended or the scope reduced.
  • Critical security vulnerability discovered in a dependency. If a security audit reveals a critical vulnerability in a third-party component that cannot be patched or replaced within the stage timeline, the affected solution must be quarantined. Deployment to Evaluate is prohibited until the vulnerability is resolved.

Stage 5: Evaluate

Entry Criteria

  1. Gate P: Build Complete passed. All solutions must have received formal Gate P approval.
  2. Validation environment prepared. A production-representative validation environment must be provisioned, with realistic data volumes, user loads, and integration points. The validation environment must not share resources with production systems.
  3. Evaluation criteria and acceptance thresholds defined. Quantitative acceptance thresholds for performance, fairness, reliability, and governance compliance must be defined before evaluation begins. These thresholds must be derived from the risk assessments completed in the Model stage and approved by the governance committee.

Exit Criteria

  1. Performance validation completed against all acceptance thresholds. Every solution must be tested against its defined acceptance thresholds, with results documented and deviations explained. Solutions that meet all thresholds are approved for production. Solutions that fail any threshold must have documented remediation plans or formal risk acceptances.
  2. Bias and fairness testing completed. Comprehensive bias testing must be conducted across all protected characteristics relevant to each solution's decision domain. Results must be documented and reviewed by the ethical review board. Any identified bias must be mitigated or formally accepted with documented justification (see M1.2-Art12: Bias Testing and Fairness Validation Protocols).
  3. User acceptance testing completed. End users and affected stakeholders must have tested each solution in the validation environment and provided formal feedback. Critical usability issues must be resolved before advancement.
  4. Gate E: Validated and Approved. The formal quality gate for the Evaluate stage. The governance committee, augmented by the ethical review board and user representatives, must formally approve each solution for production deployment. Gate E requires documented evidence that all acceptance thresholds are met (or deviations formally accepted), bias testing is complete, and user acceptance is confirmed. Gate E is the third named quality gate and represents the organization's assertion that the solutions are safe, fair, and effective for production use (see M1.2-Art07).

Failure Conditions

  • Solution fails critical acceptance thresholds with no remediation path. If a solution fails a critical acceptance threshold — particularly those related to safety, fairness, or regulatory compliance — and remediation is not feasible within the cycle timeline, the solution must not be deployed. The failure must be documented and the use case returned to the backlog for future cycles.
  • Bias testing reveals systematic discrimination. If bias testing reveals systematic discrimination against a protected group that cannot be mitigated through model adjustment, post-processing, or human oversight, the solution must be withdrawn. This is a non-negotiable failure condition; no risk acceptance is permitted for systematic discrimination.
  • Stakeholder objections unresolved after formal mediation. If affected stakeholders raise material objections to a solution's behavior, impact, or governance controls, and these objections cannot be resolved through the stakeholder engagement process, the solution must not advance until the objections are addressed or the governance committee issues a formal override with documented justification.

Stage 6: Learn

Entry Criteria

  1. Gate E: Validated and Approved passed. All solutions intended for production must have received formal Gate E approval.
  2. Production deployment plan approved. A detailed deployment plan — including rollout schedule, rollback procedures, monitoring configuration, and incident response protocols — must be approved by both the governance committee and the technology operations team.
  3. Monitoring and feedback infrastructure operational. Production monitoring dashboards, alerting systems, feedback collection mechanisms, and governance reporting pipelines must be operational before deployment begins.

Exit Criteria

  1. Solutions deployed to production with monitoring active. All approved solutions must be deployed to production according to the approved deployment plan. Monitoring systems must be confirmed operational and generating data.
  2. Post-deployment validation completed. A minimum monitoring period (typically two to four weeks) must elapse during which solution performance, fairness metrics, and governance controls are validated in the production environment. Any anomalies must be investigated and resolved.
  3. Lessons learned documented and institutionalized. A comprehensive lessons learned review must be conducted covering all six stages of the cycle. Findings must be documented and translated into actionable improvements for the governance operating model, policy framework, and lifecycle procedures. Knowledge must be disseminated to all relevant stakeholders.
  4. Gate L: Production Ready. The formal quality gate for the Learn stage and the final gate of the COMPEL cycle. The governance committee must formally confirm that all deployed solutions are operating within acceptable parameters, governance controls are functioning as designed, and the organization has captured and institutionalized the knowledge gained during the cycle. Gate L approval closes the current cycle and authorizes the initiation of the next Calibrate stage. Gate L is the fourth and final named quality gate, representing the organization's assertion that its AI systems are production-ready and its governance capabilities have matured (see M1.2-Art07).

Failure Conditions

  • Production deployment causes service degradation or safety incidents. If a deployed solution causes measurable harm — service outages, incorrect decisions affecting individuals, data breaches, or safety incidents — the solution must be immediately rolled back. The incident must be investigated under the organization's incident response procedures, and the solution may not be redeployed until root cause analysis is complete and remediation is verified.
  • Post-deployment monitoring reveals performance drift beyond acceptable bounds. If solution performance degrades beyond the acceptance thresholds validated in the Evaluate stage, and automated or manual interventions cannot restore performance within the defined response window, the solution must be taken offline for investigation. This may trigger a return to the Evaluate or even Model stage depending on the root cause.
  • Organizational resistance prevents knowledge institutionalization. If the lessons learned process is blocked by organizational resistance — teams refusing to participate in retrospectives, leadership declining to act on findings, or governance improvements being deprioritized — the Learn stage must escalate to the executive sponsor. A cycle that does not learn is a cycle that did not complete.

The Four Named Quality Gates

The COMPEL lifecycle defines four named quality gates that punctuate the transition between stages. These gates are not informal checkpoints; they are formal governance events with defined participants, evidence requirements, and decision protocols.

| Gate | Name | Location | Purpose |
| --- | --- | --- | --- |
| Gate M | Design Approved | Model exit | Confirms that solution designs are feasible, ethical, risk-assessed, and policy-aligned |
| Gate P | Build Complete | Produce exit | Confirms that solutions are built to specification with governance controls operational |
| Gate E | Validated and Approved | Evaluate exit | Confirms that solutions meet acceptance thresholds for performance, fairness, and usability |
| Gate L | Production Ready | Learn exit | Confirms production stability, governance control efficacy, and knowledge capture |

Note that not every stage transition has a named quality gate. The transitions from Calibrate to Organize and from Organize to Model are governed by the standard entry/exit criteria mechanism but do not carry named gates. This is deliberate: the first two stages are preparatory, establishing the organizational and governance foundations. The named gates begin at Model, where the organization first commits to building specific solutions, and continue through to Learn, where the organization confirms production readiness.

The absence of named gates at the earlier transitions does not reduce their rigor. The exit criteria for Calibrate and Organize are fully enforced. The distinction is that named gates carry additional procedural requirements — formal committee convocations, quorum rules, and recorded decisions — that reflect the higher stakes of the later transitions. See M1.2-Art07 for the complete gate review protocol.


Failure Conditions and Stage Rollback Procedures

Failure conditions are the lifecycle's circuit breakers. When a failure condition is triggered, the normal forward progression of the lifecycle halts, and the organization must initiate a defined response. The response depends on the severity and nature of the failure.

Severity Classification

Failure conditions are classified into three severity levels:

Severity 1 — Immediate Halt. The failure poses an immediate risk to safety, legal compliance, or organizational reputation. All stage activities cease immediately. Examples include the discovery of critical unmitigated shadow AI risks during Calibrate, systematic discrimination detected during Evaluate, or production safety incidents during Learn.

Severity 2 — Stage Pause. The failure blocks stage completion but does not pose an immediate risk. Stage activities pause while the failure is investigated and remediated. Examples include executive sponsor absence during Calibrate, irreconcilable stakeholder conflicts during Organize, or build quality falling below thresholds during Produce.

Severity 3 — Scope Adjustment. The failure affects specific deliverables but does not block the stage as a whole. The affected scope is reduced, deferred, or redesigned while the remainder of the stage continues. Examples include individual use cases failing data readiness checks during Model or single solutions failing acceptance thresholds during Evaluate.
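The three-level classification reduces to a simple triage rule: immediate risk dominates, then whole-stage blockage, then scope-level impact. The sketch below illustrates this precedence; the class and field names are hypothetical, not part of the COMPEL specification.

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    IMMEDIATE_HALT = 1    # Severity 1: all stage activities cease
    STAGE_PAUSE = 2       # Severity 2: pause, investigate, remediate
    SCOPE_ADJUSTMENT = 3  # Severity 3: descope; the stage continues


@dataclass
class Failure:
    description: str
    immediate_risk: bool      # threatens safety, legal compliance, or reputation
    blocks_whole_stage: bool  # prevents the stage as a whole from completing


def classify(f: Failure) -> Severity:
    if f.immediate_risk:
        return Severity.IMMEDIATE_HALT
    if f.blocks_whole_stage:
        return Severity.STAGE_PAUSE
    return Severity.SCOPE_ADJUSTMENT
```

The ordering of the checks encodes the precedence: a failure that is both an immediate risk and stage-blocking is Severity 1, never Severity 2.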

Rollback Procedures

When a failure condition cannot be resolved within the current stage, the lifecycle may roll back to a prior stage. Rollback is not failure — it is the governance system working as designed, preventing the organization from advancing on a compromised foundation.

Single-stage rollback is the most common form. A failure in Produce that stems from a design deficiency triggers a return to Model for redesign. A failure in Evaluate that stems from inadequate build quality triggers a return to Produce for remediation. Single-stage rollbacks preserve the work completed in prior stages and focus remediation on the specific gap.

Multi-stage rollback is rare but sometimes necessary. If a failure in Evaluate reveals that the governance policies established in Organize are fundamentally inadequate — for example, if fairness testing reveals that the policy framework failed to account for a critical regulatory requirement — the lifecycle may need to return to Organize to revise the policy framework before the solutions can be redesigned and rebuilt. Multi-stage rollbacks are expensive and should be escalated to the executive sponsor for authorization.

Cycle termination is the most extreme response. If the cumulative effect of failures renders the cycle's objectives unachievable — for example, if the organization's strategic direction has changed so fundamentally that the prioritized use cases are no longer relevant — the cycle may be terminated. Cycle termination requires executive sponsor authorization and a documented decision record. Terminated cycles must still complete their lessons learned activities; the knowledge gained is valuable even when the cycle does not reach production deployment.
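One way to operationalize these rules is to resolve the rollback target from two inputs: the stage where the failure surfaced and the stage where its root cause lies, flagging multi-stage rollbacks for sponsor authorization. A hypothetical sketch (function name and return shape are illustrative):

```python
STAGES = ["Calibrate", "Organize", "Model", "Produce", "Evaluate", "Learn"]


def rollback_plan(failed_stage: str, root_cause_stage: str) -> dict:
    """Resolve a rollback target; multi-stage rollbacks need sponsor sign-off."""
    failed = STAGES.index(failed_stage)
    target = STAGES.index(root_cause_stage)
    if target > failed:
        raise ValueError("root cause cannot lie in a later stage")
    span = failed - target
    return {
        "return_to": root_cause_stage,
        "stages_rolled_back": span,
        "requires_sponsor_authorization": span > 1,  # multi-stage rollback
    }
```

For example, a design-deficiency failure surfacing in Produce resolves to a single-stage return to Model, while a policy-framework failure surfacing in Evaluate resolves to a three-stage return to Organize and is flagged for executive sponsor authorization.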


The Recommended Cycle Duration

The COMPEL lifecycle recommends a 12-week standard cycle duration, with a contextual range of 8 to 16 weeks depending on organizational factors.

The 12-week standard assumes a mid-sized organization with moderate AI maturity, three to five prioritized use cases, and an established governance function. Under these conditions, the stages are typically allocated as follows:

| Stage | Standard Duration | Range |
| --- | --- | --- |
| Calibrate | 2 weeks | 1-3 weeks |
| Organize | 2 weeks | 1-3 weeks |
| Model | 3 weeks | 2-4 weeks |
| Produce | 3 weeks | 2-4 weeks |
| Evaluate | 1.5 weeks | 1-2 weeks |
| Learn | 0.5 weeks | 0.5-2 weeks |

The 8-week minimum applies to organizations with high AI maturity, a single focused use case, and pre-existing governance structures that require only minor adaptation. In these cases, Calibrate and Organize may be compressed to one week each, as much of the preparatory work is already done.

The 16-week maximum applies to organizations with low AI maturity, many prioritized use cases, complex regulatory environments, or significant organizational change management requirements. Cycles exceeding 16 weeks should be reconsidered: they may be attempting too much scope for a single cycle and should be split into multiple sequential cycles with narrower scope.

These durations are guidelines, not mandates. The governance committee should calibrate (in the colloquial sense) the cycle duration to the organization's context during the Organize stage, when the resource allocation plan is developed. The key constraint is that the cycle must maintain momentum: stages that extend significantly beyond their planned duration without triggering a formal failure condition suggest that the entry criteria were not rigorous enough or the scope was not well defined.
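A planning tool could encode the standard allocation and the 8-to-16-week guardrail as a simple validation step. This sketch uses the figures from the table above; the function name and error messages are illustrative, not a COMPEL requirement:

```python
# Standard 12-week allocation (in weeks); per-stage ranges are contextual.
STANDARD = {"Calibrate": 2.0, "Organize": 2.0, "Model": 3.0,
            "Produce": 3.0, "Evaluate": 1.5, "Learn": 0.5}

CYCLE_MIN_WEEKS, CYCLE_MAX_WEEKS = 8, 16


def plan_cycle(durations: dict[str, float]) -> float:
    """Sum per-stage durations and flag cycles outside the 8-16 week range."""
    total = sum(durations.values())
    if total > CYCLE_MAX_WEEKS:
        raise ValueError(f"{total}-week cycle exceeds 16 weeks: "
                         "split the scope into multiple sequential cycles")
    if total < CYCLE_MIN_WEEKS:
        raise ValueError(f"{total}-week cycle is under the 8-week minimum")
    return total
```

The standard allocation sums to exactly 12 weeks; a plan that breaches either bound is rejected so that the governance committee must explicitly rescope rather than silently over- or under-commit.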


The Chain of Continuity in Practice

The principle that each stage's entry criteria derive from the prior stage's exit criteria is best illustrated by tracing a single thread through the full lifecycle.

Consider the use-case backlog. It originates as a Calibrate exit criterion: "Use-case backlog prioritized with value theses." It appears as an Organize entry prerequisite, where it informs the governance structure design and resource allocation. The Organize exit criterion "Resource allocation plan for remaining lifecycle stages" is built on the backlog. The Model entry criterion "Design team assembled and briefed" depends on knowing which use cases will be designed, which depends on the backlog and the resource plan. The Model exit criterion "Solution architectures completed for all prioritized use cases" directly references the backlog. And so on through Produce, Evaluate, and Learn.

If the backlog is deficient — if it was not properly prioritized, if value theses are missing, if the wrong use cases were selected — the deficiency propagates through every subsequent stage. This is why the Calibrate exit criteria are so demanding: they are the foundation on which the entire cycle rests.

The chain of continuity also explains why rollbacks are sometimes necessary. If a deficiency in Calibrate's outputs is not discovered until Evaluate — for example, if a use case was prioritized based on a flawed value thesis that testing now disproves — the remediation may need to reach all the way back to the root of the chain.
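The backlog thread traced above can be made explicit as a dependency chain, so that a deficiency found in Evaluate can be walked back to its root in Calibrate. This is a minimal sketch: the abbreviated criterion labels paraphrase this article, and the graph structure and traversal function are assumptions for illustration, not a prescribed COMPEL artifact.

```python
# Hypothetical dependency chain: each entry criterion points at the prior
# stage's exit criterion it derives from. Labels are abbreviated paraphrases.

DEPENDS_ON = {
    "Organize: governance structure design": "Calibrate: prioritized use-case backlog",
    "Organize: resource allocation plan":    "Organize: governance structure design",
    "Model: design team assembled":          "Organize: resource allocation plan",
    "Model: solution architectures":         "Model: design team assembled",
    "Evaluate: value thesis validated":      "Model: solution architectures",
}

def trace_to_root(criterion: str) -> list:
    """Walk a deficient criterion back through the chain to its earliest dependency."""
    chain = [criterion]
    while chain[-1] in DEPENDS_ON:
        chain.append(DEPENDS_ON[chain[-1]])
    return chain  # last element is the root the remediation must reach
```

Tracing "Evaluate: value thesis validated" ends at the Calibrate backlog, mirroring the rollback scenario in which a flawed value thesis forces remediation all the way back to the root of the chain.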


Cross-References

This article is situated within a broader body of knowledge that provides detailed treatment of specific topics referenced here:

  • M1.2-Art07: Stage-Gate Decision Framework — Gate review procedures, committee composition, quorum rules, and decision recording requirements for all four named gates.
  • M1.2-Art09: Stakeholder Engagement and Communication Planning — Detailed stakeholder mapping methodology and engagement strategies referenced in the Organize stage exit criteria.
  • M1.2-Art12: Bias Testing and Fairness Validation Protocols — The bias testing and fairness validation methods referenced in the Evaluate stage exit criteria and failure conditions.
  • M1.3-Art02: COMPEL Maturity Model Domains — The 18 governance domains referenced in the Calibrate stage exit criteria for maturity baseline assessment.

Conclusion

The entry criteria, exit criteria, and failure conditions defined in this article are not bureaucratic overhead. They are the mechanism by which the COMPEL lifecycle ensures that AI governance is rigorous, traceable, and accountable. Every criterion exists because its absence has, in practice, led to governance failures: premature deployments, unmitigated risks, organizational misalignment, or solutions that do not serve their intended purpose.

Practitioners implementing the COMPEL lifecycle should treat these criteria as the minimum standard. Organizations with higher risk profiles, more complex regulatory environments, or more ambitious AI programs should augment them with additional criteria specific to their context. What must not be compromised is the principle that every stage transition is a deliberate, evidence-based decision — never an assumption, never a default, and never a matter of elapsed time alone.

The four named quality gates — Design Approved, Build Complete, Validated and Approved, and Production Ready — mark the moments where the organization makes its most consequential commitments. The failure conditions and rollback procedures ensure that when things go wrong, the response is structured and proportionate. And the 12-week cycle duration, with its 8-to-16-week contextual range, provides a cadence that balances thoroughness with momentum.

Governance that cannot be measured cannot be improved. The criteria in this article make governance measurable. The gates make it decidable. And the failure conditions make it honest.