Model: Designing the Target State

Level 1: AI Transformation Foundations · Module M1.2: The COMPEL Six-Stage Lifecycle · Article 3 of 10 · 13 min read · Version 1.0 · Last reviewed: 2025-01-15 · Open Access



Strategy without evidence is speculation. In Artificial Intelligence (AI) transformation, speculation is not merely unproductive — it is expensive, demoralizing, and often fatal to executive confidence. The Model stage of the COMPEL methodology exists to prevent precisely this outcome. It is the bridge between knowing where you are and deciding, with rigor and precision, where you intend to go. Where Article 1: Calibrate — Establishing the Baseline produced an honest assessment of current state and Article 2: Organize — Building the Transformation Engine erected the organizational infrastructure to act, Model is where that assessment data and organizational readiness converge into a concrete, evidence-based transformation plan. This is the stage where ambition meets arithmetic — where aspirational goals are tested against organizational reality and shaped into a target state that is both meaningful and achievable within a single 12-week COMPEL cycle.

The distinction is critical. Model is not about writing a five-year AI strategy deck that will gather dust in a shared drive. It is about designing a precise, measurable target state that can be reached in the next twelve weeks, validated against the data collected during Calibrate, and executed through the structures established during Organize. As explored in Article 8: The COMPEL Cycle — Iteration and Continuous Improvement, targets are set per cycle, not as one-time aspirations. Each cycle's Model stage builds on the last, creating a compounding trajectory of capability growth that no single-phase strategic plan can replicate.

From Assessment to Architecture: The Logic of Evidence-Based Strategy

The Model stage begins where Calibrate ends — with data. The maturity assessment scores, gap analyses, stakeholder alignment maps, and capability inventories produced during calibration are not background reading for strategy sessions. They are the primary inputs. Every strategic decision made during Model must trace its lineage back to specific assessment evidence.

This evidence-based discipline serves three purposes. First, it prevents the organizational tendency to pursue whatever AI initiative generated the most excitement at the last conference or board meeting. Second, it ensures that transformation resources are directed at the gaps that matter most, not the gaps that are easiest to address. Third, it creates a defensible rationale for every investment, timeline, and priority — essential for maintaining executive sponsorship and organizational buy-in across multiple cycles.

Consider a financial services organization that completes its Calibrate assessment and discovers a revealing pattern: its Technology pillar scores at Level 3 (Operational) with solid cloud infrastructure and Machine Learning (ML) pipeline tooling, but its Governance pillar languishes at Level 1 (Foundational) with no model risk management framework, no algorithmic bias testing protocols, and no regulatory compliance documentation for AI systems. An excitement-driven strategy might prioritize deploying more models to exploit the technology advantage. An evidence-based strategy recognizes that deploying more models without governance creates compounding regulatory and reputational risk. The Model stage forces this recognition by requiring every strategic choice to reference specific assessment data.

Selecting the Target Maturity Level

The Enterprise AI Maturity Spectrum, introduced in Module 1.1, Article 3: The Enterprise AI Maturity Spectrum, defines five levels of organizational AI capability: Foundational, Developing, Operational, Advanced, and Transformational. During Model, the transformation team selects a target maturity level for each of the four pillars — People, Process, Technology, and Governance — for the upcoming cycle.

The operative word is "selects," not "aspires to." Target maturity selection is a disciplined exercise governed by three constraints.

The One-Level Rule

In a single 12-week cycle, moving an entire pillar more than one maturity level is unrealistic for most organizations. The organizational change required — new processes, trained personnel, deployed technology, established governance — simply cannot be absorbed faster without creating fragile, superficial capability that will collapse under operational pressure. The Model stage enforces this constraint explicitly: if your Governance pillar assessed at Level 1 during Calibrate, your target for this cycle is Level 2, not Level 4. Ambitious organizations sometimes resist this discipline. Experience consistently validates it. Durable capability is built incrementally, not aspirationally.

The Balance Imperative

The four pillars must advance in reasonable alignment. A Technology score at Level 4 paired with a Governance score at Level 1 is not a sign of technology leadership — it is a structural risk. During Model, the transformation team evaluates the gap between pillar scores and prioritizes advancement in the pillar that is most critically lagging. This does not mean all pillars must advance equally in every cycle. It means that no pillar should be left behind by more than one level, and that the strategic rationale for any imbalanced advancement is explicitly documented and risk-assessed.

The Resource Reality Check

Target maturity selection must account for actual resource availability. Budget, personnel, executive attention, technology procurement timelines, and organizational change capacity are finite. The Model stage requires the transformation team to validate each target against a resource feasibility assessment. A target that consumes 100% of available resources with zero margin for the unexpected is not ambitious — it is reckless. Effective Model outputs include explicit resource buffers, typically 15-20% of total cycle capacity reserved for contingencies and emerging opportunities.
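Taken together, the three constraints above amount to a validation pass over proposed targets. The following is a minimal sketch, assuming maturity is encoded 1-5 per pillar; the function name, the two-thirds of anything here that is not in the text (the `validate_targets` name, the load representation, the exact buffer default) is an illustrative assumption, not part of the COMPEL specification:

```python
# Illustrative check of the three Model-stage constraints: the one-level
# rule, the balance imperative, and the resource reality check.
# All names and thresholds here are hypothetical.

PILLARS = ("People", "Process", "Technology", "Governance")

def validate_targets(current, target, planned_load, buffer=0.15):
    """Return a list of constraint violations; empty means the targets pass."""
    issues = []
    for pillar in PILLARS:
        jump = target[pillar] - current[pillar]
        # One-level rule: advance each pillar at most one level per cycle.
        if jump > 1:
            issues.append(f"{pillar}: +{jump} levels violates the one-level rule")
    # Balance imperative: no pillar more than one level behind the rest.
    if max(target.values()) - min(target.values()) > 1:
        issues.append("pillar spread exceeds one level; document rationale and risk-assess")
    # Resource reality check: reserve a 15-20% contingency buffer.
    if planned_load > 1.0 - buffer:
        issues.append(f"planned load {planned_load:.0%} leaves under {buffer:.0%} in reserve")
    return issues

# The financial-services example from earlier: Technology at Level 3,
# Governance at Level 1, targeting Governance Level 2 this cycle.
current = {"People": 2, "Process": 2, "Technology": 3, "Governance": 1}
target  = {"People": 2, "Process": 2, "Technology": 3, "Governance": 2}
print(validate_targets(current, target, planned_load=0.80))  # -> []
```

A target of Governance Level 4 from the same baseline would trip both the one-level rule and the balance check, which is exactly the recognition the Model stage is designed to force.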

Use Case Portfolio Design

With target maturity levels established across all four pillars, the next strategic task is designing the use case portfolio — the specific AI initiatives that will be pursued during the cycle. Use case portfolio design is perhaps the most consequential decision in the Model stage, and the area where organizations most frequently err.

The Portfolio Mindset

The COMPEL methodology treats AI use cases as a portfolio, not a project list. This distinction matters. A project list is a collection of independent initiatives. A portfolio is a deliberately balanced collection of investments designed to achieve specific strategic outcomes while managing risk. The portfolio mindset demands that use cases be evaluated not only on their individual merit but on their collective contribution to the target state.

An effective cycle portfolio typically includes three categories of use cases:

Foundation builders are use cases that advance organizational capability even if their direct Return on Investment (ROI) is modest. Standing up a centralized feature store, implementing a model monitoring framework, or deploying a governance workflow for AI approvals falls into this category. These use cases build the infrastructure that makes future high-value deployments possible.

Value demonstrators are use cases selected specifically for their ability to deliver measurable, visible business value within the 12-week cycle. These initiatives maintain executive confidence and organizational momentum. Effective value demonstrators have clear baselines, quantifiable success metrics, and a defined business stakeholder who will champion the results.

Capability stretchers are use cases that push the organization slightly beyond its current comfort zone — not recklessly, but deliberately. These might involve a new technology (such as a first deployment of a Large Language Model, or LLM, in a customer-facing application), a new partnership model (such as a first co-development initiative with an external AI vendor), or a new governance challenge (such as a first algorithmic impact assessment). Capability stretchers ensure that the organization is learning and expanding its boundaries, not merely repeating what it already knows how to do.

A well-designed portfolio balances these three categories. A portfolio composed entirely of foundation builders will lose executive support. A portfolio of only value demonstrators will plateau as the organization runs out of easy wins. A portfolio dominated by capability stretchers introduces excessive risk. The Model stage requires explicit categorization and balance assessment for every proposed use case.
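One way to make this balance assessment mechanical is to tag each proposed use case with its category and flag portfolios that are missing a category or dominated by one. A hypothetical sketch follows; the category names come from the text, but the two-thirds dominance threshold and all identifiers are assumptions:

```python
from collections import Counter

# Portfolio categories as named in the text.
CATEGORIES = ("foundation_builder", "value_demonstrator", "capability_stretcher")

def assess_balance(portfolio):
    """portfolio: list of (use_case_name, category) pairs.
    Returns warnings for missing or dominant categories."""
    counts = Counter(category for _, category in portfolio)
    warnings = []
    for category in CATEGORIES:
        if counts[category] == 0:
            warnings.append(f"portfolio has no {category} use cases")
    for category, n in counts.items():
        # Illustrative dominance threshold: no category over two-thirds.
        if n / len(portfolio) > 2 / 3:
            warnings.append(f"{category} dominates ({n} of {len(portfolio)})")
    return warnings

portfolio = [
    ("feature store rollout", "foundation_builder"),
    ("model monitoring framework", "foundation_builder"),
    ("claims triage assistant", "value_demonstrator"),
    ("churn prediction refresh", "value_demonstrator"),
    ("first LLM support pilot", "capability_stretcher"),
]
print(assess_balance(portfolio))  # -> []
```

A portfolio of nothing but value demonstrators would produce two missing-category warnings plus a dominance warning, surfacing the plateau risk described above before the cycle begins.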

Prioritization Criteria

Not every promising use case belongs in the current cycle. The Model stage applies structured prioritization criteria to filter and rank candidates:

  • Strategic alignment: Does this use case directly advance the target maturity state for at least one pillar?
  • Feasibility within cycle: Can this use case be delivered — not merely started, but delivered to measurable outcomes — within 12 weeks?
  • Data readiness: Does the required data exist, at sufficient quality, with appropriate access permissions?
  • Organizational readiness: Are the necessary skills, processes, and governance structures in place (or being built in this cycle) to support deployment?
  • Value measurability: Can the business impact be quantified against a defined baseline?
  • Risk proportionality: Is the risk profile appropriate given the organization's current maturity level and risk appetite?

Use cases that score poorly on feasibility or data readiness are not rejected — they are deferred to future cycles where prerequisites will have been addressed. This deferral discipline is one of the most valuable functions of the Model stage. It prevents organizations from launching initiatives they are not yet equipped to succeed at, while ensuring those initiatives remain visible on the strategic horizon.
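The scoring-and-deferral logic can be sketched as a triage pass: rank candidates on the six criteria, but gate on feasibility and data readiness so weak candidates are deferred rather than rejected. The criterion names below come from the list above; the 1-5 scale, the gate threshold, and all identifiers are illustrative assumptions:

```python
# Gate criteria: low scores here defer the use case, never reject it.
GATED = ("feasibility", "data_readiness")

def triage_use_cases(candidates, gate=3):
    """candidates: {name: {criterion: score on a 1-5 scale}}.
    Returns (ranked_for_this_cycle, deferred_to_future_cycles)."""
    ranked, deferred = [], []
    for name, scores in candidates.items():
        if any(scores[c] < gate for c in GATED):
            deferred.append(name)  # prerequisites first; stays on the horizon
        else:
            mean = sum(scores.values()) / len(scores)
            ranked.append((mean, name))
    ranked.sort(reverse=True)  # highest mean score first
    return [name for _, name in ranked], deferred

candidates = {
    "invoice fraud detection": {
        "strategic_alignment": 5, "feasibility": 4, "data_readiness": 4,
        "organizational_readiness": 3, "value_measurability": 5,
        "risk_proportionality": 4},
    "autonomous underwriting": {
        "strategic_alignment": 5, "feasibility": 2, "data_readiness": 3,
        "organizational_readiness": 2, "value_measurability": 4,
        "risk_proportionality": 2},
}
selected, deferred = triage_use_cases(candidates)
print(selected, deferred)
# -> ['invoice fraud detection'] ['autonomous underwriting']
```

Keeping the deferred list as a first-class output, rather than silently dropping weak candidates, is what keeps those initiatives visible for future cycles.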

Technology Architecture Decisions

The Model stage is where high-level technology architecture decisions are made for the cycle. These decisions are not detailed engineering specifications — those emerge during the Produce stage, described in Article 4: Produce — Executing the Transformation. Rather, they are the strategic technology choices that shape what is possible and what is not.

Key architecture decisions during Model include:

Build versus buy versus partner: For each use case in the portfolio, the transformation team determines whether to build custom solutions, procure commercial products, or engage partners. This decision is driven by the use case's strategic importance, the organization's internal capability, time constraints, and long-term ownership considerations. Organizations at lower maturity levels typically lean toward buy and partner strategies that accelerate time-to-value while building internal understanding.

Platform consolidation versus best-of-breed: Organizations accumulate AI tools quickly, often with different teams adopting different platforms for similar functions. The Model stage evaluates platform sprawl against the Technology pillar's target maturity and defines a consolidation trajectory. Full consolidation in a single cycle is rarely feasible, but establishing a target architecture and beginning migration is a common and valuable Model output.

Infrastructure scaling decisions: If the use case portfolio requires compute, storage, or network capacity beyond current availability, the Model stage identifies these requirements and triggers procurement or provisioning activities early enough to avoid execution delays during Produce. Organizations that defer infrastructure planning to the execution phase consistently experience timeline slippage.

Integration architecture: AI solutions that operate in isolation deliver a fraction of their potential value. The Model stage defines how new AI capabilities will integrate with existing enterprise systems — Customer Relationship Management (CRM), Enterprise Resource Planning (ERP), data warehouses, operational workflows — and identifies integration dependencies that must be resolved during the cycle.

Workforce Capability Planning

Technology without capable people is expensive shelf-ware. The Model stage includes explicit workforce capability planning aligned to the People pillar's target maturity level and the demands of the use case portfolio.

Workforce planning during Model operates at three levels:

Immediate cycle needs: What specific skills are required to execute the current cycle's use case portfolio? Where do gaps exist between required skills and available talent? How will those gaps be closed — through training, hiring, contracting, or partner augmentation? These questions must be answered with enough specificity to prevent execution bottlenecks during Produce.

Capability building trajectory: Beyond the immediate cycle, what skills and competencies is the organization deliberately developing? The Model stage defines learning pathways for key roles — data engineers, Machine Learning Operations (MLOps) engineers, AI product managers, governance specialists, business translators — with milestones mapped to current and future cycles. This trajectory should align with the maturity progression defined in the Enterprise AI Maturity Spectrum.

Cultural readiness: Skills are necessary but insufficient. The Model stage also assesses and plans for cultural readiness — the willingness of business units to adopt AI-informed processes, the comfort of managers with algorithmic decision support, and the trust of frontline workers in AI-augmented workflows. Cultural readiness gaps that are ignored during Model become execution barriers during Produce.

Building the Transformation Roadmap

The culminating output of the Model stage is the transformation roadmap — a structured document that synthesizes all prior analysis into an actionable plan for the upcoming cycle. The roadmap is not a Gantt chart. It is a strategic narrative backed by evidence, constrained by reality, and designed for execution.

An effective COMPEL transformation roadmap includes the following components:

Target state summary: The specific maturity level targets for each pillar, with explicit reference to the Calibrate assessment data that justifies each target.

Use case portfolio: The categorized and prioritized list of AI initiatives for the cycle, with assigned owners, defined success metrics, resource allocations, and dependency maps.

Technology decisions: The architecture choices that enable the portfolio, including procurement actions, platform decisions, and integration requirements.

Workforce plan: The talent acquisition, training, and cultural readiness activities for the cycle, with milestones and accountability.

Governance enhancements: The specific governance frameworks, policies, or processes that will be established or improved during the cycle to support the target maturity level.

Risk register: The identified risks to cycle success, with mitigation strategies and escalation thresholds.

Success criteria: The quantitative and qualitative measures that will determine whether the cycle achieved its target state, directly feeding the Evaluate stage (covered in Article 6).

The roadmap undergoes review and approval by the AI Steering Committee and relevant executive sponsors before the transition to Produce. This approval gate is not ceremonial. It is the point where the organization formally commits resources and attention to the defined target state, and where any misalignment between strategy and organizational willingness is surfaced and resolved.

Common Pitfalls in the Model Stage

Even with a structured methodology, several recurring mistakes can undermine the Model stage:

Overambition: Setting targets that require flawless execution with zero contingency. The antidote is the one-level rule and the resource reality check described above.

Pet project capture: Allowing politically powerful stakeholders to insert use cases that do not meet prioritization criteria. The antidote is structured prioritization with explicit scoring and documented rationale for every portfolio inclusion.

Technology infatuation: Selecting technology before defining the problem it solves. The use case portfolio must drive technology decisions, not the reverse. Organizations that acquire AI platforms and then search for use cases to justify the purchase consistently underperform.

Governance deferral: Treating governance enhancements as optional or deferrable when more exciting technology deployments are available. The balance imperative prevents this, but only if enforced with discipline by the Center of Excellence (CoE) established during Organize.

Isolation from Calibrate data: Designing strategy based on assumptions about organizational capability rather than measured assessment data. Every strategic choice in Model should be traceable to specific Calibrate outputs. If it is not, it is speculation, not strategy.

Looking Ahead

The Model stage transforms assessment data and organizational readiness into a precise, evidence-based plan. But a plan, however well-designed, delivers no value until it is executed. Article 4: Produce — Executing the Transformation examines the critical transition from strategy to action — where use cases move from portfolio documents into development pipelines, where governance frameworks move from policy drafts into operational practice, and where the organization's transformation intent is tested against the unforgiving reality of implementation. The discipline invested in Model pays its dividends during Produce, where evidence-based targets and realistic roadmaps separate organizations that deliver from those that merely plan.


© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.