COMPEL Certification Body of Knowledge — Module 1.3: The 18-Domain Maturity Model
Article 4 of 10
The difference between an organization that experiments with Artificial Intelligence (AI) and one that delivers enterprise value from AI is process. Not talent — talented teams fail constantly when they lack process discipline. Not technology — sophisticated platforms sit idle when there is no structured method for turning business problems into deployed solutions. And not leadership — even the most committed executives cannot will transformation into existence without the operational machinery to make it happen. Process is the connective tissue that transforms individual capability into organizational capability, and its absence is the single most underdiagnosed cause of AI transformation stagnation.
The Process pillar contains five domains, spanning from strategic opportunity identification through operational delivery and continuous improvement. This article examines the first two: Domain 5, AI Use Case Management, and Domain 6, Data Management and Quality. These domains represent, respectively, what the organization chooses to build and the raw material from which it builds. Together they form the strategic and material foundation of AI delivery — the starting point of every AI initiative, and the point at which most failed initiatives went wrong.
Domain 5: AI Use Case Management
What This Domain Measures
AI Use Case Management assesses the maturity of the processes by which an organization identifies, evaluates, prioritizes, tracks, and retires AI use cases. A "use case" in this context is a specific application of AI to a defined business problem, with measurable outcomes, identifiable stakeholders, and quantifiable resource requirements.
This domain evaluates the entire use case lifecycle: from opportunity identification through feasibility assessment, business case development, prioritization against competing opportunities, portfolio tracking, value realization measurement, and eventual retirement or evolution. It also assesses the governance structures that ensure use case decisions are made transparently, based on evidence, and aligned with strategic priorities.
Why This Domain Matters
Every enterprise generates more potential AI use cases than it can pursue. Without disciplined use case management, organizations default to one of two failure modes identified in Module 1.1, Article 6: AI Transformation Anti-Patterns. Either they spread resources across too many initiatives, delivering none to production quality, or they concentrate resources on use cases selected by organizational politics rather than business value.
McKinsey's research on AI scaling consistently identifies use case prioritization as a critical differentiator between organizations that capture significant value from AI and those that do not. High-performing organizations typically pursue three to five use cases at a time, selected through rigorous evaluation of business impact, technical feasibility, data readiness, and organizational capacity. Low-performing organizations pursue ten to twenty, selected through executive preference, departmental lobbying, or vendor suggestion.
The consequences of poor use case management extend beyond resource waste. Each failed or abandoned AI initiative erodes organizational confidence in AI transformation. Business units that proposed use cases that were never funded lose faith in the process. Teams that built models for use cases that were never adopted lose motivation. As described in Module 1.1, Article 7: The Business Value Chain of AI Transformation, the path from AI investment to business value runs through use case selection — and every misstep on that path delays value realization.
Level-by-Level Maturity Criteria
Level 1 — Foundational. AI use cases emerge informally — from vendor demonstrations, conference presentations, competitor announcements, or individual enthusiasm. There is no structured process for evaluating whether a proposed use case is viable, valuable, or aligned with strategy. Use cases are approved based on executive interest or team availability rather than systematic assessment. No portfolio view exists. The organization cannot answer basic questions: How many AI initiatives are underway? What is their collective expected value? How do they connect to strategic objectives?
Level 1.5. Someone — typically within the AI team or a strategy function — has begun cataloging AI use case ideas, but the catalog is informal and not connected to a decision-making process. Ad hoc feasibility discussions occur, but they do not follow a consistent framework or produce standardized outputs.
Level 2 — Developing. A basic use case intake process exists. Business units can submit AI use case proposals through a defined channel. Each proposal receives at least informal evaluation for business value and technical feasibility. A use case backlog is maintained, though prioritization criteria are inconsistent and not well documented. Some use cases have business cases with estimated Return on Investment (ROI), but the methodology for estimating ROI varies. There is a periodic (quarterly or semi-annual) review of the use case pipeline, though decisions are still heavily influenced by organizational politics.
Level 2.5. Standardized templates exist for use case proposals and business case development. Feasibility assessments include data readiness as a formal criterion alongside business value and technical complexity. The use case backlog is visible to stakeholders beyond the AI team. Initial attempts at portfolio-level tracking show aggregate investment and expected value, though accuracy is limited.
Level 3 — Defined. A formal use case management process governs the full lifecycle from proposal through retirement. Standardized evaluation criteria assess each use case across multiple dimensions: strategic alignment, business impact, technical feasibility, data readiness, organizational readiness, risk profile, and resource requirements. A scoring framework enables transparent prioritization. A governance body — the AI steering committee or equivalent — reviews and approves use case priorities on a regular cadence. Portfolio-level tracking shows the status, investment, and expected value of all active use cases. Value realization is tracked post-deployment, comparing actual outcomes to business case projections.
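To make the Level 3 scoring framework concrete, the sketch below shows one way a multi-dimensional evaluation might be implemented. The dimensions come from the criteria above; the weights, the 1-5 rating scale, the function names, and the example use cases are illustrative assumptions rather than part of the COMPEL specification.

```python
# Minimal sketch of a weighted use case scoring framework.
# Weights and the 1-5 rating scale are assumptions, not prescribed by COMPEL.
DIMENSIONS = {
    "strategic_alignment": 0.20,
    "business_impact": 0.25,
    "technical_feasibility": 0.15,
    "data_readiness": 0.15,
    "organizational_readiness": 0.10,
    "risk_profile": 0.10,           # higher score = lower risk
    "resource_requirements": 0.05,  # higher score = lighter resource footprint
}

def score_use_case(ratings: dict[str, float]) -> float:
    """Weighted score on a 1-5 scale; ratings are assessor judgments per dimension."""
    missing = set(DIMENSIONS) - set(ratings)
    if missing:
        raise ValueError(f"unrated dimensions: {sorted(missing)}")
    return sum(DIMENSIONS[d] * ratings[d] for d in DIMENSIONS)

# Example: rank a small backlog transparently rather than by executive preference.
backlog = {
    "invoice_matching": {"strategic_alignment": 4, "business_impact": 4,
                         "technical_feasibility": 5, "data_readiness": 4,
                         "organizational_readiness": 3, "risk_profile": 4,
                         "resource_requirements": 4},
    "churn_prediction": {"strategic_alignment": 5, "business_impact": 5,
                         "technical_feasibility": 3, "data_readiness": 2,
                         "organizational_readiness": 3, "risk_profile": 3,
                         "resource_requirements": 2},
}
for name, ratings in sorted(backlog.items(), key=lambda kv: -score_use_case(kv[1])):
    print(f"{name}: {score_use_case(ratings):.2f}")
```

The value of such a framework lies less in the specific weights than in making the trade-offs explicit and auditable by the governance body.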
Level 3.5. Use case management is connected to enterprise strategy processes. AI opportunities are systematically identified during annual and quarterly business planning, not only through ad hoc proposals. Cross-functional use case identification workshops bring together business domain experts, data scientists, and governance representatives. The organization maintains a structured taxonomy of AI use case types, enabling pattern recognition and knowledge reuse across domains.
Level 4 — Advanced. Use case management operates as a strategic capability that drives AI investment allocation. The portfolio is actively managed — not just tracked — with underperforming use cases deprioritized or retired and emerging opportunities fast-tracked. Sophisticated business case methodologies account for direct value, indirect value, option value, and risk-adjusted returns. The organization has developed proprietary benchmarks for use case evaluation based on historical delivery data. Use case management is integrated with financial planning, with AI investment allocations driven by portfolio analysis rather than departmental negotiation. As described in Module 1.2, Article 2: Organize — Building the Transformation Engine, the Center of Excellence (CoE) plays a central role in this process.
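As an illustration of the value components named in the Level 4 criterion, the arithmetic sketch below rolls a single use case's business case into a risk-adjusted annual return. All figures, and the simple probability-weighting approach, are illustrative assumptions; Level 4 organizations typically apply richer methodologies than this.

```python
# Minimal sketch: rolling the value components above into a risk-adjusted annual
# return for one use case. All figures and the weighting approach are illustrative.
direct_value = 1_200_000   # e.g., annual cost savings from automation
indirect_value = 300_000   # e.g., monetized quality or cycle-time improvements
option_value = 150_000     # estimated value of follow-on opportunities enabled
annual_cost = 500_000      # delivery plus run cost
p_success = 0.7            # probability of reaching production at target performance

risk_adjusted_return = p_success * (direct_value + indirect_value + option_value) - annual_cost
print(f"Risk-adjusted annual return: ${risk_adjusted_return:,.0f}")  # -> $655,000
```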
Level 4.5. The organization proactively identifies use case opportunities through systematic analysis of operational data, process mining, and competitive intelligence rather than waiting for proposals to emerge. AI opportunity identification is embedded in business process improvement and product development cycles. Cross-industry use case benchmarking informs the portfolio strategy.
Level 5 — Transformational. Use case management is fully integrated into enterprise strategy and operations. Every major business decision considers AI as a potential value lever. The use case pipeline is continuously refreshed based on technological advances, competitive dynamics, and operational insights. The organization's use case management capability is recognized as a competitive advantage — it consistently identifies and captures AI value faster than competitors. Use case management extends beyond internal operations to customer-facing innovation, partner ecosystem development, and new business model creation. The organization contributes to industry-level knowledge about AI use case identification and prioritization.
Domain 6: Data Management and Quality
What This Domain Measures
Data Management and Quality assesses the maturity of the organization's data governance, data quality assurance, data cataloging, metadata management, data lineage tracking, and data accessibility practices. This domain focuses on the organizational and process dimensions of data management — how data is governed, measured, documented, and made available — rather than the technology infrastructure that stores and moves data, which is assessed separately in Domain 10 (Data Infrastructure).
The distinction between Domain 6 and Domain 10 is deliberate and important. An organization can have world-class data infrastructure — modern data lakes, real-time streaming platforms, sophisticated Extract-Transform-Load (ETL) pipelines — while simultaneously suffering from poor data governance, inconsistent quality standards, and undocumented data assets. The technology to store and move data is a solved problem for most enterprises. The processes to ensure that data is accurate, complete, timely, documented, and trustworthy remain a persistent challenge.
Why This Domain Matters
Data is the raw material of AI. Every Machine Learning (ML) model is, at its mathematical core, a compressed representation of the patterns found in its training data. If that data is inaccurate, incomplete, biased, poorly documented, or inaccessible, the resulting model inherits those deficiencies — and amplifies them. The phrase "garbage in, garbage out" has been a cliché in computing for decades, but in AI it carries particular force because the "garbage out" takes the form of automated decisions affecting customers, operations, and strategy.
Industry research has consistently identified poor data quality as a significant cost driver for organizations, with estimates suggesting millions of dollars in annual impact for large enterprises — and that figure does not include the opportunity cost of AI initiatives that fail or underperform due to data issues. In industry surveys, Chief Data Officers (CDOs) consistently cite data quality problems as the primary reason for AI project failure, ahead of talent shortages, technology limitations, and organizational resistance.
The relationship between data quality and AI outcomes is not linear — it is multiplicative. A model trained on data that is 90 percent accurate does not produce predictions that are 90 percent as good as one trained on perfect data. Depending on the problem domain and the nature of the inaccuracies, a 10 percent data quality deficit can produce a 30 to 50 percent degradation in model performance. This multiplicative effect means that organizations cannot treat data quality as a secondary concern to be addressed after models are built. It must be addressed before and during model development, through mature processes that operate continuously.
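One way to make this sensitivity tangible is to measure model performance while deliberately corrupting training data. The sketch below does so for a synthetic classification task using scikit-learn; the task, model, and noise mechanism are illustrative assumptions, and the degradation observed for any real use case depends heavily on the problem domain and the nature of the errors.

```python
# Minimal sketch: measuring how training-label noise degrades model accuracy.
# Assumes scikit-learn; the synthetic task and noise model are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=5000, n_features=20, n_informative=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

for noise_rate in (0.0, 0.05, 0.10, 0.20):
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise_rate   # corrupt a fraction of training labels
    y_noisy[flip] = 1 - y_noisy[flip]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"label noise {noise_rate:.0%}: test accuracy {acc:.3f}")
```

Running a comparable experiment against an organization's own data, rather than a synthetic task, is a practical way to quantify how much a given quality deficit actually costs a given use case.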
Level-by-Level Maturity Criteria
Level 1 — Foundational. Data management is fragmented and informal. No enterprise data governance framework exists. Data quality is not measured systematically. Data definitions vary between departments — the same term (e.g., "customer," "revenue," "active user") may have different meanings in different systems. No data catalog exists. Data access is governed by informal relationships rather than formal policies. AI teams spend 60 to 80 percent of their time on data preparation, cleaning, and reconciliation — a figure consistent with industry surveys but indicative of severe process immaturity.
Level 1.5. Awareness of data quality issues exists at the leadership level, often triggered by a visible failure — a flawed report, a model that produced obviously wrong predictions, or a regulatory inquiry about data handling. Initial discussions about data governance have begun, but no formal program is in place.
Level 2 — Developing. A basic data governance program exists, typically led by a CDO or equivalent role. Data quality rules are defined for critical data elements, though enforcement is inconsistent. A data catalog has been initiated, covering the organization's most important data assets but far from comprehensive. Data stewards have been identified for key domains, though the stewardship role may not be formalized in job descriptions or performance objectives. Data quality is measured for some critical datasets, but measurement is manual and periodic rather than automated and continuous.
Level 2.5. Data quality metrics are reported to leadership on a regular cadence. Defined data quality thresholds exist for key AI use cases, with remediation processes triggered when quality falls below threshold. The data catalog is actively maintained and covers the majority of data assets used by AI teams. Initial data lineage tracking provides basic visibility into data origins and transformations.
Level 3 — Defined. A comprehensive data governance framework is in place, with defined policies, roles, responsibilities, and decision rights. Data quality is measured automatically across all critical data domains using defined quality dimensions: accuracy, completeness, consistency, timeliness, validity, and uniqueness. Data quality Service Level Agreements (SLAs) exist between data producers and AI consumers. A mature data catalog covers all enterprise data assets, with standardized metadata, business glossary entries, and data lineage documentation. Data stewards are formally appointed with defined responsibilities, trained in data governance practices, and accountable for quality within their domains. Data access is governed by formal policies that balance security with accessibility, enabling AI teams to access the data they need without compromising data protection requirements.
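As one illustration of what automated measurement against these dimensions and SLAs might look like, the sketch below profiles a dataset with pandas. The column names, validity rule, freshness window, and SLA thresholds are illustrative assumptions, not prescribed values.

```python
# Minimal sketch: measuring a few of the quality dimensions above (completeness,
# uniqueness, validity, timeliness) and checking them against assumed SLA thresholds.
import pandas as pd

def quality_profile(df: pd.DataFrame) -> dict[str, float]:
    now = pd.Timestamp.now()  # assumes updated_at is a timezone-naive timestamp column
    return {
        "completeness": 1.0 - df["customer_id"].isna().mean(),           # non-null share
        "uniqueness": 1.0 - df["customer_id"].duplicated().mean(),       # non-duplicate share
        "validity": df["email"].str.contains("@", na=False).mean(),      # crude format rule
        "timeliness": (df["updated_at"] > now - pd.Timedelta(days=30)).mean(),  # 30-day freshness
    }

SLA = {"completeness": 0.99, "uniqueness": 0.995, "validity": 0.97, "timeliness": 0.90}

def meets_sla(profile: dict[str, float], sla: dict[str, float]) -> bool:
    breaches = {k: round(v, 4) for k, v in profile.items() if v < sla[k]}
    if breaches:
        print("SLA breaches:", breaches)  # in practice, this would open a remediation ticket
    return not breaches
```

At Level 3 the point is not the specific rules but that checks like these run automatically, on every critical dataset, against thresholds agreed between producers and AI consumers.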
Level 3.5. Data quality is integrated into the AI delivery lifecycle — model development does not proceed until data quality has been validated against defined criteria. Automated data quality monitoring detects and alerts on quality degradation in real time, enabling proactive remediation before downstream AI systems are affected. The data governance framework extends to cover AI-specific data requirements, including training data documentation, feature store governance, and data bias assessment.
Level 4 — Advanced. Data management operates as a strategic capability that actively enables AI value creation. The data governance function proactively identifies data improvement opportunities that unlock new AI use cases. Data quality is continuously monitored and optimized, with automated remediation for common quality issues. Advanced metadata management provides rich context about every dataset — its lineage, quality profile, known limitations, approved use cases, and sensitivity classification. Master Data Management (MDM) ensures consistent, authoritative data across the enterprise. Data sharing agreements and data products enable AI teams to access curated, documented, quality-assured datasets without manual preparation. The percentage of AI practitioner time spent on data preparation has dropped below 30 percent.
Level 4.5. The organization treats data as a product, with data teams delivering documented, quality-assured, discoverable data products to internal consumers. Data quality metrics are part of enterprise performance dashboards reviewed by the executive committee. The organization has implemented data contracts that formalize the expectations between data producers and consumers, including AI teams. Data governance extends across organizational boundaries to partner and supplier data.
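The data contract idea in the Level 4.5 criterion can be illustrated as a small, checkable schema that a data producer and an AI-consuming team agree on. The field names, dtypes, and rules below are illustrative assumptions; real contracts usually also cover freshness, volume, and semantics.

```python
# Minimal sketch: a data contract as a checkable agreement between a data producer
# and an AI consumer. Field names, dtypes, and rules are illustrative assumptions.
from dataclasses import dataclass
import pandas as pd

@dataclass(frozen=True)
class FieldSpec:
    dtype: str
    nullable: bool = False

CUSTOMER_CONTRACT = {
    "customer_id": FieldSpec("int64"),
    "email": FieldSpec("object"),
    "updated_at": FieldSpec("datetime64[ns]", nullable=True),
}

def contract_violations(df: pd.DataFrame, contract: dict[str, FieldSpec]) -> list[str]:
    """Return a list of violations; an empty list means the delivery is accepted."""
    violations = []
    for name, spec in contract.items():
        if name not in df.columns:
            violations.append(f"missing column: {name}")
            continue
        if str(df[name].dtype) != spec.dtype:
            violations.append(f"{name}: expected dtype {spec.dtype}, got {df[name].dtype}")
        if not spec.nullable and df[name].isna().any():
            violations.append(f"{name}: null values not permitted")
    return violations
```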
Level 5 — Transformational. Data management is a recognized core competency and competitive differentiator. The organization's data is an enterprise asset that is inventoried, valued, and managed with the same rigor applied to financial assets. Data governance is not a compliance function — it is a value creation function that enables the organization to move faster, with greater confidence, than competitors burdened by data chaos. The organization contributes to industry standards for data governance, data quality, and AI data management. The data management function anticipates and prepares for emerging data requirements — new data types, new regulatory requirements, new AI architectures — before they become urgent. Data is not a bottleneck for AI — it is an accelerant.
The Use Case-Data Dynamic
Domains 5 and 6 have a relationship that is both intimate and frequently dysfunctional. Use case management identifies what the organization wants to build. Data management determines what the organization can build. When these two domains are misaligned, the result is one of two familiar failure modes.
The Feasibility Gap
The first failure mode occurs when use case management operates independently of data management. Use cases are identified, evaluated, and prioritized based on business value and strategic alignment — but without rigorous assessment of data readiness. The AI team begins working on a high-priority use case only to discover that the required data is unavailable, unreliable, undocumented, or scattered across systems with no integration layer. Months of effort are lost. Organizational confidence erodes.
This pattern is distressingly common. Annual surveys by NewVantage Partners (now Wavestone) have consistently found that a substantial majority of organizations report data challenges as the primary obstacle to delivering value from AI initiatives. The root cause is almost always a disconnect between use case management and data management — the organization's ambition exceeds its data readiness, and no process exists to reconcile the gap before resources are committed.
The Data-First Trap
The inverse failure mode occurs when data management becomes an end in itself — the organization invests years in building a comprehensive data foundation before pursuing AI use cases, believing that perfect data is a prerequisite for any AI work. This "data-first trap" produces extensive data infrastructure with no clear connection to value creation. Data quality improves, catalogs expand, governance matures — but the organization cannot articulate what it intends to do with all this well-governed data.
The resolution is to advance both domains in tandem, with each informing the other. Use case management identifies the data most critical to value creation, focusing data management investment where it matters most. Data management informs use case feasibility assessments, ensuring that prioritization reflects data reality. This bidirectional relationship is operationalized in the COMPEL lifecycle, where the Calibrate stage assesses both domains simultaneously and the Model stage designs target states that advance them in coordination, as described in Module 1.2, Article 3: Model — Designing the Target State.
Assessment Guidance for Practitioners
Domain 5 Assessment Pitfalls
The most common error in assessing AI Use Case Management is conflating activity with maturity. An organization that has identified fifty potential AI use cases is not necessarily more mature than one that has identified ten — if those fifty use cases lack business cases, feasibility assessments, or prioritization criteria, the large number actually indicates lower maturity, not higher. Look for process quality, not output volume.
Also beware of "stealth use cases" — AI projects that bypass the formal intake process because they were approved directly by an executive or initiated informally within a business unit. The existence of stealth use cases is evidence that the use case management process lacks organizational authority or credibility. Count them as evidence of Level 2 or below, regardless of how mature the formal process appears.
Domain 6 Assessment Pitfalls
The most common error in assessing Data Management and Quality is accepting technology investments as evidence of process maturity. An organization that has purchased an expensive data catalog tool but populated it with fewer than 20 percent of its data assets does not merit a Level 3 score. Similarly, data governance policies that exist in documents but are not followed in practice should be scored based on actual adherence, not documented intent.
Pay particular attention to the experience of AI practitioners. Ask data scientists and ML engineers how much time they spend on data preparation, how easily they can discover and access relevant data, and whether data quality is a recurring source of project delay or failure. Their answers provide a ground-truth check against the picture painted by data governance leadership.
Looking Ahead
Domains 5 and 6 define the strategic and material foundations of AI delivery — what the organization chooses to build and the quality of the data from which it builds. But identifying use cases and preparing data are only the beginning. Converting that preparation into deployed, operational AI systems requires three additional Process pillar capabilities.
Article 5: Process Pillar Domains — MLOps, Delivery, and Improvement examines the remaining three Process domains: ML Operations and Deployment (Domain 7), AI Project Delivery (Domain 8), and Continuous Improvement Processes (Domain 9). These domains determine whether the organization can move from data and ideas to production systems — reliably, repeatedly, and at scale.
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.