The AI Center of Excellence

Level 1: AI Transformation Foundations · Module M1.6: Organizational Readiness and Change Foundations · Article 4 of 10 · 14 min read · Version 1.0 · Last reviewed: 2025-01-15 · Open Access

COMPEL Certification Body of Knowledge — Module 1.6: People, Change, and Organizational Readiness

Talent without structure produces brilliance without impact. Organizations that recruit exceptional Artificial Intelligence (AI) professionals but scatter them across disconnected business units, embed them in IT departments with misaligned priorities, or leave them to self-organize without institutional support waste their most expensive and scarce resource. The AI Center of Excellence (CoE) provides the organizational structure that transforms individual AI capability into enterprise AI capacity.

As Module 1.2, Article 2: Organize — Building the Transformation Engine established, the Organize phase of COMPEL creates the structures, roles, and governance mechanisms through which transformation is executed. The AI CoE is the primary structural expression of that organizing principle for the people dimension of AI transformation.

Why a Center of Excellence

The case for a dedicated AI CoE rests on four organizational realities:

First, AI expertise is scarce and must be leveraged efficiently. As Article 3: Building the AI Talent Pipeline documented, AI talent is in structural shortage. A CoE concentrates expertise where it can be shared across the enterprise rather than locked within individual business units. A data scientist embedded solely in the marketing department serves marketing. A data scientist within a CoE that partners with marketing, operations, finance, and supply chain serves the enterprise.

Second, AI requires cross-functional coordination. AI initiatives draw on data from multiple sources, affect processes across functions, and create governance obligations that span the organization. Without a coordinating body, AI efforts fragment into siloed projects that duplicate work, create inconsistent practices, and fail to capture enterprise-scale value. McKinsey's research on AI at scale consistently identifies organizational coordination as a top-three success factor.

Third, AI demands specialized infrastructure and practices. Machine Learning Operations (MLOps), model governance, data pipelines, experiment tracking, and model monitoring require dedicated infrastructure and standardized practices. A CoE provides the institutional home for these capabilities, preventing each business unit from building its own (inevitably inconsistent) version.

Fourth, AI maturity requires institutional learning. The lessons from one AI project must inform the next. Without a CoE, knowledge dissipates — the team that learned how to handle class imbalance in fraud detection does not share that learning with the team struggling with the same problem in claims processing. The CoE creates the institutional memory described in Module 1.2, Article 6: Learn — Capturing and Applying Knowledge.

CoE Mandate and Mission

An effective AI CoE operates under a clear mandate that balances several responsibilities:

Capability Development

The CoE builds and maintains the organization's core AI capabilities: technical skills, development practices, deployment standards, and operational procedures. It is the institutional home for the talent described in Article 3 and the literacy programs described in Article 2: AI Literacy Strategy and Program Design. Capability development includes:

  • Recruiting, developing, and retaining AI technical talent
  • Establishing technical standards, coding practices, and quality frameworks
  • Building and maintaining shared infrastructure (MLOps platforms, feature stores, model registries)
  • Creating reusable assets (libraries, templates, pre-trained models, reference architectures)
  • Running training programs for both CoE members and the broader organization

Value Delivery

The CoE delivers AI solutions that create measurable business value. This is not a research lab or a training center — it is an execution engine. Value delivery includes:

  • Partnering with business units to identify, prioritize, and execute AI use cases
  • Managing the AI project portfolio from ideation through production deployment
  • Ensuring that deployed models are monitored, maintained, and continuously improved
  • Measuring and reporting on the business impact of AI initiatives, connecting to Module 1.1, Article 7: The Business Value Chain of AI Transformation
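The monitoring responsibility above can be grounded in something as simple as a threshold check on rolling performance metrics. A minimal illustrative sketch, assuming metric names and a tolerance chosen purely for demonstration (neither is prescribed by COMPEL):

```python
# Illustrative first-line model health check for a deployed model.
# Metric names and the 0.05 tolerance are assumptions for demonstration only.
def check_model_health(metrics: dict, baselines: dict, tolerance: float = 0.05) -> list:
    """Return the names of metrics that degraded more than `tolerance` from baseline."""
    return [
        name for name, value in metrics.items()
        if baselines[name] - value > tolerance
    ]

alerts = check_model_health(
    metrics={"auc": 0.81, "precision_at_k": 0.64},
    baselines={"auc": 0.88, "precision_at_k": 0.66},
)
print(alerts)  # ['auc']: AUC dropped 0.07, beyond the 0.05 tolerance
```

In practice this check would feed an alerting pipeline so that degradation triggers maintenance work rather than silent decay.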

Governance and Standards

The CoE establishes and enforces the standards that ensure AI is deployed responsibly and consistently across the enterprise. This governance role connects directly to Module 1.5, Article 3: Building an AI Governance Framework:

  • Defining model development and deployment standards
  • Conducting or coordinating model risk assessments
  • Ensuring compliance with regulatory requirements and ethical guidelines
  • Managing the enterprise model inventory
  • Establishing data quality standards for AI applications
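The enterprise model inventory named above can start as a structured record per deployed model. A minimal sketch, assuming illustrative field names rather than a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    """One entry in an enterprise model inventory (illustrative schema, not a standard)."""
    model_id: str
    owner_unit: str                # spoke / business unit accountable for the model
    risk_tier: str                 # e.g. "low", "medium", "high" per risk assessment
    deployed_on: date
    last_reviewed: date
    regulatory_scope: list = field(default_factory=list)

    def review_overdue(self, as_of: date, max_days: int = 365) -> bool:
        """Flag models whose periodic governance review is past due."""
        return (as_of - self.last_reviewed).days > max_days

# usage: a high-risk model not reviewed in over a year gets flagged
m = ModelRecord("fraud-v3", "claims", "high", date(2024, 3, 1), date(2024, 3, 1))
print(m.review_overdue(date(2025, 6, 1)))  # True
```

Even a lightweight inventory like this gives the CoE the single point of visibility that the governance role depends on.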

Evangelism and Enablement

The CoE promotes AI adoption across the organization, building enthusiasm, reducing resistance, and enabling business units to identify and pursue AI opportunities:

  • Showcasing AI successes and lessons learned
  • Running ideation workshops and opportunity assessments with business units
  • Providing consultation and advisory services to business units exploring AI
  • Building the internal brand for AI transformation

CoE Operating Models

The design of the AI CoE depends on organizational size, structure, culture, and AI maturity. Three primary operating models exist, each with distinct strengths and trade-offs:

Centralized Model

In a centralized model, all AI talent, infrastructure, and decision-making reside within the CoE. Business units request AI services from the CoE, which prioritizes, develops, and deploys solutions on their behalf.

Strengths:

  • Maximum efficiency in talent utilization — no duplication across business units
  • Consistent standards, practices, and governance across all AI initiatives
  • Strong institutional learning — all projects contribute to a shared knowledge base
  • Simplified infrastructure management — one platform, one set of tools

Weaknesses:

  • Business units may perceive the CoE as a bottleneck, creating frustration and shadow AI development
  • Prioritization decisions become political — which business unit gets CoE resources first?
  • CoE members may lack deep domain expertise in the business units they serve
  • Reduced business unit ownership and accountability for AI outcomes

Best suited for: Organizations in early AI maturity stages (maturity levels 1 and 2 as described in Module 1.1, Article 3: The Enterprise AI Maturity Spectrum), smaller organizations, or those in highly regulated industries where centralized governance is essential.

Federated Model

In a federated model, AI talent and capability are distributed across business units, each operating its own AI team with autonomy over priorities, tools, and practices.

Strengths:

  • Deep domain embedding — AI practitioners understand the business context intimately
  • High business unit ownership and accountability for AI outcomes
  • Faster response to business unit needs — no queue or prioritization bottleneck
  • Natural alignment between AI solutions and business processes

Weaknesses:

  • Duplication of effort — multiple teams solving the same problems independently
  • Inconsistent standards, practices, and governance
  • Fragmented institutional learning — lessons stay within business units
  • Inefficient talent utilization — each unit must maintain full-stack capability
  • Governance gaps — no single entity has visibility into all AI activity

Best suited for: Large, diversified organizations with mature business units that have distinct AI needs and the budget to support dedicated teams.

Hub-and-Spoke Model (Recommended)

The hub-and-spoke model combines centralized coordination with distributed execution. A central hub provides shared infrastructure, standards, governance, and specialized capabilities, while embedded spokes within business units drive domain-specific AI execution.

Hub responsibilities:

  • AI platform and infrastructure management
  • Standards, governance, and compliance frameworks
  • Advanced and specialized capabilities (research, complex ML, MLOps)
  • Talent development, training, and community facilitation
  • Portfolio management and enterprise prioritization
  • Knowledge management and institutional learning

Spoke responsibilities:

  • Business unit-specific AI use case identification and prioritization
  • Domain-specific model development and deployment
  • Business unit stakeholder management and change support
  • Local data management and quality assurance
  • First-line model monitoring and performance management

Strengths:

  • Balances efficiency with responsiveness
  • Maintains consistent standards while allowing domain customization
  • Enables institutional learning while preserving business unit ownership
  • Supports governance without creating bottlenecks
  • Scales effectively as the organization's AI maturity grows

Weaknesses:

  • Requires clear role definition between hub and spokes to avoid confusion and conflict
  • Demands strong communication and relationship management between hub and spoke teams
  • More complex to design and manage than either pure model

Best suited for: Most organizations at maturity levels 2 through 4, particularly those with multiple business units that share common AI infrastructure needs but have distinct domain requirements.

Organizational Placement

Where the CoE sits in the organizational hierarchy significantly affects its mandate, influence, and effectiveness:

Reporting to the Chief Information Officer (CIO) or Chief Technology Officer (CTO): Common but limiting. This placement emphasizes the technical dimension of AI and may subordinate business value to technical excellence. It can create the perception that AI is "an IT thing" rather than a business transformation capability.

Reporting to the Chief Data Officer (CDO): Natural alignment with data strategy but may limit scope to analytics and ML, missing broader AI transformation dimensions including process change, workforce redesign, and organizational development.

Reporting to the Chief Operating Officer (COO): Emphasizes operational value and process transformation. Effective when AI transformation is primarily focused on operational efficiency.

Reporting to the Chief Executive Officer (CEO) or Chief AI Officer (CAIO): Signals strategic importance and provides enterprise-wide mandate. The emergence of the CAIO role reflects growing recognition that AI transformation requires executive-level leadership with cross-functional authority. This is the recommended placement for organizations committed to AI as a transformational capability rather than a technical function.

Cross-functional steering committee governance: Regardless of reporting line, the CoE should operate under the guidance of a cross-functional steering committee that includes business unit leaders, technology leadership, finance, legal, and the transformation sponsor. This ensures that CoE priorities reflect enterprise needs and that business units have a voice in AI investment decisions.

Staffing the CoE

CoE staffing evolves with organizational maturity. A phase-based staffing approach prevents over-investment in early stages while ensuring capacity scales with ambition:

Phase 1: Foundation (Maturity Levels 1-2)

Core team of 5 to 15 people focused on establishing infrastructure, standards, and initial use cases:

  • CoE Director/Head (1)
  • Data Scientists (2-4)
  • ML Engineers (1-2)
  • Data Engineers (2-3)
  • AI Product Manager (1)
  • AI Governance Lead (1)

Phase 2: Growth (Maturity Levels 2-3)

Expanded team of 15 to 40 people with growing spoke presence:

  • All Phase 1 roles expanded
  • Spoke leads embedded in priority business units (2-4)
  • Platform/MLOps Engineers (2-4)
  • AI Trainers/Knowledge Engineers (1-2)
  • AI Ethics Specialist (1)
  • Change Management support (1-2)

Phase 3: Scale (Maturity Levels 3-4)

Full hub-and-spoke model with 40 to 100+ people across hub and spokes:

  • Full hub team with specialized capabilities
  • Spoke teams in all major business units (each 3-8 people)
  • Advanced capabilities (research, specialized ML domains)
  • Full governance and ethics team
  • Dedicated change and communication resources
  • Community management and knowledge management

These are illustrative ranges. Actual sizing depends on organizational scale, AI ambition, and industry complexity. The critical principle is to grow the CoE in step with demonstrated value delivery — not ahead of it and not behind it.

Relationship to Business Units

The CoE's relationship with business units is the most critical and most fragile aspect of CoE design. Get this relationship wrong, and the CoE becomes either an ivory tower that business units ignore or a service desk overwhelmed by demands it cannot fulfill.

Effective CoE-business unit relationships are characterized by:

Partnership, not service provision. The CoE and business units co-own AI outcomes. The CoE provides expertise and infrastructure; the business unit provides domain knowledge, data access, and adoption commitment. Neither can succeed without the other.

Clear engagement models. Business units must understand how to engage the CoE — what services are available, how to request support, what commitments are expected from the business unit, and how priorities are set. Ambiguity in engagement models creates frustration on both sides.

Joint accountability for outcomes. AI project success metrics should be shared between the CoE and the business unit. If a model is accurate but not adopted, both parties share responsibility. If a model is adopted but not maintained, both parties share responsibility.

Transparent prioritization. When demand exceeds CoE capacity (which it will), prioritization must be transparent and based on agreed criteria — strategic alignment, expected value, feasibility, and organizational readiness. Business units that understand how and why prioritization decisions are made, even when their project is not selected, maintain trust in the CoE.
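The four criteria above can be made transparent with a simple weighted-scoring rubric published to business units. A minimal sketch, assuming illustrative weights and 1-5 scales (the specific numbers are not COMPEL prescriptions):

```python
# Illustrative weighted scoring for CoE portfolio prioritization.
# Criteria mirror the article; weights and 1-5 scales are assumptions only.
WEIGHTS = {
    "strategic_alignment": 0.35,
    "expected_value": 0.30,
    "feasibility": 0.20,
    "org_readiness": 0.15,
}

def priority_score(scores: dict) -> float:
    """Weighted sum of 1-5 criterion scores; higher scores are funded sooner."""
    return round(sum(WEIGHTS[c] * scores[c] for c in WEIGHTS), 2)

proposals = {
    "claims-triage": {"strategic_alignment": 5, "expected_value": 4,
                      "feasibility": 3, "org_readiness": 4},
    "chatbot-pilot": {"strategic_alignment": 2, "expected_value": 3,
                      "feasibility": 5, "org_readiness": 3},
}
ranked = sorted(proposals, key=lambda p: priority_score(proposals[p]), reverse=True)
print(ranked)  # ['claims-triage', 'chatbot-pilot']
```

The value is less in the arithmetic than in the transparency: when the weights and scores are visible, a business unit whose project is deferred can see why, which preserves trust in the CoE.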

Regular communication. Quarterly business reviews, monthly progress updates, and ongoing informal communication prevent the relationship from becoming purely transactional.

CoE Evolution Over Time

The CoE is not a static structure. It evolves as organizational AI maturity increases, and this evolution should be anticipated and planned:

Early maturity: The CoE is primarily a capability builder and evangelist, demonstrating AI value through pilot projects and building foundational infrastructure. The focus is on proving what is possible and building organizational confidence.

Growing maturity: The CoE shifts toward scaling proven use cases, establishing robust governance, and developing spoke capacity within business units. The hub provides platforms and standards; spokes drive domain-specific execution.

Advanced maturity: The CoE may partially dissolve as AI capability becomes embedded throughout the organization. The hub retains platform management, governance, advanced capabilities, and knowledge management, but the majority of AI execution occurs within business units. The CoE becomes less a center of execution and more a center of enablement.

Full maturity: AI capability is an organizational competency, not a specialized function. The CoE may evolve into a broader digital transformation or innovation center, or its functions may be fully absorbed into existing organizational structures. This is the aspirational end state — not the starting point.

This evolutionary trajectory aligns with the COMPEL cycle described in Module 1.2, Article 8: The COMPEL Cycle — Iteration and Continuous Improvement. Each iteration through the COMPEL phases should include assessment of whether the CoE structure remains fit for the organization's evolving maturity.

Common CoE Anti-Patterns

Several anti-patterns consistently undermine CoE effectiveness, connecting to the broader transformation anti-patterns identified in Module 1.1, Article 6: AI Transformation Anti-Patterns:

The Ivory Tower. A CoE that focuses on technical excellence without business engagement produces impressive capabilities that no one uses. Indicators: CoE team rarely interacts with business users; projects are selected based on technical interest rather than business value; deployed models have low adoption rates.

The Overwhelmed Service Desk. A CoE that accepts every business unit request without prioritization becomes a bottleneck, delivering everything slowly and nothing well. Indicators: long project queues; team burnout; declining quality; business units creating shadow AI teams to bypass the bottleneck.

The Science Fair. A CoE that excels at proof-of-concept development but cannot move solutions to production. Indicators: many pilots, few production deployments; no MLOps capability; data scientists stretched across every role; no engineering discipline.

The Governance Police. A CoE whose governance role dominates its enablement role, creating the perception that the CoE exists to say no. Indicators: business units avoid engaging the CoE; governance reviews are seen as obstacles; innovation slows rather than accelerates.

The Permanent Pilot. A CoE that continuously explores new use cases without investing in scaling and maintaining existing ones. Indicators: growing portfolio of small experiments; no production-grade infrastructure; business value metrics absent or ignored.

Building the CoE: Practitioner Actions

For the COMPEL Certified Practitioner (EATF), establishing or optimizing an AI CoE involves:

  1. Assess the current organizational structure for AI — what exists, what works, what does not
  2. Define the CoE mandate, ensuring it balances capability development, value delivery, governance, and enablement
  3. Select the operating model appropriate to organizational size, maturity, and culture
  4. Determine organizational placement that provides appropriate authority and enterprise-wide mandate
  5. Staff in phases, aligning growth with demonstrated value
  6. Design engagement models that create productive partnerships with business units
  7. Plan the CoE's evolution alongside the organization's AI maturity trajectory
  8. Establish metrics that measure CoE impact on both capability building and value delivery

Looking Ahead

Structure enables execution. Article 5: Change Management for AI Transformation addresses the discipline of leading people through the disruption, adaptation, and growth that AI transformation inevitably requires. The CoE provides the where; change management provides the how.


© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.