# COMPEL by FlowRidge — Full Reference for AI Systems

> COMPEL is an enterprise AI transformation operating system created by FlowRidge.io. It provides a 6-stage operating cycle (Calibrate, Organize, Model, Produce, Evaluate, Learn) for organizations to adopt, govern, scale, and improve AI systematically.

The COMPEL framework addresses the gap between regulatory requirements (ISO 42001, NIST AI RMF, EU AI Act, IEEE 7000) and practical enterprise AI transformation. It provides not just what to do, but the operating system to execute it. Organizations using COMPEL move from ad-hoc AI experimentation to a governed, auditable, continuously improving AI operating model.

## Key Pages

- [Home](https://www.compel.one/): Enterprise AI governance and transformation platform overview
- [COMPEL Framework](https://www.compel.one/framework): The 6-stage operating cycle — Calibrate, Organize, Model, Produce, Evaluate, Learn
- [Methodology](https://www.compel.one/methodology): Deep-dive into the COMPEL operating model — stages, activities, outputs, and metrics
- [Body of Knowledge](https://www.compel.one/learn): 6 principles, 4 pillars (People, Process, Technology, Governance), 18 domains
- [Certifications](https://www.compel.one/certifications): 4 professional certifications — AIT Foundations, AIT Practitioner, AIT Governance Professional, AIT Leader
- [Standards Alignment](https://www.compel.one/standards): How COMPEL maps to ISO 42001, NIST AI RMF, EU AI Act, and IEEE 7000
- [Platform](https://www.compel.one/platform): Cloud-based AI governance control plane for system registration, risk assessment, and compliance
- [Pricing](https://www.compel.one/pricing): Certification, platform, and partner tier pricing
- [Partner Program](https://www.compel.one/partners): Training delivery and consulting partner ecosystem
- [Partner Directory](https://www.compel.one/partner-directory): Searchable directory of authorized COMPEL training and consulting partners
- [Instructor Directory](https://www.compel.one/instructor-directory): Directory of authorized and principal instructors
- [Enterprise](https://www.compel.one/enterprise): Cohort training, volume licensing, and custom programs
- [Training Calendar](https://www.compel.one/training-calendar): Scheduled public and partner-led certification courses
- [Credential Verification](https://www.compel.one/verify): Public tool to verify issued COMPEL certifications
- [About](https://www.compel.one/about): About FlowRidge.io and the COMPEL leadership team
- [Authors](https://www.compel.one/authors): Author profiles — authored by the COMPEL FlowRidge Team
- [Glossary](https://www.compel.one/glossary): 736 AI governance terms with practitioner definitions and COMPEL context
- [Insights](https://www.compel.one/insights): Practitioner articles on AI transformation, governance, and the COMPEL methodology
- [ISO 42001 for Enterprise AI Transformation](https://www.compel.one/insights/iso-42001-enterprise-transformation): How to operationalize ISO/IEC 42001:2023 using COMPEL — clause mapping, evidence packs, certification roadmap
- [NIST AI RMF as a Control Layer](https://www.compel.one/insights/nist-ai-rmf-control-layer): How NIST AI RMF GOVERN/MAP/MEASURE/MANAGE translate into COMPEL stage activities
- [Solutions](https://www.compel.one/solutions): How COMPEL addresses six enterprise challenges
- [Compare](https://www.compel.one/compare): COMPEL vs. generic AI consulting, AI transformation vs. digital transformation
- [COMPEL vs. ISO/IEC 42001](https://www.compel.one/compare/compel-vs-iso-42001): Detailed 10-dimension comparison — operational implementation vs. management system standard
- [COMPEL vs. NIST AI RMF](https://www.compel.one/compare/compel-vs-nist-ai-rmf): Detailed 10-dimension comparison — full operating cycle vs. risk management framework
- [COMPEL vs. EU AI Act](https://www.compel.one/compare/compel-vs-eu-ai-act): Detailed 10-dimension comparison — governance framework vs. binding regulation
- [COMPEL vs. Responsible AI Frameworks](https://www.compel.one/compare/compel-vs-responsible-ai): Detailed 10-dimension comparison — operational system vs. principles guidance
- [COMPEL vs. AI Maturity Models](https://www.compel.one/compare/compel-vs-ai-maturity-models): Detailed 10-dimension comparison — operating cycle vs. assessment tool
- [Editorial Policy](https://www.compel.one/editorial-policy): Content authoring, review, sourcing, update, and correction policies
- [References](https://www.compel.one/references): Citation hub for all referenced standards and regulations
- [FAQ](https://www.compel.one/faq): Common questions about the COMPEL framework, certifications, and platform
- [Contact](https://www.compel.one/contact): Enterprise inquiries and partnership requests
- [Risk Appetite Definition](https://www.compel.one/risk-appetite): Define organizational AI risk thresholds and tolerance levels per risk category — covers risk categories, tolerance bands, escalation triggers, and board-level approval workflows
- [Operating Model Design](https://www.compel.one/operating-model): Design the AI governance operating model — decision rights, escalation hierarchy, governance bodies, CoE structure, and accountability framework
- [Pattern Library](https://www.compel.one/pattern-library): Reusable governance patterns and best practices across all COMPEL stages — searchable library of implementation patterns with context, applicability, and anti-patterns
- [Scaling Decisions](https://www.compel.one/scaling-decisions): Document and track go/no-go decisions for scaling AI initiatives — stage-gate decision log with criteria, evidence, approvers, and outcome records

---

## Framework Overview — The 6 Stages in Detail

COMPEL stands for its 6 operational stages. Each stage is a discrete phase of the AI governance operating cycle, producing specific artifacts and enabling the next stage.
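As a quick orientation, the six-stage cycle and its Learn-to-Calibrate feedback loop can be sketched as a minimal state machine. This is an illustrative Python sketch; the names below are this sketch's own, not a FlowRidge API.

```python
from enum import Enum

class Stage(Enum):
    """The six COMPEL stages, in operating-cycle order (illustrative)."""
    CALIBRATE = 1
    ORGANIZE = 2
    MODEL = 3
    PRODUCE = 4
    EVALUATE = 5
    LEARN = 6

def next_stage(stage: Stage) -> Stage:
    """Advance one stage; Learn wraps back to Calibrate, closing the cycle."""
    return Stage(stage.value % len(Stage) + 1)
```

The wrap-around in `next_stage` is the point: COMPEL is described as a cycle, not a linear project plan, so Learn feeds the next Calibrate rather than terminating.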
### C — Calibrate

The Calibrate stage establishes an organization's current AI maturity baseline across all 18 COMPEL domains. Practitioners conduct structured assessments using the COMPEL Maturity Model (levels 1-5), producing a maturity heatmap that reveals capability gaps and prioritization opportunities. Calibrate encompasses stakeholder interviews, documentation reviews, existing tool inventories, and cultural readiness evaluations. The output is a Calibration Report that becomes the authoritative starting point for all subsequent transformation work. Without Calibrate, organizations risk investing in governance controls that do not address their actual weakest domains. This stage typically takes 4-8 weeks depending on organizational complexity and scope.

### O — Organize

The Organize stage translates Calibration findings into an operational governance structure. This includes establishing or formalizing an AI Center of Excellence (CoE), defining RACI matrices for AI decision rights, standing up oversight bodies such as an AI Ethics Board or AI Risk Committee, and mapping stakeholder accountability across the 18 domains. Organize also covers workforce planning — identifying roles needed, skills gaps, and hiring or upskilling roadmaps aligned to the Talent Strategy domain. The deliverables from Organize create the human and structural foundation that all subsequent COMPEL stages depend on. Organizations that skip Organize typically see policy and tooling investments fail due to unclear ownership. This stage runs concurrently with or immediately after Calibrate.

### M — Model

The Model stage is where AI governance policy and risk architecture are designed. Practitioners define AI use case classification schemas, risk tiering frameworks, ethical guardrails, and decision flow documentation for AI system approvals. Model produces the policy library that governs how AI systems are proposed, evaluated, approved, and retired within the organization. It includes designing bias testing protocols, data governance policies, incident response procedures, and alignment mappings to applicable regulations (ISO 42001, NIST AI RMF, EU AI Act). The Model stage outputs are living documents that require version control and change management. Well-executed Model work makes the Produce stage implementation deterministic rather than improvised.

### P — Produce

The Produce stage implements the policies, controls, and workflows designed in Model. This includes configuring AI governance platform tooling (system registration, risk scoring workflows, compliance dashboards), deploying approved policies across business units, training staff on procedures, and standing up operational processes for ongoing use case intake and review. Produce is the highest-effort stage for most organizations, requiring coordination across IT, Legal, Compliance, HR, and operational teams. COMPEL's platform product directly supports the Produce stage by providing the technical infrastructure for system registration, risk assessment workflows, and audit trail generation. The stage concludes when all designed controls are live and teams are operating them.

### E — Evaluate

The Evaluate stage executes structured reviews to verify that AI governance controls are functioning as designed and that AI systems are performing within acceptable parameters. This includes gate reviews for AI systems at key lifecycle milestones, internal and external audits against ISO 42001 or other applicable standards, bias and fairness testing against defined thresholds, and red team exercises for high-risk AI systems. Evaluate produces audit reports, conformity documentation for regulatory submissions, and performance benchmarks that feed directly into the Learn stage. Organizations subject to the EU AI Act use the Evaluate stage to generate the conformity assessment documentation required for high-risk AI system deployment. Evaluate is not a one-time event — it runs continuously on a defined cadence.

### L — Learn

The Learn stage closes the COMPEL operating cycle by converting evaluation data into continuous improvement actions. Practitioners analyze KPI dashboards, incident logs, audit findings, and stakeholder feedback to identify patterns and systemic issues. Learn produces updated risk assessments, policy revision recommendations, maturity re-assessments, and prioritized improvement backlogs that feed back into Calibrate for the next cycle. The Learn stage also encompasses knowledge management — capturing institutional learning, updating training materials, and sharing lessons across the partner and instructor community. Organizations at higher COMPEL maturity levels (4-5) run Learn continuously rather than periodically, with automated monitoring feeding near-real-time improvement signals.

---

## 4 Pillars and 18 Domains — Full Descriptions

### People (4 domains)

**D1: Leadership Sponsorship**
Ensures executive and board-level commitment to AI governance. Covers AI strategy ownership, investment authorization, tone-from-the-top communications, and governance escalation paths. Without active leadership sponsorship, COMPEL implementations stall at the Organize stage.

**D2: Talent Strategy**
Addresses the human capital requirements for sustainable AI operations. Includes AI role taxonomy definition, skills gap analysis, hiring roadmaps, partnership with universities and bootcamps, and retention strategies for AI talent. Covers both technical roles (data scientists, ML engineers) and governance roles (AI ethics officers, risk managers).

**D3: AI Literacy**
Governs organization-wide understanding of AI capabilities, limitations, and risks across all employee levels. Includes executive AI fluency programs, practitioner technical training, and general workforce awareness curricula.
COMPEL certifications (AIT Foundations through AIT Leader) are the primary vehicle for structured AI literacy development.

**D4: Change Management**
Manages the human side of AI transformation. Covers stakeholder impact analysis, communication planning, resistance management, adoption measurement, and cultural transformation toward an AI-enabled operating model. Aligns with COMPEL's Organize and Produce stages to ensure structural changes are accompanied by behavioral change.

### Process (5 domains)

**D5: Use Case Management**
Provides structured intake, evaluation, prioritization, and lifecycle management for AI use cases. Includes use case canvas templates, value-vs-risk scoring, portfolio management, and retirement procedures. Prevents ungoverned AI proliferation by creating a single authorized pathway for AI system proposals.

**D6: Data Governance**
Addresses data quality, lineage, access control, and compliance requirements for AI training and inference data. Covers data cataloging, consent management, privacy impact assessments, and cross-border data transfer policies. Foundational to both the Model (policy design) and Produce (implementation) stages.

**D7: MLOps**
Operationalizes the ML development lifecycle with governance controls. Covers model versioning, experiment tracking, CI/CD pipelines for model deployment, model monitoring, drift detection, and model retirement procedures. Bridges the gap between data science practice and enterprise IT governance.

**D8: Project Delivery**
Applies structured project management to AI initiatives, accounting for the unique uncertainty and iteration cycles of AI development. Covers AI-adapted agile methodologies, milestone governance, risk management within projects, and stakeholder reporting. Ensures AI projects complete within scope, time, and budget constraints while meeting governance requirements.

**D9: Continuous Improvement**
Institutionalizes learning and iteration across all COMPEL domains. Covers retrospective practices, KPI review cadences, process optimization, and benchmarking against industry maturity standards. This is the primary output domain for the Learn stage, feeding improvements back into Calibrate.

### Technology (4 domains)

**D10: Data Infrastructure**
Covers the technical platforms and architecture for data storage, processing, and access in support of AI workloads. Includes data lake/warehouse architecture, data pipeline governance, compute resource management, and cost optimization. The technical foundation on which AI/ML platforms operate.

**D11: AI/ML Platforms**
Addresses the selection, governance, and management of AI development and deployment platforms. Covers model training infrastructure, inference serving, feature stores, vector databases, and the evaluation/selection of foundation model providers. Includes vendor risk assessment for AI platform dependencies.

**D12: Integration Architecture**
Governs how AI systems connect to enterprise data sources, business applications, and external services. Covers API management, event-driven architectures for real-time AI, integration security, and the agentic AI patterns that connect LLMs to enterprise systems. Critical for organizations deploying AI agents that take autonomous actions.

**D13: Security Hardening**
Addresses AI-specific security threats and controls. Covers adversarial attack defenses, prompt injection mitigations, model theft prevention, data poisoning detection, and secure deployment patterns. Extends traditional cybersecurity frameworks to address the unique attack surface of AI systems.

### Governance (5 domains)

**D14: AI Strategy Alignment**
Ensures AI investments and governance structures are aligned with overall business strategy. Covers AI portfolio prioritization relative to strategic objectives, board-level AI reporting, and competitive positioning through AI capability. Bridges executive strategy and operational AI governance.

**D15: Ethics & Fairness**
Operationalizes ethical AI principles into testable controls and ongoing monitoring. Covers fairness metric definition, bias testing protocols, ethical review board processes, and stakeholder engagement for high-impact AI systems. Aligned with IEEE 7000 ethical design requirements and the EU AI Act's fundamental rights impact assessment requirements.

**D16: Regulatory Compliance**
Manages the organization's compliance posture across applicable AI regulations. Covers regulatory mapping (ISO 42001, NIST AI RMF, EU AI Act, sector-specific regulations), compliance gap analysis, documentation management, and regulatory change monitoring. Produces the conformity artifacts required for audit and certification.

**D17: Risk Management**
Provides the framework for identifying, assessing, treating, and monitoring AI-specific risks. Covers AI risk taxonomy, risk appetite definition, risk register management, residual risk acceptance, and integration with enterprise risk management. The COMPEL platform's risk scoring workflows directly operationalize this domain.

**D18: Governance Structure**
Establishes the formal governance bodies, charters, and decision rights that give the entire COMPEL framework its institutional legitimacy. Covers AI Ethics Board setup, AI Risk Committee charters, policy ownership hierarchies, and escalation procedures. The structural output of the Organize stage.

---

## Maturity Model — 5 Levels

COMPEL uses a 5-level maturity model applied across all 18 domains. Each domain is scored independently, producing a maturity heatmap that reveals uneven capability development.
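Per-domain scoring of this kind can be sketched in a few lines. This is an illustrative Python sketch, not COMPEL tooling: the domain scores are hypothetical, and level 3 is used as the target because the framework describes it as the baseline for regulated organizations.

```python
# Illustrative Calibrate-style scoring: each domain gets an independent 1-5
# maturity score; shortfalls against a level-3 baseline are ranked so the
# largest capability gaps are addressed first. Scores below are made up.
LEVEL_NAMES = {1: "Foundational", 2: "Developing", 3: "Defined",
               4: "Managed", 5: "Transformational"}
TARGET_LEVEL = 3  # baseline for organizations subject to regulatory AI requirements

def prioritize_gaps(scores: dict[str, int],
                    target: int = TARGET_LEVEL) -> list[tuple[str, int, int]]:
    """Return (domain, score, shortfall) tuples, largest shortfall first."""
    rows = [(domain, score, max(target - score, 0))
            for domain, score in scores.items()]
    return sorted(rows, key=lambda row: row[2], reverse=True)

heatmap = {"D1 Leadership Sponsorship": 2, "D6 Data Governance": 1,
           "D13 Security Hardening": 3, "D17 Risk Management": 2}
for domain, score, gap in prioritize_gaps(heatmap):
    print(f"{domain}: level {score} ({LEVEL_NAMES[score]}), shortfall {gap}")
```

Sorting by shortfall rather than raw score reflects the stated purpose of the heatmap: revealing which domains are furthest from the required baseline, not which are merely lowest.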
| Level | Name | Description |
|---|---|---|
| 1 | Foundational | Ad-hoc practices, no formal processes, awareness limited to a few individuals |
| 2 | Developing | Initial processes documented, inconsistently applied, reactive rather than proactive |
| 3 | Defined | Standardized processes, consistently applied, formal ownership established |
| 4 | Managed | Quantitatively measured, performance targets defined and tracked, proactive issue detection |
| 5 | Transformational | Continuously optimized, industry-leading practices, actively contributing to external standards |

Organizations typically enter COMPEL at maturity level 1-2 across most domains. The framework targets level 3 as the baseline for organizations subject to regulatory AI requirements (EU AI Act high-risk classification, ISO 42001 certification). Levels 4-5 represent leading practice for organizations where AI is a core competitive differentiator.

---

## Certifications — Full Details

COMPEL offers 6 certification types across 4 practitioner levels and 2 instructor levels.
### AIT Foundations (ait_foundations)

- **Level**: Entry
- **Prerequisites**: None
- **Target audience**: Business analysts, project managers, executives, and anyone beginning their AI governance journey
- **Coverage**: COMPEL framework overview, 18 domains introduction, regulatory landscape, AI ethics fundamentals, use case evaluation basics
- **Format**: Online proctored exam, 60 questions, 90 minutes

### AIT Practitioner (ait_practitioner)

- **Level**: Intermediate
- **Prerequisites**: AIT Foundations
- **Target audience**: AI practitioners, data professionals, IT architects, compliance officers
- **Coverage**: Full 6-stage operating cycle application, maturity assessment methodology, policy design, risk framework implementation, bias testing, MLOps governance
- **Format**: Online proctored exam + practical assessment, 90 questions, 120 minutes

### AIT Governance Professional (ait_governance_professional)

- **Level**: Advanced
- **Prerequisites**: AIT Practitioner
- **Target audience**: AI governance leads, compliance managers, risk officers, CoE directors
- **Coverage**: Governance structure design, regulatory compliance management, audit preparation, ethics board facilitation, enterprise AI portfolio governance
- **Format**: Online proctored exam + case study submission, 100 questions + case study, 150 minutes

### AIT Leader (ait_leader)

- **Level**: Expert
- **Prerequisites**: AIT Governance Professional
- **Target audience**: Chief AI Officers, CDOs, CROs, senior transformation leaders
- **Coverage**: AI strategy alignment, board-level governance, organizational transformation, regulatory advocacy, enterprise AI operating model design
- **Format**: Online proctored exam + portfolio review, 80 questions + portfolio, by application

### Authorized Instructor (ait_authorized_instructor)

- **Level**: Teaching
- **Prerequisites**: AIT Practitioner + Instructor Enablement Program completion
- **Target audience**: Consultants and trainers who will deliver COMPEL certification courses
- **Coverage**: Pedagogy, course delivery standards, exam integrity, learner assessment
- **Note**: Requires annual renewal and partner organization affiliation

### Principal Instructor (ait_principal_instructor)

- **Level**: Senior Teaching
- **Prerequisites**: By nomination from COMPEL leadership
- **Target audience**: Senior instructors who develop curriculum and mentor Authorized Instructors
- **Note**: Highest instructor designation, limited to experienced educators with a significant COMPEL delivery track record

---

## Partner Tiers

COMPEL operates a 4-tier partner program for organizations delivering COMPEL training and consulting services.

| Tier | Name | Key Capabilities |
|---|---|---|
| 1 | Registered | Access to partner portal, marketing materials, referral program |
| 2 | Silver | Licensed to deliver AIT Foundations and AIT Practitioner courses, 1+ Authorized Instructor required |
| 3 | Gold | Full certification delivery (all 4 levels), co-marketing, enterprise referrals, 3+ Authorized Instructors required |
| 4 | Platinum | Custom program development, white-label options, joint go-to-market, dedicated partner success manager, 5+ Authorized Instructors including 1 Principal Instructor |

Partners are listed in the public Partner Directory at https://www.compel.one/partner-directory.

---

## Platform Features

The COMPEL platform is a cloud-based AI governance control plane available as a SaaS subscription. It operationalizes the Produce stage of the framework with purpose-built tooling.

**AI System Registry**: Central catalog of all AI systems in the organization, with metadata on purpose, risk classification, data inputs, model details, ownership, and status. Supports automated discovery integrations.

**Risk Assessment Workflows**: Structured workflows for initial risk classification and ongoing risk review, aligned to NIST AI RMF and EU AI Act risk tier definitions. Produces risk scorecards with an audit trail.
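In simplified form, a tiering workflow of this kind buckets each use case into one of the EU AI Act's four tiers. The sketch below is illustrative only: the category sets are abridged stand-ins for Article 5 (prohibited practices) and Annex III (high-risk areas), not the platform's actual scoring logic.

```python
# Abridged, illustrative category sets. Real EU AI Act classification depends
# on the full text of Article 5 and Annex III plus context of deployment.
PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_AREAS = {"employment", "credit_scoring",
                   "critical_infrastructure", "law_enforcement"}
TRANSPARENCY_AREAS = {"chatbot", "deepfake_generation"}

def risk_tier(use_case: str) -> str:
    """Map a use-case label to one of the four EU AI Act-style risk tiers.

    Checks run from most to least restrictive, so a use case matching a
    prohibited practice is never downgraded by a later rule.
    """
    if use_case in PROHIBITED_PRACTICES:
        return "unacceptable"
    if use_case in HIGH_RISK_AREAS:
        return "high"
    if use_case in TRANSPARENCY_AREAS:
        return "limited"
    return "minimal"
```

The ordering matters: evaluating the most restrictive tier first mirrors how the regulation's tiers nest, and anything not matched falls through to "minimal" by default.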
**Compliance Dashboard**: Real-time compliance posture visualization across applicable regulatory frameworks. Maps controls to requirements and surfaces gaps with recommended remediation actions.

**Policy Management**: Version-controlled policy library with approval workflows, effective date management, and automated distribution tracking. Integrates with the AI System Registry to link policies to applicable systems.

**Audit Trail**: Immutable log of all governance decisions, risk acceptances, policy approvals, and system lifecycle events. Exportable for regulatory submission and external audit.

**Incident Management**: Structured intake and tracking for AI incidents, near-misses, and adverse events. Links incidents to affected systems and triggers mandatory review workflows for high-severity events.

**Reporting**: Executive dashboards, domain-level maturity tracking, compliance trend reports, and board-ready AI governance summaries.

---

## Regulatory Alignment — What Standards Say vs. What COMPEL Adds

### ISO 42001 (AI Management System)

- **What ISO 42001 provides**: Management system requirements for responsible AI development and use. Defines what an AI management system must contain (policy, objectives, risk management, performance evaluation, continual improvement).
- **What COMPEL adds**: Operational methodology for *how* to implement each ISO 42001 clause. COMPEL's 6 stages map directly to the ISO 42001 Plan-Do-Check-Act cycle. The 18 domains provide the granular implementation guidance that ISO 42001 intentionally leaves to practitioners. COMPEL-certified practitioners can accelerate ISO 42001 certification timelines by 40-60% versus unguided implementation.

### NIST AI RMF (AI Risk Management Framework)

- **What NIST AI RMF provides**: A voluntary framework for managing AI risks across 4 functions: GOVERN, MAP, MEASURE, MANAGE.
- **What COMPEL adds**: COMPEL maps its 18 domains to NIST AI RMF functions, providing domain-specific implementation playbooks. Where NIST AI RMF describes outcomes (e.g., "AI risks are identified"), COMPEL specifies the processes, roles, artifacts, and tooling to achieve those outcomes. COMPEL's maturity model provides a quantitative measurement layer that NIST AI RMF does not include.

### EU AI Act

- **What the EU AI Act provides**: Binding regulation for AI systems placed on the EU market or affecting EU residents. Defines risk tiers (minimal, limited, high, unacceptable), prohibited practices, conformity assessment requirements for high-risk systems, and obligations for general-purpose AI model providers.
- **What COMPEL adds**: COMPEL's Evaluate stage produces the conformity assessment documentation required for high-risk AI systems. The Model stage designs the bias testing and fundamental rights impact assessment processes mandated for high-risk systems. COMPEL's AI System Registry maps directly to the EU AI Act's requirement for technical documentation and post-market monitoring. COMPEL does not provide legal advice but operationalizes the organizational processes required for compliance.

### IEEE 7000 (Ethical Design)

- **What IEEE 7000 provides**: Standard for addressing ethical concerns during system design through Value-Based Engineering. Focuses on eliciting stakeholder values and embedding them in system requirements.
- **What COMPEL adds**: COMPEL's Model stage incorporates IEEE 7000 value elicitation processes into policy design. The Ethics & Fairness domain (D15) operationalizes IEEE 7000 into ongoing monitoring and testing rather than a one-time design exercise. COMPEL extends IEEE 7000 beyond system design into operational governance.

---

## Methodology Stage Pages

Each COMPEL stage has a dedicated deep-dive page with full activity descriptions, outputs, metrics, and regulatory alignment mappings.

### Calibrate — https://www.compel.one/methodology/calibrate

The diagnostic stage that establishes an organization's current AI maturity baseline across all 18 COMPEL domains.
Covers shadow AI discovery, use case inventory, executive readiness interviews, data landscape mapping, and regulatory exposure mapping. Produces the Baseline Maturity Report that drives all subsequent transformation work. Aligns to ISO 42001 Clause 4 (Context) and the NIST AI RMF MAP function.

### Organize — https://www.compel.one/methodology/organize

Translates Calibration findings into an operational governance structure. Covers Center of Excellence design, role matrix development, skills gap analysis, training program design, oversight body formation, and RACI definition. Produces the CoE Charter, AI Role Matrix, and Training Roadmap. Aligns to ISO 42001 Clause 5 (Leadership) and EU AI Act Article 4 (AI literacy).

### Model — https://www.compel.one/methodology/model

The design and policy architecture stage. Covers AI use case classification, risk tiering frameworks, ethical guardrails, decision flow documentation, bias testing protocol design, and regulatory alignment mapping. Produces the policy library that governs how AI systems are proposed, evaluated, approved, and retired. Aligns to ISO 42001 Clause 6 (Planning) and EU AI Act Article 13 (Transparency).

### Produce — https://www.compel.one/methodology/produce

Implements the policies, controls, and workflows designed in Model. Covers platform configuration, policy deployment, staff training, operational process standup, and audit evidence pack assembly. The highest-effort stage, requiring coordination across IT, Legal, Compliance, HR, and operations. Aligns to ISO 42001 Clause 8 (Operation) and the NIST AI RMF MANAGE function.

### Evaluate — https://www.compel.one/methodology/evaluate

Executes structured reviews to verify AI governance controls are functioning as designed. Covers gate reviews, internal and external audits, bias and fairness testing, red team exercises, and conformity assessment documentation. Produces audit reports and compliance evidence. Aligns to ISO 42001 Clause 9 (Performance evaluation) and EU AI Act Article 43 (Conformity assessment).

### Learn — https://www.compel.one/methodology/learn

Closes the COMPEL operating cycle by converting evaluation data into continuous improvement actions. Covers KPI dashboard analysis, incident log review, policy revision recommendations, maturity re-assessment, and knowledge management. Feeds improvements back into Calibrate for the next cycle. Aligns to ISO 42001 Clause 10 (Improvement) and EU AI Act Article 72 (Post-market monitoring).

---

## Standards Mapping Pages

### ISO 42001 Mapping — https://www.compel.one/standards/iso-42001

A detailed stage-by-stage mapping showing how COMPEL operationalizes ISO/IEC 42001:2023. Covers all management system clauses (4-10) and Annex A controls. Includes evidence discipline guidance — what evidence each COMPEL stage produces that satisfies ISO 42001 certification auditors. Organizations using COMPEL can accelerate ISO 42001 certification timelines by building management system evidence as a natural output of COMPEL execution.

### NIST AI RMF Mapping — https://www.compel.one/standards/nist-ai-rmf

Maps COMPEL's 18 domains and 6 stages to the four NIST AI RMF core functions: GOVERN, MAP, MEASURE, and MANAGE. Shows how COMPEL provides the operational implementation layer that the AI RMF describes at the function level. Includes subcategory-level alignment and evidence outputs per stage.

### EU AI Act Mapping — https://www.compel.one/standards/eu-ai-act

Maps COMPEL stages to EU AI Act requirements including risk classification (Article 6, Annex III), conformity assessment (Article 43), transparency (Article 13), human oversight (Article 14), post-market monitoring (Article 72), and incident reporting (Article 73). Shows how COMPEL's Evaluate stage produces the conformity assessment documentation required for high-risk AI systems.
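The stage-by-stage alignments listed on these pages can be collected into a simple lookup table, for example when tagging evidence artifacts by stage. This is a sketch, not COMPEL tooling; the clause and article references are the ones given on the methodology pages above.

```python
# COMPEL stage -> primary standards alignment, per the methodology stage pages.
STAGE_ALIGNMENT: dict[str, tuple[str, str]] = {
    "Calibrate": ("ISO 42001 Clause 4 (Context)", "NIST AI RMF MAP"),
    "Organize": ("ISO 42001 Clause 5 (Leadership)", "EU AI Act Article 4 (AI literacy)"),
    "Model": ("ISO 42001 Clause 6 (Planning)", "EU AI Act Article 13 (Transparency)"),
    "Produce": ("ISO 42001 Clause 8 (Operation)", "NIST AI RMF MANAGE"),
    "Evaluate": ("ISO 42001 Clause 9 (Performance evaluation)",
                 "EU AI Act Article 43 (Conformity assessment)"),
    "Learn": ("ISO 42001 Clause 10 (Improvement)",
              "EU AI Act Article 72 (Post-market monitoring)"),
}

def evidence_tags(stage: str) -> str:
    """Render the alignment references an evidence pack for a stage should cite."""
    iso, other = STAGE_ALIGNMENT[stage]
    return f"{stage}: {iso}; {other}"
```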
--- ## Body of Knowledge — 263 Articles Across 4 Certification Levels Full Body of Knowledge index: https://www.compel.one/learn Sitemap: https://www.compel.one/sitemap-learn.xml (all 290 URLs including level and module index pages) ### AIT Foundations (Level 1) — 6 modules, 70 articles - [The Ai Transformation Imperative](https://www.compel.one/learn/eatf-level-1/m1-1/the-ai-transformation-imperative) - [Defining Ai Transformation Vs Ai Adoption](https://www.compel.one/learn/eatf-level-1/m1-1/defining-ai-transformation-vs-ai-adoption) - [The Enterprise Ai Maturity Spectrum](https://www.compel.one/learn/eatf-level-1/m1-1/the-enterprise-ai-maturity-spectrum) - [Introduction To The Compel Framework](https://www.compel.one/learn/eatf-level-1/m1-1/introduction-to-the-compel-framework) - [The Four Pillars Of Ai Transformation](https://www.compel.one/learn/eatf-level-1/m1-1/the-four-pillars-of-ai-transformation) - [Ai Transformation Anti Patterns](https://www.compel.one/learn/eatf-level-1/m1-1/ai-transformation-anti-patterns) - [The Business Value Chain Of Ai Transformation](https://www.compel.one/learn/eatf-level-1/m1-1/the-business-value-chain-of-ai-transformation) - [Stakeholder Landscape In Ai Transformation](https://www.compel.one/learn/eatf-level-1/m1-1/stakeholder-landscape-in-ai-transformation) - [Ai Transformation And Organizational Culture](https://www.compel.one/learn/eatf-level-1/m1-1/ai-transformation-and-organizational-culture) - [Ethical Foundations Of Enterprise Ai](https://www.compel.one/learn/eatf-level-1/m1-1/ethical-foundations-of-enterprise-ai) - [Calibrate Establishing The Baseline](https://www.compel.one/learn/eatf-level-1/m1-2/calibrate-establishing-the-baseline) - [Organize Building The Transformation Engine](https://www.compel.one/learn/eatf-level-1/m1-2/organize-building-the-transformation-engine) - [Model Designing The Target State](https://www.compel.one/learn/eatf-level-1/m1-2/model-designing-the-target-state) - [Produce Executing The 
Transformation](https://www.compel.one/learn/eatf-level-1/m1-2/produce-executing-the-transformation)
- [Evaluate: Measuring Transformation Progress](https://www.compel.one/learn/eatf-level-1/m1-2/evaluate-measuring-transformation-progress)
- [Learn: Capturing and Applying Knowledge](https://www.compel.one/learn/eatf-level-1/m1-2/learn-capturing-and-applying-knowledge)
- [Stage-Gate Decision Framework](https://www.compel.one/learn/eatf-level-1/m1-2/stage-gate-decision-framework)
- [The COMPEL Cycle: Iteration and Continuous Improvement](https://www.compel.one/learn/eatf-level-1/m1-2/the-compel-cycle-iteration-and-continuous-improvement)
- [Mapping COMPEL to Your Organization](https://www.compel.one/learn/eatf-level-1/m1-2/mapping-compel-to-your-organization)
- [Integration with Existing Frameworks](https://www.compel.one/learn/eatf-level-1/m1-2/integration-with-existing-frameworks)
- [Evaluating Agentic AI: Goal Achievement and Behavioral Assessment](https://www.compel.one/learn/eatf-level-1/m1-2/evaluating-agentic-ai-goal-achievement-and-behavioral-assessment)
- [Agent Learning, Memory, and Adaptation: Governance Implications](https://www.compel.one/learn/eatf-level-1/m1-2/agent-learning-memory-and-adaptation-governance-implications)
- [The Three Cross-Cutting Layers](https://www.compel.one/learn/eatf-level-1/m1-2/the-three-cross-cutting-layers)
- [Mandatory Artifacts and Evidence Management](https://www.compel.one/learn/eatf-level-1/m1-2/mandatory-artifacts-and-evidence-management)
- [The COMPEL Operating Model: Roles and Decision Rights](https://www.compel.one/learn/eatf-level-1/m1-2/the-compel-operating-model-roles-and-decision-rights)
- [Entry and Exit Criteria: Stage-Gate Readiness](https://www.compel.one/learn/eatf-level-1/m1-2/entry-and-exit-criteria-stage-gate-readiness)
- [Introduction to the 18-Domain Maturity Model](https://www.compel.one/learn/eatf-level-1/m1-3/introduction-to-the-18-domain-maturity-model)
- [People Pillar Domains: Leadership and
Talent](https://www.compel.one/learn/eatf-level-1/m1-3/people-pillar-domains-leadership-and-talent)
- [People Pillar Domains: Literacy and Change](https://www.compel.one/learn/eatf-level-1/m1-3/people-pillar-domains-literacy-and-change)
- [Process Pillar Domains: Use Cases and Data](https://www.compel.one/learn/eatf-level-1/m1-3/process-pillar-domains-use-cases-and-data)
- [Process Pillar Domains: MLOps Delivery and Improvement](https://www.compel.one/learn/eatf-level-1/m1-3/process-pillar-domains-mlops-delivery-and-improvement)
- [Technology Pillar Domains: Data and Platforms](https://www.compel.one/learn/eatf-level-1/m1-3/technology-pillar-domains-data-and-platforms)
- [Technology Pillar Domains: Integration and Security](https://www.compel.one/learn/eatf-level-1/m1-3/technology-pillar-domains-integration-and-security)
- [Governance Pillar Domains: Strategy, Ethics, and Compliance](https://www.compel.one/learn/eatf-level-1/m1-3/governance-pillar-domains-strategy-ethics-and-compliance)
- [Governance Pillar Domains: Risk and Structure](https://www.compel.one/learn/eatf-level-1/m1-3/governance-pillar-domains-risk-and-structure)
- [Cross-Domain Dynamics and Maturity Profiles](https://www.compel.one/learn/eatf-level-1/m1-3/cross-domain-dynamics-and-maturity-profiles)
- [The AI Technology Landscape](https://www.compel.one/learn/eatf-level-1/m1-4/the-ai-technology-landscape)
- [Machine Learning Fundamentals for Decision Makers](https://www.compel.one/learn/eatf-level-1/m1-4/machine-learning-fundamentals-for-decision-makers)
- [Deep Learning and Neural Networks Demystified](https://www.compel.one/learn/eatf-level-1/m1-4/deep-learning-and-neural-networks-demystified)
- [Generative AI and Large Language Models](https://www.compel.one/learn/eatf-level-1/m1-4/generative-ai-and-large-language-models)
- [Data as the Foundation of AI](https://www.compel.one/learn/eatf-level-1/m1-4/data-as-the-foundation-of-ai)
- [AI Infrastructure and Cloud
Architecture](https://www.compel.one/learn/eatf-level-1/m1-4/ai-infrastructure-and-cloud-architecture)
- [MLOps: From Model to Production](https://www.compel.one/learn/eatf-level-1/m1-4/mlops-from-model-to-production)
- [AI Integration Patterns for the Enterprise](https://www.compel.one/learn/eatf-level-1/m1-4/ai-integration-patterns-for-the-enterprise)
- [Emerging Technologies and the AI Horizon](https://www.compel.one/learn/eatf-level-1/m1-4/emerging-technologies-and-the-ai-horizon)
- [Technology Decision Framework for Transformation Leaders](https://www.compel.one/learn/eatf-level-1/m1-4/technology-decision-framework-for-transformation-leaders)
- [Agentic AI Architecture Patterns and the Autonomy Spectrum](https://www.compel.one/learn/eatf-level-1/m1-4/agentic-ai-architecture-patterns-and-the-autonomy-spectrum)
- [Tool Use and Function Calling in Autonomous AI Systems](https://www.compel.one/learn/eatf-level-1/m1-4/tool-use-and-function-calling-in-autonomous-ai-systems)
- [The AI Governance Imperative](https://www.compel.one/learn/eatf-level-1/m1-5/the-ai-governance-imperative)
- [The Global AI Regulatory Landscape](https://www.compel.one/learn/eatf-level-1/m1-5/the-global-ai-regulatory-landscape)
- [Building an AI Governance Framework](https://www.compel.one/learn/eatf-level-1/m1-5/building-an-ai-governance-framework)
- [AI Risk Identification and Classification](https://www.compel.one/learn/eatf-level-1/m1-5/ai-risk-identification-and-classification)
- [AI Risk Assessment and Mitigation](https://www.compel.one/learn/eatf-level-1/m1-5/ai-risk-assessment-and-mitigation)
- [AI Ethics Operationalized](https://www.compel.one/learn/eatf-level-1/m1-5/ai-ethics-operationalized)
- [Data Governance for AI](https://www.compel.one/learn/eatf-level-1/m1-5/data-governance-for-ai)
- [Model Governance and Lifecycle Management](https://www.compel.one/learn/eatf-level-1/m1-5/model-governance-and-lifecycle-management)
- [Audit Preparedness and Compliance
Operations](https://www.compel.one/learn/eatf-level-1/m1-5/audit-preparedness-and-compliance-operations)
- [Governance Maturity and the Path Forward](https://www.compel.one/learn/eatf-level-1/m1-5/governance-maturity-and-the-path-forward)
- [Grounding, Retrieval, and Factual Integrity for AI Agents](https://www.compel.one/learn/eatf-level-1/m1-5/grounding-retrieval-and-factual-integrity-for-ai-agents)
- [Safety Boundaries and Containment for Autonomous AI](https://www.compel.one/learn/eatf-level-1/m1-5/safety-boundaries-and-containment-for-autonomous-ai)
- [The Human Dimension of AI Transformation](https://www.compel.one/learn/eatf-level-1/m1-6/the-human-dimension-of-ai-transformation)
- [AI Literacy Strategy and Program Design](https://www.compel.one/learn/eatf-level-1/m1-6/ai-literacy-strategy-and-program-design)
- [Building the AI Talent Pipeline](https://www.compel.one/learn/eatf-level-1/m1-6/building-the-ai-talent-pipeline)
- [The AI Center of Excellence](https://www.compel.one/learn/eatf-level-1/m1-6/the-ai-center-of-excellence)
- [Change Management for AI Transformation](https://www.compel.one/learn/eatf-level-1/m1-6/change-management-for-ai-transformation)
- [Psychological Safety and Innovation Culture](https://www.compel.one/learn/eatf-level-1/m1-6/psychological-safety-and-innovation-culture)
- [Stakeholder Engagement and Communication](https://www.compel.one/learn/eatf-level-1/m1-6/stakeholder-engagement-and-communication)
- [Workforce Redesign and Human-AI Collaboration](https://www.compel.one/learn/eatf-level-1/m1-6/workforce-redesign-and-human-ai-collaboration)
- [Measuring Organizational Readiness](https://www.compel.one/learn/eatf-level-1/m1-6/measuring-organizational-readiness)
- [Sustaining the Human Foundation](https://www.compel.one/learn/eatf-level-1/m1-6/sustaining-the-human-foundation)

### AIT Practitioner (Level 2) — 6 modules, 66 articles

- [The Anatomy of a COMPEL
Engagement](https://www.compel.one/learn/eatp-level-2/m2-1/the-anatomy-of-a-compel-engagement)
- [Client Discovery and Needs Assessment](https://www.compel.one/learn/eatp-level-2/m2-1/client-discovery-and-needs-assessment)
- [Organizational Readiness Pre-Assessment](https://www.compel.one/learn/eatp-level-2/m2-1/organizational-readiness-pre-assessment)
- [Engagement Scoping and Architecture](https://www.compel.one/learn/eatp-level-2/m2-1/engagement-scoping-and-architecture)
- [The Statement of Work: From Proposal to Contract](https://www.compel.one/learn/eatp-level-2/m2-1/the-statement-of-work-from-proposal-to-contract)
- [Stakeholder Alignment and Engagement Governance](https://www.compel.one/learn/eatp-level-2/m2-1/stakeholder-alignment-and-engagement-governance)
- [Team Design and Resource Planning](https://www.compel.one/learn/eatp-level-2/m2-1/team-design-and-resource-planning)
- [The Engagement Kickoff: Setting the Transformation in Motion](https://www.compel.one/learn/eatp-level-2/m2-1/the-engagement-kickoff-setting-the-transformation-in-motion)
- [Risk Management in COMPEL Engagements](https://www.compel.one/learn/eatp-level-2/m2-1/risk-management-in-compel-engagements)
- [The EATP as Engagement Leader: Professional Practice and Ethics](https://www.compel.one/learn/eatp-level-2/m2-1/the-eatp-as-engagement-leader-professional-practice-and-ethics)
- [Beyond the Baseline: Advanced Assessment Philosophy](https://www.compel.one/learn/eatp-level-2/m2-2/beyond-the-baseline-advanced-assessment-philosophy)
- [Multi-Rater Assessment Methodology](https://www.compel.one/learn/eatp-level-2/m2-2/multi-rater-assessment-methodology)
- [Deep-Dive Domain Assessment Techniques](https://www.compel.one/learn/eatp-level-2/m2-2/deep-dive-domain-assessment-techniques)
- [Cross-Domain Diagnostic Patterns](https://www.compel.one/learn/eatp-level-2/m2-2/cross-domain-diagnostic-patterns)
- [Organizational Culture Assessment for AI
Readiness](https://www.compel.one/learn/eatp-level-2/m2-2/organizational-culture-assessment-for-ai-readiness)
- [Data Quality and Technology Assessment Deep Dive](https://www.compel.one/learn/eatp-level-2/m2-2/data-quality-and-technology-assessment-deep-dive)
- [Stakeholder and Political Landscape Assessment](https://www.compel.one/learn/eatp-level-2/m2-2/stakeholder-and-political-landscape-assessment)
- [Assessment Data Analysis and Insight Generation](https://www.compel.one/learn/eatp-level-2/m2-2/assessment-data-analysis-and-insight-generation)
- [The Assessment Report: Communicating Findings with Impact](https://www.compel.one/learn/eatp-level-2/m2-2/the-assessment-report-communicating-findings-with-impact)
- [Assessment as a Continuous Practice](https://www.compel.one/learn/eatp-level-2/m2-2/assessment-as-a-continuous-practice)
- [Agentic AI Maturity Assessment: Extending the 18-Domain Model](https://www.compel.one/learn/eatp-level-2/m2-2/agentic-ai-maturity-assessment-extending-the-18-domain-model)
- [From Assessment to Action: The Roadmap Imperative](https://www.compel.one/learn/eatp-level-2/m2-3/from-assessment-to-action-the-roadmap-imperative)
- [Gap Analysis and Initiative Identification](https://www.compel.one/learn/eatp-level-2/m2-3/gap-analysis-and-initiative-identification)
- [Initiative Sequencing and Dependencies](https://www.compel.one/learn/eatp-level-2/m2-3/initiative-sequencing-and-dependencies)
- [The Four-Pillar Roadmap Architecture](https://www.compel.one/learn/eatp-level-2/m2-3/the-four-pillar-roadmap-architecture)
- [Resource Planning and Investment Architecture](https://www.compel.one/learn/eatp-level-2/m2-3/resource-planning-and-investment-architecture)
- [Value Milestones and Quick Wins](https://www.compel.one/learn/eatp-level-2/m2-3/value-milestones-and-quick-wins)
- [Risk-Adjusted Roadmap Design](https://www.compel.one/learn/eatp-level-2/m2-3/risk-adjusted-roadmap-design)
- [Stakeholder-Specific Roadmap
Communication](https://www.compel.one/learn/eatp-level-2/m2-3/stakeholder-specific-roadmap-communication)
- [Roadmap Governance and Adaptive Management](https://www.compel.one/learn/eatp-level-2/m2-3/roadmap-governance-and-adaptive-management)
- [The Roadmap as a Living Document](https://www.compel.one/learn/eatp-level-2/m2-3/the-roadmap-as-a-living-document)
- [From Roadmap to Reality: The Execution Challenge](https://www.compel.one/learn/eatp-level-2/m2-4/from-roadmap-to-reality-the-execution-challenge)
- [Multi-Workstream Coordination](https://www.compel.one/learn/eatp-level-2/m2-4/multi-workstream-coordination)
- [AI Use Case Delivery Management](https://www.compel.one/learn/eatp-level-2/m2-4/ai-use-case-delivery-management)
- [Change Execution: Operationalizing the People Pillar](https://www.compel.one/learn/eatp-level-2/m2-4/change-execution-operationalizing-the-people-pillar)
- [Governance Execution: Building the Framework in Practice](https://www.compel.one/learn/eatp-level-2/m2-4/governance-execution-building-the-framework-in-practice)
- [Technical Execution: Platform, Data, and Model Delivery](https://www.compel.one/learn/eatp-level-2/m2-4/technical-execution-platform-data-and-model-delivery)
- [Stakeholder Management During Execution](https://www.compel.one/learn/eatp-level-2/m2-4/stakeholder-management-during-execution)
- [Quality Assurance and Delivery Standards](https://www.compel.one/learn/eatp-level-2/m2-4/quality-assurance-and-delivery-standards)
- [Troubleshooting and Recovery When Execution Stalls](https://www.compel.one/learn/eatp-level-2/m2-4/troubleshooting-and-recovery-when-execution-stalls)
- [The Evaluate Transition: From Execution to Assessment](https://www.compel.one/learn/eatp-level-2/m2-4/the-evaluate-transition-from-execution-to-assessment)
- [Human-Agent Collaboration Patterns and Oversight Design](https://www.compel.one/learn/eatp-level-2/m2-4/human-agent-collaboration-patterns-and-oversight-design)
- [Operational Resilience for Agentic AI:
Failure Modes and Recovery](https://www.compel.one/learn/eatp-level-2/m2-4/operational-resilience-for-agentic-ai-failure-modes-and-recovery)
- [The Measurement Imperative in AI Transformation](https://www.compel.one/learn/eatp-level-2/m2-5/the-measurement-imperative-in-ai-transformation)
- [Designing the Measurement Framework](https://www.compel.one/learn/eatp-level-2/m2-5/designing-the-measurement-framework)
- [Maturity Progression Measurement](https://www.compel.one/learn/eatp-level-2/m2-5/maturity-progression-measurement)
- [Business Value and ROI Quantification](https://www.compel.one/learn/eatp-level-2/m2-5/business-value-and-roi-quantification)
- [People and Change Metrics](https://www.compel.one/learn/eatp-level-2/m2-5/people-and-change-metrics)
- [Technology and Process Performance Metrics](https://www.compel.one/learn/eatp-level-2/m2-5/technology-and-process-performance-metrics)
- [Governance and Risk Metrics](https://www.compel.one/learn/eatp-level-2/m2-5/governance-and-risk-metrics)
- [The Evaluate Stage in Practice](https://www.compel.one/learn/eatp-level-2/m2-5/the-evaluate-stage-in-practice)
- [Value Realization Reporting and Communication](https://www.compel.one/learn/eatp-level-2/m2-5/value-realization-reporting-and-communication)
- [From Measurement to Decision](https://www.compel.one/learn/eatp-level-2/m2-5/from-measurement-to-decision)
- [Designing Measurement Frameworks for Agentic AI Systems](https://www.compel.one/learn/eatp-level-2/m2-5/designing-measurement-frameworks-for-agentic-ai-systems)
- [Audit Trails and Decision Provenance in Multi-Agent Systems](https://www.compel.one/learn/eatp-level-2/m2-5/audit-trails-and-decision-provenance-in-multi-agent-systems)
- [Agentic AI Cost Modeling: Token Economics, Compute Budgets, and ROI](https://www.compel.one/learn/eatp-level-2/m2-5/agentic-ai-cost-modeling-token-economics-compute-budgets-and-roi)
- [Industry Context and the Universal COMPEL
Framework](https://www.compel.one/learn/eatp-level-2/m2-6/industry-context-and-the-universal-compel-framework)
- [Financial Services: AI Transformation in a Regulated Industry](https://www.compel.one/learn/eatp-level-2/m2-6/financial-services-ai-transformation-in-a-regulated-industry)
- [Healthcare and Life Sciences](https://www.compel.one/learn/eatp-level-2/m2-6/healthcare-and-life-sciences)
- [Manufacturing and Industrial](https://www.compel.one/learn/eatp-level-2/m2-6/manufacturing-and-industrial)
- [Public Sector and Government](https://www.compel.one/learn/eatp-level-2/m2-6/public-sector-and-government)
- [Retail and Consumer](https://www.compel.one/learn/eatp-level-2/m2-6/retail-and-consumer)
- [Energy and Utilities](https://www.compel.one/learn/eatp-level-2/m2-6/energy-and-utilities)
- [Technology and Software Companies](https://www.compel.one/learn/eatp-level-2/m2-6/technology-and-software-companies)
- [Cross-Industry Pattern Analysis](https://www.compel.one/learn/eatp-level-2/m2-6/cross-industry-pattern-analysis)
- [Case Study Methodology and Analytical Practice](https://www.compel.one/learn/eatp-level-2/m2-6/case-study-methodology-and-analytical-practice)

### AIT Governance Professional (Level 3) — 6 modules, 64 articles

- [AI as Enterprise Strategic Capability](https://www.compel.one/learn/eate-level-3/m3-1/ai-as-enterprise-strategic-capability)
- [Connecting AI Strategy to Business Strategy](https://www.compel.one/learn/eate-level-3/m3-1/connecting-ai-strategy-to-business-strategy)
- [Multi-Year Transformation Program Design](https://www.compel.one/learn/eate-level-3/m3-1/multi-year-transformation-program-design)
- [C-Suite Advisory and Executive Engagement](https://www.compel.one/learn/eate-level-3/m3-1/c-suite-advisory-and-executive-engagement)
- [Transformation Portfolio Management](https://www.compel.one/learn/eate-level-3/m3-1/transformation-portfolio-management)
- [AI Operating Model
Design](https://www.compel.one/learn/eate-level-3/m3-1/ai-operating-model-design)
- [Strategic Investment and Business Case Architecture](https://www.compel.one/learn/eate-level-3/m3-1/strategic-investment-and-business-case-architecture)
- [Ecosystem and Partnership Strategy](https://www.compel.one/learn/eate-level-3/m3-1/ecosystem-and-partnership-strategy)
- [Strategic Risk and Resilience](https://www.compel.one/learn/eate-level-3/m3-1/strategic-risk-and-resilience)
- [The EATE as Strategic Transformation Architect](https://www.compel.one/learn/eate-level-3/m3-1/the-eate-as-strategic-transformation-architect)
- [Enterprise-Scale Organizational Transformation](https://www.compel.one/learn/eate-level-3/m3-2/enterprise-scale-organizational-transformation)
- [Cultural Transformation for the AI-Native Organization](https://www.compel.one/learn/eate-level-3/m3-2/cultural-transformation-for-the-ai-native-organization)
- [Executive Coaching for AI Transformation](https://www.compel.one/learn/eate-level-3/m3-2/executive-coaching-for-ai-transformation)
- [Organizational Design for AI at Scale](https://www.compel.one/learn/eate-level-3/m3-2/organizational-design-for-ai-at-scale)
- [Enterprise Change Architecture](https://www.compel.one/learn/eate-level-3/m3-2/enterprise-change-architecture)
- [Talent Strategy at Enterprise Scale](https://www.compel.one/learn/eate-level-3/m3-2/talent-strategy-at-enterprise-scale)
- [Managing Transformation Through Leadership Transitions](https://www.compel.one/learn/eate-level-3/m3-2/managing-transformation-through-leadership-transitions)
- [Multi-Stakeholder Dynamics and Political Navigation](https://www.compel.one/learn/eate-level-3/m3-2/multi-stakeholder-dynamics-and-political-navigation)
- [Transformation Crisis Management](https://www.compel.one/learn/eate-level-3/m3-2/transformation-crisis-management)
- [Building Self-Sustaining Transformation
Capability](https://www.compel.one/learn/eate-level-3/m3-2/building-self-sustaining-transformation-capability)
- [Technology Architecture as Strategic Capability](https://www.compel.one/learn/eate-level-3/m3-3/technology-architecture-as-strategic-capability)
- [Enterprise AI Platform Strategy](https://www.compel.one/learn/eate-level-3/m3-3/enterprise-ai-platform-strategy)
- [Data Architecture for Enterprise AI](https://www.compel.one/learn/eate-level-3/m3-3/data-architecture-for-enterprise-ai)
- [Multi-Model Orchestration and AI System Design](https://www.compel.one/learn/eate-level-3/m3-3/multi-model-orchestration-and-ai-system-design)
- [AI Security Architecture](https://www.compel.one/learn/eate-level-3/m3-3/ai-security-architecture)
- [Scalability and Performance Architecture](https://www.compel.one/learn/eate-level-3/m3-3/scalability-and-performance-architecture)
- [AI Infrastructure Economics and FinOps](https://www.compel.one/learn/eate-level-3/m3-3/ai-infrastructure-economics-and-finops)
- [Technology Governance for AI-Native Organizations](https://www.compel.one/learn/eate-level-3/m3-3/technology-governance-for-ai-native-organizations)
- [Emerging Technology Evaluation and Integration](https://www.compel.one/learn/eate-level-3/m3-3/emerging-technology-evaluation-and-integration)
- [The Technology Architecture Roadmap](https://www.compel.one/learn/eate-level-3/m3-3/the-technology-architecture-roadmap)
- [Enterprise Agentic AI Platform Strategy and Multi-Agent Orchestration](https://www.compel.one/learn/eate-level-3/m3-3/enterprise-agentic-ai-platform-strategy-and-multi-agent-orchestration)
- [Governance as Strategic Advantage](https://www.compel.one/learn/eate-level-3/m3-4/governance-as-strategic-advantage)
- [Multinational Governance Architecture](https://www.compel.one/learn/eate-level-3/m3-4/multinational-governance-architecture)
- [Proactive Regulatory Engagement](https://www.compel.one/learn/eate-level-3/m3-4/proactive-regulatory-engagement)
-
[Advanced Ethics Architecture](https://www.compel.one/learn/eate-level-3/m3-4/advanced-ethics-architecture)
- [AI Risk Governance at Enterprise Scale](https://www.compel.one/learn/eate-level-3/m3-4/ai-risk-governance-at-enterprise-scale)
- [Third-Party and Supply Chain AI Governance](https://www.compel.one/learn/eate-level-3/m3-4/third-party-and-supply-chain-ai-governance)
- [Intellectual Property Strategy for AI](https://www.compel.one/learn/eate-level-3/m3-4/intellectual-property-strategy-for-ai)
- [Audit and Assurance for Enterprise AI](https://www.compel.one/learn/eate-level-3/m3-4/audit-and-assurance-for-enterprise-ai)
- [Governance Evolution and Maturity](https://www.compel.one/learn/eate-level-3/m3-4/governance-evolution-and-maturity)
- [The EATE as Governance Architect](https://www.compel.one/learn/eate-level-3/m3-4/the-eate-as-governance-architect)
- [Agentic AI Governance Architecture: Delegation, Authority, and Accountability](https://www.compel.one/learn/eate-level-3/m3-4/agentic-ai-governance-architecture-delegation-authority-and-accountability)
- [Agentic AI Risk Taxonomy and Enterprise Risk Framework Extension](https://www.compel.one/learn/eate-level-3/m3-4/agentic-ai-risk-taxonomy-and-enterprise-risk-framework-extension)
- [The EATE as Educator and Methodology Steward](https://www.compel.one/learn/eate-level-3/m3-5/the-eate-as-educator-and-methodology-steward)
- [Adult Learning Theory for Transformation Practitioners](https://www.compel.one/learn/eate-level-3/m3-5/adult-learning-theory-for-transformation-practitioners)
- [COMPEL Curriculum Design and Delivery](https://www.compel.one/learn/eate-level-3/m3-5/compel-curriculum-design-and-delivery)
- [Facilitation Mastery](https://www.compel.one/learn/eate-level-3/m3-5/facilitation-mastery)
- [Coaching and Mentoring EATP Practitioners](https://www.compel.one/learn/eate-level-3/m3-5/coaching-and-mentoring-eatp-practitioners)
- [Knowledge Management and Organizational
Learning](https://www.compel.one/learn/eate-level-3/m3-5/knowledge-management-and-organizational-learning)
- [Methodology Innovation and Evolution](https://www.compel.one/learn/eate-level-3/m3-5/methodology-innovation-and-evolution)
- [Research and Thought Leadership](https://www.compel.one/learn/eate-level-3/m3-5/research-and-thought-leadership)
- [Community Building and Professional Networks](https://www.compel.one/learn/eate-level-3/m3-5/community-building-and-professional-networks)
- [The COMPEL Body of Knowledge: Stewardship and Future](https://www.compel.one/learn/eate-level-3/m3-5/the-compel-body-of-knowledge-stewardship-and-future)
- [Adaptive Learning Systems: Governing AI That Changes Its Own Behavior](https://www.compel.one/learn/eate-level-3/m3-5/adaptive-learning-systems-governing-ai-that-changes-its-own-behavior)
- [The Capstone Challenge: Integrating the Full COMPEL Body of Knowledge](https://www.compel.one/learn/eate-level-3/m3-6/the-capstone-challenge-integrating-the-full-compel-body-of-knowledge)
- [Selecting and Scoping the Capstone Organization](https://www.compel.one/learn/eate-level-3/m3-6/selecting-and-scoping-the-capstone-organization)
- [The Enterprise Transformation Architecture Framework](https://www.compel.one/learn/eate-level-3/m3-6/the-enterprise-transformation-architecture-framework)
- [Conducting the Enterprise Assessment](https://www.compel.one/learn/eate-level-3/m3-6/conducting-the-enterprise-assessment)
- [Designing the Strategic Transformation Roadmap](https://www.compel.one/learn/eate-level-3/m3-6/designing-the-strategic-transformation-roadmap)
- [The Organizational Transformation Design](https://www.compel.one/learn/eate-level-3/m3-6/the-organizational-transformation-design)
- [The Technology and Governance Architecture](https://www.compel.one/learn/eate-level-3/m3-6/the-technology-and-governance-architecture)
- [The Measurement and Value Realization
Framework](https://www.compel.one/learn/eate-level-3/m3-6/the-measurement-and-value-realization-framework)
- [Preparing and Delivering the Oral Defense](https://www.compel.one/learn/eate-level-3/m3-6/preparing-and-delivering-the-oral-defense)
- [The EATE Professional: Completing the Journey](https://www.compel.one/learn/eate-level-3/m3-6/the-eate-professional-completing-the-journey)

### AIT Leader (Level 4) — 6 modules, 62 articles

- [From Program to Portfolio: The PMO Mandate for AI Transformation](https://www.compel.one/learn/eatl-level-4/m4-1/from-program-to-portfolio-the-pmo-mandate-for-ai-transformation)
- [Strategic Portfolio Design and Initiative Architecture](https://www.compel.one/learn/eatl-level-4/m4-1/strategic-portfolio-design-and-initiative-architecture)
- [Portfolio Investment Optimization and Capital Allocation](https://www.compel.one/learn/eatl-level-4/m4-1/portfolio-investment-optimization-and-capital-allocation)
- [Cross-Program Dependency Orchestration](https://www.compel.one/learn/eatl-level-4/m4-1/cross-program-dependency-orchestration)
- [Portfolio Risk Aggregation and Enterprise Risk Exposure](https://www.compel.one/learn/eatl-level-4/m4-1/portfolio-risk-aggregation-and-enterprise-risk-exposure)
- [Portfolio Performance Dashboards and Executive Reporting](https://www.compel.one/learn/eatl-level-4/m4-1/portfolio-performance-dashboards-and-executive-reporting)
- [Portfolio Rebalancing and Strategic Pivot Decision Models](https://www.compel.one/learn/eatl-level-4/m4-1/portfolio-rebalancing-and-strategic-pivot-decision-models)
- [Multi-Business-Unit Portfolio Coordination](https://www.compel.one/learn/eatl-level-4/m4-1/multi-business-unit-portfolio-coordination)
- [Portfolio Value Realization and Benefits Tracking](https://www.compel.one/learn/eatl-level-4/m4-1/portfolio-value-realization-and-benefits-tracking)
- [The EATP Lead as Portfolio Steward: Roles, Authority, and
Accountability](https://www.compel.one/learn/eatl-level-4/m4-1/the-eatp-lead-as-portfolio-steward-roles-authority-and-accountability)
- [The Framework Interoperability Imperative](https://www.compel.one/learn/eatl-level-4/m4-2/the-framework-interoperability-imperative)
- [COMPEL and SAFe: Scaling AI Transformation in Agile Enterprises](https://www.compel.one/learn/eatl-level-4/m4-2/compel-and-safe-scaling-ai-transformation-in-agile-enterprises)
- [COMPEL and PMI PMBOK: Project Portfolio Alignment](https://www.compel.one/learn/eatl-level-4/m4-2/compel-and-pmi-pmbok-project-portfolio-alignment)
- [COMPEL and TOGAF: Enterprise Architecture Integration](https://www.compel.one/learn/eatl-level-4/m4-2/compel-and-togaf-enterprise-architecture-integration)
- [COMPEL and ITIL: AI-Enabled Service Management](https://www.compel.one/learn/eatl-level-4/m4-2/compel-and-itil-ai-enabled-service-management)
- [COMPEL and Lean Six Sigma: Continuous Improvement Synergy](https://www.compel.one/learn/eatl-level-4/m4-2/compel-and-lean-six-sigma-continuous-improvement-synergy)
- [COMPEL and DevOps/MLOps: Engineering Velocity Alignment](https://www.compel.one/learn/eatl-level-4/m4-2/compel-and-devops-mlops-engineering-velocity-alignment)
- [COMPEL and COBIT: IT Governance Convergence](https://www.compel.one/learn/eatl-level-4/m4-2/compel-and-cobit-it-governance-convergence)
- [Multi-Framework Operating Model Design](https://www.compel.one/learn/eatl-level-4/m4-2/multi-framework-operating-model-design)
- [Framework Harmonization Playbook and Organizational Rollout](https://www.compel.one/learn/eatl-level-4/m4-2/framework-harmonization-playbook-and-organizational-rollout)
- [Cross-Organizational Governance Architecture Design](https://www.compel.one/learn/eatl-level-4/m4-3/cross-organizational-governance-architecture-design)
- [ISO 42001 Alignment and AI Management System Certification](https://www.compel.one/learn/eatl-level-4/m4-3/iso-42001-alignment-and-ai-management-system-certification)
-
[NIST AI RMF Implementation at Enterprise Scale](https://www.compel.one/learn/eatl-level-4/m4-3/nist-ai-rmf-implementation-at-enterprise-scale)
- [Multi-Jurisdictional Regulatory Harmonization](https://www.compel.one/learn/eatl-level-4/m4-3/multi-jurisdictional-regulatory-harmonization)
- [Joint Venture and Consortium AI Governance Models](https://www.compel.one/learn/eatl-level-4/m4-3/joint-venture-and-consortium-ai-governance-models)
- [Supply Chain and Ecosystem AI Policy Orchestration](https://www.compel.one/learn/eatl-level-4/m4-3/supply-chain-and-ecosystem-ai-policy-orchestration)
- [Public-Private Partnership Governance for AI Initiatives](https://www.compel.one/learn/eatl-level-4/m4-3/public-private-partnership-governance-for-ai-initiatives)
- [Enterprise Policy Lifecycle Management and Version Control](https://www.compel.one/learn/eatl-level-4/m4-3/enterprise-policy-lifecycle-management-and-version-control)
- [Cross-Border Data Governance and Sovereignty Architecture](https://www.compel.one/learn/eatl-level-4/m4-3/cross-border-data-governance-and-sovereignty-architecture)
- [The EATP Lead as Governance Harmonization Authority](https://www.compel.one/learn/eatl-level-4/m4-3/the-eatp-lead-as-governance-harmonization-authority)
- [Cross-Organizational Agentic AI Governance and Policy Frameworks](https://www.compel.one/learn/eatl-level-4/m4-3/cross-organizational-agentic-ai-governance-and-policy-frameworks)
- [Anatomy of the AI-Native Operating Model](https://www.compel.one/learn/eatl-level-4/m4-4/anatomy-of-the-ai-native-operating-model)
- [AI Capability Center Design: CoE Evolution and Federated Models](https://www.compel.one/learn/eatl-level-4/m4-4/ai-capability-center-design-coe-evolution-and-federated-models)
- [Enterprise AI Shared Services and Platform Teams](https://www.compel.one/learn/eatl-level-4/m4-4/enterprise-ai-shared-services-and-platform-teams)
- [Funding Models and Chargeback Architecture for
AI](https://www.compel.one/learn/eatl-level-4/m4-4/funding-models-and-chargeback-architecture-for-ai)
- [Enterprise Talent Ecosystem and AI Workforce Strategy](https://www.compel.one/learn/eatl-level-4/m4-4/enterprise-talent-ecosystem-and-ai-workforce-strategy)
- [AI Demand Management and Use Case Intake at Scale](https://www.compel.one/learn/eatl-level-4/m4-4/ai-demand-management-and-use-case-intake-at-scale)
- [Operating Model Transition: From Current to Target State](https://www.compel.one/learn/eatl-level-4/m4-4/operating-model-transition-from-current-to-target-state)
- [Vendor and Partner Ecosystem Operating Integration](https://www.compel.one/learn/eatl-level-4/m4-4/vendor-and-partner-ecosystem-operating-integration)
- [Operating Model Maturity Assessment and Evolution](https://www.compel.one/learn/eatl-level-4/m4-4/operating-model-maturity-assessment-and-evolution)
- [Institutionalizing the AI Operating Model: Sustainability and Self-Renewal](https://www.compel.one/learn/eatl-level-4/m4-4/institutionalizing-the-ai-operating-model-sustainability-and-self-renewal)
- [The EATP Lead as Industry Standards Architect](https://www.compel.one/learn/eatl-level-4/m4-5/the-eatp-lead-as-industry-standards-architect)
- [Standards Body Engagement: ISO, IEEE, NIST, and Beyond](https://www.compel.one/learn/eatl-level-4/m4-5/standards-body-engagement-iso-ieee-nist-and-beyond)
- [Original Research Design for AI Transformation Methodology](https://www.compel.one/learn/eatl-level-4/m4-5/original-research-design-for-ai-transformation-methodology)
- [Publishing and Peer Contribution in AI Governance](https://www.compel.one/learn/eatl-level-4/m4-5/publishing-and-peer-contribution-in-ai-governance)
- [Methodology Benchmarking and Comparative Analysis](https://www.compel.one/learn/eatl-level-4/m4-5/methodology-benchmarking-and-comparative-analysis)
- [COMPEL Methodology Extension and Domain
Specialization](https://www.compel.one/learn/eatl-level-4/m4-5/compel-methodology-extension-and-domain-specialization) - [Building and Leading Professional Communities of Practice](https://www.compel.one/learn/eatl-level-4/m4-5/building-and-leading-professional-communities-of-practice) - [Keynote and Executive Communication Mastery](https://www.compel.one/learn/eatl-level-4/m4-5/keynote-and-executive-communication-mastery) - [Advisory Board and Governance Committee Leadership](https://www.compel.one/learn/eatl-level-4/m4-5/advisory-board-and-governance-committee-leadership) - [Shaping the Future of AI Transformation: The EATP Lead Legacy](https://www.compel.one/learn/eatl-level-4/m4-5/shaping-the-future-of-ai-transformation-the-eatp-lead-legacy) - [Industry Standards for Agentic AI: ISO, NIST, and Emerging Frameworks](https://www.compel.one/learn/eatl-level-4/m4-5/industry-standards-for-agentic-ai-iso-nist-and-emerging-frameworks) - [The EATP Lead Capstone Portfolio Defense Overview](https://www.compel.one/learn/eatl-level-4/m4-6/the-eatp-lead-capstone-portfolio-defense-overview) - [Selecting the Multi-Organization Portfolio Scope](https://www.compel.one/learn/eatl-level-4/m4-6/selecting-the-multi-organization-portfolio-scope) - [Portfolio Strategy Document Architecture and Requirements](https://www.compel.one/learn/eatl-level-4/m4-6/portfolio-strategy-document-architecture-and-requirements) - [Demonstrating Framework Interoperability in the Portfolio](https://www.compel.one/learn/eatl-level-4/m4-6/demonstrating-framework-interoperability-in-the-portfolio) - [The Governance Harmonization Artifact](https://www.compel.one/learn/eatl-level-4/m4-6/the-governance-harmonization-artifact) - [The Operating Model Blueprint Artifact](https://www.compel.one/learn/eatl-level-4/m4-6/the-operating-model-blueprint-artifact) - [Portfolio Value Narrative and Executive Impact Case](https://www.compel.one/learn/eatl-level-4/m4-6/portfolio-value-narrative-and-executive-impact-case) -
[Preparing the Live Panel Defense](https://www.compel.one/learn/eatl-level-4/m4-6/preparing-the-live-panel-defense) - [Scoring Rubric and Evaluation Criteria](https://www.compel.one/learn/eatl-level-4/m4-6/scoring-rubric-and-evaluation-criteria) - [The EATP Lead: Professional Mastery, Responsibility, and the Path Ahead](https://www.compel.one/learn/eatl-level-4/m4-6/the-eatp-lead-professional-mastery-responsibility-and-the-path-ahead) --- ## Glossary — 736 Terms Each glossary term has a dedicated page with a practitioner definition, COMPEL context, related terms, and canonical URL. Full glossary index: https://www.compel.one/glossary The existing 15 detailed glossary entries (AI Transformation, Enterprise AI Transformation, AI Governance, AI Operating Model, AI Readiness Assessment, ISO 42001, NIST AI RMF, EU AI Act, AI Maturity, Shadow AI, MLOps, Model Drift, Human Oversight, Responsible AI, AI Controls) remain below with full descriptions. The complete set of 736 terms is available at https://www.compel.one/glossary and in the glossary sitemap at https://www.compel.one/sitemap-glossary.xml. - **AI Transformation** (https://www.compel.one/glossary/ai-transformation): The systematic process of embedding AI into enterprise operations, culture, and strategy — distinct from one-off AI projects. COMPEL treats AI transformation as an organizational capability challenge addressed through the 6-stage operating cycle. - **Enterprise AI Transformation** (https://www.compel.one/glossary/enterprise-ai-transformation): Organization-wide AI adoption requiring governance structures, workforce development, and operating model design. Emphasizes that enterprise scale demands formal governance — not just technology deployment. - **AI Governance** (https://www.compel.one/glossary/ai-governance): The policies, oversight structures, and accountability mechanisms that ensure AI systems are developed and used responsibly.
In COMPEL, governance is one of four pillars (along with People, Process, and Technology) and is addressed across all six stages. - **AI Operating Model** (https://www.compel.one/glossary/ai-operating-model): The organizational design that defines how AI capabilities are developed, deployed, governed, and improved. COMPEL itself is an AI operating model that organizations adopt and execute. - **AI Readiness Assessment** (https://www.compel.one/glossary/ai-readiness-assessment): A structured evaluation of an organization's capability to adopt and scale AI across people, process, technology, and governance dimensions. The Calibrate stage of COMPEL is a formalized AI readiness assessment. - **ISO 42001** (https://www.compel.one/glossary/iso-42001): The international standard (ISO/IEC 42001:2023) establishing requirements for AI management systems. Defines what an AI management system must contain; COMPEL provides the operating methodology for how to implement it. - **NIST AI RMF** (https://www.compel.one/glossary/nist-ai-rmf): The US National Institute of Standards and Technology AI Risk Management Framework. A voluntary framework with four core functions (GOVERN, MAP, MEASURE, MANAGE) that COMPEL operationalizes through its 18 domains. - **EU AI Act** (https://www.compel.one/glossary/eu-ai-act): The European Union's binding regulation (2024/1689) for AI systems. Introduces risk-based classification (minimal, limited, high, unacceptable) and mandates conformity assessment for high-risk systems. COMPEL's Evaluate stage produces the required documentation. - **AI Maturity** (https://www.compel.one/glossary/ai-maturity): A measure of how advanced an organization's AI capabilities are. COMPEL uses a 5-level maturity model (Foundational to Transformational) applied independently across all 18 domains to produce a maturity heatmap. 
- **Shadow AI** (https://www.compel.one/glossary/shadow-ai): Unauthorized or unregistered AI tools used within an organization outside formal governance. The Calibrate stage includes shadow AI discovery as a core assessment activity. - **MLOps** (https://www.compel.one/glossary/mlops): The discipline of operationalizing machine learning models with CI/CD pipelines, monitoring, drift detection, and governance controls. Covered under Domain 7 in the COMPEL Process pillar. - **Model Drift** (https://www.compel.one/glossary/model-drift): The degradation of AI model performance over time as data distributions or real-world conditions change. Detected through continuous monitoring in the Evaluate and Learn stages. - **Human Oversight** (https://www.compel.one/glossary/human-oversight): The governance requirement for human review, intervention, and accountability over AI system decisions. A core principle in the EU AI Act (Article 14) and operationalized in COMPEL's Model and Evaluate stages. - **Responsible AI** (https://www.compel.one/glossary/responsible-ai): The practice of developing and deploying AI systems that are ethical, fair, transparent, and accountable. COMPEL's Ethics & Fairness domain (D15) operationalizes responsible AI principles into testable controls. - **AI Controls** (https://www.compel.one/glossary/ai-controls): Technical and procedural mechanisms that enforce governance policies on AI systems — including access controls, approval gates, monitoring thresholds, and kill switches. Designed in Model, implemented in Produce, verified in Evaluate. ### Complete Glossary — All 736 Terms (term: definition) Canonical index: https://www.compel.one/glossary Sitemap: https://www.compel.one/sitemap-glossary.xml - **18-Domain Maturity Model**: The COMPEL diagnostic framework that assesses organizational AI capability across 18 distinct domains organized within the four pillars. 
Each domain is scored on a 1-to-5 scale, providing a detailed capability profile across the four pillars. - **A/B Testing**: An experimental method that compares two versions of an AI system by randomly directing different user groups to each version and measuring which performs better. A/B testing provides empirical evidence for model and feature decisions. - **Absorption Capacity**: An organization's ability to take in and act on new knowledge, technology, and change without overwhelming its people and processes. - **Accountability**: The principle that when an AI system causes harm, there must be clear lines of human responsibility. Accountability requires named individuals and governance structures responsible for AI system design, deployment, and outcomes. - **Accountability Framework**: A structured system that defines who is responsible for AI decisions and outcomes, how decisions are documented, and what consequences exist when things go wrong. Essential for trustworthy AI governance. - **Accuracy**: A model performance metric measuring the proportion of all predictions that are correct. While intuitive, accuracy can be misleading for imbalanced datasets where one class is much more common than another. - **Action Research**: A research approach where practitioners study their own work while actively improving it, cycling between action and reflection. Used in COMPEL to evolve methodology through real-world application. - **Action Space**: The complete set of all actions an AI agent can potentially take, including tool invocations, communication actions, reasoning steps, and environmental interactions. Defining and constraining the action space is a primary governance control for agentic AI. - **Adaptive Learning System**: An AI system that modifies its own behavior based on experience, raising unique governance challenges because the system's behavior can diverge over time from what was originally reviewed and approved. - **Adaptive Management**: A structured approach to decision-making that adjusts plans based on new information and changing conditions, rather than rigidly following an original plan. Central to COMPEL roadmap governance.
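The Accuracy entry above warns that accuracy misleads on imbalanced datasets. A minimal stdlib Python sketch (illustrative only, not part of the glossary) makes the caveat concrete: a model that always predicts the majority class scores high accuracy while never detecting the minority class.

```python
# Illustrative sketch: accuracy vs. minority-class recall on imbalanced data.
labels = [0] * 95 + [1] * 5      # 95% negative, 5% positive (e.g. fraud cases)
predictions = [0] * 100          # naive model: always predict "negative"

# Accuracy: fraction of all predictions that are correct.
correct = sum(p == y for p, y in zip(predictions, labels))
accuracy = correct / len(labels)             # 0.95 -- looks excellent

# Recall on the minority class: fraction of true positives actually caught.
true_pos = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
recall = true_pos / sum(labels)              # 0.0 -- catches nothing

print(f"accuracy={accuracy:.2f}, minority-class recall={recall:.2f}")
```

This is why evaluation plans for imbalanced problems pair accuracy with class-sensitive metrics such as recall, precision, or F1.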
- **Adoption Metrics**: Measurements that track how thoroughly AI tools, processes, and governance practices are being used by the intended users, including active usage rates, feature utilization, and behavior change indicators. - **Adoption Rate**: The percentage of intended users who are actively and effectively using an AI-enabled tool or process. Adoption rates are a critical leading indicator of value realization -- an AI system that is deployed but unused delivers no value. - **Adoption Trap**: The illusion of progress created by accumulating AI tools and deployments without building underlying organizational capability. Organizations in the adoption trap appear to be making progress but lack the governance, skills, and operating model needed to sustain it. - **Advanced (Level 4)**: The fourth maturity level in the COMPEL model indicating well-established, measured, and actively managed AI practices with data-driven optimization. - **Adversarial Attack**: A deliberate attempt to fool or manipulate an AI system by providing specially crafted inputs designed to cause incorrect outputs. Adversarial attacks expose vulnerabilities in AI models that standard testing rarely reveals. - **Adversarial Testing**: Systematic probing of AI systems with intentionally crafted inputs designed to expose vulnerabilities, biases, or failure modes. - **Advisory Board**: A group of external experts who provide non-binding strategic guidance to an organization's leadership. - **Advisory Engagement**: A consulting arrangement where the practitioner provides ongoing strategic counsel to client leadership guiding their internally-led transformation, typically structured as a retainer with defined time commitments. - **Agent Lifecycle Management**: The end-to-end process of creating, deploying, monitoring, updating, and retiring AI agents. Includes registration, testing, approval, and decommissioning stages. - **Agent Orchestration**: The coordination of multiple AI agents working together on complex tasks, including routing work between agents, managing handoffs, and ensuring coherent outcomes from multi-agent systems.
- **Agent Registry**: A centralized catalog that tracks all deployed AI agents in an organization, including their capabilities, permissions, owners, and operational status. Essential for enterprise governance of agentic AI. - **Agentic AI**: AI systems that can pursue goals across multiple steps, make decisions about actions to take, use external tools, and adapt their behavior based on results -- operating with varying degrees of autonomy. - **Agentic Failure Taxonomy**: A classification system for the types of failures that can occur in agentic AI systems, including goal misalignment, tool misuse, cascading errors, and unauthorized autonomous actions. - **Agile**: A set of principles for software development that emphasizes iterative delivery, team collaboration, and responsiveness to change. COMPEL applies agile principles to AI delivery and roadmap execution. - **AI Adoption**: The act of introducing AI technologies into an organization's operations. Narrower than AI transformation, which reshapes the operating model itself. - **AI Audit**: A formal examination of an AI system or governance program against defined criteria to assess compliance, effectiveness, and risk posture. - **AI Bill of Rights**: A framework published by the White House Office of Science and Technology Policy outlining five principles for the design, use, and deployment of automated systems to protect the American public. - **AI Capability Center**: An organizational unit that concentrates AI expertise and resources to serve the broader enterprise. An evolution of the Center of Excellence model, designed to build and scale AI capabilities across the enterprise. - **AI Champions Network**: A distributed group of advocates across business units who promote AI adoption, share best practices, and serve as local points of contact for the Center of Excellence. - **AI Demand Review Board**: A governance body that evaluates and prioritizes incoming AI project requests from across the organization, ensuring alignment with strategy and efficient allocation of limited AI resources.
- **AI Due Diligence**: The investigation and assessment of AI capabilities, risks, and liabilities conducted during mergers, acquisitions, or partnerships. Examines data assets, model quality, compliance posture, and technical debt. - **AI Ethics**: The branch of applied ethics that examines the moral implications of AI systems, addressing questions of fairness, accountability, transparency, and the impact of AI on individuals and society. - **AI Ethics and Responsible AI**: Domain D15 in the COMPEL maturity model covering policies, review processes, and organizational commitment to ethical AI development and deployment. - **AI Ethics Board**: A cross-functional body with authority to review, approve, and halt AI initiatives based on ethical criteria. Effective ethics boards include diverse perspectives -- technologists, legal, compliance, and business stakeholders. - **AI FinOps**: AI Financial Operations -- the discipline of monitoring, allocating, and optimizing the costs associated with AI workloads, including cloud computing, GPU usage, data storage, and inference endpoints. - **AI Governance Committee**: A cross-functional body with decision-making authority over AI strategy, risk acceptance, policy approval, and investment prioritization. - **AI Governance Structure**: Domain D18 in the COMPEL maturity model addressing organizational bodies, decision rights, and accountability mechanisms for governing AI at scale. - **AI Impact Assessment**: A structured evaluation of the potential effects an AI system may have on individuals, groups, and society, including risks to rights, safety, and well-being. - **AI Incident Classification**: A system for categorizing AI failures and malfunctions by severity, impact, and type. Helps organizations respond appropriately to different kinds of AI problems, from minor errors to critical safety events. - **AI Leadership and Sponsorship**: Domain D1 in the COMPEL maturity model measuring executive champions driving AI transformation with authority and effectiveness.
- **AI Literacy**: The degree to which individuals across an organization understand AI concepts, capabilities, and limitations well enough to make informed decisions within their domain. AI literacy is not about becoming a technical expert. - **AI Literacy and Culture**: Domain D3 in the COMPEL maturity model evaluating non-technical staff understanding of AI concepts and their constructive engagement with AI initiatives. - **AI Literacy Program**: A structured initiative to build foundational AI knowledge across all organizational roles, enabling informed participation in AI transformation. - **AI Operating Model**: The organizational design that defines how AI capabilities are developed, deployed, and governed across the enterprise. Includes structures, roles, processes, and decision rights for AI at scale. - **AI Platform Strategy**: The enterprise approach to selecting, building, and integrating the technology foundation that supports all AI development and deployment. Covers infrastructure, tools, and shared services. - **AI Product Manager**: A professional responsible for defining AI use cases, managing stakeholder engagement, translating business requirements into technical specifications, and ensuring that AI solutions deliver measurable business value. - **AI Project Delivery**: Domain D8 in the COMPEL maturity model measuring methodology and discipline applied to AI project execution. - **AI Regulatory Sandbox**: A controlled environment established by regulators allowing organizations to test innovative AI applications under relaxed regulatory requirements with appropriate oversight. - **AI Risk Champions**: Designated individuals within business units who advocate for AI risk awareness and serve as local liaisons to the central risk management function. They identify and escalate AI-related risks.
- **AI Risk Governance Board**: A senior-level body responsible for overseeing AI-related risks across the enterprise, setting risk appetite, and making decisions about acceptable risk levels for AI deployments. - **AI Risk Register**: A documented inventory of identified AI risks, their likelihood, potential impact, mitigation strategies, and assigned owners. A living document that is regularly reviewed and updated. - **AI Safety**: The field focused on ensuring AI systems operate without causing unintended harm, including research into alignment, robustness, and preventing dangerous behaviors in advanced AI systems. - **AI Security Architecture**: The comprehensive design of security controls specific to AI systems, covering model protection, data security, adversarial defense, access control, and supply chain security for AI components. - **AI Service Level Management**: The practice of defining, measuring, and maintaining performance standards for AI services, including availability, accuracy, response time, and fairness metrics. - **AI Steering Committee**: A senior governance body that provides strategic direction, resolves cross-functional conflicts, approves budgets, and maintains executive accountability for AI transformation outcomes. Typically chaired by a senior executive. - **AI Strategy and Alignment**: Domain D14 in the COMPEL maturity model measuring the clarity and organizational adoption of AI strategy connected to business objectives. - **AI System Impact Assessment**: A structured evaluation of how an AI system affects individuals, groups, and society, covering risks to rights, safety, fairness, and privacy. Often required by regulation before deployment. - **AI System Registry**: An organizational catalog of all AI systems in use or development, documenting their purpose, data inputs, risk level, ownership, and compliance status. Required by some regulations.
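As a concrete illustration of the AI Risk Register entry above, the sketch below models a register entry in Python. The field names and the likelihood-times-impact scoring rule are illustrative assumptions, not a COMPEL-mandated schema; real registers follow the organization's own risk taxonomy.

```python
# Hypothetical sketch of an AI risk register -- field names are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    likelihood: int          # 1 (rare) .. 5 (almost certain)
    impact: int              # 1 (negligible) .. 5 (severe)
    owner: str               # assigned owner, per the definition above
    mitigation: str
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        """Simple likelihood x impact rating used to rank risks."""
        return self.likelihood * self.impact

register = [
    RiskEntry("R-001", "Model drift in credit scoring", 4, 4, "MLOps lead",
              "Weekly drift monitoring with retraining trigger"),
    RiskEntry("R-002", "Unregistered chatbot usage (shadow AI)", 3, 2, "CISO",
              "Quarterly shadow AI discovery scan"),
]

# A "living document": review the highest-scoring risks first.
for entry in sorted(register, key=lambda r: r.score, reverse=True):
    print(entry.risk_id, entry.score)
```

The ranking step reflects the common practice of ordering register reviews by composite risk score so governance attention goes to the largest exposures first.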
- **AI Talent and Skills**: Domain D2 in the COMPEL maturity model measuring the depth and breadth of technical AI expertise across the organization. - **AI Transformation**: The systematic redesign of how an organization operates, competes, and creates value -- enabled by AI but encompassing changes to organizational structures, processes, governance, and culture that extend well beyond the technology itself. - **AI Transformation Anti-Patterns**: Common but counterproductive approaches to AI adoption that undermine long-term transformation success, such as technology-first thinking or governance-as-afterthought. - **AI Transformation Imperative**: The strategic urgency for organizations to move beyond isolated AI experiments toward systematic, enterprise-wide AI transformation to remain competitive. - **AI Transformation vs AI Adoption**: The distinction between merely deploying AI tools (adoption) and fundamentally reshaping organizational processes, culture, and strategy around AI capabilities (transformation). - **AI Use Case Management**: Domain D5 in the COMPEL maturity model covering the identification, prioritization, validation, and tracking of AI opportunities. - **AI-Native Organization**: An organization whose core operations, culture, and strategy are fundamentally built around AI capabilities, as opposed to organizations that add AI to existing processes. - **AI/ML Platform and Tooling**: Domain D11 in the COMPEL maturity model assessing the availability and adoption of model development, training, and deployment platforms. - **AIOps**: The application of AI and machine learning to IT operations tasks such as monitoring, alerting, and incident resolution. Uses pattern recognition to detect anomalies and automate routine operations. - **Alert Management**: A platform module for configuring, routing, and responding to automated notifications triggered by anomalies, drift, or threshold breaches in AI system operations.
- **Algorithm**: A set of step-by-step instructions or rules that a computer follows to solve a problem or complete a task. In AI, algorithms are the mathematical procedures that enable models to learn from data. - **Algorithmic Accountability**: The principle that organizations deploying algorithms must be answerable for the outcomes those algorithms produce, including unintended consequences and discriminatory effects. - **Algorithmic Audit**: An independent examination of an AI system's design, data, and outputs against defined fairness, accuracy, and compliance criteria. - **Algorithmic Bias**: Systematic and unfair discrimination in AI system outputs, often arising from biased training data, flawed model design, or unrepresentative data samples. Algorithmic bias can lead to disparate treatment of individuals and groups. - **Algorithmic Impact Assessment**: A formal evaluation conducted before deploying an AI system to identify potential negative impacts on individuals and communities, particularly regarding fairness, privacy, and civil rights. - **Andragogy**: The theory and practice of adult education, recognizing that adults learn differently from children. Adults need to understand why they are learning something and prefer self-directed, experience-based learning. - **Anomaly Detection**: A technique that identifies data points or events that deviate significantly from expected patterns. Used in cybersecurity, fraud detection, manufacturing quality control, and financial monitoring. - **Anonymization**: The process of removing or altering personally identifiable information from data so that individuals cannot be re-identified. A key technique for protecting privacy while enabling data use. - **Anti-Pattern**: A commonly occurring but counterproductive organizational behavior that appears rational in the moment but systematically undermines transformation outcomes. COMPEL identifies five major AI transformation anti-patterns. - **API (Application Programming Interface)**: A set of rules and protocols that allows different software systems to communicate with each other.
APIs enable AI models to be integrated into business applications, allowing other systems to send data and receive predictions. - **API Hub**: A centralized platform module for managing, documenting, and monitoring APIs exposed by AI systems and governance services. - **Approvals Queue**: A workflow-driven interface for reviewing and approving pending governance actions such as policy exceptions, risk acceptances, and system registrations. - **Architecture Review Board**: A governance body that evaluates proposed technology designs and changes against enterprise architecture standards, ensuring consistency, scalability, and alignment with strategic direction. - **Artifact**: A formal document, record, or deliverable produced during the COMPEL lifecycle that provides evidence of governance activities, decisions, and outcomes. COMPEL defines approximately 40 mandatory artifacts across the lifecycle. - **Artificial General Intelligence (AGI)**: A theoretical form of AI with human-level cognitive ability across all domains. AGI does not currently exist and is the subject of significant debate among researchers regarding its feasibility and timeline. - **Artificial Intelligence (AI)**: A broad field of computer science focused on building systems that can perform tasks typically requiring human intelligence, such as understanding language, recognizing patterns, making decisions, and solving problems. - **Artificial Narrow Intelligence (ANI)**: AI that performs a specific task within a defined domain, such as fraud detection or language translation. All AI systems deployed in enterprises today are narrow AI. Contrast with the theoretical concept of artificial general intelligence. - **Assessment-Only Engagement**: A COMPEL engagement type focused solely on diagnosing an organization's AI maturity and readiness, without a subsequent implementation phase. - **Assessments Center**: A platform module providing a unified interface for managing risk assessments, foundation model evaluations, and compliance reviews across all registered AI systems.
- **Assurance**: The process of providing confidence to stakeholders that AI systems, processes, and governance mechanisms are operating effectively and in compliance with stated standards and requirements. - **Attack Surface**: The total set of points where an unauthorized user could attempt to enter or extract data from an AI system. Includes model endpoints, data pipelines, training processes, and user interfaces. - **Attestation**: A formal declaration by an authorized party that an AI system or process meets specified requirements. Less comprehensive than a full audit but provides documented evidence of compliance. - **Audit Center**: A platform module for planning, executing, and tracking internal and external audits of AI systems and governance processes. - **Audit Log**: A tamper-evident record of all significant actions taken within the platform, supporting accountability, compliance, and forensic analysis. - **Audit Pack**: A pre-assembled collection of evidence artifacts, attestation records, and compliance documentation prepared for regulatory or external audit review. - **Audit Preparedness**: The continuous operational discipline of ensuring governance activities produce the documentation, evidence trails, and records that auditors and regulators require. Audit-ready organizations produce evidence continuously rather than reconstructing it under audit pressure. - **Audit Trail**: A chronological record of all activities, decisions, and changes related to an AI system, maintained to support accountability, compliance verification, and regulatory examination. Audit trails must be tamper-evident and retained for required periods. - **Augmentation ROI**: A measurement of the return on investment from AI-augmented workflows, comparing the cost of AI implementation to the business value generated. - **Auto-scaling**: The automatic adjustment of computing resources based on demand. When an AI system receives more requests, infrastructure scales up; when demand drops, it scales down to save costs.
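The Audit Log and Audit Trail entries above both call for tamper-evident records. One common way to get tamper evidence is a hash chain, where each entry commits to the hash of the previous one. The sketch below is a minimal stdlib illustration of that idea, not the platform's actual implementation; durable storage and access controls are out of scope.

```python
# Illustrative tamper-evident audit log using a SHA-256 hash chain.
import hashlib
import json

def append_event(log: list, event: dict) -> None:
    """Append an event, chaining its hash to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})

def verify(log: list) -> bool:
    """Recompute the chain; any edit to any past entry breaks verification."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log: list = []
append_event(log, {"actor": "jdoe", "action": "approve_model", "id": "M-17"})
append_event(log, {"actor": "asmith", "action": "risk_accept", "id": "R-003"})
assert verify(log)                     # chain is intact
log[0]["event"]["actor"] = "mallory"   # tamper with history...
assert not verify(log)                 # ...and verification fails
```

Because each hash depends on everything before it, rewriting one historical entry invalidates every later entry, which is the property auditors rely on.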
- **Autonomy Calibration**: The process of determining the appropriate level of AI autonomy for a given task or context, balancing efficiency gains against risk, regulatory requirements, and organizational readiness. - **Autonomy Spectrum**: A classification framework ranging from Level 0 (no autonomy, fully human-directed) to Level 5 (full autonomy, self-directed) that describes how independently an AI agent can operate. Used to determine the oversight and controls each agent requires. - **Balanced Scorecard**: A strategic performance measurement framework that tracks metrics across four perspectives: financial, customer, internal processes, and learning and growth. Applied to AI transformation to ensure holistic measurement beyond financial returns. - **Baseline**: A documented measurement of current performance, capability, or conditions taken before an AI initiative begins. Baselines provide the reference point against which progress, improvement, and ROI are measured. - **Baseline Assessment**: The initial measurement of an organization's capabilities, typically taken during the Calibrate stage, against which later assessments are compared. - **Batch Inference**: Running an AI model on a large collection of data all at once, rather than one item at a time. Useful for periodic bulk processing like overnight report generation or weekly customer scoring. - **Batch Processing**: Processing large volumes of data or predictions at scheduled intervals rather than in real time. Batch processing is used when immediate results are not required, such as overnight model retraining or scheduled scoring runs. - **Benchmark**: A standardized test or reference point used to evaluate and compare AI model performance. Benchmarks enable organizations to assess their models against industry standards and track improvement over time. - **Benefits Register**: A document that tracks all expected and realized benefits of an AI transformation program, including who is responsible for each benefit and how it will be measured.
- **Benefits Tracking**: The systematic process of measuring and documenting the actual value delivered by an AI transformation program against projected benefits, enabling accountability and learning. - **Bias (Algorithmic)**: Systematic errors in AI predictions that produce unfair outcomes for particular groups, arising from biased training data, flawed model design, or inappropriate feature selection. - **Bias Auditing**: The systematic review of training data and model outputs to identify and measure unfair biases before and after deployment. Bias auditing examines underrepresentation, historical biases, and proxy variables. - **Bias Detection**: The process of identifying systematic unfairness in AI systems, including examining training data, model outputs, and real-world impacts for patterns that disadvantage particular groups. - **Bias Testing**: A platform module for systematically evaluating AI models for demographic bias, fairness violations, and disparate treatment across protected attributes. - **Binding Corporate Rules**: Internal policies adopted by multinational organizations to allow the transfer of personal data between entities in different countries while maintaining data protection standards. - **Blameless Post-Mortem**: An incident review approach that focuses on understanding what happened and improving systems rather than assigning personal blame. Encourages honest reporting and organizational learning from AI failures. - **Bloom's Taxonomy**: A hierarchical framework for classifying learning objectives from basic recall to complex evaluation and creation. Used in COMPEL curriculum design to ensure training develops progressively deeper competence. - **Board-Level Governance**: The oversight and strategic direction provided by an organization's board of directors over AI strategy and risk. - **Body of Knowledge**: The complete set of concepts, terms, practices, and standards that define a professional field.
The COMPEL Body of Knowledge encompasses all methodology, tools, and practices across the certification levels. - **Brussels Effect**: The tendency of EU regulation to set de facto global standards because multinational organizations find it more efficient to adopt a single stringent standard than to maintain different practices for each market. - **Buffer Management**: The practice of building time and resource margins into project schedules to absorb inevitable delays without cascading failures. Critical for managing dependencies between transformation workstreams. - **Business Case**: A structured argument that justifies an investment by documenting the expected costs, benefits, risks, and strategic rationale. Effective AI business cases include both quantitative financial projections and qualitative strategic benefits. - **Business Continuity**: Planning and preparation to ensure that critical business functions can continue during and after a disruption, including AI system failures. Covers contingency plans, backup procedures, and recovery procedures. - **Business Value Chain**: The sequence of activities through which AI capabilities create measurable business value, from data acquisition through model deployment to outcome realization. - **C-Suite Advisory**: The practice of providing strategic guidance to an organization's executive leadership on AI strategy and transformation. - **Calibrate (COMPEL Stage)**: The first COMPEL stage, focused on establishing an honest, evidence-based assessment of an organization's current AI capabilities, risks, and readiness. - **Calibrate Stage**: The first stage of the COMPEL lifecycle where an organization's AI maturity and readiness are assessed to establish a baseline. - **Canary Deployment**: A deployment strategy where a new AI model is initially released to a small percentage of traffic while monitoring for issues, before gradually rolling it out to all users. This minimizes the blast radius of a faulty model. - **Capability Compounding**: The phenomenon where AI capabilities build upon each other, with each new capability making subsequent capabilities easier and more valuable to develop. A key principle of portfolio design.
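The Canary Deployment entry above describes routing a small share of traffic to a new model. A common implementation is deterministic user-based routing, sketched below; the function names and the 5% starting share are illustrative assumptions, not part of the COMPEL methodology.

```python
# Illustrative canary routing: a stable hash of the user ID sends a fixed
# percentage of traffic to the new model version.
import hashlib

CANARY_PERCENT = 5  # start small; widen as monitoring stays green

def route(user_id: str) -> str:
    """Stable per-user bucket, so each user consistently sees one version."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < CANARY_PERCENT else "stable"

# Sanity check: the observed canary share tracks the configured percentage.
share = sum(route(f"user-{i}") == "canary" for i in range(10_000)) / 10_000
print(f"canary share ~ {share:.1%}")  # close to the configured 5%
```

Hashing the user ID (rather than sampling randomly per request) keeps each user's experience consistent and makes per-cohort monitoring meaningful, which is what lets the rollout be paused or reversed with a small blast radius.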
- **Capability Maturity Model Integration (CMMI)**: A process improvement framework that defines maturity levels for organizational processes. CMMI influenced the design of AI maturity models, including COMPEL's.
- **Capital Allocation**: The process of distributing financial resources across a portfolio of AI transformation initiatives based on strategic priorities, expected returns, and risk profiles.
- **Capstone Portfolio**: The collection of artifacts, analyses, and documentation that Level 4 EATP Lead candidates assemble to demonstrate comprehensive mastery across portfolio leadership, governance, and standards.
- **Capstone Project**: A comprehensive final assessment in COMPEL certification that requires candidates to demonstrate integrated mastery by applying the full methodology to a real or simulated enterprise scenario.
- **Cascading Failure**: A sequence of failures where one component's failure triggers failures in the components that depend on it, potentially propagating across connected systems.
- **Catastrophic Forgetting**: A phenomenon where an AI model loses previously learned knowledge when trained on new data. A significant challenge for systems that need to continuously learn and adapt.
- **CCPA (California Consumer Privacy Act)**: A California state law giving consumers rights over their personal data, including the right to know what data is collected, the right to delete it, and the right to opt out of its sale. CCPA has implications for AI systems that use California consumers' personal data.
- **Center of Excellence (CoE)**: A dedicated organizational unit that provides AI standards, shared infrastructure, talent development, governance execution, solution delivery, and knowledge management. The CoE is the operational nucleus of an AI transformation.
- **Center of Excellence Management**: A platform module for establishing, managing, and measuring the effectiveness of an AI Center of Excellence including charter, membership, and activity tracking.
- **Certification Body**: An organization authorized to assess and certify that individuals, systems, or organizations meet defined standards. In AI governance, certification bodies certify compliance with standards like ISO 42001.
- **Change Architecture**: The deliberate design of how organizational change will be structured, sequenced, and governed across an enterprise AI transformation. Goes beyond change management to architect change at scale.
- **Change Capacity Management**: The assessment and management of how much change an organization can absorb at any given time. Prevents transformation failure caused by overwhelming the organization with too many simultaneous changes.
- **Change Detection**: A platform module for monitoring production AI systems for unexpected changes in behavior, data distributions, or regulatory requirements that may trigger governance actions.
- **Change Management**: The structured approach to transitioning individuals, teams, and organizations from a current state to a desired future state. In AI transformation, change management addresses the behavioral, cultural, and organizational dimensions of adopting AI.
- **Change Management Capability**: Domain D4 in the COMPEL maturity model measuring the organization's ability to plan, execute, and sustain change.
- **Change Network**: A distributed group of change advocates embedded across the organization who support AI transformation by communicating, coaching, and providing feedback from the front lines.
- **Change Resistance**: The natural opposition that individuals and groups exhibit when faced with organizational changes brought by AI transformation. Must be understood and addressed rather than simply overcome.
- **Change Saturation**: The limit on how much simultaneous change an organization or team can absorb effectively. Exceeding change saturation causes resistance, quality degradation, and adoption failures regardless of how well individual changes are managed.
- **Chaos Engineering**: The practice of deliberately introducing failures into a system to test its resilience and identify weaknesses before they cause real incidents. Applied to AI systems to ensure they handle disruptions gracefully.
- **Chargeback Architecture**: The financial framework for allocating AI infrastructure and service costs back to the business units that consume them, including metering, pricing models, and billing processes.
- **Chargeback Model**: A financial mechanism where business units are charged for their actual consumption of shared AI services and infrastructure, creating cost transparency and encouraging efficient resource use.
- **Chief AI Officer**: An emerging C-suite role responsible for an organization's AI strategy, governance, and value delivery.
- **Chief Data Officer (CDO)**: A C-suite executive responsible for enterprise data strategy, data governance, data quality, and data infrastructure. The CDO plays a critical role in AI transformation by ensuring the data foundation AI depends on is in place.
- **Churn Prediction**: An AI application that predicts which customers are likely to stop using a product or service, enabling proactive retention efforts. Churn prediction is one of the most common and highest-ROI enterprise AI use cases.
- **CI/CD Pipeline**: Continuous Integration/Continuous Deployment -- an automated workflow that builds, tests, and deploys software or AI models. CI/CD pipelines for ML automate the process of moving models from development to production.
- **Circuit Breaker**: A design pattern that automatically stops an AI system from processing requests when it detects failures or degraded performance, preventing cascading problems across connected systems.
- **Classification**: A type of supervised learning task that assigns inputs to discrete categories, such as labeling an email as spam or not spam.
- **Client Discovery**: The initial phase of a COMPEL engagement where the practitioner gathers information about the client's business context, objectives, and AI maturity.
- **Cloud Computing**: The delivery of computing services (servers, storage, processing power, software) over the internet on a pay-as-you-go basis. Cloud computing provides the scalable infrastructure that most enterprise AI systems run on.
- **Cloud Economics**: The financial analysis of cloud computing costs including compute, storage, networking, and managed services. Critical for AI workloads where infrastructure costs can escalate rapidly.
- **Cloud-Native Architecture**: Systems designed specifically to run in cloud environments, using containers, microservices, and dynamic orchestration. Enables AI systems to scale elastically and deploy rapidly.
- **Clustering**: An unsupervised learning technique that groups similar data points together based on shared characteristics. Commonly used for customer segmentation, document categorization, and identifying patterns in unlabeled data.
- **Co-Development Agreement**: A contractual arrangement where two or more parties jointly develop AI capabilities, specifying how intellectual property, costs, risks, and benefits are shared between the parties.
- **Coalition Analysis**: The assessment of formal and informal alliances among stakeholders to understand power dynamics and identify who can be brought together to support or may collectively oppose transformation initiatives.
- **COBIT**: Control Objectives for Information and Related Technologies -- a governance framework for enterprise IT that addresses stakeholder value delivery, risk optimization, and resource management. COMPEL's governance practices align with COBIT concepts.
- **Cognitive Load Management**: The practice of controlling the mental effort required for learning or performing tasks, ensuring training and communications do not overwhelm participants with too much information at once.
- **Collaboration Design**: A platform module for structuring cross-functional collaboration patterns, defining team interfaces, and establishing communication protocols for AI initiatives.
- **Collaboration Design (Principle)**: The deliberate structuring of how business, technical, risk, and legal teams work together on AI initiatives to avoid silos and ensure holistic governance.
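The Circuit Breaker entry above describes a standard resilience pattern; a minimal Python sketch makes the mechanics concrete. This is an illustrative implementation under simplifying assumptions (no half-open recovery state, no timeout); the class and function names are hypothetical.

```python
class CircuitBreaker:
    """Minimal circuit breaker: opens after `max_failures`
    consecutive failures and rejects further calls."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False

    def call(self, fn, *args):
        if self.open:
            # Circuit is open: fail fast instead of calling the backend.
            raise RuntimeError("circuit open: request rejected")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True
            raise
        self.failures = 0  # a success resets the failure count
        return result

def flaky():
    """Stand-in for an unreliable model backend."""
    raise ValueError("model backend unavailable")
```

Production breakers typically add a half-open state that periodically retries the backend so the circuit can close again once the dependency recovers.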
- **Committee Management**: A platform module for creating, scheduling, and tracking the activities of governance committees, ethics boards, and oversight bodies.
- **Community of Practice**: A group of people who share a professional interest and regularly interact to deepen their knowledge and expertise. In COMPEL, communities of practice connect practitioners across organizations.
- **COMPEL Cycle**: A single iteration through all six COMPEL stages, typically lasting 12 weeks. Each cycle produces tangible outcomes and builds on the previous cycle, creating compounding transformation capability over time.
- **COMPEL Dashboard**: A platform module providing an at-a-glance view of transformation progress across all six COMPEL stages, maturity scores, and key governance metrics.
- **COMPEL Engagement Lifecycle**: The five-phase structure for managing a COMPEL transformation project: Discovery and Qualification, Scoping and Proposal, Mobilization, Delivery, and Transition and Close.
- **COMPEL Forms**: A platform module offering structured templates and guided forms for completing COMPEL activities such as assessments, gate reviews, and risk evaluations.
- **COMPEL Four Pillars**: The four fundamental dimensions of AI transformation in the COMPEL framework: People, Process, Technology, and Governance. All transformation planning and assessment is organized across these pillars.
- **COMPEL Framework**: A structured, iterative six-stage methodology for enterprise AI transformation: Calibrate, Organize, Model, Produce, Evaluate, Learn. COMPEL provides organizations with a repeatable approach to building responsible, mature AI capability.
- **COMPEL Lifecycle**: The six-stage transformation methodology: Calibrate, Organize, Model, Produce, Evaluate, Learn. Each stage builds on the previous one and the cycle repeats for continuous improvement.
- **COMPEL Methodology**: A six-stage enterprise AI transformation framework — Calibrate, Organize, Model, Produce, Evaluate, Learn — providing a structured approach to building responsible, mature AI capabilities across 18 domains.
- **Competency-Based Assessment**: An evaluation approach that measures whether a person can demonstrate specific skills and knowledge in practice, rather than testing theoretical knowledge alone. Used in COMPEL certification.
- **Competitive Moat**: A durable competitive advantage that is difficult for rivals to replicate. AI creates competitive moats through proprietary data assets, organizational learning, network effects, and compounding capabilities.
- **Compliance Framework**: A platform module for mapping organizational controls to external regulatory requirements such as the EU AI Act, NIST AI RMF, and ISO 42001.
- **Compliance Operations**: The ongoing activities required to maintain regulatory compliance for AI systems, including monitoring, evidence collection, reporting, and remediation.
- **Compliance Posture**: An organization's overall state of compliance with applicable AI regulations, standards, and internal policies at a given point in time.
- **Compute Budget**: The allocated financial and resource limits for AI model training, inference, and experimentation. Helps organizations control cloud and infrastructure costs for AI workloads.
- **Computer Vision**: A field of AI that enables machines to interpret and understand visual information from images and videos. Applications include quality inspection in manufacturing, medical imaging, and document processing.
- **Concept Drift**: A change in the underlying relationship between input data and the outcome being predicted. Unlike data drift, concept drift means the real-world rules have changed, requiring model retraining with updated data.
- **Conformity Assessment**: A formal evaluation process to determine whether an AI system meets the requirements of applicable regulations and standards, such as the EU AI Act. Conformity assessments may be self-conducted or require review by an independent third party.
- **Consent Architecture**: The technical and process design for collecting, storing, and honoring individuals' consent preferences for how their data is collected and used.
- **Consent Management**: The processes and systems for collecting, recording, and managing individuals' consent for the processing of their personal data.
- **Consortium Governance**: Governance structures designed for multi-organization AI collaborations where no single entity has full authority. Requires negotiated decision rights, shared standards, and dispute resolution.
- **Constructivism**: A learning theory where people build understanding by connecting new information to what they already know. In COMPEL training, this means linking new AI governance concepts to participants' existing professional experience.
- **Containerization**: A technology that packages software and its dependencies into isolated, portable units called containers. Ensures AI models run consistently across different environments from development to production.
- **Context Window**: The maximum amount of text that a language model can process at one time, measured in tokens. As conversations grow long, important information may be pushed out of the context window, affecting the model's responses.
- **Continuous Improvement**: The ongoing effort to enhance processes, capabilities, and outcomes through iterative learning and refinement. In COMPEL, continuous improvement is not aspirational -- it is structurally enforced through the repeating COMPEL cycle.
- **Continuous Improvement Processes**: Domain D9 in the COMPEL maturity model covering mechanisms for capturing lessons learned and systematically improving AI delivery across the organization.
- **Control Framework**: A structured set of policies, procedures, and technical mechanisms designed to manage risks and ensure compliance. In AI governance, control frameworks address model validation, bias testing, data protection, and ongoing monitoring.
- **Controls Library**: A platform module providing a catalog of governance controls that can be mapped to AI systems, regulatory requirements, and risk categories.
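The Context Window entry above can be illustrated with a small sketch of how older conversation turns fall out of a fixed token budget. This is a simplified illustration: real systems use model-specific tokenizers, while here token counts are approximated by word count, and all names and data are hypothetical.

```python
def fit_context(messages, max_tokens):
    """Keep the most recent messages whose combined (approximate)
    token count fits within max_tokens; older messages are dropped.

    Token counts are approximated by whitespace word count purely
    for illustration -- real tokenizers count subword units.
    """
    kept = []
    used = 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > max_tokens:
            break  # this and all older messages no longer fit
        kept.append(msg)
        used += cost
    return list(reversed(kept))

# Hypothetical conversation history, 4 "tokens" per turn.
history = ["first turn about budgets",
           "second turn about risk",
           "third turn about deadlines"]
```

With a budget of 8, only the two most recent turns survive; the first turn is silently dropped, which is exactly how earlier context can stop influencing a model's answers.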
- **Convolutional Neural Network (CNN)**: A type of neural network designed for processing visual data like images and videos. CNNs detect patterns such as edges, textures, and shapes, making them ideal for quality inspection, medical imaging, and similar visual tasks.
- **Copyright**: Legal protection for original creative works. In AI, raises complex questions about ownership of AI-generated content and the legality of using copyrighted material to train AI models.
- **Crisis Management**: The process of handling unexpected events that threaten an organization's operations or reputation, including serious AI incidents.
- **Cross-Border Data Governance**: The policies and mechanisms for managing data that flows between different countries, addressing varying legal requirements, sovereignty concerns, and data protection standards.
- **Cross-Domain Diagnostic**: An assessment technique that examines how different areas of AI maturity interact and influence each other, revealing hidden dependencies and systemic patterns that single-domain assessments miss.
- **Cross-Domain Dynamics**: The interdependencies and interactions between the 18 COMPEL maturity domains, recognizing that progress in one domain often enables or constrains progress in others.
- **Cross-Functional Collaboration**: Working together across traditional organizational boundaries (IT, business units, legal, finance, HR) to achieve AI transformation objectives. AI transformation inherently requires cross-functional collaboration.
- **Cross-Functional Team**: A team composed of members from different departments or disciplines working together toward a common goal. Essential for AI transformation because AI impacts people, process, technology, and governance.
- **Cross-Industry Patterns**: Recurring AI transformation challenges and solutions observed across multiple industry sectors, providing transferable insights for practitioners.
- **Cross-Organizational Governance**: Governance structures that operate across organizational boundaries, enabling coherent AI policy and risk management among entities that do not share a single chain of command.
- **Cross-Pollination**: The practice of sharing knowledge, techniques, and insights between different workstreams, teams, or organizations to accelerate learning and innovation in AI transformation.
- **Cross-Program Dependency**: A relationship between two or more AI transformation programs in a portfolio where one program's outputs or progress affect another program's success.
- **Cross-Validation**: A statistical technique for evaluating AI model performance by partitioning data into multiple subsets, training and testing the model on different combinations, and averaging the results. Cross-validation provides a more reliable estimate of how a model will perform on unseen data.
- **Cultural Assessment**: An evaluation of an organization's culture and its readiness for AI-driven change.
- **Cultural Transformation**: The deliberate reshaping of an organization's values, norms, and behaviors to support sustained AI adoption.
- **Culture Assessment**: An evaluation of organizational attitudes, behaviors, and norms related to AI adoption, innovation, and responsible use.
- **Customer Relationship Management (CRM)**: Software that manages an organization's interactions with current and prospective customers.
- **Data Architecture**: The design of how data is collected, stored, organized, integrated, and made available across an enterprise to support AI and analytics capabilities.
- **Data Catalog**: An organized inventory of an organization's data assets, with metadata that helps users find and understand available data.
- **Data Classification**: The process of categorizing data based on its sensitivity level (public, internal, confidential, restricted) to determine appropriate handling, storage, and access controls.
- **Data Drift**: Changes in the statistical properties of the input data that a deployed model receives, compared to the data it was trained on. Data drift can cause model predictions to become less accurate over time.
- **Data Engineer**: A professional responsible for building and maintaining the data infrastructure and pipelines that collect, store, transform, and deliver data to AI models and analytics consumers. Data engineers ensure data is reliable, timely, and ready for AI workloads.
- **Data Fabric**: An architecture approach that provides a unified data management layer across diverse data sources and environments, making data accessible regardless of where it physically resides.
- **Data Governance**: The organizational processes, policies, standards, and accountability structures that ensure data is accurate, consistent, secure, and used appropriately. Data governance for AI addresses training data quality, lineage, and appropriate use.
- **Data Infrastructure**: Domain D10 in the COMPEL maturity model covering data storage, pipelines, integration, and platform architecture maturity required for AI workloads.
- **Data Lake**: A centralized storage repository that holds large volumes of raw data in its native format until it is needed for analysis or AI model training. Data lakes support both structured and unstructured data.
- **Data Lakehouse**: A modern data architecture that combines the flexibility and scale of a data lake with the management features and performance of a data warehouse. Lakehouses are increasingly the preferred architecture for enterprise AI workloads.
- **Data Lineage**: The documented history of a dataset showing where data originated, how it has been transformed, and where it has been used. Data lineage is essential for AI governance, enabling organizations to trace data from its source through every transformation to the models that consume it.
- **Data Management and Quality**: Domain D6 in the COMPEL maturity model assessing data governance, quality assurance, cataloging, and accessibility practices.
- **Data Mesh**: A decentralized data architecture where domain teams own and operate their data as products, with federated governance ensuring interoperability. Data mesh enables scalable data management for large, complex organizations.
- **Data Minimization**: The privacy principle of collecting and using only the data that is genuinely necessary for an AI system's stated purpose.
- **Data Pipeline**: An automated sequence of steps that moves data from source systems through transformation processes to its destination, such as a data warehouse or an AI model.
- **Data Poisoning**: A type of attack where an adversary deliberately corrupts the data used to train an AI model, causing the model to learn incorrect patterns or behave in unintended ways. Data poisoning can be difficult to detect.
- **Data Privacy**: The right of individuals to control how their personal information is collected, used, and shared. Governs what data AI systems can use and how they must protect it.
- **Data Protection Impact Assessment (DPIA)**: A formal assessment required under GDPR when data processing is likely to result in high risk to individuals' rights and freedoms.
- **Data Quality**: The degree to which data meets requirements for accuracy, completeness, consistency, timeliness, validity, and uniqueness. Data quality directly determines AI model performance -- poor data quality can undermine even well-designed models.
- **Data Readiness**: An assessment of whether the data required for an AI initiative is available, of sufficient quality, properly governed, and accessible. Data readiness failures are the most common reason AI projects fail.
- **Data Retention**: Policies governing how long data is kept before being archived or deleted. Must balance operational needs, regulatory requirements, and storage costs.
- **Data Rights**: The legal entitlements regarding who owns data, who can use it, and under what conditions. Particularly complex in AI where data may be combined from multiple sources to train models.
- **Data Scientist**: A professional who uses statistical analysis, machine learning, and programming to extract insights from data and build predictive models. Data scientists are part of the core technical team in AI transformation.
- **Data Sovereignty**: The concept that data is subject to the laws and governance of the country where it is collected or stored. Impacts where AI models can be trained and which data can cross borders.
- **Data Steward**: An individual formally responsible for the quality, governance, and appropriate use of data within a specific domain. Data stewards ensure data meets quality standards and governance policies before it is used in analytics and AI.
- **Data Warehouse**: A system designed for storing and analyzing structured data from multiple sources in a format optimized for reporting and business intelligence queries.
- **DataOps**: An automated, process-oriented methodology for improving the quality and reducing the cycle time of data analytics and data management, analogous to DevOps for software.
- **Decision Log**: A formal record of significant decisions made during an AI transformation program, including the rationale, alternatives considered, decision-makers, and expected consequences.
- **Decision Provenance**: The complete record of how an AI decision was made, including the data inputs, model version, parameters, and reasoning chain. Essential for accountability in agentic AI systems.
- **Decision Rights**: Formally documented authority specifying who can approve what within the AI transformation program, such as budget allocations, model deployments, risk acceptances, and policy changes. Clear decision rights prevent bottlenecks and ambiguity.
- **Deep Learning**: A type of machine learning that uses neural networks with many layers to learn hierarchical representations of data. Deep learning powers capabilities like image recognition, speech processing, and language understanding.
- **Defense in Depth**: A security strategy that layers multiple defensive mechanisms so that if one fails, others continue to provide protection. Applied to AI systems to guard against multiple types of attacks.
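The Cross-Validation entry above can be sketched in plain Python: split the data into k folds, train on k-1 of them, test on the held-out fold, and average the scores. This is an illustrative implementation, not a library API; in practice a tool such as scikit-learn would be used, and the trivial majority-class "model" in the usage example is hypothetical.

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal, contiguous folds."""
    folds = []
    start = 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(data, labels, train_fn, score_fn, k=5):
    """Average score_fn over k train/test splits."""
    folds = kfold_indices(len(data), k)
    scores = []
    for fold in folds:
        test_idx = set(fold)
        pairs = list(zip(data, labels))
        train = [p for i, p in enumerate(pairs) if i not in test_idx]
        test = [p for i, p in enumerate(pairs) if i in test_idx]
        model = train_fn(train)
        scores.append(score_fn(model, test))
    return sum(scores) / k
```

Real pipelines shuffle (or stratify) the data before splitting; contiguous folds are used here only to keep the sketch short.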
- **Defined (Level 3)**: The third maturity level in the COMPEL model indicating standardized, documented AI practices with consistent application across the organization.
- **Delegation Framework**: A governance structure that defines what decisions and actions an AI agent is authorized to take independently, what requires human approval, and what escalation paths exist.
- **Delivery Excellence**: The discipline of executing AI transformation initiatives on time, within scope, and with measurable quality, applying structured project management to COMPEL engagements.
- **Demand Forecasting**: Using AI to predict future customer demand for products or services, enabling optimized inventory management, production planning, and resource allocation. A foundational AI use case in retail, manufacturing, and supply chain.
- **Demand Management**: The process of collecting, evaluating, and prioritizing requests for AI capabilities from across the organization, ensuring the most valuable work gets resources first.
- **Demographic Parity**: A fairness metric requiring that an AI system's rate of positive outcomes is equal across demographic groups.
- **Dependency Mapping**: The process of identifying and documenting relationships between workstreams, tasks, or deliverables where one item must be completed before another can begin or proceed.
- **Developing (Level 2)**: The second maturity level in the COMPEL model indicating emerging AI practices with some structure and repeatability but inconsistent application.
- **DevOps**: A set of practices that combines software development and IT operations to shorten the development lifecycle and deliver high-quality software continuously. Extended to MLOps for AI systems.
- **Differential Privacy**: A mathematical framework for sharing information about a dataset while protecting individual privacy. Differential privacy adds controlled noise to data or queries to prevent identification of specific individuals.
- **Dimensionality Reduction**: A technique that simplifies complex datasets with many variables by identifying the most important underlying factors. Dimensionality reduction makes data visualization, analysis, and downstream AI modeling more tractable.
- **Disaster Recovery**: The plans and processes for restoring AI systems and data after a major failure or catastrophic event, including recovery time objectives and backup strategies.
- **Discriminative AI**: AI models that analyze input data to classify it, predict outcomes, or identify patterns. Discriminative AI answers questions such as which category an input belongs to or whether an event will occur.
- **Disparate Impact**: A situation where an AI system's outcomes disproportionately disadvantage a protected group, even without discriminatory intent.
- **DMAIC**: Define, Measure, Analyze, Improve, Control -- the five-phase improvement cycle from Lean Six Sigma. DMAIC shares structural parallels with COMPEL's iterative six-stage cycle.
- **Double-Loop Learning**: An advanced form of organizational learning that questions and modifies underlying assumptions and policies, not just surface-level actions. Goes beyond fixing problems to rethinking why they occurred.
- **Drift Detection**: Automated monitoring that identifies when the statistical properties of input data or model outputs have shifted significantly from baseline measurements. Drift detection triggers alerts and retraining workflows.
- **Due Diligence**: The comprehensive investigation and evaluation of an organization's capabilities, risks, and governance, typically conducted before an investment, partnership, or acquisition.
- **EATE (COMPEL Certified Consultant)**: The Level 3 COMPEL certification for professionals who architect enterprise-level AI transformation strategies, design operating models, and mentor specialist practitioners.
- **EATF (COMPEL Certified Practitioner)**: The Level 1 COMPEL certification demonstrating foundational mastery of the COMPEL methodology, including the six stages, four pillars, and 18-domain maturity model.
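The Differential Privacy entry above describes adding calibrated noise to query results; the classic instance is the Laplace mechanism, sketched below. This is an illustrative example under simplifying assumptions (a count query with sensitivity 1); all function names and data are hypothetical.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(values, predicate, epsilon, rng=None):
    """Count matching records, with Laplace noise calibrated to
    sensitivity 1 (adding or removing one person changes the true
    count by at most 1). Smaller epsilon -> more noise -> more privacy.
    """
    rng = rng or random.Random()
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1 / epsilon, rng)
```

The key design point is that the noise scale depends only on the query's sensitivity and the privacy budget epsilon, never on the data itself.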
- **EATP (COMPEL Certified Specialist)**: The Level 2 COMPEL certification for practitioners who design, lead, and deliver COMPEL transformation engagements with real clients, including advanced assessment, roadmapping, and execution.
- **EATP Expert**: The third level of COMPEL certification validating expert capabilities for designing and orchestrating enterprise-scale AI transformation programs.
- **EATP Foundation**: The first level of COMPEL certification providing foundational knowledge for understanding and applying the COMPEL methodology to AI transformation initiatives.
- **EATP Lead**: The Level 4 apex COMPEL certification for professionals who govern portfolios of AI transformation programs, harmonize cross-organizational governance, and contribute to industry standards.
- **EATP Practitioner**: The second level of COMPEL certification validating advanced skills for leading and managing COMPEL engagements across organizations.
- **Ecosystem Strategy**: The deliberate design of partnerships, vendor relationships, academic collaborations, and industry alliances that provide the external capabilities and resources needed for AI transformation.
- **Edge AI**: The deployment of AI models directly on edge devices such as sensors, smartphones, or IoT hardware, enabling local inference without constant cloud connectivity.
- **Edge Computing**: Processing data near its source rather than sending it to a centralized data center. Used for AI applications requiring low latency, like real-time manufacturing quality inspection or autonomous vehicles.
- **Edge Deployment**: Running AI models on devices located close to where data is generated (like factory equipment, IoT sensors, or branch offices) rather than in a centralized cloud. Edge deployment reduces latency and cost.
- **Embedding**: A numerical representation of data (text, images, etc.) in a high-dimensional space where similar items are positioned close together. Embeddings enable AI systems to understand semantic similarity and relatedness.
- **Embeddings**: Dense vector representations of data (text, images, or other content) in a continuous mathematical space, enabling similarity comparison and semantic search.
- **Emerging Technology Evaluation**: The systematic process of assessing new AI technologies and approaches for their potential value, risks, and fit within the enterprise architecture before committing to adoption.
- **Empowered Teams**: A COMPEL principle where people are authorized and equipped to use AI with clear guidelines rather than prohibitions, with psychological safety to experiment, fail, and iterate.
- **Energy AI**: The application of AI in the energy and utilities sector, addressing use cases such as grid optimization, predictive maintenance, demand forecasting, and renewable integration.
- **Engagement Architecture**: The overall design of a COMPEL consulting engagement, including scope, phases, workstreams, deliverables, timeline, team composition, and commercial structure.
- **Engagement Governance**: The decision-making structures and oversight mechanisms established for a specific COMPEL engagement, including steering committees, escalation paths, and reporting cadences.
- **Engagement Model**: The structured approach used by COMPEL practitioners to scope, plan, and deliver AI transformation consulting engagements with client organizations.
- **Enterprise AI Maturity Spectrum**: A five-level framework describing organizational AI capability from Level 1 (Foundational) through Level 5 (Transformational). Organizations progress through these levels by building capabilities across the 18 COMPEL domains.
- **Enterprise AI Strategy**: The overarching plan that defines how an organization will build, deploy, and govern AI as a permanent strategic capability, aligned with business objectives and spanning multiple years.
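The Embedding and Embeddings entries above rest on one operation: measuring how close two vectors point in the same direction, usually via cosine similarity. The sketch below uses tiny hand-made 3-dimensional vectors purely for illustration; real embeddings have hundreds or thousands of dimensions and are produced by a trained model.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors:
    1.0 means identical direction, 0.0 means orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy embeddings: related concepts point the same way.
king = [0.9, 0.1, 0.4]
queen = [0.8, 0.2, 0.45]
banana = [0.1, 0.9, 0.05]
```

Semantic search is just this comparison at scale: embed the query, then return the stored items whose vectors have the highest cosine similarity to it.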
- **Enterprise Change Architecture**: The comprehensive design for managing organizational change at enterprise scale, including change networks, communication strategies, resistance management, and change capacity planning.
- **Enterprise Resource Planning (ERP)**: Integrated business software that manages core organizational processes including finance, supply chain, manufacturing, and human resources. AI integration with ERP systems is a common enterprise use case.
- **Enterprise Transformation Architecture**: A comprehensive framework that integrates strategy, organizational design, technology, governance, and change management into a unified blueprint for enterprise-scale AI transformation.
- **Equalized Odds**: A fairness metric requiring that an AI system has equal true positive rates and false positive rates across different demographic groups, meaning errors are distributed fairly.
- **Escalation Protocol**: A predefined set of rules determining who is notified and at what thresholds when issues arise, ensuring problems are elevated to appropriate decision-makers before they become critical.
- **ESG (Environmental, Social, and Governance)**: A framework for evaluating corporate behavior and sustainability that increasingly incorporates AI ethics criteria. ESG-focused investors are examining how organizations govern their AI systems as part of overall corporate responsibility.
- **Ethical AI Framework**: A structured set of principles, processes, and tools an organization adopts to ensure AI systems are developed and deployed in alignment with ethical values.
- **Ethical Impact Assessment (EIA)**: A mandatory evaluation conducted before an AI system moves from development to production, assessing potential harms, affected populations, and mitigation strategies. EIAs operationalize ethical principles into concrete review steps.
- **Ethics by Design**: The approach of integrating ethical considerations into every stage of the AI development lifecycle, from problem formulation through deployment and retirement, rather than reviewing ethics after the fact.
- **Ethics Review Process**: A formal procedure for evaluating the ethical implications of AI projects before approval, including assessment criteria, review board composition, and decision-making protocols.
- **ETL/ELT Pipeline**: A data processing workflow that Extracts data from source systems, Transforms it into a usable format, and Loads it into a target system (ETL) -- or loads first then transforms (ELT). Pipelines are the backbone of the data infrastructure that feeds AI systems.
- **EU AI Act**: The European Union's risk-based regulation of AI systems, imposing obligations on providers and deployers proportional to an AI system's risk level.
- **Evaluate (COMPEL Stage)**: The fifth COMPEL stage, focused on rigorously measuring what the cycle achieved against planned objectives at initiative, portfolio, and strategic levels. Evaluate closes the accountability loop and informs the next cycle.
- **Evaluate Stage**: The fifth stage of the COMPEL lifecycle where transformation outcomes are measured against objectives, maturity progression is assessed, and the effectiveness of the program is evaluated.
- **Event-Driven Architecture**: A system design where components communicate by producing and consuming events rather than directly calling each other. Enables loosely coupled, scalable AI systems.
- **Evidence Chain**: A sequence of related governance artifacts that together tell a complete story traceable from strategic intent to operational implementation. Evidence chains enable auditors to verify that deployed AI systems actually comply with stated policies.
- **Execution Management**: The discipline of coordinating multiple workstreams, managing dependencies, and maintaining momentum during the active delivery phases of AI transformation.
- **Executive Coaching**: One-on-one guidance provided to senior leaders to help them develop the mindset, skills, and behaviors needed to champion and sustain AI transformation in their organizations.
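The Equalized Odds entry above compares error rates rather than outcome rates: the true positive rate and false positive rate must match across groups. A minimal sketch of that computation (illustrative names and data, and it assumes every group contains both positive and negative ground-truth labels so the rates are defined):

```python
def rates_by_group(records):
    """True positive rate and false positive rate per group.

    records: iterable of (group, y_true, y_pred) triples, values 0/1.
    Assumes each group has at least one positive and one negative label.
    """
    stats = {}
    for group, y_true, y_pred in records:
        s = stats.setdefault(group, {"tp": 0, "fn": 0, "fp": 0, "tn": 0})
        if y_true and y_pred:
            s["tp"] += 1       # correctly flagged positive
        elif y_true:
            s["fn"] += 1       # missed positive
        elif y_pred:
            s["fp"] += 1       # false alarm
        else:
            s["tn"] += 1       # correctly left alone
    return {
        g: {"tpr": s["tp"] / (s["tp"] + s["fn"]),
            "fpr": s["fp"] / (s["fp"] + s["tn"])}
        for g, s in stats.items()
    }
```

Equalized odds holds when the per-group `tpr` values match and the per-group `fpr` values match; a fairness audit reports the gaps.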
- **Executive Sponsor**: A C-suite champion who provides strategic direction, budget authority, and organizational support for AI transformation. The Executive Sponsor sets the mandate, secures resources, and represents the transformation at the executive level.
- **Executive Sponsorship**: Active, visible support from a senior leader who champions the AI transformation program, secures resources, removes barriers, and holds the organization accountable for progress.
- **Experiential Learning**: A learning approach based on direct experience followed by reflection, conceptualization, and experimentation. Based on Kolb's four-stage learning cycle.
- **Explainability**: The degree to which an AI system's decisions and outputs can be understood by humans.
- **Explainable AI (XAI)**: A field of AI research focused on developing techniques that make AI decision-making processes understandable to humans. XAI methods include feature importance, attention visualization, and counterfactual explanations.
- **Explicit Knowledge**: Knowledge that can be easily articulated, documented, and shared through written materials, procedures, and databases. Contrasts with tacit knowledge that is experiential and hard to formalize.
- **External Audit**: An independent review of an organization's AI systems, controls, and compliance conducted by an outside party.
- **F1 Score**: A model performance metric that combines precision and recall into a single balanced score. F1 scores range from 0 to 1, with 1 being perfect. Useful when you need to balance the costs of false positives and false negatives.
- **Facilitation**: The skill of guiding group discussions and workshops to achieve productive outcomes without imposing the facilitator's own views.
- **Failover**: The automatic switching to a backup system when the primary system fails, ensuring continuity of AI services. Part of high-availability architecture design.
- **Fairness**: The principle that AI systems should produce equitable outcomes across different demographic groups and not perpetuate or amplify existing biases. Fairness requires active measurement and mitigation rather than good intentions alone.
- **Fairness Engineering**: The technical discipline of detecting and mitigating bias in AI systems through training data auditing, fairness-aware model design, disparate impact analysis, and ongoing monitoring. Fairness engineering treats bias as a measurable engineering problem rather than an abstract principle.
- **Fairness Metric**: A quantitative measure used to evaluate whether an AI system treats different groups equitably. Multiple metrics exist because fairness can be defined in different mathematically incompatible ways.
- **Fairness Metrics**: Quantitative measures used to evaluate whether an AI system treats different groups equitably, including demographic parity, equalized odds, and calibration.
- **Feature Store**: A centralized repository of engineered data features that ensures consistency between the data used to train AI models and the data used when those models make predictions in production. Feature stores prevent training-serving skew.
- **Federated Governance**: A governance model where central standards and policies are set by a core team, but business units have autonomy to implement and adapt them within defined boundaries.
- **Federated Learning**: A machine learning approach where a model is trained across multiple devices or servers holding local data without exchanging the raw data itself. Federated learning enables AI training while preserving data privacy.
- **Federated Model**: An organizational structure where AI capability is distributed across business units with a central team providing standards, shared infrastructure, and coordination. Federated models balance local responsiveness with enterprise-wide consistency.
- **Federated Model (Organizational)**: An organizational structure where AI capability is distributed across business units with coordination from a central team. Balances local autonomy with enterprise-wide consistency.
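The F1 Score entry describes the metric as a balance of precision and recall; concretely, it is their harmonic mean. A minimal sketch from raw confusion-matrix counts:

```python
# Minimal sketch of the F1 score: the harmonic mean of precision and recall,
# computed from confusion-matrix counts. Counts below are illustrative.

def f1_score(tp, fp, fn):
    """F1 from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# 80 true positives, 20 false positives, 20 false negatives:
# precision = 0.8 and recall = 0.8, so F1 = 0.8
score = f1_score(tp=80, fp=20, fn=20)
```

Because the harmonic mean punishes imbalance, a model with precision 1.0 but recall 0.1 scores far lower than one with 0.55 on both.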
- **Feedback Loop**: A cycle where an AI system's outputs influence its own future inputs or training data, which can compound both improvements and errors over time.
- **Financial Services AI**: The application of AI in banking, insurance, and capital markets, characterized by heavy regulatory scrutiny, explainability requirements, and data sensitivity.
- **Fine-Tuning**: The process of further training a pre-trained AI model on a specific dataset to adapt it for a particular task or domain. Fine-tuning customizes a general-purpose model for specialized organizational needs.
- **FinOps**: Financial Operations — a practice for managing cloud and infrastructure costs through collaboration between engineering, finance, and business teams. Essential for controlling AI compute spending.
- **Foreground IP**: Intellectual property created during a specific project or engagement, as distinct from background IP that existed before the project began. Ownership must be clearly defined in contracts.
- **Formative Assessment**: Evaluation conducted during learning to provide ongoing feedback and guide improvement, rather than judging final performance. Helps learners identify gaps while they can still address them.
- **Foundation Model**: A large pre-trained AI model that serves as a base for multiple downstream applications. A single foundation model can be adapted for many tasks through fine-tuning or prompting, reducing the need to build models from scratch.
- **Foundation Model Evaluation**: A platform module for assessing foundation models against organizational criteria including performance, safety, bias, cost, and regulatory compliance.
- **Foundational (Level 1)**: The first maturity level in the COMPEL model indicating ad-hoc, unstructured AI practices with minimal governance, processes, or strategic alignment.
- **Four Pillars of AI Transformation**: The four interdependent structural foundations of AI transformation in COMPEL: People, Process, Technology, and Governance. Successful transformation requires balanced advancement across all four pillars.
- **Framework Harmonization**: The process of aligning multiple governance, methodology, or compliance frameworks so they work together coherently rather than creating conflicting requirements or redundant processes.
- **Framework Interoperability**: The ability of different management and governance frameworks (such as COMPEL, SAFe, TOGAF, ITIL) to work together effectively within an organization without conflict or redundancy.
- **Full Transformation Engagement**: A COMPEL engagement spanning the complete lifecycle from Calibrate through Learn, typically six to twenty-four months, involving cross-functional teams and sustained executive sponsorship.
- **Function Calling**: The capability of modern LLMs to produce structured calls to external tools and APIs as part of their output. Function calling enables AI agents to interact with enterprise systems, databases, and services.
- **Gap Analysis**: An assessment that identifies the difference between an organization's current state and its desired future state.
- **Gate Reviews**: A platform module for conducting formal stage-gate decision points that determine whether an AI initiative is ready to progress to the next COMPEL stage.
- **GDPR (AI Context)**: The General Data Protection Regulation as applied to AI systems, addressing data minimization, purpose limitation, automated decision-making rights, and data protection impact assessments for AI.
- **General Data Protection Regulation (GDPR)**: The European Union's comprehensive data protection law, which governs how personal data is collected, processed, and stored.
- **Generative AI**: AI systems that create new content such as text, images, code, audio, or video, rather than simply analyzing or classifying existing data. Generative AI is distinct from discriminative AI, which classifies or scores existing data.
- **Governance Body**: Any formally constituted group with defined authority over AI governance decisions, including ethics boards, steering committees, and risk review panels.
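The Function Calling entry describes LLMs emitting structured calls that application code then executes. A hypothetical sketch of that dispatch step, where the tool name `get_order_status` and the model output string are invented for illustration:

```python
# Hypothetical sketch of function calling: an LLM emits a structured JSON
# call, and application code dispatches it to a registered tool. The tool
# name, arguments, and model output below are invented for illustration.
import json

TOOLS = {
    "get_order_status": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def dispatch(model_output: str):
    """Parse the model's structured call and invoke the matching tool."""
    call = json.loads(model_output)
    tool = TOOLS.get(call["name"])
    if tool is None:
        raise ValueError(f"unregistered tool: {call['name']}")
    return tool(**call["arguments"])

# The kind of structured output a function-calling LLM might emit:
model_output = '{"name": "get_order_status", "arguments": {"order_id": "A-1001"}}'
result = dispatch(model_output)
```

Restricting dispatch to an explicit registry like `TOOLS` is also where least-privilege and guardrail controls attach in an agent architecture.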
- **Governance Committee**: A body responsible for overseeing AI governance policies, reviewing high-risk decisions, and ensuring the organization's AI use complies with internal policies and external regulations.
- **Governance Harmonization**: The process of aligning different governance frameworks, policies, and standards across organizational units or entities to create a coherent, non-conflicting governance environment.
- **Governance Maturity**: The level of sophistication and effectiveness of an organization's AI governance capabilities.
- **Governance Pillar**: One of the four COMPEL pillars encompassing domains D14-D18: AI Strategy, AI Ethics, Regulatory Compliance, Risk Management, and AI Governance Structure.
- **Governance Scorecard**: A platform dashboard presenting a composite view of governance health across dimensions such as policy compliance, risk posture, audit readiness, and maturity progression.
- **Governance Theater**: An anti-pattern where an organization builds the visible apparatus of AI governance -- policies, committees, ethics statements -- without operationalizing any of it. Creates a false sense of security while real risks remain unmanaged.
- **Governance Tickets**: A platform module for tracking governance issues, exceptions, and remediation items as actionable work items through resolution.
- **Governance-as-Enabler**: The approach of designing AI governance to facilitate innovation and speed rather than just prevent harm, ensuring governance structures help the organization move faster with confidence.
- **GPU (Graphics Processing Unit)**: A specialized processor originally designed for graphics rendering, now widely used for AI workloads because its thousands of small cores can perform many calculations simultaneously. GPUs are essential infrastructure for training and serving modern AI models.
- **Graceful Degradation**: The ability of an AI system to continue operating at reduced capability rather than failing completely when components break or performance degrades. A key resilience design principle.
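The Graceful Degradation entry describes falling back to reduced capability instead of failing outright. A minimal sketch of that pattern, where `primary_model` and the keyword fallback are stand-ins for real components:

```python
# Minimal sketch of graceful degradation: if the primary model call fails,
# fall back to a simpler heuristic instead of failing the request outright.
# `primary_model` is a stand-in that simulates an outage for illustration.

def primary_model(text: str) -> str:
    raise TimeoutError("model backend unavailable")  # simulated outage

def heuristic_fallback(text: str) -> str:
    # Crude keyword rule standing in for a reduced-capability path.
    return "refund" if "refund" in text.lower() else "general"

def classify(text: str) -> tuple:
    """Return (label, degraded) -- degraded=True means fallback was used."""
    try:
        return primary_model(text), False
    except Exception:
        return heuristic_fallback(text), True

label, degraded = classify("I want a refund for my order")
```

Surfacing the `degraded` flag matters for governance: downstream systems and monitoring can then distinguish full-capability answers from fallback answers.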
- **GRC Platform**: Governance, Risk, and Compliance software that automates governance workflows, approval tracking, evidence chain visualization, and compliance reporting. GRC platforms help organizations manage the apparatus of AI governance at scale.
- **Grounding**: Techniques that connect AI model outputs to factual, verifiable information sources rather than relying solely on patterns learned during training. Grounding reduces hallucination risk through retrieval-augmented generation and similar techniques.
- **GROW Model**: A coaching framework structured around four stages: Goal, Reality, Options, and Way Forward. Adapted for COMPEL coaching to develop practitioner capability through structured conversations.
- **Guardrails**: Safety boundaries and constraints built into AI systems to prevent harmful, inappropriate, or out-of-scope behaviors. Can be implemented through rules, filters, or monitoring systems.
- **Hallucination**: When an AI model, particularly an LLM, generates confident but factually incorrect, fabricated, or nonsensical information. Hallucinations are a significant risk in applications requiring factual accuracy.
- **Healthcare AI**: The application of AI in healthcare and life sciences, addressing clinical decision support, drug discovery, medical imaging, and patient safety with stringent regulatory and ethical requirements.
- **High-Risk AI System**: An AI system classified under the EU AI Act as posing significant risks to health, safety, or fundamental rights, subject to mandatory conformity assessments and ongoing monitoring.
- **HIPAA**: The Health Insurance Portability and Accountability Act -- a U.S. law that establishes requirements for protecting patient health information. HIPAA imposes specific constraints on AI systems that process protected health information.
- **Horizon Portfolio**: A portfolio structure that allocates AI investments across different time horizons: near-term quick wins, medium-term capability building, and long-term strategic bets.
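The Guardrails entry notes that safety boundaries can be implemented through rules and filters. An illustrative sketch of the simplest layer, a post-generation output filter; real guardrail stacks combine many such layers with classifiers and policy engines, and the patterns below are invented examples:

```python
# Illustrative guardrail sketch: a post-generation filter that withholds
# model outputs matching disallowed patterns. The patterns are examples.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # looks like a US SSN
    re.compile(r"(?i)internal use only"),    # leaked document marking
]

def apply_guardrail(output: str) -> str:
    """Return the output, or a policy placeholder if a pattern matches."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(output):
            return "[response withheld by policy]"
    return output

safe = apply_guardrail("Your order ships Tuesday.")
blocked = apply_guardrail("The SSN on file is 123-45-6789.")
```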
- **Human Oversight**: The principle that humans should maintain meaningful control over AI systems, with the level of oversight proportional to the risk and impact of the AI system.
- **Human-AI Collaboration**: A model of work where humans and AI systems operate as complementary partners, with explicit handoff points, oversight mechanisms, and shared responsibility for outcomes.
- **Human-in-the-Loop (HITL)**: A design pattern where human oversight is integrated into AI system operations, ensuring that humans review, approve, or override AI decisions at defined checkpoints. HITL requirements vary with the AI system's risk classification.
- **Human-on-the-Loop**: A system design where AI makes decisions autonomously but a human monitors the process and can intervene when needed. Balances efficiency with oversight for medium-risk applications.
- **Human-over-the-Loop**: An AI system design pattern where humans maintain supervisory authority and can intervene or override automated decisions, but are not required to approve each individual action.
- **Hybrid CoE**: A Center of Excellence model where a central team owns standards, governance, shared platforms, and complex initiatives, while embedded AI teams within business units handle domain-specific delivery.
- **Hype Cycle**: A Gartner model describing the typical progression of emerging technologies through five phases: technology trigger, peak of inflated expectations, trough of disillusionment, slope of enlightenment, and plateau of productivity.
- **Hyperparameter Tuning**: The process of optimizing the configuration parameters that govern model training — such as learning rate, batch size, and architecture choices — to improve model performance.
- **Identity Management**: The systems and processes for managing who and what has access to AI systems, including user authentication, authorization, and access control across the AI lifecycle.
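The Human-in-the-Loop entry describes human review at defined checkpoints, with requirements scaled to risk. A sketch of one common implementation, confidence-threshold routing; the tier names, thresholds, and queue are assumptions for illustration:

```python
# Sketch of a human-in-the-loop checkpoint: decisions whose model confidence
# falls below a risk-tiered threshold are routed to a human review queue.
# Tier names, thresholds, and decision IDs are illustrative assumptions.

REVIEW_QUEUE = []

def route_decision(decision_id: str, confidence: float, risk_tier: str) -> str:
    """Auto-approve only sufficiently confident decisions for the tier."""
    thresholds = {"low": 0.70, "medium": 0.85, "high": 1.01}  # high: always human
    if confidence >= thresholds[risk_tier]:
        return "auto_approved"
    REVIEW_QUEUE.append(decision_id)
    return "pending_human_review"

a = route_decision("loan-001", confidence=0.92, risk_tier="medium")
b = route_decision("loan-002", confidence=0.92, risk_tier="high")
```

Setting the high-risk threshold above 1.0 encodes "no automated approval ever" for that tier, matching the proportionality idea in the Human Oversight entry.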
- **In-Context Learning**: An AI agent's ability to adapt its behavior from examples and instructions provided in the prompt, without retraining the underlying model.
- **Incident Management**: A platform module for detecting, triaging, documenting, and resolving AI-related incidents including model failures, bias events, security breaches, and compliance violations.
- **Incident Response**: Defined procedures for investigating and remediating AI-related events such as model failures, data breaches, bias discoveries, or safety incidents. AI incident response must address AI-specific failure modes.
- **Indemnification**: A contractual provision where one party agrees to compensate another for losses or damages. Important in AI vendor contracts to address liability for AI system failures or harmful outputs.
- **Industry Adaptation**: The tailoring of the universal COMPEL framework to address sector-specific regulatory, cultural, and technical requirements in different industries.
- **Industry Standards Development**: The formal process of creating, reviewing, and publishing professional and technical standards through recognized bodies like ISO, IEEE, and NIST. A Level 4 COMPEL professional responsibility.
- **Inference**: The process of using a trained AI model to make predictions or generate outputs on new data. Inference is what happens when a deployed model processes a customer request, scores a transaction, or generates a response.
- **Influence-Interest Matrix**: A stakeholder analysis tool that maps individuals or groups along two dimensions -- their level of influence over outcomes and their level of interest in the initiative -- to determine appropriate engagement strategies.
- **Informed Consent**: The principle that individuals should be meaningfully informed about and agree to the use of their data and the application of AI-driven decisions that affect them.
- **Infrastructure as Code (IaC)**: The practice of managing and provisioning computing infrastructure through machine-readable configuration files rather than manual processes. IaC enables automated, repeatable, and version-controlled environments for AI workloads.
- **Initiative Sequencing**: The strategic ordering of transformation activities based on dependencies, organizational readiness, value delivery timing, and resource constraints. A core COMPEL roadmap design skill.
- **Innovation Culture**: An organizational environment that encourages experimentation, tolerates informed failure, and provides psychological safety for exploring AI-driven transformation opportunities.
- **Instructional Design**: The systematic process of creating educational materials and experiences that effectively develop the knowledge and skills learners need. Applied in COMPEL to training program development.
- **Intake Wizard**: A guided platform workflow for registering new AI systems, capturing key metadata, performing initial risk classification, and routing for appropriate governance review.
- **Integration Architecture**: Domain D12 in the COMPEL maturity model assessing the ability to integrate AI capabilities into enterprise systems and workflows.
- **Integration Milestone**: A checkpoint where multiple workstreams must converge to deliver a combined outcome, forcing cross-team coordination and validating that separate activities are producing coherent results.
- **Integration with Existing Frameworks**: The practice of aligning COMPEL activities with established methodologies such as ITIL, COBIT, Agile, and ISO standards to reduce organizational friction.
- **Intellectual Property**: Creations of the mind that have commercial value and legal protection, including patents, copyrights, trade secrets, and trademarks. AI raises complex new questions about IP ownership and rights.
- **Internal Audit**: An independent function within an organization that evaluates the effectiveness of risk management, control, and governance processes, including those related to AI systems.
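The Influence-Interest Matrix entry maps stakeholders along two axes to pick an engagement strategy. A minimal sketch of that classification; the quadrant labels follow the common Mendelow-style naming, and the scores are invented examples:

```python
# Sketch of the influence-interest matrix: classify each stakeholder into
# one of four engagement quadrants from 0-1 scores on each axis. Labels
# follow the common Mendelow-style naming; the example scores are invented.

def quadrant(influence: float, interest: float) -> str:
    """Map (influence, interest) scores to an engagement strategy."""
    high_inf, high_int = influence >= 0.5, interest >= 0.5
    if high_inf and high_int:
        return "manage closely"
    if high_inf:
        return "keep satisfied"
    if high_int:
        return "keep informed"
    return "monitor"

cfo = quadrant(influence=0.9, interest=0.8)        # high/high
audit_team = quadrant(influence=0.3, interest=0.9)  # low influence, high interest
```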
- **Interpretability**: The degree to which a human can understand how an AI model makes its predictions or decisions. Higher interpretability enables better oversight, debugging, and trust in AI systems.
- **Investment Thesis**: The strategic rationale and expected return that justifies an AI transformation investment, articulating why the investment will create value and how success will be measured.
- **ISO 27001**: An international standard for information security management systems. In the AI context, ISO 27001 controls are extended to address AI-specific security considerations such as training data protection.
- **ISO 31000**: The international standard for risk management, providing principles and guidelines applicable to any type of risk. Forms the foundation for AI-specific risk management approaches.
- **ISO 42001**: An international standard for AI management systems published by the International Organization for Standardization. ISO 42001 provides requirements for establishing, implementing, and maintaining responsible AI management systems.
- **ITIL (Information Technology Infrastructure Library)**: A widely adopted framework for IT service management that defines processes for incident management, change management, and service delivery. COMPEL integrates with ITIL to ensure AI systems are operated as reliable, supportable services.
- **J-Curve Effect**: The pattern where an AI transformation initially causes a dip in performance before delivering improvement, because the organization must invest time and effort before benefits materialize.
- **Jailbreaking**: Techniques used to circumvent the safety and ethical guardrails built into AI systems, particularly LLMs, causing them to produce restricted or harmful content. A significant security concern for deployed AI systems.
- **Joint Controller**: Under data protection law, two or more organizations that jointly determine the purposes and means of processing personal data. Common in collaborative AI projects sharing data across entities.
- **Joint Venture**: A business arrangement where two or more organizations combine resources to pursue a shared AI initiative while maintaining their separate identities. Joint ventures can accelerate AI transformation by pooling complementary data, talent, and capital.
- **JSON**: JavaScript Object Notation -- a lightweight data format used extensively in AI systems for API communication, configuration files, and structured data exchange between models and applications.
- **Judicial Review**: The process by which courts examine the legality of decisions made by public bodies or AI systems used in government. Increasingly relevant as AI is deployed in administrative decision-making.
- **K-Fold Cross-Validation**: A model evaluation technique that splits data into K equal parts, trains the model K times using different parts as test data each time, and averages the results. Cross-validation provides a more reliable estimate of model performance than a single train-test split.
- **Kanban**: A visual workflow management method that uses boards and cards to track work items through stages. Helps AI teams visualize work in progress, identify bottlenecks, and manage flow.
- **Key Management**: The administration of cryptographic keys used to protect AI data and communications, including key generation, distribution, storage, rotation, and retirement.
- **Key Performance Indicator (KPI)**: A quantifiable measurement used to evaluate how effectively an organization or initiative is achieving its objectives. In COMPEL, KPIs are organized in a four-level hierarchy that cascades from strategic objectives down to operational measures.
- **Key Risk Indicator (KRI)**: A metric used to provide an early warning of increasing risk exposure in a particular area. In AI governance, KRIs might track model drift rates, complaint volumes, or audit finding trends.
- **Kill Switch**: An immediate, unconditional mechanism to halt an AI agent's operation when it behaves outside acceptable bounds.
- **Kirkpatrick Model**: A four-level framework for evaluating training effectiveness: Reaction (satisfaction), Learning (knowledge gained), Behavior (application on the job), and Results (business impact).
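The K-Fold Cross-Validation entry describes splitting data into K parts so each serves once as held-out test data. A minimal sketch of the index-splitting step, without any ML library:

```python
# Minimal sketch of K-fold cross-validation index splitting: each of the
# K folds serves once as the held-out test partition.

def k_fold_indices(n_samples: int, k: int):
    """Yield (train_indices, test_indices) for each of the k folds."""
    indices = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

folds = list(k_fold_indices(n_samples=6, k=3))
```

In practice the model is trained on each `train` set, scored on the matching `test` set, and the K scores are averaged, which is what makes the estimate more reliable than a single split.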
- **Knowledge Base**: A persistent, accessible organizational repository of governance knowledge, best practices, lessons learned, reusable patterns, and cautionary tales. The knowledge base accumulates value across COMPEL cycles.
- **Knowledge Graph**: A structured representation of real-world entities and their relationships, stored in a graph database. Knowledge graphs help AI systems reason about connections between concepts and are used in search, recommendation, and question-answering systems.
- **Knowledge Management**: The organizational practice of capturing, organizing, sharing, and applying institutional knowledge to improve decision-making and performance. In AI transformation, knowledge management ensures lessons from each cycle are captured and reused.
- **Knowledge Management System**: Technology and processes for capturing, storing, and sharing organizational knowledge about AI practices, lessons learned, and best practices across the enterprise.
- **Knowledge Transfer**: The process of transferring expertise from external consultants to client team members, or from experienced practitioners to newer ones, ensuring the organization retains capability after an engagement ends.
- **Kolb's Learning Cycle**: A four-stage experiential learning model: concrete experience, reflective observation, abstract conceptualization, and active experimentation. Foundational to COMPEL training design.
- **KPI (Key Performance Indicator)**: A quantifiable metric used to evaluate the success of AI initiatives against strategic objectives, such as model accuracy, time-to-deployment, or cost per inference.
- **Kubernetes**: An open-source platform for automating the deployment, scaling, and management of containerized applications. Commonly used to orchestrate AI model serving and data processing workloads.
- **Labeling**: The process of annotating data with correct answers (labels) to create training data for supervised learning. Labeling is often the most expensive and time-consuming part of an ML project and directly determines model quality.
- **Large Language Model (LLM)**: A massive AI model trained on enormous amounts of text data that can generate, summarize, translate, and reason about language. Examples include GPT, Claude, and Gemini. LLMs power chatbots, writing assistants, and many enterprise AI applications.
- **Latency**: The time delay between sending a request to an AI system and receiving a response. Low latency is critical for real-time applications like fraud detection and conversational AI, where delays degrade user experience.
- **Leadership Transition Management**: The practice of maintaining transformation momentum when key leaders change, including succession planning, knowledge transfer, and re-engagement strategies for new executives.
- **Lean Six Sigma**: A methodology combining Lean manufacturing principles with Six Sigma quality management to reduce waste and defects in processes. Its emphasis on measurement, evidence-based decision-making, and continuous improvement carries over directly to AI transformation.
- **Learn (COMPEL Stage)**: The sixth and final COMPEL stage, focused on capturing institutional knowledge, refining processes, transferring capabilities, and planning the next cycle. Learn converts experience into organizational capability.
- **Learn Stage**: The sixth and final stage of the COMPEL lifecycle where the organization captures lessons learned, updates its knowledge base, and feeds insights back into the next transformation cycle.
- **Learning (Principle)**: A COMPEL principle promoting AI literacy at all organizational levels through ongoing education, with lessons from live deployments feeding directly into the next improvement cycle.
- **Learning Organization**: An enterprise that continuously transforms itself through the expansion of its capacity to learn, as conceptualized by Peter Senge. AI transformation requires learning organization principles such as personal mastery, shared vision, team learning, and systems thinking.
- **Least Privilege**: A security principle requiring that AI agents receive access only to the tools and data necessary for their defined function, with granular permissions specifying allowed operations, data scopes, and rate limits.
- **Load Balancing**: Distributing incoming requests across multiple servers or model instances to prevent any single resource from being overwhelmed. Ensures AI services remain responsive under varying demand.
- **M&A Due Diligence (AI)**: The specialized assessment of AI assets, capabilities, liabilities, and risks conducted during mergers and acquisitions, evaluating data quality, model robustness, compliance, and technical debt.
- **Machine Learning (ML)**: A subset of AI where systems learn patterns from data rather than being explicitly programmed with rules. ML models improve their performance on tasks by processing examples, making it possible to automate tasks that are impractical to specify as explicit rules.
- **Machine Learning Operations (MLOps)**: The engineering discipline that bridges the gap between ML model development and production deployment. MLOps encompasses automated model training, testing, deployment, monitoring, retraining, and lifecycle management.
- **Manufacturing AI**: The application of AI in manufacturing and industrial settings, addressing predictive maintenance, quality control, supply chain optimization, and process automation.
- **Master Data Management (MDM)**: The processes and technology for ensuring consistent, authoritative definitions of key business entities (customers, products, suppliers) across the enterprise. MDM prevents data inconsistencies that would otherwise undermine AI model quality.
- **Maturity Assessment**: A structured evaluation that measures an organization's current AI capabilities across the COMPEL domains.
- **Maturity Level**: A defined stage in an organization's progression toward mature AI capability.
- **Maturity Levels**: The five progression levels in the COMPEL maturity model: Foundational (1), Developing (2), Defined (3), Advanced (4), and Transformational (5), each representing increasing capability and institutionalization.
- **Maturity Plateau**: An anti-pattern where organizations achieve early AI wins and reach intermediate maturity but stall, unable to advance further because the capabilities required for the next level are fundamentally different from those that produced the early wins.
- **Maturity Profiles**: Characteristic patterns of domain scores that reveal common organizational archetypes, such as technology-led organizations with weak governance or people-strong organizations with immature tooling.
- **Maturity Progression Dashboard**: A visual tool that tracks an organization's movement through the maturity levels over time.
- **Maturity Score**: A numerical rating assigned to an organization's capability within a domain or across the model as a whole.
- **Memory Poisoning**: An attack where an adversary manipulates the persistent memory of an AI agent to permanently alter its behavior across sessions. Unlike prompt injection which affects a single session, memory poisoning persists until the corrupted memory is identified and purged.
- **Mentoring**: A developmental relationship where an experienced practitioner guides a less experienced one, sharing knowledge, perspective, and career advice. A core EATE responsibility in COMPEL.
- **Metadata**: Data that describes other data, such as the source, format, creation date, quality metrics, and access permissions of a dataset. Rich metadata enables AI teams to discover, evaluate, and responsibly use data.
- **Methodology Benchmarking**: The systematic comparison of transformation methodologies across different frameworks and practitioners to identify best practices, gaps, and opportunities for improvement.
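The Least Privilege entry describes agents receiving only the tools and limits their function requires. A sketch of one way to enforce that at dispatch time; the class, tool names, and limits are assumptions for illustration:

```python
# Sketch of least-privilege enforcement for an AI agent: the agent may call
# only the tools in its grant, within a per-tool call limit. The grant
# class, tool names, and limits below are illustrative assumptions.

class PermissionDenied(Exception):
    pass

class AgentGrant:
    def __init__(self, allowed_tools: dict):
        # allowed_tools maps tool name -> max calls per session
        self.allowed = dict(allowed_tools)
        self.calls = {name: 0 for name in allowed_tools}

    def authorize(self, tool: str) -> None:
        """Raise PermissionDenied unless the call is within the grant."""
        if tool not in self.allowed:
            raise PermissionDenied(f"tool not granted: {tool}")
        if self.calls[tool] >= self.allowed[tool]:
            raise PermissionDenied(f"call limit exceeded: {tool}")
        self.calls[tool] += 1

grant = AgentGrant({"read_customer_record": 2})
grant.authorize("read_customer_record")        # within grant: allowed
try:
    grant.authorize("delete_customer_record")  # never granted: denied
    denied = False
except PermissionDenied:
    denied = True
```

Checking the grant before every tool invocation, rather than trusting the agent's own plan, is what makes the boundary enforceable.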
- **Methodology Extension**: The process of adapting or expanding the COMPEL methodology to address new domains, industries, or challenges not covered by the core framework, while maintaining methodological integrity.
- **Methodology Innovation**: The deliberate evolution and improvement of transformation practices based on research, practice experience, and emerging challenges. An EATE responsibility within COMPEL.
- **Metrics Dashboard**: A platform module for monitoring KPIs, KRIs, and operational metrics related to AI systems, governance health, and transformation progress.
- **Microservices**: An architectural pattern where applications are built as a collection of small, independent services that communicate through APIs. Microservices architecture enables AI capabilities to be deployed, updated, and scaled independently.
- **ML Engineer**: A professional who specializes in building production-quality machine learning systems, including deploying models, building data pipelines, and ensuring ML systems operate reliably at scale. ML engineers bridge data science and software engineering.
- **ML Operations and Deployment**: Domain D7 in the COMPEL maturity model covering MLOps practices including model versioning, testing, deployment, and production monitoring.
- **MLOps Integration**: A platform module for connecting the governance platform with external MLOps pipelines to enable automated model registration, drift detection, and compliance checks.
- **Model**: In AI/ML, a mathematical representation learned from data that can make predictions or generate outputs. A model is the trained artifact that an organization deploys to automate decisions or augment human work.
- **Model (COMPEL Stage)**: The third COMPEL stage, focused on designing the transformation target state: setting maturity targets, prioritizing use cases, making technology architecture decisions, and building the transformation roadmap.
- **Model Card**: A standardized documentation template that describes an AI model's intended use, performance characteristics, limitations, and ethical considerations.
- **Model Cards**: Standardized documentation artifacts describing an AI model's purpose, performance, and constraints.
- **Model Drift**: The gradual degradation of an AI model's performance as real-world data diverges from the data it was trained on.
- **Model Lifecycle Management**: The governance discipline of maintaining visibility, control, and accountability over AI models from initial development through production deployment, monitoring, retraining, and eventual retirement.
- **Model Monitoring**: The continuous tracking of an AI model's performance, behavior, and health in production.
- **Model Registry**: A centralized repository that tracks all AI models, their versions, metadata, training data references, performance benchmarks, deployment history, and ownership. The model registry is the system of record for enterprise AI assets.
- **Model Retirement**: A platform module for managing the end-of-life process for AI models, including decommissioning criteria, stakeholder notification, and knowledge capture.
- **Model Risk**: The risk of adverse consequences arising from errors or limitations in an AI model, including conceptual soundness failures, poor accuracy, unexpected behavior, and degradation over time. Model risk rises with model complexity and autonomy.
- **Model Risk Management (MRM)**: A governance discipline originating in financial services (codified in the Federal Reserve's SR 11-7 guidance) that applies independent validation, documentation, and ongoing monitoring to models used in decision-making.
- **Model Serving**: The infrastructure and processes for making trained AI models available to receive requests and return predictions in production, including scaling, load balancing, and version management.
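The Model Drift entry describes performance degrading as production data diverges from training data. An illustrative check on one input feature; production monitoring typically uses richer statistics (population stability index, KS tests), and the data and threshold here are invented:

```python
# Illustrative drift check: compare the mean of a production feature against
# its training baseline and flag drift beyond a relative threshold. The
# data and the 0.25 threshold are invented for illustration.

def drift_flag(baseline: list, live: list, threshold: float = 0.25) -> bool:
    """Flag drift when the relative shift in the mean exceeds the threshold."""
    base_mean = sum(baseline) / len(baseline)
    live_mean = sum(live) / len(live)
    if base_mean == 0:
        return live_mean != 0
    return abs(live_mean - base_mean) / abs(base_mean) > threshold

baseline_ages = [34, 41, 29, 38, 36]   # training-time distribution
live_ages = [52, 49, 57, 61, 55]       # production traffic has shifted older
drifted = drift_flag(baseline_ages, live_ages)
```

A flag like this would feed the monitoring and incident workflows described in the Model Monitoring and Incident Management entries, triggering investigation or retraining.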
- **Model Stage**: The third stage of the COMPEL lifecycle where the organization designs its target operating model for AI, including governance frameworks, technology architecture, and organizational structures.
- **Model Validation**: The independent assessment of an AI model's conceptual soundness, performance, and limitations, conducted by parties independent of the development team.
- **Model Validation Pipeline**: An automated sequence of tests and checks that an AI model must pass before it can be deployed to production, including accuracy, fairness, robustness, and security assessments.
- **Multi-Agent System**: An AI architecture where multiple autonomous agents collaborate to accomplish tasks, each with specialized capabilities. Requires governance for agent-to-agent communication and coordinated decision-making.
- **Multi-Business Unit Coordination**: The management of AI transformation across multiple divisions or subsidiaries, balancing enterprise-wide consistency with business-unit-specific needs and priorities.
- **Multi-Framework Operating Model**: An organizational design that integrates multiple management frameworks (COMPEL, SAFe, TOGAF, ITIL, COBIT) into a unified operating model without creating redundancy or conflict.
- **Multi-Jurisdictional Compliance**: The challenge of adhering to different and sometimes conflicting AI regulations across multiple countries or regions simultaneously. Requires governance harmonization strategies.
- **Multi-Modal AI**: AI systems that can process and reason across multiple types of data simultaneously, such as text, images, audio, and video. Multi-modal AI enables richer understanding and more versatile applications.
- **Multi-Rater Assessment**: An assessment methodology that gathers input from multiple perspectives and stakeholder groups, reducing individual bias and providing a more complete picture of organizational maturity.
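The Model Validation Pipeline entry describes an automated sequence of checks a model must pass before deployment. A sketch of that gate structure; the gate names and thresholds are illustrative assumptions, not COMPEL-mandated values:

```python
# Sketch of a model validation pipeline as a set of named gates; the model
# is promotable only if every gate passes. Gate names and thresholds are
# illustrative assumptions.

def run_validation(metrics: dict) -> tuple:
    """Return (promotable, failed_gate_names) for a candidate model."""
    gates = {
        "accuracy": metrics["accuracy"] >= 0.90,
        "fairness_gap": metrics["fairness_gap"] <= 0.05,
        "latency_ms": metrics["p95_latency_ms"] <= 200,
    }
    failed = [name for name, passed in gates.items() if not passed]
    return len(failed) == 0, failed

# A candidate that is accurate and fast but fails the fairness gate:
ok, failed = run_validation(
    {"accuracy": 0.93, "fairness_gap": 0.08, "p95_latency_ms": 150}
)
```

Returning the list of failed gates, not just a boolean, gives reviewers an auditable record of exactly why promotion was blocked.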
- **Multi-Workstream Coordination**: The discipline of keeping parallel transformation activities aligned across People, Process, Technology, and Governance pillars during the Produce stage of COMPEL delivery.
- **Multi-Year Transformation Program**: An AI transformation initiative spanning two or more years, requiring sustained investment, phased delivery, and continuous stakeholder management to deliver enterprise-scale strategic change.
- **Multinational Governance Architecture**: Governance structures designed to operate across multiple countries, balancing global consistency with local regulatory compliance and cultural adaptation.
- **Mutual Recognition**: An agreement between jurisdictions or certification bodies to accept each other's credentials, assessments, or certifications.
- **Natural Language Processing**: A field of AI focused on enabling computers to understand, interpret, and generate human language. Powers applications like chatbots, translation, sentiment analysis, and document processing.
- **Needs Assessment**: A systematic process for determining the gaps between an organization's current capabilities and those required to achieve its objectives.
- **Net Present Value (NPV)**: A financial calculation that determines the current value of future cash flows minus the initial investment. Used to evaluate whether an AI transformation investment will generate positive returns.
- **Network Effect**: A phenomenon where an AI system or platform becomes more valuable as more people use it, because increased usage generates more data that improves the AI models, attracting more users in a reinforcing cycle.
- **Neural Network**: A computing system loosely inspired by the human brain, consisting of layers of interconnected nodes that process data by adjusting numerical weights during training. Neural networks are the foundation of modern deep learning.
- **NIST AI Risk Management Framework**: A voluntary framework developed by the U.S. National Institute of Standards and Technology to help organizations manage risks associated with AI systems across four functions: Govern, Map, Measure, and Manage.
- **NIST AI Risk Management Framework (AI RMF)**: A voluntary framework published by the U.S. National Institute of Standards and Technology that provides guidelines for managing AI risks. The AI RMF is widely adopted across industries as a foundational reference for AI risk governance.
- **Non-Disclosure Agreement (NDA)**: A legal contract that establishes confidentiality obligations between parties, protecting sensitive information shared during AI engagements, partnerships, or vendor evaluations.
- **Observability**: The ability to understand what an AI system is doing, why it is producing specific outputs, and whether its behavior is drifting from expected patterns. Observability is the foundation on which both governance and continuous improvement rest.
- **Observability Dashboard**: A platform module for real-time monitoring of AI system health, performance, and operational metrics across production deployments.
- **Operating Model**: The organizational design that defines how an enterprise structures its teams, processes, governance, and technology to deliver its strategy. An AI operating model specifies how AI decisions are made, funded, and governed.
- **Operating Model Design**: The process of defining how an organization's teams, processes, governance, and technology will work together to deliver its AI strategy.
- **Operating Model Transition**: The managed process of moving an organization from its current operating model to a new target state designed for AI, including phased migration of roles, processes, and governance.
- **Operational Readiness**: An organization's preparedness to run, support, and maintain an AI system in production, including monitoring, incident response, and support processes.
- **Operational Resilience**: The ability of an organization to prevent, respond to, recover from, and learn from operational disruptions, including those caused by AI system failures or agentic AI misbehavior.
- **Opportunity Cost**: The potential value lost by choosing one AI initiative over another. Portfolio leaders must consider what they are not doing when allocating limited resources to specific projects.
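The NPV calculation defined above can be sketched in a few lines; the cash flows and discount rate are made-up figures for illustration.

```python
# NPV sketch: discount each year's cash flow back to the present and
# subtract the initial investment.

def npv(rate, initial_investment, cash_flows):
    """cash_flows[t] is assumed to arrive at the end of year t+1."""
    pv = sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))
    return pv - initial_investment

# A 1.0M investment returning 400k/year for 4 years at a 10% discount rate:
value = npv(0.10, 1_000_000, [400_000] * 4)
print(value > 0)  # True -- the investment clears the hurdle rate
```

A positive NPV means the discounted returns exceed the upfront cost at the chosen rate; the rate itself encodes the organization's cost of capital and risk.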
- **Oral Defense**: A live examination where COMPEL certification candidates present and defend their capstone work before a panel of evaluators, demonstrating integrated mastery of the methodology.
- **Organizational Culture**: The shared values, beliefs, and norms within an organization that shape attitudes toward AI adoption, risk-taking, learning, and collaboration.
- **Organizational Design**: The deliberate structuring of roles, teams, reporting relationships, and decision rights to support strategic objectives. In AI transformation, includes designing for cross-functional AI capabilities.
- **Organizational Learning**: The process by which an organization acquires, retains, and applies knowledge to improve its practices. The Learn stage of COMPEL institutionalizes learning from transformation experience.
- **Organizational Readiness**: The degree to which an organization's people, processes, and culture are prepared to adopt and sustain AI-driven change.
- **Organize (COMPEL Stage)**: The second COMPEL stage, focused on building the organizational infrastructure for transformation: forming governance structures, establishing the Center of Excellence, securing budget, and defining roles and responsibilities.
- **Organize Stage**: The second stage of the COMPEL lifecycle where stakeholders are aligned, teams are formed, governance structures are established, and the transformation program is formally organized for delivery.
- **Overfitting**: When an AI model learns the training data too precisely, including its noise and anomalies, and performs poorly on new unseen data. Overfitting produces models that appear accurate in testing but fail on real-world data.
- **Parameter**: A learned numerical value within an AI model that is adjusted during training to improve performance. Modern LLMs contain billions to trillions of parameters. Parameter count is a rough indicator of model capacity.
- **Patent**: A legal right granting exclusive use of an invention for a limited period. AI raises complex patent questions around AI-generated inventions and the patentability of algorithms.
- **Payback Period**: The time required for an AI investment to generate enough returns to recover its initial cost. A simple metric for evaluating how quickly transformation investments deliver financial value.
- **PCI DSS**: Payment Card Industry Data Security Standard -- a set of security standards for organizations that handle credit card information. PCI DSS adds data handling constraints relevant to AI systems processing payment data.
- **Peer Contribution**: Publishing, presenting, and sharing knowledge with the broader professional community. An expectation of senior COMPEL practitioners who advance the field beyond their own practice.
- **Penetration Testing**: Authorized simulated attacks on an AI system to identify security vulnerabilities before malicious actors can exploit them. Includes testing both traditional IT and AI-specific attack vectors.
- **People Pillar**: One of the four COMPEL pillars encompassing domains D1-D4: AI Leadership, AI Talent, AI Literacy, and Change Management — focused on human capability and culture.
- **Persistent Memory**: An AI agent's ability to retain information across sessions and tasks, enabling continuity of context but raising governance questions about retention and privacy.
- **Pilot Program**: A small-scale initial deployment of an AI solution to test feasibility, measure impact, and identify issues before committing to full-scale implementation across the organization.
- **Pilot Purgatory**: An anti-pattern where organizations launch numerous AI pilot projects but never build the governance, data infrastructure, or organizational capability to move them into production. Each pilot succeeds in isolation while enterprise-scale value never materializes.
- **Pilot-to-Production Gap**: The common phenomenon where AI proofs of concept demonstrate impressive results in controlled environments but never scale to production deployment. Caused by maturity gaps in data governance, operations, and organizational readiness.
- **Playbook**: A documented set of procedures and guidelines for handling specific situations, such as AI incident response, model deployment, or stakeholder engagement scenarios.
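The payback-period metric defined above is a cumulative-sum calculation; the cash flows below are illustrative.

```python
# Payback-period sketch: count periods until cumulative returns cover
# the initial cost.

def payback_period(initial_cost, cash_flows):
    cumulative = 0.0
    for period, cf in enumerate(cash_flows, start=1):
        cumulative += cf
        if cumulative >= initial_cost:
            return period
    return None  # cost never recovered within the horizon

result = payback_period(500_000, [150_000, 200_000, 200_000, 250_000])
print(result)  # 3 -- recovered during the third period
```

Unlike NPV, payback ignores the time value of money and anything earned after recovery, which is why it is usually paired with a discounted metric.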
- **PMBOK**: The Project Management Body of Knowledge — a standard from the Project Management Institute providing guidelines for project management. COMPEL integrates with PMBOK for portfolio and program governance.
- **PMO (Program Management Office)**: A centralized function that standardizes project management practices, provides governance, and coordinates transformation programs across the enterprise.
- **Policy Exceptions**: A platform module for requesting, reviewing, and tracking temporary or permanent exemptions from established AI governance policies with documented justification.
- **Policy Library**: A platform module for creating, publishing, versioning, and managing AI governance policies, with attestation tracking and compliance mapping.
- **Policy Lifecycle Management**: The end-to-end process of creating, reviewing, approving, implementing, monitoring, and retiring AI governance policies, ensuring they remain current and effective.
- **Political Landscape Assessment**: An analysis of the power dynamics, alliances, competing interests, and influence patterns within an organization that will affect AI transformation success. A critical EATP diagnostic skill.
- **Political Navigation**: The skill of understanding and working within organizational power dynamics to advance transformation objectives, building coalitions and managing resistance from influential stakeholders.
- **Portfolio Defense**: The Level 4 COMPEL capstone assessment where candidates present a multi-organization transformation portfolio to a panel, demonstrating mastery of portfolio leadership and governance.
- **Portfolio Management**: The centralized management of a collection of AI programs and projects to achieve strategic objectives, including prioritization, resource allocation, risk aggregation, and value tracking.
- **Portfolio Rebalancing**: The process of adjusting the mix of AI initiatives in a portfolio based on changing strategic priorities, performance data, and emerging opportunities or risks.
- **Portfolio Risk Aggregation**: The process of combining individual program risks into a portfolio-level view that reveals systemic risks, correlated exposures, and concentration risks not visible at the program level.
- **Portfolio Steward**: The EATP Lead role responsible for the overall health, balance, and strategic alignment of an AI transformation portfolio, ensuring it delivers on enterprise strategy.
- **Post-Incident Review**: A structured analysis conducted after an AI incident to identify root causes, systemic factors, and improvement actions — focused on system learning rather than blame assignment.
- **Post-Mortem**: A structured review conducted after an AI incident or project completion to understand what happened, why, and how to prevent similar issues. Best conducted with a blameless approach.
- **Praxis**: The integration of theory and practice — learning through reflective action. In COMPEL, praxis means that methodology knowledge is developed through applying it to real transformation challenges.
- **Precision**: A model performance metric measuring the proportion of positive predictions that are actually correct. High precision means few false positives. Important in applications where false alarms are costly.
- **Predictive Maintenance**: Using AI to predict when equipment will fail so maintenance can be performed just before failure occurs, rather than on a fixed schedule. Predictive maintenance reduces downtime and maintenance costs.
- **PRINCE2**: Projects in Controlled Environments -- a structured project management methodology widely used in government and regulated industries. COMPEL integrates with PRINCE2 for structured delivery governance.
- **Principle of Least Privilege**: A security principle where users and AI systems are given only the minimum access permissions needed to perform their tasks, reducing the potential impact of security breaches.
- **Privacy**: The principle that AI systems must respect individuals' rights to control how their personal data is collected, used, and shared.
- **Privacy by Design**: An approach that embeds data protection and privacy considerations into the design and architecture of AI systems from the outset rather than as an afterthought.
- **Proactive Regulatory Engagement**: The practice of actively participating in regulatory development processes rather than waiting to comply with final rules. Includes standards body membership, public consultations, and regulatory sandbox participation.
- **Process Pillar**: One of the four COMPEL pillars encompassing domains D5-D9: Use Case Management, Data Management, MLOps, Project Delivery, and Continuous Improvement.
- **Produce (COMPEL Stage)**: The fourth COMPEL stage, where strategy becomes reality through structured two-week transformation sprints. Produce delivers AI solutions, governance frameworks, training programs, and process redesign.
- **Produce Stage**: The fourth stage of the COMPEL lifecycle where the transformation plan is executed across all four pillars, including technology deployment, process changes, training, and governance implementation.
- **Prohibited AI Practices**: AI applications explicitly banned under the EU AI Act due to unacceptable risks, including social scoring by governments and real-time remote biometric identification in public spaces.
- **Prompt Engineering**: The practice of designing and refining the text inputs (prompts) given to a large language model to produce the desired output. Effective prompt engineering can dramatically improve the quality and reliability of model outputs.
- **Prompt Injection**: A security attack where malicious instructions are hidden in input data to manipulate an AI agent's behavior, causing it to ignore its original instructions or take unintended actions.
- **Proof of Concept (PoC)**: A small-scale implementation that demonstrates the feasibility of an AI solution in a controlled environment. PoCs validate that a concept works but do not address the production readiness, integration, and governance requirements of full deployment.
- **Provenance Graph**: A visual representation of the complete chain of data sources, transformations, model decisions, and actions that led to a specific AI output. Used for accountability and debugging in multi-agent systems.
- **Pseudonymization**: Replacing personally identifiable information with artificial identifiers, so data can be used for AI training or analysis without directly revealing individuals' identities.
- **Psychological Safety**: The shared belief that a team or organization is safe for interpersonal risk-taking -- that individuals can ask questions, admit mistakes, report problems, and propose ideas without fear of punishment.
- **Public Sector AI**: The application of AI in government and public administration, characterized by transparency obligations, democratic accountability, and service equity requirements.
- **Public-Private Partnership**: A collaborative arrangement between government entities and private companies for AI initiatives, requiring governance structures that bridge fundamentally different decision-making models.
- **Purpose Limitation**: The privacy principle ensuring that data collected for one purpose is not repurposed for AI training or other uses without appropriate consent and governance. A key data governance requirement for responsible AI.
- **Quality Assurance (QA)**: Systematic processes to ensure that AI systems meet defined standards for performance, reliability, fairness, and governance compliance before and after deployment. QA for AI extends traditional software quality assurance to cover model behavior.
- **Quality Gate**: A checkpoint in a process where work must meet defined quality criteria before proceeding to the next stage. In AI, quality gates verify model performance, fairness, security, and documentation standards.
- **Quantitative Risk Assessment**: A risk evaluation approach that uses numerical data and statistical methods to estimate the probability and potential impact of identified risks. For AI, quantitative assessment supplements qualitative judgment.
- **Quantization**: A technique for reducing the computational resources needed to run an AI model by decreasing the precision of its numerical calculations, making models smaller and faster with minimal accuracy loss.
- **Query Optimization**: The process of improving the efficiency of data retrieval operations to reduce latency and resource consumption. Query optimization is critical for AI systems that require fast access to large datasets.
- **Questionnaire-Based Assessment**: A structured assessment method using standardized questions to evaluate AI maturity, readiness, or compliance. Provides consistent, comparable data across organizational units or time periods.
- **Quick Win**: A transformation initiative designed to deliver visible, measurable value within a short timeframe, building momentum and stakeholder confidence for the broader transformation program.
- **RACI Matrix**: A responsibility assignment framework that defines who is Responsible (performs the work), Accountable (answers for the outcome), Consulted (provides input), and Informed (receives information) for each activity or decision.
- **RAG (Retrieval Augmented Generation)**: An architecture pattern that combines information retrieval with generative AI, grounding model outputs in specific documents or data sources to improve accuracy and reduce hallucination.
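The quantization idea defined above can be sketched with a simple affine scheme: map each float onto an 8-bit integer range using a scale and offset. This is a toy illustration on a short list of weights, not a real model-compression implementation.

```python
# Affine quantization sketch: floats -> 8-bit integers and back.

def quantize(values, bits=8):
    lo, hi = min(values), max(values)
    qmax = 2 ** bits - 1                      # 255 for an 8-bit range
    scale = (hi - lo) / qmax or 1.0           # avoid zero scale
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo

def dequantize(q, scale, lo):
    return [x * scale + lo for x in q]

weights = [-0.51, 0.03, 0.27, 0.98]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)
# Values survive the round trip with error bounded by the step size,
# at a quarter of 32-bit storage.
print(max(abs(a - b) for a, b in zip(weights, restored)) < scale)  # True
```

The accuracy loss is the rounding error per value, which is why quantization typically costs little accuracy while cutting memory and compute substantially.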
- **RAG (Retrieval-Augmented Generation)**: An AI architecture that improves language model outputs by first retrieving relevant information from external knowledge sources, then using that information to generate more accurate responses.
- **Readiness Assessment**: An evaluation of an organization's preparedness to begin AI transformation, covering data, talent, technology, and governance foundations.
- **Real-Time Inference**: Running AI model predictions on individual data points as they arrive, with low latency requirements. Used for applications like fraud detection, recommendations, and interactive AI assistants.
- **Real-Time Processing**: Processing data and generating AI predictions as events occur, typically within milliseconds to seconds. Real-time processing is required for applications like fraud detection, dynamic pricing, and conversational AI.
- **Reattestation**: A platform module for scheduling and managing periodic re-certification of AI system compliance, policy acknowledgment, and control effectiveness.
- **Recall**: A model performance metric measuring the proportion of actual positive cases that the model correctly identifies. High recall means few missed cases. Important in applications where missing a case is costly.
- **Recommendation Engine**: An AI system that suggests relevant items (products, content, actions) to users based on their behavior, preferences, and similarities to other users. Recommendation engines are widely used in retail, media, and e-commerce.
- **Red Teaming**: A security testing practice where a team deliberately tries to find vulnerabilities, trigger unsafe behavior, or exploit weaknesses in an AI system. Red teaming helps identify risks before deployment.
- **Redesign (Principle)**: A COMPEL principle calling for workflows to be rebuilt around AI strengths rather than patched onto legacy processes, with explicit human-AI handoff point documentation.
- **Redundancy**: Having duplicate systems, components, or processes in place so that if one fails, another can take over. A key resilience strategy for mission-critical AI systems.
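Precision and recall, as defined in the entries above, can be computed directly from prediction/label pairs; the labels below are illustrative.

```python
# precision = TP / (TP + FP); recall = TP / (TP + FN).

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 1, 1, 0, 0, 0]   # actual positives and negatives
y_pred = [1, 1, 0, 1, 0, 0]   # one miss (FN) and one false alarm (FP)
p, r = precision_recall(y_true, y_pred)
print(round(p, 3), round(r, 3))  # 0.667 0.667
```

The trade-off the glossary describes shows up directly here: lowering the model's decision threshold would raise recall (fewer misses) at the cost of precision (more false alarms).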
- **Reflective Practice**: The disciplined habit of examining one's own decisions, actions, and outcomes to extract lessons and improve professional judgment.
- **Regression**: A type of supervised learning task that predicts a continuous numerical value, such as a house price, demand forecast, or remaining equipment lifetime. Regression powers forecasting and estimation applications.
- **Regulated Industry AI**: AI applications deployed in heavily regulated sectors such as financial services, healthcare, and energy, where compliance requirements significantly shape governance approaches.
- **Regulatory Compliance**: The organizational processes and practices that ensure AI systems meet the requirements of applicable laws, regulations, and industry standards across all relevant jurisdictions. Compliance is not a one-time milestone but an ongoing operational discipline.
- **Regulatory Horizon Scanning**: The systematic monitoring of emerging AI legislation, regulatory guidance, and enforcement actions to proactively prepare for future compliance requirements.
- **Regulatory Intelligence**: The systematic monitoring and analysis of regulatory developments, enforcement actions, and policy trends relevant to AI, enabling organizations to anticipate and prepare for compliance changes.
- **Regulatory Sandbox**: A controlled environment where organizations can test innovative AI applications under relaxed regulatory requirements, with regulator oversight. Enables innovation while managing regulatory risk.
- **Reinforcement Learning**: A machine learning approach where an agent learns by interacting with an environment and receiving rewards or penalties for its actions. Used in robotics, game-playing, dynamic pricing, and resource optimization.
- **Reinforcement Learning from Human Feedback (RLHF)**: A technique used to align AI model behavior with human preferences by training a reward model on human evaluations and then fine-tuning the AI to produce outputs that score highly. RLHF is central to how modern conversational AI models are aligned.
- **Remediation**: A platform module for tracking and managing corrective actions required to address audit findings, compliance gaps, or governance deficiencies.
- **Reports Hub**: A platform module for generating, scheduling, and distributing governance and transformation reports to stakeholders at various levels.
- **Reproducibility**: The ability to consistently replicate the results of an AI model's training and evaluation given the same data, code, and configuration.
- **Resilience**: The ability of an AI system or transformation program to withstand disruptions, recover quickly from failures, and continue operating effectively. Encompasses technical, organizational, and strategic dimensions.
- **Resource Planning**: The process of identifying, allocating, and managing the people, budget, and infrastructure needed to deliver an AI transformation program successfully.
- **Responsible AI**: The practice of designing, developing, and deploying AI systems in ways that are ethical, fair, transparent, accountable, and safe. Responsible AI is not a constraint on innovation but the condition that allows it to scale sustainably.
- **Retail AI**: The application of AI in retail and consumer industries, addressing personalization, demand forecasting, inventory optimization, and customer experience enhancement.
- **Retraining**: The process of updating an AI model by training it on new or additional data to restore or improve its performance after drift or degradation. Mature MLOps pipelines automate retraining workflows with built-in validation.
- **Retrieval-Augmented Generation (RAG)**: A technique that enhances AI model responses by first retrieving relevant information from external knowledge sources (databases, documents) and then using that information to generate more accurate, grounded responses.
- **Retrospective**: A structured review session conducted after completing work to examine what went well, what went wrong, and what should change. In COMPEL, retrospectives operate at initiative, portfolio, and strategic levels.
- **Return on Investment (ROI)**: A financial metric that measures the profitability of an investment by comparing the net benefits to the total cost. In AI transformation, ROI calculations must account for compounding value and cascading effects across the organization.
- **Reward Hacking**: When an AI agent learns to maximize its reward signal in unintended ways that do not align with the actual desired outcome. For example, an agent might learn to produce verbose responses because longer outputs happened to score higher during training.
- **Risk Acceptance**: A platform module for formally documenting decisions to accept identified AI risks that fall within organizational risk appetite, with appropriate authorization and rationale.
- **Risk Appetite**: The amount and type of risk that an organization is willing to accept in pursuit of its objectives. A Risk Appetite Statement for AI defines tolerance thresholds for specific AI risk categories such as bias, privacy, and security.
- **Risk Management**: Domain D17 in the COMPEL maturity model covering frameworks for identifying, assessing, and mitigating AI-specific risks including technical, operational, ethical, and regulatory risks.
- **Risk Register**: A structured document that records identified risks, their likelihood, potential impact, mitigation strategies, and current status. In AI, the risk register must cover AI-specific risks like model drift and bias.
- **Risk Taxonomy**: A structured classification system that organizes AI-specific risks into categories (technical, ethical, legal, operational, strategic, reputational) with defined severity levels and likelihood assessments.
- **Risk Tolerance**: The acceptable variation in outcomes that an organization is willing to accept for a specific risk. More specific than risk appetite, applied to individual risks or risk categories.
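The RAG pattern described in the entries above — retrieve, then generate from the retrieved context — can be sketched end-to-end. Naive word-overlap scoring stands in for a real embedding search, and the documents are illustrative snippets drawn from this glossary.

```python
# RAG sketch: retrieve the most relevant documents for a query, then
# assemble a grounded prompt for a generator model (not included here).

DOCS = [
    "COMPEL has six stages: Calibrate Organize Model Produce Evaluate Learn",
    "The EU AI Act uses a four-tier risk classification",
    "Model cards document an AI model and its limitations",
]

def retrieve(query, docs, k=1):
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How many stages does COMPEL have", DOCS)
print("COMPEL has six stages" in prompt)  # True -- output is grounded
```

A production RAG system swaps the overlap score for vector similarity over embeddings and passes the prompt to an LLM; the grounding structure is the same.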
- **Risk-Adjusted Roadmap**: A transformation plan that explicitly accounts for risks by building in contingencies, alternative paths, and decision points that allow the program to adapt to changing risk conditions.
- **Risk-Based Classification**: An approach to AI governance that applies different levels of regulatory requirements and oversight based on the potential risk of harm from the AI application. The EU AI Act uses a four-tier risk classification.
- **Roadmap**: A strategic plan that maps initiatives to timelines, resources, dependencies, and milestones. A COMPEL transformation roadmap synthesizes target maturity levels, the use case portfolio, and technology decisions into a sequenced plan.
- **Robotic Process Automation (RPA)**: Software that automates repetitive, rule-based tasks typically performed by humans, such as data entry and form processing. RPA combined with AI (intelligent automation) can handle more complex tasks.
- **Rollback**: The process of reverting an AI system to a previous known-good state when a new deployment causes problems. A critical safety mechanism for managing model updates.
- **Root Cause Analysis**: A systematic process for identifying the fundamental underlying reason for an AI system failure or problem, rather than just addressing surface-level symptoms.
- **Rubric**: A scoring guide with defined criteria and performance levels used to evaluate work consistently. In COMPEL certification, rubrics ensure objective and fair assessment of candidate competencies.
- **Safety**: The principle that AI systems must operate reliably within intended boundaries and fail gracefully when encountering unexpected situations. Safety is critical in high-stakes domains like healthcare, transportation, and finance.
- **Scaffolding**: A teaching approach that provides structured support to learners and gradually removes it as they develop competence, helping them progress from guided practice to independent mastery.
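Risk-based classification, as defined above, is essentially a routing decision: map a use case to a tier and apply that tier's obligations. The keyword rules below are purely illustrative assumptions — real classification under the EU AI Act requires legal analysis, not string matching.

```python
# Toy risk-tier router over the EU AI Act's four tiers
# (unacceptable, high, limited, minimal). Rules are illustrative only.

TIER_RULES = [
    ("unacceptable", ("social scoring", "real-time biometric")),
    ("high", ("credit scoring", "hiring", "medical diagnosis")),
    ("limited", ("chatbot", "content generation")),
]

def classify(use_case):
    text = use_case.lower()
    for tier, triggers in TIER_RULES:
        if any(t in text for t in triggers):
            return tier
    return "minimal"  # default tier when no rule matches

print(classify("Chatbot for customer FAQs"))         # limited
print(classify("Spam filtering for internal mail"))  # minimal
```

The point of the sketch is the ordering: the most severe tier is checked first, so a use case matching multiple rules lands in its highest-risk category.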
- **Scalability**: The ability to expand AI capabilities from individual successes to enterprise-wide deployment without proportional increases in effort or cost. Scalability is primarily a process challenge, requiring standardization and governance rather than technology alone.
- **Scalability Architecture**: The design of AI systems to handle growing amounts of work by adding resources, ensuring performance remains acceptable as user demand, data volumes, and model complexity increase.
- **Scaled Agile Framework (SAFe)**: A framework for implementing agile practices at enterprise scale through constructs like Agile Release Trains and Program Increments. COMPEL cycles align naturally with SAFe's Program Increment cadence.
- **Scope Creep**: The uncontrolled expansion of an initiative's boundaries beyond its original objectives without corresponding adjustments to budget, timeline, or resources.
- **Scrum of Scrums**: A coordination mechanism where representatives from multiple agile teams meet regularly to share progress, surface dependencies, and resolve cross-team issues in large transformation programs.
- **Sector-Specific AI Regulation**: Regulatory requirements that apply AI governance obligations within particular industries, such as financial services model risk management (SR 11-7) or healthcare AI device regulations.
- **Security and Infrastructure**: Domain D13 in the COMPEL maturity model assessing security posture specific to AI workloads and infrastructure hardening against adversarial threats.
- **Security by Design**: The principle of building security considerations into AI systems from the earliest design stage rather than adding security measures after the system is built.
- **Self-Assessment**: A platform module providing structured questionnaires for individuals or teams to evaluate their own AI maturity without external facilitation.
- **Self-Sustaining Capability**: An organization's ability to continue advancing its AI maturity without ongoing external support.
- **Sensitivity Analysis**: A technique that tests how changes in key assumptions affect the outcomes of a business case or model. For AI investments, sensitivity analysis identifies which variables -- adoption rates, data quality, model accuracy -- most influence projected returns.
- **Sentiment Analysis**: An NLP technique that determines the emotional tone or opinion expressed in text, such as positive, negative, or neutral. Used for analyzing customer feedback, social media monitoring, and brand reputation tracking.
- **Service Level Agreement (SLA)**: A formal commitment between a service provider and consumer that defines expected performance levels, such as system uptime, response time, and accuracy thresholds. AI SLAs must include AI-specific metrics.
- **Shadow AI**: The unauthorized use of AI tools and services by individuals or teams without organizational knowledge, oversight, or governance. Shadow AI creates data security, compliance, and quality risks that conventional IT governance does not capture.
- **Shadow AI Discovery**: A platform module for identifying unauthorized or untracked AI tools and systems used across the organization, enabling their registration and governance inclusion.
- **Shadow Deployment**: A deployment pattern where a new AI model runs alongside the current production model, receiving the same inputs but without its outputs affecting users. Shadow deployment enables performance comparison without user-facing risk.
- **Shared Services Model**: An organizational structure where common AI capabilities like data engineering, model operations, and governance are provided centrally to multiple business units, improving efficiency and consistency.
- **Showback Model**: A cost transparency mechanism that shows business units what their AI resource consumption costs without actually charging them, creating awareness before implementing full chargeback.
- **Simulation-Based Training**: Training that uses simulated scenarios to let learners practice skills in a safe environment that mimics real-world conditions. Used in COMPEL for developing practitioner judgment and decision-making.
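The shadow-deployment pattern defined above can be sketched in a few lines: the candidate model sees every production input, but only the production output ever reaches the caller. The two stand-in "models" are trivial functions for illustration.

```python
# Shadow deployment sketch: run the candidate in parallel, log the
# comparison, serve only the production result.

def production_model(x):
    return x * 2          # illustrative current model

def candidate_model(x):
    return x * 2 + 1      # illustrative new model under evaluation

shadow_log = []

def serve(x):
    prod_out = production_model(x)
    shadow_out = candidate_model(x)       # runs, but never reaches users
    shadow_log.append({"input": x, "prod": prod_out,
                       "shadow": shadow_out,
                       "agree": prod_out == shadow_out})
    return prod_out                       # users only ever see this

print(serve(10))                 # 20 -- the production answer
print(shadow_log[-1]["agree"])   # False -- divergence logged for review
```

Analyzing the accumulated log (agreement rate, error deltas) is what justifies — or blocks — promoting the candidate to production.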
- **Skill Development (Principle)**: A COMPEL principle treating human-AI collaboration as a core competency, with career paths that include AI mastery and continuous upskilling tied to real project work.
- **SLA (Service Level Agreement)**: A formal commitment defining the expected performance of an AI service, including availability targets, response times, accuracy thresholds, and remedies if standards are not met.
- **SLA Tracking**: A platform module for monitoring and reporting on service level agreement compliance for AI vendors, internal AI services, and governance response times.
- **SLO (Service Level Objective)**: A specific, measurable target for an AI service's performance, such as 99.9% availability, used to track whether SLA commitments are being met.
- **SOC 2**: A compliance framework for service organizations that demonstrates secure handling of customer data. Relevant for AI service providers who process sensitive data on behalf of clients.
- **SOX (Sarbanes-Oxley)**: US federal law requiring certain financial reporting controls for public companies. Increasingly relevant as AI systems are used in financial processes that affect reported results.
- **Sprint**: A fixed time period, typically one to four weeks, during which a team works to complete a set of planned deliverables. The basic unit of delivery rhythm in agile transformation execution.
- **Stage Gate**: A structured decision point between COMPEL lifecycle stages that ensures quality before the organization advances. Gates verify that deliverables meet criteria and produce one of four outcomes, such as Go or Conditional Go.
- **Stage-Gate Decision Framework**: The structured criteria and review process used at each COMPEL stage transition to determine readiness for progression, including required artifacts, stakeholder sign-off, and quality thresholds.
- **Stakeholder**: Any individual, group, or organization that has an interest in or is affected by an AI transformation initiative. Stakeholders include executives, business unit leaders, technical teams, end users, regulators, and external partners.
- **Stakeholder Alignment**: The process of ensuring key stakeholders share a common understanding of transformation goals, their roles, expected outcomes, and governance mechanisms before and during program execution.
- **Stakeholder Engagement**: The systematic identification, communication, and involvement of individuals and groups who are affected by or can influence AI transformation outcomes.
- **Stakeholder Engagement Plan**: A structured document that identifies all stakeholder groups, assesses their influence and interest, defines engagement approaches, and establishes communication cadences. A mandatory COMPEL artifact
- **Stakeholder Landscape**: The comprehensive map of all parties involved in or affected by AI transformation, including their interests, influence levels, and engagement requirements.
- **Stakeholder Mapping**: The process of identifying all individuals and groups affected by or able to influence an AI transformation, plotting their interest level and influence to guide engagement strategy.
- **Stakeholder Rights**: The entitlements of individuals affected by AI systems, including rights to explanation, contest, human review, and redress for adverse automated decisions.
- **Stakeholder Validation**: A platform module for obtaining formal stakeholder review and approval of governance artifacts, transformation plans, and compliance evidence.
- **Standard Contractual Clauses**: Pre-approved contract terms established by regulators for transferring personal data between jurisdictions. A legal mechanism for enabling cross-border data flows for AI systems.
- **Standards Architect**: The EATP Lead role of actively contributing to the development of industry standards for AI governance and transformation through standards body participation and original research.
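An availability SLO like the one defined above implies an "error budget" — the downtime an AI service may accumulate while still meeting its target. The arithmetic is simple:

```python
# SLO error-budget sketch: 99.9% availability over 30 days leaves
# roughly 43 minutes of allowed downtime.

def downtime_budget_minutes(slo_pct, window_days=30):
    total_minutes = window_days * 24 * 60       # 43,200 for 30 days
    return total_minutes * (1 - slo_pct / 100)

budget = downtime_budget_minutes(99.9)
print(round(budget))  # 43
```

Tracking actual downtime against this budget is what turns an SLO from a slide-deck number into an operational control.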
- **Standards Body**: An organization that develops and publishes technical or professional standards, such as ISO, IEEE, or NIST. EATP Leads engage with standards bodies to shape AI governance standards.
- **Statement of Work (SOW)**: A formal document defining the scope, deliverables, timeline, and commercial terms of a COMPEL engagement. Serves as the contractual foundation for the practitioner-client relationship.
- **Steering Committee**: A senior leadership group providing strategic oversight, decision-making authority, and executive sponsorship for an AI transformation program. Typically meets monthly to review progress and resolve escalations.
- **Strategic Advisory**: The practice of providing executive-level guidance on AI strategy, investment, and organizational design to accelerate enterprise AI transformation.
- **Strategic Risk**: Risks that threaten an organization's strategic objectives or competitive position, rather than individual projects or systems.
- **Stream Processing**: Processing data continuously as it arrives in real time, rather than in batches. Enables AI systems to react to events as they happen, supporting applications like fraud detection and monitoring.
- **Stress Testing**: Testing an AI system under extreme conditions to identify its breaking points and failure modes. Includes data volume stress, adversarial inputs, and edge-case scenarios.
- **Structured Data**: Data organized in a predefined format with rows and columns, such as spreadsheets, database tables, and ERP records. Structured data is the foundation of most traditional machine learning applications.
- **Success Criteria**: A platform module for defining, tracking, and validating measurable outcomes that determine whether AI transformation objectives have been achieved.
- **Summative Assessment**: A final evaluation that judges whether learning objectives have been met, typically occurring at the end of a training program or certification process. Contrasts with formative assessment.
- **Supervised Learning**: A machine learning approach where the model is trained on labeled examples (inputs paired with known correct answers). The model learns to predict the correct output for new, unseen inputs. Used in applications such as fraud detection and demand forecasting.
- **Supply Chain AI Governance**: The governance of AI systems and components that span organizational boundaries through vendor relationships, third-party models, and ecosystem partners. Addresses shared accountability and risk.
- **Synthetic Data**: Artificially generated data that mimics the statistical properties of real data but does not contain actual individual records. Synthetic data can be used for AI training when real data is scarce, sensitive, or restricted.
- **Systems Thinking**: An approach that views AI initiatives not as isolated technology projects but as interventions in a complex system where changes affect upstream and downstream workflows, employee roles, and customer interactions.
- **Tacit Knowledge**: Knowledge gained through personal experience that is difficult to articulate or document, such as professional judgment and intuition. A key challenge in knowledge management for AI transformation.
- **Talent Pipeline**: A structured approach to identifying, developing, and retaining AI talent, including recruitment strategies, career paths, and succession planning.
- **Talent Strategy**: The comprehensive plan for acquiring, developing, retaining, and organizing the people needed to build and sustain AI capabilities, including technical, governance, and business roles.
- **Technical Debt**: The accumulated cost of shortcuts, workarounds, and deferred maintenance in technology systems. In AI, technical debt includes ungoverned models, undocumented data pipelines, and manual deployment processes.
- **Technical Feasibility**: An assessment of whether an AI solution can be practically built and deployed given current technology, data availability, infrastructure, and organizational capabilities. Technical feasibility is a key input to use case prioritization.
- **Technology Assessment**: An evaluation of an organization's current technology capabilities, infrastructure, and tooling against the requirements of its planned AI initiatives.
- **Technology Governance**: The framework of policies, standards, and decision rights that guides how technology, including AI, is selected, implemented, and managed across the enterprise.
- **Technology Pillar**: One of the four COMPEL pillars encompassing domains D10-D13: Data Infrastructure, AI/ML Platform, Integration Architecture, and Security and Infrastructure.
- **Telemetry**: The automated collection and transmission of data about an AI system's behavior, performance, and usage from production environments, supporting monitoring and incident detection.
- **Third-Party AI Governance**: The practice of extending AI governance requirements to vendors, partners, and suppliers who provide AI systems or data used within the organization.
- **Third-Party AI Risk**: Risks arising from AI components, models, or services provided by external vendors or partners. Requires vendor due diligence, contractual safeguards, and ongoing monitoring.
- **Thought Leadership**: The practice of sharing original insights and expertise through publications, speaking, and advisory roles to influence professional practice and advance the field of AI transformation.
- **Three Lines of Defense**: A risk governance model where the first line (operations) owns risk, the second line (risk/compliance) provides oversight, and the third line (internal audit) provides independent assurance.
- **TOGAF**: The Open Group Architecture Framework -- a widely used enterprise architecture methodology.
- **Token**: The basic unit of text that a language model processes, roughly corresponding to a word or word fragment. Token counts determine processing costs and context window limitations for LLM-based applications.
- **Token Cost Multiplier**: The factor by which token consumption increases in multi-agent AI systems compared to single-model interactions, due to inter-agent communication, reasoning chains, and coordination overhead.
- **Token Economics**: The analysis of costs associated with AI language model usage based on the number of tokens (text units) processed. Critical for budgeting and optimizing costs of generative AI deployments.
- **Tool Call Authorization**: The governance mechanism that controls which external tools and APIs an AI agent is permitted to use, with what parameters, and under what conditions. Prevents unauthorized agent actions.
- **Total Cost of Ownership (TCO)**: The complete cost of an investment over its full lifecycle, including purchase price, implementation, operations, maintenance, training, and eventual decommissioning. TCO for AI must include data preparation, ongoing monitoring, and model maintenance.
- **TPU (Tensor Processing Unit)**: A custom-designed processor created by Google specifically for neural network workloads. TPUs offer competitive performance for training and running transformer-based AI models and are available through Google Cloud.
- **Trade Secret**: Confidential business information that provides a competitive advantage, such as proprietary AI algorithms, training data processes, or model architectures. Protected through secrecy rather than registration.
- **Training Data**: The dataset used to teach a machine learning model the patterns it needs to make predictions. The quality, representativeness, and size of training data directly determine how well the model performs.
- **Training Management**: A platform module for designing, scheduling, and tracking role-based AI training programs across the organization.
- **Transfer Learning**: A machine learning technique where knowledge gained from training on one task is applied to a different but related task, reducing data requirements and training time.
- **Transformation Crisis**: A critical event that threatens the continuation or success of an AI transformation program, such as executive departure, budget cuts, technology failure, or public controversy.
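The Token Economics and Token Cost Multiplier entries above reduce to simple arithmetic. The sketch below estimates a monthly LLM bill from token volumes; the per-million-token prices and the 4x multi-agent multiplier are placeholder numbers, not real vendor pricing or COMPEL benchmarks.

```python
# Illustrative token-cost estimate. All prices are placeholders, not quotes.
PRICE_PER_M_INPUT = 3.00    # $ per 1M input tokens (hypothetical)
PRICE_PER_M_OUTPUT = 15.00  # $ per 1M output tokens (hypothetical)

def monthly_cost(requests: int, in_tokens: int, out_tokens: int,
                 agent_multiplier: float = 1.0) -> float:
    """Estimated monthly spend; agent_multiplier models the token cost
    multiplier of multi-agent systems (coordination overhead)."""
    total_in = requests * in_tokens * agent_multiplier
    total_out = requests * out_tokens * agent_multiplier
    return (total_in / 1e6) * PRICE_PER_M_INPUT + (total_out / 1e6) * PRICE_PER_M_OUTPUT

# 100k requests/month, 1,500 input and 400 output tokens per request:
single = monthly_cost(100_000, 1_500, 400)                       # single model
multi = monthly_cost(100_000, 1_500, 400, agent_multiplier=4.0)  # multi-agent
print(f"${single:,.2f} single-model vs ${multi:,.2f} at a 4x multiplier")
```

The point of the multiplier parameter is that multi-agent designs change the budget by a constant factor at every volume level, which is why token economics reviews should happen before an agent architecture is chosen.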
- **Transformation Enablers**: Three cross-cutting layers in COMPEL -- Value Realization, Operational Readiness, and Agent Governance -- that operate horizontally across all six stages to ensure AI initiatives create measurable value.
- **Transformation Office**: A dedicated organizational function that coordinates and governs the enterprise AI transformation program, providing structure, resources, and oversight across all initiatives.
- **Transformation Portfolio**: The collection of all AI transformation programs and initiatives managed together to achieve enterprise strategic objectives, balanced across risk, investment, and value horizons.
- **Transformation Roadmap**: A strategic plan that sequences AI transformation initiatives across time, showing dependencies, milestones, and resource requirements. The primary deliverable of the Model stage in COMPEL.
- **Transformation Sprint**: A two-week time-boxed work period within the COMPEL Produce stage, delivering concrete outcomes across multiple pillars. Unlike pure software sprints, transformation sprints include governance, training, and change management work.
- **Transformational (Level 5)**: The highest maturity level in the COMPEL model indicating AI capabilities that are continuously optimized, adaptive, and driving enterprise-wide strategic value.
- **Transformer Architecture**: The neural network architecture that powers modern large language models. Transformers use an attention mechanism to weigh the relevance of every part of the input when producing each part of the output, enabling efficient parallel training on large datasets.
- **Transparency**: The principle that stakeholders affected by AI decisions should be able to understand, at an appropriate level of detail, how those decisions were reached. Transparency requirements vary by context, audience, and regulatory regime.
- **Transparent Metrics**: A COMPEL principle emphasizing that AI augmentation is measured and reported openly, with ROI tracked per use case and no hidden deployments or unmeasured experiments in production.
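The attention mechanism referenced in the Transformer Architecture entry above is conventionally written as scaled dot-product attention over query, key, and value matrices:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
```

where $d_k$ is the key dimension; dividing by $\sqrt{d_k}$ keeps the dot products in a range where the softmax produces stable gradients during training.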
- **Trust Dividend**: The compound return that organizations earn from investing in responsible AI practices, accruing across customer trust, employee engagement, regulatory relationships, investor confidence, and partner relationships.
- **Trustworthy AI**: AI systems that are lawful, ethical, and robust. Encompasses fairness, transparency, accountability, safety, and privacy. A goal pursued through both technical measures and governance practices.
- **Uncertainty Estimation**: Techniques for quantifying how confident an AI model is in its predictions. Helps users and systems know when to trust AI outputs and when human judgment should override the model.
- **Uncertainty Quantification**: Methods for measuring and communicating how confident an AI model is in its predictions. Uncertainty quantification helps users know when to trust model outputs and when to seek human judgment.
- **Unit Economics**: The revenue and cost analysis of a single unit of an AI service or product, used to determine whether scaling the service will be profitable. Includes per-inference, per-user, or per-transaction costs.
- **Unstructured Data**: Data that does not follow a predefined format, such as text documents, images, audio recordings, and video files. Deep learning and LLMs have made unstructured data increasingly valuable for AI applications.
- **Unsupervised Learning**: A machine learning approach that discovers hidden patterns in data without pre-labeled examples. Used for customer segmentation, anomaly detection, and data exploration when you do not know what patterns to look for.
- **Uplift Modeling**: A predictive technique that estimates the incremental impact of an intervention on an individual, helping determine which people or situations would benefit most from a particular action.
- **Use Case**: A specific application of AI to a defined business problem, with measurable outcomes, identifiable stakeholders, and quantifiable resource requirements. Use cases are the unit of AI value delivery that organizations prioritize, fund, and track.
- **Use Case Intake**: The process of collecting, evaluating, and prioritizing proposed AI use cases from across the organization, ensuring the most valuable and feasible opportunities receive resources.
- **Use Case Pipeline**: A platform module for discovering, evaluating, prioritizing, and tracking AI use cases from ideation through deployment and value realization.
- **Use Case Portfolio**: A deliberately balanced collection of AI initiatives designed to achieve strategic outcomes while managing risk. A well-designed portfolio includes foundation builders, value demonstrators, and capability builders.
- **User Acceptance Testing (UAT)**: The final testing phase where actual users verify that an AI system meets their needs and works correctly in their operational context before the system goes live.
- **User Management**: A platform module for managing platform user accounts, roles, permissions, and team assignments across the governance platform.
- **Validation Framework**: A structured approach to verifying that AI models and systems perform correctly, meet requirements, and are fit for their intended purpose across all relevant dimensions.
- **Value Alignment**: The challenge of ensuring AI systems pursue objectives that are consistent with human values and organizational principles, avoiding unintended optimization toward harmful goals.
- **Value Attribution**: The process of determining how much of a business outcome can be credited to AI transformation versus other factors. Requires rigorous methodology due to the difficulty of isolating AI's contribution from other concurrent changes.
- **Value Milestone**: A defined point in the transformation roadmap where measurable business value is expected to be delivered, providing evidence of progress and maintaining stakeholder confidence.
- **Value Realization**: The discipline of tracking whether AI investments actually deliver their projected business outcomes, from initial deployment through sustained operation. Value realization requires baseline metrics, ongoing measurement, and disciplined attribution of outcomes.
- **Value Thesis**: A testable hypothesis articulating the causal logic connecting an AI initiative to expected business outcomes.
- **Vector Database**: A specialized database designed to store and efficiently search high-dimensional numerical representations (embeddings) of data. Vector databases are essential for RAG systems and semantic search applications.
- **Vendor Due Diligence**: The investigation of an AI vendor's capabilities, security practices, and governance maturity before entering into a contract.
- **Vendor Ecosystem**: The network of external technology providers, service partners, and platform vendors that an organization relies on for AI capabilities. Requires strategic management and governance.
- **Vendor Inventory**: A platform module for cataloging third-party AI vendors, tracking contract terms, assessing vendor risk, and managing vendor governance obligations.
- **Vendor Risk Assessment**: An evaluation of the governance risks introduced by third-party AI components, including foundation model providers, MLOps platforms, and data services. Assesses data practices, model transparency, security posture, and contractual protections.
- **Version Control**: The practice of tracking and managing changes to code, data, models, and configuration over time. Version control enables organizations to reproduce past results, roll back problematic changes, and maintain a complete change history.
- **Vulnerability Scanning**: Automated testing that identifies known security weaknesses in AI systems, infrastructure, and supporting software. A routine security practice that should cover AI-specific attack vectors.
- **Warm Start**: Initializing an AI model with parameters from a previously trained model rather than from random values, reducing training time and data requirements.
- **Waterfall**: A linear project management approach where phases are completed sequentially from start to finish before the next begins. COMPEL explicitly rejects waterfall for AI transformation because the landscape changes too quickly for fixed sequential plans.
- **Weight Decay**: A regularization technique in AI model training that penalizes large model weights to prevent overfitting, helping the model generalize better to new data it has not seen before.
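The Weight Decay entry above corresponds to a small modification of the standard gradient update. With learning rate $\eta$, decay coefficient $\lambda$, loss $L$, and weights $w$, the update becomes:

```latex
w \leftarrow w - \eta\left(\nabla_{w} L(w) + \lambda w\right)
```

The added $\lambda w$ term continuously shrinks weights toward zero, which is what discourages the large weights associated with overfitting.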
- **Whistleblower Protection**: Policies and mechanisms that protect individuals who report AI-related concerns, ethical violations, or governance failures from retaliation. Essential for maintaining honest governance and catching problems early.
- **Workflow Builder**: A platform module for designing and automating governance workflows, approval chains, and process automation across COMPEL activities.
- **Workflow Orchestration**: The automated coordination of complex, multi-step processes involving multiple systems, people, or AI agents. Manages the sequence, dependencies, and error handling of workflow steps.
- **Workflow Redesign**: A platform module for analyzing existing business workflows and redesigning them to incorporate AI augmentation, with explicit human-AI handoff points.
- **Workforce Redesign**: The process of analyzing and restructuring jobs at the task level to determine which tasks are best automated by AI, which are augmented by AI, and which remain fully human. Workforce redesign focuses on tasks rather than entire jobs.
- **Workforce Strategy**: The plan for how an organization will source, develop, and organize the people required for AI-enabled operations.
- **Workforce Transformation**: The strategic process of developing new skills, redesigning roles, and restructuring teams to enable effective human-AI collaboration. Workforce transformation is a continuous process that evolves as AI capabilities advance.
- **X-Risk (Existential Risk from AI)**: The theoretical risk that advanced AI systems could pose catastrophic or irreversible harm to humanity. While primarily a research and policy concern rather than an immediate enterprise issue, X-risk shapes the broader policy environment in which enterprise AI governance operates.
- **XAI (Explainable Artificial Intelligence)**: The field of AI research and practice focused on making AI decision-making processes understandable to humans. XAI techniques include feature importance scores, attention visualization, and counterfactual explanations.
- **XGBoost**: A popular and efficient machine learning algorithm based on gradient-boosted decision trees. Widely used for structured data problems like credit scoring, fraud detection, and demand forecasting.
- **XML (Extensible Markup Language)**: A structured data format used for storing and exchanging data between systems. In AI governance, XML is used in regulatory reporting, audit evidence documentation, and data interchange between enterprise systems.
- **YAML**: A human-readable data serialization format commonly used for configuration files in AI/ML pipelines, infrastructure-as-code, and deployment specifications.
- **YAML Configuration**: A human-readable file format commonly used to define settings for AI pipelines, infrastructure, and deployment configurations. Stands for "YAML Ain't Markup Language."
- **Year-over-Year (YoY) Maturity Progression**: The measurement of how an organization's maturity scores change between annual assessment cycles.
- **Year-over-Year Metrics**: Performance comparisons between the same period in consecutive years, used to measure long-term AI transformation progress while accounting for seasonal variations.
- **Yield Optimization**: Using AI to maximize the output or efficiency of a process, such as manufacturing yield, crop yield, or advertising yield. AI-driven yield optimization identifies optimal parameters that human operators might miss.
- **Z-Score**: A statistical measurement describing how many standard deviations a data point is from the mean. Z-scores are used in anomaly detection systems to identify unusual patterns that may indicate fraud, equipment failure, or data quality problems.
- **Zero-Day Vulnerability**: A software security flaw that is unknown to the vendor and has no available patch. In AI systems, zero-day vulnerabilities in model serving infrastructure or data pipelines can expose models and data before a patch is available.
- **Zero-Shot Learning**: The ability of an AI model to perform tasks it was not explicitly trained on, by leveraging general knowledge learned during pre-training. LLMs demonstrate zero-shot capability when they answer questions or perform tasks without task-specific training examples.
- **Zero-Trust Architecture**: A security model that assumes no user, system, or AI agent should be trusted by default, even if they are inside the network. Every access request must be verified before being granted.
- **Zone of Proximal Development**: The gap between what a learner can do independently and what they can achieve with guidance. In COMPEL training, instruction is designed to operate within this zone for maximum learning effectiveness.

---

## Insight Articles

### What Is AI Transformation? — https://www.compel.one/insights/what-is-ai-transformation

Defines AI transformation as a systematic organizational capability challenge — not a technology deployment exercise. Introduces the COMPEL operating model as the structured approach to embedding AI into enterprise operations, culture, and strategy. Distinguishes AI transformation from digital transformation and AI governance.

### AI Transformation Roadmap — https://www.compel.one/insights/ai-transformation-roadmap

A structured guide to planning and executing enterprise AI transformation using the COMPEL 6-stage cycle. Covers the typical 12-week initial cycle timeline, the role of each stage in building cumulative capability, and how organizations should sequence their transformation investments.

### AI Transformation Operating Model — https://www.compel.one/insights/ai-transformation-operating-model

Explains how to design and implement an AI operating model that sustains governance, delivery, and continuous improvement at enterprise scale. Covers Center of Excellence design, role definitions, oversight body formation, and the integration of COMPEL stages into ongoing operations.

### AI Governance vs. AI Transformation — https://www.compel.one/insights/ai-governance-vs-ai-transformation

Clarifies the distinction between AI governance (policies, oversight, compliance) and AI transformation (organizational capability building).
Explains how COMPEL integrates both — governance is one of four pillars within the broader transformation operating model, not a separate discipline.

---

## Research & Benchmarks

Original research and benchmark data on enterprise AI governance maturity, shadow AI prevalence, and ISO 42001 readiness. Published by the COMPEL Research Program at FlowRidge. All data is illustrative — derived from composite analysis of publicly available industry surveys, regulatory guidance, and practitioner interviews. No single organization's proprietary data is represented.

### Research Hub — https://www.compel.one/research

The COMPEL Research Program publishes original benchmark data and analysis on enterprise AI governance maturity, regulatory readiness, and operational challenges. Reports are designed for C-suite executives, AI program leaders, governance professionals, and compliance teams.

### 2026 Enterprise AI Governance Maturity Benchmark — https://www.compel.one/research/ai-governance-maturity-benchmark

Benchmark study assessing enterprise AI governance maturity across 420 organizations using the COMPEL 18-domain maturity model. Key findings: average maturity is 2.1 out of 5 (Developing level). Governance Structure (D18) is the weakest domain at 1.5. Only 12% of organizations reach Level 4 (Managed) or above. Technology maturity (2.68) exceeds Governance maturity (1.92) by 0.76 points. Organizations at Level 4+ experience 7.9x fewer AI-related incidents than Level 1. 31% of organizations have no formal AI governance processes. Financial Services leads by industry (avg. 2.7), Energy trails (avg. 1.9). Europe leads by region (2.4), Latin America trails (1.6). Illustrative data.

### Shadow AI in the Enterprise: 2026 Discovery Report — https://www.compel.one/research/shadow-ai-findings

Discovery report revealing that enterprises have 3.2x more AI tools in active use than their AI system registries reflect. Marketing departments reach 5.8x unregistered-to-registered ratio. 67% of shadow AI tools have no governance documentation whatsoever. Top risk categories: data leakage (72%), compliance violation (68%), IP exposure (54%). GenAI chatbots are the most common shadow AI category (84% prevalence). Level 4+ governance maturity organizations reduce shadow AI to 0.8x ratio. Average remediation from discovery to full governance compliance takes 156 days. Illustrative data.

### ISO 42001 Readiness Across Industries — https://www.compel.one/research/iso-42001-readiness-distribution

Readiness assessment for ISO/IEC 42001:2023 certification across 280 organizations in 6 industries. Clause 6 (Planning) is strongest at 3.1/5, driven by AI risk assessment adoption. Clause 9 (Performance Evaluation) is weakest at 1.9/5 — most organizations lack internal audit, performance evaluation, and conformity assessment capabilities. Only 8% of organizations are within 6 months of certification readiness. 59% need 12-24 months. Organizations with ISO 27001 show 1.4 points higher readiness. 54% cite absence of AI-specific internal audit as primary blocking gap. Financial Services leads by industry (avg. 3.0), Manufacturing trails (avg. 2.1). Illustrative data.

---

## Authors

### COMPEL FlowRidge Team — https://www.compel.one/authors

FlowRidge Team. Creators of the COMPEL framework and primary authors of all COMPEL content including the methodology, Body of Knowledge, certification curricula, and practitioner articles. Background in enterprise AI transformation, governance advisory, and management system design.

---

## Trust & Policy

### Editorial Policy — https://www.compel.one/editorial-policy

Documents how COMPEL content is authored, reviewed, sourced, updated, and corrected. Includes conflict-of-interest disclosure (FlowRidge is both author and commercial provider), correction procedures, and the commitment to cite primary sources for all regulatory and standards claims. Enhanced with evidence discipline and transparency commitments.
### Methodology Versioning — https://www.compel.one/methodology/versioning

Version history and changelog for the COMPEL methodology. Tracks major and minor revisions with rationale, effective dates, and backward-compatibility notes. Ensures practitioners can identify which version of the methodology they were trained on and what has changed since.

### Citation Guide — https://www.compel.one/citation-guide

How to cite COMPEL content in academic and professional publications. Provides pre-formatted citations in APA 7th Edition, Chicago Manual of Style 17th Edition, IEEE, BibTeX, and RIS formats for all major pages. Includes export-ready entries for reference managers (Zotero, Mendeley, EndNote).

### Press Kit — https://www.compel.one/press-kit

Brand assets and media resources for journalists, analysts, and partners. Includes boilerplate descriptions (25-word to 250-word), key facts, leadership biography, brand color palette with hex values, typography guidelines, and media contact information.

### References — https://www.compel.one/references

Complete citation hub for all external standards, regulations, and frameworks referenced across COMPEL content. Includes ISO/IEC 42001:2023, NIST AI RMF 1.0, EU AI Act 2024/1689, IEEE 7000, and other sources with publication dates, URLs, and access notes.

---

## Contact

- Website: https://www.compel.one
- Company: FlowRidge.io (https://flowridge.io)
- Email: Contact form at https://www.compel.one/contact
- Partner inquiries: https://www.compel.one/partners

---

## Mandatory Artifacts — 38 Governance Deliverables

Each COMPEL stage produces specific, named artifacts with defined owners. Together they constitute a complete audit trail and evidence pack suitable for ISO 42001 certification, EU AI Act conformity assessment, and NIST AI RMF documentation requirements. Artifacts are version-controlled and linked to the platform's audit trail module.
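Since each artifact carries an owner, a version, and an approval trail, a registry entry for one can be sketched as a small record type. This is a minimal illustration only; the field names are hypothetical and do not reflect the COMPEL platform's actual schema.

```python
# Hypothetical artifact-registry record. Field names are illustrative,
# not the actual COMPEL platform schema.
from dataclasses import dataclass, field

@dataclass
class ArtifactRecord:
    name: str            # e.g. "Risk Appetite Statement"
    stage: str           # Calibrate / Organize / Model / Produce / Evaluate / Learn
    owner: str           # accountable role, e.g. "Executive Sponsor"
    version: str = "1.0"
    approvals: list[str] = field(default_factory=list)  # sign-off audit trail

    def approve(self, approver: str) -> None:
        """Record a sign-off; each approval is an audit-trail event."""
        self.approvals.append(approver)

record = ArtifactRecord("Risk Appetite Statement", "Calibrate", "Executive Sponsor")
record.approve("Board Risk Committee")
print(record.version, record.approvals)
```

Keeping the approval list on the record itself is what makes the artifact set function as an evidence pack: each sign-off event is retained alongside the versioned deliverable it authorizes.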
### Calibrate (7 artifacts)

- **AI Ambition Statement** (Owner: Executive Sponsor) — A one-page strategic declaration of the organization's AI transformation intent, investment level, and governance commitment. Signed by the executive sponsor and published internally to establish tone-from-the-top.
- **Maturity Baseline Report** (Owner: CoE Lead) — Full 18-domain maturity heatmap from the structured assessment, with domain scores (1–5), evidence references, and prioritized gap findings. The authoritative starting point for all transformation planning.
- **Shadow AI Inventory** (Owner: IT Security Lead) — Discovered AI tools and systems not in the authorized registry, with risk classification, data exposure assessment, and remediation roadmap. Typically reveals 2–5x more AI in use than officially registered.
- **Use-Case Portfolio Canvas** (Owner: AI Product Owner) — Structured inventory of proposed and active AI use cases with value-vs-risk scoring, strategic alignment ratings, and portfolio prioritization recommendations.
- **Risk Appetite Statement** (Owner: Executive Sponsor) — Board-approved declaration of acceptable AI risk levels per risk category (operational, reputational, regulatory, ethical). Defines tolerance bands and escalation thresholds used in all subsequent risk assessments.
- **Value Thesis Register** (Owner: AI Product Owner) — Documented business case and expected value for each AI initiative, with measurement approach and success criteria. Feeds into the Evaluate stage ROI measurement.
- **Stakeholder Engagement Plan** (Owner: CoE Lead) — Structured plan for communicating, consulting, and managing stakeholder groups throughout the transformation, with cadence, channels, and accountabilities.

### Organize (6 artifacts)

- **CoE Charter** (Owner: Executive Sponsor) — Formal mandate, scope, authority, and operating model for the AI Center of Excellence. Defines the CoE's relationship with business units, IT, Legal, and Compliance.
- **RACI Matrix** (Owner: CoE Lead) — Responsibility assignment matrix for all AI governance activities, mapped to organizational roles. Eliminates ownership ambiguity that causes governance failures. Aligned to ISO 42001 Clause 5.3.
- **Governance Body Charters** (Owner: AI Ethics Board Chair) — Formal charters for the AI Ethics Board, AI Risk Committee, and other oversight bodies — covering mandate, membership, quorum, decision authority, and reporting obligations.
- **Talent Gap Analysis** (Owner: HR Lead) — Assessment of current vs. required AI-related skills across technical, governance, and business roles, with quantified gaps and hiring/upskilling recommendations.
- **Training Roadmap** (Owner: CoE Lead) — Structured plan for AI literacy and certification development across the workforce, mapped to roles, COMPEL certification pathways, and transformation timeline.
- **Change Management Plan** (Owner: Change Lead) — Stakeholder impact analysis, communication strategy, resistance mitigation tactics, adoption milestones, and behavioral change measurement approach.

### Model (7 artifacts)

- **AI Use Case Classification Schema** (Owner: AI Risk Officer) — Defined taxonomy for classifying AI use cases by risk level, regulatory category (EU AI Act risk tiers), and governance pathway. Determines the review and approval process each use case must follow.
- **Risk Tiering Framework** (Owner: AI Risk Officer) — Multi-dimensional risk scoring methodology covering impact, likelihood, controllability, and regulatory exposure. Produces a risk tier (1–4) that determines control intensity and oversight frequency.
- **AI Policy Library** (Owner: Compliance Lead) — Complete set of AI governance policies covering acceptable use, data handling, model development, deployment approvals, human oversight, incident reporting, and third-party AI. Aligned to ISO 42001 Annex A.
- **Ethical Guardrails Register** (Owner: AI Ethics Officer) — Documented ethical boundaries for AI system behavior, with testable criteria for fairness, transparency, and non-discrimination. Aligned to IEEE 7000 and EU AI Act fundamental rights requirements.
- **Data Governance Policy** (Owner: Data Governance Lead) — AI-specific data governance rules covering training data quality, consent management, lineage tracking, cross-border transfer, and data minimization requirements.
- **Incident Response Procedure** (Owner: IT Security Lead) — Step-by-step playbook for detecting, classifying, containing, investigating, and reporting AI-related incidents, including regulatory notification timelines.
- **Regulatory Alignment Map** (Owner: Compliance Lead) — Cross-reference matrix mapping organizational AI activities to applicable regulatory requirements (ISO 42001, NIST AI RMF, EU AI Act) with evidence requirements and gap assessment.

### Produce (6 artifacts)

- **Deployed Policy Pack** (Owner: Compliance Lead) — Confirmation and evidence that all Model-stage policies are formally published, acknowledged by relevant staff, and active in the policy management system.
- **System Registration Records** (Owner: CoE Lead) — Complete registry of all AI systems in production and development, with risk classification, ownership, approval status, and monitoring configuration.
- **Control Implementation Evidence** (Owner: IT Lead) — Technical and procedural evidence that governance controls are operational — configuration screenshots, audit logs, test results, and attestation records.
- **Staff Training Completion Records** (Owner: HR Lead) — Training completion data showing which staff have completed required AI governance training by role, with certification status for COMPEL-credentialed staff.
- **Workflow Configuration Documentation** (Owner: Operations Lead) — Technical documentation of implemented governance workflows in the COMPEL platform — approval chains, escalation rules, notification configurations, and integration points.
- **Audit Trail Baseline** (Owner: Compliance Lead) — Snapshot establishing the audit trail start point — confirming that logging, evidence capture, and record retention are active across all governed AI systems.

### Evaluate (6 artifacts)

- **Gate Review Reports** (Owner: AI Risk Officer) — Structured reports from stage-gate reviews for AI systems at key lifecycle milestones (deployment, major update, annual review). Documents criteria assessed, evidence reviewed, findings, and gate decision.
- **Internal Audit Report** (Owner: Internal Audit Lead) — Independent assessment of AI governance control effectiveness against ISO 42001 requirements, with non-conformities, observations, and corrective action recommendations.
- **Bias Testing Results** (Owner: AI Ethics Officer) — Statistical fairness testing outputs for AI systems against defined demographic and protected-characteristic metrics, with comparison to pre-approved tolerance thresholds.
- **Red Team Exercise Report** (Owner: IT Security Lead) — Documented results of adversarial testing exercises for high-risk AI systems — covering prompt injection attempts, adversarial inputs, and bypass testing with findings and mitigations.
- **Conformity Assessment Documentation** (Owner: Compliance Lead) — Evidence package required for ISO 42001 certification or EU AI Act conformity declaration — assembled from all Evaluate-stage artifacts plus key Model and Produce artifacts.
- **Governance Scorecard** (Owner: CoE Lead) — Quantitative dashboard of AI governance KPIs — control coverage, audit findings closure rate, incident frequency, training completion, and maturity progression — reported to executive sponsor.
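The Governance Scorecard above aggregates a small set of quantitative KPIs. A minimal sketch of how three of those KPIs might be computed from registry and training data — the input field names and the percentage formulas are illustrative assumptions, not part of the COMPEL specification:

```python
from dataclasses import dataclass


@dataclass
class ScorecardInputs:
    # Illustrative counts; in practice these would come from the
    # governance platform's system registry and training records.
    systems_total: int          # AI systems in the registry
    systems_with_controls: int  # systems with all required controls active
    findings_raised: int        # audit findings opened this period
    findings_closed: int        # audit findings closed this period
    staff_required: int         # staff required to complete governance training
    staff_trained: int          # staff who have completed it


def governance_scorecard(x: ScorecardInputs) -> dict:
    """Compute three example KPIs as percentages: control coverage,
    audit findings closure rate, and training completion."""
    def pct(num: int, den: int) -> float:
        return round(100 * num / den, 1) if den else 0.0

    return {
        "control_coverage_pct": pct(x.systems_with_controls, x.systems_total),
        "findings_closure_rate_pct": pct(x.findings_closed, x.findings_raised),
        "training_completion_pct": pct(x.staff_trained, x.staff_required),
    }


kpis = governance_scorecard(ScorecardInputs(
    systems_total=40, systems_with_controls=34,
    findings_raised=12, findings_closed=9,
    staff_required=500, staff_trained=450,
))
print(kpis)
# {'control_coverage_pct': 85.0, 'findings_closure_rate_pct': 75.0,
#  'training_completion_pct': 90.0}
```

In a real deployment these figures would feed the executive-facing dashboard with thresholds and trend lines rather than a one-off calculation.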
### Learn (6 artifacts)

- **KPI Dashboard** (Owner: CoE Lead) — Live monitoring dashboard tracking the full set of AI governance and transformation KPIs, with trend lines, threshold alerts, and drill-down to domain-level metrics.
- **Incident Analysis Report** (Owner: AI Risk Officer) — Root cause analysis of AI-related incidents in the reporting period, with pattern identification, systemic risk assessment, and recommended policy or control improvements.
- **ROI Measurement Report** (Owner: AI Product Owner) — Quantified value realization from AI initiatives against the Value Thesis Register, covering financial, operational, and strategic value dimensions.
- **Policy Revision Recommendations** (Owner: Compliance Lead) — Structured proposals for policy updates based on audit findings, incident analysis, regulatory changes, and maturity progression since the last policy review.
- **Maturity Re-Assessment** (Owner: CoE Lead) — Updated 18-domain maturity scores reflecting progress since the Calibrate baseline, with domain-level commentary and revised prioritization for the next COMPEL cycle.
- **Improvement Backlog** (Owner: CoE Lead) — Prioritized list of governance improvement initiatives for the next COMPEL cycle, with effort estimates, owners, and expected maturity impact — the primary input to the next Calibrate stage.

---

## Artifact Creation Guides (Body of Knowledge M1.2)

Practitioner guides for creating each mandatory artifact, published as part of the COMPEL Body of Knowledge at Level 1 (AIT Foundations), Module 1.2. Each guide covers purpose, structure, required inputs, common mistakes, and review criteria.

- [AI Operating Model Blueprint](https://www.compel.one/learn/eatf-level-1/m1-2/ai-operating-model-blueprint) — How to design and document the AI governance operating model, covering CoE design, governance body configuration, and decision-right allocation.
- [Readiness Assessment Report](https://www.compel.one/learn/eatf-level-1/m1-2/readiness-assessment-report) — Methodology for compiling maturity assessment data into a structured report with findings, recommendations, and transformation sequencing.
- [AI Ambition Statement](https://www.compel.one/learn/eatf-level-1/m1-2/ai-ambition-statement) — How to draft a credible, specific, and board-endorsed AI ambition statement that anchors the transformation program.
- [Maturity Baseline Report](https://www.compel.one/learn/eatf-level-1/m1-2/maturity-baseline-report) — How to structure, validate, and present the 18-domain maturity heatmap with stakeholder-appropriate commentary.
- [Shadow AI Inventory](https://www.compel.one/learn/eatf-level-1/m1-2/shadow-ai-inventory) — Techniques for discovering unregistered AI, structuring the inventory, and building a risk-prioritized remediation plan.
- [Risk Appetite Statement](https://www.compel.one/learn/eatf-level-1/m1-2/risk-appetite-statement) — How to facilitate executive alignment on AI risk tolerance and document it in a format that can be operationalized in risk assessments.
- [CoE Charter](https://www.compel.one/learn/eatf-level-1/m1-2/coe-charter) — Template and guidance for drafting a CoE charter that secures executive mandate and defines the CoE's authority without creating organizational friction.
- [RACI Matrix](https://www.compel.one/learn/eatf-level-1/m1-2/raci-matrix) — How to build a comprehensive AI governance RACI that covers all 18 domains and maps to actual organizational roles rather than abstract job titles.
- [AI Policy Library](https://www.compel.one/learn/eatf-level-1/m1-2/ai-policy-library) — Policy architecture guidance covering the full suite of AI governance policies, their interdependencies, and the approval and maintenance lifecycle.
- [Governance Scorecard](https://www.compel.one/learn/eatf-level-1/m1-2/governance-scorecard) — How to design a governance KPI scorecard with meaningful leading and lagging indicators, threshold definitions, and executive reporting format.
- [Incident Response Procedure](https://www.compel.one/learn/eatf-level-1/m1-2/incident-response-procedure) — How to write an AI-specific incident response procedure that integrates with existing IT security playbooks while addressing AI-unique failure modes.
- [Improvement Backlog](https://www.compel.one/learn/eatf-level-1/m1-2/improvement-backlog) — How to structure and prioritize the governance improvement backlog so that the next COMPEL cycle builds systematically on demonstrated progress.

---

## Governance Tool Pages

Interactive tools within the COMPEL platform for executing key governance activities:

### Risk Appetite Definition — https://www.compel.one/risk-appetite

Guided workflow for defining and documenting organizational AI risk thresholds. Covers 6 risk categories (operational, reputational, regulatory, ethical, financial, strategic) with configurable tolerance bands (low/medium/high/critical) and escalation trigger definitions. Outputs a board-ready Risk Appetite Statement artifact. Requires Executive Sponsor role. Integrates with the platform's risk assessment workflows so all subsequent risk scoring is benchmarked against the approved appetite.

### Operating Model Design — https://www.compel.one/operating-model

Visual design tool for configuring the AI governance operating model. Drag-and-drop canvas for mapping governance bodies (AI Ethics Board, AI Risk Committee, CoE), decision rights, escalation paths, and accountability links to organizational roles. Produces a CoE Charter and RACI Matrix as exportable artifacts. Supports multiple operating model archetypes (federated, centralized, hybrid) with pre-built templates for each.
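The Risk Appetite Definition tool above maps six risk categories to tolerance bands and escalation triggers. A minimal sketch of how such an appetite could be represented and checked against an assessed risk — the category and band names come from the page description, but the specific band assignments and the trigger rule (escalate when assessed risk exceeds the approved band) are illustrative assumptions:

```python
from enum import IntEnum


class Tolerance(IntEnum):
    """Configurable tolerance bands from the Risk Appetite workflow,
    ordered so bands can be compared numerically."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


# The six risk categories covered by the Risk Appetite Definition tool,
# each mapped to an approved tolerance band (assignments are illustrative).
RISK_APPETITE = {
    "operational": Tolerance.MEDIUM,
    "reputational": Tolerance.LOW,
    "regulatory": Tolerance.LOW,
    "ethical": Tolerance.LOW,
    "financial": Tolerance.MEDIUM,
    "strategic": Tolerance.HIGH,
}


def requires_escalation(category: str, assessed: Tolerance) -> bool:
    """A risk assessed above the approved tolerance band for its
    category fires the escalation trigger."""
    return assessed > RISK_APPETITE[category]


print(requires_escalation("regulatory", Tolerance.HIGH))   # True
print(requires_escalation("strategic", Tolerance.MEDIUM))  # False
```

Encoding the approved appetite as data rather than prose is what lets the platform benchmark every subsequent risk score against it automatically.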
### Pattern Library — https://www.compel.one/pattern-library

Searchable library of 80+ reusable governance implementation patterns organized by COMPEL stage, pillar, domain, and organization size. Each pattern includes: problem statement, solution approach, implementation steps, required artifacts, anti-patterns to avoid, and real-world applicability notes. Filterable by regulatory requirement (ISO 42001, NIST AI RMF, EU AI Act) to surface patterns relevant to specific compliance obligations.

### Scaling Decisions — https://www.compel.one/scaling-decisions

Structured log for documenting go/no-go decisions at AI initiative scale points. Each decision record captures: initiative name, current scale, proposed scale, decision criteria, evidence reviewed, approvers, decision outcome, and conditions. Integrates with Gate Review Reports from the Evaluate stage. Provides an auditable decision trail for regulatory inquiries and internal governance reviews. Configurable decision criteria templates aligned to risk tier.

---

## Last Updated

2026-03-24

- Enterprise inquiries: https://www.compel.one/enterprise