Control Requirements Matrix

Level 1: AI Transformation Foundations · Module M1.2: The COMPEL Six-Stage Lifecycle · 10 min read · Version 1.0 · Last reviewed: 2025-01-15 · Open Access

COMPEL Certification Body of Knowledge — Module 1.2: The COMPEL Six-Stage Lifecycle

Article 19 of 22


Risk identification without risk control is an academic exercise. Organizations that invest in thorough AI risk assessments — cataloging model failure modes, identifying fairness risks, mapping regulatory exposures — but do not translate those assessments into specific, implemented, monitored controls have produced documentation, not governance. The Control Requirements Matrix is the artifact that completes this translation.

The Matrix maps every identified AI risk to a specific set of governance controls. It distinguishes between controls that are mandatory — required by regulation, by the organization's risk appetite, or by the COMPEL framework itself — and controls that are recommended but discretionary. It specifies the evidence that must be produced to demonstrate each control is operating effectively. And it connects controls to the AI system classification tiers that determine which controls apply to which systems. The result is a governance instrument that practitioners can use to answer, for any AI system in the portfolio, the question: what controls are required here, who is responsible for them, and how do we know they are working?

This article provides a comprehensive guide to building and maintaining the Control Requirements Matrix, including control categories, the required-versus-recommended distinction, evidence requirements, and the relationship to AI system classification tiers.

What the Control Requirements Matrix Is

The Control Requirements Matrix (TMPL-M-002) is a mandatory Model-stage artifact owned by the Model Risk Manager, with review and approval required from the CoE Lead and the AI Risk Committee. It is produced early in the Model stage, after the AI system classification tier assignments have been completed, and it serves as the master reference for control implementation across the AI portfolio.

The Matrix is structured as a two-dimensional mapping: AI risks on one axis, governance controls on the other. Each cell in the Matrix indicates whether the control is required or recommended for a system exhibiting the corresponding risk, the classification tier thresholds that trigger mandatory control application, and the evidence requirement that demonstrates effective control operation.

Unlike the Risk Taxonomy (which catalogs risk types) or the Risk Register (which records specific risk instances for specific systems), the Matrix operates at the level of risk categories and control types. It is a framework document, not a system-specific document. Its value lies in providing consistent governance guidance across the entire AI portfolio — ensuring that all systems presenting the same risk profile are governed with the same rigor.
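As a concrete illustration, the two-dimensional mapping can be represented as a set of cells, each keyed by a risk category and a control. All identifiers and values below are hypothetical examples, not part of the COMPEL templates:

```python
from dataclasses import dataclass

# Hypothetical sketch of one Matrix cell: a (risk category, control) pair
# with its requirement level, the classification tier at which the control
# becomes mandatory, and the evidence that demonstrates effective operation.
@dataclass(frozen=True)
class MatrixCell:
    risk_category: str  # e.g. "fairness", "performance", "drift"
    control_id: str     # e.g. "independent-validation"
    requirement: str    # "required" or "recommended"
    min_tier: int       # tier threshold that triggers mandatory application
    evidence: str       # artifact demonstrating effective control operation

# Illustrative cells; a real Matrix would cover every risk/control pairing.
matrix = [
    MatrixCell("fairness", "bias-testing", "required", 2,
               "Bias Test Report"),
    MatrixCell("performance", "independent-validation", "required", 3,
               "Validation Report (TMPL-M-004)"),
    MatrixCell("drift", "drift-monitoring", "required", 1,
               "Monitoring Dashboard extract"),
]
```

The point of the structure is that every cell answers all three governance questions at once: whether the control applies, at what tier it becomes mandatory, and what evidence proves it is working.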

Control Categories

The Matrix organizes governance controls into six categories. Each category addresses a distinct dimension of AI governance risk.

Category 1: Development Controls

Development controls govern the AI system development process, from initial design through final model training. They include: requirements documentation standards, training data quality checks, bias testing protocols, model architecture review processes, and development environment security controls. Development controls are primarily preventive — they are designed to eliminate governance risks before they are embedded in deployed systems.

For high-risk AI systems (Tier 3 and above in the standard COMPEL classification schema), development controls include mandatory independent review of the training data composition, algorithmic impact assessment during the design phase, and adversarial testing before deployment authorization.

Category 2: Validation Controls

Validation controls govern the process of evaluating AI system performance before deployment authorization. They include: test set construction standards, performance threshold requirements by use case type, fairness evaluation protocols, explainability assessments, and validation documentation requirements.

The key distinction for validation controls is the independence requirement. Controls for low-risk systems may permit self-validation by the development team. Controls for high-risk systems require independent validation — performed by a team that had no involvement in system development — and in some cases external third-party validation.

Category 3: Deployment Controls

Deployment controls govern the conditions under which AI systems are released to production use. They include: deployment authorization requirements, staged rollout protocols, user notification requirements, human oversight configuration, and deployment documentation requirements.

Deployment controls are where the human oversight requirements defined in the Agent Autonomy Classification (see Article 20: Agent Autonomy Classification Framework) are translated into technical and operational requirements. A system classified at the Delegated autonomy level requires specific deployment controls — mandatory override capability, continuous monitoring, defined intervention thresholds — that do not apply to Advisory-level systems.

Category 4: Monitoring Controls

Monitoring controls govern the ongoing observation of deployed AI systems. They include: performance monitoring frequency, drift detection protocols, anomaly alerting thresholds, fairness metric tracking, and incident detection requirements.

Monitoring controls are the primary mechanism for detecting governance failures in production. They require both technical infrastructure — logging, dashboarding, alerting — and operational processes — regular review meetings, defined response protocols, escalation paths. The Matrix must specify both dimensions.

Category 5: Access and Security Controls

Access and security controls govern who can interact with AI systems, in what capacities, and under what authentication and authorization conditions. They include: model artifact access controls, inference API authentication requirements, data pipeline access controls, audit log integrity protections, and supply chain security requirements for third-party model components.

For AI systems that handle personal data, access controls must align with the data protection requirements in the Privacy Impact Assessment (TMPL-M-007). For systems where model inversion or membership inference attacks are feasible, technical controls against adversarial extraction must be specified.

Category 6: Documentation and Accountability Controls

Documentation controls govern the records that must be maintained for each AI system throughout its lifecycle. They include: model card requirements, system card requirements, decision log completeness standards, audit trail requirements, and documentation retention schedules.

Accountability controls govern the human accountability structures that attach to each AI system — the designated system owner, the responsible risk reviewer, the deployment authority, and the escalation path for incidents. The Matrix specifies which accountability structures are required for each tier of AI system.

Required Versus Recommended Controls

The Matrix distinguishes clearly between required and recommended controls. This distinction has legal and governance significance and must not be treated casually.

Required controls are those that the organization must implement for any AI system presenting the corresponding risk profile. The requirement may originate from regulation (the EU AI Act mandates specific technical documentation for high-risk AI systems), from the organization's risk appetite statement (which may require independent validation for all systems affecting credit decisions), or from the COMPEL framework itself (which mandates certain controls as baseline governance requirements regardless of regulatory context).

Failure to implement a required control is a governance deficiency that must be escalated through the Risk Committee. It may be accepted as a risk — with formal risk acceptance documentation — but it cannot be left silently unaddressed.

Recommended controls represent governance best practice that the organization should implement but may deprioritize based on resource constraints, risk profile, or strategic context. Recommended controls that are not implemented should be recorded in the system's risk register with a brief rationale, so that future reviewers understand the deliberate decision not to implement them.

The Matrix should be reviewed and updated whenever: new regulations are issued or existing regulations are amended, the organization's risk appetite statement is revised, material new AI risks are identified through the operational monitoring program, or significant AI incidents — internal or at peer organizations — reveal gaps in the existing control framework.

Evidence Requirements Per Control

For each control in the Matrix, the evidence requirement specifies what documentation must exist to demonstrate that the control is implemented and operating effectively. Evidence requirements serve two purposes: they guide practitioners in understanding what "implemented" means for each control, and they enable auditors and reviewers to verify control effectiveness without relying solely on practitioner attestation.

Evidence requirements are specified in three components: artifact (the document or record that constitutes the primary evidence), currency (how recent the artifact must be — some controls require real-time evidence, others require evidence updated annually), and provenance (who must produce or attest to the evidence — in some cases, evidence produced by the development team does not satisfy the independence requirement).

Example evidence requirements illustrate the specificity required:

For the independent validation control applicable to Tier 3 systems: artifact — Validation Report (TMPL-M-004), currency — produced after the final model version and before deployment authorization, provenance — signed by the Model Risk Manager (independent of the development team).

For the drift monitoring control applicable to all deployed systems: artifact — monthly Monitoring Dashboard extract, currency — current month, provenance — generated from the production monitoring system (not manually prepared).
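The two examples above can be captured in a structured record with one field per evidence component. The field names and class below are illustrative, not COMPEL-prescribed:

```python
from dataclasses import dataclass

# Hypothetical encoding of the three evidence components described above.
# Values mirror the article's two worked examples.
@dataclass
class EvidenceRequirement:
    artifact: str    # the document or record constituting primary evidence
    currency: str    # how recent the artifact must be
    provenance: str  # who must produce or attest to the evidence

independent_validation = EvidenceRequirement(
    artifact="Validation Report (TMPL-M-004)",
    currency="after final model version, before deployment authorization",
    provenance="Model Risk Manager (independent of the development team)",
)

drift_monitoring = EvidenceRequirement(
    artifact="Monthly Monitoring Dashboard extract",
    currency="current month",
    provenance="production monitoring system (not manually prepared)",
)
```

Recording evidence requirements in this structured form makes it straightforward for reviewers to check each component separately rather than judging a free-text description as a whole.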

Relationship to AI System Classification Tiers

The COMPEL framework defines four AI system classification tiers based on risk level: Tier 1 (low risk), Tier 2 (limited risk), Tier 3 (high risk), and Tier 4 (unacceptable risk, prohibited from deployment). The Control Requirements Matrix is organized around these tiers: each control specifies the minimum tier threshold at which it becomes required.

A control required at Tier 2 applies to all Tier 2, Tier 3, and higher systems. A control required at Tier 3 applies only to Tier 3 systems and above. This tiered structure ensures that governance resources are allocated proportionally to risk — Tier 1 systems face a lighter control burden, freeing capacity for the intensive governance work that Tier 3 systems require.

The tier assignment for a specific AI system is documented in the System Classification Record (TMPL-M-001). The Control Requirements Matrix is then consulted to identify the full set of required and recommended controls for that system. This lookup process — from tier assignment to control set — is the primary operational use of the Matrix in day-to-day governance work.
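This lookup can be sketched as a simple threshold comparison. The control names and tier thresholds below are hypothetical placeholders, not values defined by the COMPEL framework:

```python
# Hypothetical tier thresholds: the minimum tier at which each control
# becomes required. A control required at Tier 2 applies to Tier 2 and above.
CONTROL_THRESHOLDS = {
    "drift-monitoring": 1,        # required for all deployed systems
    "bias-testing": 2,            # required from Tier 2 upward
    "independent-validation": 3,  # required from Tier 3 upward
}

def required_controls(system_tier: int) -> set[str]:
    """Return the controls whose tier threshold the system meets or exceeds."""
    if system_tier >= 4:
        # Tier 4 systems are prohibited from deployment, so no control set
        # applies; the correct action is to block deployment entirely.
        raise ValueError("Tier 4 systems are prohibited from deployment")
    return {c for c, t in CONTROL_THRESHOLDS.items() if system_tier >= t}

# A Tier 2 system inherits every control required at Tier 1 or Tier 2:
assert required_controls(2) == {"drift-monitoring", "bias-testing"}
```

The cumulative behavior falls out of the comparison: raising a system's tier can only add controls, never remove them, which matches the proportionality principle the tiered structure is meant to enforce.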

Maintenance and Governance

The Control Requirements Matrix is a framework document that requires ongoing maintenance. The Model Risk Manager is responsible for reviewing the Matrix at minimum annually, and updating it in response to the triggers identified above. Major revisions require AI Risk Committee approval. Minor revisions — adding or updating evidence requirements, clarifying control descriptions — may be approved by the CoE Lead alone.

Version control for the Matrix is essential. When the Matrix is updated, all AI systems in the portfolio must be reviewed against the new version to identify whether any previously compliant systems now have gaps. This gap analysis should be documented and tracked through the Risk Committee's remediation workflow.
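The gap analysis step reduces to a set difference: controls the updated Matrix requires minus controls the system has implemented. A minimal sketch, with illustrative control names:

```python
# Hypothetical gap analysis after a Matrix revision: surface controls that
# the new Matrix version requires but the system has not yet implemented.
def control_gaps(implemented: set[str], new_required: set[str]) -> set[str]:
    """Controls required by the updated Matrix but missing from the system."""
    return new_required - implemented

# A system compliant under Matrix v1 is re-checked against v2, which adds
# a new required control (names are illustrative):
system_controls = {"bias-testing", "drift-monitoring"}
matrix_v2_required = {"bias-testing", "drift-monitoring", "adversarial-testing"}

assert control_gaps(system_controls, matrix_v2_required) == {"adversarial-testing"}
```

Running this comparison for every system in the portfolio produces the remediation backlog that the Risk Committee's workflow then tracks to closure.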

Cross-References

  • Article 3: The Enterprise AI Maturity Spectrum — maturity context for control program development
  • Article 4: Model — Designing the Governance Architecture — Model stage objectives and the control design process
  • Article 14: Mandatory Artifacts and Evidence Management — artifact lifecycle and evidence chain requirements
  • Article 18: Producing the Readiness Assessment Report — gate review prerequisite for this artifact
  • M1.2-Art20: Agent Autonomy Classification Framework — autonomy levels that drive deployment control requirements
  • M1.2-Art22: The Deployment Readiness Checklist — the Produce-stage artifact that verifies control implementation before deployment