Produce — The P in COMPEL
Implement controls, deploy policies, and operationalize governance infrastructure
What This Stage Is
Produce is where the governance architecture designed in Model is built, implemented, and operationalized at scale. Controls are deployed, policies are enforced through workflows, monitoring infrastructure is configured, and audit evidence is generated at every step. The Produce stage transforms policy documents and design artifacts into working governance infrastructure that actively governs AI system behavior in production. This is the highest-effort stage for most organizations, requiring coordination across IT, Legal, Compliance, HR, and operational teams.

A critical discipline of the Produce stage is documentation-as-you-build: every implementation decision is captured in the system of record at the time it is made, not reconstructed afterward. This creates the contemporaneous audit trail that regulators and auditors require. Organizations that defer documentation to a post-implementation cleanup phase consistently produce incomplete, inaccurate records that fail audit scrutiny.

COMPEL's platform product directly supports the Produce stage by providing the technical infrastructure for system registration, risk assessment workflows, and audit trail generation. Produce ends at Gate P — Build Complete — which verifies that all implementation is finished, documentation is current, and the system is ready for formal validation in Evaluate.
Why This Stage Matters
Design without implementation is an academic exercise. The Produce stage is where governance becomes operational — where policies start enforcing behavior, where monitoring starts generating insights, and where audit evidence starts accumulating. The quality of Produce execution directly determines whether the organization can demonstrate governance compliance to regulators, auditors, and boards.

Organizations that execute Produce well create a self-documenting governance system: every AI system registration, risk assessment, approval, and monitoring event is captured automatically as evidence. This transforms audit preparation from a manual document-gathering exercise into a report-generation task.

The Produce stage also builds organizational muscle memory. When teams configure and use governance workflows daily, governance transitions from an external compliance requirement into an embedded operational practice. This behavioral change is what distinguishes organizations at COMPEL maturity levels 3-4 from those stuck at levels 1-2.
Inputs
- Policy framework and risk rubrics from Model — defining what must be implemented
- AI system registry schema from Model — specifying the data model and lifecycle states to configure
- CoE charter and team structure from Organize — providing the human resources for implementation
- Gate M Decision Records from Model — specifying approved AI systems and their implementation requirements
Key Activities
- AI System Registry deployment — implementing the registry, populating system records, and configuring workflow integrations with existing IT systems
- Control implementation — deploying technical and procedural controls defined in the risk framework with test evidence
- Policy operationalization — translating policy documents into enforced workflows, automated access controls, and decision gates
- Monitoring infrastructure build — configuring KPI dashboards, alert thresholds, model drift detection, and governance scorecards
- Audit evidence pack assembly — gathering and organizing documentation for each AI system in a format ready for Gate E review
- Workflow automation — implementing approval chains, exception handling, escalation paths, and notification routing
- Bias testing execution — running the testing protocols designed in Model against actual system outputs with documented results
- Training delivery — executing the training roadmap from Organize to ensure all staff can operate governance workflows
- Stakeholder validation of artifacts — structured review and sign-off of governance artifacts by business owners and oversight bodies
- Red teaming execution — running adversarial testing protocols designed in Model against AI systems to identify vulnerabilities and failure modes
- MLOps pipeline integration — connecting AI development and deployment pipelines to governance controls, registry, and monitoring infrastructure
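The registry deployment activity above can be sketched in code. The following is a minimal illustration, not the COMPEL registry schema itself: the field names, lifecycle states, and `transition` method are all assumptions chosen to show how each state change can carry its own contemporaneous audit entry.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class LifecycleState(Enum):
    # Hypothetical lifecycle states; the actual schema is defined in Model
    REGISTERED = "registered"
    RISK_ASSESSED = "risk_assessed"
    CONTROLS_IMPLEMENTED = "controls_implemented"
    PRODUCTION = "production"
    RETIRED = "retired"

@dataclass
class RegistryRecord:
    system_id: str
    name: str
    business_owner: str
    risk_tier: str
    state: LifecycleState = LifecycleState.REGISTERED
    history: list = field(default_factory=list)  # contemporaneous audit trail

    def transition(self, new_state: LifecycleState, actor: str, rationale: str) -> None:
        """Record each state change with actor and rationale as it happens."""
        self.history.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "from": self.state.value,
            "to": new_state.value,
            "actor": actor,
            "rationale": rationale,
        })
        self.state = new_state

record = RegistryRecord("AIS-0042", "Invoice triage model", "AP Ops", "limited")
record.transition(LifecycleState.RISK_ASSESSED, "j.doe", "Rubric v2 scored; tier confirmed")
print(record.state.value, len(record.history))
```

Because the audit entry is appended inside the same call that changes state, the trail cannot drift out of sync with the record — the documentation-as-you-build discipline enforced in code rather than by convention.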
Outputs & Deliverables
- Deployed AI System Registry with complete system records and active workflow integrations
- Active control library — documented and tested controls mapped to the risk taxonomy with evidence of implementation
- Operational policy documentation — policies updated to reflect implemented configurations and workflow rules
- Monitoring dashboard suite — real-time KPIs, automated alerts, model drift indicators, and governance scorecards
- Audit Evidence Packs — complete documentation sets for each AI system in scope, assembled and ready for Gate E review
- Workflow Configuration Documentation — documented approval chains, exception procedures, escalation paths, and notification routing configurations
Controls
- Every control implementation must include a test plan and documented test results before being marked as active
- Audit evidence packs must be assembled concurrently with implementation — no retrospective documentation is permitted
- Policy operationalization must include automated enforcement where technically feasible, not just procedural guidance
- Monitoring thresholds must be configured with automated alerting — manual monitoring is not sufficient for production systems
- Gate P (Build Complete) review must verify that all controls, documentation, and monitoring are operational before proceeding
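The automated-alerting control above can be illustrated with a short sketch. The metric names and threshold values here are purely illustrative assumptions; real limits come from the monitoring design produced in Model.

```python
def check_thresholds(metrics: dict, thresholds: dict) -> list:
    """Return an alert for every metric that breaches its configured limit."""
    alerts = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append({"metric": name, "value": value, "limit": limit})
    return alerts

thresholds = {"psi_drift": 0.2, "error_rate": 0.05}  # illustrative limits
observed = {"psi_drift": 0.31, "error_rate": 0.02}

for alert in check_thresholds(observed, thresholds):
    # In production this would route to the notification/escalation workflow,
    # not print; every fired alert is itself a piece of audit evidence
    print(f"ALERT {alert['metric']}: {alert['value']} > {alert['limit']}")
```

The point of the control is the automation: the comparison runs on every metric refresh without a human in the loop, so breaches surface proactively rather than during the next manual review.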
Evidence Artifacts
- AI System Registry export showing all in-scope systems with complete metadata and current lifecycle status
- Control implementation test reports with pass/fail results and remediation records for any failures
- Policy workflow configuration documentation showing rules, triggers, and enforcement mechanisms
- Monitoring dashboard screenshots or exports demonstrating active data collection and alert configuration
- Audit Evidence Pack index for each AI system showing all required documents with completion status
- Training completion records showing staff certification and governance workflow competency
Metrics & KPIs
- Control implementation rate — percentage of designed controls deployed and tested (target: 100%)
- Evidence pack completeness — percentage of required audit documents assembled per AI system (target: 100%)
- Policy enforcement automation rate — percentage of policies with automated enforcement versus procedural-only (target: 70%+)
- Monitoring coverage — percentage of production AI systems with active monitoring dashboards (target: 100%)
- Time to Gate P — weeks from Produce kickoff to Build Complete gate review (benchmark varies by scope)
- Training completion rate — percentage of identified staff who have completed governance workflow training (target: 95%+)
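Two of the KPIs above can be computed directly from implementation tracking data. The record structures below are hypothetical; any GRC export with equivalent fields would serve.

```python
# Illustrative tracking data (field names are assumptions)
controls = [
    {"id": "C-01", "deployed": True,  "tested": True},
    {"id": "C-02", "deployed": True,  "tested": False},
    {"id": "C-03", "deployed": False, "tested": False},
]
policies = [
    {"id": "P-01", "enforcement": "automated"},
    {"id": "P-02", "enforcement": "automated"},
    {"id": "P-03", "enforcement": "procedural"},
]

# A control counts as implemented only when it is both deployed and tested,
# matching the Controls requirement above
implementation_rate = sum(c["deployed"] and c["tested"] for c in controls) / len(controls)
automation_rate = sum(p["enforcement"] == "automated" for p in policies) / len(policies)

print(f"Control implementation rate: {implementation_rate:.0%}")
print(f"Policy enforcement automation rate: {automation_rate:.0%}")
```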
Risks If Skipped
- Policies exist on paper but are not enforced, creating a false sense of governance coverage that auditors will expose
- Audit evidence is incomplete or reconstructed after the fact, failing the contemporaneous documentation standard
- Monitoring is absent or manual, meaning governance failures are discovered reactively rather than proactively
- Controls are designed but not tested, creating unknown gaps between intended and actual governance behavior
- The organization cannot demonstrate compliance to regulators because governance infrastructure was never operationalized
Standards Alignment
| Standard | Clause | Description |
|---|---|---|
| ISO/IEC 42001:2023 | Clause 8.1-8.4, Annex A.8-A.10 | Operational planning and control, AI system lifecycle management, data for AI systems, documentation and information management |
| NIST AI RMF 1.0 | MANAGE 1.1-1.4, MANAGE 2.1-2.4 | Risk response and recovery, risk treatment implementation, residual risk documentation, metrics deployment |
| EU AI Act 2024/1689 | Article 9(5-8), 12, 17 | Risk management measures implementation, technical documentation and record-keeping, quality management system deployment |
| IEEE 7000-2021 | Clause 9.1-9.3 | Operationalization of ethical requirements into verifiable, testable system controls with evidence generation |
References
- [1] ISO/IEC 42001:2023 — Clause 8 (Operation) and Annex A controls A.8-A.10
- [2] NIST AI Risk Management Framework 1.0 (2023) — MANAGE function subcategories
- [3] EU AI Act 2024/1689 — Articles 9, 12, 17 (Implementation, record-keeping, quality management)
- [4] IEEE 7000-2021 — Operationalization and verification of ethical requirements
- [5] ISACA, "Implementing AI Governance Controls: A Practical Guide" (2024)
- [6] Forrester, "AI Governance Technology Landscape" (2024)
- [7] COMPEL Platform Implementation Guide v3.0 — FlowRidge, 2025
Frequently Asked Questions
- How long does the Produce stage typically take?
- Produce is the longest COMPEL stage for most organizations. For a first cycle with 5-10 AI systems in scope, expect 8 to 16 weeks. The duration depends on the number of systems, the complexity of control requirements, the maturity of existing IT infrastructure, and the availability of implementation resources. Subsequent cycles are faster as the governance infrastructure is already in place.
- What does documentation-as-you-build mean in practice?
- It means every implementation decision is recorded in the system of record at the time the decision is made. For example, when a control is configured, the configuration rationale, test results, and approver are captured immediately — not in a documentation sprint weeks later. This practice ensures audit evidence is contemporaneous, complete, and accurate.
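One way to make this practice structural rather than voluntary is to capture the evidence record in the same operation that applies the configuration. The sketch below assumes a hypothetical `apply_control_config` helper and an in-memory stand-in for the system of record; it shows the pattern, not a specific product API.

```python
from datetime import datetime, timezone

evidence_log = []  # stand-in for the system of record

def apply_control_config(control_id: str, config: dict, rationale: str,
                         approver: str, test_result: str) -> None:
    """Apply a control configuration and capture its evidence in one step."""
    # ... apply the configuration to the target system here ...
    evidence_log.append({
        "control_id": control_id,
        "config": config,
        "rationale": rationale,
        "approver": approver,
        "test_result": test_result,
        "recorded_at": datetime.now(timezone.utc).isoformat(),  # contemporaneous
    })

apply_control_config(
    "C-07", {"max_tokens": 2048},
    rationale="Limit output length per policy AI-POL-3",
    approver="risk.owner@example.com", test_result="pass",
)
print(len(evidence_log))
```

Because the rationale, approver, and test result are required arguments, a configuration change without its evidence simply cannot be applied — the documentation sprint weeks later becomes unnecessary by construction.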
- Can we use existing GRC tools for the Produce stage?
- Yes. If your organization has existing Governance, Risk, and Compliance (GRC) tooling, COMPEL governance controls can be implemented within those platforms. The COMPEL platform provides purpose-built AI governance workflows, but the framework is tool-agnostic. The key requirement is that whatever tooling you use generates the audit evidence artifacts specified in Model.
- What happens if a control fails testing during Produce?
- Failed controls are logged in the control implementation test report with root cause analysis and remediation plans. The system cannot proceed to Gate P until all critical controls pass testing. Non-critical control failures may be accepted with documented risk acceptance from the designated risk owner, but this creates a remediation item tracked into the next cycle.
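The gating rule described in this answer can be sketched as a small function. The field names are illustrative assumptions: critical failures block Gate P unconditionally, while non-critical failures pass only with a documented risk acceptance and become tracked remediation items.

```python
def gate_p_ready(test_results: list) -> tuple:
    """Return (ready, remediation_items) for the Build Complete gate review."""
    blockers, remediation = [], []
    for r in test_results:
        if r["passed"]:
            continue
        if r["critical"]:
            blockers.append(r["control_id"])       # must be fixed before Gate P
        elif r.get("risk_accepted_by"):
            remediation.append(r["control_id"])    # tracked into the next cycle
        else:
            blockers.append(r["control_id"])       # non-critical but not accepted
    return len(blockers) == 0, remediation

results = [
    {"control_id": "C-01", "passed": True,  "critical": True},
    {"control_id": "C-02", "passed": False, "critical": False,
     "risk_accepted_by": "cro@example.com"},
]
ready, items = gate_p_ready(results)
print(ready, items)
```

Note that an unaccepted non-critical failure also blocks the gate in this sketch: the risk acceptance, with a named owner, is what converts a failure into a carried remediation item.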
Abdelalim, T. (2025). “Produce — The P in COMPEL.” COMPEL by FlowRidge. https://www.compel.one/methodology/produce