COMPEL Certification Body of Knowledge — Module 1.2: The COMPEL Six-Stage Lifecycle
Article 21 of 22
AI systems do not slot into existing workflows unchanged. They alter how work is done, who does it, at what pace, and with what accountability. An organization that deploys an AI system into a process designed around purely human execution will find the system underperforms, the process degrades, and the accountability structure becomes confused. The system was designed to assist a human decision-maker — but the process still requires the human to make a decision, the workflow does not specify when the AI output is reviewed, and when something goes wrong, it is unclear whether the AI or the human is accountable.
Workflow Redesign Documentation is the artifact that prevents this confusion. It maps the current-state workflow, designs the future-state workflow that incorporates the AI system, specifies the precise points where human-AI handoffs occur, defines the rollback procedures if the AI system needs to be removed, and assesses the change impact on the people, processes, and systems affected. It transforms the introduction of an AI system from an IT deployment event into a governed process change.
This article provides a complete guide to producing Workflow Redesign Documentation, including the methodology for mapping current and future states, the principles for human-AI task allocation, the requirements for handoff point documentation, rollback procedure design, and change impact assessment.
Purpose and Ownership
The Workflow Redesign Documentation (TMPL-P-001) is a mandatory Produce-stage artifact. Ownership is shared between the Business Unit AI Lead — who has subject-matter authority over the workflow being redesigned — and the CoE Lead — who ensures the redesign meets governance standards. Both must approve the final document before it is submitted as part of the Deployment Readiness package.
The Documentation covers every workflow affected by the AI system's deployment. For systems that affect a single, well-bounded process, this may be a single workflow map. For systems with broad operational scope — an AI assistant deployed across multiple business functions, for example — the Documentation may cover dozens of workflows. In these cases, the Documentation should be structured as a set of workflow modules, each covering a discrete process segment, with a summary section capturing cross-cutting themes.
Mapping Current-State Workflows
The foundation of the Workflow Redesign Documentation is an accurate, detailed map of how work currently gets done. This is harder than it sounds. Process documentation is frequently outdated, incomplete, or aspirational — describing how work is supposed to happen rather than how it actually happens. Effective current-state mapping requires direct observation and practitioner involvement, not just document review.
The current-state map must capture:
- every step in the process
- the actor responsible for each step (by role, not individual)
- the inputs consumed at each step
- the outputs produced at each step
- the decision points in the process and the criteria used to make decisions
- the systems and tools used at each step
- the time typically taken at each step
- the handoff points where work transfers between actors or systems
For governance purposes, the current-state map must also capture the accountability structure: who is responsible for the outcome of the process as a whole, who is notified when the process produces a concerning outcome, and how errors in the process are currently detected and corrected. This accountability mapping is essential because the future-state redesign must maintain or improve accountability — it must never produce a state in which accountability is diluted or unclear.
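Teams often find it useful to hold the current-state map in a lightweight machine-readable form so completeness checks can be automated. The following Python sketch shows one way to capture the fields listed above, including the accountability structure; all names are illustrative and are not prescribed by the COMPEL standard.

```python
from dataclasses import dataclass

@dataclass
class WorkflowStep:
    """One step in a current-state workflow map (field names are illustrative)."""
    name: str
    actor_role: str              # responsible role, never a named individual
    inputs: list[str]            # inputs consumed at this step
    outputs: list[str]           # outputs produced at this step
    systems_used: list[str]
    typical_duration_min: float
    is_decision_point: bool = False
    decision_criteria: str = ""  # required when is_decision_point is True

@dataclass
class CurrentStateMap:
    """A current-state workflow map plus its accountability structure."""
    process_name: str
    steps: list[WorkflowStep]
    outcome_owner_role: str       # accountable for the process outcome as a whole
    escalation_contact_role: str  # notified on concerning outcomes
    error_detection_method: str   # how errors are currently detected and corrected

    def decision_points(self) -> list[WorkflowStep]:
        """Return the steps where decisions are made, for redesign review."""
        return [s for s in self.steps if s.is_decision_point]
```

A schema like this makes it straightforward to diff the current-state and future-state maps step by step, which the next section requires.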
Designing Future-State Workflows
The future-state workflow map shows how the process will operate with the AI system in place. It must be designed with the same level of detail as the current-state map — every step, every actor, every input and output — so that the two can be directly compared.
The future-state design should be driven by explicit principles rather than implicit assumptions.
Principle 1: Task allocation follows comparative advantage. Allocate tasks to humans and AI based on where each has a genuine advantage. AI systems are typically superior at consistent, rapid processing of structured data, pattern recognition across large populations, and recall of rules and precedents. Humans are typically superior at contextual judgment, handling novel situations, ethical reasoning, relationship management, and accountability-bearing. The future-state design should reflect this allocation honestly — not aspirationally.
Principle 2: Human oversight is substantive, not nominal. When the future-state design includes human review of AI outputs, that review must be designed so it can be exercised genuinely. This means the human reviewer has sufficient time to review the output thoughtfully, has access to the information needed to evaluate it, has the cognitive background to understand what they are reviewing, and has the authority and technical capability to reject or modify the AI's output. A review step that takes three seconds per item is not a substantive review step — it is a nominal one that creates accountability without genuine oversight.
Principle 3: Accountability must be explicit. For every output of the redesigned process — every decision, every action, every record produced — the future-state design must identify a human who is accountable for that output. "The AI is accountable" is not an acceptable answer. AI systems are tools. The humans who deploy them, approve their use, and operate the processes in which they are embedded bear accountability for their outputs.
Principle 4: Error recovery is designed in. The future-state design must include explicit error recovery steps: what happens when the AI system produces an output that the human reviewer identifies as incorrect, what happens when the AI system fails to produce an output, and what happens when the AI system produces a harmful output that was not caught by the review step.
Human-AI Handoff Points
Handoff points — the moments where work transfers between the AI system and a human, or from a human to the AI system — are the highest-governance moments in any AI-augmented workflow. They are where accountability is transferred, where errors can be introduced, and where the design of the process most directly determines whether human oversight is genuine or nominal.
For each handoff point, the Documentation must specify:
- the direction of the handoff (human to AI, AI to human)
- the information transferred at the handoff
- the format and medium of that information
- the time constraint on the handoff (how quickly must the receiving party act?)
- the quality standards for the information being handed off
- the escalation protocol if the receiving party cannot accept the handoff (AI system unavailable, human reviewer absent, output quality below threshold)
Special attention is required for handoffs where the AI system's output is highly influential on subsequent human decisions. Research on automation bias consistently shows that humans systematically over-weight algorithmic recommendations, particularly under time pressure. The Documentation should identify handoff points with high automation bias risk and specify design mitigations: presenting the AI's reasoning (not just its conclusion), requiring the human reviewer to record their independent judgment before seeing the AI output, or reducing the presentation prominence of the AI recommendation.
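The handoff-point fields above, including the automation-bias mitigations, can be captured in a small record type so that high-risk handoffs with no documented mitigation are flagged automatically. This is an illustrative sketch; the field names are assumptions, not part of the COMPEL template.

```python
from dataclasses import dataclass, field
from enum import Enum

class Direction(Enum):
    HUMAN_TO_AI = "human_to_ai"
    AI_TO_HUMAN = "ai_to_human"

@dataclass
class HandoffPoint:
    """One human-AI handoff point (illustrative schema)."""
    direction: Direction
    information_transferred: str
    format_and_medium: str
    max_response_time_min: float   # time constraint on the receiving party
    quality_standard: str
    escalation_protocol: str       # what happens if the handoff cannot be accepted
    automation_bias_risk: bool = False
    bias_mitigations: list[str] = field(default_factory=list)

    def needs_mitigation_design(self) -> bool:
        """Flag high-bias-risk handoffs with no documented mitigation."""
        return self.automation_bias_risk and not self.bias_mitigations
```

A reviewer can then iterate over all documented handoff points and reject the Documentation if any high-risk handoff still returns True from `needs_mitigation_design()`.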
Rollback Procedures
Every AI system deployment must include documented rollback procedures: the steps required to remove the AI system from the workflow and return to the pre-deployment process or to an interim manual process. Rollback procedures are not a sign of low confidence in the system — they are a standard element of responsible deployment.
Rollback procedures must be specific and tested. They must specify:
- the trigger conditions that would initiate a rollback (system failure, governance policy breach, regulatory directive, adverse impact finding)
- the decision authority for initiating a rollback
- the technical steps to remove the AI system from the process
- the process steps that replace the AI system's functions in the interim
- the staffing implications of the rollback (more human labor will typically be required)
- the communication to be sent to affected stakeholders
- the timeline for completing the rollback
Rollback procedures must be tested before deployment — not after an incident. A tabletop exercise simulating a rollback scenario, conducted with the operational team that would execute the rollback, is the minimum testing standard. For high-risk systems, a live rollback drill in a staging environment is required.
Change Impact Assessment
The introduction of an AI system into a workflow is a change that affects people, processes, and systems. The change impact assessment section of the Documentation systematically identifies these effects and ensures that the deployment plan includes appropriate change management responses.
People impacts: Which roles are affected by the workflow redesign? What new skills are required? What existing tasks are eliminated or significantly altered? What anxieties about job security or professional identity may arise? The change impact assessment should be based on direct engagement with the affected workforce — not just management assumptions about how employees will respond.
Process impacts: Which upstream and downstream processes are affected by the workflow redesign? Are there dependencies on the current workflow's structure or timing that will be disrupted by the redesigned workflow? Are there regulatory or contractual requirements embedded in the current workflow that must be preserved in the redesign?
System impacts: What systems are affected by the integration of the AI system? What data flows change? What system interactions are added or removed? What monitoring and logging requirements does the AI system create for connected systems?
For each identified impact, the Documentation must specify the change management response: training and communication for affected employees, process documentation updates for affected processes, and technical changes for affected systems. The change management responses become inputs to the deployment plan, ensuring that the technical deployment and the organizational change management are coordinated rather than sequential.
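The impact-to-response mapping described above can be represented as a small record per impact, grouped by category to produce the change-management inputs to the deployment plan. The structure below is a sketch under assumed names, not a COMPEL-defined schema.

```python
from dataclasses import dataclass

@dataclass
class ChangeImpact:
    """One identified impact and its change-management response (illustrative)."""
    category: str     # "people", "process", or "system"
    description: str
    response: str     # e.g. training, documentation update, technical change

def responses_by_category(impacts: list[ChangeImpact]) -> dict[str, list[str]]:
    """Group change-management responses by impact category.

    The grouped result becomes the change-management input to the
    deployment plan, keeping technical deployment and organizational
    change coordinated rather than sequential.
    """
    grouped: dict[str, list[str]] = {}
    for impact in impacts:
        grouped.setdefault(impact.category, []).append(impact.response)
    return grouped
```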
Cross-References
- Article 5: Produce — Deploying AI Responsibly — Produce stage objectives and governance requirements
- Article 14: Mandatory Artifacts and Evidence Management — artifact lifecycle and evidence chain requirements
- M1.2-Art20: Agent Autonomy Classification Framework — autonomy levels that shape human-AI task allocation
- M1.2-Art22: The Deployment Readiness Checklist — the gate artifact that verifies workflow redesign completion before deployment
- Article 11: Change Management in AI Transformation — organizational change management principles and practices