COMPEL Certification Body of Knowledge — Module 3.4: Regulatory Strategy and Advanced Governance
Article 6 of 10
The assumption that an organization governs only the AI it builds is dangerously outdated. In practice, most enterprise AI portfolios include substantial components that the organization did not develop — vendor-provided AI platforms, partner-integrated AI services, open-source models and libraries, AI embedded in enterprise software, and AI capabilities acquired through mergers and acquisitions. The governance perimeter for AI extends far beyond the organization's own development teams, and governing what crosses that perimeter is among the most complex challenges in enterprise AI governance.
This article addresses the EATE's role in designing third-party and supply chain AI governance — the structures, processes, and capabilities required to ensure that externally sourced AI meets the same governance standards as internally developed AI.
The Expanding Governance Perimeter
The traditional model of AI governance assumes that the organization builds its own AI. This assumption was reasonable five years ago, when most enterprise AI consisted of internally developed models trained on proprietary data. It is no longer reasonable.
The Vendor AI Explosion
Enterprise software vendors have embedded AI into virtually every category of business application. Customer relationship management systems use AI for lead scoring and next-best-action recommendations. Enterprise resource planning systems use AI for demand forecasting and supply chain optimization. Human capital management systems use AI for resume screening, performance prediction, and attrition risk modeling. Cybersecurity platforms use AI for threat detection and response.
Each of these embedded AI capabilities creates governance obligations. When the organization's hiring decisions are influenced by AI embedded in a vendor's HR platform, the organization bears the same ethical and regulatory responsibility as if it had built the model itself. The EU AI Act makes this explicit: deployers of high-risk AI systems bear compliance obligations regardless of whether they developed the system or procured it.
The Open-Source Dimension
Open-source AI has democratized access to powerful model architectures, pre-trained models, and machine learning libraries. Organizations routinely build AI systems on open-source foundations — using pre-trained language models, open-source computer vision frameworks, or community-developed machine learning libraries. These components enter the organization's AI supply chain with varying levels of documentation, testing, and provenance information.
The governance challenge is significant. Open-source AI components may carry biases inherited from their training data, may have been developed without the rigorous testing that enterprise deployment requires, and may have licensing terms that create intellectual property complications (Module 3.4, Article 7: Intellectual Property Strategy for AI addresses this dimension). Governing open-source AI requires the same rigor as governing any other supply chain input — but the decentralized, community-driven nature of open-source development makes traditional vendor governance approaches insufficient.
Partner and Ecosystem AI
Organizations increasingly operate within AI ecosystems — sharing data with partners, integrating partner AI into their products, and providing their data or AI capabilities to others. These ecosystem relationships create bidirectional governance challenges. The organization must govern AI it receives from partners, and it must ensure that AI it provides to partners meets appropriate governance standards.
Acquired AI
Mergers and acquisitions frequently bring AI assets into the organization — models, data sets, deployment infrastructure, and the teams that built and maintain them. Acquired AI may have been developed under different governance standards (or no governance standards), may use data with unclear provenance, and may operate under technical assumptions that are incompatible with the acquiring organization's governance framework.
The Third-Party AI Risk Framework
Governing third-party AI requires a risk management framework specifically designed for externally sourced AI. This framework operates across the full lifecycle of the third-party relationship — from vendor selection through ongoing monitoring to relationship termination.
Pre-Engagement Due Diligence
Before engaging any third-party AI provider, the organization should conduct structured due diligence that assesses the provider's AI governance capabilities and the specific governance characteristics of the AI being procured.
Provider governance assessment: Evaluate the provider's overall AI governance maturity. Does the provider have a documented AI governance framework? An ethics program? Bias testing procedures? Model monitoring capabilities? Incident response processes? The EATE should design a standardized provider governance assessment questionnaire that reflects the organization's governance standards.
System-specific assessment: Evaluate the specific AI system being procured. What data was it trained on? What bias testing has been conducted? What performance validation has been performed? What monitoring capabilities are available? What documentation is provided? What is the model's intended use, and does the organization's planned use fall within that scope?
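The provider and system assessments above can be operationalized as a simple scored checklist. The following is a minimal sketch, assuming a hypothetical vendor and question set — the item names, weights, and the `ThirdPartyAIAssessment` structure are illustrative, not a standard instrument:

```python
from dataclasses import dataclass, field

# Hypothetical due diligence checklist: each item records whether the
# provider satisfied it and any reviewer notes. Questions are illustrative.
@dataclass
class DiligenceItem:
    question: str
    satisfied: bool
    notes: str = ""

@dataclass
class ThirdPartyAIAssessment:
    provider: str
    system: str
    items: list[DiligenceItem] = field(default_factory=list)

    def coverage(self) -> float:
        """Fraction of checklist items the provider satisfied."""
        if not self.items:
            return 0.0
        return sum(i.satisfied for i in self.items) / len(self.items)

    def gaps(self) -> list[str]:
        """Questions the provider could not answer satisfactorily."""
        return [i.question for i in self.items if not i.satisfied]

assessment = ThirdPartyAIAssessment(
    provider="Acme Analytics",        # illustrative vendor name
    system="resume-screening-v2",
    items=[
        DiligenceItem("Documented AI governance framework?", True),
        DiligenceItem("Bias testing procedures disclosed?", True),
        DiligenceItem("Training data characteristics documented?", False,
                      "Provider cites trade secrecy."),
        DiligenceItem("Model monitoring capabilities available?", True),
        DiligenceItem("Incident response process defined?", False),
    ],
)

print(f"coverage: {assessment.coverage():.0%}")   # prints "coverage: 60%"
for gap in assessment.gaps():
    print("GAP:", gap)
```

A structure like this makes gaps explicit deal inputs: unanswered questions feed directly into the contractual negotiations discussed next.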
Contractual requirements: Due diligence findings should inform contractual negotiations. Key contractual provisions for AI procurement include:
- Right to audit: the organization's right to conduct or commission audits of the provider's AI systems and governance practices.
- Transparency obligations: the provider's obligation to disclose model architecture, training data characteristics, known limitations, and performance metrics.
- Performance and fairness guarantees: contractual commitments regarding model performance, bias thresholds, and accuracy standards.
- Incident notification: the provider's obligation to notify the organization of performance issues, bias discoveries, security incidents, or regulatory actions affecting the AI system.
- Change management: the provider's obligation to notify the organization before making material changes to the model, including retraining, architecture changes, and data changes.
- Termination and transition provisions: what happens to data, models, and outputs if the relationship ends.
Ongoing Third-Party Monitoring
Pre-engagement due diligence is necessary but insufficient. Third-party AI must be monitored continuously, just as internally developed AI is monitored.
Performance monitoring: The organization should monitor the performance of third-party AI systems using the same metrics and thresholds applied to internal systems. This requires contractual access to performance data or the ability to independently measure performance through output analysis.
Bias monitoring: Third-party AI systems should be subject to the same bias monitoring as internal systems. This is particularly important for high-risk applications such as lending, hiring, and healthcare, where bias in third-party AI creates the same regulatory and ethical exposure as bias in internal AI.
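Performance and bias monitoring against contractual guarantees can be reduced to a periodic threshold check. This sketch assumes hypothetical metric names and contract values — the specific metrics, directions, and limits would come from the organization's own contracts:

```python
# Hypothetical monitoring check: compare a third-party model's observed
# metrics against thresholds negotiated in the contract. All values
# are illustrative.
CONTRACT_THRESHOLDS = {
    "accuracy": ("min", 0.90),                  # must stay at or above
    "demographic_parity_gap": ("max", 0.05),    # must stay at or below
    "latency_p95_ms": ("max", 250.0),
}

def check_thresholds(observed: dict) -> list:
    """Return a breach report; an empty list means compliant."""
    breaches = []
    for metric, (direction, limit) in CONTRACT_THRESHOLDS.items():
        value = observed.get(metric)
        if value is None:
            breaches.append(f"{metric}: no data from provider")
        elif direction == "min" and value < limit:
            breaches.append(f"{metric}: {value} below floor {limit}")
        elif direction == "max" and value > limit:
            breaches.append(f"{metric}: {value} above ceiling {limit}")
    return breaches

report = check_thresholds(
    {"accuracy": 0.93, "demographic_parity_gap": 0.08, "latency_p95_ms": 180.0}
)
print(report)   # one breach: demographic_parity_gap exceeds its ceiling
```

Note the "no data from provider" branch: where the contract grants access to performance data, a missing metric is itself a reportable event, not a silent pass.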
Compliance monitoring: As regulatory requirements evolve, the organization must assess whether third-party AI remains compliant. This requires ongoing regulatory intelligence (Module 3.4, Article 3: Proactive Regulatory Engagement) and the ability to assess third-party compliance against new requirements.
Vendor governance monitoring: The provider's governance practices should be reassessed periodically — not just at engagement initiation. Providers may change their governance practices, undergo organizational changes that affect governance capability, or experience incidents that signal governance degradation.
Incident Response for Third-Party AI
When a third-party AI system produces a governance incident — a bias event, a performance failure, a data breach, or a regulatory non-compliance finding — the organization's incident response process must be able to address the incident even though the organization does not control the underlying system.
This requires pre-established protocols: who is notified at the provider? What information must the provider supply? What remediation authority does the organization have? Under what circumstances can the organization suspend the system's operation? How are regulatory notifications handled? These protocols should be established contractually during the engagement phase, not improvised during an incident.
Supply Chain Integrity
Beyond governing individual third-party relationships, the EATE must address AI supply chain integrity — the end-to-end integrity of the chain of components, data, and services that comprise the organization's AI portfolio.
Model Provenance
Model provenance — the documented history of a model's development, including its training data, architecture decisions, development team, testing results, and deployment history — is a critical governance requirement. For internally developed models, provenance documentation is a standard governance control. For third-party and open-source models, provenance information may be incomplete, unavailable, or unverifiable.
The EATE should design provenance requirements for all AI entering the organization — whether developed internally, procured from vendors, downloaded from open-source repositories, or acquired through M&A. These requirements should specify the minimum provenance information the organization requires and the processes for obtaining, verifying, and maintaining provenance records.
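A minimum-provenance requirement can be enforced as a validation gate at onboarding. The field names below are illustrative assumptions about what an organization might require, not a published schema:

```python
# Hypothetical minimum-provenance record: fields required before any model
# (internal, vendor, open-source, or acquired) enters the portfolio.
REQUIRED_FIELDS = [
    "model_name", "version", "source",           # internal / vendor / oss / m&a
    "training_data_summary", "architecture",
    "testing_results_ref", "responsible_team",
]

def provenance_gaps(record: dict) -> list:
    """Fields that are missing or empty — each one blocks onboarding."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

record = {
    "model_name": "sentiment-classifier",
    "version": "1.4.0",
    "source": "open-source",
    "training_data_summary": "",      # unavailable from the upstream repo
    "architecture": "transformer encoder",
    "testing_results_ref": "QA-2311",
    "responsible_team": "ml-platform",
}
print(provenance_gaps(record))   # ['training_data_summary']
```

For open-source components where a field genuinely cannot be obtained, the policy question becomes whether the gap is an accepted, documented risk or a hard block — a decision the EATE's framework should make explicit rather than leave to individual teams.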
Data Supply Chain
AI models are only as trustworthy as their training data. The data supply chain — the chain of data sources, data processing steps, and data quality controls that produce the data used to train and operate AI models — must be governed with the same rigor as any other critical supply chain.
For internally developed models using internal data, data supply chain governance is primarily a data governance challenge — addressed through the data governance framework. For third-party models, data supply chain governance extends across organizational boundaries: Where did the provider obtain its training data? What rights does the provider have to that data? What data quality controls did the provider apply? Is the training data representative of the populations the model will serve in the organization's context?
These questions are difficult to answer when dealing with third-party AI. Large language models, for example, are trained on vast corpora of data whose composition is often not fully documented. The EATE must design governance that addresses this uncertainty — establishing minimum transparency requirements for data provenance while accepting that perfect transparency may not be achievable for all third-party AI components.
Software Bill of Materials for AI
The concept of a Software Bill of Materials (SBOM) — a comprehensive inventory of all components in a software system — is being extended to AI. An AI Bill of Materials (AI BOM or ML BOM) documents the components of an AI system: the model architecture, training data sources, open-source libraries and frameworks, pre-trained model components, training infrastructure, and deployment dependencies.
The EATE should work toward AI BOM practices that provide visibility into the full component structure of the organization's AI portfolio — including components contributed by third parties. This visibility is essential for risk management (understanding which third-party components carry which risks), compliance (demonstrating the composition of regulated AI systems), and incident response (quickly identifying which systems are affected when a vulnerability is discovered in a shared component).
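The incident response use case above — identifying affected systems when a shared component has a problem — is essentially an inventory query over AI BOMs. This sketch uses hypothetical system and component names; real implementations might build on emerging standards such as CycloneDX's machine-learning BOM extension:

```python
# Hypothetical AI BOM inventory: each system lists its libraries,
# pre-trained components, and data sources. Names are illustrative.
AI_BOM = {
    "credit-risk-model": {
        "libraries": ["numpy==1.26", "xgboost==2.0"],
        "pretrained_components": [],
        "data_sources": ["loan-history-2019-2023"],
    },
    "support-chatbot": {
        "libraries": ["transformers==4.40"],
        "pretrained_components": ["open-llm-7b"],
        "data_sources": ["support-tickets"],
    },
    "doc-summarizer": {
        "libraries": ["transformers==4.40"],
        "pretrained_components": ["open-llm-7b"],
        "data_sources": ["internal-wiki"],
    },
}

def systems_using(component: str) -> list:
    """All systems whose BOM lists the given component anywhere."""
    hits = []
    for system, bom in AI_BOM.items():
        parts = (bom["libraries"] + bom["pretrained_components"]
                 + bom["data_sources"])
        if component in parts:
            hits.append(system)
    return sorted(hits)

# A flaw found in a shared pre-trained model maps instantly to exposure:
print(systems_using("open-llm-7b"))   # ['doc-summarizer', 'support-chatbot']
```

The same query answers the compliance question in reverse: given a regulated system, its BOM entry is the demonstrable record of composition.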
Governance of AI in M&A Transactions
A specialized but increasingly important dimension of third-party AI governance is the governance of AI assets during mergers and acquisitions. When an organization acquires another entity, it inherits that entity's AI portfolio — including any governance gaps, biases, compliance issues, and technical debt.
AI Due Diligence in M&A
The EATE should design an AI-specific due diligence framework for M&A transactions that includes:
- Inventory of all AI systems in the target entity.
- Governance maturity assessment of the target's AI program.
- Review of model documentation, testing results, and performance metrics.
- Assessment of data rights and data supply chain integrity.
- Identification of regulatory compliance gaps.
- Evaluation of technical debt and remediation costs.
- Review of pending or historical AI-related incidents, complaints, or regulatory actions.
This due diligence directly informs deal valuation. Significant AI governance gaps in an acquisition target represent remediation costs that should be factored into the deal price. Material compliance risks may affect deal structure or require representations and warranties that protect the acquirer.
Post-Acquisition Integration
After acquisition, the acquired AI portfolio must be integrated into the acquirer's governance framework. This integration process should be planned as part of the integration management office's scope, with specific workstreams for:
- Applying the acquirer's governance standards to inherited AI systems.
- Conducting bias testing and performance validation on inherited models.
- Establishing monitoring for inherited systems.
- Migrating inherited AI to the acquirer's technical infrastructure where appropriate.
- Remediating identified governance gaps.
The EATE should ensure that AI governance integration receives the same attention in M&A planning as financial integration, technology integration, and people integration. AI governance gaps discovered post-close are significantly more expensive to remediate than gaps identified and addressed during due diligence.
The EATE's Role in Third-Party AI Governance
The EATE designs the enterprise framework for third-party AI governance — not the individual vendor assessments but the policies, processes, tools, and organizational structures that enable consistent governance of all externally sourced AI.
This includes establishing the organization's minimum governance standards for third-party AI; designing the due diligence process and assessment tools; defining contractual requirements for AI procurement; designing ongoing monitoring processes; establishing incident response protocols for third-party AI; and creating the AI supply chain integrity program.
The EATE must also help the organization navigate the cultural challenge of third-party AI governance. Business teams that procure vendor AI often perceive governance requirements as obstacles to adoption. The EATE must make the case — supported by the strategic governance argument from Module 3.4, Article 1: Governance as Strategic Advantage — that governing third-party AI is not about preventing adoption but about enabling safe, sustainable adoption at scale.
Key Takeaways for the EATE
- The governance perimeter extends well beyond internally developed AI. Vendor AI, open-source AI, partner AI, and acquired AI all require governance.
- Third-party AI governance operates across the full relationship lifecycle: pre-engagement due diligence, contractual requirements, ongoing monitoring, and incident response.
- Supply chain integrity — including model provenance, data supply chain governance, and AI Bills of Materials — is essential for enterprise-level governance.
- AI governance must be a formal component of M&A due diligence and post-acquisition integration planning.
- The EATE designs the enterprise framework for third-party AI governance, ensuring that externally sourced AI meets the same governance standards as internally developed AI.