Industry Standards for Agentic AI: ISO, NIST, and Emerging Frameworks

Level 4: AI Transformation Leader · Module M4.5: Industry Standards Development and Methodology Advancement · 13 min read · Version 1.0 · Last reviewed: 2025-01-15 · Open Access

COMPEL Certification Body of Knowledge — Module 4.5: AI Standards, Regulation, and Industry Leadership

Article 11 of 12


The standards landscape for artificial intelligence is racing to keep pace with a technology that advances faster than committees can convene. For conventional AI systems — predictive models, classification engines, recommendation systems — the standards ecosystem has achieved meaningful maturity. ISO/IEC 42001 provides a management system standard for AI. The NIST AI Risk Management Framework offers a comprehensive approach to AI risk. The EU AI Act establishes regulatory requirements with legal force. But agentic AI — systems that autonomously pursue goals, make decisions, and take actions — stretches these frameworks to their limits and, in many cases, beyond them.

This article maps the current standards landscape to the specific requirements of agentic AI evaluation and governance. It examines where existing standards provide adequate coverage, where they fall short, and where emerging frameworks are beginning to address the gap. For governance leaders, this mapping is essential: applying existing standards without recognizing their agentic limitations creates a false sense of compliance, while waiting for purpose-built agentic standards leaves a governance vacuum that grows more dangerous as agentic deployments accelerate.

The Current Standards Landscape

ISO/IEC Standards for AI

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have published a substantial body of AI-related standards through Joint Technical Committee 1, Subcommittee 42 (JTC 1/SC 42):

ISO/IEC 42001:2023 — AI Management System. The flagship management system standard for AI, establishing requirements for organizations to manage AI system development, deployment, and operation. It provides a framework for AI governance that covers policy, planning, support, operation, performance evaluation, and improvement.

Agentic AI gap: ISO/IEC 42001 treats AI systems as managed assets — built, deployed, and operated. It does not address the unique challenges of systems that act autonomously, delegate authority, or adapt their behavior. The standard's management system approach is necessary but insufficient for agentic governance, which requires runtime enforcement, dynamic authority management, and continuous behavioral monitoring that go beyond the standard's scope.

ISO/IEC 23894:2023 — AI Risk Management. Provides guidance for managing risk associated with AI systems, aligned with the ISO 31000 risk management framework. It covers risk identification, analysis, evaluation, and treatment for AI systems.

Agentic AI gap: The standard addresses AI-specific risks such as bias, transparency, and data quality, but does not address risks unique to agentic systems: action-space risks, cascading failure risks, delegation risks, and emergent behavior risks (as detailed in Module 3.4, Article 12: Agentic AI Risk Taxonomy). Its risk assessment methodology assumes relatively static systems and does not account for systems whose risk profiles change as they adapt.

ISO/IEC 22989:2022 — AI Concepts and Terminology. Establishes fundamental AI concepts and terminology. The standard defines terms including "AI system," "machine learning," and basic notions of "AI agent" and "autonomy," but it does not cover the vocabulary agentic governance needs, such as "orchestrator," "delegation," or graduated "autonomy levels."

ISO/IEC 25059:2023 — Quality Model for AI Systems. Extends the SQuaRE (Systems and Software Quality Requirements and Evaluation) framework to AI systems, defining quality characteristics including functional correctness, explainability, and robustness.

Agentic AI gap: The quality model evaluates system characteristics at a point in time. For agentic systems that adapt and evolve, quality must be evaluated continuously. The standard also does not address quality characteristics unique to agentic systems: goal achievement, behavioral consistency, delegation effectiveness, and autonomy appropriateness.

NIST AI Framework

The National Institute of Standards and Technology (NIST) has established itself as a leading voice in AI governance through several key publications:

NIST AI Risk Management Framework (AI RMF 1.0, 2023). The AI RMF provides a voluntary framework for managing AI risks, organized around four core functions: Govern, Map, Measure, and Manage. It is widely adopted as a reference framework for AI risk management in the United States and internationally.

Agentic AI coverage: The AI RMF's principles-based approach provides a foundation that can be extended to agentic systems. Its emphasis on organizational governance (Govern function), context mapping (Map function), risk measurement (Measure function), and risk management (Manage function) applies to agentic AI. However, the framework's guidance and examples are predominantly oriented toward predictive and generative AI, and it does not provide specific guidance for agentic risks.

NIST AI 600-1: Generative AI Profile (2024). A companion to the AI RMF that addresses risks specific to generative AI systems, including content provenance, hallucination, and misuse.

Agentic AI relevance: The Generative AI Profile addresses several risks relevant to agentic systems (since many agents use generative AI models), including information integrity, confabulation, and data privacy. However, it focuses on content generation risks rather than autonomous action risks. An agent that generates accurate content but takes inappropriate actions is not adequately covered.

NIST AI 100-2: Adversarial Machine Learning (2024). Taxonomy and guidance for adversarial attacks against AI systems, covering evasion attacks, poisoning attacks, and privacy attacks.

Agentic AI relevance: Adversarial attacks against agentic systems include prompt injection, tool manipulation, and inter-agent communication attacks that go beyond the taxonomy's current scope. The standard provides a foundation but needs extension for the expanded attack surface that agents present.

EU AI Act

The European Union's Artificial Intelligence Act establishes a risk-based regulatory framework for AI systems, categorizing them into risk levels (unacceptable, high, limited, minimal) with corresponding requirements.

Agentic AI implications: The EU AI Act's requirements for high-risk AI systems — including risk management, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity — apply to many agentic AI deployments. However, the Act was designed before agentic AI became widespread, and its requirements do not specifically address:

  • How human oversight applies to systems designed to operate autonomously for extended periods.
  • How transparency requirements apply to multi-agent systems where decision-making is distributed across multiple agents.
  • How accuracy requirements apply to systems whose behavior changes through adaptation.
  • How accountability is apportioned in multi-agent, multi-organization agentic deployments.

The European Commission's implementing guidance and delegated acts are expected to address some of these gaps, but as of this writing, specific agentic AI guidance has not been finalized.

Where Existing Standards Fall Short

The Autonomy Gap

Existing standards assume a degree of human control that agentic AI, by design, reduces. Standards require human oversight, but agentic AI is valuable precisely because it reduces the need for human oversight. Standards require predictable behavior, but agentic AI's value comes partly from its ability to adapt and respond to novel situations. Standards require clear accountability, but agentic AI distributes decision-making across multiple autonomous entities.

Resolving the autonomy gap requires standards that:

  • Define levels of autonomy with corresponding governance requirements (as the COMPEL autonomy spectrum does, but formalized as a standard).
  • Specify minimum human oversight requirements that scale with autonomy level rather than being binary (present or absent).
  • Establish testing and evaluation methodologies that validate autonomous behavior within bounded parameters.
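The second requirement above — oversight that scales with autonomy rather than being binary — can be sketched in code. The levels, field names, and thresholds below are illustrative assumptions (the COMPEL autonomy spectrum referenced above is not a formal standard, and no standard currently specifies these values):

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical autonomy scale; levels and thresholds are illustrative only.
class AutonomyLevel(IntEnum):
    ADVISORY = 1    # agent recommends, human acts
    SUPERVISED = 2  # agent acts, human approves each action
    BOUNDED = 3     # agent acts within limits, human reviews samples
    DELEGATED = 4   # agent acts and delegates, human audits periodically

@dataclass(frozen=True)
class OversightRequirement:
    human_approval: bool       # approval required before each action
    review_sample_rate: float  # fraction of actions sampled for human review
    audit_interval_days: int   # maximum days between governance audits

# Oversight requirements scale with autonomy level instead of being
# a binary present/absent property of the system.
OVERSIGHT_BY_LEVEL = {
    AutonomyLevel.ADVISORY:   OversightRequirement(True,  1.00, 7),
    AutonomyLevel.SUPERVISED: OversightRequirement(True,  1.00, 30),
    AutonomyLevel.BOUNDED:    OversightRequirement(False, 0.10, 30),
    AutonomyLevel.DELEGATED:  OversightRequirement(False, 0.01, 90),
}

def required_oversight(level: AutonomyLevel) -> OversightRequirement:
    """Look up the minimum oversight required at a given autonomy level."""
    return OVERSIGHT_BY_LEVEL[level]
```

The point of the structure is that each increase in autonomy carries an explicit, auditable reduction in per-action oversight paired with retained sampling and audit obligations, rather than oversight silently disappearing.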

The Composition Gap

Existing standards evaluate AI systems individually. Multi-agent systems are composed of multiple AI systems that interact in ways that cannot be predicted by evaluating each system in isolation. Standards must address:

  • How to evaluate the safety and reliability of composed multi-agent systems.
  • How to allocate responsibility when composed systems produce emergent failures.
  • How to certify or validate multi-agent systems when the component agents may be from different vendors or organizations.

The Adaptation Gap

Existing standards assume that the system validated during development is the same system operating in production. Adaptive AI systems change after deployment. Standards must address:

  • What constitutes a material change that triggers re-validation.
  • How continuous validation works for systems that change continuously.
  • How to maintain regulatory compliance when the regulated system is, by design, impermanent.
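One way to make "material change" operational is to compare an agent's current behavioral metrics against the validated baseline and trigger re-validation when drift exceeds a threshold. The metric and the 0.15 threshold below are illustrative assumptions, not values drawn from any published standard:

```python
# Minimal sketch of a re-validation trigger for an adaptive agent.

def behavioral_drift(baseline: dict[str, float], current: dict[str, float]) -> float:
    """Mean absolute change across shared behavioral metrics
    (e.g. task success rate, escalation rate, tool-call mix)."""
    shared = baseline.keys() & current.keys()
    if not shared:
        return 0.0
    return sum(abs(current[k] - baseline[k]) for k in shared) / len(shared)

def requires_revalidation(baseline: dict[str, float],
                          current: dict[str, float],
                          threshold: float = 0.15) -> bool:
    """Treat drift beyond the threshold as a material change
    that triggers re-validation of the deployed agent."""
    return behavioral_drift(baseline, current) > threshold
```

For example, an agent whose task success rate fell from 0.9 to 0.6 while its escalation rate rose from 0.1 to 0.3 would show a mean drift of 0.25 and trip the trigger. In practice the metric set, weighting, and threshold would themselves be governance decisions documented alongside the validation baseline.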

The Cross-Boundary Gap

Existing standards address AI systems within a single organizational context. Cross-organizational agent interactions (as discussed in Module 4.3, Article 11: Cross-Organizational Agentic AI Governance) create governance requirements that current standards do not address:

  • Interoperability standards for agent-to-agent communication across organizations.
  • Shared governance frameworks for multi-party agent ecosystems.
  • Liability standards for harms arising from cross-organizational agent interactions.

Emerging Frameworks for Agentic AI

Industry Initiatives

Several industry initiatives are developing standards and frameworks specifically for agentic AI:

Agent Protocol specifications. Industry-led efforts to standardize how AI agents communicate with tools, data sources, and other agents. These protocols aim to create interoperability standards that enable agents from different vendors to work together reliably. While focused on technical interoperability rather than governance, standardized protocols provide the foundation on which governance standards can be built.

Enterprise AI governance frameworks. Major technology companies and consulting firms are publishing enterprise governance frameworks that include agentic AI-specific guidance. While not formal standards, these frameworks influence organizational practice and may inform future formal standards development.

Responsible AI coalitions. Multi-stakeholder initiatives bringing together technology companies, civil society organizations, and academic institutions to develop principles and guidelines for responsible AI development, increasingly addressing agentic systems.

Academic and Research Contributions

Academic research is advancing several areas relevant to agentic AI standards:

Agent safety research. Formal methods for specifying and verifying agent safety properties — ensuring that agents cannot take certain actions regardless of their reasoning. This research aims to provide mathematical guarantees about agent behavior that could form the basis for safety standards.

Multi-agent systems evaluation. Methodologies for evaluating the behavior of multi-agent systems, including testing for emergent behaviors, coordination failures, and adversarial vulnerabilities. These methodologies could inform standardized evaluation procedures.

Alignment measurement. Techniques for measuring the degree to which an AI system's behavior aligns with intended objectives. Applied to agentic systems, alignment measurement could provide standardized metrics for governance compliance.

Mapping Existing Standards to Agentic Requirements

While purpose-built agentic AI standards are still developing, organizations can map existing standards to agentic requirements as an interim measure:

ISO/IEC 42001 extension mapping:

  • Extend the AI policy (Clause 5.2) to include agentic-specific policies: autonomy levels, delegation authorities, action-space boundaries.
  • Extend risk assessment (Clause 6.1) to include the agentic risk taxonomy: action-space, cascading failure, delegation, learning, and emergent behavior risks.
  • Extend operational planning (Clause 8.1) to include agent lifecycle management: design, testing, deployment, monitoring, adaptation, and retirement.
  • Extend performance evaluation (Clause 9.1) to include continuous behavioral monitoring and adaptation quality measurement.

NIST AI RMF extension mapping:

  • Govern: Add governance structures for delegation authority, agent registries, and cross-organizational interaction policies.
  • Map: Add context mapping for agent action spaces, delegation hierarchies, and adaptation mechanisms.
  • Measure: Add measurement for autonomy-level appropriateness, delegation effectiveness, behavioral drift, and emergent behavior detection.
  • Manage: Add management practices for runtime governance enforcement, adaptive learning controls, and cross-organizational governance coordination.

EU AI Act compliance mapping:

  • Risk management (Article 9): Extend to include agentic risk taxonomy with dynamic, continuous risk assessment.
  • Data governance (Article 10): Extend to include governance of adaptation data and inter-agent data exchange.
  • Transparency (Article 13): Extend to include multi-agent decision provenance and delegation chain transparency.
  • Human oversight (Article 14): Extend to define oversight requirements calibrated to autonomy level.
  • Accuracy and robustness (Article 15): Extend to include behavioral consistency under adaptation and multi-agent composition reliability.
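The three mappings above can be captured as a machine-checkable registry, so that gaps in agentic coverage are reportable rather than implicit. The clause references follow this article; the data structure, the extension names, and the coverage check are illustrative, not part of any standard:

```python
# Sketch of the extension mappings as a registry keyed by standard and clause.
EXTENSION_MAP: dict[str, dict[str, list[str]]] = {
    "ISO/IEC 42001": {
        "5.2 AI policy": ["autonomy levels", "delegation authorities",
                          "action-space boundaries"],
        "6.1 Risk assessment": ["agentic risk taxonomy"],
        "8.1 Operational planning": ["agent lifecycle management"],
        "9.1 Performance evaluation": ["continuous behavioral monitoring"],
    },
    "NIST AI RMF": {
        "Govern": ["delegation authority", "agent registries",
                   "cross-organizational interaction policies"],
        "Map": ["agent action spaces", "delegation hierarchies",
                "adaptation mechanisms"],
        "Measure": ["behavioral drift", "emergent behavior detection"],
        "Manage": ["runtime governance enforcement",
                   "adaptive learning controls"],
    },
    "EU AI Act": {
        "Art. 9 Risk management": ["continuous agentic risk assessment"],
        "Art. 13 Transparency": ["multi-agent decision provenance"],
        "Art. 14 Human oversight": ["autonomy-calibrated oversight"],
    },
}

def uncovered_requirements(implemented: set[str]) -> list[tuple[str, str]]:
    """Return (standard, clause) pairs where no listed agentic
    extension has been implemented by the organization."""
    gaps = []
    for standard, clauses in EXTENSION_MAP.items():
        for clause, extensions in clauses.items():
            if not any(ext in implemented for ext in extensions):
                gaps.append((standard, clause))
    return gaps
```

A registry like this turns the interim mapping exercise into something an internal audit can run: the output is the list of baseline clauses still lacking any agentic extension.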

Building an Organizational Standards Strategy

Pragmatic Adoption

Organizations should not wait for purpose-built agentic AI standards to be finalized. A pragmatic adoption strategy includes:

Step 1: Baseline with existing standards. Implement ISO/IEC 42001 or NIST AI RMF as the foundational governance framework. These provide the organizational structures, processes, and accountability mechanisms needed for any AI governance.

Step 2: Extend for agentic requirements. Using the extension mappings described above, add agentic-specific policies, risk categories, controls, and monitoring to the baseline framework. Document these extensions explicitly so they can be updated as formal standards emerge.

Step 3: Monitor standards development. Track the development of formal agentic AI standards and participate in standards development where appropriate. Early participation ensures that emerging standards reflect practical experience.

Step 4: Converge with formal standards. As purpose-built agentic AI standards are published, align organizational practices with those standards, retiring custom extensions where formal standards provide adequate coverage.
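Steps 2 and 4 both depend on custom extensions being documented explicitly enough to retire later. A minimal sketch of such a record, with hypothetical field names and statuses (nothing here is prescribed by any standard):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GovernanceExtension:
    """One documented agentic extension to the baseline framework."""
    name: str                 # e.g. "autonomy levels"
    baseline_clause: str      # e.g. "ISO/IEC 42001 clause 5.2"
    status: str = "custom"    # "custom" | "superseded" | "retired"
    superseded_by: Optional[str] = None  # formal standard, once one exists

def converge(ext: GovernanceExtension, formal_standard: str) -> GovernanceExtension:
    """Step 4: mark a custom extension as superseded once a formal
    standard provides adequate coverage, keeping the audit trail."""
    ext.status = "superseded"
    ext.superseded_by = formal_standard
    return ext
```

Keeping the superseded record (rather than deleting it) preserves the trail showing why the organization's interim practice existed and when formal coverage replaced it.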

Contributing to Standards Development

Organizations with experience deploying and governing agentic AI systems are uniquely positioned to contribute to standards development:

  • Share lessons learned from agentic AI deployments with standards bodies.
  • Participate in technical committees developing agentic AI standards.
  • Contribute test cases, evaluation methodologies, and governance patterns derived from operational experience.
  • Engage with regulatory bodies to inform the development of agentic AI-specific regulatory guidance.

The Standards Horizon

Near-Term Expectations

Over the next two to three years, the standards landscape for agentic AI is expected to evolve significantly:

  • ISO/IEC JTC 1/SC 42 working groups are expected to produce guidance documents addressing autonomous AI systems, building on the foundation of ISO/IEC 42001 and 23894.
  • NIST is expected to release additional profiles or supplements to the AI RMF addressing agentic AI risks, following the model of the Generative AI Profile.
  • Industry consortia are expected to publish interoperability standards for agent-to-agent communication and tool interaction.
  • Regulatory bodies in the EU, US, UK, and other jurisdictions are expected to issue guidance on how existing regulations apply to agentic AI systems.

Long-Term Trajectory

In the longer term, the standards landscape for agentic AI is likely to develop along several trajectories:

  • Autonomy classification standards that provide a standardized framework for classifying agent autonomy levels with corresponding governance requirements.
  • Multi-agent system certification frameworks that enable organizations to certify the safety and reliability of composed multi-agent systems.
  • Cross-organizational agent interaction standards that enable governed interactions between agents from different organizations, including identity, trust, and liability frameworks.
  • Adaptive AI governance standards that address the unique challenges of systems that change their behavior after deployment.

Key Takeaways

  • The current standards landscape — ISO/IEC 42001, NIST AI RMF, EU AI Act — provides essential foundations for AI governance but was designed for predictive and generative AI, leaving significant gaps for agentic systems in autonomy governance, multi-agent composition, behavioral adaptation, and cross-organizational interaction.
  • Four critical gaps must be addressed: the autonomy gap (standards assume human control that agents reduce), the composition gap (standards evaluate systems individually, not as interacting multi-agent systems), the adaptation gap (standards assume static systems), and the cross-boundary gap (standards assume single-organization contexts).
  • Organizations should not wait for purpose-built agentic AI standards — adopt existing frameworks as a baseline, extend them with agentic-specific policies and controls using documented extension mappings, and converge with formal standards as they emerge.
  • Mapping existing standards to agentic requirements provides immediate governance coverage: extending ISO/IEC 42001 with agentic policies, extending NIST AI RMF functions with agentic-specific guidance, and extending EU AI Act requirements with autonomy-calibrated interpretations.
  • Contributing to standards development — sharing deployment lessons, participating in technical committees, and engaging with regulatory bodies — ensures that emerging standards reflect practical organizational experience with agentic AI governance.
  • The standards horizon over the next two to five years will likely include autonomy classification standards, multi-agent certification frameworks, cross-organizational interaction standards, and adaptive AI governance standards that formalize the emerging practices described in this body of knowledge.

© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.