NIST AI 100-1: Artificial Intelligence Risk Management Framework (AI RMF 1.0)
National Institute of Standards and Technology (NIST), U.S. Department of Commerce, January 2023. A voluntary framework describing the core functions organizations should implement to manage AI risk.
Overview
The NIST AI RMF defines four core functions for managing AI risk: GOVERN (cultivating a risk-aware organizational AI culture), MAP (establishing context, categorizing AI systems, and identifying risks), MEASURE (analyzing, assessing, and tracking risks), and MANAGE (prioritizing and responding to risks). Each function is broken down into categories and subcategories of specific practices that apply across the AI lifecycle. The framework is voluntary and sector-agnostic.
Why It Matters
The AI RMF has become the de facto standard for AI risk management in the United States and is widely referenced internationally. Federal agencies are increasingly required to align with it, and enterprise customers, particularly in financial services, healthcare, and critical infrastructure, expect AI vendors and partners to demonstrate AI RMF conformance. The companion NIST AI RMF Playbook suggests actions for each of the framework's 72 subcategories, giving organizations a concrete implementation roadmap.
How COMPEL Aligns
COMPEL's 18-domain structure provides the operational implementation layer that the AI RMF describes at the function level. Each COMPEL domain maps to specific AI RMF subcategories: D17 Risk Management implements GOVERN and MAP practices; D6 Data Governance and D15 Ethics & Fairness implement MEASURE practices; D7 MLOps and D13 Security Hardening implement MANAGE practices. Organizations operating COMPEL generate the evidence, documentation, and institutional practices that AI RMF conformance requires.
COMPEL Operationalizes
- GOVERN: D1 Leadership Sponsorship, D14 AI Strategy Alignment, D18 Governance Structure, D15 Ethics & Fairness — organizational AI risk culture and policies
- MAP: D5 Use Case Management, D17 Risk Management, D16 Regulatory Compliance — AI system categorization, context, and risk identification
- MEASURE: D6 Data Governance, D15 Ethics & Fairness, D7 MLOps — AI risk analysis, bias testing, performance measurement
- MANAGE: D17 Risk Management, D13 Security Hardening, D9 Continuous Improvement — risk treatment, incident response, ongoing risk management
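The function-to-domain mapping above can be restated as a small data structure, which is useful if an organization wants to trace a COMPEL domain back to the AI RMF functions it supports. This is a hypothetical sketch for illustration (the dictionary restates the bullets; the `functions_for` helper is not part of COMPEL or the RMF):

```python
# Sketch: the AI RMF function -> COMPEL domain mapping from the bullets above,
# restated as data so it can be queried programmatically.
RMF_TO_COMPEL = {
    "GOVERN": ["D1", "D14", "D18", "D15"],
    "MAP": ["D5", "D17", "D16"],
    "MEASURE": ["D6", "D15", "D7"],
    "MANAGE": ["D17", "D13", "D9"],
}

def functions_for(domain: str) -> list[str]:
    """Reverse lookup: which AI RMF functions a given COMPEL domain supports."""
    return [fn for fn, domains in RMF_TO_COMPEL.items() if domain in domains]

# D17 Risk Management appears under both MAP and MANAGE,
# reflecting its role in both risk identification and risk treatment.
print(functions_for("D17"))  # ['MAP', 'MANAGE']
```

Note that domains such as D15 and D17 deliberately appear under more than one function; a domain-to-function mapping is many-to-many, not a partition.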
Stage Alignment
- Calibrate (primary): GOVERN, MAP — Assessment & Categorization
- Organize (primary): GOVERN — Culture, Roles, Accountability
- Model (secondary): MAP — Risk ID, Context, Documentation
- Produce (primary): MANAGE — Controls, Risk Treatment
- Evaluate (primary): MEASURE — Testing, Benchmarking, Evaluation
- Learn (primary): MANAGE — Monitoring, Continual Risk Response
Key Requirements
- GOVERN 1.1 (policies, processes, and documentation for trustworthy AI): COMPEL Model stage AI Policy Framework and the D18 Governance Structure domain
- MAP 1.5 (AI risk assessment across lifecycle stages): COMPEL D17 Risk Management taxonomy, applied at Calibrate and maintained through Learn stage monitoring
- MEASURE 2.5 (AI risk is measured, assessed, and documented): COMPEL Evaluate stage governance scorecard and the Gate E review process
- MANAGE 1.3 (responses to identified AI risks are planned and documented): COMPEL risk treatment plans in D17, tracked through the Continuous Improvement Register in the Learn stage
- GOVERN 6.2 (mechanisms for individuals to report AI concerns): COMPEL D18 Governance Structure escalation paths and AI Ethics Board reporting channels
Abdelalim, T. (2025). “NIST AI RMF — Standards Alignment.” COMPEL by FlowRidge. https://www.compel.one/standards/nist-ai-rmf