NIST AI 100-1: Artificial Intelligence Risk Management Framework (AI RMF 1.0)

National Institute of Standards and Technology (NIST), U.S. Department of Commerce (2023). The U.S. risk management framework for AI, defining the core functions organizations implement to manage AI risk.

Overview

The NIST AI RMF defines four core functions for managing AI risk: GOVERN (creating a culture of risk-aware AI), MAP (establishing context and identifying risks), MEASURE (analyzing and assessing risks), and MANAGE (prioritizing and responding to risks). Each function is broken down into categories and subcategories of specific practices, which the framework relates to stages of the AI lifecycle. The framework is voluntary and sector-agnostic.
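The nesting of functions into practice areas can be sketched as a small data structure. This is a hypothetical illustration: the one-line descriptions paraphrase the overview above and are not quoted from the framework text.

```python
# Sketch of the AI RMF core: four functions, each of which the
# framework further divides into categories and subcategories.
# Descriptions paraphrase the overview above (illustrative only).
AI_RMF_FUNCTIONS = {
    "GOVERN": "Create a culture of risk-aware AI across the organization",
    "MAP": "Establish context and identify risks for each AI system",
    "MEASURE": "Analyze and assess identified risks",
    "MANAGE": "Prioritize and respond to risks based on assessments",
}

def describe(function_id: str) -> str:
    """Return a one-line summary of a core function."""
    return f"{function_id}: {AI_RMF_FUNCTIONS[function_id]}"

print(describe("MEASURE"))
```

A fuller model would attach category and subcategory identifiers (e.g. the framework's "GOVERN 1.1" numbering) beneath each function.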

Why It Matters

The AI RMF has become the de facto standard for AI risk management in the United States and is widely referenced internationally. Federal agencies are increasingly required to align with it, and enterprise customers — particularly in financial services, healthcare, and critical infrastructure — expect AI vendors and partners to demonstrate AI RMF conformance. The companion NIST AI RMF Playbook suggests concrete actions for each of the framework's 72 subcategories, creating a practical implementation roadmap.

How COMPEL Aligns

COMPEL's 18-domain structure provides the operational implementation layer that the AI RMF describes at the function level. Each COMPEL domain maps to specific AI RMF sub-categories: D17 Risk Management implements GOVERN and MAP practices; D6 Data Governance and D15 Ethics & Fairness implement MEASURE practices; D7 MLOps and D13 Security Hardening implement MANAGE practices. Organizations operating COMPEL generate the evidence, documentation, and institutional practices that AI RMF conformance requires.
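The domain-to-function mapping above can be expressed as a simple crosswalk. This is a minimal sketch: the domain names and function assignments come from the paragraph above, while the `domains_for` helper is an illustrative addition, and a real crosswalk would map down to individual AI RMF subcategories rather than whole functions.

```python
# Crosswalk from COMPEL domains to the AI RMF core functions they
# implement, as stated in the alignment text above.
DOMAIN_TO_FUNCTIONS = {
    "D17 Risk Management": ["GOVERN", "MAP"],
    "D6 Data Governance": ["MEASURE"],
    "D15 Ethics & Fairness": ["MEASURE"],
    "D7 MLOps": ["MANAGE"],
    "D13 Security Hardening": ["MANAGE"],
}

def domains_for(function_id: str) -> list[str]:
    """Return the COMPEL domains that implement a given AI RMF function."""
    return sorted(
        domain
        for domain, functions in DOMAIN_TO_FUNCTIONS.items()
        if function_id in functions
    )

print(domains_for("MEASURE"))
```

Inverting the mapping this way is useful during an assessment: for each AI RMF function, it lists which COMPEL domains should hold the supporting evidence.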

Abdelalim, T. (2025). “NIST AI RMF — Standards Alignment.” COMPEL by FlowRidge. https://www.compel.one/standards/nist-ai-rmf