Regulation (EU) 2024/1689 — Artificial Intelligence Act

European Parliament and Council of the European Union (2024) — the EU's legal framework for AI and what to comply with

Overview

The EU AI Act is the world's first comprehensive legal framework for AI. It classifies AI systems into four risk categories — unacceptable (prohibited), high-risk (mandatory requirements), limited-risk (transparency obligations), and minimal-risk (voluntary codes of practice) — and imposes requirements that scale with risk level. High-risk AI systems face mandatory conformity assessment, technical documentation, human oversight, and post-market monitoring obligations.
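The four risk tiers and their scaled obligations can be sketched as a simple data structure. This is an illustrative summary of the categories named above, not a legal enumeration — the tier names follow the Act, but the obligation lists are abbreviated and the `RiskTier`/`obligations_for` names are hypothetical:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # mandatory requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of practice

# Obligations scale with risk tier (abbreviated, illustrative only).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited -- may not be placed on the EU market"],
    RiskTier.HIGH: [
        "conformity assessment",
        "technical documentation",
        "human oversight",
        "post-market monitoring",
    ],
    RiskTier.LIMITED: ["transparency disclosures"],
    RiskTier.MINIMAL: ["voluntary codes of practice"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (abbreviated) obligation list for a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the structure is that requirements attach to the tier, not the system: classify first, then the obligation set follows.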

Why It Matters

With full enforcement from August 2026 (with earlier deadlines for prohibited practices and GPAI models), the EU AI Act creates binding legal obligations for any organization deploying covered AI systems in the EU market — regardless of where the organization is based. Non-compliance carries fines of up to €35 million or 7% of global annual turnover. For organizations with EU customers, partners, or data subjects, AI Act compliance is a legal necessity, not a choice.

How COMPEL Aligns

COMPEL produces the documentation, governance structures, and evidence trails that EU AI Act compliance requires. Risk classification occurs in Calibrate; transparency documentation and human oversight mechanisms are designed in Model; technical documentation and quality management are implemented in Produce; conformity assessment evidence is generated in Evaluate; and post-market monitoring is operationalized in Learn. Organizations that mature through COMPEL accumulate EU AI Act compliance artifacts as a natural output of their governance operations.
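The stage-to-artifact mapping described above can be summarized as a lookup table. The stage names and artifacts are taken from the text; the dict and the `compliance_artifacts` helper are a sketch for illustration, not COMPEL's actual interface:

```python
# COMPEL stage -> EU AI Act compliance artifacts produced at that stage,
# per the alignment described above (illustrative structure).
STAGE_ARTIFACTS = {
    "Calibrate": ["risk classification"],
    "Model": ["transparency documentation", "human oversight mechanisms"],
    "Produce": ["technical documentation", "quality management"],
    "Evaluate": ["conformity assessment evidence"],
    "Learn": ["post-market monitoring"],
}

def compliance_artifacts() -> list[str]:
    """Flatten the per-stage outputs into one compliance evidence list."""
    return [a for artifacts in STAGE_ARTIFACTS.values() for a in artifacts]
```

Walking the stages in order yields the evidence trail as a byproduct of normal governance operations, which is the alignment claim made above.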

Abdelalim, T. (2025). “EU AI Act — Standards Alignment.” COMPEL by FlowRidge. https://www.compel.one/standards/eu-ai-act