
The Regulatory Mapping Table

An interactive reference that turns EU AI Act high-risk obligations into operating controls, verification methods, evidence artifacts, owners, and review cadences. Filter by role, article, cluster, or cadence to map obligations into your operating responsibilities.

The regulation tells you what to prove. The mapping table tells you how to operate the proof.


This is a companion to Mapping the EU AI Act to Engineering Evidence in the Reliable Agent Systems series.


Essay #9 mapped EU AI Act obligations to engineering evidence. From Obligation to Evidence in 90 Minutes walked through the loop for a single article. This reference operationalizes many obligations at once.

What this reference covers

Twenty-five operating controls derived from Articles 9–15, 26–27, and 72–73 of the EU AI Act, organized into eight clusters:

  1. Risk Management — continuous risk identification, residual risk documentation, risk-proportionate testing
  2. Data Governance — training data quality controls, bias monitoring, data-sheet maintenance
  3. Technical Documentation — system design records, model cards, change-log discipline
  4. Logging and Traceability — automatic event logging, log retention, audit trail completeness
  5. Transparency — user-facing disclosure, interaction transparency, decision explainability
  6. Human Oversight — override mechanisms, escalation protocols, operator competence
  7. Accuracy, Robustness and Cybersecurity — performance baselines, adversarial testing, security controls
  8. Post-Market Monitoring and Incidents — field performance tracking, serious incident reporting, corrective action

Each row answers five questions: What control do I need? How do I verify it holds? What evidence does that produce? Who owns it? When must it be reviewed?

How to use it

The reference is an interactive, filterable table. Start with role (Provider or Deployer) to see only the obligations that apply to you. Then use article, cluster, owner, or cadence to map each obligation into your operating responsibilities.
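The row structure and filtering workflow above can be sketched as a small data model. This is a hypothetical illustration, not the table's actual implementation; the field names, sample controls, owners, and cadences are all assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Control:
    """One row of the mapping table: the five questions as fields."""
    article: str       # which article the obligation comes from, e.g. "Art. 12"
    cluster: str       # one of the eight clusters
    role: str          # "Provider" or "Deployer"
    control: str       # what control do I need?
    verification: str  # how do I verify it holds?
    evidence: str      # what evidence does that produce?
    owner: str         # who owns it?
    cadence: str       # when must it be reviewed?

# Hypothetical sample rows (not taken from the actual reference)
TABLE = [
    Control("Art. 12", "Logging and Traceability", "Provider",
            "Automatic event logging", "Sample logs in staging",
            "Retained log archive", "Platform Eng", "Quarterly"),
    Control("Art. 26", "Human Oversight", "Deployer",
            "Override mechanism", "Operator override drill",
            "Drill report", "Ops Lead", "Semi-annual"),
]

def filter_controls(rows, **criteria):
    """Narrow rows by any combination of fields, e.g. role first, then cluster."""
    return [r for r in rows
            if all(getattr(r, field) == value
                   for field, value in criteria.items())]

# Start with role, as the text suggests, then narrow further
deployer_rows = filter_controls(TABLE, role="Deployer")
```

The point of the sketch is the workflow, not the code: every filter is just an equality match on one of the row's fields, so starting with role and layering on article, cluster, owner, or cadence composes naturally.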

Click any row to expand interpretation notes, framework crosswalks (ISO 42001, NIST AI RMF), and direct links to the official regulation text.

EU AI Act: High-Risk AI Operational Control Reference

25 controls · 8 clusters · Filterable by role, article, cluster, owner, cadence, and evidence type

Open the interactive reference →

Scope and dates

This version focuses on Annex III high-risk AI systems and selected provider/deployer operational duties. It does not yet cover high-risk AI systems embedded in regulated products (Annex I), conformity assessment procedures, EU database registration, or value-chain obligations in full.

Under the AI Act as currently in force, Annex III high-risk obligations apply from 2 August 2026 and high-risk AI systems embedded in regulated products from 2 August 2027. The Commission has proposed amendments through the Digital Omnibus (published 19 November 2025), but those changes should be treated as draft until adopted.

How this connects

The Reliable Agent Systems series has been building toward this: Controls Are Not Guardrails defined what a control is. Anatomy of an Evidence Pack defined what evidence looks like. Essay #9 mapped obligations to engineering artifacts. This table turns that mapping into operating procedures — the controls you run, the evidence they produce, and the cadence at which you review them.


Previously: Drift Detection Patterns for Production Agents. Related series essay: Mapping the EU AI Act to Engineering Evidence.


Selected references