
EU AI Act: High-Risk AI Operational Control Reference

From obligations to controls, evals, evidence, owners, and review cadence.

LatentMesh · C5

This reference turns core high-risk AI obligations into operating controls, verification methods, evidence artifacts, owners, and review cadence. Each row answers: what control do I need, how do I verify it holds, what evidence does that produce, who owns it, and when must it be reviewed?

It does not re-explain the regulation. Essay #9 covers interpretation. C2 walks through the obligation-to-evidence loop for a single article. This reference operationalizes many obligations at once. Each row is followed by interpretation notes, framework crosswalks, and source links.

Start with role or article. Then use owner and cadence to map each obligation into operating responsibilities.

Scope. Under Article 6, high-risk classification arises through two paths: AI systems that are safety components of, or are themselves, products covered by Annex I sectoral legislation; and AI systems in the areas listed in Annex III, subject to the Article 6(3) carveout for systems that do not pose a significant risk of harm (except where profiling is involved). This version focuses on Annex III high-risk AI systems and selected provider/deployer operational duties under Articles 9-15, 26-27, and 72-73. It does not yet cover high-risk AI systems embedded in regulated products, conformity assessment procedures, EU database registration, or value-chain obligations in full.

Dates. Under the AI Act as currently in force, Annex III high-risk obligations apply from 2 August 2026 and high-risk AI systems embedded in regulated products from 2 August 2027. The Commission has proposed amendments through the Digital Omnibus (published 19 November 2025), but those changes should be treated as proposals until formally adopted.
Version 1 coverage: Core operational requirements (Arts. 9-15), selected deployer duties (Arts. 26-27), and post-market/incident provisions (Arts. 72-73). Future revisions will add quality management system obligations, registration, conformity assessment, corrective action, cooperation with authorities, and provider/deployer/importer/distributor obligations across the value chain.
Each row lists: Role · Control objective · Eval · Evidence · Owner · Cadence · Frameworks.

Art. 9: Risk management system
Role: Provider
Control: Documented risk register covers health, safety, and fundamental rights risks for intended use and reasonably foreseeable misuse
Eval: Red-team exercise targeting identified risk categories; structured review against threat taxonomy
Evidence: Risk register with severity ratings, test coverage mapping, review sign-off
Owner: Safety · Cadence: Pre-release
Frameworks: NIST AI RMF: Map (MP 3-5) · ISO 42001: 6.1.2

Art. 9(2)(a) requires identification of "known and reasonably foreseeable risks" to health, safety, and fundamental rights. Art. 9(2)(b) separately addresses risk estimation and evaluation "when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse." Risk registers that cover only intended use miss the 9(2)(b) obligation.

Essay #9 on provider obligation structure. Essay #4 on risk in multi-agent chains.
Official text: Regulation (EU) 2024/1689

Role: Provider
Control: Each identified risk has a corresponding mitigation measure; residual risk is within acceptable thresholds
Eval: Traceability matrix linking risk register entries to implemented mitigations; residual risk assessment per change
Evidence: Risk-to-control traceability matrix, residual risk scores, mitigation test results
Owner: Applied AI · Cadence: Per-change
Frameworks: NIST AI RMF: Manage (MG 1-2) · ISO 42001: 6.1.4

Art. 9(7) establishes a priority order: elimination through design first, then reduction, then information and training. Teams must document why elimination was not feasible before relying on downstream mitigations.

Essay #3 on guardrails vs. controls.
Official text: Regulation (EU) 2024/1689
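The traceability check described in the eval row can be automated. A minimal sketch in Python, assuming a hypothetical register schema (`mitigations`, `residual_severity`, and `elimination_rationale` are illustrative field names; the 1-5 severity scale and threshold are assumptions, not Act requirements):

```python
# Minimal risk-to-mitigation traceability check (hypothetical schema).
RESIDUAL_THRESHOLD = 3  # acceptable residual severity on an assumed 1-5 scale

def audit_risk_register(risks):
    """Return (risk_id, issue) findings for gaps the Art. 9 loop must close."""
    findings = []
    for r in risks:
        if not r.get("mitigations"):
            findings.append((r["id"], "no mitigation linked"))
            continue
        if r["residual_severity"] > RESIDUAL_THRESHOLD:
            findings.append((r["id"], "residual risk above threshold"))
        # Art. 9(7): relying on non-design measures needs a documented reason
        if any(m["type"] != "design" for m in r["mitigations"]) and not r.get("elimination_rationale"):
            findings.append((r["id"], "no rationale for relying on downstream mitigation"))
    return findings

register = [
    {"id": "R1", "mitigations": [{"type": "design"}], "residual_severity": 2},
    {"id": "R2", "mitigations": [{"type": "training"}], "residual_severity": 4},
    {"id": "R3", "mitigations": []},
]
for risk_id, issue in audit_risk_register(register):
    print(f"{risk_id}: {issue}")
```

Running this per change (rather than per release) keeps the traceability matrix aligned with the per-change cadence in the row above.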

Role: Provider
Control: Testing demonstrates system performs consistently for intended purpose and meets requirements of Arts. 9-15
Eval: Pre-release eval suite covering accuracy, safety, fairness, and robustness; results compared to declared performance levels
Evidence: Eval run results, pass/fail summary, comparison against declared metrics
Owner: Applied AI · Cadence: Pre-release
Frameworks: NIST AI RMF: Measure (MS 1-2) · ISO 42001: 9.1

Art. 9(6) specifies testing "at any time throughout the development process, and, in any event, prior to the placing on the market or the putting into service." Testing is not a gate at the end; it must be integrated throughout development.

Essay #2 on the eval gap. C1 on harness architecture (loader/runner/scorer separation).
Official text: Regulation (EU) 2024/1689
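The comparison against declared performance levels lends itself to an automated release gate. A sketch under assumed metric names and declared values (nothing here is prescribed by the Act):

```python
# Release gate sketch: block release when measured eval results fall short
# of the performance levels declared in the instructions for use.
# Metric names and declared values are hypothetical.

DECLARED = {"accuracy": 0.92, "safety_pass_rate": 0.99}

def release_gate(measured, declared=DECLARED):
    """Return (ok, failures) where failures maps metric -> (measured, declared)."""
    failures = {m: (measured.get(m, 0.0), target)
                for m, target in declared.items()
                if measured.get(m, 0.0) < target}
    return (not failures, failures)

ok, failures = release_gate({"accuracy": 0.94, "safety_pass_rate": 0.981})
print("release allowed" if ok else f"release blocked: {failures}")
```

Wiring the gate into CI makes testing "throughout the development process" the default rather than an end-of-cycle event.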

Role: Provider
Control: Instructions for use include residual risks and any required deployer-side mitigations
Eval: Documentation review: residual risk register cross-referenced against deployer instructions
Evidence: Deployer instructions with residual risk section, sign-off from documentation review
Owner: Compliance · Cadence: Pre-release
Frameworks: NIST AI RMF: Govern (GV 4) · ISO 42001: 8.4

This creates a handoff dependency: the deployer's ability to comply with Art. 26 depends on receiving accurate residual risk information from the provider. Gaps here cascade into deployer non-compliance.

Essay #8 on evidence pack structure.
Official text: Regulation (EU) 2024/1689

Art. 10: Data and data governance
Role: Provider
Control: Data management practices cover collection, labeling, cleaning, enrichment, and aggregation with documented choices at each stage
Eval: Data lineage audit; schema validation checks on training and evaluation datasets
Evidence: Data governance documentation, lineage records, dataset version manifests
Owner: Applied AI · Cadence: Per-change
Frameworks: NIST AI RMF: Map (MP 2) · ISO 42001: A.7.4

Art. 10(2) lists specific governance practices including design choices, data collection processes, and preparation operations such as annotation and labeling. "Governance" here is operational, not just policy.

Essay #5 on data dependency drift. C4 on detection patterns.
Official text: Regulation (EU) 2024/1689

Role: Provider
Control: Datasets examined for possible biases likely to affect health, safety, or lead to discrimination
Eval: Bias-specific evals across protected attributes; distributional analysis of training data
Evidence: Bias evaluation report with methodology, metrics, findings, and mitigation actions
Owner: Safety · Cadence: Pre-release
Frameworks: NIST AI RMF: Measure (MS 2.6-2.11) · ISO 42001: A.9.3

The threshold is "possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination". Possibility and likelihood, not proof: the absence of bias findings does not discharge the duty; the examination itself must be evidenced.

Essay #2 on capability vs. safety evals.
Official text: Regulation (EU) 2024/1689
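One way to operationalize the distributional analysis is a selection-rate comparison across groups. A sketch using the "four-fifths" heuristic as an illustrative alert threshold (the Act does not fix a numeric test; the threshold and data below are hypothetical):

```python
# Bias screening sketch: positive-outcome rate per group, flagging groups
# below a chosen fraction of the best-performing group's rate. The 0.8
# ("four-fifths") ratio is a common heuristic, not a standard set by the
# Act. Records are (group, outcome) pairs with outcome in {0, 1}.

from collections import defaultdict

def selection_rates(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, outcome in records:
        counts[group][0] += outcome
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def flag_disparities(records, ratio=0.8):
    rates = selection_rates(records)
    top = max(rates.values())
    return sorted(g for g, r in rates.items() if r < ratio * top)

data = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 55 + [("B", 0)] * 45
print(flag_disparities(data))
```

A flagged group is a trigger for the documented methodology-and-mitigation cycle in the evidence column, not a verdict on its own.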

Role: Provider
Control: Training data is relevant to intended purpose, sufficiently representative, and free of errors to the degree possible
Eval: Coverage analysis against intended deployment population; data quality checks
Evidence: Data quality report, coverage metrics, error rate analysis
Owner: Applied AI · Cadence: Per-change
Frameworks: NIST AI RMF: Map (MP 2.3) · ISO 42001: A.7.5

Art. 10(3) uses "to the best extent possible": a proportionality standard. Document what measures were taken and why further improvement was impractical.

Art. 11: Technical documentation
Role: Provider
Control: Technical documentation exists before market placement, covers all Annex IV requirements, and is kept up to date
Eval: Completeness checklist against Annex IV elements; periodic documentation review
Evidence: Technical documentation package, completeness checklist with sign-off, revision history
Owner: Compliance · Cadence: Pre-release
Frameworks: NIST AI RMF: Govern (GV 1) · ISO 42001: 7.5

Annex IV specifies contents including general description, design specifications, development process, monitoring and testing, risk management, changes, performance metrics, and cybersecurity measures. Substantially more comprehensive than a model card.

Essay #8 on evidence pack structure (maps to a subset of Annex IV).
Official text: Regulation (EU) 2024/1689

Role: Provider
Control: Documentation reflects current system state; changes trigger updates within defined SLA
Eval: Change log cross-referenced against documentation revisions; timestamp audit
Evidence: Documentation revision history, change-to-update mapping, staleness audit report
Owner: Compliance · Cadence: Per-change
Frameworks: NIST AI RMF: Govern (GV 1.2) · ISO 42001: 7.5.3

The "kept up to date" requirement is one of the most operationally demanding provisions. Without automation, documentation staleness is the default.

Essay #5 on drift. C4 on detection patterns.
Official text: Regulation (EU) 2024/1689
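The timestamp audit can be scripted: cross-reference system change dates against documentation revision dates and flag changes with no update inside the SLA window. A sketch with an assumed 14-day SLA and illustrative change IDs:

```python
# Documentation staleness sketch: flag changes with no documentation
# revision inside an SLA window. The 14-day SLA and ID format are
# assumptions; the Act requires "kept up to date" without a number.

from datetime import date, timedelta

SLA = timedelta(days=14)

def stale_changes(changes, doc_revisions):
    """Return change IDs with no doc revision within SLA after the change."""
    stale = []
    for change_id, changed_on in changes:
        if not any(changed_on <= rev <= changed_on + SLA for rev in doc_revisions):
            stale.append(change_id)
    return stale

changes = [("CHG-101", date(2027, 1, 5)), ("CHG-102", date(2027, 3, 1))]
revisions = [date(2027, 1, 12)]
print(stale_changes(changes, revisions))
```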

Art. 12: Record-keeping
Role: Provider
Control: System technically allows automatic recording of events relevant to risk identification, post-market monitoring, and operation monitoring
Eval: Log completeness tests: trigger known events, verify they appear in structured logs with required fields
Evidence: Log schema specification, completeness test results, sample structured logs
Owner: Platform Eng · Cadence: Pre-release
Frameworks: NIST AI RMF: Measure (MS 4) · ISO 42001: A.7.2

Art. 12 requires the system to "technically allow" automatic logging: a design obligation on the provider. The Act does not prescribe a specific schema. For agentic systems, structured logging of inputs, outputs, tool calls, versions, human interventions, and operational context exceeds the statutory minimum but is the safest path to meeting traceability standards.

C3 on the gap between what teams capture and what auditors need.
Official text: Regulation (EU) 2024/1689
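A log completeness test of the kind described above might look like this. The required-field list is an assumption aligned with the agentic-system discussion, not a schema the Act prescribes (it prescribes none):

```python
# Log completeness sketch: given structured JSON-lines logs and the set of
# event IDs that were deliberately triggered, report events that never
# appeared and records missing required fields. Field names are assumed.

import json

REQUIRED_FIELDS = {"timestamp", "event_type", "input_ref", "output_ref",
                   "model_version", "human_intervention"}

def check_log_completeness(log_lines, expected_event_ids):
    seen, malformed = set(), []
    for line in log_lines:
        record = json.loads(line)
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            malformed.append((record.get("event_id"), sorted(missing)))
        seen.add(record.get("event_id"))
    return sorted(expected_event_ids - seen), malformed

logs = [
    json.dumps({"event_id": "e1", "timestamp": "2026-08-03T10:00:00Z",
                "event_type": "inference", "input_ref": "i1", "output_ref": "o1",
                "model_version": "1.4.0", "human_intervention": False}),
    json.dumps({"event_id": "e2", "timestamp": "2026-08-03T10:01:00Z",
                "event_type": "override", "input_ref": "i2", "output_ref": "o2"}),
]
missing_events, malformed = check_log_completeness(logs, {"e1", "e2", "e3"})
print(missing_events, malformed)
```

Triggering known events and asserting their presence tests the "technically allow" design obligation directly, rather than inspecting whatever happens to be in the log.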

Role: Deployer
Control: Logs retained for minimum 6 months (per Art. 26(6), subject to applicable law) and accessible for compliance monitoring
Eval: Log retention audit: verify logs from N months ago are retrievable, complete, and unaltered
Evidence: Retention policy document, retrieval test results, access control records
Owner: Ops · Cadence: Continuous
Frameworks: NIST AI RMF: Govern (GV 1.1) · ISO 42001: A.6.2.3

Art. 26(6) places the retention duty on the deployer. The six-month minimum is subject to applicable Union or national law. Deployers must ensure logs remain under their control and are not solely stored in provider infrastructure without access guarantees.

C3 on operational logs vs. audit-ready evidence.
Official text: Regulation (EU) 2024/1689
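The "retrievable, complete, and unaltered" check can be grounded in content hashes recorded at archive time. A sketch; the manifest format and storage layout are assumptions:

```python
# Retention audit sketch: verify archived log batches are still present
# and unaltered by recomputing a SHA-256 digest recorded at write time.

import hashlib

def make_manifest(batches):
    """Record a digest per batch at archive time."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in batches.items()}

def retention_audit(manifest, storage):
    """Return batches that are missing or whose content no longer matches."""
    problems = []
    for name, digest in manifest.items():
        data = storage.get(name)
        if data is None:
            problems.append((name, "missing"))
        elif hashlib.sha256(data).hexdigest() != digest:
            problems.append((name, "altered"))
    return problems

archive = {"2026-02": b"...log lines...", "2026-03": b"...log lines..."}
manifest = make_manifest(archive)
archive["2026-03"] = b"...tampered..."  # simulate alteration
del archive["2026-02"]                  # simulate loss
print(retention_audit(manifest, archive))
```

Keeping the manifest under deployer control is one way to address the access-guarantee concern when logs physically live in provider infrastructure.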

Art. 13: Transparency and provision of information to deployers
Role: Provider
Control: System output includes sufficient context for deployers to interpret results and use them appropriately
Eval: Interpretability assessment: present outputs to deployer-representative users, measure comprehension
Evidence: Interpretability test results, deployer comprehension scores, output format specification
Owner: Product · Cadence: Pre-release
Frameworks: NIST AI RMF: Map (MP 5) · ISO 42001: A.8.2

A design obligation: the system must be built to be interpretable, not merely documented after the fact. "Appropriate" interpretation, not complete explainability.

Role: Provider
Control: Instructions cover provider identity, system characteristics, performance metrics, known limitations, human oversight measures, expected lifetime, and maintenance
Eval: Completeness checklist against Art. 13(3) requirements; deployer comprehension review
Evidence: Instructions for use document, completeness checklist, revision history
Owner: Compliance · Cadence: Pre-release
Frameworks: NIST AI RMF: Govern (GV 4) · ISO 42001: A.8.4

Art. 13(3)(b)(ii) requires disclosure of "known or foreseeable circumstances... which may lead to risks." A continuing disclosure obligation as new risks are discovered post-deployment.

Essay #9 on transparency obligations.
Official text: Regulation (EU) 2024/1689

Art. 14: Human oversight
Role: Provider
Control: System includes mechanisms enabling oversight persons to monitor operation and intervene during period of use
Eval: Oversight workflow test: simulate scenarios requiring intervention, verify mechanisms function correctly
Evidence: Oversight mechanism specification, intervention test results, workflow documentation
Owner: Platform Eng · Cadence: Pre-release
Frameworks: NIST AI RMF: Govern (GV 3) · ISO 42001: A.8.5

Art. 14(2): oversight aims to "prevent or minimise the risks to health, safety or fundamental rights." A design obligation: the system must be built for oversight, not merely accompanied by a process document.

C2 walks the Article 14 loop end to end.
Official text: Regulation (EU) 2024/1689

Role: Provider
Control: Oversight person can decide not to use the system, disregard, override, or reverse its output, and interrupt operation via a stop mechanism
Eval: Override capability test: trigger override/stop actions, verify system responds; log capture test for override events
Evidence: Override mechanism test results, stop-button test results, override event log samples
Owner: Platform Eng · Cadence: Pre-release
Frameworks: NIST AI RMF: Govern (GV 3.2) · ISO 42001: A.8.5

Art. 14(4)(d): "not to use the high-risk AI system or to otherwise disregard, override or reverse the output." Art. 14(4)(e) adds "interrupt the operation" via a stop mechanism. The logging requirements for proving these rights are exercisable sit in Art. 12 and Annex IV, not Art. 14 itself.

C2 on Article 14. C3 on logging gap.
Official text: Regulation (EU) 2024/1689

Role: Deployer
Control: Persons assigned to oversight have necessary competence, training, and authority; understand system capabilities and limitations
Eval: Role assignment records; competence assessment against provider instructions; periodic training verification
Evidence: Oversight assignment records, training completion logs, competence assessment results
Owner: Ops · Cadence: Continuous
Frameworks: NIST AI RMF: Govern (GV 3.1) · ISO 42001: 7.2

Art. 26(2): assign oversight "to natural persons who have the necessary competence, training and authority." Competence must be assessed and documented. The deployer cannot outsource this to the provider.

Essay #4 on accountability gaps.
Official text: Regulation (EU) 2024/1689

Art. 15: Accuracy, robustness and cybersecurity
Role: Provider
Control: System achieves accuracy levels appropriate to intended purpose; levels declared in instructions for use
Eval: Accuracy eval suite across intended use scenarios; results compared to declared thresholds; confidence intervals
Evidence: Accuracy evaluation report, declared performance metrics, eval dataset descriptions
Owner: Applied AI · Cadence: Pre-release
Frameworks: NIST AI RMF: Measure (MS 1) · ISO 42001: A.9.2

"Appropriate" is not defined quantitatively. The provider determines levels based on intended purpose and risks. The key obligation is transparency: declared accuracy must match tested accuracy.

Essay #2 on benchmarks vs. production. C1 on harness architecture.
Official text: Regulation (EU) 2024/1689
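Confidence intervals matter here: a tested accuracy above the declared level can still fail to support the declaration when the eval set is small. A sketch using the Wilson score interval (the declared value and counts are hypothetical):

```python
# Wilson score interval on eval pass counts: does the test result actually
# support the declared accuracy? Declared level and counts are illustrative.

import math

def wilson_interval(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

declared = 0.92
lo, hi = wilson_interval(successes=934, n=1000)
verdict = "supported" if lo >= declared else "not supported at this sample size"
print(f"tested 95% CI [{lo:.3f}, {hi:.3f}]; declared {declared}: {verdict}")
```

Here a measured 93.4% on 1,000 items still leaves the interval's lower bound below a declared 92%, so the declaration is not yet evidenced; a larger eval set or a lower declared level closes the gap.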

Role: Provider
Control: System maintains performance under errors, faults, or inconsistencies in inputs or environment
Eval: Robustness eval: inject malformed inputs, simulate tool failures, verify graceful degradation
Evidence: Robustness test results, fault injection logs, degradation behavior documentation
Owner: Applied AI · Cadence: Per-change
Frameworks: NIST AI RMF: Measure (MS 2.4) · ISO 42001: A.9.4

Art. 15(3) includes "inconsistencies within or among the components of the high-risk AI system or its environment." For agentic systems: test when tools return unexpected responses, retrieval indices change, or upstream dependencies shift.

Essay #1 on distributed systems failures. Essay #5 on drift.
Official text: Regulation (EU) 2024/1689
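A minimal fault-injection harness for the tool-failure and malformed-input cases might look like this; the fallback policy and status labels are assumptions:

```python
# Fault-injection sketch: wrap a tool call so that backend failures degrade
# to a safe fallback and malformed inputs are rejected rather than passed
# through. Statuses and fallback text are assumptions, not Act terms.

def flaky_tool(query):
    # Simulated upstream dependency failure (an Art. 15(3) "inconsistency")
    raise TimeoutError("tool backend unavailable")

def call_with_fallback(tool, query,
                       fallback="unable to retrieve; deferring to human review"):
    if not isinstance(query, str) or not query.strip():
        return {"status": "rejected", "answer": None}  # malformed input
    try:
        return {"status": "ok", "answer": tool(query)}
    except Exception:
        return {"status": "degraded", "answer": fallback}  # graceful degradation

results = [call_with_fallback(flaky_tool, "lookup case 42"),
           call_with_fallback(flaky_tool, "")]
print([r["status"] for r in results])
```

The eval asserts on the wrapper's observable behavior, which is what "graceful degradation" evidence has to show.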

Role: Provider
Control: System is resilient against unauthorized attempts to alter its use or performance; technical redundancy and fail-safe measures in place
Eval: Adversarial testing (prompt injection, data poisoning, model extraction); fail-safe trigger tests
Evidence: Adversarial test results, security assessment report, fail-safe documentation and test logs
Owner: Platform Eng · Cadence: Continuous
Frameworks: NIST AI RMF: Manage (MG 2.5) · ISO 42001: A.9.5

Art. 15(4) covers both traditional cybersecurity and AI-specific attack vectors. For agentic systems, prompt injection and tool-use manipulation are within scope. Art. 15(5): "technical redundancy solutions, which may include backup or fail-safe plans": not just monitoring, but fallback mechanisms.

Essay #3 on guardrails vs. controls.
Official text: Regulation (EU) 2024/1689

Arts. 72-73 and 26-27: Post-market monitoring, incident reporting, and deployer duties
Role: Provider
Control: Proportionate post-market monitoring system actively collects, documents, and analyzes relevant data throughout the system lifetime
Eval: Monitoring coverage audit: verify data collection covers intended-purpose scenarios; review analysis cadence
Evidence: Post-market monitoring plan, data collection records, periodic analysis reports
Owner: Safety · Cadence: Continuous
Frameworks: NIST AI RMF: Manage (MG 3-4) · ISO 42001: 10.1

Art. 72(2): "actively and systematically" collecting data. Passive log aggregation alone may not satisfy this. Must be "proportionate to the nature of the AI technologies and the risks."

C4 on drift detection. Essay #10 on incident response.
Official text: Regulation (EU) 2024/1689
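"Actively and systematically" analyzing collected data implies computed drift signals, not just stored logs. A sketch using the Population Stability Index, a common drift metric; the bucket edges and 0.2 alert threshold are conventions, not Act requirements:

```python
# Population Stability Index (PSI) between the score distribution at
# release and a production window. Bucket edges and the 0.2 "investigate"
# threshold are common conventions, not requirements from the Act.

import math

def psi(expected, actual, edges):
    def shares(values):
        counts = [0] * (len(edges) - 1)
        for v in values:
            for i in range(len(counts)):
                if edges[i] <= v < edges[i + 1] or (i == len(counts) - 1 and v == edges[-1]):
                    counts[i] += 1
                    break
        return [max(c / len(values), 1e-6) for c in counts]  # floor avoids log(0)
    return sum((a - e) * math.log(a / e)
               for e, a in zip(shares(expected), shares(actual)))

edges = [0.0, 0.25, 0.5, 0.75, 1.0]
baseline = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9] * 25
production = [0.7, 0.8, 0.85, 0.9, 0.95, 0.6, 0.3, 0.9] * 25
score = psi(baseline, production, edges)
print(f"PSI = {score:.2f} -> {'investigate' if score > 0.2 else 'stable'}")
```

A scheduled job computing signals like this, plus a documented review of the results, is closer to "active and systematic" than passive log aggregation.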

Role: Deployer
Control: Operational monitoring follows provider instructions; anomalies detected and acted upon
Eval: Monitoring implementation audit: verify deployer monitoring covers provider-specified indicators
Evidence: Monitoring configuration records, alert logs, anomaly response records
Owner: Ops · Cadence: Continuous
Frameworks: NIST AI RMF: Manage (MG 3.1) · ISO 42001: 9.1

Art. 26(5): the deployer monitors "on the basis of the instructions for use." If the provider's instructions are vague, the deployer's monitoring obligation is difficult to satisfy. Shared-responsibility chain.

Essay #4 on accountability chains.
Official text: Regulation (EU) 2024/1689

Role: Provider
Control: Serious incidents reported immediately after establishing a causal link, within 15 days at most
Eval: Incident classification test: simulate incidents, verify severity classification; tabletop exercise
Evidence: Incident response playbook, classification criteria, tabletop exercise records
Owner: Compliance · Cadence: Incident-triggered
Frameworks: NIST AI RMF: Manage (MG 4) · ISO 42001: 10.2

Art. 73(4): initial report "immediately after the provider has established a causal link" or within 15 days. Art. 73(5): report must include all information necessary to determine severity. A team without a pre-established workflow cannot meet this timeline.

Essay #10 on incident response gap.
Official text: Regulation (EU) 2024/1689
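The outer reporting clock is trivial to compute but easy to miss without tooling. A sketch of the 15-day case only (Art. 73 sets shorter deadlines for certain incident classes, which this ignores):

```python
# Art. 73 outer reporting clock: 15 days from the point the provider
# establishes a causal link. Shorter deadlines apply to some incident
# classes under Art. 73; only the general 15-day case is modeled here.

from datetime import date, timedelta

def reporting_deadline(causal_link_established, max_days=15):
    return causal_link_established + timedelta(days=max_days)

print(reporting_deadline(date(2026, 9, 1)))  # 2026-09-16
```

The harder operational problem is the trigger, not the arithmetic: the playbook must define who decides that a causal link is "established" and log that timestamp.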

Role: Deployer
Control: Deployer informs provider and, where required by applicable law, the relevant authority upon awareness of a serious incident or malfunction
Eval: Incident escalation workflow test: simulate deployer-detected incidents, verify notification reaches provider
Evidence: Escalation workflow documentation, notification records, provider communication logs
Owner: Compliance · Cadence: Incident-triggered
Frameworks: NIST AI RMF: Manage (MG 4) · ISO 42001: 10.2

The deployer's primary incident duty sits in Art. 26(5): inform the provider and, where applicable, the relevant authority. Art. 73 is principally a provider obligation; it applies to deployers only in limited circumstances, such as when the deployer cannot reach the provider. Both parties may need to report the same incident through different channels.

Essay #10 on incident response. Essay #4 on accountability.
Official text: Regulation (EU) 2024/1689

Role: Deployer
Control: Assessment covering elements in Art. 27(1)(a)-(f) completed before deployment
Eval: FRIA completeness review against Art. 27(1)(a)-(f) elements; legal and domain-expert review
Evidence: FRIA document, reviewer sign-off, notification to authority per Art. 27(3)
Owner: Compliance · Cadence: Pre-release
Frameworks: NIST AI RMF: Map (MP 5) · ISO 42001: 6.1.2

Art. 27(1) applies to specific deployer categories: bodies governed by public law, private entities providing public services, and certain other listed categories. Not every deployer is in scope. Assessment contents: Art. 27(1)(a)-(f). Art. 27(3) separately requires notification. First assessment before first use; update when relevant changes occur.

Essay #9 on deployer obligations.
Official text: Regulation (EU) 2024/1689

Role: Deployer
Control: Workers subject to the use of high-risk AI systems are informed, including worker representatives where relevant
Eval: Notification process audit: verify records exist and cover all affected roles
Evidence: Worker notification records, communication logs, representative acknowledgments
Owner: Ops · Cadence: Continuous
Frameworks: NIST AI RMF: Govern (GV 4) · ISO 42001: 7.3

Art. 26(7): workers must be informed "before being subject to the use of the system." New deployments, expanded use cases, or significant system changes trigger fresh notification. Interacts with Member State labor law.