Regulation Guide
EU AI Act: A Developer's Guide
The world's first comprehensive AI regulation. Full enforcement for high-risk AI systems: August 2, 2026.
What is the EU AI Act?
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It entered into force on August 1, 2024, with full enforcement for high-risk AI systems starting August 2, 2026.
The regulation applies to any provider or deployer who places an AI system on the EU market, or whose AI output is used within the EU — regardless of where the company is based. If your agent acts on behalf of EU users, you are in scope.
Key point for agent developers: Agents that take real-world actions — booking flights, sending emails, charging cards, submitting forms — are likely classified as high-risk AI under Annex III. The Act was written with automated decision-making in mind.
Enforcement Timeline
- February 2, 2025: Prohibited practices banned + AI literacy obligations
- August 2, 2025: GPAI model obligations
- August 2, 2026: High-risk AI system requirements ← You are here
- August 2, 2027: Extended transition for high-risk AI embedded in regulated products
Risk Classification
| Tier | Examples | Status |
|---|---|---|
| Unacceptable | Social scoring, emotion recognition in workplaces, real-time remote biometric identification in public spaces | Banned |
| High-Risk | Critical infrastructure, employment, education, law enforcement, credit scoring, border control | Heavy regulation |
| Limited Risk | Chatbots, deepfakes, AI-generated text and images | Transparency obligations |
| Minimal Risk | Spam filters, AI games, recommendation engines with no significant impact | Unregulated |
Most autonomous AI agents that interact with real systems — finance, HR, customer service, or any system that takes consequential action — fall into High-Risk or Limited Risk. If your agent can affect someone's finances, employment, or access to services, assume High-Risk applies.
Penalties
| Maximum fine | Violation |
|---|---|
| €35M or 7% of global annual turnover (whichever is higher) | Prohibited practices violations |
| €15M or 3% of global annual turnover (whichever is higher) | High-risk non-compliance |
| €7.5M or 1% of global annual turnover (whichever is higher) | Supplying incorrect information to authorities |
Key Articles for Agent Developers
These are the articles most likely to affect developers building AI agents. Summaries are plain English — consult the full regulation text for legal certainty.
Risk Management System (Article 9)
Requires providers of high-risk AI to implement a documented risk management system throughout the entire AI lifecycle. The system must identify, analyze, and evaluate foreseeable risks, and put in place appropriate risk mitigation measures.
For agents: You need documented processes for what happens when your agent fails, loops, or takes unexpected actions.
Data Governance (Article 10)
Training, validation, and testing datasets must meet quality criteria and be subject to data governance practices. This includes examining for biases, documenting relevant design choices, and identifying data gaps or shortcomings.
For agents: If your agent learns from interaction data or uses RAG, you need to document your data sources and quality controls.
Technical Documentation (Article 11)
Before placing a high-risk AI system on the market, providers must draw up technical documentation demonstrating compliance with the Act. This documentation must be kept up to date throughout the system's lifecycle.
For agents: Maintain docs describing your agent's purpose, capabilities, limitations, and known failure modes.
Record-Keeping / Logging (Article 12)
High-risk AI systems must be capable of automatically logging events throughout their operation to ensure traceability. For remote biometric identification systems, the logs must additionally cover the period of each use, the reference database checked, the input data, and the natural persons who verified the results.
For agents: Every agent decision, tool call, and output must be logged with timestamps. Fuze handles this automatically.
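In practice, Article 12-style traceability maps to ordinary structured logging. Below is a minimal sketch using only the Python standard library; the `TraceLogger` class and its event fields are illustrative assumptions, not Fuze's actual API:

```python
import json
import time
import uuid


class TraceLogger:
    """Append-only JSONL trace of agent events (illustrative, not Fuze's API)."""

    def __init__(self, path):
        self.path = path
        self.run_id = str(uuid.uuid4())  # correlate all events of one run

    def log(self, event_type, **fields):
        record = {
            "run_id": self.run_id,
            "ts": time.time(),        # timestamp for traceability
            "event": event_type,      # e.g. "decision", "tool_call", "output"
            **fields,
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return record


# Usage: log every decision, tool call, and output of a run.
logger = TraceLogger("trace.jsonl")
logger.log("tool_call", tool="book_flight", args={"dest": "CDG"})
logger.log("output", text="Flight booked.")
```

Append-only JSONL is a deliberate choice here: each line is independently parseable, so a crash mid-run never corrupts earlier events.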
Transparency (Article 13)
High-risk AI systems must be designed to ensure sufficient transparency that deployers can understand the system's outputs and use them appropriately. The system must include instructions for use covering its capabilities and limitations.
For agents: Your agent must communicate its confidence, limitations, and when it is uncertain.
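One way to make that communication reliable is to carry confidence and limitations in the agent's output type rather than in free text. A sketch under assumed field names (nothing here is a prescribed schema):

```python
from dataclasses import dataclass, field


@dataclass
class AgentAnswer:
    """Structured agent output that always carries confidence metadata."""

    text: str
    confidence: float                  # 0.0-1.0, model- or heuristic-derived
    limitations: list = field(default_factory=list)

    @property
    def needs_human_review(self) -> bool:
        # Route low-confidence answers to a human instead of acting on them.
        return self.confidence < 0.6


# Usage: a low-confidence answer flags itself for escalation.
answer = AgentAnswer(
    text="Refund issued.",
    confidence=0.45,
    limitations=["amount inferred from a partial invoice"],
)
```

Because the flag is computed from the structured field, downstream code can gate actions on it mechanically, which is easier to audit than parsing hedging language out of prose.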
Human Oversight (Article 14)
High-risk AI must be designed to allow effective human oversight during deployment. Humans must be able to decide not to use the system, override its outputs, or intervene in its operation. The system must be stoppable.
For agents: You need a kill switch and escalation path. An agent that cannot be stopped is non-compliant.
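A "stoppable" agent can be as simple as a shared stop flag checked before every step. A minimal sketch (the `KillSwitch` name and structure are illustrative assumptions):

```python
import threading


class KillSwitch:
    """Cooperative stop signal checked between agent steps."""

    def __init__(self):
        self._stop = threading.Event()

    def trip(self):
        # Called by a human operator or a monitoring hook.
        self._stop.set()

    def check(self):
        if self._stop.is_set():
            raise RuntimeError("agent stopped by human operator")


def run_agent(steps, switch):
    done = []
    for step in steps:
        switch.check()  # no step executes after the switch is tripped
        done.append(step())
    return done


# Usage: an untripped switch lets the run complete normally.
switch = KillSwitch()
result = run_agent([lambda: "a", lambda: "b"], switch)
```

Checking the flag between steps (rather than killing a thread mid-step) keeps the agent in a consistent state when it stops, which matters if a step has external side effects.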
Accuracy, Robustness, Cybersecurity (Article 15)
High-risk AI must achieve appropriate levels of accuracy and be resilient against errors, faults, and inconsistencies. Systems must be robust against attempts to alter their behavior through adversarial examples or data poisoning.
For agents: Loop detection, budget caps, and input validation are robustness requirements, not just nice-to-haves.
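Both guards fit in a few lines. The sketch below combines a hard step budget with repeated-action loop detection; thresholds and the `RobustnessGuard` name are illustrative assumptions:

```python
import hashlib


class RobustnessGuard:
    """Budget cap plus simple repeated-action loop detection."""

    def __init__(self, max_steps=50, max_repeats=3):
        self.max_steps = max_steps
        self.max_repeats = max_repeats
        self.steps = 0
        self.seen = {}

    def admit(self, tool, args):
        """Call before every tool call; raises instead of letting the agent run on."""
        self.steps += 1
        if self.steps > self.max_steps:
            raise RuntimeError("budget cap exceeded")
        # Fingerprint the action; identical repeated calls suggest a loop.
        key = hashlib.sha256(f"{tool}:{args}".encode()).hexdigest()
        self.seen[key] = self.seen.get(key, 0) + 1
        if self.seen[key] > self.max_repeats:
            raise RuntimeError(f"loop detected: {tool} repeated {self.seen[key]} times")


# Usage: the same call is admitted up to max_repeats times.
guard = RobustnessGuard(max_steps=10, max_repeats=2)
guard.admit("search", {"q": "flights"})
guard.admit("search", {"q": "flights"})
```

Raising an exception (rather than silently skipping the call) forces the calling code to handle the failure explicitly, which keeps the guard from being bypassed by accident.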
Log Retention (Article 19)
Logs generated by high-risk AI systems must be kept for a period appropriate to the system's intended purpose and as specified by applicable Union or national law. Where no specific period is prescribed, the minimum is six months; recommending a 180-day retention period to deployers is a practical default.
For agents: Store your Fuze trace files for at least 6 months. Set `retention_days = 180` in `fuze.toml`.
Deployer Obligations (Article 26)
Deployers — those who put a high-risk AI system into service — have their own obligations distinct from the provider's. They must ensure human oversight, monitor for risks in use, and report serious incidents to national authorities.
For agents: Even if you are using someone else's agent platform, you are responsible for how it is deployed.
Fundamental Rights Impact Assessment (Article 27)
Deployers of certain high-risk AI systems must conduct a Fundamental Rights Impact Assessment (FRIA) before deployment. This covers the population likely to be affected and the nature of potential harms to fundamental rights.
For agents: If your agent affects employment, credit, education, or law enforcement decisions, you need a FRIA.
Transparency for AI-Generated Content (Article 50)
AI systems that interact directly with natural persons must disclose that they are AI unless it is obvious from context. AI-generated content must be marked in a machine-readable format where technically feasible.
For agents: Chatbot-style agents must identify themselves as AI.
Post-Market Monitoring (Article 72)
Providers must establish a post-market monitoring system that proactively collects and reviews data on the performance of their AI systems after deployment. Findings must feed back into the risk management system.
For agents: You need ongoing monitoring of your agent's behavior in production, not just during testing.
Reporting of Serious Incidents (Article 73)
Providers must report serious incidents to the relevant national authority without undue delay. The general deadline is 15 days from becoming aware of the incident, shortened to 10 days in the event of a person's death and to 2 days for a widespread infringement or serious disruption of critical infrastructure.
For agents: If your agent causes financial harm or a data breach, you have strict reporting deadlines.
GDPR Overlap
GDPR already applies. The AI Act adds compliance obligations on top of it — it does not replace or supersede GDPR. You need both.
The core tension: The AI Act requires detailed logs for traceability (Art. 12), while GDPR requires a lawful basis for any personal data stored in those logs. Logging everything an agent does may inadvertently capture PII you are not entitled to retain.
Fuze's approach: hash tool-call arguments and avoid storing raw PII in traces. Set `log_pii = false` in `fuze.toml` to make this behavior the default.
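Putting the two settings mentioned in this guide together, a `fuze.toml` fragment might look like this (assuming these are the relevant keys; consult Fuze's own documentation for the full schema):

```toml
# fuze.toml: logging and compliance settings referenced in this guide
retention_days = 180   # keep traces at least 6 months
log_pii = false        # hash arguments; never store raw PII in traces
```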
Next step
See exactly how Fuze maps to each article — what is covered, what is partially assisted, and what gaps remain.