
The EU AI Act Is Now Law. Here's What It Actually Requires From Your Product.


The EU AI Act is no longer a draft, a proposal, or something to worry about later. It is law, and its obligations are phasing in: the prohibitions have applied since February 2025, the general-purpose AI rules since August 2025, and the bulk of the high-risk requirements land in August 2026. Every AI system deployed in the European market will have to meet specific transparency, safety, and accountability standards. Fines for non-compliance reach 35 million euros or 7% of global annual revenue, whichever is higher.

Most companies building AI products are still treating this as a legal problem for the legal team to handle. It's not. It's an engineering problem. And the earlier you build compliance into your development process, the cheaper and less painful it will be.

The risk classification you need to understand

The AI Act sorts AI systems into four risk categories. Your category determines your obligations.

Unacceptable risk. Banned outright. Social scoring systems. Real-time biometric surveillance in public spaces (with limited law enforcement exceptions). Manipulation techniques that exploit vulnerabilities. If your product does any of these, stop building it.

High risk. The category that affects the most companies. AI systems used in hiring and recruitment. Credit scoring and lending decisions. Educational assessment. Medical device diagnostics. Safety components in critical infrastructure. Insurance pricing. Legal and judicial applications.

If your AI touches any of these areas, you have the heaviest compliance obligations. More on these below.

Limited risk. AI systems with specific transparency obligations. Chatbots must tell users they're interacting with AI. Deepfake generators must label their output. Emotion recognition systems must disclose their use. The obligations are lighter, but they're real.

Minimal risk. Everything else. Spam filters. AI-powered search. Recommendation systems. Game AI. Minimal obligations beyond existing consumer protection laws.

What high-risk classification means for your engineering team

If your system falls into the high-risk category, here's what you need to build.

A risk management system

Not a document. A living engineering process. You need to identify foreseeable risks throughout the AI system's lifecycle. Document them. Implement technical measures to mitigate them. Test those measures. Update them as the system evolves.

In practice, this means maintaining a risk registry that maps each identified risk to a specific mitigation in your codebase. "Model may exhibit gender bias in hiring recommendations" maps to "bias testing suite runs on every model update, blocks deployment if disparity exceeds threshold X."
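A registry entry like that can live next to the code it gates. Here is a minimal sketch in Python; the names, the four-fifths threshold, and the toy outcome data are all illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of the risk registry: a foreseeable risk and its mitigation."""
    risk: str
    mitigation: str
    threshold: float  # minimum acceptable disparity ratio for this risk

def selection_rate_disparity(outcomes_a, outcomes_b):
    """Ratio of positive-outcome rates between two groups (1.0 = parity)."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def gate_deployment(entry, outcomes_a, outcomes_b):
    """Block deployment if measured disparity violates the registry threshold."""
    ratio = selection_rate_disparity(outcomes_a, outcomes_b)
    if ratio < entry.threshold:
        raise RuntimeError(
            f"Deployment blocked: {entry.risk} (ratio {ratio:.2f} < {entry.threshold})"
        )
    return ratio

entry = RiskEntry(
    risk="Model may exhibit gender bias in hiring recommendations",
    mitigation="Bias suite runs on every model update",
    threshold=0.8,  # the familiar four-fifths rule, used here only as an example
)
ratio = gate_deployment(entry, [1, 1, 0, 1], [1, 0, 1, 1])
```

Wired into CI, the `RuntimeError` is what turns the registry from a document into a deployment gate.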

Data governance documentation

You need to document your training data. What data was used. How it was collected. What steps were taken to identify and address bias. How data quality was validated.

This is easier if you build it into your data pipeline from the start. Tag datasets with provenance metadata. Log preprocessing decisions. Track data quality metrics over time. If you're doing this retroactively on a system that's already in production, budget significant engineering time.
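"Provenance metadata at ingestion" can be as simple as a record written when a dataset enters the pipeline. A sketch, with invented field names and example data:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DatasetRecord:
    """Provenance metadata attached to a dataset at ingestion time."""
    name: str
    source: str                 # where the data came from
    collected_how: str          # how it was collected
    content_hash: str           # fingerprint of the raw payload
    preprocessing: list = field(default_factory=list)  # ordered log of decisions
    ingested_at: str = ""

def ingest(name, source, collected_how, raw_bytes):
    return DatasetRecord(
        name=name,
        source=source,
        collected_how=collected_how,
        content_hash=hashlib.sha256(raw_bytes).hexdigest(),
        ingested_at=datetime.now(timezone.utc).isoformat(),
    )

def log_step(record, step):
    """Append a preprocessing decision so it can be reported later."""
    record.preprocessing.append(step)

rec = ingest("applicants-2025", "internal ATS export", "batch export, consented", b"raw,csv,data")
log_step(rec, "dropped rows with missing salary field")
log_step(rec, "rebalanced classes by downsampling majority")
report = json.dumps(asdict(rec))  # ready to attach to the governance documentation
```

The hash and the ordered preprocessing log are what make the record auditable later: you can prove which bytes went in and which decisions were made, in which order.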

Technical documentation

A detailed description of how your AI system works. Its intended purpose. Its limitations. The metrics used to evaluate its performance. The data it was trained and tested on. How it should be monitored in production.

This goes beyond a README. Think of it as a technical specification that a regulator could read and understand how your system makes decisions. If your system is a black box to your own team, it definitely won't pass regulatory scrutiny.
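One way to keep that specification from drifting out of date is to store it as structured data alongside the model artifact. Everything in this sketch, from the system name to the metric value, is invented for illustration:

```python
# A hypothetical machine-readable technical documentation record ("model card").
# Storing it as data means CI can check that required fields exist before release.
model_card = {
    "system": "candidate-ranker",  # hypothetical system name
    "intended_purpose": "rank job applicants for recruiter review",
    "out_of_scope": ["automated rejection without human review"],
    "limitations": ["trained on EU-market data only; untested elsewhere"],
    "evaluation": {"metric": "ndcg@10", "value": 0.87, "test_set": "holdout-2025-q3"},
    "training_data": ["applicants-2019-2024 (see data governance record)"],
    "monitoring": {"drift_check": "weekly", "alert_on": "score distribution shift"},
}

REQUIRED = {"intended_purpose", "limitations", "evaluation", "training_data", "monitoring"}
missing = REQUIRED - model_card.keys()
assert not missing, f"technical documentation incomplete: {missing}"
```

A release check like the final assertion is cheap, and it turns "we should update the docs" into a build failure.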

Human oversight mechanisms

High-risk systems must include provisions for human oversight. In practice, this means humans must be able to understand the system's outputs, override or reverse decisions, and intervene in real time when needed.

For engineering teams, this translates to: build an admin interface that shows how the AI reached each decision. Add override capabilities that let authorized humans reverse AI outputs. Implement circuit breakers that pause automated decisions when confidence drops below a threshold.
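The override path and the circuit breaker fit in a few lines. A sketch with invented names and thresholds:

```python
def decide(model_score, threshold=0.9):
    """Circuit breaker: auto-act only above the confidence threshold,
    otherwise hold the decision for human review."""
    if model_score >= threshold:
        return {"status": "auto", "score": model_score}
    return {"status": "pending_human_review", "score": model_score}

def human_override(decision, reviewer, new_outcome, reason):
    """An authorized reviewer reverses or confirms the AI output, with a trail."""
    decision.update(
        status="overridden",
        reviewer=reviewer,
        outcome=new_outcome,
        reason=reason,
    )
    return decision

d = decide(0.72)  # below threshold: paused and routed to a human
d = human_override(d, reviewer="alice", new_outcome="approve",
                   reason="edge case: career gap explained in cover letter")
```

Note that the override records who intervened and why; that trail feeds directly into the logging requirement below.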

Logging and audit trails

Every decision your high-risk AI system makes must be logged in a way that enables post-hoc analysis. Not just "the model returned X." The inputs, the model version, the confidence score, and enough context to reconstruct why the system made that specific decision.

If your current logging is "request in, response out," you need to add structured decision logs. These logs must be retained for the duration specified by the regulation (varies by use case, typically years).
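A structured decision log can be as plain as one JSON object per decision, appended to a write-once store. A sketch; the field set and values are illustrative, not the regulation's schema:

```python
import io
import json
import uuid
from datetime import datetime, timezone

def log_decision(log_file, *, model_version, inputs, output, confidence, context):
    """Append one reconstructable decision record (JSON Lines)."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # the exact artifact that produced the output
        "inputs": inputs,                # features as the model saw them
        "output": output,
        "confidence": confidence,
        "context": context,              # enough to reconstruct the decision later
    }
    log_file.write(json.dumps(record) + "\n")
    return record

buf = io.StringIO()  # stands in for an append-only log store
rec = log_decision(
    buf,
    model_version="ranker-v3.2.1",
    inputs={"years_experience": 7, "skills_match": 0.81},
    output="shortlist",
    confidence=0.88,
    context={"request_id": "req-123", "feature_pipeline": "fp-2025-10"},
)
```

The key property is that each record is self-contained: given the log line alone, an auditor can tell which model version saw which inputs and with what confidence.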

The transparency code of practice

The European Commission released its first draft Code of Practice alongside the Act's enforcement. AI-generated content must be marked in machine-readable, detectable, and interoperable formats. If your system generates text, images, audio, or video, the output needs metadata that identifies it as AI-generated.

This affects every generative AI product serving European users. The technical standard is still evolving, but the direction is clear: invisible AI content won't fly in the EU market.
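While the interoperable format is still being standardized (C2PA is the leading candidate for media provenance), the shape of the requirement is easy to sketch. This sidecar-JSON example only illustrates the kind of fields involved; it is not a compliant implementation of any published standard:

```python
import json
from datetime import datetime, timezone

def mark_ai_generated(content, *, generator, model):
    """Attach a machine-readable provenance record to a generated artifact."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,  # the detectable, machine-readable marker
            "generator": generator,
            "model": model,
            "created_at": datetime.now(timezone.utc).isoformat(),
        },
    }

artifact = mark_ai_generated(
    "Draft product description",
    generator="acme-writer",  # hypothetical product name
    model="gen-v2",
)
payload = json.dumps(artifact)  # any consumer that parses the sidecar can detect it
```

Whatever format the final standard settles on, the engineering takeaway is the same: attach provenance at generation time, because it cannot be reliably reconstructed afterward.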

Build it in, don't bolt it on

The companies that will spend the most on compliance are the ones trying to retrofit it. Auditing an existing system for bias is expensive. Reconstructing training data provenance after the fact is sometimes impossible. Adding logging to a system that wasn't designed for it touches every layer of the stack.

The companies that will spend the least are the ones building these requirements into their development process today. Bias testing in the CI pipeline. Data provenance in the ingestion layer. Decision logging in the inference service. Human oversight in the product design.

If you're starting a new AI project that might serve European users, build these five things from day one: risk management process, data documentation, technical documentation, human oversight, and audit logging.

If you're maintaining an existing system, prioritize logging and human oversight first. Those are the hardest to add retroactively and the most likely to be audited first.

The EU AI Act is just the beginning. Colorado's AI Act and California's AI Transparency Act follow similar patterns. Compliance built into your architecture today saves you from rebuilding tomorrow.
