EU AI Act enforcement begins: what data teams must do now

Simor Consulting | 25 Apr, 2026 | 04 Mins read

The first enforcement window of the EU AI Act opened in February 2026, and the grace periods that protected early movers are expiring on a rolling schedule through 2027. This is no longer a policy discussion. It is a compliance obligation with real penalties: up to 35 million euros or 7% of global annual turnover for the most serious violations. Data teams that assumed regulation would stay in the consultation phase are now discovering that their production systems fall squarely inside the Act’s scope.

The confusion is not about whether the Act applies. It is about what “compliance” actually requires at the infrastructure level, and which obligations fall on the data team versus the legal team.

What the Act Actually Requires from Data Teams

The EU AI Act classifies AI systems into four risk tiers: unacceptable, high, limited, and minimal. Many of the systems data teams build and maintain — credit scoring models, fraud detection, hiring screeners, and similar automated decision systems — can fall into the high-risk category. High-risk systems carry the heaviest obligations.

For data teams, the obligations break into four concrete areas:

Data governance documentation. The Act requires that training datasets be described, audited, and documented. This means a data team must maintain records of dataset provenance, composition, known biases, and preprocessing steps. If your training data pipeline does not produce a data sheet or model card automatically, you are not compliant.
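One way to make that documentation automatic is to emit a data sheet as a pipeline step rather than a manual deliverable. The sketch below is illustrative, not a mandated schema: the function name, field names, and example dataset are all assumptions, and a real pipeline would pull provenance and preprocessing history from its own metadata store.

```python
import json
import hashlib
from datetime import datetime, timezone

def build_data_sheet(dataset_name, source_uri, records,
                     preprocessing_steps, known_biases):
    """Assemble a minimal data-sheet record for a training dataset.

    `records` is a list of dicts (one per row). The content hash ties
    the sheet to the exact dataset snapshot it describes.
    """
    fields = sorted({key for row in records for key in row})
    fingerprint = hashlib.sha256(
        json.dumps(records, sort_keys=True).encode()
    ).hexdigest()
    return {
        "dataset": dataset_name,
        "provenance": source_uri,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "row_count": len(records),
        "fields": fields,
        "content_sha256": fingerprint,
        "preprocessing_steps": preprocessing_steps,
        "known_biases": known_biases,
    }

# Hypothetical dataset and paths, for illustration only.
sheet = build_data_sheet(
    dataset_name="loan_applications_v3",
    source_uri="s3://raw/loans/2026-01.parquet",
    records=[{"income": 52000, "approved": 1},
             {"income": 31000, "approved": 0}],
    preprocessing_steps=["dropped rows with null income",
                         "normalized income"],
    known_biases=["under-represents applicants under 25"],
)
print(json.dumps(sheet, indent=2))
```

Because the sheet is generated from the pipeline's own inputs, it stays current with every retraining run instead of drifting out of date in a wiki.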

Logging and traceability. High-risk systems must log their inputs, outputs, and decision pathways. This is not application-level logging. The Act expects that an auditor can trace any individual prediction back to the data and model version that produced it. If your feature store or model registry does not preserve this lineage, the compliance gap is architectural, not administrative.
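A minimal sketch of such an audit record follows. The field names and example model are assumptions — the point is the lineage: every prediction carries the model version, training-dataset version, and a hash of its exact input, so an auditor can trace it back without reconstructing state.

```python
import json
import uuid
import hashlib
from datetime import datetime, timezone

def audit_record(model_name, model_version, dataset_version,
                 features, prediction):
    """Build one audit-log entry tying a single prediction back to
    the model and data versions that produced it."""
    payload = json.dumps(features, sort_keys=True)
    return {
        "prediction_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,
        "training_dataset_version": dataset_version,
        "input_sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "input": features,
        "output": prediction,
    }

# Hypothetical model and features, for illustration only.
entry = audit_record(
    model_name="credit_scorer",
    model_version="2026.02.1",
    dataset_version="loan_applications_v3",
    features={"income": 52000, "tenure_months": 18},
    prediction={"score": 0.71, "decision": "approve"},
)
# Append one JSON line per prediction to an append-only audit store.
print(json.dumps(entry))
```

Writing these as structured, append-only records — rather than free-text application logs — is what makes the difference between a debugging trail and an audit trail.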

Human oversight mechanisms. The Act requires that high-risk systems be designed so a human can understand, monitor, and intervene in their operation. For data teams, this means building override workflows, confidence thresholds that trigger human review, and dashboards that surface model behavior to non-technical stakeholders.
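The confidence-threshold pattern can be sketched in a few lines. The thresholds below are placeholders — in practice they should come out of your risk assessment, not a default — and the routing labels are illustrative:

```python
def route_prediction(score, lower=0.35, upper=0.65):
    """Route a model score: confident predictions pass through,
    borderline ones are queued for human review.

    `lower`/`upper` bound the uncertainty band and are illustrative;
    set them from your own risk assessment.
    """
    if lower <= score <= upper:
        return "human_review"
    return "auto_approve" if score > upper else "auto_reject"

print(route_prediction(0.90))  # confident positive -> auto_approve
print(route_prediction(0.50))  # borderline -> human_review
print(route_prediction(0.10))  # confident negative -> auto_reject
```

The key design choice is that the human-review path is a first-class outcome of the pipeline, with its own queue and SLA, not an exception handler.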

Risk management and testing. Ongoing risk assessment is mandatory. Models must be tested for bias, accuracy, and robustness before deployment and at regular intervals afterward. One-time evaluation at training time is not sufficient.
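As a sketch of what a recurring check looks like, the snippet below computes a single fairness metric — the largest gap in positive-outcome rate between groups — and compares it to a tolerance. Both the metric and the tolerance are illustrative; a real evaluation suite would cover multiple metrics and run on a schedule against fresh production data.

```python
def demographic_parity_gap(outcomes_by_group):
    """Largest gap in positive-outcome rate between any two groups.

    `outcomes_by_group` maps group name -> list of 0/1 outcomes.
    This is one metric among many a real suite would track.
    """
    rates = {g: sum(v) / len(v) for g, v in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical outcomes, for illustration only.
gap = demographic_parity_gap({
    "group_a": [1, 1, 0, 1],  # 75% positive rate
    "group_b": [1, 0, 0, 1],  # 50% positive rate
})
TOLERANCE = 0.2  # illustrative threshold from the risk assessment
print(f"parity gap = {gap:.2f}: "
      + ("FAIL - escalate" if gap > TOLERANCE else "PASS"))
```

Running this on a schedule and recording each result produces exactly the kind of standing evidence a regulator will ask for.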

The Compliance Gap Most Teams Have

The typical data team’s infrastructure was not built for regulatory compliance. It was built for performance. The gap shows up in predictable places:

  • Feature engineering code exists, but the data lineage documentation does not. A regulator asking "where did this feature come from, and what does it represent?" gets a shrug.
  • Model evaluation happens at training time, but there is no continuous monitoring that would catch drift or degradation in production.
  • Logs exist, but they are application logs designed for debugging, not audit logs designed for regulatory review.
  • Bias testing is done informally, if at all, with no standardized methodology and no record of results.

The fix is not to bolt compliance onto existing systems. It is to integrate compliance artifacts into the data pipeline so they are produced automatically and remain current.

What to Do This Quarter

Start with a scope assessment. Not every model in your organization is high-risk under the Act. Map your model inventory to the risk categories in Annex III of the regulation. If you do not have a model inventory, that is the first task.
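A model inventory can start as something as simple as the sketch below. The lookup table is a deliberate simplification — the actual risk determination must follow Annex III of the regulation, not a keyword match — and all model names and use-case tags here are hypothetical:

```python
# Simplified stand-in for Annex III categories; the real determination
# must follow the regulation's text, not this lookup.
HIGH_RISK_USE_CASES = {"credit_scoring", "hiring", "essential_services"}

model_inventory = [
    {"name": "credit_scorer",   "use_case": "credit_scoring"},
    {"name": "churn_predictor", "use_case": "marketing"},
    {"name": "resume_screener", "use_case": "hiring"},
]

for model in model_inventory:
    model["risk_tier"] = (
        "high" if model["use_case"] in HIGH_RISK_USE_CASES
        else "needs_review"
    )

high_risk = [m["name"] for m in model_inventory
             if m["risk_tier"] == "high"]
print(high_risk)  # models that need the full compliance package
```

Even this crude pass is useful: it separates the models that need the full compliance package now from those that need a closer legal read.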

For each high-risk model, produce three artifacts: a data sheet documenting the training data, a model card documenting the model’s intended use and limitations, and a risk assessment identifying potential harms and mitigations. These documents are the minimum viable compliance package.

Then address the infrastructure gaps. Implement a model registry with versioning and lineage tracking. Add structured audit logging to your inference pipeline. Build a bias evaluation suite that runs on a schedule, not just at deployment time.

Assign ownership. Compliance is not a side project for the data engineering team. Someone must own the ongoing documentation, testing, and reporting obligations. If your organization has a GRC (governance, risk, and compliance) function, the data team needs a direct line to it.

What to Watch

The European AI Office is expected to publish detailed codes of practice for high-risk systems in Q3 2026. These codes will provide sector-specific guidance that will clarify ambiguous requirements. Watch for the codes relevant to your industry.

National enforcement bodies are being established in each EU member state. The pace and rigor of enforcement will vary by country, but the trend is toward active enforcement, not passive rule-setting.

The interaction between the AI Act and GDPR is also worth tracking. The same system may trigger obligations under both frameworks, and the compliance mechanisms overlap but do not align perfectly.

Bounded Recommendation

If you operate AI systems that serve EU users or EU-based customers, treat the AI Act as a hard deadline, not a policy horizon. The teams that built compliance infrastructure early are not scrambling now. The teams that deferred are discovering that retrofitting compliance onto production systems is significantly more expensive than building it in from the start. Begin with the model inventory and data documentation. Those two artifacts resolve 60% of the compliance questions a regulator is likely to ask.
