When the EU AI Act Stumbles: What the 2026 Delay in High-Risk AI Guidance Means for Privacy Professionals
The EU Artificial Intelligence Act is often described as the world’s most ambitious attempt to regulate artificial intelligence. With its risk-based structure and strong focus on fundamental rights, the AI Act is especially relevant for privacy professionals. That is why the European Commission’s recent failure to meet its deadline for publishing guidance on high-risk AI systems has raised eyebrows across the compliance community.
Under the AI Act, certain AI systems are classified as “high-risk” because of their potential impact on individuals’ rights and freedoms. These include systems used in areas such as biometric identification, employment, education, creditworthiness, healthcare, and law enforcement. Providers and deployers of such systems face strict obligations, ranging from risk management and data governance requirements to human oversight and post-market monitoring.
To help organisations understand whether their systems fall into this category, the AI Act requires the European Commission to publish detailed guidance explaining how high-risk classification should work in practice. That guidance was legally due in early February 2026. However, the deadline came and went without publication, leaving organisations and regulators alike in a state of uncertainty.
According to the Commission, the delay is largely due to ongoing stakeholder consultations and unresolved technical questions. The standard-setting bodies responsible for developing harmonised technical standards are themselves running behind schedule, and several Member States are still setting up the national authorities that will enforce the AI Act. Such challenges are to be expected for a regulation of this scale, but the absence of guidance creates real-world problems for compliance teams.
For privacy professionals, the implications are significant. Many high-risk AI systems rely heavily on personal data and intersect directly with GDPR principles such as lawfulness, fairness, transparency, data minimisation, and accountability. Without clear guidance, organisations must interpret the legal text on their own when deciding whether an AI system is high-risk. A conservative interpretation may lead to over-compliance and higher costs, while an overly narrow reading could expose organisations to enforcement action once authorities begin auditing AI systems.
Importantly, the delay does not pause the AI Act itself. The regulation’s compliance timelines remain in force, meaning organisations cannot simply wait for the Commission to publish guidance before taking action. This mirrors a familiar lesson from GDPR: guidance helps, but legal responsibility exists regardless of whether regulators have provided detailed explanations.
Another concern is the risk of inconsistent enforcement across the EU. In the absence of EU-level guidance, national supervisory authorities may develop their own interpretations of what qualifies as high-risk AI. For multinational organisations, this could result in fragmented compliance expectations and increased legal complexity.
For privacy students and CIPP candidates, this situation offers a valuable real-world case study. It highlights how emerging technology regulation often evolves unevenly and why professionals must be comfortable working with incomplete information. Understanding the structure of the AI Act, its connection to fundamental rights, and its overlap with data protection law will be essential skills in the years ahead.
The delayed guidance is a reminder that compliance is not only about following checklists, but about applying legal principles in uncertain and evolving environments. For future privacy professionals, that may be the most important lesson of all.