EU AI Act Takes Shape: How New Incident Reporting Guidance Empowers Safer AI

Introduction
Imagine a future where AI systems power health diagnostics, critical infrastructure, and legal decisions. It’s an exciting prospect—until something goes wrong. That’s where the European Commission’s new draft guidance on serious incident reporting under the EU AI Act comes in.
Published on September 26, 2025, this guidance provides a crucial roadmap for organizations deploying high-risk AI systems—those that could harm people, property, or fundamental rights—to report serious incidents. Whether you’re an AI developer, a legal expert, or simply curious about how regulation is making AI safer, here’s what you need to know.(natlawreview.com)
Why This Matters: A Framework for Trustworthy AI
Building an Early Warning System
AI has immense potential, but the consequences can be severe when things go wrong. The draft guidance outlines a structured system designed to:
- Identify serious incidents early
- Clarify accountability and timelines
- Enable swift corrective actions
- Increase transparency to build public trust(natlawreview.com)
Critically, providers must now anticipate how their systems might fail and have a clear plan to report and respond when they do.
What Counts as a ‘Serious Incident’?
Under Article 3(49) of the EU AI Act, a serious incident is defined as any event involving:
- The death of a person or serious harm to their health
- Serious and irreversible disruption of the management or operation of critical infrastructure
- A breach of fundamental rights protected under EU law
- Significant damage to property or the environment
Even indirect causes are included. For example, if incorrect medical advice from an AI system leads a doctor to make a harmful decision, that still counts. This means AI providers must consider the full downstream impact of their systems during risk planning.(natlawreview.com)
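To make the definition concrete, here is a minimal Python triage sketch. Everything in it, from the function name to the boolean flags, is a hypothetical illustration of the Article 3(49) categories, not an official test:

```python
def is_serious_incident(death_or_serious_health_harm: bool,
                        critical_infrastructure_disruption: bool,
                        fundamental_rights_breach: bool,
                        serious_property_or_environment_harm: bool,
                        ai_system_contributed: bool) -> bool:
    """Hypothetical triage check mirroring the Article 3(49) categories.

    `ai_system_contributed` is deliberately broad: indirect causation,
    such as an AI output that misleads a human decision-maker, counts.
    """
    harm_occurred = (death_or_serious_health_harm
                     or critical_infrastructure_disruption
                     or fundamental_rights_breach
                     or serious_property_or_environment_harm)
    return harm_occurred and ai_system_contributed


# The medical-advice example from above: indirect causation still qualifies.
print(is_serious_incident(
    death_or_serious_health_harm=True,
    critical_infrastructure_disruption=False,
    fundamental_rights_breach=False,
    serious_property_or_environment_harm=False,
    ai_system_contributed=True,
))  # -> True
```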
What the Draft Guidance Covers
Timeline for Reporting
The draft guidance establishes a clear timeline for reporting serious incidents, as outlined in Article 73 of the AI Act:
- Within 15 days of becoming aware of the incident.
- Within 10 days if the incident involves a person’s death.
- Within 2 days for widespread incidents or disruptions to critical infrastructure.
- An initial, incomplete report may be submitted first, followed by a complete report once the investigation yields more detail.(artificialintelligenceact.eu)
This structured approach gives organizations clear expectations and authorities the transparency needed to respond effectively.
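To illustrate how the tiers combine, here is a small Python sketch that computes the latest filing date from the day a provider becomes aware of an incident. The category names and the mapping are assumptions made for this example, not an official tool:

```python
from datetime import date, timedelta
from enum import Enum, auto


class IncidentCategory(Enum):
    """Hypothetical categories mirroring the Article 73 reporting tiers."""
    GENERAL_SERIOUS = auto()          # default tier: 15 days
    DEATH = auto()                    # incident involving a person's death: 10 days
    WIDESPREAD_OR_CRITICAL = auto()   # widespread incident or critical-infrastructure disruption: 2 days


# Days allowed from the day the provider becomes aware of the incident.
REPORTING_WINDOWS = {
    IncidentCategory.GENERAL_SERIOUS: 15,
    IncidentCategory.DEATH: 10,
    IncidentCategory.WIDESPREAD_OR_CRITICAL: 2,
}


def reporting_deadline(awareness_date: date, category: IncidentCategory) -> date:
    """Return the latest date by which an initial report must be filed."""
    return awareness_date + timedelta(days=REPORTING_WINDOWS[category])


# Example: a provider learns of a critical-infrastructure disruption on 2026-09-01.
print(reporting_deadline(date(2026, 9, 1), IncidentCategory.WIDESPREAD_OR_CRITICAL))
# -> 2026-09-03
```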
Harmonizing with Other Compliance Regimes
The guidance also harmonizes reporting with other EU regulations. If a high-risk AI system is already covered by rules like NIS2 (cybersecurity), DORA (financial resilience), or CER (critical entities), organizations may only need to report fundamental rights violations under the AI Act. This streamlining helps avoid redundant reporting and simplifies compliance.(natlawreview.com)
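Expressed as pseudo-logic, the streamlining rule might look like the sketch below. The function and flag names are assumptions for illustration; the actual guidance should be checked for each system's specific regime:

```python
def ai_act_reporting_scope(covered_by_other_regime: bool,
                           involves_fundamental_rights: bool) -> str:
    """Hypothetical sketch of the draft guidance's streamlining rule.

    If the high-risk AI system already falls under an equivalent reporting
    regime (e.g., NIS2, DORA, or CER), only fundamental rights violations
    still need to be reported under the AI Act.
    """
    if covered_by_other_regime and not involves_fundamental_rights:
        return "report under the other regime only"
    return "report under the AI Act"


# A DORA-covered trading system suffers an outage with no rights impact:
print(ai_act_reporting_scope(covered_by_other_regime=True,
                             involves_fundamental_rights=False))
# -> report under the other regime only
```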
Developing Clear Internal Processes
To prepare, the guidance encourages organizations to:
- Develop formal incident response plans.
- Continuously monitor AI systems for potential issues.
- Establish cross-functional teams for investigation and reporting.
- Implement procedures to preserve evidence during investigations.
- Update risk assessments to account for serious incident scenarios.(natlawreview.com)
These steps help turn regulatory requirements into a proactive culture of safety.
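As one sketch of what "preserving evidence" could look like in practice, here is a hypothetical incident record that hashes its evidence payload so later tampering is detectable. All names and fields are illustrative assumptions, not prescribed by the guidance:

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import date


@dataclass(frozen=True)
class IncidentRecord:
    """Hypothetical internal log entry for a suspected serious incident."""
    system_id: str
    awareness_date: date
    description: str
    evidence: dict = field(default_factory=dict)

    def evidence_digest(self) -> str:
        """SHA-256 over the evidence payload, so later tampering is detectable."""
        payload = json.dumps(self.evidence, sort_keys=True, default=str)
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()


record = IncidentRecord(
    system_id="triage-model-v3",
    awareness_date=date(2026, 9, 1),
    description="Model output contributed to a harmful clinical decision.",
    evidence={"model_version": "3.2.1", "log_window": "2026-08-30/2026-09-01"},
)
print(record.evidence_digest()[:16], "...")
```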
What’s Next?
- The draft guidance and reporting templates are open for public consultation until November 7, 2025. Stakeholder input, especially regarding coordination with other reporting laws, is highly encouraged.(natlawreview.com)
- While these rules become mandatory on August 2, 2026, organizations can and should start preparing now. Early preparation will ease the transition to compliance and build operational resilience.(natlawreview.com)
Connecting to the Bigger Picture
This guidance is part of a broader EU strategy that includes a voluntary Code of Practice to help AI firms with compliance. That code focuses on transparency, copyright, safety, and security for general-purpose AI systems.(reuters.com)
For models with systemic risks, such as large foundation models, the Commission has issued targeted guidelines covering risk assessments, adversarial testing, cybersecurity, and incident reporting. Major companies like Google, OpenAI, and Meta must comply by August 2026 or face significant penalties.(reuters.com)
Conclusion
The draft guidance on serious incident reporting under the EU AI Act is more than a compliance checklist—it’s a foundational element for building trust in AI. By clearly defining what constitutes a serious incident, setting firm reporting deadlines, and clarifying how these rules interact with other regulations, the framework empowers organizations to operate responsibly and transparently.
If you’re involved in AI governance, engaging with this consultation and building internal readiness now will give your organization a critical head start in achieving compliance and leading the way toward safer, more trustworthy AI.
FAQs
1. When does this guidance become mandatory?
The incident reporting requirement under Article 73 becomes mandatory on August 2, 2026. The draft guidance was published on September 26, 2025, to help organizations prepare.(natlawreview.com)
2. What qualifies as a ‘serious incident’?
It includes any event resulting in death or serious health harm, critical infrastructure failure, fundamental rights violations, or significant environmental or property damage, including those caused indirectly by the AI system.(natlawreview.com)
3. What are the reporting timelines?
– General serious incidents: within 15 days.
– Incidents involving death: within 10 days.
– Widespread or critical incidents: within 2 days.
An initial report can be submitted, followed by a more complete one later.(artificialintelligenceact.eu)
4. How does this interact with other EU rules?
To reduce duplication, if a high-risk AI system is already subject to similar reporting obligations (e.g., under NIS2, DORA, or CER), organizations may only need to report fundamental rights violations under the AI Act.(natlawreview.com)
5. Can I provide feedback on the guidance?
Yes. The public consultation is open until November 7, 2025. The Commission is seeking feedback, particularly on how to coordinate with other reporting frameworks.(digital-strategy.ec.europa.eu)
Thank you for reading this blog, and see you soon! 🙏 👋