The Change to the EU AI Act That No One Is Talking About
Towards the end of last year, the EU announced changes to the AI Act, making headlines by delaying the enforcement of “high-risk” AI system rules – moving the date from August 2026 out to December 2027. This postponement was widely seen as a win for big tech firms that had lobbied against aggressive timelines. However, another change in the AI Act – arguably far more significant – received little attention, and it risks catching organisations off guard.
That overlooked change is a shift from oversight by national authorities to a self-assessment model for high-risk AI compliance. In other words, legal accountability for complying with the AI Act will now fall directly on companies themselves. If an AI system is found in violation, the company has no one else to blame – there won’t be an external regulator who “missed” classifying it as high-risk.
For context, a high-risk AI system is one used in situations where automated or AI-assisted decisions can have a meaningful impact on an individual’s rights, safety, or access to essential opportunities and services.
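To illustrate (and only as a sketch, not the Act’s legal test), a compliance team might start with a simple triage helper like the one below to flag use cases that touch the kinds of impact areas just described; the category names and function here are hypothetical.

```python
# Purely illustrative triage helper -- not the Act's legal definition.
# The impact areas below paraphrase the description above; the names are hypothetical.

HIGH_IMPACT_AREAS = {
    "individual_rights",    # e.g. hiring, credit scoring, legal decisions
    "safety",               # e.g. medical devices, critical infrastructure
    "essential_services",   # e.g. access to education, benefits, insurance
}

def needs_high_risk_review(impact_areas: set[str]) -> bool:
    """Flag a use case for a full assessment if an AI-assisted decision
    could meaningfully affect any of the areas above."""
    return bool(impact_areas & HIGH_IMPACT_AREAS)

# A recruitment screening tool touches individual rights, so it gets flagged
# for a proper assessment against the Act's actual criteria.
print(needs_high_risk_review({"individual_rights"}))  # True
```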
Under the original framework, high-risk AI systems were expected to be assessed through a combination of predefined regulatory pathways and, in certain cases, oversight by designated national authorities or notified bodies before they could be placed on the market. This meant that an external classification or review process played a formal role in determining whether an AI system met the Act’s requirements.
Under the updated approach, organisations developing or using high-risk AI systems must self-certify that they comply with the Act’s requirements. Rather than an outside agency deciding whether an AI system is compliant, the onus is now on the company to ensure its own AI offerings meet the standards. Put simply, there is no external authority to point fingers at if things go wrong. This shift places much greater responsibility on businesses to understand and manage the risks of their AI systems.
It is not surprising, then, that many organisations are already seeking independent third-party validation of their AI systems’ compliance. According to a 2025 survey by the International Association of Privacy Professionals (IAPP), 77 percent of organisations were working on collaborative AI governance programs, rising to nearly 90 percent among those already using AI. This suggests that companies recognise the need for robust governance and, in practice, often seek external guidance even where the law allows for self-assessment.
Article 17 of the Act specifically mandates a quality management system (QMS) for providers of high-risk AI systems. The QMS has 12 core aspects, including a regulatory compliance strategy, testing and validation, technical specifications, post-market monitoring, incident reporting, and record keeping. The EU has since issued a European standard specifically addressing those requirements: prEN 18286. Thanks to the presumption of conformity, organisations that implement prEN 18286 can be presumed to meet their Article 17 obligations.
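As a rough illustration of what tracking those QMS aspects might look like in practice, the sketch below models a simple internal checklist; the structure and field names are our own and are not drawn from prEN 18286 or the Act itself.

```python
from dataclasses import dataclass, field

# Illustrative internal tracker for the QMS aspects named above.
# Structure and names are hypothetical, not taken from prEN 18286.

@dataclass
class QmsAspect:
    name: str
    owner: str = "unassigned"
    evidence: list[str] = field(default_factory=list)
    complete: bool = False

QMS_ASPECTS = [
    QmsAspect("Regulatory compliance strategy"),
    QmsAspect("Testing and validation"),
    QmsAspect("Technical specifications"),
    QmsAspect("Post-market monitoring"),
    QmsAspect("Incident reporting"),
    QmsAspect("Record keeping"),
    # ...the remaining aspects of the 12 would be listed here
]

def readiness(aspects: list[QmsAspect]) -> float:
    """Share of tracked QMS aspects marked complete."""
    return sum(a.complete for a in aspects) / len(aspects)

print(f"{readiness(QMS_ASPECTS):.0%} ready")  # 0% until evidence is gathered
```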
Simply put, compliance with prEN 18286 is effectively becoming a legal requirement for high-risk AI systems marketed in Europe, and this is where firms need to focus their attention.
ISO 42001, published in December 2023, is the existing international standard for AI management systems. While it is voluntary, organisations that already hold ISO 42001 certification have a significant head start, as it provides the operational foundation for prEN 18286.
The delay in high-risk AI enforcement should not be read as a reason to “kick the can down the road.” If anything, it signals a period in which responsibility has been placed more squarely on organisations themselves, offering a narrow but valuable window to get their internal governance, documentation, and risk management in order. Rather than a pause, it is better understood as a strategic grace period, extra time to prepare and adapt.
The experience of GDPR’s rollout in 2018 is instructive: late adopters of GDPR ended up in last-minute, deadline-driven panic. Surveys at the time found that awareness and understanding of GDPR’s requirements were alarmingly low in the months leading up to enforcement. Organisations must learn from that history by using all the available time now to get ready for the AI Act’s requirements.
Immediate steps that organisations should be taking include mapping and classifying their AI systems against the Act’s high-risk categories, standing up a quality management system that covers Article 17’s core aspects, aligning that system with prEN 18286 (building on ISO 42001 certification where it already exists), and putting documentation, record keeping, and post-market monitoring processes in place well before the December 2027 deadline.
The world will be watching the EU AI Act’s implementation closely. This is the first comprehensive attempt to set global standards for AI regulation, aiming to ensure AI systems are safe and trustworthy. Other countries are already drafting their own AI laws, inspired in part by the EU’s lead. A multinational business that can demonstrate compliance with the EU AI Act will put itself in an excellent position to meet whatever other regulations emerge in the coming years.
At the same time, companies should remember how GDPR enforcement played out: EU regulators have not been afraid to go after violators. Organisations of all sizes need to be prepared to avoid the financial and reputational damage that comes with being sanctioned under these new AI rules.
In addition to these compliance tasks, businesses should focus on building internal AI literacy and skills. Every level of the organisation – from leadership to front-line employees – needs a solid understanding of AI’s capabilities, limitations, and the ethical and legal responsibilities that come with its use. Strengthening your team’s AI literacy will make it easier both to comply with the Act and to use AI effectively; indeed, experts suggest that staff AI training should be a formal part of AI governance programs. This kind of internal education ensures that when new AI tools or policies are rolled out, employees can adapt quickly and make responsible decisions aligned with the company’s compliance strategy.
Finally, given the complexity of these regulations and the rapid pace of AI innovation, many organisations are turning to experienced AI partners to help navigate the journey. Finding a partner with end-to-end expertise in AI – one that understands strategy, governance, technology, and training – can be invaluable. Digital Bricks is an end-to-end Microsoft AI partner with experience across the full AI adoption lifecycle. We not only assist companies in preparing for the legislation by developing AI strategy, crafting policies, and setting up proper governance frameworks, but also help drive action on key readiness areas. This includes rolling out AI literacy programs and upskilling personnel, developing a library of AI use cases mapped to business value, and even working with organisations to build out their “agentic” AI infrastructure for the future. By engaging a capable partner, business leaders can ensure they meet the EU AI Act’s requirements and leverage AI for innovation – turning compliance from a headache into a catalyst for growth.