
The Change to the EU AI Act That No One Is Talking About

February 1, 2026
Strategy & Compliance

Towards the end of last year, the EU announced changes to the AI Act, making headlines by delaying the enforcement of “high-risk” AI system rules – moving the date from August 2026 out to December 2027. This postponement was widely seen as a win for big tech firms that had lobbied against aggressive timelines. However, another change in the AI Act – arguably far more significant – received little attention, and it risks catching organisations off guard.

That overlooked change is a shift from oversight by national authorities to a self-assessment model for high-risk AI compliance. In other words, legal accountability for complying with the AI Act will now fall directly on companies themselves. If an AI system is found in violation, the company has no one else to blame – there won’t be an external regulator who “missed” classifying it as high-risk.

For context, a high-risk AI system is one used in situations where automated or AI-assisted decisions can have a meaningful impact on an individual’s rights, safety, or access to essential opportunities and services.

The EU AI Act takes a risk-based approach to classifying AI systems. General-purpose AI models are treated as adaptable foundations that can serve many different applications. A large language model, for example, can be configured for customer support, document analysis, or recruitment screening. These models are regulated according to the level of risk associated with the specific uses they support.

From national authority classification to self-assessment: What does this mean?

Under the original framework, high-risk AI systems were expected to be assessed through a combination of predefined regulatory pathways and, in certain cases, oversight by designated national authorities or notified bodies before they could be placed on the market. This meant that an external classification or review process played a formal role in determining whether an AI system met the Act’s requirements.

Under the updated approach, organisations developing or using high-risk AI systems must self-certify that they comply with the Act’s requirements. Rather than an outside agency deciding whether an AI system is compliant, the onus is now on the company to ensure its own AI offerings meet the standards. Put simply, there is no external authority to point fingers at if things go wrong. This shift places much greater responsibility on businesses to understand and manage the risks of their AI systems.

It is not surprising, then, that many organisations are already seeking independent third-party validation of their AI systems’ compliance. According to a 2025 survey by the International Association of Privacy Professionals (IAPP), 77 percent of organisations were working on collaborative AI governance programs, rising to nearly 90 percent among those already using AI. This suggests that companies recognise the need for robust governance and, in practice, often seek external guidance even where the law allows for self-assessment.

Article 17, prEN 18286, ISO 42001 – how do they tie together?

Article 17 of the Act specifically mandates quality management systems (QMS) for providers of high-risk AI. The QMS covers 12 core aspects, including regulatory compliance strategy, testing and validation, technical specifications, post-market monitoring, incident reporting and record keeping. To address Article 17’s requirements, a dedicated European standard, prEN 18286, has been drafted. Thanks to the presumption of conformity, organisations implementing prEN 18286 can be presumed to meet their Article 17 obligations.

Simply put, prEN 18286 compliance effectively becomes a legal requirement for high-risk AI systems marketed in Europe, and this is where firms need to focus.

ISO 42001, published in December 2023, is the existing international standard for AI management systems. While it’s voluntary, organisations with existing ISO 42001 certification have a significant head start, as it provides the operational foundation for prEN 18286.

What should you be doing now?

The delay in high-risk AI enforcement should not be read as a reason to “kick the can down the road.” If anything, it signals a period in which responsibility has been placed more squarely on organisations themselves, offering a narrow but valuable window to get their internal governance, documentation, and risk management in order. Rather than a pause, it is better understood as a strategic grace period, extra time to prepare and adapt.

The experience of GDPR’s rollout in 2018 is instructive: late adopters of GDPR ended up in last-minute, deadline-driven panic. Surveys at the time found that awareness and understanding of GDPR’s requirements were alarmingly low in the months leading up to enforcement. Organisations must learn from that history by using all the available time now to get ready for the AI Act’s requirements.

Immediate steps that organisations should be taking include:

  • Identify and understand their AI risks. The scope of the Act is very broad – any AI model used in the EU is covered, regardless of where it was built. So if a company provides an AI system to EU customers, or even uses an AI tool internally with team members or stakeholders in the EU, it will need to comply. No use of AI is too trivial to be exempt if it touches EU persons or markets.
  • Study prEN 18286 requirements. Teams need to understand the draft prEN 18286 standard and take steps to ensure that their AI development and deployment processes will meet those QMS requirements. This might involve new documentation practices, monitoring procedures, or other quality controls in line with the standard.
  • Determine the required conformity assessment route. The Act sets out different conformity assessment procedures depending on the situation. Organisations should establish whether their AI systems can be self-assessed under internal control or whether they fall into a category that requires assessment by a notified third party. This affects how they should prepare documentation and whether an external auditor will eventually be involved.

The world will be watching the EU AI Act’s implementation closely. This is the first comprehensive attempt to set global standards for AI regulation, aiming to ensure AI systems are safe and trustworthy. Other countries are already drafting their own AI laws, inspired in part by the EU’s lead. A multinational business that can demonstrate compliance with the EU AI Act will put itself in an excellent position to meet whatever other regulations emerge in the coming years.

At the same time, companies should remember how GDPR enforcement played out: EU regulators have not been afraid to go after violators. Organisations of all sizes need to be prepared to avoid the financial and reputational damage that comes with being sanctioned under these new AI rules.

In addition to these compliance tasks, businesses should focus on building internal AI literacy and skills. Every level of the organisation – from leadership to front-line employees – needs a solid understanding of AI’s capabilities, limitations, and the ethical and legal responsibilities that come with its use. Strengthening your team’s AI literacy will make it easier to comply with the Act and to use AI effectively; indeed, experts suggest that staff AI training should be a formal part of AI governance programs. This kind of internal education ensures that when new AI tools or policies are rolled out, employees can adapt quickly and make responsible decisions aligned with the company’s compliance strategy.

AI skills are for everyone in the organisation, not just technical specialists. That means ensuring every role, from leadership to front-line employees, understands how to use, question, and govern AI responsibly in their daily work, in line with the AI literacy obligations under Article 4 of the EU AI Act.

Finally, given the complexity of these regulations and the rapid pace of AI innovation, many organisations are turning to experienced AI partners to help navigate the journey. Finding a partner with end-to-end expertise in AI – one that understands strategy, governance, technology, and training – can be invaluable. Digital Bricks is an end-to-end Microsoft AI partner with experience across the full AI adoption lifecycle. We not only assist companies in preparing for the legislation by developing AI strategy, crafting policies, and setting up proper governance frameworks, but also help drive action on key readiness areas. This includes rolling out AI literacy programs and upskilling personnel, developing a library of AI use cases mapped to business value, and even working with organisations to build out their “agentic” AI infrastructure for the future. By engaging a capable partner, business leaders can ensure they meet the EU AI Act’s requirements and leverage AI for innovation – turning compliance from a headache into a catalyst for growth.