
Adapting to Evolving AI Regulations

Strategic Insights for Legal and Compliance Leaders

By Weronika Dominas

In 2023, the rapid emergence and adoption of AI-based tools across industries and sectors marked a significant trend. Artificial intelligence has become a pivotal element of the business landscape as organisations integrate these technologies to streamline operations and improve efficiency.

In response, the European Union, as part of its digital strategy, is spearheading efforts to regulate AI and foster better conditions for its development and use. In April 2021, the European Commission introduced the first EU regulatory framework for AI, setting a precedent for how AI systems are evaluated and regulated according to the risks they pose to users. The framework requires that AI systems be analysed and categorised by risk level, which dictates the degree of regulation each warrants. Moreover, the proposed definition of "artificial intelligence" is deliberately broad, encompassing a wide array of data analysis techniques so that it remains applicable as technology evolves. This broad scope means that many technologies currently in use by businesses will be subject to the new regulations.

On June 14, 2023, the European Parliament adopted its negotiating position on the Artificial Intelligence Act, and the regulation was subsequently passed on March 13, 2024, with a robust majority of 523 votes in favour. The new regulation is critical for legal and compliance teams, who must understand its nuances and prepare their organisations to thrive in a regulated AI environment.

The EU AI Act Explained

The EU AI Act takes a tiered, risk-based regulatory approach. Under this framework, the compliance requirements for AI developers and users vary with the level of risk inherent in each AI application, ranging from minimal risk to outright prohibition.
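For compliance teams mapping an AI inventory onto these tiers, the sketch below models the four levels and the headline obligation attached to each. It is a hypothetical illustration of the framework as described in this article, not the legal text; the tier names and obligation summaries are paraphrased.

```python
from enum import Enum

# A minimal, hypothetical model of the Act's tiered approach; tier names
# and obligation summaries paraphrase this article, not the legal text.
class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: conformity assessment and ongoing oversight"
    LIMITED = "transparency duties, e.g. disclosing AI-generated content"
    MINIMAL = "light-touch: inform users they are interacting with AI"

def headline_obligation(tier: RiskTier) -> str:
    """Return the headline compliance burden for a given risk tier."""
    return tier.value

for tier in RiskTier:
    print(f"{tier.name:>12}: {headline_obligation(tier)}")
```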

At the top of the classification pyramid are AI applications deemed to pose an "unacceptable risk": applications that could endanger people's safety or rights. Specifically, the Act targets AI tools that could manipulate the cognitive behaviour of people or of specific vulnerable groups, such as voice-activated toys promoting harmful behaviours, or social scoring systems that rank individuals based on their behaviour, social standing, or personal characteristics. Such applications are slated for a total ban.

However, there are nuanced exceptions, particularly in the realm of public safety. The legislation carves out provisions for law enforcement, allowing the use of real-time remote biometric identification tools, but only in strictly controlled circumstances, such as searching for missing persons, preventing imminent threats, or identifying suspects of serious crimes. Any such use is subject to stringent legal oversight and court authorisation.

The second classification pertains to high-risk AI systems. These systems are subject to stringent obligations due to their considerable potential to adversely affect health, safety, fundamental rights, the environment, democracy, and the rule of law. High-risk AI systems are used in various critical sectors, including the following (a first-pass screening sketch follows the list):

  • Critical Infrastructure: AI technologies in sectors such as transport, where failures can pose risks to the life and health of citizens.
  • Education and Vocational Training: AI applications that influence educational paths and career trajectories, such as automated scoring systems in examinations.
  • Product Safety Components: AI implementations in areas like robot-assisted surgery that are integral to product safety.
  • Employment and Workforce Management: AI tools used in employment processes, such as resume sorting software for recruitment.
  • Law Enforcement: AI applications that could impact fundamental rights, for instance, tools evaluating the reliability of evidence.
  • Migration, Asylum, and Border Control: AI systems managing processes like visa application screenings.
  • Justice Administration and Democratic Processes: AI solutions employed to facilitate searches for court rulings.
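For teams triaging an internal AI inventory against these categories, a first pass can be as simple as a category lookup. The sketch below is purely illustrative: the category strings are shorthand for the areas above, the inventory entries are hypothetical, and a match should trigger full legal review rather than serve as a conclusive classification.

```python
# Hypothetical first-pass triage of an AI inventory against the high-risk
# areas listed above; category strings are illustrative shorthand only.
HIGH_RISK_AREAS = {
    "critical infrastructure",
    "education and vocational training",
    "product safety components",
    "employment and workforce management",
    "law enforcement",
    "migration, asylum and border control",
    "justice administration and democratic processes",
}

def needs_high_risk_review(category: str) -> bool:
    """Flag a system for full legal review when its declared category
    matches a high-risk area; a match is a prompt for review, not a
    conclusive classification."""
    return category.strip().lower() in HIGH_RISK_AREAS

inventory = [
    ("CV-screening tool", "employment and workforce management"),
    ("Marketing chatbot", "customer engagement"),
]
for name, category in inventory:
    status = "escalate" if needs_high_risk_review(category) else "standard review"
    print(f"{name}: {status}")
```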

The third segment of the AI regulatory framework addresses generative AI tools. Generative AI encompasses models trained on extensive datasets using self-supervision at scale, capable of performing a broad array of tasks across different markets and systems. While these tools are not categorised as high-risk, they are subject to specific transparency obligations and must adhere to EU copyright law. Key regulatory requirements for generative AI include:

  • A clear disclosure of content generated by AI systems.
  • Models must be designed to prevent the generation of illegal content.
  • Publication of summaries detailing copyrighted data used in model training.
  • Thorough evaluation of more complex AI tools, like GPT-4.
  • Reporting of serious incidents involving AI to the European Commission.
  • Content modified or generated by AI, such as images, audio, and video files, must be explicitly labelled as AI-generated to ensure user awareness (see the labelling sketch below).
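To illustrate the disclosure and labelling duties above, the sketch below wraps a model's output in an explicit AI-generated label. The record structure and field names are hypothetical and are not a schema prescribed by the Act or any standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical disclosure record for AI-generated content; field names
# are illustrative, not a schema prescribed by the Act.
@dataclass
class AIContentLabel:
    model_name: str
    disclosure: str = "This content was generated by an AI system."
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def label_output(content: str, model_name: str) -> dict:
    """Bundle generated content with an explicit AI-generated disclosure."""
    return {"content": content, "label": asdict(AIContentLabel(model_name))}

print(label_output("Draft clause text...", model_name="example-model"))
```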

Lastly, AI applications considered to be of lower risk still carry specific obligations, as users must always be informed when they are interacting with an AI system. Additionally, adherence to best practices in data quality and fairness is mandatory even for these lower-risk applications, which include image and video processing, recommender systems, and chatbots.

Timelines & Next Steps

The implementation of the AI Act will follow a structured timeline after approval of the final version. The regulation will become fully applicable 24 months after it enters into force, with specific provisions set to activate earlier; a simple deadline calculation follows the list below:

  • Immediate Restrictions: A complete ban on AI systems deemed to pose unacceptable risks will take effect 6 months after the regulation enters into force.
  • Codes of Practice: Implementation of codes of practice is scheduled for 9 months following enactment.
  • Transparency for General Purpose AI: Rules concerning transparency requirements for general-purpose AI systems will come into effect 12 months after the regulation's activation.
  • High-Risk AI Systems: Systems classified as high-risk will have a longer period (36 months) to comply with the regulatory requirements.
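Since every milestone above is an offset from the date the regulation enters into force, projected compliance dates can be computed mechanically. In the sketch below the entry-into-force date is a placeholder to be replaced with the actual date, and results are clamped to the first of the month for simplicity.

```python
from datetime import date

# Milestone offsets, in months after entry into force, as listed above.
MILESTONES = {
    "Ban on unacceptable-risk AI systems": 6,
    "Codes of practice": 9,
    "Transparency rules for general-purpose AI": 12,
    "Regulation fully applicable": 24,
    "High-risk AI system compliance": 36,
}

def add_months(start: date, months: int) -> date:
    """Shift a date forward by whole months, clamped to the 1st."""
    index = start.month - 1 + months
    return date(start.year + index // 12, index % 12 + 1, 1)

entry_into_force = date(2024, 8, 1)  # placeholder; substitute the actual date

for label, offset in sorted(MILESTONES.items(), key=lambda item: item[1]):
    print(f"{add_months(entry_into_force, offset)}  {label}")
```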

The gradual introduction of these rules will be overseen by a newly established AI Office within the European Commission. This office is tasked with monitoring the effective implementation and compliance with the regulations, ensuring that the transition towards these new standards is managed effectively across all member states. This phased approach allows different sectors and technologies sufficient time to adjust to the new regulatory environment.

Penalties: The High Cost of Non-Compliance

Under the EU AI Act, strict penalties are set for companies that fail to comply with the regulations. Fines for violations are substantial, calculated as either a percentage of the offending company's global annual turnover for the preceding financial year or a fixed amount, whichever is greater. For the most serious violations, non-compliance can result in fines of up to €35 million or 7% of the company's total annual worldwide turnover. This makes it critical for companies to integrate regulatory considerations into their business practices well before the enforcement deadlines.
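Because the fine is the greater of a fixed cap and a turnover percentage, exposure scales with company size. The arithmetic sketch below applies the headline caps quoted above to an invented turnover figure; it illustrates the "whichever is greater" rule and is not legal advice.

```python
def maximum_fine(worldwide_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 turnover_pct: float = 0.07) -> float:
    """Greater of the fixed cap or the turnover percentage, per the
    'whichever is greater' rule described above."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)

# Illustrative only: a company with EUR 2bn global annual turnover.
# 7% of EUR 2bn = EUR 140m, which exceeds the EUR 35m fixed cap.
print(f"EUR {maximum_fine(2_000_000_000):,.0f}")  # -> EUR 140,000,000
```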

Additionally, the provisional agreement under the AI Act includes provisions for more proportionate caps on administrative fines specifically tailored for small and medium-sized enterprises (SMEs) and start-ups. This consideration aims to ensure that penalties are fair and do not disproportionately impact smaller entities, which might have fewer resources to meet compliance demands compared to larger corporations. These structured penalties reinforce the EU's commitment to enforcing its AI regulations while supporting innovation and fairness across all business sizes.

The EU AI Act is set to be a pivotal development in the realm of AI regulation and innovation. In preparation for this legislation, organisations must evaluate their risk exposure and begin preparations for the forthcoming regulatory changes. This readiness will facilitate a transition towards a safer, more reliable AI landscape, enabling organisations to maximise the benefits of this transformative technology, whilst safeguarding fundamental rights and ensuring user safety.

Request Your Handy Guide

To get your copy of the handy guide, Adapting to Evolving AI Regulations: Strategic Insights for Legal and Compliance Leaders, please complete the form. Once we have received your request, a member of the team will be in touch with your copy.