
AI Regulation and the EU AI Act

2024 is going to be the year of AI regulation. As the impact of AI increases in our daily lives, governments and regulatory bodies globally are grappling with the need to establish clear guidelines and standards for its responsible use. Ibrahim Hasan looks at the latest developments.

ChatGPT 

Ask people about AI and many will talk about AI-powered chatbots like ChatGPT and Gemini, Google’s replacement for Bard. The former currently has around 180.5 million users, who generated 1.6 billion visits in December 2023. However, with great popularity comes increased scrutiny, as well as privacy and regulatory challenges.

In March 2023, Italy became the first Western country to block ChatGPT when its data protection regulator (the Garante per la Protezione dei Dati Personali) cited privacy concerns. The Garante’s communication to OpenAI, owner of ChatGPT, highlighted the lack of a suitable legal basis for the collection and processing of personal data for the purpose of training the algorithms underlying ChatGPT, the tool’s potential to produce inaccurate information about individuals, and child safety concerns. In total, the Garante said that it suspected ChatGPT of breaching Articles 5, 6, 8, 13 and 25 of the EU GDPR.

ChatGPT was made accessible in Italy again four weeks after the above decision, but the Garante launched a “fact-finding activity” at the time. This culminated in a statement on 31 January 2024, in which it said it “concluded that the available evidence pointed to the existence of breaches of the provisions contained in the EU GDPR [General Data Protection Regulation]”. The cited breaches are essentially the same as the provisional findings discussed above, focussing on the mass collection of users’ data for training purposes and the risk that younger users may be exposed to inappropriate content. OpenAI has 30 days to respond with a defence.

EU AI Act 

Of course, there is more to AI than ChatGPT, and some would say much more beneficial use cases; examples include matching drugs to patients, numerous major cancer research breakthroughs, and robots performing major surgery. But there are downsides too, including bias, lack of transparency, and failure to take account of the ethical implications.

On 2nd February 2024, EU member states unanimously reached an agreement on the text of the harmonised rules on artificial intelligence, the so-called “Artificial Intelligence Act” (AI Act). The final draft of the Act will be adopted by the European Parliament in a plenary vote in April and will come into force in 2025 with a two-year transition period.

The main provisions of the Act can be read here. They do not differ much from the previous draft, which we discussed in our previous blog here. In summary, the AI Act sets out comprehensive rules for AI applications, including a risk-based system to address potential threats to health, safety and human rights. The Act will ban some AI applications which pose an “unacceptable risk” (e.g. real-time remote biometric identification systems, such as facial recognition) and impose strict obligations on others considered “high risk” (e.g. AI in EU-regulated product safety categories such as cars and medical devices). These obligations include adherence to data governance standards, transparency rules, and the incorporation of human oversight mechanisms.

Despite Brexit, UK businesses and entities engaged in AI-related activities will still be affected by the Act if they intend to operate within the EU market. The Act will have extra-territorial reach, just like the EU GDPR.

UK response 

The UK Government’s own decisions on how to regulate AI will be influenced by the EU’s approach. In March last year, the Government published an AI White Paper entitled “A pro-innovation approach to AI regulation”. The paper sets out the UK’s preference not to place AI regulation on a statutory footing but to make use of “regulators’ domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used.”

The government’s long-awaited follow-up to the AI White Paper was published last week. 

Key takeaways are: 

  • The government’s proposals for regulating AI still revolve around empowering existing regulators to create tailored, context-specific rules that suit the ways the technology is being used in the sectors they scrutinise, i.e. no legislation yet (regulators have been given until 30th April 2024 to publish their AI plans).
  • The government generally reaffirmed its commitment to the White Paper’s proposals, claiming this approach to regulation will ensure the UK remains more agile than “competitor nations” while also putting it on course to be a leader in safe, responsible AI innovation.
  • It will, though, consider creating “targeted binding requirements” for select companies developing highly capable AI systems.
  • It also committed to conducting regular reviews of potential regulatory gaps on an ongoing basis: “We remain committed to the iterative approach set out in the whitepaper, anticipating that our framework will need to evolve as new risks or regulatory gaps emerge.”

According to Michelle Donelan, Secretary of State for Science, Innovation and Technology, the UK’s approach to AI regulation has already made the country a world leader in both AI safety and AI development.

“AI is moving fast, but we have shown that humans can move just as fast,” she said. “By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.” 

Practical steps 

Last year, the ICO conducted an inquiry after concerns were raised about the use of algorithms in decision-making in the welfare system by local authorities and the DWP. In this instance, the ICO did not find any evidence to suggest that benefit claimants were subjected to any harms or financial detriment as a result of the use of algorithms. It did, though, emphasise a number of practical steps that local authorities and central government can take when using AI:

  • Take a data protection by design and default approach 
  • Be transparent with people about how you are using their data by regularly reviewing privacy policies
  • Identify the potential risks to people’s privacy by conducting a Data Protection Impact Assessment

In January 2024, the ICO launched a consultation series on Generative AI, examining how aspects of data protection law should apply to the development and use of the technology. It is expected to issue more AI guidance later in 2024.

Ibrahim Hasan is a solicitor and director of Act Now Training.

Join Act Now's Artificial Intelligence and Machine Learning: How to Implement Good Information Governance workshop for hands-on insights, key resource awareness, and best practices, ensuring you’re ready to navigate AI complexities fairly and lawfully.