
The world’s first artificial intelligence (AI) regulatory framework is “a step closer” to becoming law, the European Parliament recently announced. Following the European Commission’s 2021 draft proposal, the EU Parliament’s Committee on Civil Liberties, Justice and Home Affairs and Committee on the Internal Market and Consumer Protection adopted a draft negotiating mandate by a large majority on 11 May 2023.

Quick Hits

  • The purpose of the AI Act is to manage AI safely by ensuring there is appropriate human oversight.
  • Members of the European Parliament have endorsed a four-tiered categorisation of AI systems: unacceptable, high, low, and minimal.
  • An EU AI Office would be created to monitor progress of the AI Act, be a point of consultation, and produce guidance on compliance.

The fundamental purpose of the legislation—called the Artificial Intelligence Act, or AI Act—is to manage the use of AI safely, ensuring there is appropriate human oversight. The key principles set out in the legislation include ensuring that AI is developed in a way which helps and respects people, minimises harm, complies with privacy and data protection rules, is transparent, and promotes equality and democracy. These aims are woven throughout the new drafting, much like the fundamental principles of the General Data Protection Regulation (GDPR).

In the amended proposal, a “best efforts” obligation has been introduced in line with the general aims of the legislation. AI providers would be required to “[establish] a high-level framework that promotes a coherent human-centric European approach to ethical and trustworthy Artificial Intelligence” in accordance with EU values.

What Is AI?

An initial point of interest to onlookers was how legislators would define AI itself, as there is currently no universally recognised definition. The draft proposal included a “single future-proof definition of AI”: “software … [which] can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

Despite best intentions, the definition proved not as future-proof as initially thought: critics observed that it left room for interpretation and legal uncertainty. The amended definition of AI in the AI Act is as follows: “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”

EU Approach to Regulation

Members of the European Parliament (MEPs) have endorsed the Commission’s risk-based approach from its draft proposal, in which AI systems are divided into four risk categories: unacceptable, high, low, and minimal. Certain AI systems would be prohibited altogether, whereas others would be permitted but subject to several obligations regarding their development, placing on the market, and use.

Unacceptable Risk

AI systems in this category are those that manipulate human behaviour or use “social scoring” (assessing the trustworthiness of people based on their social behaviour); these would be banned outright under the current proposals. MEPs have substantially expanded the list of prohibited uses to ban AI used in an intrusive and discriminatory manner. The AI Act would now also prohibit biometric categorisation systems that use sensitive characteristics such as gender and race, as well as predictive policing systems. In addition, the amended act would disallow the use of emotion recognition systems in the workplace, education, law enforcement, and border management. Creating facial recognition databases by scraping biometric data from social media and CCTV footage would also be banned, as this amounts to a violation of an individual’s right to privacy.

High Risk

AI systems in this category, such as those that make decisions about people in areas sensitive to fundamental rights, would be subject to strict requirements, including transparency, safety, and human oversight. MEPs expanded the high-risk classification to cover AI systems that pose harm to people’s health, safety, or the environment. The category now also includes AI systems used to influence voters in political campaigns, as well as the recommendation systems used by social media platforms. In addition, a “fundamental rights impact assessment” would have to be carried out before a high-risk system is used for the first time.

Low and Minimal Risk

AI systems in these categories, such as chatbots and spam filters, would remain largely unregulated in order to maintain competitiveness in the EU.
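
For readers mapping these tiers onto internal compliance tooling, the categorisation can be thought of as a simple taxonomy. The Python sketch below is a minimal illustration of the structure described in the three sections above; the example use-case mappings and helper function are our own illustrative assumptions, not a legal classification tool.

    from enum import Enum

    class RiskTier(Enum):
        """The four risk tiers endorsed by MEPs in the draft AI Act."""
        UNACCEPTABLE = "unacceptable"  # prohibited outright
        HIGH = "high"                  # permitted, subject to strict obligations
        LOW = "low"                    # largely unregulated
        MINIMAL = "minimal"            # largely unregulated

    # Illustrative mapping of use cases mentioned in this article to tiers.
    # The AI Act defines these categories in legal language, not as a lookup table.
    EXAMPLE_CLASSIFICATIONS = {
        "social scoring": RiskTier.UNACCEPTABLE,
        "emotion recognition in the workplace": RiskTier.UNACCEPTABLE,
        "social media recommendation system": RiskTier.HIGH,
        "system influencing voters in political campaigns": RiskTier.HIGH,
        "customer service chatbot": RiskTier.LOW,
        "spam filter": RiskTier.MINIMAL,
    }

    def obligations_for(use_case: str) -> str:
        """Summarise the obligations attached to a use case's risk tier."""
        tier = EXAMPLE_CLASSIFICATIONS.get(use_case)
        if tier is RiskTier.UNACCEPTABLE:
            return "Prohibited outright under the draft AI Act."
        if tier is RiskTier.HIGH:
            return ("Permitted, subject to transparency, safety, human oversight, "
                    "and a fundamental rights impact assessment before first use.")
        if tier in (RiskTier.LOW, RiskTier.MINIMAL):
            return "Largely unregulated."
        return "Unclassified: requires legal assessment."

    print(obligations_for("spam filter"))  # -> "Largely unregulated."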

Other Safeguards

An EU AI Office would be created to monitor the progress of the AI Act, serve as a point of consultation, and produce guidance on compliance.

Additional transparency requirements have been introduced for “generative AI systems”, which can autonomously generate text, images, or audio. Such systems would be required to disclose that their content was artificially generated or manipulated. The Commission and the AI Office would consult on and develop guidelines for how these transparency obligations are to be implemented.
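
In practice, a provider might meet such a disclosure duty by labelling output at the point of generation. The minimal Python sketch below illustrates the idea; the notice wording and the function are hypothetical assumptions of ours, since the actual form of disclosure would depend on the forthcoming guidelines.

    # A minimal sketch of content disclosure at the point of generation.
    # The notice wording below is our own assumption, not text prescribed
    # by the AI Act or by Commission/AI Office guidance.
    AI_DISCLOSURE_NOTICE = "This content was generated by an AI system."

    def with_disclosure(generated_text: str) -> str:
        """Prepend a machine-generated-content notice to generative AI output."""
        return f"[{AI_DISCLOSURE_NOTICE}]\n\n{generated_text}"

    # Usage: wrap model output before it is published or shown to users.
    print(with_disclosure("Quarterly results show strong growth."))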

AI providers would be obliged to ensure, by means of training, that their staff and others dealing with AI on their behalf have a “sufficient level of AI literacy”, including knowledge of how AI functions, the benefits the products offer, and the risks involved.

Next Steps

Following committee approval, the draft needs to be endorsed by the whole Parliament, with a vote anticipated during the 12–15 June 2023 plenary session. Once the draft has been approved, tripartite “trilogue” negotiations on the final form of the law between the Council of the EU, the European Parliament, and the European Commission can commence.

Ogletree Deakins will continue to monitor developments with respect to the AI Act and will provide updates on the Cross-Border, Cybersecurity and Privacy, and Technology blogs as additional information becomes available. Important information for employers is also available via the firm’s webinar and podcast programs. For immediate updates on these and other topics, please follow us on LinkedIn and Twitter.

Simon J. McMenemy is the managing partner of the London office of Ogletree Deakins.

Ellie Burston is a second-year trainee solicitor in the London office of Ogletree Deakins.
