
Quick Hits

  • The AI Act’s risk-based approach subjects AI applications to four levels of restrictions and requirements: “unacceptable risk” (applications that are banned outright), “high risk,” “limited risk,” and “minimal risk.”
  • The AI Act treats the use of AI in the workplace as potentially high-risk.
  • The AI Act is expected to be published soon and to go into effect in late spring or early summer of 2024.

While the AI Act does not exclusively regulate employers, it treats the use of AI in the workplace as potentially high-risk, and specifically requires employers to:

  • notify employees and workers’ representatives before implementing “high-risk AI systems,” such as systems that are used for recruiting or other employment-related decision-making purposes;
  • follow the “instructions for use” provided by the providers of high-risk AI systems;
  • implement “human oversight” by individuals “who have the necessary competence, training and authority, as well as the necessary support”; and
  • retain records of the AI system’s output and maintain compliance with other data privacy obligations.

A Risk-Based Approach

  1. Unacceptable Risk applications are banned. They include:
  • the scraping of faces from the internet or security footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • cognitive behavioral manipulation;
  • biometric categorization to infer sensitive data, such as sexual orientation or religious beliefs; and
  • certain cases of predictive policing for individuals.

  2. High Risk applications, including the use of AI in recruiting and other aspects of the workplace, are subject to a variety of requirements.

  3. Limited Risk applications, such as chatbots, must adhere to transparency obligations.

  4. Minimal Risk applications, such as games and spam filters, can be developed and used without restriction.

Hefty Penalties for Violations

Engaging in prohibited AI practices can result in hefty penalties, with fines of up to €35 million or 7 percent of worldwide annual turnover for the preceding financial year, whichever is higher. Failure to comply with the AI Act’s data governance and transparency requirements can lead to fines of up to €15 million or 3 percent of worldwide annual turnover for the preceding financial year. Violations of the AI Act’s other requirements can result in fines of up to €7.5 million or 1 percent of worldwide annual turnover for the preceding financial year.
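For illustration only, the “whichever is higher” cap is simply the maximum of two values. The short Python sketch below shows how the top-tier cap would be computed; the function name and the €1 billion turnover figure are hypothetical examples, not anything prescribed by the AI Act.

    # Illustrative sketch only: the upper bound of a fine for prohibited
    # AI practices is the higher of a fixed amount (EUR 35 million) or
    # 7 percent of worldwide annual turnover for the preceding year.
    def max_fine_prohibited_practices(annual_turnover_eur: float) -> float:
        fixed_cap = 35_000_000                      # EUR 35 million
        turnover_cap = 0.07 * annual_turnover_eur   # 7% of annual turnover
        return max(fixed_cap, turnover_cap)

    # Hypothetical example: a company with EUR 1 billion in turnover faces
    # a cap of EUR 70 million, since 7 percent exceeds the EUR 35 million floor.
    print(max_fine_prohibited_practices(1_000_000_000))  # 70000000.0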

The AI Act is expected to be published and to go into effect in late spring or early summer of 2024. In the meantime, employers can expect other countries to follow suit quickly with legislation modeled on the AI Act.

Ogletree Deakins’ Cross-Border Practice Group, Cybersecurity and Privacy Practice Group, and Technology Practice Group will continue to monitor developments and will provide updates on the Cross-Border, Cybersecurity and Privacy, and Technology blogs as additional information becomes available.
