
Quick Hits

  • The EU Platform Work Directive took effect on December 1, 2024, but EU member states have until December 2, 2026, to implement it into national law and develop appropriate guidance for the classification of platform workers as independent contractors versus employees.
  • The directive imposes obligations on digital platforms using automated decision-making or automated monitoring systems that go above and beyond the requirements of the EU AI Act.
  • The directive’s rebuttable presumption of employment does not change existing law regarding misclassification, but it does promise future guidance from member states on rebutting the presumption and, in turn, on appropriately classifying platform workers.

State of the Law

The directive came into force on December 1, 2024, imposing new obligations on companies that make work available to gig workers via electronic means or digital platforms. However, the consequences of noncompliance with the majority of the directive’s provisions are not yet known because member states have until December 2, 2026, to implement the directive into national law, including defining penalties that each deems “effective, dissuasive and proportionate to the nature, gravity and duration of the … infringement and to the number of workers affected.”

Key AI Regulations

To the extent a digital platform uses automated decision-making or automated monitoring systems, the directive imposes a number of requirements.

Notice

Companies must notify platform workers in writing (which may be in electronic form) about when artificial intelligence (AI) will be used: no later than the worker’s first working day; before any change that will affect working conditions, the organization of work, or the monitoring of work performance; and at any other time upon the worker’s request. Companies must also notify job candidates in advance if AI will be used in recruitment or selection procedures.

Human Oversight and Review

Much like the EU AI Act, the directive requires digital platforms to implement human oversight of AI tools by individuals who have “the competence, training, and authority necessary to exercise that function, including for overriding automated decisions.” However, it goes further than the EU AI Act by implementing the following mandates:

  • At least every two years, digital platforms must evaluate the impact of decisions regarding platform workers that were taken or supported by AI, including how those decisions affected workers’ working conditions and equal treatment. Digital platforms must share the results of that evaluation with workers’ representatives and with workers upon request.
  • Any decision to “restrict, suspend or terminate the contractual relationship or the account of a person performing platform work” must be made by a human being. Unless the worker is an independent contractor, the digital platform must provide a written explanation to the worker “without undue delay” and no later than on the date on which the decision will take effect.
  • Unless the worker is an independent contractor, the digital platform must “provide persons performing platform work with access to a [designated] contact person,” who must provide an oral or written explanation, without undue delay, “for any decision taken or supported by [an] automated decision-making system.” If a worker challenges a decision, the digital platform must respond in writing no later than two weeks after receipt of the challenge.

Health and Safety

The directive imposes ambiguous requirements on digital platforms that use AI tools. For example, platforms must evaluate the risks (including psychosocial and ergonomic risks) that AI tools pose to platform workers’ safety and health, and they may not use AI tools “in a manner that puts undue pressure on platform workers or otherwise puts at risk the safety and physical and mental health of platform workers.”

Information and Consultation

In addition to requiring member states to encourage platform worker representation and collective bargaining, the directive grants platform workers’ representatives information and consultation rights with respect to the introduction of, or substantial changes in the use of, AI tools, as well as the right to engage experts, in some cases at the digital platform’s expense. If there are no such representatives, the digital platform must give workers written notice about the introduction of, or substantial change in the use of, AI tools.

Data Privacy

In some cases, the directive requires a data-protection impact assessment, the results of which must be provided to workers’ representatives. In all cases, the directive prohibits digital platforms from using AI tools to:

  • collect personal data while a platform worker is not performing or offering to perform work;
  • process a platform worker’s emotional or psychological state or private conversations, including those with other workers or worker representatives;
  • process personal data to infer a protected characteristic or predict the exercise of fundamental legal rights; or
  • process biometric data to establish a platform worker’s identity by comparing “his or her biometric data to stored biometric data of a number of individuals in a database (one-to-many identification).”

Misclassification Issues

As outlined in our prior article, the directive attempts to create a rebuttable presumption that platform workers contracted with on or after December 2, 2026, are employees of the digital platform, not self-employed. However, this provision’s impact is as yet unclear because it defers to member states’ existing law, rather than prescribing a new test for misclassification.

The directive requires member states to issue guidance on the classification of platform workers, which should help companies better understand how to manage platform workers in a way that does not trigger the presumption. The directive states that the presumption of employment does not apply to tax, criminal, or social security proceedings, which means it narrowly applies for employment law purposes (e.g., statutory benefits, termination protections, etc.).

In addition, the directive requires digital platforms to disclose the existence of platform work, which could invite new scrutiny of platform worker classification. Specifically, it requires member states to ensure that digital platforms disclose when platform work is performed in the member state, and upon request by competent authorities or worker representatives, provide the following information, updated at least every six months:

  • the number of platform workers, sorted by level of activity and their contractual or employment status;
  • the general terms and conditions applicable to platform workers;
  • “the average duration of activity, the average weekly number of hours worked per person and the average income from activity of persons performing platform work on a regular basis” through the platform; and
  • the digital platform’s clients receiving the platform work.

Next Steps

During the next two years, EU member states will be tasked with implementing the directive into national law, which could impose additional requirements above those mandated by the directive.

In the interim, digital platforms with platform workers in the EU may want to assess how the directive could impact their business models and which compliance measures they can undertake now.

Ogletree Deakins’ Cross-Border Practice Group will continue to monitor developments and will provide updates on the Cross-Border, Cybersecurity and Privacy, Employment Law, and Technology blogs as additional information becomes available.

Patty Shapiro is a shareholder in Ogletree Deakins’ San Diego office.

This article was co-authored by Leah J. Shepherd, who is a writer in Ogletree Deakins’ Washington, D.C., office.
