Quick Hits
- The EU Platform Work Directive will not provide a uniform test for the classification of platform workers as originally anticipated; rather, the presumption of employment will apply when there are “facts indicating control and direction.”
- The directive’s rebuttable presumption of employment will not apply to proceedings concerning tax, social security, or criminal matters.
- Digital platforms using AI to process personal data will be prohibited from undertaking certain kinds of processing, including using biometrics to verify identity, and will be required to perform a data protection impact assessment.
- Platform workers have a number of protections under the directive, including a right to an inform-and-consult procedure when automated decision-making or automated monitoring tools are introduced or changed, as well as the right to consult an expert during that procedure. If the digital platform has 250 or more workers in the member state concerned, it must pay for that expert.
Vague Rebuttable Presumption of Employment
For better or worse, the directive will not unify the member states when it comes to identifying a misclassified worker. Much of the debate surrounding the directive concerned the extent to which the directive would articulate a test to determine whether a platform worker is an independent contractor or an employee. The provisional agreement reveals that the original requirement to satisfy two out of five indicators has been scrapped entirely. Now, the directive states that platform workers are presumed to be employees if there are “facts indicating control and direction,” as defined by national laws, collective agreements, or EU case law.
This approach gives significant deference to member states, which scrutinize independent contractor relationships to widely varying degrees and often do not have well-articulated tests to determine classification. However, the directive requires member states to provide guidance to national authorities and digital platforms when implementing the directive, which could mean that member states will offer a clearer framework for classification assessments.
In addition, the rebuttable presumption of employment under the directive now has a limited scope: it will not apply to proceedings concerning tax, social security, or criminal matters. Although the directive permits a government authority to initiate a misclassification claim on a platform worker’s behalf, that is less likely to occur outside of those contexts. A labor authority could nevertheless initiate such an action, and that may become a practical consequence of the directive’s new requirement that digital platforms notify competent national authorities about platform work being performed in-country and about platform workers’ employment status.
Robust Data Privacy Requirements
Unlike the initial draft, the directive now includes significant data protection obligations. In addition to prohibitions on processing certain types of personal data, including the use of biometrics to establish a platform worker’s identity, digital platforms must conduct a data protection impact assessment and share it with platform workers’ representatives. The directive also now requires digital platforms using automated monitoring systems not only to inform platform workers that such a system is in use and which categories of data it will process, but also to explain the “why” and “how” of the monitoring and identify who will receive any such personal data.
Platform Worker Protections
Transparency remains one of the key objectives of the directive, but it also now includes a number of new provisions aimed at further protecting platform workers, including the following:
- The directive now makes clear that digital platforms’ obligation to inform about the use of automated decision-making or automated monitoring extends to candidates when such systems will be used in the recruitment process.
- Digital platforms will have obligations to assess and address the potential health and safety consequences of automated monitoring or decision-making systems.
- Digital platforms must inform and consult platform workers’ representatives or, if none, the platform workers themselves before making decisions “likely to lead to the introduction of or to substantial changes in the use of automated monitoring or decision-making systems.” Further, platform workers have the right to be assisted by an expert if they feel it is necessary (for example, if they do not understand the technicalities of a change) and, if the digital platform has 250 or more workers in the member state concerned, the digital platform must pay for that expert.
- Competent authorities and platform workers’ representatives will now be able to ask the digital platform for details about its workers’ average time worked and average income earned.
When Will the Directive Become Effective?
The text of the agreement will now be finalized in all the official languages, formally adopted by both EU institutions, and then published in the Official Journal of the European Union. It will take effect twenty days after publication, and member states will have two years to transpose the directive’s provisions into their national laws.
Ogletree Deakins’ Cross-Border Practice Group will continue to monitor developments and will provide updates on the Cross-Border blog as additional information becomes available.