In this episode of our new podcast series, The AI Workplace, Patty Shapiro (shareholder, San Diego) and Sam Sedaei (associate, Chicago) discuss the European Union’s (EU) Platform Work Directive, which aims to regulate gig work and the use of artificial intelligence (AI). Patty outlines the directive’s goals, including the classification of gig workers and the establishment of AI transparency requirements. In addition, Sam and Patty address the directive’s overlap with the EU AI Act and the potential consequences of non-compliance.

Transcript

Announcer:

Welcome to the Ogletree Deakins podcast where we provide listeners with brief discussions about important workplace legal issues. Our podcasts are for informational purposes only and should not be construed as legal advice. You can subscribe through your favorite podcast service. Please consider rating this podcast so we can get your feedback and improve our programs. Please enjoy the podcast.

Sam Sedaei:

Hello everyone. This is an episode of Ogletree’s The AI Workplace podcast, where we discuss the latest developments in the use of artificial intelligence in the workplace. My name is Sam Sedaei, and I’m an attorney in the Chicago office of Ogletree Deakins and a member of the firm’s Technology Practice Group and Cybersecurity Practice Group. I advise employers on the use of technology and artificial intelligence in the workplace. Joining us today is my colleague Patty Shapiro, an attorney in the San Diego office of Ogletree Deakins and a member of the firm’s Cross-Border Practice Group, where she advises companies on establishing and managing global workforces. Patty is also a member of the firm’s Technology Practice Group. Welcome, Patty.

Patty Shapiro:

Thank you.

Sam Sedaei:

I also want to note that just a few weeks ago, Patty was promoted to shareholder at the firm. It’s a major accomplishment and certainly a well-deserved one, so I just want to say congratulations, Patty.

Patty Shapiro:

Thanks, Sam, I appreciate that.

Sam Sedaei:

So, Patty, we are here today to discuss the EU Platform Work Directive, which recently went into effect. You wrote a very informative blog post about it on the Ogletree Deakins website. Can you give us a quick introduction to what the directive is?

Patty Shapiro:

Yes, absolutely. The EU Platform Work Directive is brand new in the sense that it has only just gone into effect in the EU. It’s in limbo right now because it still needs to be adopted by the member states, so it’s very high-level at the moment. But interestingly, it was drafted, and the discussions around it began, long before the EU AI Act even went into motion. This has been a long time coming. The whole point is to regulate platforms, meaning task-based work that is made available to gig workers through some sort of electronic means, usually apps, ride-sharing, something like that, and to regulate how artificial intelligence is used within that.

Sam Sedaei:

That’s interesting. Is the directive now law?

Patty Shapiro:

So not yet. It entered into force on December 1, 2024, but member states have until December 2, 2026, so two years, to implement it into their national law. There could be some variation country to country depending on whether member states want to incorporate additional regulations. It certainly couldn’t be less than what the directive requires, but they could add something more, and that could vary country to country. So it will definitely be something to watch over the next two years.

Sam Sedaei:

That’s very interesting. I was just recently learning about these directives and the difference between a directive and national law. It was interesting to see that there is this gap of almost two years between when the directive went into effect and when the member states must adopt it through national laws. Now, I understand the directive covers both the classification of platform workers and the use of AI. Can you give us a brief overview of the classification piece before we move on to discussing the AI regulations?

Patty Shapiro:

Sure. One of the major goals of the Platform Work Directive at the outset was to address the question of whether gig workers are independent contractors or employees, a debate we’ve seen around the world and one that is still going on throughout the U.S. The goal at the outset was to create a presumption that they are employees, and then to have a very rigid test, with a high burden, to establish that they’re actually independent contractors. But there was a lot of pushback on that during the negotiations; many countries chimed in saying that they did not want a stricter standard.

That piece of the legislation was watered down quite a bit, and it now includes no test of its own. But it does signal that there should be a rebuttable presumption of employment, and it defers to the member states to determine what sort of test they’re going to use. I don’t anticipate it making any major waves in this area, though. The direction-and-control principles that govern this issue globally are likely to continue to apply, so I don’t see this making much of a change.

Sam Sedaei:

That’s interesting. Now, Patty, can you talk about any mandates that the directive imposes on platforms?

Patty Shapiro:

Yeah. That piece of the legislation is fascinating to me because it’s much more prescriptive for employers than, for example, the EU AI Act. There’s some overlap; there are general themes we see, and I’m sure you’ve seen too, Sam, in these AI regulations, where there needs to be some level of transparency. So they want companies to notify gig workers if they’re going to be using AI in any way that affects a gig worker’s access to work, working conditions, or performance monitoring, anything like that; workers need to be aware of it. And not only aware of it, but they must also understand its purpose and how it works. So the transparency requirements here go a bit deeper and require further explanation on the part of the platform, the company engaging these individuals.

And then there’s also a human oversight component, the same thing we’ve seen in the EU AI Act, where the platform cannot defer entirely to automated systems. It needs to have a human overseeing how the system works and the output it produces, and trying to flag potential legal issues like discrimination, which, as you know, Sam, is the primary concern, I would say, with using AI in the employment context.

And then, interestingly, there is a piece of the Platform Work Directive that requires companies to provide an explanation. Gig workers can actually say, “I know this decision that affected me was made by AI; please explain why it happened.” For example, why was I pushed XYZ task, or not given access to a potential work opportunity? Or say some automated decision-making automatically kicks workers off the platform when a certain condition is met. For those kinds of decisions, workers have an opportunity to request an explanation. I think that could be a heavy lift for some platforms that don’t necessarily have the resources for this level of human oversight, which is the whole reason they’re using AI in the first place. It makes sense, but from a practical perspective it’s also a little counterintuitive in some ways.

There are also prohibitions on certain data processing activities, which are generally what you would expect. To protect the gig worker’s privacy, they don’t want companies processing or reporting, for example, private conversations that may take place through the app, or data that the platform may generate when the gig worker is not actually working on the platform, things like that. Is that generally along the lines of what you’ve seen in some of this other legislation, Sam, especially throughout the U.S.?

Sam Sedaei:

I think so. These notice requirements are interesting; they’re something you see almost invariably popping up in various state legislation, including a law that was recently passed in Illinois. The notice and transparency components, we see all of those things. What repeatedly stands out is this idea of human oversight, which I think reflects policymakers’ continuing, and somewhat justified, suspicion of artificial intelligence’s ability to make those critical decisions: hiring decisions, firing decisions, and discipline decisions.

And part of that, I think, is justified, because we have seen that AI can be wrong, it can hallucinate in many ways, and it can make mistakes. When the stakes are very high and people’s livelihoods are on the line, you want an actual person going through and at least confirming that a certain decision is the correct one to make, especially when it comes to a person’s employment or terms and conditions of employment. So yes, we’ve definitely seen those elements. We briefly discussed what a directive is and the difference between a directive and an actual law. I don’t know if you know the answer to this question yet, but what are the potential consequences of non-compliance? Do we know that at this point?

Patty Shapiro:

We do not. Once the member states adopt this into their local law, we will know how each country is going to address the consequences of non-compliance. It all depends on how they codify it and what consequences they determine are appropriate. All the directive says is that penalties should be “effective, dissuasive, and proportionate” to the non-compliance. I would not expect them to be as severe as, say, those under the EU AI Act, which has some very significant consequences; that’s just much less realistic at the member-state scale. But in countries known to be very employee-friendly, with higher standards for misclassification, meaning they find misclassification more often than not, I would actually expect to see bigger consequences than in others.

Sam Sedaei:

Very interesting. Now, you mentioned the EU AI Act. And I was wondering if you could comment on how this directive fits into the requirements under the EU AI Act.

Patty Shapiro:

Yeah. That’s what I found interesting about these two pieces of legislation happening in parallel, because there is a lot of overlap. A lot of what is in the Platform Work Directive has since been incorporated into the EU AI Act, which went into effect before the Platform Work Directive did. So it’s a little surprising to me that they continued on this course of having the Platform Work Directive with many of the same requirements. What’s unique is that it goes a step further, particularly with that written explanation requirement, which is not contemplated in the EU AI Act.

So in terms of practical guidance, the directive actually offers companies a lot more to lean on in terms of what to do during this two-year compliance window, to get in compliance both with what’s coming under the EU AI Act and with whatever this directive looks like once it’s adopted at the member-state level. On the EU AI Act side, February 2 of this year, just a couple of weeks away, is when the prohibitions go into effect. That’s the unacceptable-risk tier within the EU AI Act. So that’s something for companies to be aware of as well, to make sure they’re complying with that fast-approaching deadline.

Sam Sedaei:

Very interesting. And just to give a little context for those who are not familiar with it: the EU AI Act creates a risk-based approach to regulating AI, categorizing various types of activities by level of risk, with different regulations and different levels of restriction applying at each level. So thank you for that reminder about the upcoming deadline and effective date. It’s not the first time platform work has gotten special treatment in the EU. I was trying to add a little context to this conversation and understand why these workers are singled out for special protections, and I found a few interesting factors that I wanted to share.

There are around 500 digital labor platforms operating in the EU, and digital labor platforms are active in every EU country. The growth of the platform economy is illustrated by the fact that between 2016 and 2020, revenues in the platform economy grew almost fivefold, from an estimated 3 billion euros to around 14 billion euros. Most interesting of all, the biggest revenues are estimated to be in the delivery and taxi service sectors.

Patty Shapiro:

Yes.

Sam Sedaei:

So I think that adds a lot of context and helps us understand why there’s this focus on platform work.

Patty Shapiro:

I’d be curious what those stats look like from 2020 onward, because what I’ve seen in my practice is that there has been a huge boom in remote work, gig-based work, that kind of thing since the pandemic. It’s really exploded. To your point about why platform work gets more legislative attention in the EU compared to a standard employment situation, I think a large piece of that, at least for purposes of the Platform Work Directive, is that the opportunity for somebody to be adversely impacted, and to have a lesser ability to maintain their livelihood, is much greater. There are just more occasions where AI is being used in a way that gives somebody access, or denies them access, to work in the context of gig work, these very short-term, sporadic projects or work opportunities.

Sam Sedaei:

Yeah, that’s right, and that’s very interesting. So we have President Trump, who was inaugurated on January 20, just two days ago, and one of the first things he did was repeal President Biden’s AI executive order. Biden’s order required developers of AI systems that pose risks to U.S. national security, the economy, public health, or safety to share the results of safety tests with the U.S. government, in line with the Defense Production Act, before those systems were released to the public. The order also directed agencies to set standards for that testing and to address related chemical, biological, radiological, nuclear, and cybersecurity risks.

Biden’s order came as U.S. lawmakers were failing to pass legislation setting guardrails for AI development, and President Trump repealed that executive order as soon as he was inaugurated. I think that signals that the U.S., at least for the next few years, is going to take a different approach to regulating AI than the EU has. And I think what we’re going to be left with is a patchwork of state laws that try to regulate various aspects of AI as they relate to those individual states.

Patty Shapiro:

I completely agree. It’s very on-brand, right, when you liken it to data privacy regulation. The EU has the GDPR, which is a very robust piece of data privacy legislation, and the U.S. has not adopted anything at all on that scale; if anything, it has gone in the opposite direction.

Sam Sedaei:

That’s right.

Patty Shapiro:

So I think we’re seeing now the same trends with AI. I completely agree.

Sam Sedaei:

I think so too. So, Patty, do you have any final thoughts on the EU directive or anything else?

Patty Shapiro:

No, I don’t; I think we’ve covered the bulk of it. It’ll be an interesting couple of years as we continue to see all of this play out in the rapidly evolving landscape of AI regulation, particularly with the two-year compliance window for the EU AI Act and now the two-year window for member states to incorporate the Platform Work Directive. Lots on the horizon, I’m sure.

Sam Sedaei:

For sure. That sounds good. Well, Patty, thank you so much for joining us on The AI Workplace podcast, and thank you to everyone who has been listening. See you on the next episode of The AI Workplace.

Announcer:

Thank you for joining us on the Ogletree Deakins podcast. You can subscribe to our podcasts on Apple Podcasts or through your favorite podcast service. Please consider rating and reviewing so that we may continue to provide the content that covers your needs. And remember, the information in this podcast is for informational purposes only and is not to be construed as legal advice.

 
