Announcer: Welcome to the Ogletree Deakins podcast, where we provide listeners with brief discussions about important workplace legal issues. Our podcasts are for informational purposes only and should not be construed as legal advice. You can subscribe through your favorite podcast service. Please consider rating this podcast so we can get your feedback and improve our programs. Please enjoy the podcast.
Hera Arsen: My name is Hera Arsen, and I’m the firm’s Director of Content, working out of Ogletree Deakins’s Torrance, California, office. And I’m here today with your AI Workplace host, Sam Sedaei, for this AI Roundup edition of the podcast. Today, we’ll be discussing several recent developments in the world of AI. Before we get started, a little bit about Sam. Sam is an attorney in the firm’s Chicago office. He’s a member of the firm’s Technology Practice Group and the Cybersecurity and Privacy Practice Group. In addition to hosting our podcast, he advises employers on the use of technology and artificial intelligence tools in the workplace, including in hiring, timekeeping, productivity monitoring, and performance assessments. Sam also regularly drafts AI policies for employers and advises companies on compliance with a wide array of laws governing the use of AI tools in the workplace. Let’s get started with our first topic, Sam.
Sam Sedaei: Sure. So, the Big Beautiful Bill, which everybody is talking about these days, is a spending bill being negotiated between House Republicans and President Trump. What may have gone unnoticed is that a couple of weeks ago, Republicans added a clause to the bill that would ban states and localities from regulating artificial intelligence for a decade. This, of course, pleasantly surprised many in the tech industry who have been lobbying for uniform, light-touch regulation, and it has outraged state governments, especially the many states that already have some form of AI regulation. So, it is certainly an interesting twist on this budget bill.
Hera Arsen: Interesting. Sam, can you explain a little bit, give us some context so our listeners can understand why this provision may have been added to the bill?
Sam Sedaei: Sure. So, the advancement in artificial intelligence tools and technology over the past several years has led to heightened awareness of their wide adoption and the potential risks associated with it. There has been little or no federal action to date addressing those risks, such as inaccuracies in AI outputs, the potential lack of ethical standards that AI tools would have to abide by, and the potential discriminatory effects of using AI in the workplace and in other contexts. So, to address those issues, many states have taken it upon themselves to come up with various kinds of regulations. Colorado, for example, has passed a fairly comprehensive AI law, Illinois has its own, and there are other states. At this point, I believe almost half of all the states have some kind of AI-related law or regulation.
And so, within that context, we still didn’t have much activity from the federal government. We had a few executive orders from President Biden. When President Trump took office, he rescinded those executive orders, and he has expressed an interest in unleashing tech companies to pursue AI technology. He’s not very interested in regulating AI, at least not at this time. So, what the Republicans appear to be doing with this ban is trying not only to avoid regulating AI themselves, but also to prevent states from doing so. It’ll be interesting to see how it pans out, but I think that is what’s happening: we have a clash between a federal government that doesn’t want to regulate AI and state governments that do.
Hera Arsen: Makes sense. What are the chances the bill is going to be approved by the Senate? Can you read the tea leaves a little bit?
Sam Sedaei: Well, I can certainly try. While the bill would be far-reaching if enacted, its viability in the Senate is questionable, where procedural rules could doom the inclusion. This specific proposed language could run afoul of the so-called Byrd rule, a Senate rule that limits what can be included in reconciliation legislation. It limits such legislation to provisions that directly affect the federal budget, specifically spending and revenues, and it prevents reconciliation bills from becoming vehicles for non-budgetary policy changes.
So, this looks like a classic policy change: a ban that is not related to the federal budget in any meaningful way has been put into the bill. I would put the chances of it passing the Senate at less than 50 percent, but it is something to be mindful of. And even if it doesn’t pass, it reflects the priorities of the party currently in the majority: they will probably not only continue to avoid enacting AI regulations or laws themselves, but also look for other ways to stop the states from coming up with their own laws governing AI.
Hera Arsen: Okay, great. Well, we’ll be keeping an eye on that and seeing where the Senate lands. Let’s switch gears a little bit, going from federal regulations and legislation to a state case. The next topic we wanted to touch on is a case out of California. I’m going to let you take it from there, Sam. What is the case about, and why is it important for employers to know about it?
Sam Sedaei: Sure. So, this case is out of California, and it involves the use of AI-based hiring recommendation tools, and a specific tool that was used here. Several plaintiffs allege that the tool reinforces existing employer bias, and that as a result, they and other job applicants suffered various kinds of discrimination, including on the basis of race, age, and disability. The court granted preliminary collective certification under the Age Discrimination in Employment Act to a group of job applicants who allege that they were rejected by various companies for discriminatory reasons, and that all of these companies used the same tool in question.
Hera Arsen: So, what’s different about this case, as compared to other collective actions brought under the ADEA, the Age Discrimination in Employment Act?
Sam Sedaei: The biggest difference is that here we have a certification for a group of applicants who applied to jobs with different employers. So, the common element is not that they applied to the same employer, but that all of these employers allegedly used the same AI-driven tool. I believe this is the first certification of its kind relating to the use of AI in the workplace, and it is significant in that it has the potential to include a large number of collective members.
Hera Arsen: So, is that the primary significance of this case, the collective action component of it?
Sam Sedaei: That is absolutely a huge part of it, because it could affect the amount of damages involved if there is liability, or even if there is a settlement. That is the biggest part. The other part is that we have seen a lot of awareness about the risks associated with AI tools, especially risks that have to do with discrimination and bias. Here we have a case where this is no longer a theory, and no longer companies speaking in broad terms about how they can avoid risk. This is the actual risk coming to fruition in litigation. Depending on the size of the collective, this could turn into a significant case, and it involves nothing more than the use of AI-driven software in the hiring process. What’s also important to note is that the plaintiffs have not alleged that AI made the actual decisions about who would get interviews; they allege that the software was merely used in the process, and the court still deemed that sufficient to justify certification.
Hera Arsen: Okay. So, Sam, this case could be a real wake-up call for employers that use a tool like this one. This is another one that we’ll be keeping an eye on. For our third topic, we’re going to stay in California and talk about some regulations: the regulations California regulators recently adopted regarding automated decision systems, or ADS for short. Can you tell us a little more about these regulations?
Sam Sedaei: Sure. These regulations aim to protect against employment discrimination, given the dramatic rise in AI use in employment. On March 21, the Civil Rights Council of the California Civil Rights Department, or CRD, voted to approve the rules, which now must be cleared by the Office of Administrative Law, or OAL, and then published by the Secretary of State. They add regulations to the body of laws already in existence that impact the use of ADS. This is related, in a way, to the case out of California we just discussed, although that case was not based on these regulations. But we can talk a little more about that and about what ADS means.
Hera Arsen: So, how do these regulations define ADS?
Sam Sedaei: So, they define ADS as a computational process that makes a decision or facilitates human decision-making regarding an employment benefit. An automated decision system may be derived from and/or use artificial intelligence, machine learning, algorithms, statistics, or other data processing techniques. To clarify what’s in scope, the regulations outline exclusions such as word processing software, data storage, and calculators; without those exclusions, people might assume those tools are covered as well. The regulations also define other technology-related terms like algorithms, machine learning, and automated decision system data. What the regulators are trying to do is really zero in on what they’re concerned about. Then, to illustrate the types of tasks that ADS performs, the regulations provide a non-exhaustive list of examples, including resume screening, using computer-based assessments and/or tests to make predictive assessments about applicants or employees, and analyzing applicant or employee data from third parties. The list of examples reflects common uses of AI tools in HR functions.
Hera Arsen: So, Sam, I’m going to ask you for your predictions again. What’s the likelihood that these regulations would become final, given what you know about California and the California regulators?
Sam Sedaei: Yeah, if I were to put money on this one, I would say the regulations are going to pass these final hurdles, though we’ll have to wait and see. California wants to be at the forefront of protecting workers in all contexts, and the state is also interested in protecting workers against potential risks from the use of AI. These regulations appear to address that. They express the concern and the policy priorities of California policymakers: if you’re going to use ADS in your processes, you need to make sure you’re not doing it in a way that results in bias. So, if I had to guess, I think the regulations will be finalized. And if they are, they would go into effect in a little over a month, so it’s coming right up. But that would be my prediction.
Hera Arsen: Okay. And speaking generally about automated decision systems, what are some of the policies and practices companies are putting in place to address potential risks associated with using ADS?
Sam Sedaei: So, several things. What I’ve seen companies do is assess the AI tools they’re using and those tools’ AI functions to make sure they’re avoiding the pitfalls associated with them: the possibility of bias, inaccuracies, and the absence of human oversight. I see that repeatedly, and more companies are engaging in those discussions. They are also putting in place AI policies governing the use of these tools, which allows them to have a system in place to continuously monitor the use of AI tools in their HR functions. They are sometimes coming up with lists of approved AI tools so that they have some level of control over what employees can use.
And they’re establishing guidelines for their relationships with vendors, because they see the potential for employer liability from using a tool that could produce biased results. So, even though the developer could also be on the hook for any issues, employers are concerned that they’re going to get pulled in, and they are coming up with ways to vet the various tools they use before putting them into widespread use. I think these are some of the steps they’re taking, and I’m sure they’ll take additional steps based on the states they’re in and what the laws and regulations in those states require.
Hera Arsen: So, given some of these risks that employers, companies, and developers are facing, and to wrap up today’s podcast, do you have any final thoughts or takeaways for organizations based on these developments?
Sam Sedaei: I would just say, be vigilant. Companies would do well to monitor the regulations and laws in the states that impact them. A lot of things are in flux, in part because of the conflict between the priorities of the federal government and some state governments in terms of regulating AI. But so long as companies are monitoring these rules and regulations and vetting the tools they’re using or want to use, those would be positive steps to take.
Hera Arsen: Those are some great perspectives. Thank you for your insights, Sam.
Sam Sedaei: Thank you so much, Hera.
Hera Arsen: I also want to thank the audience for being with us today, and please stay tuned for the next episode of The AI Workplace.
Announcer: Thank you for joining us on the Ogletree Deakins podcast. You can subscribe to our podcasts on Apple Podcasts or through your favorite podcast service. Please consider rating and reviewing so that we may continue to provide the content that covers your needs. And remember, the information in this podcast is for informational purposes only and is not to be construed as legal advice.