
In this podcast, Ogletree Deakins attorneys Sam Sedaei and Ben Perry delve into Illinois’s newly enacted artificial intelligence (AI) law, HB 3773. Sam, a member of the firm’s Technology Practice Group who focuses on the use of technology in the workplace, and Ben, who is co-chair of the firm’s Cybersecurity and Privacy Practice Group, discuss the AI law’s implications for employment practices, including the law’s broad definition of AI and its goal to prevent discriminatory effects in employment decision-making. The conversation also explores the challenges faced by employers in complying with the new regulations and the broader trend of state-level AI legislation in the absence of comprehensive federal guidelines.

Transcript

Speaker 1: Welcome to the Ogletree Deakins Podcast, where we provide listeners with brief discussions about important workplace legal issues. Our podcasts are for informational purposes only and should not be construed as legal advice. You can subscribe through your favorite podcast service. Please consider rating this podcast so we can get your feedback and improve our programs. Please enjoy the podcast.

Sam Sedaei: My name is Sam Sedaei, I’m an attorney in the Chicago office of Ogletree Deakins, and I’m here with my friend and colleague, Ben Perry. We’re going to discuss Illinois’s recently enacted artificial intelligence law. I’m going to let Ben introduce himself first.

Ben Perry: Hey everyone, this is Ben Perry. I’m an attorney in Ogletree’s Nashville office. I’m also the co-chair of Ogletree’s Cybersecurity and Privacy Practice Group. And thanks for having me here today, Sam.

Sam Sedaei: Thank you, Ben. I appreciate it. So before we start talking about this AI law in Illinois, I wanted to just give a quick personal note. This is the first Ogletree Deakins Podcast that I am participating in, and it has coincided with my son, Spencer’s, third birthday. So I just wanted to give a quick shout out to him and say happy birthday, and that I love him very much.

Ben Perry: That’s awesome. Happy birthday to your son, Sam. As somebody with a seven-month-old, I know how difficult that can be to juggle things at times, but also how rewarding it is. So I’m sure it’s been a journey.

Sam Sedaei: It certainly has been. It’s been a pleasure. Okay. So let’s talk about this Illinois law. This law, HB 3773, was passed on August 9th of this year, so a few months ago. It doesn’t go into effect until January 1st, 2026. I saw, Ben, that you worked on an article on this law, and I wanted to have a quick chat about it. I can give you a quick introduction to the law and also get your impressions on it. This law is focused on regulating the use of artificial intelligence in employment in Illinois. It amended the Illinois Human Rights Act, the state’s anti-discrimination law, and it is written with the goal of preventing discriminatory consequences of using AI in employment decision-making processes. It prohibits an employer from using AI if it has a discriminatory effect on employees based on protected classes, or if the AI uses zip codes as a proxy for a protected class. So Ben, I wanted to get your thoughts on the law and whether you have seen similar laws coming into effect in other parts of the country.

Ben Perry: Yeah, the trend that we’ve been seeing is these sorts of knee-jerk reaction laws to forms of generative AI. Let’s be clear about what we’re talking about when we say artificial intelligence, because these things have been around for a long time in terms of what artificial intelligence means and how it’s being defined. And I guess that’s a good place to start: how the law defines artificial intelligence here. So I just want to read that definition to frame the broad set of types of software or analyses this could encompass. The definition is any machine-based system that, for explicit or implicit objectives, generates outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments, and it also includes generative artificial intelligence. What are your thoughts on the types of things that definition might encompass? I mean, obviously that’s an extremely broad definition.

Sam Sedaei: It is a very broad definition, and it’s one that I have read at least 10 to 15 times, and I don’t pretend to fully understand its full scope, maybe because of the technical jargon that’s used. That was exactly a question in my mind as well: are we talking about language-model AI devices and machines or programs, or whatnot, or is there something more? And I think the broad understanding of the law is that the definition is intentionally broad. The way it’s written, I think, is designed to be broad, and it’s designed to give the Illinois Department of Human Rights the ability to develop rules and regulations to put meat on the bones, basically. The law does direct the Illinois Department of Human Rights to issue rules and guidelines for implementing the law, and I think the definition is intentionally broad to give the IDHR that leeway. I expect that we are going to see more specific directions from the IDHR on that issue.

Ben Perry: I guess to your question before about other states that have automated decision-making laws, first of all, on a regular basis, there could be as many as a dozen state laws on automated decision-making pending at any one time. California has been working on regulations under the California Consumer Privacy Act. Those have been in the works for a long time, and I think they’re kind of struggling with some of these same issues in terms of how broadly to frame this. But just going back, obviously Illinois has had a law governing job applicant interviews and the use of AI in that process. New York City has had a law on the books for a while requiring bias audits for certain automated decision-making tools. That law specifically has been maligned a little bit as a toothless law that most people have just been ignoring, and I think there have been some efforts to amend it. So we’ll kind of have to keep an eye on that going forward.

Then the other big one that comes to mind is Colorado. Obviously Colorado’s AI law got a lot of attention recently as a more comprehensive AI law. But I think this is the trend that we’re going to see going forward, and it’s definitely not going to slow down. So I think companies just need to look at the big common requirements across all of these laws, not unlike the comprehensive state privacy laws. For those, you’ve got 19 different comprehensive state privacy laws, all with different requirements, all with different nuances, and companies have basically had to create a list: all right, here are all the common requirements, and then here are the specific requirements that we might have to address on a one-off basis. I think that’s probably the approach that’s going to have to be taken here. So, Sam, what are some of the commonalities that you’ve seen between some of these different automated decision-making laws?

Sam Sedaei: Several things. I think one thing that we see across several of these laws is a notice requirement, and Illinois’s HB 3773 has that as well. It requires employers to give notice if the employer is using AI for a variety of employment-related purposes, including recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, and tenure, or other terms, privileges, and conditions of employment. It is vastly broad; it encompasses every employment-related decision that you can think of.

You mentioned the other Illinois AI law. There is that law, the Illinois Artificial Intelligence Video Interview Act, which took effect in 2020, long before there was any renewed discussion about the concerns around AI. That law is very interesting. When you look at it, again, it requires notice to applicants. It also requires applicants to give their consent for the use of AI: in situations where the employer is recording the candidate and is going to analyze the video of the candidate using AI, that candidate needs to give consent. The candidate also needs to have the ability to demand destruction of copies of any video, and if he or she demands that destruction, it needs to occur within 30 days.

And it’s very interesting because, again, the timing of that law is important. In 2020, not many people were talking about AI, but it seems like this was a concern of the Illinois legislature long before this wave of all these new laws. And I’m seeing a lot of similarities between this new AI law in Illinois and the previous one in terms of the notice requirement and the concern about the candidate’s privacy, or what have you. The consent element is not there, which is interesting as well. But I’m seeing a lot of these similar elements in laws from other states too.

Ben Perry: Yeah, it is interesting, because a lot of times with this type of law, there’s generally some impetus for passing it. Back in 2020, I guess there must have been maybe some complaints pertaining to those sorts of tools. I’m not really sure. I mean, I guess that was around the time that COVID struck, so maybe it was passed in reaction to COVID. Maybe there were a lot more of those types of practices going on in the absence of the ability to conduct in-person interviews, who knows? I don’t know if you went back and read the legislative history of that bill, but I’d be interested to do that.

Sam Sedaei: I have not done that. And I think it’s very possible that it was linked to COVID. That was a time when all the interviews were moving online; I joined Ogletree during COVID, and certainly half of the interviews I had with the firm were online. So it’s a very interesting point, Ben. I think that probably had something to do with it.

Ben Perry: Let me ask you this. We’ve been having a lot of clients come to us lately asking about identity verification issues: they will hire somebody, and at some point in the process or after they’re hired, they find out that the person is not who they say they are. Either the real person interviewed and then somebody else started working remotely in that job, or the person had falsified credentials and was basically stealing somebody else’s identity. We’ve seen this a lot, and there have been many public reports of nation-state actors using this to get jobs with companies, sometimes just to make money and send it to their government. In other cases, it’s an attempt to either steal company information or install malware on the system.

And so, with all of these remote interviews happening, our clients are struggling with how to verify people’s identities, how to verify that they are who they say they are. It does seem like some of these tools would be helpful in conducting video interviews, detecting whether somebody’s voice has been digitally altered, to the extent that tools like that exist, or whether somebody’s face has been digitally altered while you’re speaking to them. Those sorts of things seem like they could be, or probably would be, within the scope of some of these laws. So I guess what are your thoughts on that, and not only on consent where that might be required, but on having some sort of opt-out requirement, and whether that would basically prevent this from being an effective way of vetting candidates?

Sam Sedaei: That’s really interesting, Ben. I actually was not familiar with that trend, but I think that makes sense. When employers started to go remote, those kinds of risks, the potential for fraud, come with the territory. So I think, and I want to get your thoughts on the same question, that these AI tools can provide a very effective way to try to filter out those situations where somebody is appearing to be a certain individual but really is not, or is pretending to be somebody else, and AI could help determine that. Now, I’m curious what your thoughts are: are there specific tools you’re familiar with, or apps or software programs, that you think permit employers to engage in that kind of analysis?

Ben Perry: I don’t know of any off the top of my head, but I will say, obviously there are a lot of ways to try to combat that type of thing; I mentioned this as one that could work. But it is also a manner of verification that seems like it would face a lot of challenges in terms of the legal requirements to implement that sort of tool, and other considerations that might apply as well. We put out an article on the FCRA implications under the CFPB’s new circular, which basically says there could be FCRA implications if your automated decision-making tool is trained on data from other people, which is kind of a bizarre and aggressive take on the scope of what the FCRA might cover.

So yeah, there are definitely a lot of challenges with this sort of tool, whether you’re implementing it for vetting candidates or for employee monitoring, or whatever, because obviously there are employee monitoring considerations as well. So if you’re talking about a tool that is monitoring employee productivity, or, one example that we’ve seen a lot, telematics in vehicles: there is some mechanism in a company vehicle to record speeding, acceleration, and harsh braking, usually paired with an inward-facing camera that will capture video before and after events to figure out if the employee is distracted or drowsy.

Sometimes those cameras are even actively scanning the employee’s face and triggering an event if they determine that the employee is distracted, whether looking away from the road for an extended period or texting, or whatever it may be. And those sorts of technologies implicate a variety of laws: the automated decision-making laws we’ve been talking about, employee monitoring laws, potentially biometrics laws, and that’s just the state laws. There’s also regulatory guidance that is implicated by that.

So yeah, I mean there’s kind of a lot of different contexts in which this could come up and the considerations are going to be different depending on the context. So I guess what kind of context have you seen? What are clients coming to you for?

Sam Sedaei: It’s interesting. So one of the questions that I’ve been fielding recently is from employers coming to us and asking about the scope of this law. Now, it hasn’t gone into effect and, as I mentioned, it will not go into effect until January 1st, 2026, but clients are very actively thinking about putting in place AI tools that they could use in the recruitment process, and also to monitor and measure performance, and they don’t want to invest a substantial amount of money only to have it be against the law starting a little over a year from now. So they’re trying to really understand the scope of this law, and of course we have to tell them that we are still waiting on the IDHR to add a few more details. But the law, the way it’s written, is pretty broad, and it puts the onus on the employer to determine whether, however they are used, the AI tools they have in mind would possibly impact various protected groups. And there are a lot of protected groups listed under the Illinois Human Rights Act.

An observation I want to make, which I think you already know, is that Illinois might be the 34th or 35th state to have passed some kind of law regulating the use of artificial intelligence. And I think we are seeing these state actions because there is a lack of federal action. Now, I understand that there are dozens of bills, possibly hundreds at this point, being considered in Congress, but it’s a very slow process, and we don’t have anything remotely similar to, for example, the EU AI Act, which is a comprehensive law governing the use of AI. In the absence of federal action, we’re left with states coming up with these laws. But this poses a unique challenge to employers that are national or international companies and want to establish uniform processes for vetting candidates and monitoring performance. Now they have to keep thinking about specific states: if they have five employees within one state, they have to either forgo the AI usage altogether or create a system that is designed just for Illinois.

So I really do hope that we get guidance, some law, some regulation, something at the federal level, which could really try to add a little more uniformity. But I think until that happens, we’re going to continue to see these states pass their own laws and then employers will have to deal with that.

Ben Perry: We’ve been waiting on a federal privacy bill for a long time, and we’ve had a couple different iterations proposed in the past couple of years, but I’m not holding my breath for anything at the federal level, whether privacy or AI related. For now, I think we’re going to be stuck with the patchwork of state laws.

Sam Sedaei: I agree. Well, Ben, I think this has been a really interesting discussion. Do you have any kind of final thoughts on the Illinois AI law?

Ben Perry: Yeah, I guess the last couple of things I was thinking about: the first is that the law doesn’t formally require bias or impact assessments, but I don’t see how a company would be able to comply with the law if somebody hasn’t conducted one. Presumably that’s going to be on the developer of the tool, and I suspect what will happen is the developer will have conducted its own, or maybe third-party, bias or impact assessments, hopefully on an annual basis, and will provide those to companies to show that its processes aren’t resulting in a disparate impact on those protected groups you mentioned. I mean, what are your thoughts on that?

Sam Sedaei: That’s an interesting topic. I don’t know that this law is going to impose that kind of requirement on the developers of AI tools. The reason I think that is that there is a separate bill that has been proposed, HB 5116, called the Automated Decision Tools Act. If it passes, it will require deployers to perform an impact assessment for employment decisions that is accessible to the IDHR. In the absence of that, I really do think that this AI law that we’ve been discussing, HB 3773, puts the onus on the employers.

Ben Perry: Yeah. And I don’t disagree with that. I’m not saying that the law requires deployers… Or sorry, developers to do that. I just think that they’re probably going to do that in order to better market their product to employers, and make employers more comfortable and more likely to purchase the product if employers don’t think that they have to basically do their own sort of assessments and they can maybe rely on the developer’s assessment if it’s a third party.

Sam Sedaei: I agree with you 100%. I think that if the developers voluntarily do that, it will take that responsibility off the shoulders of the employers, and the employers will be more likely to use those tools. The employers hopefully will understand that ultimately they will be liable under the law, but having that kind of assurance from the developers, that these protections are in place to ensure, to the extent we can, that there’s no discriminatory effect, could go a long way toward reassuring employers and permitting them to use those tools. So I agree with you 100% on that one.
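To make the disparate-impact point above concrete, here is a minimal sketch of the kind of selection-rate check a bias or impact assessment might include. The hiring data and group labels are hypothetical, and the four-fifths (80 percent) benchmark comes from the EEOC’s traditional adverse-impact rule of thumb, not from HB 3773, which does not prescribe any particular test.

```python
# A minimal, illustrative disparate-impact check. The data, group labels,
# and the 0.8 threshold (the EEOC's traditional "four-fifths rule") are
# assumptions for illustration; HB 3773 does not prescribe this test.
from collections import Counter

# Hypothetical hiring outcomes: (protected_group, was_selected)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in outcomes)             # applicants per group
selected = Counter(group for group, hired in outcomes if hired)  # hires per group

# Selection rate per group, with the highest rate as the comparison baseline.
rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / highest  # adverse impact ratio vs. the highest-rate group
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

A real assessment would cover each protected class, use actual applicant-flow data, and typically add statistical significance testing; the sketch only shows the basic arithmetic.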

Ben Perry: Well, it’ll be interesting to see where this goes in the future, but we’ll definitely keep this conversation going.

Sam Sedaei: That sounds good. Well, thank you so much for joining, it has been a pleasure, and have a great rest of the day.

Ben Perry: Yeah, we’ll chat soon. Take care, Sam.

Announcer: Thank you for joining us on the Ogletree Deakins Podcast. You can subscribe to our podcast on Apple Podcasts or through your favorite podcast service. Please consider rating and reviewing so that we may continue to provide the content that covers your needs. And remember, the information in this podcast is for informational purposes only and is not to be construed as legal advice.
