
In this podcast recorded at Ogletree’s recent Corporate Labor and Employment Counsel Exclusive® seminar, Kristin Higgins (office managing shareholder, Dallas) and Jenn Betts (office managing shareholder, Pittsburgh) discuss the use of artificial intelligence (AI) by employers, including in hiring and recruiting. Jenn, who is co-chair of Ogletree Deakins’ Technology Practice Group, and Kristin provide an overview of California’s newly effective regulations prohibiting employers from using an “automated decision system” to discriminate against applicants or employees on a basis protected by the California Fair Employment and Housing Act. Kristin offers an overview of the consumer-focused Texas Responsible Artificial Intelligence Governance Act, which goes into effect in January. They conclude the discussion with pointers for employers, such as forming workgroups to evaluate new AI tools before deploying them in the workplace.

Transcript

Announcer: Welcome to the Ogletree Deakins podcast, where we provide listeners with brief discussions about important workplace legal issues. Our podcasts are for informational purposes only and should not be construed as legal advice. You can subscribe through your favorite podcast service. Please consider rating this podcast so we can get your feedback and improve our programs. Please enjoy the podcast.

Kristin Higgins: Hi there. I’m Kristin Higgins, the office managing shareholder of Ogletree’s Dallas office, and I’m here with Jenn Betts, who manages our Pittsburgh office, co-chairs our Technology Practice Group, and is the co-chair of our internal firm Innovation Council. Our topic for today is artificial intelligence in hiring and recruiting. We’re happy to be here with you. Obviously, this is a timely topic. There have been lots of advancements in AI technology that everyone is familiar with, but it’s one of those areas in our world that has been rapidly evolving. Maybe a few years ago, 20 percent of clients used AI for HR purposes, and now we understand that more than half of our clients, Jenn, are using AI specifically for HR purposes. So, there’s a lot to dive into here. We’re actually broadcasting live from our Corporate Counsel Exclusive in Colorado Springs, where we just had a session on this, and everybody had really good ideas and was interested in all these updates, including California’s new law, which comes into effect just days from now. What do employers need to know about that?

Jenn Betts: Yeah. Thanks, Kristin, and hi, everybody. As you may or may not know, California is going to be one of the first states in the United States with a law regulating the use of artificial intelligence by employers. California’s law is, broadly, an anti-discrimination law. It provides that employers using what it calls automated decision systems, or ADSs, cannot use these tools in a way that discriminates based on protected traits. We already know that. Laws across the country, at both the state and federal levels, already prohibit using any kind of selection device or employment tool in a way that discriminates, but California is making what we already know a bit more explicit with the language of its amendments to the Fair Employment and Housing Act. There are a couple of elements of California’s new law that are interesting and that I think employers should have on their radar.
Number one, there’s a record-keeping obligation, which requires employers to maintain documents, records, and information created by these AI systems for four years. In the abstract, that sounds easy, but it’s actually harder than it sounds because of the complexity of these systems. Two, and perhaps more interestingly, California’s new law sets up what looks like a new defense: California is saying that evidence of anti-bias testing, or of similar proactive efforts to avoid unlawful discrimination, is relevant to a claim of employment discrimination and to any defense of that claim. We’ll see how this law develops and how it plays out in the litigation process, but what California is basically saying to employers is, “Hey, look, if you’re using these tools, you really should be auditing them for bias. If you’re not, that’s going to be evidence used against you if you get sued on a disparate impact theory, which is the most common theory in this space.”
That’s something new. We don’t have that anywhere else in the United States right now. Employers that are doing that auditing and putting proactive measures in place can have a different level of comfort in their use of these tools in California. Interesting developments. Today is Friday, September 26th, and we have a webinar on September 29th, which will be available on the firm’s website, where we’ll go into a lot more detail about California’s new law. California is not the only state getting into the mix with respect to regulation of AI, though. Kristin, you’re in Dallas. A little bit of a curveball, but Dallas is part of Texas, which has a new AI law going into effect soon, right?

Kristin Higgins: Yeah. As you said in our session, you did not have that on your bingo card.

Jenn Betts: I didn’t, I didn’t.

Kristin Higgins: Well, Texas does have a new AI law that was passed this summer and will go into effect in January. Perhaps unsurprisingly, the angle from which the Texas legislature is coming at this issue is different from California’s. Nowhere in our law is there any discussion of disparate impact. Like many states, Texas is trying to balance innovation, letting companies innovate, while also protecting consumers; that’s really where the legislature was coming from. The law has a broad definition of AI, and it prohibits certain uses that are unsurprising, like non-consensual deepfakes and other things that should not be used in the consumer world. They also go a step further. Texas, maybe also not on the bingo card, has a biometric law, and this new AI law amends it to put additional parameters and boundaries on the use of things like facial recognition technology for consumer purposes.
Texas also has a safe harbor provision, a bit unlike California: if companies are following the National Institute of Standards and Technology (NIST) AI Risk Management Framework, that can provide some safe harbor. An additional interesting aspect of the Texas law is what it calls a regulatory sandbox program. Under some sort of supervision (TBD what that looks like), companies can experiment with new types of AI before putting them out to consumers. We’re a little off-topic from our recruiting and hiring uses of AI, but I think the breadth and variation of all of this is very interesting. Illinois has also passed a law, and there are more states coming.
I joked that you can look out for a new state law map on this issue, because the variation is coming for all of us, so everybody, stay tuned for those updates. But we thought what might be most helpful to listeners would be to talk about how to develop governance around AI use in the workplace at your companies, both for your operations and for your HR teams. Jenn, I know you’ve got some thoughts on that.

Jenn Betts: Yeah. Thanks, Kristin. We spent time in our program talking about this exact topic and had some interesting discussions with the attendees about the different governance approaches that organizations are taking. It’s not one size fits all; it depends on the size of your company, how you’re using artificial intelligence, and how your operations are structured. But there are some consistent best practices that we see time and time again, both with our clients and in what we deploy here at Ogletree as we develop our own governance strategies related to artificial intelligence. One of the things we talked about: workgroups or committees. A lot of organizations today, in 2025 and going into 2026, have developed internal committees. We call ours the Innovation Council here at Ogletree. Different organizations have different names. The favorite one that I’ve heard, which I mentioned in the program yesterday: a client calls their group the AI Rangers, which I think is adorable.

Kristin Higgins: Love it.

Jenn Betts: Different titles, but the same purpose. It’s a cross-functional group of stakeholders with different specialty areas, usually including HR, legal, operations, and IT, and they get together and do a couple of different things. They evaluate new AI tools before they’re deployed in the organization: “Is this something that is going to be useful for our organization? Do we have concerns about the way it might create biased or wrong decisions?” They pressure test it; they vet it; they develop policies like an AI policy or a responsible use policy. They really help manage the organization’s day-to-day operations related to artificial intelligence.

Kristin Higgins: Can you explain to the listeners the difference between an AI policy and a responsible use policy? I think those terms get thrown around and used interchangeably.

Jenn Betts: Yeah, absolutely. They don’t have to be separate policies; some of our clients combine them. But an AI policy, or AI use policy, is really a prescriptive guideline about what your organization’s practices are concerning AI: “Here is the list of approved AI tools. Here’s how you’re allowed, and not allowed, to use them. Here’s what reporting looks like for violations of our policies.” That’s an AI use policy. A responsible use policy is more like a set of foundational principles about how, as an organization, you’re approaching artificial intelligence. Things like transparency, disclosure, and fairness, those more conceptual principles, are put together in a policy so that everybody knows, “Here’s how our company is approaching this topic.”

Kristin Higgins: One of the things that came up in our session was that if your organization does not have a stance on AI or an idea of what its philosophy is, now is the time to have those conversations and figure that out. Are we going to be on the more conservative side and hold back? Are we going to be on the more innovative, risk-taking side and be at the forefront? I think the time has passed for staying on the sidelines without a philosophy.

Jenn Betts: I think you’re totally right. We had a guest speaker on our panel who is in-house at a large organization, and he made that exact point. The reality is that if you don’t communicate your approach, employees don’t know what the expectations surrounding their use of AI are, and they’re going to misuse these tools, and that’s a problem.

Kristin Higgins: Multiple people said that even if you don’t have a policy, or you think your employees aren’t using AI, they absolutely are, so you’re better off having your philosophy and your parameters out there and visible to them.

Jenn Betts: A couple of other things that we talked about from a governance standpoint. One is ongoing monitoring: auditing both for disparate impact and to validate that the tool is accomplishing your goals. You typically spend a lot of money contracting with a vendor and deploying these tools, and sometimes they just don’t work the way you want them to, so gut-checking that on a regular basis is another reason to do a different type of auditing. A lot of organizations are also rolling out training programs, both on basic AI literacy (“Let’s all get on the same page with some foundational knowledge on this topic”) and on how to use these tools appropriately: still having human review, understanding the concept of bias, and understanding the issues related to hallucinations, what they look like, and what they mean. There are a lot of other specific practices and governance approaches, but at a high level, those are the things that a lot of organizations are looking at doing and deploying right now.

Kristin Higgins: I think the group that attended came away with a nice, big to-do list of things to think about and potentially implement in their organizations. Stay tuned to the Ogletree webinars, podcasts, and blogs, because we try to stay up on these things and deliver practical information that you can use in your practice or your organization.

Jenn Betts: Thanks, everybody.

Announcer: Thank you for joining us on the Ogletree Deakins podcast. You can subscribe to our podcast on Apple Podcasts or through your favorite podcast service. Please consider rating and reviewing so that we may continue to provide the content that covers your needs, and remember, the information in this podcast is for informational purposes only and is not to be construed as legal advice.
