
In part one of this episode of our Defensible Decisions podcast, Scott Kelly (shareholder, Birmingham) sits down with Lauren Hicks (shareholder, Indianapolis/Atlanta) to unpack how AI is actually used across the talent lifecycle—and why “human in the loop” isn’t a compliance shield. The speakers break down what a credible, privileged bias audit entails, how to pair fairness testing with effectiveness validation, and the practical governance steps employers can take to manage a fast-evolving, patchwork regulatory landscape.

Transcript

Announcer: Welcome to the Ogletree Deakins podcast, where we provide listeners with brief discussions about important workplace legal issues. Our podcasts are for informational purposes only and should not be construed as legal advice. You can subscribe through your favorite podcast service. Please consider rating this podcast so we can get your feedback and improve our programs. Please enjoy the podcast.

Scott Kelly: Hello, everyone, and welcome to Defensible Decisions. I’m Scott Kelly, an employment lawyer at Ogletree Deakins. And for the last 25 years, I’ve helped employers design and defend fair, effective workforce systems across hiring, pay, promotion, retention, and now artificial intelligence systems using legal insights, analytics, and audit-ready documentation. Each episode is going to blend employment law developments with rigorous workforce analytics, so your decisions are defensible, compliant, and effective. We translate evolving enforcement priorities and regulations into practical steps you can apply before the lawsuit or the investigation arrives. You’re going to hear from attorneys, experts such as data scientists, labor economists, social scientists, and hopefully some governmental officials. Together, we’ll cover recruiting and selection, pay equity, systemic discrimination, DEI compliance post-Students for Fair Admissions, artificial intelligence bias and audits, and federal contracting and reporting. Subscribe to Defensible Decisions. This podcast is for informational purposes only and does not constitute legal advice. Listening does not create an attorney-client relationship. The opinions expressed are those of the speakers and do not necessarily reflect the views of their employers or clients.

Today, we’re digging into one of the most pressing issues for human resources and employment law professionals: the use of AI across the talent life cycle. From sourcing to promotion, artificial intelligence promises efficiency, but it also brings real compliance risk. I’m joined today by Ogletree’s AI bias lead, my partner from Indianapolis, Lauren Hicks, who advises employers on AI governance and defensibility. And as I mentioned, she’s leading a team of attorneys and data analytics professionals here at Ogletree to assist our clients with privileged AI bias audits.
Lauren, thanks for being here.

Lauren Hicks: Thanks, Scott, for having me today.

Scott Kelly: Absolutely. Let’s start at the top. When you’re talking to different employers, where are you seeing them actually using AI? And why should HR and in-house counsel be concerned or be paying attention to this issue?

Lauren Hicks: AI and algorithmic tools are now embedded throughout workforce management. Employers might use them for sourcing and ranking candidates, scoring resumes, screening assessments, automated interviewing, onboarding workflows, performance reviews, development and promotions, scheduling, or even retention analytics. We see tools doing things like summarizing performance-related information or listening in on customer service calls and providing a score. The appeal of the technology is speed and consistency, but the legal risk is that these tools can systematically disfavor or favor protected groups. Even when a human signs off at the end, regulators and courts might treat AI-influenced scores, rankings, or screens as part of the decision itself. And that means all the obligations that attach to normal employment decision making, including validation, bias testing, notice, and transparency, can be triggered.

Scott Kelly: A quick stop here, because when I’m talking to clients, I hear a lot of, “Well, we really don’t need to be worried because we’re not letting the technology do it all. We’re not really using AI.” Can you tell us, when we say AI in this context, what actually matters?

Lauren Hicks: That’s a really great point. We do hear clients often make comments like that. If software processes employment-related data and produces a piece of information or an action that helps inform an employment action, whether that’s a score, a recommendation, a ranking or stack ranking, a classification, or really any other output, it’s functionally an AI or automated decision-making tool. Some basic calculators or simplistic spreadsheets doing manual tasks may not be in scope of all of the regulations, certainly, but most modern software that sorts, filters, prioritizes, flags, or predicts will be. Any type of rules-based system, machine learning, or the smart features that are so appealing in recruiting is generally going to be within the concept of AI, which is a loose term.

I think many people think that AI means purely generative-type tools that have gotten a lot of attention in the last year or two, but that’s too narrow a scope. Most employment software running even basic algorithms will be in scope for many of the AI-type regulations. The safest posture is to govern the technology based on what the tool does, not what it’s called, and not to give too much weight to the technicality of whether it’s generative AI, agentic AI, or within some other definition of AI. Think about it more as software technology tools.

Scott Kelly: Another thing I’m hearing a lot is, “Well, we have human interaction in it.” One of the phrases I hear is “human in the loop.” But that doesn’t automatically protect employers, does it, if you have a human helping this technology?

Lauren Hicks: That is exactly right, Scott. Human review is important, but it is not a magical shield, and I do think there’s a bit of an impression that it can be. If the human relies on AI or technology outputs, or if the AI narrows a pool or provides information, that can be enough to bring anti-discrimination obligations into play and open the door to some risk. AI in human resources used to feel a little bit like the Wild West. This technology is not really brand new; it’s been creeping up on the applicant side for quite a while, almost a decade. There’s been lots of excitement, very little structure, and everyone has been experimenting in different directions. I think that era is over because regulators now expect employers to treat AI with the same seriousness as any other employment practice, policy, or system.

Scott Kelly: To me, it feels like for employers in the current environment, where there’s not much oversight on a lot of different issues, but particularly with regulating artificial intelligence systems, the regulatory picture is a bit of a moving target. It depends on where you have locations or where you’re doing certain activity, because there’s not, unfortunately, one consistent place to go to govern what’s going on here. What’s your big-picture take on what legal teams and HR or talent acquisition professionals need to have on their radar about regulatory compliance?

Lauren Hicks: Scott, you hit the nail on the head. I mean, this is the classic and highly dreaded patchwork that we’re seeing pop up, right? When it comes to employment laws, employers like consistency across the board and high predictability, and because there are no federal regulations here, that is not what we’re seeing. The landscape is expanding quickly and unevenly. We’ve got jurisdictions like California, New York City, Colorado, Illinois, and the European Union that have either adopted or are moving toward requirements around bias testing, notices, transparency, and in some cases even public summaries. The frameworks vary a bit in scope and timing, and they interact with longstanding anti-discrimination, privacy, and consumer reporting laws. We’re seeing agencies issue guidance and bring early enforcement actions, and private litigants are testing theories that treat algorithmic inputs as part of the employment decision, even when there is human review. That’s very much an area where everyone should stay tuned, as it’s new and developing in litigation now. Penalties can range from administrative fines to mandated disclosures and restrictions on use. Some jurisdictions claim extraterritorial reach, and we sometimes see short implementation windows. Some do provide a longer implementation window, but all of these things create real complexity for multi-state and global employers.

Scott Kelly: No easy button here. If you’re operating across jurisdictions, you really need some kind of compliance baseline monitoring of your own because this is just a fast-moving target, right?

Lauren Hicks: That is right. Employers need a governance program that can flex to different rules without kind of rebuilding from scratch every time a new jurisdiction acts because we are going to keep seeing more and more variation in the laws that do arise.

Scott Kelly: I know you’re working on a lot of these bias audits, but one thing we try to do at Ogletree is explain what all of this really means in a practical way. Can you break down for the listeners what a credible bias audit might look like?

Lauren Hicks: Absolutely. Generally speaking, at a high level, what you want to look at is adverse impact or analytical testing at each step of the funnel. Of course, that funnel depends on what you’re using the tool for, but the most common usage of these tools is in applications and the hiring process. As an example, you would look at how scores, rankings, interview selections, assessment pass rates, or hiring outcomes differ across protected groups. If you find statistically significant disparities, you want to treat that as a warning light and then dig deeper into the model’s training and reference data, the features that might act as proxies for protected traits, how the system and its features are engineered, any cutoffs or rules that have been applied, and any unique settings or application by role or location, since we do see quite a bit of variation in how recruiters apply a tool. Equally important is understanding how the humans involved, whether it’s recruiters or managers, actually use the tool and whether there’s significant variation. If there is a potentially discriminatory impact, then we need to assess business necessity and explore whether there are less discriminatory alternatives, or whether there’s some other investigative path we can take that explains what we have found.

At the end, that could lead to certain fixes like adjusting tool thresholds, removing or swapping certain features, retraining reviewers who are actually using the output, or modifying where in the workflow the tool is used. Things like that can really help make sure that it’s performing equitably.
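
To make the funnel-stage testing described above more concrete, here is a minimal, hypothetical sketch of the kind of adverse impact screen an analytics team might run at a single stage. The group labels, counts, and four-fifths benchmark are illustrative assumptions only, and a screen like this is not a substitute for the privileged, counsel-directed analysis discussed in this episode.

```python
# Minimal sketch of a funnel-stage adverse impact screen (hypothetical data).
# Not legal advice and not a substitute for a privileged, counsel-directed audit.
from scipy.stats import fisher_exact

# Hypothetical pass/fail counts at one funnel stage (e.g., a resume screen).
groups = {
    "group_a": {"selected": 180, "rejected": 220},  # reference group
    "group_b": {"selected": 110, "rejected": 290},
}

ref = groups["group_a"]
ref_rate = ref["selected"] / (ref["selected"] + ref["rejected"])

for name, g in groups.items():
    rate = g["selected"] / (g["selected"] + g["rejected"])
    impact_ratio = rate / ref_rate  # the four-fifths rule flags ratios below 0.80
    # Fisher's exact test on the 2x2 table: is the disparity statistically significant?
    _, p_value = fisher_exact(
        [[g["selected"], g["rejected"]], [ref["selected"], ref["rejected"]]]
    )
    print(f"{name}: selection rate {rate:.1%}, "
          f"impact ratio {impact_ratio:.2f}, p-value {p_value:.4f}")
```

A flag from a screen like this is a prompt to dig into training data, features, cutoffs, and human usage, as described above, not a conclusion on its own.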

Scott Kelly: You’ve emphasized effectiveness auditing as well. Is that different from a bias audit? Are they all kind of rolled in together? Help me understand that.

Lauren Hicks: Those are distinct concepts, but it is critical that they work hand in hand. Effectiveness asks whether the tool actually improves decision making in the real-world context. Does it perform as advertised? Does it beat your existing process? Is it stable across roles, teams, sites, and time? In other words, is this tool doing what we are paying it to do? You benchmark model outputs against structured human evaluations. You can look at post-decisional outcomes like performance and retention. You can examine quality and error rates and validate the drivers of the recommendations to make sure they’re predictive of actual success. It is critical to understand that fairness of the tool and effectiveness of the tool cannot be separated. A system that’s fast but unevenly accurate across protected groups, or that relies on features correlated with protected traits, creates legal and operational risk. Evaluating accuracy, stability, and business impact alongside adverse impact or bias metrics helps ensure that better does not ultimately mean cheaper but biased.
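
As a rough illustration of pairing effectiveness with fairness, the sketch below checks whether a tool’s scores predict a real outcome overall and whether that predictive accuracy holds within each group. The data, group labels, and choice of metric are hypothetical; an actual validation study would be considerably more involved.

```python
# Sketch pairing effectiveness with fairness: does the tool's score predict a
# real-world outcome, and is that accuracy stable across groups? Hypothetical data.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.DataFrame({
    "tool_score": [0.9, 0.7, 0.4, 0.8, 0.3, 0.6, 0.2, 0.85],
    "succeeded":  [1,   1,   0,   1,   0,   1,   0,   1],    # post-decision outcome
    "group":      ["a", "a", "a", "a", "b", "b", "b", "b"],
})

# Overall predictive validity of the tool's score for the outcome.
print("overall AUC:", roc_auc_score(df["succeeded"], df["tool_score"]))

# The same check within each group: uneven accuracy across groups is both an
# effectiveness problem and a fairness problem.
for name, sub in df.groupby("group"):
    print(f"group {name} AUC:", roc_auc_score(sub["succeeded"], sub["tool_score"]))
```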

Scott Kelly: And maybe I’m oversimplifying it, and if I am, please correct me, but it sounds like the thing to think about is, does this technology work? Who does it work for? And does it work for the right reasons?

Lauren Hicks: No, that’s exactly right.

Scott Kelly: All right. I’ve heard you say a lot that you need to lead with the privilege when conducting these audits. What does it look like in practice to do that?

Lauren Hicks: In practice, that means beginning your technology bias evaluations, testing, and audits through counsel. And that is really critical, Scott. We often see analytics or testing services offered by the vendor that provides the tool, and that can really open the door to risk. Legal should scope the questions, should be directing the work entirely, and should be engaging the right experts. Once the privileged review identifies any issues and the organization takes whatever remedial action it needs to, then, if necessary, you can prepare non-privileged regulatory summaries as a separate project. There’s obviously a lot of overlap now with these regulatory obligations popping up, and employers want to meet them. But taking the approach of doing consistent, routine, privileged audits and then, as necessary, doing narrowly tailored regulatory compliance audits that are separate from that privileged work will help preserve privilege over the detailed analyses while still meeting notice and transparency obligations.

Scott Kelly: If you’re being careful here, it’s more than just a belt-and-suspenders type of approach. Really, it’s thinking strategically about creating the right space under the privilege to diagnose and fix problems, and then determining where it’s appropriate to do some of this analysis for regulatory compliance, where there are obligations to be more transparent about things. Is that right?

Lauren Hicks: Yes. A privileged posture helps protect the company. It encourages candid assessments, faster remediation, and cleaner lines between internal diagnostics and public-facing compliance documents.

Scott Kelly: I know we kind of started off this session talking about some of the misconceptions and things that we’re seeing. Do you think it’s fair to say that some organizations are underestimating how much they’re actually using AI? And if you agree with me, is there anything employers can do to get over that hurdle?

Lauren Hicks: This is such a critical step. It sounds very simple, but sometimes it’s not, because oftentimes organizations haven’t yet created a centralized structure to monitor technology. You’re absolutely right: most organizations use more AI and algorithmic tools than they realize. It might be sourcing databases that score candidates, resume parsers, ranking engines, pre-hire assessments, interview technology, predictive retention models, HR analytics dashboards, and many other types. I have seen single employers with eight to 12 technology add-ons just on their applicant tracking system alone, each adding some unique functionality that would fall within the scope of AI technology that needs to be reviewed. It is absolutely critical for governance to maintain a living inventory that covers every tool, the tool’s purpose, where it’s deployed, the data sources, the ownership and vendor details, whether there are human-in-the-loop steps, which jurisdictions it’s in use in, and things like that.

One takeaway that anyone listening should really focus on is that every employer should have one inventory that serves as the backbone of governance. That’ll help you manage vendor oversight, incident response, such as a data privacy breach, and other regulatory compliance or disclosures.
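
One way to picture the living inventory described here is as a structured record per tool that governance, vendor management, and incident response can all query. The field names in the sketch below are illustrative assumptions, not a required schema.

```python
# Illustrative structure for one entry in a living AI-tool inventory.
# Field names are hypothetical; the point is one consistent, queryable record
# per tool that governance, vendor management, and incident response share.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AIToolRecord:
    name: str
    purpose: str                  # e.g., "scores and ranks applicants in the ATS"
    lifecycle_stage: str          # sourcing, screening, promotion, retention, ...
    owner: str                    # internal business owner
    vendor: str
    data_sources: List[str]
    human_in_the_loop: bool
    jurisdictions: List[str]      # where the tool is actually deployed
    last_privileged_review: Optional[str] = None  # date of most recent review
    notes: str = ""

inventory = [
    AIToolRecord(
        name="ExampleRanker",     # hypothetical add-on to an applicant tracking system
        purpose="scores and ranks applicants in the ATS",
        lifecycle_stage="screening",
        owner="Talent Acquisition",
        vendor="Example Vendor, Inc.",
        data_sources=["resumes", "application form"],
        human_in_the_loop=True,
        jurisdictions=["NYC", "IL", "CO"],
    ),
]
```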

Scott Kelly: All right. Well, since we’re kind of talking about some of the guardrails here, what should organizations be locking down related to bias in their AI inventory?

Lauren Hicks: You want to write and enforce plain-language policies that cover notice and consent where applicable, meaningful human review and appeal, data minimization and retention, and security features. On the contracting side, consider whether your vendor agreements address audit rights, data access for testing, security parameters, documentation rights, and remediation obligations if problems are found. You may want clear triggers for vendor recalibration and commitments to cooperate in bias and effectiveness testing. Although, again, a quick caveat: it is not adequate to treat vendor testing as standalone proof that you have met your obligations. You really want to make sure you’re looking at things from an independent and internal point of view with your real data. And then also make sure you have a good understanding of any transparency or other requirements you might have to meet for regulators or that may be necessary for litigation.

Scott Kelly: All right. Well, it sounds like governance isn’t just going to be internal, then. You need to be looking at your supply chain, also?

Lauren Hicks: I think that’s right. One way to think about it is that your compliance posture is only as strong as the weakest link in your vendor chain. Managing these things, while a little bit tedious, can be very critical to risk management.

Scott Kelly: All right. One last thing before we wrap up this episode. I know we’ve probably got a lot more to cover. I’m wondering if you’d be willing to come back and talk with me again on some of the other issues–

Lauren Hicks: Absolutely.

Scott Kelly: –about AI bias audits. Sorry. Great. I was hoping you’d say yes.
But before we go, something that is standing out to me is . . . if I’ve got a tool that I want to roll out and I do this privileged audit that you’re talking about, it doesn’t seem like it’s a one-and-done thing, is it?

Lauren Hicks: You’ve hit the nail on the head. Ongoing monitoring is the key that everyone needs to think about and adapt to. AI is not necessarily something to fear, Scott; it’s something to manage. Additionally, these regulatory frameworks either require or strongly encourage, if you will, periodic reviews. Create defined-interval testing, which might be quarterly at your organization, twice a year, or maybe annual, and then supplement it with trigger-based reviews if there are material changes to the tool or to workforce composition, maybe after an acquisition or something like that, or if there’s a change in the deployment context.
If adverse impact crosses a threshold or effectiveness drifts, then you can adjust features, look at options to retrain reviewers, or start working with the vendor to recalibrate, depending on the circumstances and what might be appropriate. Legal should continue to manage that process so that analytical iterations, remediations, and any discussion of risks remain covered within that privilege.
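
A simple way to operationalize the interval- and trigger-based monitoring described here is a routine check that flags a tool for re-review when a metric crosses a threshold or a material change occurs. The thresholds and function below are hypothetical placeholders; counsel would set and interpret them within the privileged process.

```python
# Sketch of interval- and trigger-based monitoring checks. Thresholds are
# illustrative placeholders, not legal standards; counsel should set and
# interpret them as part of the privileged review process.

FOUR_FIFTHS_THRESHOLD = 0.80    # assumed adverse impact ratio floor
MAX_EFFECTIVENESS_DRIFT = 0.05  # assumed tolerated drop versus validation baseline

def review_triggers(impact_ratio: float, baseline_auc: float,
                    current_auc: float, material_change: bool) -> list:
    """Return the reasons, if any, that a tool should be pulled back for review."""
    reasons = []
    if impact_ratio < FOUR_FIFTHS_THRESHOLD:
        reasons.append("adverse impact ratio below four-fifths benchmark")
    if baseline_auc - current_auc > MAX_EFFECTIVENESS_DRIFT:
        reasons.append("effectiveness drift versus validation baseline")
    if material_change:
        reasons.append("material change to the tool, workforce, or deployment context")
    return reasons

# Example: a quarterly check that would flag this tool for re-review.
print(review_triggers(impact_ratio=0.72, baseline_auc=0.78,
                      current_auc=0.70, material_change=False))
```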

Scott Kelly: This has been really helpful. I’m hoping in our next episode on AI bias audits, maybe we could talk a little bit about proper documentation, maybe go through a hypothetical or two, and then some practical steps and things to avoid. Does that sound like a good agenda for the next time we get together?

Lauren Hicks: Let’s reconvene, and we’ll cover those topics.

Scott Kelly: All right. Well, thank you, Lauren. I really appreciate your insights here. They’ve been eye-opening for me and hopefully for our listeners. For the listeners, thank you all for being here and listening to Defensible Decisions. For more on employment law and workforce analytics, follow this show on the Ogletree Deakins podcast page. And please note it’s for informational purposes only, and it does not constitute legal advice. But thanks again for joining us and stay tuned for episode two.

Announcer: Thank you for joining us on the Ogletree Deakins podcast. You can subscribe to our podcast on Apple Podcasts or through your favorite podcast service. Please consider rating and reviewing so that we may continue to provide the content that covers your needs. And remember, the information in this podcast is for informational purposes only and is not to be construed as legal advice.
