
Quick Hits

  • Using automated technology to make workforce decisions presents significant legal risks under existing anti-discrimination laws, such as Title VII, the ADEA, and the ADA, because bias in algorithms can lead to allegations of discrimination.
  • Algorithmic HR software is uniquely risky because, unlike human judgment, it amplifies the scale of potential harm. A single biased algorithm can affect thousands of candidates or employees, dramatically increasing liability exposure compared with biased decisions made by individual humans.
  • Proactive, privileged software audits are critical for mitigating legal risks and monitoring the effectiveness of AI in making workforce decisions.

What Are Automated Technology Tools and How Does AI Relate?

In the employment context, algorithmic or automated HR tools are software systems that apply predefined rules to data to assist with various human resources functions. These tools range from simple rule-based formula systems to more advanced generative AI-powered technologies. Traditional algorithms operate on fixed, explicit instructions to process data and make decisions; generative AI systems, by contrast, can learn from data, adapt over time, and make autonomous adjustments without being limited to predefined rules.
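To make the distinction concrete, below is a minimal, hypothetical sketch of the kind of fixed-rule scoring a traditional screening tool might apply. Every skill, weight, and cap here is invented for illustration; actual vendor logic varies and is typically proprietary. The point is that static rules always produce the same score for the same inputs, whereas a generative system's outputs can shift as the model absorbs new data.

```python
# Hypothetical illustration of a fixed-rule screening algorithm.
# All rules and weights below are invented for this example.

REQUIRED_SKILLS = {"python", "sql"}          # assumed job requirements
PREFERRED_SKILLS = {"tableau", "airflow"}    # assumed "nice to haves"

def score_candidate(skills: set[str], years_experience: float) -> float:
    """Score a candidate using fixed, predefined rules.

    The same inputs always yield the same score -- unlike a
    generative model, nothing here learns or drifts over time.
    """
    score = 0.0
    score += 10 * len(REQUIRED_SKILLS & skills)   # 10 points per required skill
    score += 3 * len(PREFERRED_SKILLS & skills)   # 3 points per preferred skill
    score += min(years_experience, 10)            # cap experience credit at 10
    return score

print(score_candidate({"python", "sql", "tableau"}, 6))  # 29.0
```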

Employers use these tools in numerous ways to automate and enhance HR functions. A few examples:

  • Applicant Tracking Systems (ATS) often use algorithms to score applicants against the position description or to rank resumes by comparing applicants' skills to one another.
  • Skills-based search engines rely on algorithms to match job seekers with open positions based on their qualifications, experience, and keywords in their resumes.
  • AI-powered interview platforms assess candidate responses in video interviews, evaluating facial expressions, tone, and language to predict things like skills, fit, or likelihood of success.
  • Automated performance evaluation systems can analyze employee data such as productivity metrics and feedback to provide ratings of individual performance.
  • AI systems can listen in on phone calls to score employee and customer interactions, a feature often used in the customer service and sales industries.
  • AI systems can analyze background check information as part of the hiring process.
  • Automated technology can be incorporated into compensation processes to predict salaries, assess market fairness, or evaluate pay equity.
  • Automated systems can be utilized by employers or candidates in the hiring process for scheduling, note-taking, or other logistics.
  • AI models can analyze historical hiring and employee data to predict which candidates are most likely to succeed in a role or which new hires may be at risk of early turnover.

AI Liability Risks Under Current Laws

AI-driven workforce decisions are covered by a variety of employment laws, and employers are facing an increasing number of agency investigations and lawsuits related to their use of AI in employment. Some of the key legal frameworks include:

  1. Title VII: Title VII prohibits discrimination on the basis of race, color, religion, sex, or national origin in employment practices. Under Title VII, employers can be held liable for facially neutral practices that have a disproportionate, adverse impact on members of a protected class, including decisions made by AI systems. Even if an AI system is designed to be neutral, an employer can be held liable under the disparate impact theory if the system has a discriminatory effect on a protected class. While the current administration has directed federal agencies to deprioritize disparate impact theory, it remains a viable legal theory under federal, state, and local anti-discrimination laws. Where an AI system provides an assessment that human decision-makers use as one of many factors, it can also contribute to disparate treatment discrimination risks.
  2. The ADA: If AI systems screen out individuals with disabilities, they may violate the Americans with Disabilities Act (ADA). It is also critical that AI-based systems are accessible and that employers provide reasonable accommodations as appropriate to avoid discrimination against individuals with disabilities.
  3. The ADEA: The Age Discrimination in Employment Act (ADEA) prohibits discrimination against applicants and employees age forty or older. AI tools that screen out or disadvantage older workers can create liability under the statute.
  4. The Equal Pay Act: AI tools that factor in compensation and salary data can be prone to replicating past pay disparities. Employers using AI must ensure that their systems are not creating or perpetuating sex-based pay inequities, or they risk violating the Equal Pay Act.
  5. The EU AI Act: This comprehensive legislation is designed to ensure the safe and ethical use of artificial intelligence across the European Union. It treats employers' use of AI in the workplace as potentially high-risk and imposes obligations for continued use, as well as potential penalties for violations.
  6. State and Local Laws: There is no comprehensive federal AI legislation yet, but a number of states and localities have passed or proposed AI legislation and regulations covering topics such as video interviews, facial recognition software, bias audits of automated employment decision tools (AEDTs), and robust notice and disclosure requirements. While the Trump administration has reversed Biden-era guidance on AI and is emphasizing the need for minimal barriers to foster AI innovation, states may step in to fill the regulatory gap. In addition, existing state and local anti-discrimination laws create liability risk for employers.
  7. Data Privacy Laws: AI also implicates a number of other laws, including international, state, and local laws governing data privacy, creating another potential risk area for employers.

The Challenge of Algorithmic Transparency and Accountability

One of the most significant challenges with the use of AI in workforce decisions is the lack of transparency in how algorithms make decisions. Unlike human decision-makers who can explain their reasoning, generative AI systems operate as “black boxes,” making it difficult, if not impossible, for employers to understand—or defend—how decisions are reached.

This opacity creates significant legal risks. Without a clear understanding of how an algorithm reaches its conclusions, it may be difficult to defend against discrimination claims. If a company cannot provide a clear rationale for why an AI system made a particular decision, it could face regulatory action or legal liability.

Algorithmic systems generally apply the same formula to all candidates, creating relative consistency across comparisons. Generative AI systems add complexity because their judgments and standards can change over time as the system absorbs more information. As a result, the criteria applied to one candidate or employee may differ from those applied at a different point in time.

Mitigating the Legal Risks: AI Audits, Workforce Analytics, and Bias Detection

While the potential legal risks are significant, there are proactive steps employers may want to take to mitigate exposure to algorithmic bias and discrimination claims. These steps include:

  • Ensuring that there is a robust policy governing AI use and related issues, like transparency, nondiscrimination, and data privacy
  • Doing due diligence to vet AI vendors, and not utilizing any AI tools without a thorough understanding of their intended purpose and impact
  • Training HR, talent acquisition, and managers on the appropriate use of AI tools
  • Continuing to have human oversight over ultimate workforce decisions so that AI is not the decision-maker
  • Ensuring compliance with all applicant and employee notice and disclosure requirements, as well as bias audit requirements
  • Providing reasonable accommodations
  • Regularly monitoring AI tools through privileged workforce analytics to ensure there is no disparate impact against any protected groups
  • Creating an ongoing monitoring program to ensure human oversight of impact, privacy, legal risks, etc.

Implementing routine and ongoing audits under legal privilege is one of the most critical steps toward ensuring AI is being used in a legally defensible way. These audits may include monitoring algorithms for disparate impacts on protected groups (see the simplified sketch below). If a hiring algorithm disproportionately screens out individuals in a protected group, employers may want to take steps to correct these biases before they lead to discrimination charges or lawsuits. Given the scale at which these tools operate, and to enable corrective action as quickly as possible, companies may want to undertake these privileged audits on a routine (monthly, quarterly, etc.) basis.
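As a simplified illustration of what such monitoring can involve, the sketch below applies the EEOC's four-fifths rule, a common benchmark under the Uniform Guidelines on Employee Selection Procedures, to hypothetical pass-through rates from a screening tool. The group labels and counts are fabricated for this example; a real audit would be far more rigorous (including statistical significance testing) and conducted under privilege at the direction of counsel.

```python
# Simplified sketch of a disparate impact check using the EEOC's
# "four-fifths rule": a group's selection rate below 80% of the
# highest group's rate is a common red flag for adverse impact.
# The counts below are fabricated for illustration only; a real
# privileged audit would add statistical significance testing.

# group -> (applicants screened in by the tool, total applicants)
outcomes = {
    "group_a": (120, 200),
    "group_b": (45, 100),
}

rates = {g: selected / total for g, (selected, total) in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} [{flag}]")

# group_a: selection rate 60%, impact ratio 1.00 [ok]
# group_b: selection rate 45%, impact ratio 0.75 [REVIEW]
```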

The AI landscape is rapidly evolving. Employers may want to track changing laws and regulations, implement policies and procedures to ensure the safe, compliant, and nondiscriminatory use of AI in their workplaces, and reduce risk by engaging in privileged, proactive analyses to evaluate AI tools for bias.

Ogletree Deakins’ Technology Practice Group and Workforce Analytics and Compliance Practice Group will continue to monitor developments and will provide updates on the Employment Law, Technology, and Workforce Analytics and Compliance blogs as additional information becomes available.
