On May 12, 2022, the U.S. Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Justice (DOJ) issued guidance advising employers that the use of artificial intelligence (AI) and algorithmic decision-making processes to make employment decisions could result in unlawful discrimination against applicants and employees with disabilities.

The new technical assistance from the EEOC highlights issues the agency thinks employers should consider to ensure such tools are not used to treat job applicants and employees in ways that the agency says might constitute unlawful discrimination under the Americans with Disabilities Act (ADA). The DOJ jointly issued similar guidance to employers under its authority. Further, the EEOC provided a summary document designed for use by employees and job applicants, identifying potential issues and laying out steps employees and applicants can take to raise concerns.

The EEOC identified three “primary concerns”:

  • “Employers should have a process in place to provide reasonable accommodations when using algorithmic decision-making tools;
  • Without proper safeguards, workers with disabilities may be ‘screened out’ from consideration in a job or promotion even if they can do the job with or without a reasonable accommodation; and
  • If the use of AI or algorithms results in applicants or employees having to provide information about disabilities or medical conditions, it may result in prohibited disability-related inquiries or medical exams.”

The EEOC outlined examples of when an employer might be held liable under the ADA. For instance, an employer may be found to have discriminated against individuals with disabilities by using a pre-employment test—even if that test was developed by an outside vendor. In such a case, employers may have to provide a “reasonable accommodation,” such as giving the applicant extended time or an alternate test.

The EEOC also identified a number of “promising practices” that employers should consider to mitigate the risk of ADA violations connected to their use of AI tools. Among other “promising practices,” the EEOC recommends:

  • Telling applicants or employees what steps any evaluative process includes (e.g., if there is an algorithm being used to assess an employee) and providing a way to request a reasonable accommodation.
  • Using algorithmic tools that have been designed to be accessible to individuals with as many different types of disabilities as possible.
  • Describing in plain language and accessible format the traits that an algorithm is designed to assess, the method by which the traits are assessed, and the variables or factors that may affect a rating.
  • Ensuring that the algorithmic tool only measures abilities or qualifications that are truly necessary for the job, even for people who are entitled to on-the-job reasonable accommodations.
  • Ensuring that the necessary abilities or qualifications are measured directly rather than by way of characteristics or scores that are correlated with the abilities or qualifications.
  • Asking an algorithmic tool vendor to confirm that the tool does not ask job applicants or employees questions likely to elicit information about a disability or seek information about an individual’s physical or mental impairment or health, unless the inquiries are related to a request for reasonable accommodation.

The technical assistance applies to the growing use of AI and algorithmic decision-making tools in recruitment, including to screen resumes and implement computer-based tests, and in other employment decisions, such as pay and promotions, the EEOC stated. It is not meant to be new policy but to explain existing principles for the enforcement of the ADA and previously issued guidance, the EEOC stated.

The new assistance comes after EEOC Chair Charlotte A. Burrows launched the agency’s Artificial Intelligence and Algorithmic Fairness Initiative in October 2021 to examine the use of AI, machine learning, and other emerging technologies in the context of federal civil rights laws.

A growing number of jurisdictions, including Illinois and New York City, have also begun to pass laws regulating the use of certain types of AI and algorithmic decision-making tools in employment decisions.

For more information on the new EEOC AI guidance, please listen to our latest podcast, in which Phoenix shareholder Nonnie Shivers interviewed EEOC Vice Chair Jocelyn Samuels about this hot topic and more at Ogletree Deakins’ Workplace Strategies seminar. Listen to the full podcast, “Workplace Strategies Watercooler: An Interview With EEOC Vice Chair Jocelyn Samuels,” here and on your favorite podcast platforms.

In addition, please join us for our upcoming webinar, “The EEOC’s New Guidance on the Use of Software, Algorithms, and Artificial Intelligence: What Employers Need to Know,” which will take place on Friday, May 20, 2022, from 12:00 noon to 1:00 p.m. EST. The speakers, Jennifer G. Betts and Danielle Ochs, will discuss the key provisions of the new technical assistance document and other issues that employers may need to consider to ensure that the use of software tools in employment does not disadvantage individuals with disabilities in ways that violate the ADA. Register here.

Ogletree Deakins will continue to monitor and post updates to the firm’s Technology blog on the evolving regulatory landscape and potential compliance issues related to the use of this emerging technology in the workplace.
