
California is considering new regulations on the use of technology or artificial intelligence (AI) to screen job candidates or make other employment decisions. If the regulations are finalized, California would be the first state to adopt substantive restrictions specifically addressing this emerging, and often misunderstood, technology.

Background

AI generally refers to the development of computer systems and algorithms to perform tasks that historically required human intelligence. One form of AI is machine learning, the process by which machines use large sets of data to make increasingly accurate predictions. Some forms of AI can be used to automate certain aspects of decision-making.

Ogletree Deakins’ recent survey report, Strategies and Benchmarks for the Workplace: Ogletree’s Survey of Key Decision-Makers, highlighted that nearly a quarter of employers are currently using AI tools as part of their talent and recruitment processes. Indeed, many employers are increasingly using these tools in talent acquisition and recruitment, including to screen resumes, analyze online tests, evaluate applicants’ facial expressions, body language, word choice, and tone of voice during interviews, and implement gamified testing.

While this technology has the potential to enhance efficiency and decision-making, some have raised concerns about the potential of these tools to produce biased or discriminatory results, which could trigger issues under state and federal employment discrimination laws. A growing number of states, including California, and the federal government are considering updating traditional labor and employment laws to require that companies using this technology review its impacts on the hiring and promotion process and provide notice to job candidates that they might be screened using such tools.

New draft regulations in California released earlier this year could be the first to adopt substantive restrictions by clarifying that the use of automated decision-making tools is subject to employment discrimination laws if it negatively impacts employees and job candidates in protected classes.

California Draft Regulations

The California Fair Employment and Housing Council, on March 15, 2022, published draft modifications to its employment anti-discrimination regulations that would impose liability on companies or third-party agencies administering artificial intelligence tools that have a discriminatory impact.

The draft regulations would make it unlawful for an employer or covered entity to “use … automated-decision systems, or other selection criteria that screen out or tend to screen out an applicant or employee … on the basis” of a protected characteristic, unless the “selection criteria” used “are shown to be job-related for the position in question and are consistent with business necessity.”

This prohibition would apply to the use of an “automated-decision system,” which is defined broadly in the draft regulations as any “computational process, including one derived from machine-learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants.”

The draft regulations would place specific limitations on hiring practices, including in pre-employment inquiries, applications, interviews, selection devices, and background checks.

Other Laws & Regulations

At least three other jurisdictions have passed laws addressing the use of AI in hiring. Illinois was the first to do so in August 2019 with the passage of the Artificial Intelligence Video Interview Act, which took effect in January 2020.

The Illinois law has three main components. First, it requires Illinois-based employers to “notify” applicants that “artificial intelligence may be used to analyze” a video interview to “consider the applicant’s fitness for the position.” Second, the law requires employers to explain “how the artificial intelligence works” and what “characteristics it uses to evaluate applicants.” Finally, the law requires the hiring company to obtain “consent from the applicant” to be evaluated by AI tools and prohibits their use if consent is not granted.

The law was amended this year to require employers that rely “solely” on AI analytical tools to select candidates for an “in-person” interview to “collect and report” the “race and ethnicity” of both candidates who “are and are not” offered an in-person interview and of those who are hired. That data will be analyzed by the state, which will then produce a report on whether the data collected “discloses a racial bias in the use of artificial intelligence.” Maryland has a similar law, H.B. 1202, banning the use of “a facial recognition service for the purpose of creating a facial template during an applicant’s interview for employment,” unless the applicant signs a waiver.

Last year, the New York City Council passed a more comprehensive and detailed law regulating the use of “automated employment decision tools” on job candidates and employees in the city. The law prohibits employers and employment agencies from using such a tool unless it has been subjected to a “bias audit” within the last year and the results of the most recent bias audit and the “distribution date of the tool” have been made publicly available on the employer’s or employment agency’s website.

Further, the New York City law requires employers and employment agencies to notify job candidates and employees who reside in the city that an “automated decision tool” will be used to assess them, no less than 10 business days prior to its use, and to disclose what “job qualifications and characteristics” will be used in the assessment.

On the federal level, Senator Ron Wyden in February re-introduced the Algorithmic Accountability Act. The bill would establish baseline requirements that companies assess the impact of automated decision-making processes, and it would empower the U.S. Federal Trade Commission to issue regulations governing those assessments and the related reporting.

Importantly, in May 2022, the U.S. Equal Employment Opportunity Commission issued new technical guidance warning employers that the use of AI and algorithmic decision-making in employment decisions may violate the Americans with Disabilities Act (ADA) if the tools screen out job applicants with disabilities. The technical assistance is part of an agency initiative, launched in October 2021, to explore the impact of such tools.

How the California Draft Regulations Are Different

While the Illinois and New York City laws reflect a natural convergence around regulating notice of the use of AI technology and examination of the impact on hiring, the California draft regulations would go further by specifying that employers and companies administering the technology could indeed face liability under state anti-discrimination laws, regardless of discriminatory intent.

While any tool an employer uses that affects employees is subject to potential claims, there is considerable confusion about how existing laws will govern and apply to evolving AI technologies. The draft California regulations would clarify that the use of “automated-decision systems” by employers, and by individuals considered agents of the employers, may constitute unlawful discrimination under California anti-discrimination laws even if the automated systems are neutral on their face but have a discriminatory impact. Employers may be held liable under either a disparate treatment or a disparate impact theory.
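To make the disparate impact concept concrete, the short Python sketch below compares selection rates across two hypothetical applicant groups using the EEOC’s long-standing “four-fifths” rule of thumb. The group labels and applicant counts are invented for illustration, and the four-fifths benchmark is only one common flagging heuristic; neither the California draft regulations nor the New York City law prescribes this particular calculation.

```python
# Illustrative sketch only: a simplified adverse impact check of the kind an
# employer might run when reviewing a screening tool's outcomes. The data and
# the four-fifths (80%) threshold are illustrative; the threshold is a
# long-standing EEOC rule of thumb, not a test prescribed by the draft rules.

# Hypothetical outcomes: group -> (applicants screened in, total applicants)
outcomes = {
    "group_a": (48, 100),
    "group_b": (30, 100),
}

# Selection rate for each group
rates = {group: passed / total for group, (passed, total) in outcomes.items()}

# Compare each group's selection rate to the highest group's rate
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    status = "flag for further review" if ratio < 0.8 else "within benchmark"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {status}")
```

In this hypothetical, the second group’s selection rate is 62.5 percent of the highest group’s rate, which falls below the 80 percent benchmark and would typically prompt further review of the tool and its job-relatedness.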

For example, a system that measures an applicant’s reaction time “may unlawfully screen out individuals with certain disabilities” unless the employer demonstrates that “quick reaction time while using an electronic device is job-related and consistent with business necessity,” according to the draft regulations.

The draft regulations would thus spell out the substantive protections that California’s Fair Employment and Housing Act provides against various types of hiring discrimination, including discrimination based on accent, English proficiency, immigration status, national origin, height, weight, sex, pregnancy, childbirth, marital status, age, or religion.

Key Takeaways

With businesses’ increasing reliance on artificial intelligence and other machine learning technologies, employers may want to evaluate, and take steps to mitigate, any potential discriminatory impact of these tools by reviewing their use with the relevant stakeholders. It is clear that states and other regulators will continue to look closely at such tools and how employers use them. Substantive regulations like those being considered in California, if finalized, could set a standard for other states to follow.

For more information on the EEOC’s new AI guidance, please listen to our latest podcast in which Phoenix shareholder Nonnie Shivers interviewed EEOC Vice Chair Jocelyn Samuels while at Ogletree Deakins’ Workplace Strategies seminar to ask her about this hot topic and more. Listen to the full podcast, “Workplace Strategies Watercooler: An Interview With EEOC Vice Chair Jocelyn Samuels” here and on your favorite podcast platforms.

In addition, please join us for our upcoming webinar, “The EEOC’s New Guidance on the Use of Software, Algorithms, and Artificial Intelligence: What Employers Need to Know,” which will take place on Friday, May 20, 2022, from 12:00 noon to 1:00 p.m. ET. The speakers, Jennifer G. Betts and Danielle Ochs, will discuss the key provisions of the new technical assistance document and other issues that employers may need to consider to ensure that the use of software tools in employment does not disadvantage individuals with disabilities in ways that violate the ADA. Register here.

Ogletree Deakins will continue to monitor and post updates to the firm’s Technology blog on the evolving regulatory landscape and potential compliance issues related to the use of this emerging technology in the workplace.
