Quick Hits

  • The California Civil Rights Department finalized modified regulations for employers’ use of AI and automated decision-making systems.
  • The regulations confirm that the use of such technology to make employment decisions may violate the state’s antidiscrimination laws and clarify limits on such technology, including its use in criminal background checks and medical or psychological inquiries.

On March 21, 2025, the Civil Rights Council, a branch of the California Civil Rights Department (CRD), voted to approve the final, modified text of California’s new “Employment Regulations Regarding Automated-Decision Systems.” The regulations have been filed with the Office of Administrative Law, which must approve them. It is not yet clear when the finalized regulations will take effect, although they are likely to become effective this year.

The CRD has been considering automated-decision system regulations for years amid concerns over employers’ increasing use of AI and other automated decision-making systems, or “Automated-Decision Systems,” to make or facilitate employment decisions, such as recruitment, hiring, and promotions.

While the final regulations have some key differences from the proposed regulations released in May 2024, they clarify that it is unlawful to use AI and automated decision-making systems to make employment decisions that discriminate against applicants or employees in a way prohibited by the California Fair Employment and Housing Act (FEHA) or other California antidiscrimination laws.

Here are some key aspects of the final regulations.

Automated-Decision Systems

The final regulations define “automated-decision system[s]” as “[a] computational process that makes a decision or facilitates human decision making regarding an employment benefit,” including processes that “may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.” This definition is narrower than the proposed regulations, which would have included any computational process that “screens, evaluates, categorizes, or otherwise makes a decision….”

Covered systems include a range of technological processes, such as tests, games, or puzzles used to assess applicants or employees, processes for targeting job advertisements or screening resumes, processes to analyze “facial expression, word choice, and/or voice in online interviews,” and processes to “analyz[e] employee or applicant data acquired from third parties.”

Automated-decision systems do not include typical software or programs such as word processors, spreadsheets, map navigation systems, web hosting, firewalls, and common security software, “provided that these technologies do not make a decision regarding an employment benefit.” Notably, the final regulations do not include language from the proposed rule’s excluded technology provision that would have excluded systems used to “facilitate human decision making regarding” an employment benefit.

Other Key Terms

  • “Agent”—The final regulations treat an employer’s “agent” as an “employer” under the FEHA regulations. An “agent” is defined as “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity … including when such activities and decisions are conducted in whole or in part through the use of an automated decision system.” (Emphasis added.)
  • “Automated-Decision System Data”—The regulations define such data as “[a]ny data used to develop or customize an automated-decision system for use by a particular employer or other covered entity.” However, the final regulations narrow what is included as “automated-decision system data,” removing language from the proposed regulations that would have included “[a]ny data used in the process of developing and/or applying machine learning, algorithms, and/or artificial intelligence” used in an automated-decision system, including “data used to train a machine learning algorithm.” (Emphasis added.)
  • “Artificial Intelligence”—The regulations define AI as “[a] machine-based system that infers, from the input it receives, how to generate outputs,” which can include “predictions, content, recommendations, or decisions.” The proposed regulations had included “machine learning system[s] that can, for a given set of human defined objectives, make predictions, recommendations, or decisions.”
  • “Machine Learning”—The term is defined as the “ability for a computer to use and learn from its own analysis of data or experience and apply this learning automatically in future calculations or tasks.”

Unlawful Selection Criteria

Potentially discriminatory hiring tools have long been unlawful in California, but the final regulations confirm that those antidiscrimination laws apply to discrimination on the basis of a protected class or disability carried out through AI or automated decision-making systems. Specifically, the regulations state that it is “unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on a basis protected” by FEHA.

Removal of Disparate Impact

However, the final regulations do not include the proposed definition of “adverse impact” caused by an automated-decision system under the FEHA regulations. The proposed regulations had specified that an adverse impact includes “disparate impact” theories and may result from a “facially neutral practice that negatively limits, screens out, tends to limit or screen out, ranks, or prioritizes applicants or employees on a basis protected by” FEHA. Further, the final regulations do not include similar language defining automated-decision systems to include systems that screen out applicants or make decisions related to employment benefits.

Pre-Employment Practices

The final regulations further clarify that the use of online application technology that “screens out, ranks, or prioritizes applicants based on” scheduling restrictions “may discriminate against applicants based on their religious creed, disability, or medical condition,” unless the practice is job-related and consistent with business necessity and there is a mechanism for the applicant to request an accommodation.

The regulations specify that this would apply to automated-decision systems. For example, the regulations state that a system that “measures an applicant’s skill, dexterity, reaction time, and/or other abilities or characteristics may discriminate against individuals with certain disabilities or other characteristics protected under the Act” absent reasonable accommodation. Similarly, a system that “analyzes an applicant’s tone of voice, facial expressions or other physical characteristics or behavior may discriminate against individuals based on race, national origin, gender, disability, or other” protected characteristics.

Criminal Records

California law provides that before an employer denies an applicant based on a criminal record, it “must first make an individualized assessment of whether the applicant’s conviction history has a direct and adverse relationship with the specific duties of the job” that would justify denying the applicant. The final regulations state that “prohibited consideration” of criminal records “includes, but is not limited to, inquiring about criminal history through an employment application, background check, or the use of an automated-decision system.” (Emphasis added.)

However, the final regulations do not include the proposed language that would have clarified that the use of an automated-decision system alone, “in the absence of additional processes or actions,” is not a sufficient individualized assessment. The final regulations further do not include the proposed language that would have required employers to provide “a copy or description” of a report that is generated and used to withdraw a conditional job offer.

Unlawful Medical or Psychological Inquiries

The final regulations state that the prohibition on asking job applicants about their medical or psychological histories “includes any such examination or inquiry administered through the use of an automated-decision system,” including puzzles or games that are “likely to elicit information about a disability.”

Third-Party Liability

The final regulations clarify that the prohibitions on aiding and abetting unlawful employment practices apply to the use of automated decision-making systems, potentially implicating third parties that design or implement such systems. At the same time, the regulations specify that “evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results” is relevant to a claim of unlawful discrimination. However, the final regulations do not include the proposed language that would have created third-party liability for the design, development, advertising, promotion, or sale of such systems.

Next Steps

Once effective, the final regulations will make California one of the first jurisdictions, along with Colorado, Illinois, and New York City, to promulgate comprehensive regulations concerning AI and automated decision-making technologies. The regulations also come as President Donald Trump seeks to reshape federal AI policy, focusing on removing barriers to U.S. leadership in developing the technology. The new federal policy shifts away from the Biden administration’s focus on safeguarding employees and consumers from the potential negative impacts of such technology, particularly the possibility of unlawful employment discrimination and harassment. States and localities are expected to continue regulating AI to fill the gap.

Ogletree Deakins’ Technology Practice Group will continue to monitor developments and will provide updates on the California, Employment Law, and Technology blogs as additional information becomes available.
