Quick Hits
- California’s new AI regulations, effective October 1, 2025, prohibit the use of AI tools that discriminate against applicants or employees based on protected characteristics under the Fair Employment and Housing Act (FEHA).
- Employers are required to keep all data related to automated decision systems for four years and are held responsible for any discriminatory practices, even if the AI tools are sourced from third parties.
- The regulations target AI tools that cause disparate impacts in various employment processes, including recruitment, screening, and employee evaluations, while allowing legal uses of AI for hiring and productivity management.
Q1: What are California’s new algorithmic discrimination regulations?
A1: The new AI regulations prohibit the use of an ADS or AI tool that discriminates against an applicant or employee on any basis protected by FEHA. The regulations make California one of the first states to adopt comprehensive rules addressing the growing use of AI tools to make employment decisions.
Q2: When are the regulations effective?
A2: On October 1, 2025.
Q3: What exactly is an ADS?
A3: An ADS is “[a] computational process that makes a decision or facilitates human decision making regarding an employment benefit,” including processes that “may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.” (Emphasis added.) Many AI hiring tools fall within this definition.
Q4: Are employers prohibited from using all AI tools?
A4: No. The regulations do not prohibit any particular tool or limit the legal ways in which employers may use AI tools, including to source, rank, and select applicants; facilitate the hiring process; and monitor and manage employee productivity and performance. Instead, they prohibit the use of any AI tool to discriminate, intentionally or unintentionally, against applicants or employees based on their membership in any class protected from discrimination under FEHA.
Q5: Who is an “applicant”?
A5: An “applicant” is “[a]ny individual who files a written application or, where an employer or other covered entity does not provide an application form, any individual who otherwise indicates a specific desire to an employer or other covered entity to be considered for employment. Except for recordkeeping purposes, ‘Applicant’ is also an individual who can prove that they have been deterred from applying for a job by an employer’s or other covered entity’s alleged discriminatory practice. ‘Applicant’ does not include an individual who without coercion or intimidation willingly withdraws their application prior to being interviewed, tested, or hired.”
Q6: What conduct is targeted?
A6: The regulations seek to limit the use of AI tools that rely on unlawful selection criteria and/or cause a disparate impact in the areas of recruitment, screening, pre-employment inquiries, job applications, interviews, employee selection and testing, placement, promotions, and transfer. The California Civil Rights Department (CRD) identifies several examples of automated employment decisions potentially implicated by the regulations.
- “Using computer-based assessments or tests, such as questions, puzzles, games, or other challenges to: [m]ake predictive assessments about an applicant or employee; [m]easure an applicant’s or employee’s skills, dexterity, reaction-time, and/or other abilities or characteristics; [m]easure an applicant’s or employee’s personality trait, aptitude, attitude, and/or cultural fit; and/or [s]creen, evaluate, categorize, and/or recommend applicants or employees”
- “Directing job advertisements or other recruiting materials to targeted groups”
- “Screening resumes for particular terms or patterns”
- “Analyzing facial expression, word choice, and/or voice in online interviews”
- “Analyzing employee or applicant data acquired from third parties”
Q7: Are there new record-keeping requirements?
A7: Yes. Employers must retain, for four years, all automated-decision system data created or received by the employer or other covered entity that deals with any employment practice and affects any employment benefit of any applicant or employee.
Q8: Who can be held responsible for algorithmic discrimination?
A8: Employers will be held responsible for the AI tools they use, whether or not they procured them from third parties. The final regulations also clarify that the prohibitions on aiding and abetting unlawful employment practices apply to the use of AI tools, potentially implicating third parties that design or implement such tools.
Q9: Are there available defenses?
A9: Yes. Claims under the regulations are generally subject to existing defenses to claims of discrimination. The regulations also clarify that “evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results” is relevant to a claim of unlawful discrimination.
Q10: What should employers do now?
A10: Employers may want to consider the following steps:
- Reviewing internal AI tool usage, practices, procedures, and policies to determine whether any tool being used would be covered by the regulations.
- Piloting proposed AI tools before rolling them out to the workforce. This includes thoroughly vetting the steps taken by AI developers to avoid algorithmic discrimination.
- Training the workforce on the appropriate use of AI tools.
- Notifying applicants and employees when AI tools are in use, and providing accommodations and/or human alternatives where required.
- Establishing an auditing protocol. Although auditing is not required by the regulations, engaging in anti-bias testing or similar proactive efforts may form the basis for a defense to any future claims of algorithmic discrimination. The regulations also suggest that a fact-finder may consider the quality, efficacy, recency, and scope of any auditing effort, as well as the results of and response to that effort.
- Reviewing record-keeping practices to ensure that required data can be securely maintained for at least four years.
Ogletree Deakins’ Technology Practice Group will continue to monitor developments and will provide updates on the California, Employment Law, and Technology blogs as additional information becomes available.
For further information, please join us for our upcoming webinar, “California’s New AI Employment Law Takes Effect October 1—Are You Ready?,” which will take place on September 29, 2025, from 2:00 p.m. to 3:00 p.m. EDT. Jennifer G. Betts will interview Danielle Ochs on how the AI and ADS rules will affect employers, and discuss strategies for complying with California’s antidiscrimination laws. Register online at www.ogletree.com or email webinars@ogletree.com.