
Quick Hits

  • The California Civil Rights Department (CRD) released new proposed regulations on employers’ use of AI and automated decision-making systems.
  • The proposed regulations would affirm that employers’ use of such hiring technologies may violate the state’s antidiscrimination laws and clarify limits on the use of such technology in criminal background checks and medical/psychological inquiries.
  • The proposed regulations would also clarify the scope of third-party liability arising from the use of AI tools.
  • Written comments on the proposed regulations must be submitted by July 18, 2024.

On May 17, 2024, the Civil Rights Council, a branch of the CRD, issued a notice of proposed rulemaking and new proposed modifications to California’s employment discrimination regulations. The notice kicks off a forty-five-day period for public comment on the proposed regulations.

The regulations, titled “Proposed Modifications to Employment Regulations Regarding Automated-Decision Systems,” address the growing use of AI and other automated systems by employers to make decisions regarding job applicants and employees, such as hiring or promotion decisions. While this technology has the potential to improve efficiency, the CRD cautioned in a statement that AI and automated systems can “exacerbate” existing biases and discrimination, such as by reinforcing gender or racial stereotypes.

In general, the California Civil Rights Council’s proposed regulations would affirm that California’s antidiscrimination laws and regulations apply to potential discrimination caused by the use of automated systems, whether or not an employer uses a third party to design or implement the systems. The proposed regulations also address the use of automated systems for background checks and medical or psychological inquiries.

The proposed regulations follow years of consideration by the CRD and build on proposed regulations first released in March 2022. Several states and the federal government are considering restrictions on the use of such emerging technologies, with most states focused on procedural requirements, such as requiring that certain notices be provided to employees and that employers take steps to discover and root out bias in such systems. If adopted, the proposed regulations would make California the first state to adopt substantive restrictions.

Key Terms

The proposed regulations would add definitions for certain key terms, including:

  • “Adverse Impact”—The proposed regulations would give “adverse impact” the same meaning as the commonly used term “disparate impact,” referring to the use of a “facially neutral practice that negatively limits, screens out, tends to limit or screen out, ranks, or prioritizes applicants or employees on a basis protected by” the California Fair Employment and Housing Act (FEHA).
  • “Automated-Decision System”—The proposed regulations define this as any computational process that “screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts applicants or employees.” Covered systems would include a broad range of technological processes, such as using computer-based tests to predict an employee’s or applicant’s potential job performance or to measure skills, personality traits, or “cultural fit”; targeting job advertisements to specific groups; analyzing facial expressions; and screening resumes. Such systems would not include “word processing software, spreadsheet software, and map navigation systems.”
  • “Automated-Decision System Data”—Such data would include any data used in the development or application of machine learning, algorithms, or AI used by an automated-decision system, including data to train the system, data provided by applicants or employees, and data produced by the systems in operation.
  • “Machine Learning”—The term would be defined as the “ability for a computer to use and learn from its own analysis of data or experience and apply this learning automatically in future calculations or tasks.”

Unlawful Selection Criteria

Although the use of potentially discriminatory hiring tools has long been unlawful in California, the proposed regulations would explicitly affirm that those rules apply to automated-decision systems. Thus, the regulations would make it unlawful for employers or other covered entities to use selection criteria that have an “adverse impact on or constitute[] disparate treatment of an applicant or employee or a class of applicants or employees on the basis” of a protected characteristic. Selection criteria under the proposed regulations include “a qualification standard, employment test, automated-decision system, or proxy.” “Proxy” is defined as “[a] technically neutral characteristic or category correlated with a” protected group or characteristic.

As with other adverse impact doctrines, under the proposed regulations, employers would be able to assert “business necessity” as a defense where the selection criteria are “job-related for the position in question and consistent with business necessity,” so long as there are no less discriminatory “polic[ies] or practice[s] that serve[] the employer’s goals as effectively as the challenged policy or practice.” The inquiry into whether there is a less discriminatory alternative would include consideration of whether “anti-bias testing or similar proactive efforts” were taken to avoid unlawful discrimination, “including the quality, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.”

Criminal Records

California employment discrimination regulations provide that if an employer intends to deny an applicant a position based on the applicant’s criminal conviction history, it “must first make an individualized assessment of whether the applicant’s conviction history has a direct and adverse relationship with the specific duties of the job” that would justify denying the applicant.

The proposed regulations would clarify that the use of an automated-decision system alone, without any additional processes or actions, “does not constitute an individualized assessment.” The regulations would further provide that employers using an automated-decision system to make a preliminary decision to withdraw a conditional job offer must provide the applicant “a copy or description of any report or information from the operation of the automated-decision system, related data, and assessment criteria used as part of an automated-decision system.”

Unlawful Medical or Psychological Inquiries

The proposed regulations would also modify California employment regulations to clarify that restrictions on conducting “[m]edical or psychological examinations or inquiries” of job applicants prior to an offer of employment, and of employees after employment begins, apply to such examinations or inquiries conducted through automated-decision systems.

Specifically, the proposed regulations would clarify that such “medical or psychological examinations or inquiries” include, but are not limited to: (1) “[p]ersonality-based questions,” such as questions meant to measure optimism, emotional stability, extroversion versus introversion, and intensity; and (2) “[p]uzzles, games, or other challenges that evaluate physical or mental abilities” such as through “gamified screens included in an automated-decision system.”

Third-Party Liability

Finally, the proposed regulations would clarify that prohibitions on aiding and abetting unlawful employment practices apply to third parties that design or implement automated-decision systems. The regulations would provide that these prohibitions extend to the design, development, advertisement, and sale of automated-decision systems, as well as the “use of an automated-decision system on behalf of a person or individual, where the use of the automated-decision system constitutes unlawful disparate treatment or has an unlawful adverse impact on applicants or employees.”

In addition, third parties that sell automated-decision systems to employers and other covered entities would be required to “maintain relevant records,” such as system data from the application of the technology for the employer that uses it, training set data, modeling, assessment criteria, and outputs. These records would have to be maintained for at least four years following the last date the system was used.

Next Steps

The proposed regulations come amid growing concern among states across the country and the federal government over the use of such emerging technologies. Notably, the Biden administration issued Executive Order 14110 in October 2023, which called for a “coordinated, Federal Government-wide approach” to the responsible development and implementation of AI, and in May 2024, the U.S. Department of Labor’s Wage and Hour Division and Office of Federal Contract Compliance Programs followed with new AI guidance of their own.

Employers and other stakeholders have until July 18, 2024, to submit written comments on the proposed California AI regulations. The CRD has provided additional information on how to submit comments.

Ogletree Deakins’ Technology Practice Group will continue to monitor developments and will provide updates on the California, Cybersecurity and Privacy, and Technology blogs as additional information becomes available.
