Quick Hits
- The New Jersey Division on Civil Rights (DCR) issued guidance that explains how an employer’s use of automated decision-making tools can lead to algorithmic discrimination that violates the New Jersey Law Against Discrimination (NJLAD).
- The guidance does not impose any new obligations on employers but reinforces the importance of NJLAD compliance and instructs that the NJLAD “draws no distinctions based on the mechanism of discrimination.”
- Given the increasing use of AI tools to make employment decisions, the DCR explains that all “New Jerseyans [should] understand what these tools are, how they are being used, and the risks and benefits associated with their use.”
The DCR rolled out the guidance as part of the agency’s launch of a new Civil Rights and Technology Initiative “to address the risks of discrimination and bias-based harassment stemming from the use of artificial intelligence (AI) and other advanced technologies” and provide guidance concerning how the New Jersey Law Against Discrimination (NJLAD) applies to these new technologies. The guidance does not impose any new requirements that are not already included in the NJLAD or establish any new rights or obligations. However, given the DCR’s decision to release guidance on the topic, employers doing business in New Jersey may wish to audit their existing uses of AI to ensure that their policies and practices comply with the NJLAD. While AI technology can be complex, and an employer may not fully grasp how a particular tool generates results, the guidance reinforces that employers are fully responsible for the AI technology they utilize and may not delegate their compliance responsibilities to third parties.
What Are Automated Decision-Making Tools?
The guidance defines an “automated decision-making tool” as “any technological tool, including but not limited to, a software tool, system, or process that is used to automate all or part of the human decision-making process.” Such a tool might be used, for example, to decide whether a human resources professional will review a particular resume, whether to hire a job applicant, or whether to terminate an employee. The DCR referenced May 18, 2023, guidance from the U.S. Equal Employment Opportunity Commission (EEOC) in providing these examples. See “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.”
The DCR explained that automated decision-making tools “accomplish their aims by using algorithms, or sets of instructions that can be followed, typically by a computer, to achieve a desired outcome.” Depending upon how the algorithms are designed, the tools “can create classes of individuals who will be either advantaged or disadvantaged in ways that may exclude or burden them based on their protected characteristics.” Given the role that algorithms play in the operation of these tools, the DCR defines any discrimination resulting from them as “algorithmic discrimination.”
Citing recent studies, the guidance explains how, for example, an automated decision-making tool that ranks job applicants of a particular race or gender more favorably (or less favorably) than applicants in another group could lead to discriminatory hiring. The DCR further explained that while these tools can also be used in a positive way to reduce bias and discrimination, given the risk of discriminatory outcomes, employers must fully understand the mechanics of any AI tool upon which they rely to make employment decisions, including the risks and benefits involved.
How Do Automated Decision-Making Tools Lead to Discriminatory Outcomes?
The DCR acknowledges that it may not be easy to detect whether a particular automated decision-making tool might lead to discriminatory outcomes because the calculations made by these tools “can be invisible and not well understood.” Nevertheless, the agency explained that when discriminatory outcomes do arise, it is generally because of the way the tools are (1) designed, (2) trained, or (3) deployed.
Design
The guidance explains that a tool’s design may be intentionally or unintentionally skewed. The tool’s developer makes decisions about “the output the tool provides, the model or algorithms the tool uses, and what inputs the tool assesses.” Each of these decisions could introduce bias into the tool, which could then generate discriminatory outcomes. Referring to an example from an EEOC enforcement action, the agency explains how a tool was programmed to exclude job applicants at or above a certain age, with the cutoff differing based on gender. The case resolved with the company agreeing to stop requesting age-related information from applicants in the future.
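To make the design-stage risk more concrete, the following minimal sketch (written for this article; the field names, cutoff values, and logic are hypothetical and are not drawn from the DCR guidance or the EEOC matter) shows how a rule hard-coded into a screening tool can produce different outcomes for otherwise identical applicants:

```python
# Hypothetical illustration of design-stage bias: a screening rule whose
# cutoff depends on a protected characteristic. All names, fields, and
# thresholds are invented for this sketch.
from dataclasses import dataclass


@dataclass
class Applicant:
    name: str
    age: int
    gender: str


def passes_screen(applicant: Applicant) -> bool:
    """Biased by design: the age cutoff itself varies with gender."""
    cutoff = 55 if applicant.gender == "female" else 60  # discriminatory rule
    return applicant.age < cutoff


# Two applicants of the same age receive different outcomes solely because
# of the gender-dependent cutoff built into the tool's design.
for a in (Applicant("A", 57, "female"), Applicant("B", 57, "male")):
    print(a.name, passes_screen(a))
```

Because the bias lives in the tool’s own logic, careful use downstream does not remove it; only a change to the design does.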
Training
The DCR explained that before an automated decision-making tool is used in a real-world environment, the tool must be “trained.” This training “occurs by exposing the tool to training data from which the tool learns correlations or rules.” If the training data relied upon reflects the developer’s own biases, or otherwise reflects institutional inequities, the tool can become biased through the training process itself.
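As a hedged illustration of how training-stage bias can arise, the toy example below (the records and the zip-code feature are invented) shows a tool that “learns” nothing more than historical hire rates; if those rates were skewed, the learned rule inherits the skew even though the feature itself looks neutral:

```python
# Hypothetical illustration of training-stage bias. The historical records
# and the zip-code-prefix feature are invented; a facially neutral feature
# can act as a proxy for a protected characteristic if past outcomes were skewed.
from collections import defaultdict

history = [  # (zip_code_prefix, was_hired) drawn from skewed past decisions
    ("070", True), ("070", True), ("070", True), ("070", False),
    ("086", False), ("086", False), ("086", True), ("086", False),
]

counts = defaultdict(lambda: [0, 0])  # prefix -> [hired, total]
for prefix, hired in history:
    counts[prefix][0] += int(hired)
    counts[prefix][1] += 1


def learned_score(prefix: str) -> float:
    """The 'trained' rule simply reproduces historical hire rates."""
    hired, total = counts[prefix]
    return hired / total


for prefix in ("070", "086"):
    print(prefix, learned_score(prefix))  # inherited skew: 0.75 vs. 0.25
```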
Deployment
Finally, the guidance explains that algorithmic discrimination can occur once the tool is deployed in the real world. If, for example, an employer intentionally uses the tool on members of a particular protected class, doing so can lead to purposeful discrimination. Or “[i]f a tool is used to make decisions that it was not designed to assess, its deployment may amplify any bias in the tool and systemic inequities that exist outside of the tool.” Real-world use of the tool may also surface biases that did not appear during testing. If the tool is flawed, it can contribute to discriminatory decisions that are then fed back into the tool for further training. “Each iteration of this loop exacerbates the tool’s bias.”
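The feedback loop the guidance describes can be sketched with a toy simulation (the rates and the update rule are invented and purely illustrative): a tool’s skewed selections become its next round of training data, and the gap between groups widens with each retraining cycle:

```python
# Hypothetical toy simulation of the feedback loop: biased selections are fed
# back as training data, compounding the disparity each retraining cycle.
# The rates and the update rule are invented for illustration only.

rates = {"group_a": 0.55, "group_b": 0.45}  # initial selection rates
POPULATION_SHARE = 0.5  # both groups are assumed to apply in equal numbers

for cycle in range(1, 6):
    total = sum(rates.values())
    for group in rates:
        share_of_selected = rates[group] / total  # who the tool actually picked
        # Retraining on the tool's own picks pushes each group's rate toward
        # its share of the selected pool, amplifying the initial skew.
        rates[group] = min(1.0, rates[group] * (share_of_selected / POPULATION_SHARE))
    print(cycle, {g: round(r, 3) for g, r in rates.items()})
```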
NJLAD ‘Draws No Distinctions’ Based on Discrimination Mechanism
The DCR concluded its guidance by reinforcing the NJLAD’s prohibitions on employment discrimination. Whether prohibited discrimination results from the actions of a “live” human being or from the decisions of an AI tool is immaterial; as always, the impact of the employer’s decision is the critical issue. As the DCR put it, “the LAD draws no distinctions based on the mechanism of discrimination.” If an employer uses an automated decision-making tool to discriminate against a protected class, that employer is liable for unlawful discrimination, just as if the employer had engaged in that conduct without the tool. Such conduct may constitute intentional discrimination or give rise to disparate impact discrimination.
If use of an automated decision-making tool generates decisions that disproportionately impact members of a protected class, the employer that used the tool may be liable for disparate impact discrimination. Under well-established disparate impact principles, even if the tool serves a “substantial, legitimate nondiscriminatory interest,” its use may still be unlawful if a “less discriminatory alternative” exists. The guidance offers the example of a company that uses an automated decision-making tool to assess contract bids. If that tool disproportionately excludes women-owned businesses, its use may result in disparate impact discrimination. Similarly, if a store uses facial recognition software to flag shoplifters, and the software disproportionately generates false positives for customers who wear certain religious headwear, the tool’s design is flawed, and the store may be liable for disparate impact discrimination.
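For organizations that want to quantify the kind of disparity the guidance describes, one common rule of thumb is the four-fifths rule from the federal Uniform Guidelines on Employee Selection Procedures; the sketch below (with invented applicant counts, and not a test the DCR guidance itself prescribes) compares group selection rates produced by a hypothetical screening tool:

```python
# Hypothetical disparate impact check based on selection rates. The 4/5 (80%)
# threshold is a federal rule of thumb from the Uniform Guidelines on Employee
# Selection Procedures, not a standard set by the DCR guidance; all applicant
# counts are invented.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants


# Outcomes of a hypothetical automated screening tool, by group.
rates = {
    "group_a": selection_rate(selected=60, applicants=100),
    "group_b": selection_rate(selected=30, applicants=100),
}

reference = max(rates.values())  # selection rate of the most-favored group
for group, rate in rates.items():
    ratio = rate / reference
    flag = "potential disparate impact" if ratio < 0.8 else "within 4/5 rule of thumb"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```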
Use of Automated Decision-Making Tools and Reasonable Accommodations
The guidance also provides examples of how AI tools can affect applicants or employees who require reasonable accommodations. If, for example, an employer relies upon an AI tool to test an applicant’s typing speed, and the tool cannot assess the speed of an applicant who uses a nontraditional keyboard due to a disability, the employer’s use of the tool may discriminate against that applicant.
In another context, if an AI tool is not “trained” (see above) on data that includes individuals who require accommodations, the tool may unintentionally penalize those individuals. Similarly, an AI screening tool used in the hiring process may screen out individuals who state in their applications that they require an accommodation to perform the job. Another example is an AI tool that tracks employee productivity by the number of breaks an employee takes; such a tool may disproportionately flag for discipline employees who need additional break time to accommodate a disability or to express breast milk. If an employer relies upon such tools to discipline employees, the employer could violate the NJLAD.
Next Steps
While the guidance creates no new obligations for employers, its issuance strongly suggests that the DCR, like the EEOC, the Office of Federal Contract Compliance Programs (OFCCP), and the U.S. Department of Labor (DOL), may focus increased attention on employers’ use of automated decision-making tools. New Jersey employers may want to consider reviewing and evaluating their use of these tools and subjecting them to bias audits. Additionally, because employers can be liable for unlawful algorithmic discrimination even if they rely on a vendor’s representation that the tool it offers is sound and will not lead to discriminatory outcomes, employers may want to evaluate their vendor contracts and work closely with their vendors to determine how these potential risks and liabilities are spelled out.
Employers may want to stay tuned for new developments on the legislative front involving the use of AI. The New Jersey Legislature introduced two bills early last year (A. 3854 and A. 3911) that seek to regulate employers’ use of this technology in the hiring process. Among other provisions, A. 3854 would require companies that sell automated decision-making tools to conduct an annual bias audit, and it would require employers relying on such tools to notify job candidates that the technology was used in the hiring process and provide a summary of its most recent bias audit. The proposed legislation would also impose monetary penalties of $500 for a first offense and $500 to $1,500 for each subsequent offense. A. 3911 addresses the use of AI-enabled video interviews and, among other provisions, would require employers to obtain a candidate’s consent to use the technology. If either bill is enacted, New Jersey will join other jurisdictions, including Colorado, Illinois, and New York City, that have taken steps to regulate the use of AI in employment decision-making.
Ogletree Deakins will continue to monitor developments respecting the use of automated decision-making tools and AI and will provide updates on the Cybersecurity and Privacy, Employment Law, New Jersey, and Technology blogs as additional information becomes available.