Quick Hits
- The New York State Human Rights Law’s express recognition of disparate impact claims diverges from President Trump’s executive order directing the deprioritization of disparate impact claims by federal agencies.
- Disparate impact can result from the use of artificial intelligence to assist in employment decision-making processes.
A practice is deemed to have a discriminatory effect when it “actually or predictably results in a disparate impact on a group of persons” because of their membership in a class protected by the NYSHRL. By expressly recognizing disparate impact claims, the amendment largely codifies what New York courts have already treated as permissible under existing case law construing the NYSHRL.
New York’s amendment runs counter to the actions taken by the federal government. President Donald Trump’s April 2025 executive order called for “all agencies” to “deprioritize enforcement of all statutes and regulations to the extent they include disparate-impact liability,” including Title VII of the Civil Rights Act of 1964, and directed the attorney general and the chair of the U.S. Equal Employment Opportunity Commission (EEOC) to “assess pending investigations, civil suits, or positions taken in ongoing matters” that “rely on a theory of disparate-impact liability.” The U.S. Department of Justice issued a final rule in December 2025 eliminating disparate impact liability for organizations receiving federal funding.
Artificial Intelligence and Disparate Impact
The amended NYSHRL, juxtaposed against the actions of the federal government, may increase the frequency of state law claims alleging discrimination based on an alleged disparate impact arising from a particular action or practice. For the increasing number of employers that use artificial intelligence (AI) in employment decision-making processes such as hiring or firing, the amendment may result in additional scrutiny of AI systems in administrative proceedings or litigation. Even if an employer adopts AI systems in the expectation that they will increase efficiency and reduce the inconsistencies associated with human decision-making, the employer may still face liability if use of the system results in a discriminatory impact on a protected class, absent specific evidence that the AI tool is job related for the position in question and consistent with business necessity, and that the business necessity could not be served by another practice with a less discriminatory effect.
New York City previously enacted legislation requiring employers that use automated employment decision tools (AEDTs) to substantially assist or replace discretionary decision-making to conduct bias audits and make the results publicly available. The purpose of the bias audit is to assess the tool’s potential disparate impact based on sex, race, and ethnicity. Frequently asked questions guidance issued by the Department of Consumer and Worker Protection clarifies that employers are not required to take action based on the outcome of a bias audit. The amendment to the NYSHRL, however, highlights the risks of using an AEDT in the face of audit data showing that the tool actually results in a disparate impact on members of a protected class, as well as the risks of using an AEDT or other form of AI without conducting a proactive audit, even where no audit is required by applicable law. Because data from a bias audit could support a disparate impact claim, the two laws considered together underscore the importance of proactively evaluating any AI platforms or processes New York employers use to substantially assist in employment decisions.
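To make the bias-audit concept concrete, the sketch below computes the kind of impact ratio the NYC rules center on: each category’s selection rate divided by the highest category’s selection rate, with ratios well below 1.0 (conventionally, below the EEOC’s four-fifths benchmark of 0.8) suggesting potential disparate impact. The group names and counts are invented for illustration, and this is a simplified sketch, not a substitute for a compliant audit methodology.

```python
# Hypothetical illustration of the impact-ratio metric used in
# bias audits of automated employment decision tools (AEDTs).
# Group labels and counts below are invented for the example.

def impact_ratios(selected, total):
    """Return each category's selection rate divided by the
    highest selection rate across categories."""
    rates = {cat: selected[cat] / total[cat] for cat in total}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical applicant pool screened by an AI tool.
selected = {"Group A": 80, "Group B": 48}   # candidates advanced
total = {"Group A": 200, "Group B": 160}    # candidates considered

for cat, ratio in sorted(impact_ratios(selected, total).items()):
    flag = "below 0.8 benchmark" if ratio < 0.8 else "ok"
    print(f"{cat}: impact ratio {ratio:.2f} ({flag})")
```

In this invented example, Group A’s selection rate is 0.40 and Group B’s is 0.30, yielding an impact ratio of 0.75 for Group B; data of this kind is exactly what could support, or help an employer proactively address, a disparate impact claim.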
Next Steps
To defend against disparate impact claims, employers can present evidence that their employment practices are job related and consistent with business necessity, and that the business necessity could not be served by another practice with a less discriminatory effect. Going forward, employers may wish to carefully document the reasons for utilizing AI before deploying new systems, or enhanced features of existing systems, to assist or facilitate employment-related decisions. In addition, given the complexities of establishing this defense in practice, employers may wish to consider conducting privileged, proactive audits of AI tools currently in use to identify any outcomes of concern and, where warranted, implement measures to strengthen their potential defenses in administrative proceedings or litigation.
Ogletree Deakins’ New York office and Technology Practice Group will continue to monitor developments and will provide updates on the New York and Technology blogs as additional information becomes available.
Simone R.D. Francis is the office managing shareholder of Ogletree Deakins’ St. Thomas office and a shareholder in the firm’s New York office.
Matthew P. Gizzo is a shareholder in the New York office of Ogletree Deakins.
Emily A. Hall is a 2025 graduate of the Cardozo School of Law and is currently awaiting admission to the New York State bar.