On April 25, 2023, four U.S. federal agencies issued a joint statement pledging to enforce federal laws to “promote responsible innovation” in the context of automated decision-making and artificial intelligence (AI) systems that are increasingly being used by public and private organizations, including to make employment-related decisions.
Quick Hits
- The joint statement by the EEOC, DOJ, CFPB, and FTC raised concerns about the growing use of AI.
- The agencies pledged to enforce federal law to promote "responsible innovation."
- The statement is the latest move by the EEOC to warn employers that it is watching the issue.
Joint Statement
In the joint statement from the U.S. Equal Employment Opportunity Commission (EEOC), Department of Justice (DOJ) Civil Rights Division, Consumer Financial Protection Bureau (CFPB), and the Federal Trade Commission (FTC), the agencies highlighted their concerns that this emerging technology could impact civil rights, fair competition, consumer protection, and equal employment opportunities.
Automated systems, which use or are labeled as “artificial intelligence” or “AI,” are defined in the statement broadly to include “software and algorithmic processes … used to automate workflows and help people complete tasks or make decisions.”
“Today, our agencies reiterate our resolve to monitor the development and use of automated systems and promote responsible innovation,” the joint statement reads. “We also pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
Potential for ‘Unlawful Discrimination’
According to the joint statement, the agencies believe automated systems and AI have the potential to “produce outcomes that result in unlawful discrimination.” While the agencies acknowledged that such tools “can be useful,” they stated that they have concerns with how the tools rely on “vast amounts of data to find patterns or correlations” to “perform tasks or make recommendations and predictions.”
The joint statement highlighted three potential sources for discrimination:
- “Data and Datasets” – The statement pointed to the potential for outcomes to be “skewed” by “unrepresentative or imbalanced datasets” that “incorporate historical bias.”
- “Model Opacity and Access” – The statement raised concerns that automated systems can be “black boxes,” lacking transparency about how the system works.
- “Design and Use” – The statement warned that developers might not fully understand how the technology would be used and “may design a system on the basis of flawed assumptions about its users, relevant context, or the underlying practices and procedures it may replace.”
Equal Employment Concerns
The joint statement comes as the EEOC, under Chair Charlotte Burrows, has made scrutinizing employers’ use of automated systems and AI a priority, launching its Artificial Intelligence and Algorithmic Fairness Initiative. In announcing the joint statement, Burrows stated that the agency will use its “enforcement authorities to ensure AI does not become a high-tech pathway to discrimination.”
In January 2023, the EEOC held a hearing on the issue at which panelists called for the EEOC to take a greater role in evaluating the technology and to issue guidance concerning compliance with Title VII of the Civil Rights Act of 1964 and the Age Discrimination in Employment Act. The hearing followed the EEOC’s May 2022 release of technical assistance warning that the use of automated systems and AI to make employment-related decisions may violate the Americans with Disabilities Act (ADA). Specifically, in that assistance, the agency highlighted concerns that the tools may “screen out” job candidates with disabilities, fail to provide job candidates with reasonable accommodations, and create privacy concerns because candidates may be forced to disclose information about disabilities or medical conditions.
Key Takeaways
The federal agencies’ joint statement is the latest sign that regulators are taking a closer look at how automated decision-making technology is being used and its impact on the workplace and other arenas. A growing number of state and local jurisdictions, including Illinois and New York City, have also begun to regulate certain types of AI and algorithmic decision-making tools in employment decisions. Employers may want to consider reviewing the extent to which they are already using such tools and evaluating how the tools will be used going forward.
Ogletree Deakins’ Technology Practice Group will continue to monitor developments with respect to automated decision-making systems and AI in employment-related matters and will provide updates on the Technology, Cybersecurity and Privacy, Diversity and Inclusion, and Employment Law blogs as additional information becomes available. Important information for employers is also available via the firm’s webinar and podcast programs.