
Quick Hits

  • President Biden signs first-of-its-kind executive order addressing the growing use and development of artificial intelligence.
  • The order directs various federal agencies to take certain exploratory steps in the coming year.
  • Most relevant for employers, the EO focuses on new immigration policy, civil rights issues, wage-and-hour compliance, and labor risks and opportunities.

As various state, local, and international jurisdictions continue to propose and enact legislation relating to AI, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is the first broad-based roadmap of the federal government’s approach to the quickly developing AI space.

Consistent with trends in other AI-regulatory developments, the Biden Administration’s focus is on both procedural safeguards surrounding AI (such as auditing and vetting of AI tools) and substantive safeguards (steps designed to ensure, for example, equity and fairness in the use of AI tools).

A core theme of the EO is the inherent tension between the promise of AI to help “solve urgent challenges while making our world more prosperous, productive, innovative, and secure” and the potential perils of irresponsible use, “such as fraud, discrimination, bias, and disinformation.” The EO lays out the administration’s initial playbook for balancing these promises and perils by encouraging the development of AI through various mechanisms while also setting in motion exploratory steps to impose checks and balances on AI use and development.

Guiding Principles

As relevant to the workplace, the EO identifies certain guiding principles and priorities for the development and implementation of AI, including:

“Artificial Intelligence must be safe and secure.”

According to the EO, meeting this goal “requires robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use.” The concept of testing and evaluation of AI systems is not new in the evolving guidance on workplace AI use. For example, mandatory testing is required in New York City’s law regulating the use of automated employment decision tools (AEDTs), and the Equal Employment Opportunity Commission (EEOC) has also recommended employers implement testing of AI tools pre- and post-implementation.

“The responsible development and use of AI require a commitment to supporting American workers.”

The EO ties this commitment to labor issues, flagging that as AI “creates new jobs and industries, all workers need a seat at the table, including through collective bargaining, to ensure that they benefit from these opportunities.” The EO further states that “[i]n the workplace itself, AI should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions.”

“Artificial Intelligence policies must be consistent with [the] Administration’s dedication to advancing equity and civil rights.”

The EO further states that the administration will seek “to ensure that AI complies with all Federal laws and to promote robust technical evaluations, careful oversight, engagement with affected communities, and rigorous regulation.” Interestingly, the EO underscores guidance previously provided by the EEOC that both developers and users of AI may have potential liability, stating that “[i]t is necessary to hold those developing and deploying AI accountable to standards that protect against unlawful discrimination and abuse.”

Immigration

The EO contains new immigration policies focused on spearheading United States leadership in AI development. For example, the EO instructs the Department of State (DOS) and the Department of Homeland Security (DHS) to streamline visa processing times for noncitizens traveling to the United States to “work on, study, or conduct research in AI or other critical and emerging technologies.” Additionally, the EO encourages DHS to evaluate rulemaking to ease the process for noncitizens, “including experts in AI and other critical and emerging technologies and their spouses, dependents, and children, to adjust their status to lawful permanent resident.”

Further, the EO instructs the Department of Labor (DOL) to solicit public input on AI and other science, technology, engineering, and mathematics (STEM) labor shortages for purposes of considering updates to the “Schedule A” list of occupations. Finally, the EO encourages the State Department to initiate changes to the J-1 exchange visitor program to attract and retain experts in AI and related fields. Through these—and likely further—steps, the Biden administration seeks to continue to position the United States at the forefront of AI progress.

Department of Labor

The EO also requires the DOL to take certain actions designed to address the risks of AI in the workplace.

1. AI-induced worker displacement

First, the EO requires the DOL, within 180 days, to issue a “report analyzing the abilities of agencies to support workers displaced by the adoption of AI and other technological advancements.”

2. Best practices

The EO further requires the DOL, within 180 days, to “develop and publish principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits.” The EO requires these best practices to address, at a minimum:

  • “[J]ob-displacement risks and career opportunities related to AI, including effects on job skills and evaluation of applicants and workers”;
  • “[L]abor standards and job quality, including issues related to the equity, protected-activity, compensation, health, and safety implications of AI in the workplace”; and
  • “[I]mplications for workers of employers’ AI-related collection and use of data about them, including transparency, engagement, management, and activity protected under worker-protection laws.”

Notably, to develop these standards, the EO directs the DOL to consult with “other agencies and with outside entities, including labor unions and workers, as the Secretary of Labor deems appropriate.” Absent from this list is a direction that the secretary consult with the employer community. The best practices structure contemplated by the EO may ultimately look similar to the technical assistance guidance issued by the EEOC relating to employer use of AI, which recommended employers deploy various “promising practices” as they use technology like AI.

3. Wage-and-hour

Interestingly, the EO requires the DOL to “issue guidance to make clear that employers that deploy AI to monitor or augment employees’ work must continue to comply with protections that ensure that workers are compensated for their hours worked, as defined under the Fair Labor Standards Act.” Wage-and-hour issues have not historically been a focus in the development of AI employment regulatory guidance; this latest guidance may be a sign of more to come in this area.

4. Federal contractors

Finally, the EO requires the DOL to publish, within the year, “guidance for Federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems.”

Next Steps

The EO follows up on the Biden administration’s “Blueprint for an AI Bill of Rights,” released in October 2022, which outlined nonbinding recommendations for the design, use, and deployment of AI and automated decision-making systems, as well as guidance from the EEOC on the potential disparate impact of employers’ use of such technology.

As companies increasingly utilize and rely on AI for a variety of purposes, including making employment decisions, regulatory bodies will continue to issue guidance in this area.

Employers may want to consider reviewing their current and planned use of AI, and its impact on employees, in light of increasing federal scrutiny, and to watch carefully for additional forthcoming guidance.

Ogletree Deakins will continue to monitor developments and will provide updates on the Employment Law and Technology blogs.
