
Quick Hits

  • The Trump administration released a new AI action plan that presents a broad roadmap for AI development in the United States.
  • The plan aims to eliminate regulatory barriers and promote innovation in AI, promote the adoption of AI across the federal government, and increase workforce development in the private sector.
  • The plan further seeks to discourage states from imposing their own regulations by recommending that funding for AI projects be sent to states with favorable regulatory climates.

On July 23, 2025, the White House released a policy document titled “Winning the Race: America’s AI Action Plan,” setting forth a roadmap for federal AI policy structured around three pillars: (I) innovation, (II) infrastructure, and (III) international diplomacy and security. The plan could have significant implications for employers adopting and utilizing AI and similar technologies amid a patchwork of regulations across states and even cities.

Among its wide-ranging set of goals, the plan focuses on “remov[ing] red tape and onerous regulation” concerning AI, accelerating the adoption of AI across the federal government, and promoting AI education and workforce development. In particular, the plan directs certain federal funding decisions for AI-related projects to be guided by states’ regulatory climates, potentially pressuring states and local jurisdictions to avoid implementing new AI laws or regulations.

Shifting Policy and Regulatory Landscape

Although AI promises productivity and efficiency gains, potential risks remain—particularly in the area of automated decisionmaking tools. While the Biden administration previously sought to balance AI innovation with regulatory safeguards to protect employees and consumers (addressing concerns such as employment discrimination, privacy, and job displacement), President Trump has reversed course, rescinding previous executive orders designed to impose stronger oversight and controls.

The Trump administration’s new AI action plan signals a regulatory rollback, instructing the Office of Science and Technology Policy (OSTP) to solicit information from businesses and the public on federal regulations that “hinder AI innovation and adoption.” Further, the Office of Management and Budget (OMB) is directed to identify and revise or repeal regulations, guidance, administrative orders, and other federal policies that “unnecessarily hinder AI development or deployment.”

New Limits on State Regulation?

Crucially, the action plan addresses the interplay between federal and state regulation. Several states and local jurisdictions—including California, Colorado, Illinois, New York City, Texas, and Utah—have implemented AI laws or regulations, and observers have anticipated that state and local jurisdictions will continue to adopt new employee and consumer protections in the absence of federal action.

However, the administration’s new action plan recommends that OMB work with federal agencies with “AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” (Emphasis added.)

That recommendation comes after congressional lawmakers proposed a ten-year moratorium on state AI regulations in the recently passed federal spending bill, only to drop the provision shortly before passage. These efforts could influence future state laws and regulations.

Further, the plan recommends that Federal Trade Commission (FTC) investigations initiated under the prior administration “ensure that they do not advance theories of liability that unduly burden AI innovation.” The plan also suggests that FTC final orders, consent decrees, and injunctions be reviewed and modified or set aside to the extent that they “unduly burden AI innovation.”

Addressing Workforce and Labor Market Implications

In addition, the action plan focuses on AI’s workforce implications, emphasizing the need for AI literacy and skills development to ensure that American workers are equipped to thrive in an AI-driven economy. The plan follows President Trump’s EO 14277 to promote education on and integration of AI in K-12, higher education, and workplace settings through public-private partnerships with industry leaders and academic institutions.

Specifically, the AI action plan recommends that the U.S. Department of the Treasury issue guidance clarifying that AI literacy and skills development programs may qualify for eligible educational assistance as a tax-free working condition fringe benefit under Section 132 of the Internal Revenue Code. The plan also recommends that the U.S. Department of Labor (DOL) establish an “AI Workforce Research Hub” to evaluate and mitigate the impact of AI on the labor market, including funding retraining for individuals impacted by AI-related job displacement.

Next Steps

Employers may want to review their use of AI in the workplace and consider the potential benefits. They may also want to invest in training programs and collaborate with educational institutions to prepare their workforce for the technological advancements brought about by AI.

At the same time, potential AI risks remain, implicating existing federal, state, and local laws on antidiscrimination, privacy, and automated decisionmaking tools. It is unclear whether states and local jurisdictions will limit enforcement of existing AI regulations and/or delay implementing new laws and regulations. Regardless of state regulatory approaches, employers may want to continue implementing policies and procedures that set forth reasonable guardrails that allow for innovation and expanded AI use while limiting potential risks associated with the use of automated decisionmaking tools in the workplace.

Moreover, employers may want to audit the results of AI when used to make employment decisions to promote fairness, accuracy, and appropriate human oversight, as well as evaluate whether it would be appropriate to provide affected employees with the opportunity to appeal or request review of decisions made or materially influenced by AI.

Ogletree Deakins’ Cybersecurity and Privacy Practice Group and Technology Practice Group will continue to monitor developments and will provide updates on the Cybersecurity and Privacy, Employment Law, and Technology blogs as additional information becomes available.

This article and more information on how the Trump administration’s actions impact employers can be found on Ogletree Deakins’ New Administration Resource Hub.
