Several federal agencies—including the U.S. Department of Homeland Security (DHS), U.S. Department of Energy (DOE), U.S. Department of State (DOS), U.S. Department of Veterans Affairs (VA), the Consumer Financial Protection Bureau (CFPB), the U.S. General Services Administration (GSA), the U.S. National Archives and Records Administration (NARA), the Federal Reserve Board, and others—have now published AI strategies in response to M-25-21. This first wave of plans confirms a federal approach oriented toward accelerating AI adoption and foreshadows operational implications for contractors and grant recipients. Agencies are also working toward an impending end-of-year deadline to finalize detailed policies that will tighten expectations and documentation for AI used by contractors in federal work.
Here, we address some of the common goals and restrictions agencies are adopting in their AI strategies, the immediate lessons for contractors and grant recipients, the practical labor and employment implications, and what these moves portend for the future of AI in federal contracting.
Quick Hits
- As the number of covered agencies with published AI strategies pursuant to OMB Memorandum M-25-21 grows, companies with federal contract work gain new insights into best practices, and sometimes risk management warnings, from their partner agencies.
- Pursuant to OMB Memorandum M-25-22, further clarity on agency expectations regarding AI acquisition is expected from individual covered agencies by December 29, 2025.
- Federal contractors may want to review their partner agencies’ strategic AI plans to ensure their internal policies and procedures align with the government’s expectations.
M-25-21: Accelerate Adoption (With Guardrails)
Agencies with published AI strategy plans converge on several goals, including scalable AI infrastructure, quality data, an AI-ready workforce, and proportional risk governance.
- Standardized, secure AI development and testing are critical to ensure secure pathways to production. In other words, AI must operate within existing security and compliance boundaries. For example, one of DHS’s goals for enabling AI infrastructure involves shifting to a continuous authorization model to implement a secure-by-design policy and provide AI developers with access to existing, secure systems.
- Agencies are building catalogues that prioritize AI data standards and traceability. The DOE has established a robust data governance structure to ensure consistent and effective data management across its enterprise. It has also established a chief data officer position, a chief artificial intelligence officer position, and data and AI governance boards to lead data initiatives and enforce policies for managing data.
- Agencies are scaling AI literacy for all personnel while recruiting for specialized roles such as data science, machine learning (ML) engineering, model evaluation, AI ethics, and cybersecurity. GSA touts its investment in training opportunities and community-building, offering agency-wide learning sessions and hosting initiatives such as “Friday Demo Days,” where employees share their generative AI projects.
- Agencies seek to foster public trust when using high-impact AI—AI with an output that will serve as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on an individual’s or entity’s rights, liberties, privacy, safety, and more. For such systems, M-25-21 directs agencies to implement minimum risk management practices, such as pre-deployment testing, impact assessments, human oversight, continuous monitoring, and appeal mechanisms. Several agencies are operationalizing these requirements through their chief artificial intelligence officers (CAIOs). NARA has implemented an “AI Use Case Inventory,” empowered its CAIO to grant waivers only in “exceptional circumstances,” and will only do so “with a clear, written determination that adhering to a practice would increase risk or impede critical operations.” Likewise, the VA’s CAIO and the CFPB’s chief information officer will each have the authority to suspend or terminate any AI uses that fail to meet minimum safeguards.
For contractors, the shift in federal AI policy provides guidance on agencies’ expectations. As federal agencies roll out AI plans that accelerate the adoption of automated tools across the employment lifecycle, organizations should anticipate parallel compliance obligations and risk management needs. Companies that do substantial business with the federal government, especially with agencies that have released an AI strategy, may want to consider adopting clear acceptable-use policies for AI that prohibit the entry of sensitive client or controlled unclassified information (CUI) into unapproved tools to align with their partner agencies’ expectations as expressed in their strategy plans. Likewise, the use of AI in hiring, promotion, termination, or other high-impact decisions directly implicates federal anti-discrimination law, recent U.S. Equal Employment Opportunity Commission (EEOC) guidance addressing algorithmic decision-making, Office of Federal Contract Compliance Programs (OFCCP) requirements pertaining to protected veteran and disability status, and a growing patchwork of state and local automated employment decision tool regimes. In addition, AI-enabled workplace monitoring and productivity systems such as timekeeping, productivity scoring, and surveillance can trigger National Labor Relations Act (NLRA) concerns around interference with protected concerted activity, privacy obligations under state and federal laws, and wage-and-hour risks.
For federal contractors, it is best practice for any systems used for these purposes to mirror the partner agency’s minimum risk-management practices and require human validation of outputs. And employers generally may want to consider proactively testing for and remediating adverse impacts, implementing human-in-the-loop controls for high-stakes decisions, ensuring clear notice and consent where required, and providing reasonable accommodations for applicants and employees interacting with AI systems. Finally, as agencies invest in workforce literacy, training paths, clear workflows for AI use cases, and role-specific competence, entities competing for federal work or grants that do the same may have an advantage.
M-25-22: More to Come on AI Acquisition
M-25-22 imposes guardrails on the procurement of AI systems that contractors are particularly likely to feel. M-25-22 applies to any contract awarded pursuant to a solicitation issued on or after September 30, 2025. Presently, OMB’s memorandum broadly requires agencies to include contract terms barring vendors from using non-public government data to train publicly or commercially available AI algorithms without explicit consent. Relatedly, contracts must clearly delineate ownership and intellectual property (IP) rights of the government and contractor, data portability, and long-term interoperability. Additionally, M-25-22 urges agencies to “maximize” the use of AI products and services that are developed and produced in the United States.
In addition to the agency strategy plans issued in September 2025, covered agencies have until December 29, 2025, to revisit and update, where necessary, existing internal procedures on acquisition to comply with OMB’s requirements and ensure that their respective agencies’ use of acquired AI conforms to OMB Memorandum M-25-21. Those policies must, at a minimum, enable relevant agency officials to:
- review planned acquisitions of AI systems or services, and provide feedback on AI performance and risk management practices;
- convene a cross-functional team of relevant officials to coordinate and make decisions regarding the acquisition; and
- ensure the use of appropriate contract terms for IP rights.
In practice, the December 29, 2025, policies will likely clarify:
- how agencies will require contractors to identify and document AI used in performance, particularly where federal contractor information (FCI) and CUI are processed;
- minimum documentation for high-impact uses;
- recordkeeping expectations for training and evaluation methodologies; and
- sourcing preferences and restrictions to effectuate OMB’s goal of maximizing AI products and services that are developed and produced in the United States.
Implications and Practical Considerations for Contractors
Between M-25-21 and the agency AI strategies issued under it, and M-25-22 and the forthcoming agency policy revisions it requires, agencies are signaling their alignment with some common patterns. Contractors may not want to wait for contract-by-contract direction. Internal readiness will be critical to certify compliance and respond promptly to documentation requests. Critical steps include the following:
- Matching governance to risk. NARA, for example, will establish a rigorous, multi-layered control system for high-impact AI that includes comprehensive risk assessments by the system owner, a mandatory independent validation by technical and cybersecurity teams, and formal, written risk acceptance from a designated senior official. While approvals for low-risk uses could be quick, when the use of AI implicates rights or safety, contractors can likely expect longer review periods, tighter security, and more burdensome oversight.
- Ensuring secure AI architecture. The DOE’s testbeds and DHS’s application programming interface (API) gateways suggest agencies may favor enterprise channels for shared access. Contractors that already have such architecture in place may be better positioned to hit the ground running.
- Publishing annual public inventories, waiver disclosures, and other notices as required. Like many other agencies, the Federal Reserve’s AI Program Team will maintain records of all determinations and waivers, report to OMB within thirty days of annual certifications and significant modifications, and otherwise fulfill the transparency requirements of M-25-21’s Section 4(a)(iv) by sharing summaries of the above with OMB and the public. Contractors likewise can expect their AI roles to be visible, and prepare public-facing explanations that are accurate, accessible, and justifiable.
- Continuous monitoring and reevaluation throughout the AI life cycle. The DOE has identified the need to invest in secure, scalable, and high-performance AI through development, testing, deployment, and continuous monitoring. For contractors, logging, evaluation, and retention protocols will be a priority.
- Maximizing the use of AI products and services developed and produced in the United States. Subject to details in the agencies’ December 29 plans, agencies are likely to increasingly scrutinize origin, ownership, supply chains, and security posture when using certain foreign technologies.
Conclusion
Federal agencies are moving quickly to implement a consistent AI blueprint, scaling adoption through secure platforms, workforce readiness, safeguards for high-impact uses, and transparency measures. The December 29, 2025, policy deadline will crystallize these expectations, particularly for contractors. And as contract language is standardized in the Federal Acquisition Regulation (FAR) and agency supplements, clarity on this evolving technology will both allow for more predictable uses and introduce additional compliance burdens and enforcement risks. Early adopters of AI inventories, strong data governance policies, consistent human oversight, rigorous testing, and alignment with partner agency expectations will be best positioned to avoid costly retrofitting.
Ogletree Deakins’ Cybersecurity and Privacy, Government Contracting and Reporting, Governmental Affairs, Technology, and Workforce Analytics and Compliance Practice Groups will continue to monitor developments and will provide updates on the Cybersecurity and Privacy, Government Contracting and Reporting, Governmental Affairs, Technology, and Workforce Analytics and Compliance blogs as additional information becomes available.
This article and more information on how the Trump administration’s actions impact employers can be found on Ogletree Deakins’ Administration Resource Hub.