
Quick Hits

  • The EU AI Act took effect on August 1, 2024, and carries extraterritorial reach. U.S. employers can be covered even without a physical EU presence if AI outputs are intended to be used in the EU—e.g., recruiting EU candidates, evaluating EU-based workers or contractors, or deploying global HR tools used by EU teams.
  • Several obligations have begun to take effect, with more phasing in through 2026–27. The European Commission also released a voluntary General-Purpose AI (GPAI) Code of Practice to streamline compliance for model providers.
  • Employers’ use of AI in the workplace will be treated as potentially “high risk,” triggering duties such as worker notice, human oversight, monitoring for discrimination, logging, and adherence to applicable privacy laws once the core high-risk system requirements begin taking effect in August 2026.
  • Now is the time to inventory HR and workforce AI, align contracts and governance, and operationalize notice, oversight, and recordkeeping to meet EU requirements alongside evolving U.S. federal and state AI rules.

This is the third article in a four-part series aligned with Cybersecurity Awareness Month, which occurs annually in October. Part 1 discusses practical compliance tips for U.S. privacy leaders handling data rights requests, Part 2 addresses data rights requests in Canada, and Part 4 covers considerations for the responsible use of artificial intelligence (AI) and automated decision-making tools (ADMTs).

Why the EU AI Act Can Apply to U.S. Employers

The EU AI Act adopts a risk-based framework and applies extraterritorially where AI systems or their outputs are intended to be used in the EU. In practice, this means a U.S. employer may have EU AI Act obligations if it:

  • uses AI-enabled recruiting or screening tools for roles open to EU candidates, even if the hiring team and systems are U.S.-based;
  • applies AI to evaluate performance, promotion, or termination decisions for EU-based employees, contractors, or contingent workers; or
  • operates global HR or IT platforms that incorporate AI functionalities accessible to EU establishments.

For instance, a U.S. company using an AI-powered résumé screener for a global applicant pool could be covered if that system ranks or filters EU-based candidates. In practice, if any AI output influences employment outcomes within the EU, even indirectly, the law can potentially apply.

The law treats many workplace AI uses as “high risk.” While the most prescriptive requirements fall on “providers” that build high-risk AI, “deployers” (i.e., employers that implement those tools) also have direct obligations. As with the EU’s General Data Protection Regulation (GDPR), penalties can be significant—rising to the greater of multimillion-euro fines or a percentage of worldwide annual turnover, depending on the breach category.

Where Implementation Stands Today and What to Watch Next

The AI Act entered into force on August 1, 2024, following adoption by EU institutions earlier that year. For employers, the bottom line is twofold: some obligations already apply, and EU institutions are pressing forward on timelines and supporting guidance rather than pausing implementation:

  • Prohibited AI practices and AI literacy obligations took effect first, on February 2, 2025, requiring discontinuation of certain banned uses, particularly in HR contexts, such as emotion recognition in workplaces or biometric categorization.
  • Codes of practice have begun to be published. For example, the Commission has released the voluntary General-Purpose AI (GPAI) Code of Practice and guidance in the form of frequently asked questions (FAQs) to support transparency and model documentation.
  • Governance, supervision, and penalty frameworks have begun to apply before the high-risk system obligations fully phase in, signaling that enforcement infrastructure is underway.
  • February 2, 2026: Guidance expected on compliance for high-risk AI systems and illustrative examples clarifying which workplace and HR uses of AI (e.g., recruiting, promotion, performance evaluation) qualify as high-risk systems, helping employers determine which tools must meet the AI Act’s documentation, oversight, and logging requirements.
  • August 2026–August 2027: High-risk system obligations fully apply, with a narrow subset delayed until August 2, 2027. Employers will be required to ensure human oversight, worker notice, and logging processes are operational by this point.

Employers will want to monitor additional Commission guidance and national implementation activities by EU member states, which may introduce supplemental detail or supervisory expectations impacting HR deployments.

What Counts as ‘High Risk’ in the Workplace—And What Employers Must Do

The AI Act’s risk tiers run from “unacceptable” (banned) to “high,” “limited,” and “minimal.” In the employment context, AI used for recruiting, screening, selection, performance evaluation, or other employment-related decision-making is explicitly listed as high risk.

In practical terms, this means that many AI tools already used in HR, such as chatbots that screen candidates, résumé-ranking software, or productivity analytics used in performance reviews, may fall within the AI Act’s “high-risk” category. Employers may also want to be careful about relying upon vendor assurances of compliance without independent validation.

In summary, employer obligations are as follows:

  • Worker notification is required before implementing high-risk AI in the workplace, including informing workers’ representatives where applicable.
  • Human oversight must be established by individuals with appropriate competence, training, authority, and support. Oversight should be meaningful, with the ability to intervene and override outputs where necessary.
  • Monitoring is required to detect issues such as discrimination or adverse impacts, with prompt suspension of use and notification obligations where issues arise.
  • Logs automatically generated by an AI system must be maintained for an appropriate period, with a six-month minimum retention baseline.
  • Data privacy compliance remains essential, including alignment with EU privacy laws that may apply to HR data processing and cross-border transfers.
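The logging obligation above can be made concrete with a small, purely illustrative sketch: a retention check that enforces the AI Act's six-month minimum baseline alongside a potentially longer internal policy. The function name, the 183-day approximation of six months, and the policy parameter are assumptions for illustration, not terms drawn from the Act.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative only: the AI Act sets a six-month minimum baseline for
# automatically generated logs; internal policy may require longer retention.
MIN_RETENTION = timedelta(days=183)  # rough six-month approximation

def eligible_for_deletion(log_created_at: datetime,
                          policy_retention: timedelta = MIN_RETENTION,
                          now: Optional[datetime] = None) -> bool:
    """Return True only when a log is past BOTH the legal baseline and
    the (possibly longer) internal policy period."""
    now = now or datetime.now(timezone.utc)
    effective = max(policy_retention, MIN_RETENTION)
    return now - log_created_at >= effective
```

A deletion job built on a check like this would never purge logs inside the statutory window, even if an internal schedule were misconfigured to a shorter period.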

Planning for Compliance

  1. Mapping the corporate AI footprint in HR and beyond. This includes inventorying where AI or algorithmic logic influences employment decisions, candidate sourcing, screening, assessments, performance management, scheduling, or compensation. Mapping also includes identifying whether outputs are used for or affect EU candidates, employees, or contingent workers. Employers may also want to classify use cases against the AI Act’s risk tiers and flag any functionality that could edge toward prohibited categories.
  2. Assigning internal roles and accountability. Employers may want to clarify who acts as a “deployer” within the organization for each tool, who owns worker notices, who is responsible for human oversight design and day-to-day review, and who tracks compliance and metrics. Employers may also want to establish escalation paths if potentially discriminatory or inaccurate outputs are detected.
  3. Operationalizing worker notice and human oversight. Building template notices for workers and, where needed, workers’ representatives that explain the AI system’s purpose and use is an important compliance tool. Template notices typically include defined oversight procedures specifying when humans must review, override, or decline to rely on AI outputs, and how those interventions are documented. Employers may want to ensure that overseers are trained on the technology, its limitations, and AI bias risks.
  4. Strengthening vendor diligence and contracts. Requiring providers to supply “instructions of use,” model documentation, and transparency information aligned to the AI Act is another key compliance tool. Employers may want to embed cooperation commitments for discrimination monitoring, prompt remediation, logging, incident reporting, and audit support. Employers may also want to ensure deletion, correction, and security obligations are extended through sub-processors.
  5. Implementing logging and recordkeeping. Employers may want to confirm that high-risk systems automatically generate adequate logs and that the retention schedule meets or exceeds the AI Act’s minimums. Ensuring that logs can support investigations, fairness reviews, and regulator inquiries is a step included in this task.
  6. Monitoring for discriminatory or adverse impacts. Establishing metrics, thresholds, and cadence for fairness and accuracy reviews will help employers maintain compliance with the AI Act. If issues arise, consider suspending use, notifying as required, and coordinating with providers to address root causes, retraining, or configuration changes.
  7. Aligning with privacy and data governance. Employers may want to cross-check AI deployments against EU and U.S. privacy regimes, including transparency, purpose limitation, data minimization, access controls, and security requirements. Harmonizing HR data handling across jurisdictions will help avoid fragmented practices and inconsistent risk controls.
  8. Calibrating globally to avoid conflicts. The U.S. landscape is evolving—federal agencies have issued AI-related guidance, and states and cities continue to explore rules affecting employment decision tools. Design controls that satisfy EU requirements while anticipating U.S. developments to reduce rework and ensure consistency across the company’s global HR stack.
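The inventory and classification work in steps 1 and 2 could be sketched, purely for illustration, as a simple data structure that records each tool and flags its provisional risk tier. The class, the keyword lists, and the classification shortcut below are hypothetical assumptions; an actual risk-tier determination requires legal analysis, not keyword matching.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative keyword sets loosely mirroring the uses discussed above;
# not an authoritative mapping of the AI Act's categories.
HIGH_RISK_HR_USES = {"recruiting", "screening", "selection",
                     "performance evaluation", "promotion", "termination"}
PROHIBITED_HR_USES = {"emotion recognition", "biometric categorization"}

@dataclass
class AITool:
    """One row in a hypothetical corporate AI inventory."""
    name: str
    vendor: str
    uses: set
    affects_eu_workers: bool
    risk_tier: RiskTier = field(init=False)

    def __post_init__(self):
        # Provisional triage only; flagged tools go to counsel for review.
        if self.uses & PROHIBITED_HR_USES:
            self.risk_tier = RiskTier.PROHIBITED
        elif self.affects_eu_workers and self.uses & HIGH_RISK_HR_USES:
            self.risk_tier = RiskTier.HIGH
        else:
            self.risk_tier = RiskTier.MINIMAL  # placeholder pending review
```

Even a spreadsheet-level inventory structured this way makes it easier to assign owners per tool (step 2) and to spot functionality edging toward prohibited categories (step 1).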

How the EU AI Act Fits Within a Broader Compliance Posture

For many organizations, the AI Act will layer onto existing data subject access request (DSAR) and privacy workflows, ethics reviews, and vendor management practices established under U.S. state privacy laws and the GDPR. Lessons learned from data privacy compliance, such as notification and recordkeeping obligations, translate directly to AI governance.

For in-house teams already managing privacy impact assessments, bias audits, or vendor risk reviews, integrating AI Act compliance into those existing workflows is often the most efficient approach. EU authorities, including the new AI Office, are continuing to develop sector guidance and templates for compliance documentation.

Ogletree Deakins’ Cross-Border Practice Group, Cybersecurity and Privacy Practice Group, and Technology Practice Group will continue to monitor developments and provide updates on the Cross-Border, Cybersecurity and Privacy, and Technology blogs as additional information becomes available.
