
Quick Hits

  • If signed into law by Governor Youngkin, Virginia’s High-Risk Artificial Intelligence Developer and Deployer Act (H.B. 2094) will go into effect on July 1, 2026, giving affected businesses plenty of time to understand and prepare for its requirements.
  • The legislation applies to AI systems that autonomously make, or significantly influence, consequential decisions in areas such as lending, housing, education, and healthcare, and potentially hiring as well.
  • Although H.B. 2094 excludes individuals acting in a commercial or employment context from the definition of “consumer,” the term “consequential decision” specifically includes decisions with a material legal or similar effect regarding “access to employment,” such that job applicants are ostensibly covered by the requirements and prohibitions under a strict reading of the text.

Overview

Virginia’s legislation establishes a duty of reasonable care for businesses employing automated decision-making systems in several regulated domains, including employment, financial services, healthcare, and other consequential sectors. The regulatory framework applies specifically to “high-risk artificial intelligence” systems that are “specifically intended to autonomously” render or be a substantial factor in rendering decisions—statutory language that significantly narrows the legislation’s scope compared to Colorado’s approach. A critical distinction in the Virginia legislation is the requirement that AI must constitute the “principal basis” for a decision to trigger the law’s anti-discrimination provisions. This threshold requirement creates a higher bar for establishing coverage than Colorado’s “substantial factor” standard.

Who Is a ‘Consumer’?

A central goal of this law is to safeguard “consumers” from algorithmic discrimination, especially where automated systems are used to make consequential decisions about individuals. The legislation defines a “consumer” as a natural person who is a resident of Virginia and who acts in an individual or household context. And, as with the Virginia Consumer Data Protection Act, H.B. 2094 contains a specific exclusion for individuals acting in a commercial or employment context.

One potential source of confusion is how “access to employment” can be a “consequential decision” under the law—while simultaneously excluding those in an employment context from the definition of “consumers.” The logical reading of these conflicting definitions is that job applicants do not act in an employment capacity on behalf of a business; instead, they are private individuals seeking employment for personal reasons. In other words, if Virginia residents are applying for a job and an AI-driven hiring or screening tool is used to evaluate their candidacy, a purely textual reading of the legislation suggests that they remain consumers under the statute because they are acting in a personal capacity.

Conversely, once an individual becomes an employee, the employee’s interactions with the business (including the business’s AI systems) are generally understood to reflect action undertaken within an employment context. Accordingly, if an employer uses a high-risk AI system for ongoing employee monitoring (e.g., measuring performance metrics, time tracking, or productivity scores), the employee might no longer be considered a “consumer” under H.B. 2094.

High-Risk AI Systems and Consequential Decisions

H.B. 2094 regulates only those artificial intelligence systems deemed “high-risk.” Such systems autonomously make—or are substantial factors in making—consequential decisions that affect core rights or opportunities, such as admissions to educational programs and other educational opportunities, approval for lending services, the provision or denial of housing or insurance, and, as highlighted above, access to employment. The legislature included these provisions to curb “algorithmic discrimination,” which refers to unlawful disparate treatment or unfair negative effects on the basis of protected characteristics, such as race, sex, religion, or disability, resulting from the use of automated decision-making tools. And, as we have seen with more narrowly focused laws in other jurisdictions, even if the developer or deployer does not intend to use an AI tool to engage in discriminatory practices, merely using an AI tool that produces biased outcomes may trigger liability.

H.B. 2094 also includes a list of nineteen types of technologies that are specifically excluded from the definition of a “high-risk artificial intelligence system.” One notable carve-out is “anti-fraud technology that does not use facial recognition technology.” This is particularly relevant as the prevalence of fraudulent remote-worker job applicants increases and more companies seek effective mechanisms to address such risks. Cybersecurity tools, anti-malware, and anti-virus technologies are likewise entirely excluded for obvious reasons. Among the other more granular exclusions, the legislation takes care to specify that spreadsheets and calculators are not considered high-risk artificial intelligence. Thus, those who harbor anxieties about the imminent destruction of pivot tables can breathe easy—spreadsheet formulas will not be subject to these heightened regulations.

Obligations for Developers

Developers—entities that create or substantially modify high-risk AI systems—are subject to a “reasonable duty of care” to protect consumers from known or reasonably foreseeable discriminatory harms. Before providing a high-risk AI system to a deployer (an entity that uses a high-risk AI system to make consequential decisions in Virginia), a developer must disclose certain information, including the system’s intended uses, its known limitations, the steps taken to mitigate algorithmic discrimination, and information intended to assist the deployer with performing its own ongoing monitoring of the high-risk AI system for algorithmic discrimination. Developers must update these disclosures within ninety days of making any intentional and substantial modifications that alter the system’s risks. Notably, developers are also required to either provide or, in some instances, make available extensive documentation relating to the high-risk AI tools they develop, including legally significant documents like impact assessments and risk management policies.

In addition, H.B. 2094 appears to take aim at “deep fakes” by mandating that if a developer uses a “generative AI” model to produce audio, video, or images (“synthetic content”), a detectable marking or other method that ensures consumers can identify the content as AI-generated will generally be required. The rules make room for creative works and artistic expressions so that the labeling requirements do not impair legitimate satire or fiction.

Obligations for Deployers

Like developers, deployers must also meet a “reasonable duty of care” to prevent algorithmic discrimination. H.B. 2094 requires deployers to devise and implement a risk management policy and program specific to the high-risk AI system they are using. Risk management policies and programs that are designed, implemented, and maintained pursuant to H.B. 2094, and which rely upon the standards and guidance articulated in frameworks like the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF) or ISO/IEC 42001, presumptively demonstrate compliance.

Prior to putting a high-risk AI system into use, deployers must complete an impact assessment that considers eight separate enumerated issues, including the system’s purpose, potential discriminatory risks, and the steps taken to mitigate bias. As with data protection assessments required by the Virginia Consumer Data Protection Act, a single impact assessment may be used to demonstrate compliance with respect to multiple comparable high-risk AI systems. Likewise, under H.B. 2094, an impact assessment prepared to demonstrate compliance with another law or regulation of similar scope and effect may be relied upon. In all cases, however, the impact assessment relied upon must be updated when the AI system undergoes a significant update and must be retained for at least three years.

Deployers also must clearly inform consumers when a high-risk AI system will be used to make a consequential decision about them. This notice must include information about:

(i) the purpose of such high-risk artificial intelligence system,
(ii) the nature of such system,
(iii) the nature of the consequential decision,
(iv) the contact information for the deployer, and
(v) a description, in plain language, of such system.

Any such disclosures must be updated within thirty days of the deployer’s receipt of notice from the developer of the high-risk AI system that it has made intentional and significant updates to the AI system. Additionally, the deployer must “make available, in a manner that is clear and readily available, a statement summarizing how such deployer manages any reasonably foreseeable risk of algorithmic discrimination that may arise from the use or deployment of the high-risk artificial intelligence system.” In the case of an adverse decision—such as denying a loan or rejecting a job application—the deployer must disclose the principal reasons behind the decision, including whether the AI system was the determining factor, and give the individual an opportunity to correct inaccuracies in the data or appeal the decision.

Exemptions, Cure Periods, and Safe Harbors

Although H.B. 2094 applies broadly, it exempts businesses that operate in certain sectors or engage in regulated activities for which equivalent or more stringent regulations are already in place. For example, federal agencies and regulated financial institutions may be exempted if they adhere to their own AI risk standards. Similarly, H.B. 2094 provides partial exemptions for Health Insurance Portability and Accountability Act (HIPAA)–covered entities and telehealth providers in limited situations, including where AI-driven systems generate healthcare recommendations that a licensed healthcare provider must implement, or where an AI system is used for administrative, quality measurement, security, or internal cost or performance improvement functions.

H.B. 2094 also contains certain provisions that are broadly protective of businesses. For example, the legislation conspicuously does not require businesses to disclose trade secrets, confidential information, or privileged information. Moreover, entities that discover a violation before the attorney general takes enforcement action may avoid liability if they promptly cure the issue and notify the attorney general. And the legislation contains a limited “safe harbor” in the form of a rebuttable presumption that developers and deployers of high-risk AI systems have met their duty of care to consumers if they comply with the applicable operating standards outlined in the legislation.

Enforcement and Penalties

Under H.B. 2094, only the attorney general may enforce the requirements described in the legislation. Nevertheless, enforcement could still carry significant consequences, as violations can lead to civil investigative demands, injunctive relief, and civil penalties. Generally, non-willful violations of H.B. 2094 may incur up to $1,000 in fines plus attorneys’ fees, expenses, and costs, while willful violations can result in fines of up to $10,000 per instance, along with attorneys’ fees, expenses, and costs. Notably, each violation is counted separately, so penalties can accumulate quickly if an AI system impacts many individuals.
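For a rough sense of how these per-violation penalties can add up, the short sketch below multiplies the statutory maximums by a purely hypothetical number of affected individuals. The count of 250 applicants is an assumption for illustration only, not a figure from H.B. 2094, and actual exposure would also include attorneys’ fees, expenses, and costs.

# Illustrative only: potential civil penalty exposure under H.B. 2094,
# assuming each affected individual represents a separate violation.
NON_WILLFUL_MAX = 1_000    # maximum civil penalty per non-willful violation
WILLFUL_MAX = 10_000       # maximum civil penalty per willful violation

affected_individuals = 250  # hypothetical number of applicants screened by one AI tool

non_willful_exposure = affected_individuals * NON_WILLFUL_MAX   # $250,000
willful_exposure = affected_individuals * WILLFUL_MAX           # $2,500,000

print(f"Non-willful exposure: ${non_willful_exposure:,}")
print(f"Willful exposure: ${willful_exposure:,}")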

Looking Forward

Even though the law would not take effect until July 1, 2026, if signed by the governor, organizations that develop or deploy high-risk AI systems may want to begin compliance planning. By aligning with widely accepted frameworks like the NIST AI RMF and ISO/IEC 42001, businesses may establish a presumption of compliance. And, from a practical perspective, this early adoption can help mitigate legal risks, enhance transparency, and build trust among consumers—which can be particularly beneficial with respect to sensitive issues like hiring decisions.

Final Thoughts

Virginia’s new High-Risk Artificial Intelligence Developer and Deployer Act signals a pivotal moment in the governance of artificial intelligence at the state level and is a likely sign of things to come. The law’s focus on transparent documentation, fairness, and consumer disclosures underscores the rising demand for responsible AI practices. Both developers and deployers must understand the scope of their responsibilities, document their AI processes and make sure consumers receive appropriate information about them, and stay proactive in risk management.

Ogletree Deakins will continue to monitor developments and will provide updates on the Cybersecurity and Privacy, Employment Law, and Virginia blogs.
