Virginia’s AI Legislative Saga: The Vetoed Omnibus Bill and the Path to Targeted Governance

The 2025 legislative session in Virginia became a pivotal moment in the national conversation surrounding artificial intelligence governance. The session's efforts culminated in the passage and subsequent veto of House Bill 2094 (HB 2094), the High-Risk Artificial Intelligence Developer and Deployer Act. This episode serves as a microcosm of the broader tension across the United States between fostering technological innovation and establishing accountability frameworks for rapidly evolving AI systems. As of January 2026, Virginia's AI policy landscape is defined by the rejection of this comprehensive approach and the success of a more narrowly tailored legislative measure.
The National Context: Virginia’s Position in the Broader Landscape of State AI Regulation
Virginia's legislative endeavors in 2025 did not occur in a vacuum; rather, they placed the Commonwealth directly into a national contest among states seeking to pioneer artificial intelligence governance. The fate of HB 2094 made it a subject of intense scrutiny across the nation. This legislative battle was fought against a shifting federal backdrop that openly prioritized innovation over what it deemed "cumbersome" state-level regulation. On January 23, 2025, President Trump issued Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence," which expressly revoked previous federal efforts toward broader governance and directed the administration to actively remove barriers to AI leadership. This policy pivot toward deregulation lent significant weight to the Governor's argument that heavy state-level regulation could place Virginia at a competitive disadvantage relative to states adopting a more hands-off approach.
The story did not end there. The federal stance solidified further when President Trump signed a subsequent executive order on December 11, 2025, titled "Ensuring a National Policy Framework for Artificial Intelligence," which sought to curb the proliferation of state AI laws viewed as inconsistent with a minimally burdensome national standard. This national policy direction strongly influenced the political climate surrounding the veto of HB 2094, as the Governor echoed concerns that such regulation could stifle Virginia's growing tech economy.
Comparative Regulatory Benchmarks: Drawing Parallels with Neighboring States
The High-Risk Artificial Intelligence Developer and Deployer Act was frequently compared to legislation already enacted or being debated in other jurisdictions. Specifically, the Virginia bill was seen as following in the footsteps of Colorado's AI consumer protection law (Senate Bill 24-205), which had been signed into law the previous year, in May 2024. Both legislative concepts shared the goal of imposing transparency and accountability requirements on developers and deployers of AI systems used for consequential decisions in areas like employment, housing, and finance. By aiming to be the nation's second "horizontal" state AI law, Virginia's attempt was positioned as a potential trendsetter or a cautionary tale for states like Connecticut and New Mexico, which were also moving forward with their own AI guardrails and anti-discrimination proposals.
However, critical distinctions emerged between the Virginia proposal and its Colorado counterpart. While both addressed algorithmic discrimination, HB 2094 introduced a higher threshold for triggering the law's main provisions. Virginia's proposed legislation required that an AI system constitute the "principal basis" for a consequential decision, creating a stricter standard than Colorado's "substantial factor" test. This nuance suggested that Virginia's bill, despite its ambition, included more industry-friendly language aimed at narrowing the scope of high-risk applications compared to the Colorado Act.
Enforcement Mechanisms and the Role of the State’s Chief Legal Officer
A critical element embedded within the structure of the proposed Act, and one that carried forward into the successful judicial oversight bill, was the designation of a clear enforcement authority and a structure for penalties. The regulatory framework designed in HB 2094 sought to centralize enforcement power, a common feature in modern state consumer protection laws.
Establishing Penalties and the Limits of Private Litigation
The enforcement power for violations of the vetoed HB 2094 was vested exclusively in the Attorney General of the Commonwealth. This centralized authority would be empowered to issue civil investigative demands and bring actions in circuit court to enjoin violations, even in the absence of proven monetary damages.
- Non-Willful Violations: Civil penalties were capped at $1,000 per violation, plus associated legal costs.
- Willful Violations: The penalty structure escalated significantly, ranging from $1,000 to $10,000 per violation.
- Earmarking of Funds: All collected penalties were earmarked for the state's Literary Fund.
Crucially, the legislation was explicit that it would not create any private cause of action in favor of any aggrieved person, ensuring that enforcement remained a matter of state action rather than opening the floodgates to class-action lawsuits based solely on non-compliance with the AI statute.
Navigating Exemptions and Preserving Intellectual Property
No comprehensive regulatory scheme for technology can succeed without clearly delineating the boundaries of its applicability, especially concerning national security, existing regulated sectors, and proprietary information. The proposed Virginia framework included an extensive section dedicated to these carve-outs, reflecting the need to balance regulation with commercial viability.
Safeguarding Trade Secrets and Federal/Industry Conformity
To maintain Virginia's attractiveness as a hub for technological development, the bill contained robust protections for intellectual property in the face of enforcement action. Specifically, when responding to a civil investigative demand from the Attorney General, a developer or deployer could redact or omit any "trade secret" or information protected by state or federal law. A developer exercising this right was required to affirmatively state to the Attorney General that the basis for nondisclosure was that the information constituted a trade secret.
Furthermore, the legislation recognized the existing regulatory expertise within specialized fields. It provided exemptions for entities already subject to stringent oversight, ensuring that existing state regulatory bodies maintained their jurisdiction. These included:
- Financial institutions under state or federal regulatory oversight.
- Insurance companies regulated by the State Corporation Commission (SCC), provided their existing guidance and audits were substantially equivalent to the proposed AI risk mitigation standards.
Systems acquired by or for the federal government were also generally exempted, though exceptions remained for federal systems used in Virginia for housing or employment decisions concerning state residents. This layered approach demonstrated an attempt to accommodate regulated industries, a measure that distinguished it from broader frameworks like the EU AI Act.
Implications and Forward Trajectory for Technological Governance in the Commonwealth
Even with the primary omnibus bill vetoed on March 24, 2025, the intense legislative engagement of 2025 has irrevocably shaped the trajectory of AI policy in Virginia, setting a clear expectation for accountability that will influence future debates. The Governor's veto message explicitly referenced the existing governance structure established under Executive Order No. 30 (2024), which mandated AI standards for state agencies and created an ongoing Artificial Intelligence Task Force, ensuring that the administration remained engaged in responsible AI use, at least within the Executive Branch.
Human Oversight as a Successful Compromise in a Specific Domain
The most significant legislative success of the session, and a clear contrast to the vetoed omnibus bill, was the passage of House Bill 1642 (HB 1642), which mandated that any artificial intelligence used in judicial processes be overseen by a qualified human. The law took effect on July 1, 2025, making Virginia the first state to mandate human oversight of AI in its judiciary.
This targeted success demonstrates that the legislature and the executive branch could find common ground when focusing on specific, high-risk use cases where the integrity of foundational state functions is at stake. The principle established, that AI cannot be the sole basis for judicial decisions or the certification of legal transcripts, provides a clear, actionable precedent. It is likely to serve as a model for other sectors in future legislative sessions, with the Commonwealth perhaps moving sector by sector rather than through a single, broad statute.
The Evolution of Responsible Technology in the Commonwealth
The entire saga of the High-Risk Artificial Intelligence Developer and Deployer Act, from its introduction through its passage and subsequent veto, encapsulates the state's current approach: acknowledging the profound societal shifts driven by AI, attempting to codify comprehensive risk management, facing resistance based on innovation concerns, and ultimately settling on targeted, actionable governance where consensus is achievable. The groundwork laid in defining terms like "algorithmic discrimination" and the criteria for a "consequential decision" will undoubtedly form the lexicon for any subsequent AI legislation introduced in the Commonwealth. The debate has effectively shifted the focus from a broad regulatory net to specific, high-impact domains, which may prove a more sustainable policy trajectory for Virginia in the digital age.