Nineteen laws and counting
Since mid-March 2026, nineteen new AI bills have been signed into law across the United States, bringing the year's total to 25. Another 27 have passed both chambers and are on their way to governors' desks. The White House released a National Policy Framework with legislative recommendations. State lawmakers in 45 states have introduced over 1,500 AI-related bills this year alone. The question is not whether AI regulation is coming. It is whether any of it will actually matter.
The stampede
The numbers are staggering. In 2023, fewer than 200 AI-related bills were introduced across all US state legislatures. In 2024, that jumped to over 600, with 99 enacted into law. In 2025, the count hit 1,200 introduced bills. As of March 2026, 45 states have already introduced 1,561 AI bills, and the legislative session is still underway.

The 19 laws signed in just the last two weeks of March came from seven states. Utah led the charge with nine new AI laws in a single signing spree, covering everything from AI literacy in schools to deepfake intimate images to health insurance preauthorization restrictions. Washington passed four bills targeting transparency disclosures, chatbot protections for minors, AI-generated child exploitation material, and health insurance AI use. Idaho established frameworks for generative AI in K-12 education and conversational AI services. New York created a regulatory framework for large-scale frontier AI developers. Oregon regulated AI companion platforms, the kind that simulate romantic relationships. Tennessee required disclaimers on political deepfakes and regulated AI systems posing as mental health professionals. Colorado addressed AI platforms in the context of search warrants.

This is not a measured policy response. This is a legislative stampede, driven by the same impulse that produces most technology regulation: something new is happening, constituents are worried, and lawmakers need to be seen doing something.
The federal wishlist
While states were passing actual laws, the White House released a four-page document on March 20 titled the National Policy Framework for Artificial Intelligence. It lays out seven pillars: child safety, community protections, intellectual property, free speech, innovation, workforce development, and federal preemption of state AI laws. The framework is nonbinding. It creates no new legal obligations. It directs no agency to take specific regulatory action. It is, in the most literal sense, a set of recommendations.

The headline provision is Pillar VII: a call for Congress to "preempt state AI laws that impose undue burdens" in favor of a national standard. The framing is pro-innovation. The framework explicitly warns against "vague standards" and "open-ended liability." It recommends against creating any new federal rulemaking body for AI, preferring instead to rely on existing regulators with "subject matter expertise" and "industry-led standards."

Read carefully, the framework proposes a ceiling on regulation while declining to build a meaningful floor. No mandatory risk assessments. No incident reporting requirements. No enforceable duties of care. Regulatory sandboxes and access to federal datasets for AI training, but nothing that would require an AI company to change how it operates.

The framework's defenders argue that fragmented state laws create real compliance headaches, especially for smaller companies. That is true. Some state-level AI bills have been poorly drafted, overly broad, or technically uninformed. But the answer to bad state regulation is not no regulation. It is better regulation, with teeth.
The enforcement gap
The deeper problem with the current legislative surge is not the volume of bills. It is the question nobody seems eager to answer: who enforces any of this? Most AI legislation being passed at the state level lacks dedicated enforcement mechanisms. There is no equivalent of a data protection authority for AI in most states. Penalties, where they exist, are often vague or minimal. Compliance monitoring is largely nonexistent. The bills create obligations on paper but provide no infrastructure for accountability in practice.

This is not unique to AI regulation. It is a recurring pattern in technology governance. Laws get written faster than enforcement capacity gets built. The result is a regulatory landscape that looks impressive on a tracker but functions more like a suggestion box.

The federal picture is no better. The White House framework calls on existing agencies to handle AI oversight, but those agencies are already stretched thin on their current mandates. The FTC has been directed to issue a policy statement on applying Section 5 (unfair and deceptive practices) to AI models. The Department of Commerce was supposed to publish an evaluation of state AI laws by March 11, 2026, identifying "onerous" ones. No public announcement has been made. A DOJ AI Litigation Task Force was established in January 2026 to challenge state AI laws that conflict with federal policy. Whether any of these bodies will produce meaningful enforcement remains an open question.
The governance vacuum
Meanwhile, the entities actually deploying AI at scale are not waiting for regulators to figure things out, and the gap between deployment speed and governance maturity is alarming. According to a 2026 Arkose Labs survey of 300 enterprise leaders, 97% expect a material AI-agent-driven security or fraud incident within the next 12 months. Nearly half expect one within six months. Yet only 6% of security budgets are currently allocated to AI agent risk. The average organization now manages 37 deployed AI agents, with that number growing every quarter as individual teams spin up automation without central review.

These are not chatbots answering customer questions. AI agents are autonomous systems that retrieve data, trigger transactions, modify databases, make payments, and interact with external APIs, often with minimal human oversight. Each undiscovered agent operating outside formal governance is an unmapped access path into enterprise systems. Laws are being written faster than companies can implement the basics. The disconnect is not between regulation and innovation. It is between the existence of rules and anyone's capacity to follow or enforce them.
The GDPR parallel
There is a useful precedent for what happens when ambitious regulation meets weak enforcement: GDPR. The General Data Protection Regulation went into effect in May 2018 with the promise of transforming data privacy. Maximum fines of €20 million or 4% of global annual turnover. Mandatory breach notification. Data protection officers. Privacy impact assessments. The law was comprehensive, well-intentioned, and largely toothless for its first several years.

In GDPR's early period, fines were rare and small. Data protection authorities across Europe were understaffed, underfunded, and unclear on how to interpret the regulation's broad provisions. Companies paid lip service to compliance while changing little about their actual data practices. The "one-stop shop" mechanism, meant to streamline cross-border enforcement, instead created bottlenecks as the Irish Data Protection Commission became the de facto regulator for most major tech companies and moved at a glacial pace.

Then the fines started coming. Total GDPR penalties now exceed €7.1 billion, with €1.2 billion issued in 2025 alone. Over 60% of the total fine value has been imposed since January 2023, five years after the regulation took effect. European data protection authorities now receive 443 breach notifications per day, a 22% year-over-year increase.

The GDPR arc, from ambitious law to weak enforcement to eventual reckoning, is the likely template for AI regulation. The laws being passed today will sit on the books, largely unenforced, while the technology races ahead. Then something will go wrong at sufficient scale: a major AI agent security breach, an algorithmic discrimination incident with clear victims, or a deepfake that crosses from nuisance to national security threat. The enforcement machinery will suddenly kick into gear. The question is how much damage accumulates in the gap between legislation and enforcement.
The fragmentation problem
Even if enforcement catches up, the current approach creates a structural problem that may be harder to solve: fragmentation. With 45 states introducing AI bills independently, the compliance landscape is becoming a maze that only large companies with dedicated legal teams can navigate. A startup building an AI-powered health tool needs to understand Utah's preauthorization restrictions, Washington's transparency requirements, New York's frontier model reporting obligations, and whatever California, which has its own layered AI compliance environment, decides to do next.

The White House framework identifies this problem correctly. A patchwork of 50 different regulatory regimes genuinely does create compliance headaches. But the administration's proposed solution, federal preemption that replaces state laws with a national standard that does not yet exist, trades one problem for another. You do not fix fragmentation by creating a vacuum.

Congress has already rejected federal preemption of state AI laws twice: once in the One Big Beautiful Bill Act and again in the National Defense Authorization Act. The political appetite for a federal takeover of AI regulation is limited, particularly when the federal alternative on offer is so thin.

The result is the worst of both worlds: states passing laws with limited enforcement capacity while the federal government promises a unified framework it cannot deliver. Companies are left navigating an expanding patchwork of obligations with no clear path to compliance and no confidence that the rules will be enforced consistently.
What would actually work
Effective AI regulation would need to solve three problems simultaneously: it would need to be specific enough to be enforceable, flexible enough to accommodate a fast-moving technology, and resourced enough to actually monitor compliance. None of the current approaches achieve all three. State laws tend to be specific but under-resourced. The federal framework is flexible but deliberately toothless. Nobody is seriously discussing the funding and staffing required to make AI oversight operational.

The closest parallel to a working model is sector-specific regulation. The FDA does not regulate all technology. It regulates medical devices and pharmaceuticals with deep domain expertise, clear standards, and real consequences for non-compliance. AI in healthcare, AI in financial services, AI in education: these are meaningfully different applications with meaningfully different risk profiles. Treating them all under one umbrella, whether that umbrella is a state omnibus bill or a federal framework, misses the point.

The 19 laws signed in late March are a start, but mostly in the sense that they establish that legislators are paying attention. The hard work, building enforcement infrastructure, funding regulatory capacity, and creating feedback loops between deployed AI systems and the agencies overseeing them, has barely begun. Twenty-five laws and counting. The counting part is easy. The part that comes after is what matters.
References
- Plural Policy, "AI Governance Watch: Nineteen New AI Bills Passed Into Law," April 2026. Link
- The White House, "National Policy Framework for Artificial Intelligence: Legislative Recommendations," March 2026. Link
- The White House, "President Donald J. Trump Unveils National AI Legislative Framework," March 2026. Link
- MultiState, "State AI Legislation Tracker 2026: All 50 States," 2026. Link
- Arkose Labs, "2026 Agentic AI Security Report," 2026. Link
- DLA Piper, "GDPR Fines and Data Breach Survey," January 2026. Link
- Wiley Law, "White House Releases National Legislative Policy Framework for AI," March 2026. Link
- Ropes & Gray, "The White House Legislative Recommendations: National Policy Framework for Artificial Intelligence and Federal Preemption of State AI Laws," March 2026. Link
- Baker Donelson, "Emerging Federal AI Policy: What to Know and How to Prepare," 2026. Link
- VerifyWise, "State of AI Governance Regulations in the United States," 2026. Link