The US just killed state AI regulation
On March 20, 2026, the White House released a four-page document titled the National Policy Framework for Artificial Intelligence, calling on Congress to preempt state-level AI laws in favor of a single federal "light-touch" regime. It is the most significant US AI governance move in years, and it deserves a skeptical read.

The framework lays out seven pillars, from child safety to workforce development, but the headline is Pillar VII: a direct call for Congress to override state AI regulation and establish what amounts to a minimally burdensome national standard. In the White House's words, Congress should "preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones."

This is not just a policy suggestion. It builds on President Trump's December 2025 executive order, which directed federal agencies to identify and challenge state AI laws deemed "onerous." States that don't fall in line could face restrictions on federal broadband and internet funding. The message is clear: the federal government wants to be the only voice in the room on AI.
What "light-touch" actually means
The framework's language sounds reasonable on the surface. Who wouldn't want to avoid a "fragmented patchwork" of regulations? But look at what it actually proposes, and the picture changes. No new federal rulemaking body for AI. No mandatory risk assessments. No incident reporting requirements. No enforceable duties of care. Instead, the framework relies on existing regulatory bodies with "subject matter expertise" and "industry-led standards," and calls for regulatory sandboxes and making federal datasets available for AI training.

In practice, "light-touch" does not mean better regulation. It means less regulation. The framework creates a ceiling without establishing a meaningful floor. States like Colorado, which passed the country's most comprehensive AI consumer protection law in 2024, and California, which enacted multiple AI transparency measures, would see their work effectively nullified, with nothing concrete taking its place at the federal level.
The Section 230 playbook
If this feels familiar, it should. The United States has run this experiment before. In 1996, Congress passed Section 230 of the Communications Decency Act, granting online platforms broad immunity from liability for user-generated content. The idea was to let the internet flourish by removing legal friction. And it worked, in a narrow sense: the internet did flourish. But Section 230 also created a low-accountability environment in which platforms had little incentive to address harms like disinformation, algorithmic radicalization, and the exploitation of children.

The parallel to AI preemption is uncomfortable but precise. In both cases, the pitch is the same: let innovation run, deal with harms later, trust the market. And in both cases, the structural problem is identical. Once you build a massive industry inside a permissive legal framework, the political will to retrofit accountability rarely materializes. Brad Carson, president of Americans for Responsible Innovation, put it plainly: "Develop a high-powered industry in a low-accountability environment and the political will to address its harms later will fail to materialize." That is not a prediction. It is what happened with social media.

The framework's defenders will argue that this time is different, that AI needs room to grow and that fragmented state laws genuinely create compliance headaches. That last part is true. Some state-level AI bills have been poorly drafted, overly broad, or technically uninformed. Colorado's law, for instance, drew criticism from the White House for potentially "forcing AI models to produce false results." Not all state regulation is good regulation. But the answer to bad state regulation is not no regulation. It is better regulation, with teeth.
The token concessions
The framework does include some substantive provisions. On children's safety, it calls for age-assurance requirements, parental controls, and features to reduce risks of sexual exploitation and self-harm. On energy, it includes a "Ratepayer Protection Pledge" to shield residential consumers from electricity cost increases caused by data center construction. These are real issues, and the provisions are not trivial.

But they are also carefully bounded. The children's safety measures hinge on a "commercially reasonable" standard the framework never defines, and the energy provisions are silent on the broader environmental costs of the AI infrastructure boom. They read more like political cover than serious guardrails: enough to claim the framework is not purely deregulatory, but not enough to meaningfully constrain the industry.
The gap nobody is talking about: AI agent security
The framework's most glaring omission is the security risk of autonomous AI systems. We are no longer in an era where AI just generates text and images. AI agents, systems that can reason, plan, and take actions on behalf of users, are moving from experimental demos to production systems at a pace that outstrips security infrastructure.

The numbers bear this out. A 2026 Gravitee survey found that only 24.4% of organizations have full visibility into which AI agents are communicating with each other, and more than half of all agents run without any security oversight or logging. The risks are not theoretical: a 2026 Beam AI report found that 88% of organizations reported a confirmed or suspected AI agent security incident in the past year, and that only 14.4% of agents went live with full security and IT approval. These systems can access sensitive data, modify databases, make payments, and interact with external APIs, often with minimal human oversight.

The White House framework mentions national security in passing, calling on Congress to ensure agencies have "sufficient technical capacity to understand frontier AI model capabilities." But it says nothing about agent permissions, autonomous-system safeguards, or the cascading-failure risks of interconnected AI agents. For a document that claims to set the national AI policy direction, the silence is deafening.

Even the federal government's own agencies recognize the gap. In January 2026, the National Institute of Standards and Technology published a Request for Information specifically on security considerations for AI agents, citing vulnerabilities to adversarial manipulation and "cascading failures within interconnected systems." The White House framework appears to have missed the memo from its own government.
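None of the missing safeguards are exotic. To make the omission concrete, here is a minimal sketch of the kind of baseline control the survey data suggests most deployments lack: deny-by-default permission scoping for agent tool calls, with an audit record for every decision. The design, class names, and tool names (`PermissionPolicy`, `make_payment`, and so on) are hypothetical illustrations, not drawn from the framework or from any cited vendor.

```python
# Illustrative sketch only: deny-by-default permissions and audit logging
# for an AI agent's tool calls. All names here are hypothetical.
import json
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

@dataclass
class PermissionPolicy:
    """Explicit allowlist of tools an agent may invoke, with per-tool limits."""
    agent_id: str
    allowed_tools: set[str] = field(default_factory=set)
    max_payment_usd: float = 0.0  # payments denied unless explicitly raised

    def check(self, tool: str, args: dict) -> bool:
        if tool not in self.allowed_tools:
            return False  # anything not allowlisted is denied by default
        if tool == "make_payment" and args.get("amount_usd", 0.0) > self.max_payment_usd:
            return False
        return True

def run_tool(policy: PermissionPolicy, tool: str, args: dict) -> dict:
    """Gate every agent action on the policy and log the decision either way."""
    allowed = policy.check(tool, args)
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": policy.agent_id,
        "tool": tool,
        "args": args,
        "decision": "allow" if allowed else "deny",
    }))
    if not allowed:
        raise PermissionError(f"{policy.agent_id} may not call {tool}")
    # ... dispatch to the real tool implementation here ...
    return {"status": "ok"}

policy = PermissionPolicy("billing-agent-01", allowed_tools={"read_invoice"})
run_tool(policy, "read_invoice", {"invoice_id": "INV-1001"})  # allowed, logged
try:
    run_tool(policy, "make_payment", {"amount_usd": 250.0})   # denied, logged
except PermissionError as exc:
    print(exc)
```

The specifics matter less than the category: scoped permissions, deny-by-default dispatch, and per-action audit logs are exactly the kind of concrete, enforceable requirements a serious national framework could name, and this one does not.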
What Singapore is doing differently
The contrast with Singapore is instructive. While the US moves to strip regulatory authority from states, Singapore has been building one of the world's most pragmatic AI governance frameworks, and it has already moved ahead on agentic AI. In January 2026, Singapore's Infocomm Media Development Authority (IMDA) launched the world's first Model AI Governance Framework for Agentic AI at the World Economic Forum. The framework provides structured guidance on deploying AI agents responsibly across four dimensions:

- assessing and bounding risks upfront;
- making humans meaningfully accountable;
- implementing technical controls throughout the agent lifecycle;
- enabling end-user responsibility through transparency.

Singapore's approach is voluntary, not mandatory. But it is also specific, practical, and sector-aware. It addresses the exact risks the US framework ignores: agent autonomy, access controls, human oversight checkpoints (sketched below), and the governance challenges of systems that can act independently. This is not about Singapore being "stricter" than the US. It is about Singapore being more serious. Its framework treats agentic AI as a genuinely new category of technology that requires genuinely new governance thinking. The US framework, by contrast, treats AI as something existing bodies can handle with existing tools, just with fewer rules.

Singapore has also taken a broader view. The country's 2026 budget included a National AI Council chaired by Prime Minister Lawrence Wong, a "Champions of AI" program for business transformation, and investments in AI workforce training. It is an integrated strategy that connects governance, adoption, and capability development. The US framework, focused almost entirely on removing regulatory barriers, looks one-dimensional by comparison.
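So what does a "human oversight checkpoint" look like in practice? Here is a minimal sketch of a risk-tiered approval gate, in the spirit of the IMDA framework's calls to bound risks upfront and keep humans meaningfully accountable. The risk tiers, action names, and approval flow are my own illustration, not terminology or code from IMDA.

```python
# Illustrative sketch of a risk-tiered human approval gate for agent actions.
# Tiers and action names are hypothetical, not taken from the IMDA framework.
from enum import Enum

class RiskTier(Enum):
    LOW = "low"    # e.g. read-only queries: the agent proceeds autonomously
    HIGH = "high"  # e.g. irreversible or external actions: a human must approve

# Hypothetical action-to-tier mapping, decided when risks are bounded upfront.
ACTION_TIERS = {
    "summarize_document": RiskTier.LOW,
    "send_customer_email": RiskTier.HIGH,
    "delete_records": RiskTier.HIGH,
}

def require_human_approval(action: str, detail: str) -> bool:
    """Blocking approval gate. In production this might page an on-call
    reviewer; here it simply prompts on stdin."""
    answer = input(f"Agent wants to run '{action}' ({detail}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str, detail: str) -> str:
    # Unknown actions fail closed: they are treated as high risk.
    tier = ACTION_TIERS.get(action, RiskTier.HIGH)
    if tier is RiskTier.HIGH and not require_human_approval(action, detail):
        return f"blocked: {action} not approved"
    return f"executed: {action}"

if __name__ == "__main__":
    print(execute("summarize_document", "Q3 report"))   # runs without approval
    print(execute("delete_records", "customer #4521"))  # waits for a human
```

The one design choice worth noting is the fail-closed default: an action the policy has never seen is treated as high risk, which is the opposite of how the survey data suggests most agents ship today.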
Who actually benefits?
The framework's preemption push raises a basic question: who is this for? The stated beneficiary is innovation. The argument is that a patchwork of state laws creates compliance burdens, especially for startups, and that a unified national standard would level the playing field. There is some truth to this: small AI companies genuinely struggle with multi-state compliance.

But the primary beneficiaries of preemption are not startups. They are the frontier AI labs and large tech companies that have the most to lose from state-level accountability measures. These are some of the best-funded companies in the world, and the compliance-burden argument rings hollow when applied to organizations with billions in revenue and armies of lawyers.

Congress has already shown skepticism. In July 2025, the Senate voted to strip a proposed ten-year moratorium on state AI regulation from the One Big Beautiful Bill, and a later attempt to attach preemption language to the National Defense Authorization Act also failed. Polling shows 73% of Americans believe AI companies should be liable for harms caused by their technology. The framework itself is not a law; it is a set of legislative recommendations, not tied to any specific bill. But with House Republican leadership publicly committed to implementing it, the legislative machinery is in motion.
What comes next
The honest answer is: nobody knows. Federal preemption of state AI laws is not yet a legal reality, and Congress has rejected it twice. Courts will have to weigh in on the December 2025 executive order's preemption claims, and legal scholars are already questioning whether the administration's theories will hold up.

But the direction of travel is clear. The US is moving toward a regime where the federal government claims exclusive authority over AI governance while deliberately choosing not to exercise that authority in any meaningful way. That is not deregulation in service of innovation. It is a regulatory vacuum, by design.

The question is not whether we need a national AI policy. We do. But a national policy that preempts state laws while offering nothing enforceable in return is not governance. It is abdication. And if the Section 230 era taught us anything, it is that the costs of building a powerful industry inside an accountability-free zone do not disappear. They just get passed to the people least equipped to bear them.
References
- The White House, "National Policy Framework for Artificial Intelligence: Legislative Recommendations," March 2026. Link
- The White House, "Ensuring a National Policy Framework for Artificial Intelligence," Executive Order, December 2025. Link
- Politico, "White House releases AI policy blueprint for Congress," March 2026. Link
- Brad Carson, "A New Section 230: Why AI Preemption Would Let Tech Off the Hook Again," Tech Policy Press, September 2025. Link
- Congressional Research Service, "Section 230: An Overview," Congress.gov. Link
- Colorado General Assembly, "SB24-205: Consumer Protections for Artificial Intelligence." Link
- Brownstein Hyatt Farber Schreck, "Colorado's Landmark AI Law Coming Online," 2026. Link
- Gravitee, "State of AI Agent Security 2026." Link
- Beam AI, "AI Agent Security in 2026: The Risks Most Enterprises Still Ignore," 2026. Link
- NIST, "Request for Information Regarding Security Considerations for Artificial Intelligence Agents," Federal Register, January 2026. Link
- IMDA Singapore, "Singapore Launches New Model AI Governance Framework for Agentic AI," January 2026. Link
- CNBC, "Singapore launches AI support measures, tax breaks in 2026 Budget," February 2026. Link
- Maynard Nexsen, "White House Unveils National AI Policy Framework: Key Takeaways for Businesses and Innovators," March 2026. Link
- Axios, "Trump AI plan urges Congress to overrule states," March 2026. Link