Washington chose Big AI
The White House released a national AI legislative framework on March 20, 2026, and the headline provision is blunt: Congress should preempt state-level AI laws. No more patchwork. No more local experimentation. One federal standard to rule them all. On the surface, this sounds like a reasonable call for regulatory clarity. But when you read the four-page document closely, a pattern emerges. The framework consistently favors industry flexibility over public accountability, limits liability for developers, opposes new regulatory bodies, and proposes "sandboxes" that let companies apply for exemptions from federal rules. This isn't just a framework for AI policy. It's a framework for who gets to shape the rules, and who doesn't.
What federal preemption actually means
Federal preemption sounds like simplification, but what it does in practice is remove the ability of individual states to respond to AI harms as they emerge. States like California and New York have already passed laws requiring AI companies to establish whistleblower protections, report safety incidents, and disclose how they test models for risk. Utah was working on legislation to require transparency around child safety and catastrophic risk mitigation when the administration pressured lawmakers to back off. The framework says states can still enforce "generally applicable laws," handle zoning for data centers, and manage their own procurement. But they cannot regulate AI development or hold developers liable for how third parties use their models. In other words, states can decide where a data center sits, but not what the AI inside it does. This matters because AI regulation is still in its infancy. Nobody has the definitive answer on how to govern these systems. Preempting state laws this early doesn't just create uniformity, it eliminates the laboratories where better policy might have been discovered.
Who benefits from the sandbox
The framework proposes "regulatory sandboxes," controlled environments where AI companies can apply for temporary exemptions from federal regulations to test and deploy new systems. The concept isn't new. The EU AI Act requires each member state to establish at least one sandbox by August 2026. The UK has launched its own AI Growth Lab. Even Singapore has experimented with sandbox-style approaches in fintech and AI. In principle, sandboxes are a good idea. They let regulators learn alongside innovators, reduce time-to-market for new products, and generate real-world evidence for better rulemaking. But the devil is in access. Senator Ted Cruz introduced a bill last year that would allow companies to apply for regulatory waivers of up to ten years, with oversight managed by the Office of Science and Technology Policy. That's a long window, and the application process, compliance burden, and legal resources required to participate naturally favor larger firms. Startups and open-source developers, the ones who arguably benefit most from regulatory breathing room, are often the least equipped to navigate sandbox bureaucracy. When the framework says it wants to "facilitate broad access to testing environments," the question is: broad for whom? If sandbox participation becomes a privilege of scale, it reinforces the advantage of incumbents like Google, OpenAI, and Meta rather than leveling the playing field.
The EU chose a different path
The contrast with Europe is instructive. The EU AI Act, which began taking effect in 2024 and reaches full implementation by 2027, is built on a risk-based classification system. AI applications are sorted into tiers: unacceptable risk (banned outright), high risk (subject to strict requirements like bias testing, human oversight, and documentation), and lower risk (lighter obligations). Penalties for non-compliance can reach 35 million euros or 7% of global annual turnover, whichever is higher. The EU's approach is rights-first. It starts from the premise that AI systems operating in sensitive domains, like healthcare, employment, education, and law enforcement, must prove they meet baseline safety and fairness standards before deployment. The US framework is innovation-first. It starts from the premise that regulation should be minimal, liability should be limited, and industry should lead standard-setting. Neither approach is obviously right. The EU's regime risks being too rigid for a technology that evolves faster than regulatory cycles can adapt. Early signs suggest this is already happening: the European Commission is considering amendments through a Digital Omnibus package, partly in response to industry and member state concerns that implementation timelines are too aggressive. But the US framework's risk is the opposite. By defaulting to nonregulatory solutions and opposing "open-ended liability," it places an enormous bet on industry self-governance at a moment when the incentive structures of major AI companies are oriented almost entirely around speed and scale, not caution.
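To make the tiering concrete, here is a minimal sketch in Python of how a developer might triage a use case against the EU's categories. The domain lists, obligation strings, and the `classify` and `max_penalty_eur` helpers are illustrative assumptions, a loose simplification of the Act rather than a reading of its actual annexes or any official tooling.

```python
# Illustrative sketch of the EU AI Act's risk-tier logic as described above.
# Domain lists and obligation strings are simplifications, not the Act itself.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "bias testing, human oversight, documentation"
    LOWER = "lighter obligations"

# Hypothetical, non-exhaustive domain lists for illustration only
BANNED_PRACTICES = {"social scoring of citizens"}
HIGH_RISK_DOMAINS = {"healthcare", "employment", "education", "law enforcement"}

def classify(domain: str) -> RiskTier:
    """Rough triage of an AI use case against the EU's tiers."""
    if domain in BANNED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    return RiskTier.LOWER

def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Penalty ceiling: 35 million euros or 7% of global annual
    turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

print(classify("employment").value)              # bias testing, human oversight, ...
print(f"{max_penalty_eur(2_000_000_000):,.0f}")  # 140,000,000 for a 2B-turnover firm
```

The point isn't the code itself but the shape of the decision: under the EU regime, obligations attach to the use case before deployment, not after harms surface.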
Singapore's quieter experiment
Singapore offers a useful third reference point. Rather than enacting comprehensive AI legislation, Singapore has taken a sector-specific, voluntary approach. The Model AI Governance Framework, first published in 2019 and updated multiple times since, provides guidelines organized around principles like transparency, explainability, fairness, and accountability, but doesn't impose binding obligations. In January 2026, Singapore launched a new governance framework specifically for agentic AI, the first of its kind globally. It addresses the risks posed by autonomous AI systems that can reason and act independently, recommending both technical and non-technical safeguards while emphasizing that humans remain accountable. What makes Singapore's approach interesting isn't its leniency but its pragmatism. The frameworks are living documents, designed to evolve as the technology changes. They prioritize consensus-building between government, industry, and citizens. And they avoid the political baggage of either the EU's prescriptive regulation or the US's anti-regulation stance. The limitation is obvious: voluntary frameworks only work when companies choose to follow them. Singapore's small market size gives it less leverage than the EU or the US. But as a model for adaptive, evidence-based governance, it's worth studying.
The energy provision nobody is talking about
Buried in the framework's section on "Safeguarding and Strengthening American Communities" is a provision that deserves more attention: Congress should ensure that residential ratepayers don't foot the bill for data center buildouts, and should streamline permitting so data centers can generate power on site. This is a real problem. US data centers consumed approximately 176 TWh of electricity in 2023, roughly 4.4% of total US electricity consumption, equivalent to the annual demand of Pakistan. Projections suggest this could double or triple by 2028, reaching up to 12% of US electricity use. In places where data centers cluster, like Loudoun County, Virginia, they already consume more power than all residential users combined. The framework's acknowledgment of this issue is welcome. But the proposed solution, streamlined permitting for on-site power generation, raises its own questions. On-site generation often means natural gas or diesel, not renewables. And "streamlined permitting" can be a euphemism for reduced environmental review. The deeper tension is this: the same framework that wants to accelerate AI deployment also wants to protect communities from its energy costs. Those goals are in genuine conflict, and the document doesn't resolve that conflict so much as gesture at it.
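The arithmetic behind those figures is simple enough to check. The implied total US consumption (~4,000 TWh) follows directly from the 176 TWh and 4.4% numbers above; the flat-demand assumption in the loop is a simplification, since total demand is itself projected to grow.

```python
# Back-of-the-envelope check on the data center energy figures cited above.

dc_twh = 176.0        # US data center consumption, per the figure cited above
dc_share = 0.044      # share of total US electricity consumption

total_us_twh = dc_twh / dc_share
print(f"Implied total US consumption: {total_us_twh:,.0f} TWh")   # ~4,000 TWh

for multiple in (2, 3):
    projected = dc_twh * multiple
    # Share if total US demand stayed flat (a simplification, since
    # total demand is itself projected to grow)
    share = projected / total_us_twh
    print(f"{multiple}x: {projected:.0f} TWh, {share:.1%} of today's total")
# 2x -> 352 TWh (8.8%); 3x -> 528 TWh (13.2%), bracketing the "up to 12%" figure
```

A threefold increase lands slightly above the 12% figure because the official projections also assume total demand grows, which dilutes the data center share.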
When states can't regulate, what's left?
If federal preemption passes, the two remaining checks on AI companies become market pressure and litigation. Neither is particularly well-suited to the task. Market pressure works when consumers have alternatives and information. But AI systems are increasingly embedded in infrastructure that individuals don't choose directly, like hiring tools, credit scoring, healthcare triage, and content moderation. You can't boycott an AI system you don't know is making decisions about you. Litigation is slow, expensive, and reactive. It addresses harms after they occur, not before. And the framework explicitly pushes against "open-ended liability," suggesting that even the litigation pathway would be narrowed. Brad Carson, president of Americans for Responsible Innovation, called the framework "another chance for tech companies to launch harmful products with no accountability." More than 50 Republicans wrote to the White House in March arguing that "recent attempts to halt state AI legislation suggest not merely a desire for coordination, but an effort to prevent the passage of measures holding the tech industry accountable." When members of the president's own party are raising this concern, it's worth taking seriously.
What builders should do anyway
None of this means AI developers should wait for regulation to tell them what's safe. In fact, the opposite. The framework's emphasis on industry-led standards is, intentionally or not, an invitation. If the government isn't going to set detailed rules, the companies and developers building these systems have a larger responsibility to define their own safety practices, and to do so transparently. Practically, that means a few things:
- Document your risk assessments. Even if no law requires it yet, understanding and recording how your AI system could cause harm is basic engineering discipline. When regulation does arrive, whether federal, state, or through litigation, having a paper trail of good-faith effort matters (see the sketch after this list).
- Build transparency into your process. Disclose what your models are trained on, how they're tested, and what their known limitations are. The EU already requires this for high-risk systems. Even if the US doesn't mandate it, users and partners will increasingly expect it.
- Don't treat the absence of regulation as permission. The history of technology regulation follows a pattern: industry moves fast, harms accumulate, public backlash triggers aggressive legislation. Companies that build responsibly now are better positioned when that cycle inevitably turns.
- Watch the states. Federal preemption isn't guaranteed. It failed to make it into the budget reconciliation bill and the defense policy bill in 2025. If it stalls again, state laws will continue to proliferate, and compliance with multiple jurisdictions becomes the reality.
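Tying the first two recommendations together, here is a minimal sketch, assuming nothing beyond the Python standard library, of what a risk-assessment record and public disclosure summary might look like. The field names and the `RiskAssessment` and `ModelDisclosure` structures are illustrative assumptions that loosely echo model-card practice, not an established standard.

```python
# Minimal sketch of a risk-assessment record and public disclosure summary.
# Field names are illustrative, not a standard; adapt to your own process.

from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class RiskAssessment:
    """One identified risk, its evaluation, and the mitigation applied."""
    risk: str                 # e.g. "biased ranking of job applicants"
    severity: str             # "low" / "medium" / "high"
    likelihood: str
    mitigation: str
    evaluated_on: date
    evaluator: str

@dataclass
class ModelDisclosure:
    """Public-facing summary: training data, testing, known limitations."""
    model_name: str
    training_data_summary: str
    evaluation_methods: list[str]
    known_limitations: list[str]
    risk_log: list[RiskAssessment] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2, default=str)

# Hypothetical example: a resume-screening model's disclosure record
disclosure = ModelDisclosure(
    model_name="resume-screener-v2",
    training_data_summary="Licensed HR datasets, 2019-2024; no scraped PII.",
    evaluation_methods=["demographic parity audit", "red-team review"],
    known_limitations=["unvalidated for non-English resumes"],
    risk_log=[
        RiskAssessment(
            risk="biased ranking of applicants from underrepresented groups",
            severity="high",
            likelihood="medium",
            mitigation="quarterly bias audit; human review of all rejections",
            evaluated_on=date(2026, 3, 1),
            evaluator="ML safety team",
        )
    ],
)
print(disclosure.to_json())  # the artifact you'd version-control and publish
```

The specific schema matters far less than the habit: a versioned, reviewable artifact that exists before any regulator, plaintiff, or customer asks for it.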
The White House framework is a clear signal about where federal policy is heading: less regulation, more industry discretion, and a bet that innovation will outrun the problems it creates. Whether that bet pays off depends less on the framework itself and more on whether the people building AI systems treat the absence of rules as a reason to be careful, or as a reason not to be.
References
- White House, "President Donald J. Trump Unveils National AI Legislative Framework" (March 20, 2026) https://www.whitehouse.gov/articles/2026/03/president-donald-j-trump-unveils-national-ai-legislative-framework/
- Reuters, "Trump releases AI policy for Congress to pre-empt state rules" (March 20, 2026) https://www.reuters.com/world/us/white-house-releases-national-ai-framework-2026-03-20/
- Roll Call, "White House AI framework calls for preemption of state laws" (March 20, 2026) https://rollcall.com/2026/03/20/white-house-ai-framework-calls-for-preemption-of-state-laws/
- NBC News, "White House releases AI legislation framework" (March 20, 2026) https://www.nbcnews.com/tech/tech-news/trump-ai-congress-law-framework-legislation-artificial-intelligence-rcna264433
- The New Stack, "A Field Guide to 2026 Federal, State and EU AI Laws" https://thenewstack.io/a-field-guide-to-2026-federal-state-and-eu-ai-laws/
- EU Artificial Intelligence Act, "AI Regulatory Sandbox Approaches: EU Member State Overview" https://artificialintelligenceact.eu/ai-regulatory-sandbox-approaches-eu-member-state-overview/
- IMDA, "Singapore Launches New Model AI Governance Framework for Agentic AI" (January 2026) https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2026/new-model-ai-governance-framework-for-agentic-ai
- Diligent, "Singapore's AI governance framework: A complete guide" https://www.diligent.com/resources/blog/singapore-ai-regulation
- IEA, "Energy demand from AI" https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai
- Pew Research Center, "What we know about energy use at US data centers amid the AI boom" (October 2025) https://www.pewresearch.org/short-reads/2025/10/24/what-we-know-about-energy-use-at-us-data-centers-amid-the-ai-boom/
- Latham & Watkins, "AI Executive Order Targets State Laws and Seeks Uniform Federal Standards" (December 2025) https://www.lw.com/en/insights/ai-executive-order-targets-state-laws-and-seeks-uniform-federal-standards
- Brookings Institution, "The EU and U.S. diverge on AI regulation: A transatlantic comparison" https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/