Lobbying is the new moat
In April 2026, a new pro-AI political group called Innovation Council Action, backed by White House AI adviser David Sacks, announced plans to spend more than $100 million on the midterm elections. The same week, ABC News reported that millions of dollars from AI-adjacent interest groups were already flooding races across the country, from Texas to Manhattan. The AI industry is not just building models anymore. It is building political infrastructure.

This is not surprising. It is, in fact, the oldest play in the corporate handbook. But the speed at which AI companies have gone from garage startups to Washington power brokers is unlike anything we have seen before. And the stakes are not about which model scores highest on a benchmark. They are about who gets to write the rules.
The new moat
For most of the last decade, the competitive moat in tech was straightforward. First it was data: whoever had the most users generated the most data, which trained the best models, which attracted more users. Then it was distribution: the companies that owned the platforms, the app stores, the default search engines, controlled where attention went.

Now the moat is shifting again. In 2026, the most durable competitive advantage in AI may not be technical at all. It may be regulatory.

Regulatory capture, the process by which incumbents shape the rules that govern their industry, is not a new concept. Economists have documented it in banking, energy, telecommunications, and pharmaceuticals for decades. The pattern is always the same: a dominant player invests in lobbying, funds campaigns, and hires former regulators. The resulting rules tend to favour the firms that helped write them, often at the expense of smaller competitors and the public. What is new is how fast AI companies have adopted this playbook.
Follow the money
The numbers are staggering. In 2025, the top tech and AI companies spent more than $100 million on federal lobbying for the first time, according to an analysis by DeepLearning.AI. Issue One, a nonpartisan government ethics group, found that seven of the largest tech companies spent a combined $50 million on lobbying in just the first nine months of 2025, averaging nearly $400,000 for every day Congress was in session.

The individual trajectories are even more telling. OpenAI spent $1.76 million on lobbying in 2024, up nearly sevenfold from $260,000 the year before. Anthropic spent more than $3.1 million in 2025, quadrupling its previous total. Meta, which has interests spanning social media and AI, spent $6.5 million in the fourth quarter of 2025 alone and now has roughly one lobbyist for every six members of Congress.

And that is just the lobbying. The campaign spending is on another level entirely. AI companies and their executives spent at least $83 million on federal elections in 2025, according to The New York Times. OpenAI co-founder Greg Brockman and his wife contributed $25 million to Leading the Future, a super PAC supporting candidates who "champion policies that harness the economic benefits of AI and reject attempts to hinder American innovation." Anthropic put $20 million into Public First Action, a group that describes itself as pro-regulation and has raised around $50 million total. Innovation Council Action, the newest entrant, has announced a $100 million war chest.

As Public Citizen reported, one in four federal lobbyists now works on AI. More than 3,500 lobbyists were active on AI issues in 2025. This is an industry that barely existed a few years ago.
The split that proves the point
What makes the AI lobbying story especially revealing is that the industry is not united. The two largest AI labs, OpenAI and Anthropic, are on opposite sides of the regulatory debate, and both positions are self-serving.

Anthropic wants regulation. Not out of pure altruism, but because well-designed safety requirements create compliance costs that well-funded incumbents can absorb and startups cannot. If every AI company needs a safety team, a governance framework, and a battery of expensive evaluations before deploying a model, the companies that already have those resources win by default. Regulation becomes a barrier to entry.

OpenAI and its allies want deregulation, or at least a "light-touch" approach. Their argument is that excessive rules slow innovation and risk America falling behind China. That framing is not wrong on its face, but it also happens to protect their ability to move fast, ship products, and lock in market share before anyone can challenge them.

Both sides frame their position as principled. Both sides are also spending tens of millions of dollars to ensure their preferred version of the rules becomes law. As University of Rochester professor David Primo told ABC News, "Companies have always tried to shape regulations, and they've always tried to shape them in their favor. What we're seeing now, though, is that the big companies are not united." The disagreement is not about whether to influence the system. It is about which version of influence serves each company best.
Big Tech already ran this playbook
If this feels familiar, it should. The major tech platforms spent the 2010s doing exactly what AI companies are doing now, just more slowly.

Google, Meta, and Amazon invested heavily in lobbying throughout the last decade to delay, weaken, or kill privacy regulation. The United States still has no comprehensive federal privacy law, despite years of debate. Europe passed the GDPR in 2016. California passed its own privacy act in 2018. The federal government did nothing, and that inaction was not an accident. It was the result of sustained corporate pressure.

The digital trade agenda pushed by the U.S. Trade Representative during this period reflected the same dynamic. As Cory Doctorow documented, trade deals became vehicles for Big Tech's policy preferences, limiting liability, weakening privacy protections, and constraining competition enforcement. The companies that lobbied hardest got the rules they wanted.

AI companies are running the same playbook, but faster and with more money. The crypto industry's 2024 election spending, which helped elect more than 50 friendly candidates through the Fairshake PAC, has become the explicit template. CNBC reported in January 2026 that the AI industry is taking a page directly from crypto's lobbying success, with Leading the Future even sharing some of the same Silicon Valley donors. The lesson from the 2010s is clear: the companies that shape the regulatory environment early tend to keep their advantage for a long time. Once a framework is in place, the political will to change it rarely materialises.
Singapore is doing it differently
There is another way to approach AI governance, and it looks nothing like what is happening in Washington. In January 2026, Singapore unveiled the world's first governance framework for agentic AI at the World Economic Forum in Davos. The framework, published by the Infocomm Media Development Authority (IMDA), provides guidance on managing risks from autonomous AI systems, those capable of reasoning, planning, and taking actions on behalf of users.

Singapore's approach is pragmatic. Rather than a single binding statute, the country uses a combination of voluntary governance frameworks, sector-specific guidance from regulators like the Monetary Authority of Singapore (MAS), and practical tools like AI Verify, a testing framework for AI ethics. The philosophy is sandbox-first, regulate-later: let companies experiment in controlled environments, learn what the real risks are, and then build rules based on evidence rather than ideology.

This stands in sharp contrast to the United States, where the debate has devolved into a proxy war between billionaires funding competing super PACs. The White House released its National Policy Framework for Artificial Intelligence on March 20, 2026, calling for Congress to preempt state-level AI laws in favour of a single federal standard. But that standard, as critics have pointed out, is deliberately light on enforceable requirements. It creates a ceiling without establishing a meaningful floor.

Singapore's model is not perfect, and a city-state of six million people faces different constraints than a continental superpower. But the contrast is instructive. One country is building governance frameworks based on technical reality. The other is building them based on who writes the biggest check.
The real stakes
"Once a regulatory system gets entrenched, it's really hard to change it," Primo said. This is the sentence that should worry everyone, regardless of where they stand on AI regulation. The rules being written now, in the heat of a midterm election cycle, with hundreds of millions of dollars flowing from interested parties on all sides, will shape the AI industry for decades. And the companies that are best positioned to influence those rules are not necessarily the ones building the best technology.

History is consistent on this point. The telecommunications industry in the early 20th century, the financial sector after deregulation in the 1990s, Big Tech in the 2010s: in each case, the firms that invested most in political influence ended up with regulatory frameworks that protected their market position. The technical merits of their products were secondary. AI is following the same path, and it is doing so at a pace that compresses decades of regulatory evolution into a single election cycle. The companies shaping the rules today are the ones that will dominate the market tomorrow, not because their models are better, but because the rules will be written to keep it that way.
This is not about politics
It is tempting to frame this as a partisan issue. It is not. AI money is flowing to both parties. Leading the Future has supported Republican candidates in Texas and North Carolina. Public First Action has backed a Democrat in Illinois and a Republican senator in Tennessee. Innovation Council Action is aligned with the current administration. The common thread is not ideology. It is influence. The AI race was supposed to be about intelligence, about which lab could build the most capable, most aligned, most useful model. Somewhere along the way, it became about something much older: who has the most access, the most connections, the most leverage over the people writing the rules. The companies that win the lobbying war will win the AI war. Not because lobbying makes models smarter, but because it makes competition harder. That is what a moat looks like in 2026.
References
- "The AI industry is all in for the 2026 midterms with government regulations looming," ABC News, April 2, 2026.
- "How A.I. Money Is Flooding Into the Midterm Elections," The New York Times, February 21, 2026.
- "Meta, Amazon, Microsoft, Google, and Nvidia Pour Millions Into Government Influence," DeepLearning.AI, February 20, 2026.
- "As Big Tech Gears Up for the 2026 Midterms, Its Lobbying Operations Continue Unabated," Issue One, October 21, 2025.
- "AI's Biggest Builders Are Now Its Biggest Lobbyists," Forbes, February 20, 2026.
- "OpenAI has upped its lobbying efforts nearly sevenfold," MIT Technology Review, January 21, 2025.
- "Meta outspends Big Tech peers on lobbying again," Axios, January 21, 2026.
- "Armies of Lobbyists Helped Big Tech Rack up Victories During First Year of Trump's Second Term," Issue One, 2026.
- "One in Four Federal Lobbyists Now Work on AI," Public Citizen, 2026.
- "AI industry looks to repeat crypto lobbying success and put war chest to work in midterm elections," CNBC, January 28, 2026.
- "Scoop: New pro-AI group preps $100M midterm blitz to boost Trump's agenda," Axios, March 29, 2026.
- "AI safety and regulatory capture," AI & Society, Springer, 2025.
- "Singapore Launches New Model AI Governance Framework for Agentic AI," IMDA, January 22, 2026.
- "Singapore's New Model AI Governance Framework for Agentic AI (2026)," K&L Gates, February 9, 2026.
- "As the US midterms approach, AI is going to emerge as a key issue concerning voters," The Guardian, March 24, 2026.
- "How tech does regulatory capture," Cory Doctorow, Medium.
- "The United States: Birthplace of the Big Tech lobby playbook," SOMO, February 12, 2026.
- "National Policy Framework Turns AI Preemption Into A 2026 Political Test," Forbes, April 2, 2026.