AI becomes political
I don't usually talk about politics. It's not what this blog is about, and frankly, it's not where I'm most comfortable writing. But as someone who works in the AI space, I feel a responsibility to lay out what's happening right now, because the intersection of artificial intelligence and politics has reached a point that affects all of us, whether we build AI, use AI, or simply live in a world shaped by it.
Here's what's been going on.
The Anthropic-Pentagon standoff
One of the most consequential stories in AI right now is the public fallout between Anthropic, the maker of the Claude AI model, and the United States government.
Since 2024, Anthropic had held a contract worth up to $200 million with the U.S. Department of Defense (given the secondary title "Department of War" by executive order in September 2025). Claude was deployed on the Pentagon's classified networks as part of the Maven Smart System, which supports intelligence gathering, image processing, and tactical operations.
In January 2026, the Pentagon issued a memo requiring that all AI procurement contracts allow models to be used "for any lawful purpose," without restrictions imposed by the AI vendor. Anthropic pushed back on two specific points. The company insisted on contractual safeguards that would explicitly prevent Claude from being used for:
- Mass domestic surveillance of American citizens
- Fully autonomous weapons systems without human oversight
Anthropic's CEO, Dario Amodei, argued that current frontier AI models are not reliable enough to be trusted in fully autonomous weapons, and that mass domestic surveillance of Americans represents a fundamental rights violation. The Pentagon's position was that existing U.S. law already prohibits these uses, making Anthropic's additional restrictions unnecessary and an overreach by a private company.
The timeline of escalation
Here's how events unfolded:
- February 12, 2026: Anthropic announced a $20 million donation to a new Super PAC that would back political candidates supporting AI regulation, putting it in direct opposition to OpenAI, which had advocated for less stringent regulation.
- February 24: Defense Secretary Pete Hegseth gave Anthropic a deadline of 5:01 PM on Friday, February 27, to remove its restrictions on military use or face consequences.
- February 26: Amodei published a statement saying Anthropic would not back down. "We cannot in good conscience accede to their request," he wrote.
- February 27: President Trump posted on Truth Social directing "every federal agency" to "immediately cease all use of Anthropic's technology," with a six-month phaseout period. Shortly after, Hegseth designated Anthropic a "supply chain risk to national security," a label typically reserved for companies considered extensions of foreign adversaries. This was the first time a U.S. company received this designation.
- February 27 (same day): OpenAI announced it had reached its own agreement with the Pentagon to deploy AI on classified systems.
- March 2: The State Department, Treasury Department, HHS, and the Federal Housing Finance Agency all moved to cease use of Anthropic products. The State Department switched its internal chatbot, StateChat, from Claude to OpenAI's GPT-4.1.
- March 5: Anthropic received formal notification of its supply chain risk designation and announced it would challenge the decision in court. Reports also surfaced that Amodei was back in talks with the Pentagon.
OpenAI's Pentagon deal
Hours after the Anthropic ban, OpenAI published a detailed breakdown of its agreement with the Department of Defense for classified systems. The company outlined three "red lines" that it said guide its government work:
- No use of OpenAI technology for mass domestic surveillance
- No use of OpenAI technology to direct autonomous weapons systems
- No use of OpenAI technology for high-stakes automated decisions (such as "social credit" systems)
OpenAI's deal is cloud-only, meaning the company retains control of its safety stack and does not deploy models on edge devices (which would be required for powering autonomous weapons). Cleared OpenAI engineers and safety researchers remain in the loop.
Notably, OpenAI CEO Sam Altman publicly stated that his company shares Anthropic's red lines. On March 2, OpenAI amended its deal to add explicit language stating that its tools "shall not be intentionally used for domestic surveillance of U.S. persons and nationals." OpenAI also stated publicly that it does not believe Anthropic should be designated a supply chain risk.
The situation raises an uncomfortable question: if both companies share the same red lines, why did one get a deal and the other get banned?
AI lobbying is now big money
Behind the scenes, both companies have dramatically increased their spending on government lobbying.
- OpenAI spent $1.76 million on lobbying in 2024, up nearly sevenfold from $260,000 in 2023. In the last quarter of 2024 alone, it spent $510,000.
- Anthropic spent more than $3.1 million lobbying the federal government in 2025, roughly quadrupling its previous spending and making it one of the fastest-growing lobbying spenders in the AI industry.
- Anthropic's $20 million Super PAC puts it in direct political opposition to Super PACs backed by OpenAI's leaders and investors, setting the stage for AI policy to become a central issue in the 2026 midterm elections.
As Forbes reported in February 2026, "AI's biggest builders are now its biggest lobbyists."
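As a quick sanity check on those figures: OpenAI's reported jump works out to roughly 6.8x, consistent with "nearly sevenfold," and Anthropic's "quadrupling" implies a 2024 baseline of about $775,000. Note that the Anthropic baseline is an inference from the reporting, not a published number:

```python
# Sanity-checking the lobbying growth figures reported above.
openai_2023 = 260_000
openai_2024 = 1_760_000
growth = openai_2024 / openai_2023
print(f"OpenAI lobbying growth, 2023 -> 2024: {growth:.2f}x")  # ~6.77x

anthropic_2025 = 3_100_000
# Inferred, not reported: "quadrupling its previous spending" implies
# a 2024 baseline of roughly a quarter of the 2025 total.
anthropic_2024_implied = anthropic_2025 / 4
print(f"Implied Anthropic 2024 spend: ${anthropic_2024_implied:,.0f}")
```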
AI in the Iran conflict
On February 28, 2026, the United States and Israel launched a military offensive against Iran. This conflict has become the largest real-world test of AI-assisted warfare to date.
According to U.S. Central Command, AI technology has played a critical role in the campaign:
- The U.S. hit more than 2,000 targets, including 1,000 in the first 24 hours, a tempo described as nearly "double the scale" of the 2003 "shock and awe" campaign in Iraq.
- AI tools like the Maven Smart System (supported by Palantir's Project Maven, in development since 2018) were used for initial screening of incoming data, target identification, and prioritization, allowing human analysts to focus on higher-level decisions.
- Reports indicate AI enabled a small number of troops to process data at a scale that previously would have required far larger teams.
- AI was also reportedly used in cyberwarfare, including operations to scan for security vulnerabilities and disrupt Iran's banking systems during earlier phases of the conflict.
As Nature reported on March 5, the conflict "has thrown a spotlight on the use of artificial intelligence in warfare." The same week, academics and legal experts met in Geneva to discuss lethal autonomous weapons and AI procurement in the military, as part of ongoing international efforts to establish legal frameworks for AI in warfare.
The possibility that AI's precision targeting could help reduce civilian casualties is often cited as a potential benefit. However, experts note there is currently no evidence that AI lowers civilian deaths, and in recent conflicts where AI has been used to assist targeting, civilian death tolls have remained high. "There is no evidence that AI lowers civilian deaths or wrongful targeting decisions and it may be that the opposite is true," said Craig Jones, a political geographer at Newcastle University.
The deepfake crisis
Perhaps the most immediately dangerous intersection of AI and geopolitics right now is the flood of AI-generated disinformation tied to the Iran conflict.
This is real, it is happening at scale, and it is affecting what people believe about the war:
- Iranian state media circulated AI-generated images claiming to show successful strikes on U.S. and Israeli targets, including a fabricated image of a downed F-35 jet.
- Pro-Iranian social media accounts spread fake footage purporting to show Iranian retaliatory attacks, including a fabricated video of missiles hitting the USS Abraham Lincoln aircraft carrier. NewsGuard found that such videos and images garnered more than 21.9 million views.
- After reports of Ayatollah Khamenei's death, AI-generated footage of him circulated widely, even though no official photograph had been released.
- Grok, the AI chatbot built into X (formerly Twitter), in some cases declared fake AI-generated videos authentic when users asked it to verify them.
- A Chosun report from March 7, 2026, documented how "sophisticated AI-generated war footage spreads rapidly on X, complicating verification."
The scale of the problem is staggering. Industry estimates suggest the number of deepfake videos shared online surged from approximately 500,000 in 2023 to 8 million by 2025. Research indicates that 68% of deepfakes are now "nearly indistinguishable from genuine media." And studies have shown that even when people are explicitly warned that a video is a deepfake, most still rely on the content of the video to form their beliefs.
Please verify your sources. If you see dramatic footage of the conflict, check who posted it, when it was posted, and whether credible news organizations have confirmed it. Use tools like reverse image search, check fact-checking organizations (like BBC Verify, which has been actively debunking AI-generated content from this conflict), and be especially skeptical of content that seems designed to provoke an emotional reaction.
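One reason reverse image search is useful here: it compares perceptual fingerprints rather than exact bytes, so a re-encoded or lightly edited copy of an image still matches the original. Here is a toy sketch of that idea, where an 8x8 grayscale grid stands in for a real image (production tools use full images and far more robust hashes):

```python
# Toy illustration of perceptual ("average") hashing, the core idea
# behind reverse-image-search matching. Each "image" here is an 8x8
# grid of grayscale values (0-255) instead of a real picture.

def average_hash(pixels):
    """64-bit hash: each bit records whether a pixel is brighter
    than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; small distance = likely same image."""
    return bin(h1 ^ h2).count("1")

# A synthetic gradient "image".
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
# A re-encoded copy: uniform brightness shift, same structure.
recompressed = [[min(255, p + 3) for p in row] for row in original]
# A different image entirely (inverted gradient).
unrelated = [[255 - (r * 8 + c) * 4 for c in range(8)] for r in range(8)]

d_same = hamming_distance(average_hash(original), average_hash(recompressed))
d_diff = hamming_distance(average_hash(original), average_hash(unrelated))
print(d_same, d_diff)  # prints: 0 64
```

The brightness-shifted copy hashes identically (distance 0) while the unrelated image is maximally distant, which is why a slightly recompressed propaganda image can still be traced back to its source photo.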
The market impact
The convergence of AI uncertainty and geopolitical conflict has rattled financial markets:
- The Dow Jones Industrial Average turned negative for 2026, down 0.2% year-to-date as of March 5, after a 1.6% single-day drop driven by soaring energy prices.
- Oil prices spiked as traffic through the Strait of Hormuz, a critical shipping lane, nearly halted following threats from Iran.
- Gold prices rose to $5,169 per ounce as investors sought safe-haven assets.
- Asian markets were hit particularly hard, as the region depends heavily on Middle Eastern energy imports.
- Morgan Stanley warned that prolonged conflict could lead to higher oil prices, hotter inflation, and greater market uncertainty.
Separately, AI-specific fears have also weighed on stocks. In late February, a hypothetical "macro memo from 2028" about how AI could disrupt the economy sent the Dow down 800 points, illustrating how sensitive markets have become to AI-related narratives.
What this means
Here are the facts as they stand:
- AI companies are now political actors. Whether through Super PACs, lobbying, or high-profile disputes with the government, the major AI labs are deeply embedded in political processes. This is a new reality.
- The U.S. government has shown it will use aggressive tools against AI companies that don't comply with its demands, including designations previously reserved for foreign adversaries.
- AI is being used in active warfare at unprecedented scale, and the legal and ethical frameworks have not kept up. International discussions are ongoing but slow.
- AI-generated disinformation is flooding the information space around active conflicts, making it harder than ever to distinguish real events from fabricated ones. Even AI tools meant to help with verification are failing.
- Financial markets are reacting to both AI uncertainty and AI-adjacent geopolitical events, creating new forms of economic risk.
None of this is theoretical anymore. These are things happening right now, in real time, affecting real people and real decisions.
I wrote this post because I believe that anyone working in or around AI has a responsibility to understand the broader context in which this technology exists. I don't have easy answers, and I'm not here to tell anyone what to think. But I do believe that being informed, being skeptical, and being honest about what's happening is the bare minimum.
Stay critical. Verify everything. And if someone shows you a dramatic video from a war zone, ask yourself: is this real?
References
- "Anthropic to donate $20m to US political group backing AI regulation," The Guardian, February 12, 2026. Link
- "Anthropic Puts $20 Million Into a Super PAC Operation to Counter OpenAI," The New York Times, February 12, 2026. Link
- "OpenAI and Anthropic battle in the political arena," Yahoo Finance, February 2026. Link
- "AI's Biggest Builders Are Now Its Biggest Lobbyists," Forbes, February 20, 2026. Link
- "OpenAI has upped its lobbying efforts nearly sevenfold," MIT Technology Review, January 21, 2025. Link
- "OpenAI announces Pentagon deal after Trump bans Anthropic," NPR, February 27, 2026. Link
- "Our agreement with the Department of War," OpenAI, February 28, 2026. Link
- "State Department switches to OpenAI as US agencies start phasing out Anthropic," Reuters, March 2, 2026. Link
- "OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash," The New York Times, February 27, 2026. Link
- "A Timeline of the Anthropic-Pentagon Dispute," TechPolicy.Press, 2026. Link
- "Anthropic says the Pentagon has declared it a national security risk," NBC News, March 5, 2026. Link
- "How AI is shaping the war in Iran, and what's next for future conflicts," Nature, March 5, 2026. Link
- "US military relying on AI as tool to speed Iran operations," Union Leader, 2026. Link
- "Israel-Iran conflict unleashes wave of AI disinformation," BBC News, 2026. Link
- "Did you spot these fake videos about the Iran war?" Euronews, March 6, 2026. Link
- "AI in the Age of Fake (Imagined) Content," Stimson Center, 2026. Link
- "Dow Jones turns red for 2026 as Iran war roils old-economy gauge," Yahoo Finance, March 5, 2026. Link
- "Iran Conflict: Seven Takeaways for Investors," Morgan Stanley, March 3, 2026. Link
- "Anthropic's AI safety stance clashes with Pentagon, and reshapes spending on primaries," OpenSecrets, March 2026. Link
- "Rewiring democracy: What impact will AI have on our country's future?" Colorado State University, February 26, 2026. Link
- "AI Deepfake Videos Flood Iran-US-Israel War Coverage," Chosun, March 7, 2026. Link
- "Trump signs order to rename Defense Department as Department of War," NPR, September 4, 2025. Link