Anthropic drew a line
An AI company told the Pentagon "no." The Pentagon responded by trying to destroy its business. Now the courts will decide whether a technology company has the right to set limits on how its products are used, even when the customer is the most powerful military on earth.
What happened
The sequence of events moved fast. In late February 2026, Anthropic CEO Dario Amodei publicly confirmed that the company would not allow its Claude AI model to be used for two specific purposes: fully autonomous weapons systems without human oversight, and mass surveillance of American citizens. These were not new positions. Anthropic had maintained these restrictions since it began working with national security contractors in 2024, partnering with firms like Palantir on intelligence analysis, operational planning, and cyber operations.

The Pentagon wanted broader terms. Defense Secretary Pete Hegseth pushed for language granting the military permission to use Claude for "any lawful purpose," a framing that would have effectively removed Anthropic's guardrails. Anthropic refused.

On February 27, President Trump announced on Truth Social that all federal agencies would stop using Anthropic's technology. On March 4, the Department of Defense formally designated Anthropic a supply chain risk to national security. On March 9, Anthropic filed two federal lawsuits, one in the U.S. District Court for the Northern District of California and one in the D.C. Circuit Court of Appeals, alleging the government had violated its First and Fifth Amendment rights and exceeded the legal scope of the supply chain risk statute.
Why the designation matters
The supply chain risk label isn't a slap on the wrist. Under 10 U.S.C. § 3252, it effectively blacklists a company from Pentagon procurement: direct contracts are blocked, and defense contractors are barred from using the designated company's products in their own government work. The designation is typically reserved for foreign adversary contractors suspected of sabotage, not American companies with policy disagreements. Anthropic's lawsuit calls the action "unprecedented and unlawful." The company argues the government bypassed mandatory legal procedures by terminating contracts and blocking future work without providing prior notice or a meaningful opportunity to respond. And the economic damage is real. According to the filing, current and future contracts worth hundreds of millions of dollars are already in jeopardy.

This matters beyond the courtroom. Anthropic is not a small startup making a symbolic stand. As of early March 2026, the company's annualized revenue had surged past $19 billion, up from $9 billion at the end of 2025. It recently raised $30 billion in Series G funding at a $380 billion valuation. When a company this large draws a line, the consequences ripple across the entire industry.
Two philosophies of control
The contrast with OpenAI could not be sharper. Within days of Anthropic's blacklisting, OpenAI announced a deal to replace Claude on Pentagon systems. CEO Sam Altman later admitted the move looked "sloppy" and "rushed." But the more revealing moment came during an internal all-hands meeting on March 4, when Altman told staff that OpenAI "can't control" how the military uses its technology once deployed. Operational decisions, he said, are the government's domain.

This is a fundamentally different philosophy. Anthropic says: we build the tool, and we set boundaries on what it can do. OpenAI says: we build the tool, and once it ships, the customer decides. Both positions have internal logic, but they lead to very different places. Anthropic's approach treats AI guardrails as a design constraint, something baked into the product itself. The company's argument is that Claude was never designed to operate lethal weapons autonomously or to conduct mass surveillance, and allowing those uses would be inconsistent with the system's architecture and the company's founding commitments. OpenAI's approach treats guardrails as a contractual matter, governed by whatever legal framework the customer operates under. If it's technically legal, it's technically permitted.

The Pentagon's preferred framing, "any lawful purpose," reveals the tension. Over the past two decades, the U.S. government has stretched the definition of what counts as lawful surveillance to cover sweeping bulk data collection programs. A restriction that only prohibits illegal uses may not restrict very much at all.
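To make the distinction concrete, here is a minimal Python sketch of the two enforcement models, assuming a hypothetical serving layer. The names here (policy_gate, HARD_PROHIBITIONS, the upstream use_category classifier) are illustrative, not Anthropic's or OpenAI's actual code.

```python
# Sketch of "guardrails as a design constraint": a policy gate inside the
# serving path itself, so no contract term can switch it off. All names
# are hypothetical; this is not any vendor's real implementation.

from dataclasses import dataclass
from enum import Enum, auto

class Verdict(Enum):
    ALLOW = auto()
    REFUSE = auto()

# Categories the product refuses by design, regardless of customer.
HARD_PROHIBITIONS = {
    "autonomous_lethal_targeting",
    "mass_domestic_surveillance",
}

@dataclass
class Request:
    use_category: str      # assigned upstream by an (assumed) classifier
    contract_allows: bool  # what the customer's contract says

def policy_gate(req: Request) -> Verdict:
    # Design-constraint model: hard prohibitions are checked first and
    # cannot be overridden by contract language like "any lawful purpose."
    if req.use_category in HARD_PROHIBITIONS:
        return Verdict.REFUSE
    # The contractual model effectively starts here instead: whatever the
    # contract (that is, the law) permits goes through.
    return Verdict.ALLOW if req.contract_allows else Verdict.REFUSE

print(policy_gate(Request("autonomous_lethal_targeting", True)))  # REFUSE
print(policy_gate(Request("intelligence_analysis", True)))        # ALLOW
```

The load-bearing design choice is the ordering: in the design-constraint model the prohibition check runs before any contract logic, so no procurement language can reach it.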
The cracks are internal too
This isn't just a boardroom disagreement. After OpenAI's Pentagon deal was announced, nearly 40 employees from Google and OpenAI filed an amicus brief with the court supporting Anthropic's position. The signatories described themselves as "diverse in politics and philosophies" but "united in the conviction that today's frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight." OpenAI also faced enough internal and external backlash that Altman revised the Pentagon agreement within days, adding restrictions on use for domestic surveillance and requiring intelligence agencies like the NSA to obtain separate contract modifications before accessing the system. The original deal, sources told The Verge, essentially boiled down to: if it's legal, the military can do it with OpenAI's tools. The fact that OpenAI walked back parts of its own agreement under pressure suggests Anthropic's position has more support within the AI industry than the Pentagon might have expected.
Agent safety principles at geopolitical scale
There's an interesting parallel between this dispute and the principles that govern how AI agents are designed to operate in everyday software. Good agent design follows a few core ideas: least-privilege access, human checkpoints before irreversible actions, and clear boundaries on what the system is authorized to do. You don't give a code assistant root access to production servers. You don't let an email agent send messages without confirmation.

Anthropic is applying the same logic to military deployment. The argument isn't that the military shouldn't use AI. Anthropic actively supported intelligence analysis, modeling, simulation, and cyber operations through its defense partnerships. The argument is that certain categories of use, specifically autonomous killing and mass surveillance, require human oversight by design, and that removing those constraints undermines the safety architecture of the system itself.

The Pentagon's counterargument, that private companies shouldn't dictate how the government uses technology in warfare, has surface-level appeal. But it ignores that AI systems are not passive tools like a wrench or a radio. They make decisions, classify targets, and process data at scales no human team can review. The question of who sets the boundaries isn't academic. It's operational.
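For readers who build these systems, here is a minimal sketch of the two everyday patterns described above: least-privilege tool grants and a human checkpoint before irreversible actions. Everything here (TOOLS, run_tool, the reversible flag) is hypothetical scaffolding, not any vendor's real agent API.

```python
# Sketch of everyday agent-safety patterns: least-privilege tool grants
# plus a human checkpoint before irreversible actions. All names are
# illustrative, not a real vendor API.

from typing import Callable

# Each tool declares whether its effects can be undone; the agent's
# grant lists the only tools it may call (least privilege).
TOOLS: dict[str, dict] = {
    "read_logs":    {"reversible": True},
    "draft_email":  {"reversible": True},
    "send_email":   {"reversible": False},
    "delete_table": {"reversible": False},
}

def run_tool(name: str, granted: set[str],
             confirm: Callable[[str], bool]) -> str:
    if name not in granted:
        return f"denied: {name} is outside this agent's grant"
    if not TOOLS[name]["reversible"] and not confirm(name):
        return f"held: {name} is irreversible and was not confirmed"
    return f"executed: {name}"

# An email agent gets only what it needs; a human gates the send.
grant = {"read_logs", "draft_email", "send_email"}
deny_all = lambda action: False  # stand-in for a real confirmation UI

print(run_tool("draft_email", grant, deny_all))   # executed: reversible
print(run_tool("send_email", grant, deny_all))    # held: needs a human
print(run_tool("delete_table", grant, deny_all))  # denied: never granted
```

The point of the pattern is that the confirmation requirement lives in the tool's own metadata: the agent cannot talk its way past it, and neither can a permissive customer policy.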
Precedent or outlier?
The cynical reading is that Anthropic's stance is partly a brand play. Being the "principled" AI lab distinguishes it in a crowded market, and the backlash against OpenAI's Pentagon deal has arguably boosted Anthropic's reputation among enterprise customers and technical talent. The AP reported that the fight has attracted new customers and supporters who side with Anthropic's refusal to bend.

But brand incentives don't negate the substance of the legal argument. Anthropic is testing whether the government can use a supply chain risk designation, a tool designed to protect against foreign sabotage, to punish a domestic company for its publicly stated views on AI safety. The First Amendment claim is straightforward: the government is retaliating against protected speech. The Fifth Amendment claim is procedural: no notice, no hearing, no due process. If Anthropic wins, it establishes that AI companies have a legal right to set usage boundaries, even with government customers. If it loses, the message to every other AI lab is clear: comply or be cut out.

Right now, Anthropic looks like an outlier. OpenAI took the Pentagon's terms. Other major labs have stayed quiet. But the amicus brief from employees across rival companies suggests that the underlying tension, between building powerful AI and controlling how it's used, is not confined to one company. The question is whether market pressure and government leverage will keep other labs silent, or whether Anthropic's stand opens space for a broader industry norm.
The uncomfortable tension
There's a final irony worth sitting with. Anthropic was the first frontier AI lab cleared to operate on classified government networks. It built its national security partnerships carefully, working within defined boundaries that both sides understood. The company that invested most heavily in making AI safe for government use is now the one the government doesn't want.

The lawsuit will take months, possibly years, to resolve. In the meantime, the Pentagon is moving forward with alternative providers who agreed to softer terms. The practical result is that the military's most powerful AI tools will, at least for now, be governed by weaker safety constraints than the ones Anthropic tried to maintain.

Whether that trade-off matters depends on what you think AI safety means in practice. If it's a marketing label, then any provider will do. If it's an engineering discipline, then the boundaries matter as much as the capabilities. Anthropic is betting its business on the second answer.
References
- Bobby Allyn, "Anthropic sues the Trump administration over 'supply chain risk' label," NPR, March 9, 2026.
- Dario Amodei, "Where things stand with the Department of War," Anthropic, March 5, 2026.
- Jack Queen and Deepa Seetharaman, "Anthropic sues to block Pentagon blacklisting over AI use restrictions," Reuters, March 9, 2026.
- "Key claims in Anthropic's lawsuit against Trump's blanket government ban on its tech," Reuters, March 9, 2026.
- "Altman Tells Staff: OpenAI Can't Control Military Use," The Tech Buzz, March 4, 2026.
- "How OpenAI caved to the Pentagon on AI surveillance," The Verge, 2026.
- "Anthropic sues US government for calling it a risk," BBC News, March 9, 2026.
- "Anthropic seeks to undo 'supply chain risk' designation from Trump administration," AP News, March 9, 2026.
- "Anthropic ARR surges to $19 billion on Claude Code strength," Yahoo Finance, March 4, 2026.
- "Anthropic raises $30 billion in Series G funding at $380 billion post-money valuation," Anthropic, 2026.
- "The War on Anthropic: Pretextual Designation and Unlawful Punishment," Just Security, 2026.
- "Anthropic Challenges the Pentagon's Supply Chain Risk Determination," Lawfare, 2026.