The Pentagon left Anthropic on read
On May 1, 2026, the Pentagon announced agreements with seven leading AI companies to deploy their technology on classified military networks: SpaceX, OpenAI, Google, NVIDIA, Microsoft, Amazon Web Services, and Reflection. Conspicuously absent from the list was Anthropic, the company that arguably talks more about AI safety than any other lab on Earth. The omission wasn't an oversight. It was the culmination of a months-long standoff that turned Anthropic from the Pentagon's first frontier AI partner into a federally designated supply chain risk. And it raises an uncomfortable question for the entire "responsible AI" movement: does the market actually reward caution?
Seven in, one out
The deals grant these companies access to the Pentagon's Impact Level 6 and Impact Level 7 classified network environments, the infrastructure used for secret and highly restricted data, respectively. The Pentagon framed the move as building an "AI-first fighting force" with "decision superiority across all domains of warfare." Every company on the list agreed to the same baseline condition: their technology would be available for "any lawful use."

That phrase is doing a lot of heavy lifting. It means the Pentagon decides what's lawful, and the vendor doesn't get to draw lines around specific use cases. Anthropic refused that condition. And that refusal set off a chain of events that no one in Silicon Valley saw coming.
How Anthropic went from first-in to locked-out
The irony is hard to overstate. In July 2025, Anthropic won a $200 million contract with the Pentagon's Chief Digital and Artificial Intelligence Office. Claude became the first frontier AI model cleared for classified military use. It was deployed by military and intelligence personnel for analytical and operational support, including, reportedly, in support of active operations.

Then in January 2026, the Pentagon's new AI strategy demanded that all contracted AI companies accept unrestricted "any lawful use" terms. Anthropic pushed back on two specific points: no mass surveillance of Americans, and no fully autonomous weapons systems. The Pentagon said a private company doesn't get to decide how the military uses its tools.

The standoff escalated fast. On February 27, after a 5:01 PM deadline passed without agreement, President Trump posted on Truth Social directing every federal agency to "immediately cease" using Anthropic's technology. Defense Secretary Pete Hegseth designated Anthropic a supply chain risk, a label previously reserved for foreign adversaries like Huawei. No American company had ever received it. By March 10, an internal Pentagon memo ordered military commanders to remove Anthropic's AI from their systems within 180 days.
The legal battle
Anthropic didn't go quietly. CEO Dario Amodei pointed out the inherent contradiction in the Pentagon's position: you can't simultaneously label a company a security risk and invoke the Defense Production Act to force it to keep providing its technology. One says Anthropic is dangerous; the other says Anthropic is essential.

In March, Anthropic filed lawsuits in both Northern California federal court and the D.C. Circuit, arguing the designation violated the Administrative Procedure Act, the First Amendment, and due process. A federal judge in San Francisco agreed enough to grant a preliminary injunction in late March, pausing the government's implementation of the ban. But in April, the D.C. Circuit denied Anthropic's request to lift the supply chain risk label, creating a split between the two courts. The legal fight is ongoing. The commercial damage is already done.
OpenAI found a different path
Compare Anthropic's approach to OpenAI's. As recently as early 2024, OpenAI had explicit policies against military use of its technology. By late that year, the company had pivoted, announcing a defense partnership with Anduril to deploy AI directly on the battlefield. When the Pentagon demanded "any lawful use" terms, OpenAI initially signed a deal that drew criticism for being "opportunistic and sloppy," in CEO Sam Altman's own words. But it quickly renegotiated, adding explicit language prohibiting mass domestic surveillance and fully autonomous weapons, while still operating within the Pentagon's framework. The result: OpenAI is on the list. Anthropic is not.

The difference wasn't really about values. Both companies ended up in roughly the same place on the substance, opposing mass surveillance and autonomous weapons. The difference was about how they drew the line. OpenAI negotiated within the Pentagon's terms and secured its protections as contractual provisions. Anthropic demanded that the Pentagon accept its terms as preconditions. In defense procurement, that distinction is everything.
The safety premium myth
Anthropic has built its entire brand around the idea that safety-first AI is not just ethically right but commercially smart. The pitch to investors, enterprise customers, and regulators has been that responsible development commands a premium in the market. The Pentagon deals challenge that narrative directly.

Defense AI is not a niche. The global military AI market is estimated at roughly $9 to $10 billion in 2025, with projections pointing toward $28 to $32 billion within the next decade. Classified network deployments, intelligence fusion, logistics optimization, target analysis: these are the use cases that generate massive, sticky government contracts. And Anthropic has voluntarily exited this market.

"Voluntarily" is doing some work in that sentence, of course. Anthropic would argue it was pushed out by unreasonable demands. But the Pentagon would note that seven other companies, including one (OpenAI) that shares many of Anthropic's stated safety concerns, found a way to make it work. The market doesn't seem to be rewarding caution. It's rewarding compliance.
Why small nations are watching
This isn't just a story about Washington and San Francisco. Small, technologically sophisticated nations, particularly in Asia, watch U.S. defense AI decisions closely because they shape the ecosystem these countries buy into. Singapore is a case in point. The city-state has invested heavily in defense AI through its Defence Science and Technology Agency (DSTA), recently expanding a partnership with Shield AI on autonomous drone capabilities. Singapore's defense procurement philosophy, like that of many U.S. allies, tends to follow the technology stack that Washington validates. When the Pentagon clears seven companies for classified network deployment, that signals to allied nations which vendors are safe bets for their own military modernization. Anthropic's exclusion from the Pentagon's approved list doesn't just lock it out of American defense contracts. It sends a signal to every allied defense ministry evaluating AI vendors: this company might not be around for the long haul in the defense space.
Principled or badly positioned?
The hardest question in this story is whether Anthropic's stance represents genuine principle or a strategic miscalculation dressed up as ethics.

There's a case for principle. Dario Amodei has been consistent: Anthropic will support national defense "in all ways except those which would make us more like our autocratic adversaries." The company's two red lines, no mass surveillance of Americans and no fully autonomous weapons, are narrowly drawn. It isn't opposed to military use broadly: it permitted its models to be used for missile defense and cyber defense, and even to support active military operations. The line is specific, not sweeping.

But there's also a case for bad positioning. As the Small Wars Journal noted, Anthropic's stance looks less like a coherent ethical framework and more like "risk management dressed as moral philosophy." The company is comfortable providing AI that enables warfare at scale. It draws its line at two use cases that are, not coincidentally, the two most politically toxic in American domestic politics. That looks less like principle and more like brand management.

The truth is probably somewhere in between. Anthropic genuinely believes in the red lines it drew. But it also badly underestimated the Pentagon's willingness to make an example of a company that tried to set conditions on how the military could use its technology. The supply chain risk designation wasn't proportionate; it was punitive. And Anthropic didn't have a playbook for that scenario.
What this actually tells us
The Pentagon's announcement isn't really about Anthropic. It's about the terms of engagement between the AI industry and the national security state. The message is clear: the Pentagon will work with companies that accept its framework, even if those companies negotiate guardrails within it. What it won't accept is a vendor that tries to define the boundaries from the outside.

OpenAI learned this and adapted. Google, which faced its own internal revolt over military AI work in 2018 with Project Maven, quietly came back to the table. Microsoft and Amazon never left.

For the "responsible AI" movement, the lesson is uncomfortable. Safety rhetoric is cheap. Safety as a contractual negotiating position is expensive, but viable. Safety as a precondition on doing business with the most powerful customer on Earth is, at least right now, a losing play. Whether that should be the case is a different question. But the market has spoken.
References
- Pentagon reaches agreements with top AI companies, but not Anthropic (Reuters, May 1, 2026)
- Pentagon clears 7 tech firms to deploy their AI on its classified networks (Breaking Defense, May 1, 2026)
- Top AI companies agree to work with Pentagon on secret data (Washington Post, May 1, 2026)
- Pentagon inks deals with seven AI companies for classified military work (The Guardian, May 1, 2026)
- Pentagon Makes Deals With A.I. Companies to Expand Classified Work (New York Times, May 1, 2026)
- Statement from Dario Amodei on our discussions with the Department of War (Anthropic, February 26, 2026)
- Where things stand with the Department of War (Anthropic, March 5, 2026)
- Internal Pentagon memo orders military commanders to remove Anthropic AI technology from key systems (CBS News, March 10, 2026)
- Anthropic vows to sue Pentagon over supply chain risk label (BBC, March 6, 2026)
- Anthropic loses appeals court bid to temporarily block DOD ruling (CNBC, April 8, 2026)
- The Anthropic-DOD Conflict: Privacy Protections Shouldn't Depend On the Decisions of a Few Powerful People (Electronic Frontier Foundation, March 2026)
- Our agreement with the Department of War (OpenAI, 2026)
- OpenAI changes deal with US military after backlash (BBC, March 3, 2026)
- Selective Virtue: Anthropic, the Pentagon, and the Contradictions of AI Governance in Wartime (Small Wars Journal, April 29, 2026)
- Anthropic's Standoff With the Pentagon Is a Test of U.S. Credibility (Council on Foreign Relations, March 5, 2026)
- DoD strikes deals with major tech firms to deploy AI on classified networks (Federal News Network, May 1, 2026)
- Shield AI, Republic of Singapore Air Force, and DSTA expand partnership (Shield AI, February 5, 2026)
- Artificial Intelligence in Defense Market to Reach USD 29.48 Billion by 2035 (SNS Insider via Yahoo Finance)