AI picked the targets
In the first 24 hours of Operation Epic Fury, the U.S. military struck more than 1,000 targets in Iran, more than double the pace of the opening salvo of the 2003 Iraq invasion. By the end of March 2026, the total had climbed past 11,000. The system that made this possible wasn't a new missile or a bigger fleet. It was software. Specifically, it was Project Maven, a Palantir-run AI targeting platform that fuses satellite imagery, drone feeds, radar data, and signals intelligence into a single interface, then classifies targets, recommends weapons, and generates strike packages in near real time. The AI-in-warfare conversation is no longer theoretical. It's operational. And most people haven't noticed.
From classified experiment to combat backbone
Project Maven started quietly in April 2017, when Deputy Secretary of Defense Robert Work established the Algorithmic Warfare Cross-Functional Team inside the Pentagon. The original goal was modest: use machine learning to help analysts process the crushing volume of drone surveillance footage pouring in from operations against ISIS. Marine Corps Colonel Drew Cukor and Air Force Lt. Gen. Jack Shanahan led the effort, describing it as a "pathfinder" to bring AI into the Department of Defense.

Google was the first major tech partner. That didn't last long. In 2018, over 3,100 Google employees signed an internal letter addressed to CEO Sundar Pichai, stating bluntly: "We believe that Google should not be in the business of war." About a dozen engineers resigned in protest. Google announced it would not renew the Maven contract and published a set of AI principles that explicitly forbade work on weapons and on surveillance that violates international norms.

Palantir stepped in with zero hesitation. The secretive data analytics company, founded by Peter Thiel, took over Maven and ran it as an internal project codenamed "Tron." Over the following years, Palantir built the Maven Smart System into something far more ambitious than Google's original computer vision work.
What Maven actually does
Forget the image of a single chatbot deciding who lives and who dies. Maven is better understood as an AI-powered command and control platform, one that compresses the entire military "kill chain" into a radically shorter timeline. The system ingests data from every available intelligence source: satellite imagery, signals intelligence (intercepted communications), surveillance feeds, human intelligence reports, and open-source data. It fuses all of this into a single operational picture, something Bloomberg reporter Katrina Manson describes as "imagine Google Earth for war, a map of war with white dots, infused with information like elevation, coordinate, what is precisely there, whether it's friendly or foe."

From there, Maven can identify and prioritize targets in near real time, generate GPS coordinates for strikes, recommend which weapons system to use for each target, and produce automated legal justifications under the laws of armed conflict. What once took intelligence analysts and targeting officers weeks, Maven compresses into hours or minutes. According to a Center for Security and Emerging Technology report, the system's goal is to help a commander process 1,000 tactical decisions per hour. In a 2024 demonstration, 20 soldiers using Maven matched the targeting throughput that required over 2,000 staffers during Operation Iraqi Freedom.

Anthropic's Claude model was embedded within Maven, handling intelligence analysis, target prioritization, operational planning, and more. The Verge described the result plainly: "an AI-powered Kanban board for killing people."
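The throughput claims above (1,000 decisions per hour, 20 soldiers doing work that once took 2,000 staffers) are easier to grasp with a little arithmetic. The sketch below uses only the figures quoted in this section; combining them into a per-operator rate is an illustrative assumption, not a reported statistic.

```python
# Rough arithmetic on the throughput figures quoted above.
# Assumption: dividing the CSET-reported decision rate across the 20-soldier
# demo team is purely illustrative; the two numbers come from different contexts.

maven_operators = 20        # soldiers in the 2024 demonstration
oif_staffers = 2_000        # targeting staff cited for Operation Iraqi Freedom
decisions_per_hour = 1_000  # CSET-reported goal for a single commander

staff_compression = oif_staffers / maven_operators
per_operator_rate = decisions_per_hour / maven_operators

print(f"Staffing compression: {staff_compression:.0f}x fewer people")
print(f"Implied load per operator: {per_operator_rate:.0f} decisions/hour "
      f"(one every {3600 / per_operator_rate:.0f} seconds)")
```

Under those assumptions, each operator would be clearing a decision roughly every 72 seconds, hour after hour, which is the context for the "human in the loop" discussion below.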
The Anthropic split
The relationship between the Pentagon and Anthropic became one of the defining flashpoints of the war. Anthropic's contract with the Department of Defense contained two restrictions: Claude could not be used for mass surveillance of American citizens, and it could not be used with autonomous weapons that kill without human involvement. The Pentagon demanded the ability to use Claude for "all lawful purposes," arguing that existing law already prohibited those things, making the contractual restrictions redundant. Anthropic held firm.

The Pentagon responded by designating the company a "national security supply chain risk," effectively blacklisting one of the country's most capable AI startups as though it were a Chinese military front. President Trump called Anthropic a "Radical Left AI company" on social media. Hours after the administration announced the severing of ties, the military was still using Claude in active combat operations in Iran.

The message was clear: when an AI company tries to set boundaries on how its technology is used in war, the consequences are existential. Anthropic faces tens of billions of dollars in lost direct and indirect government contracts. As Axios reported, no other AI model on the market, not ChatGPT, Gemini, or Grok, could match Claude's performance for military applications. The company's ethical stand came with an enormous price tag.
When AI gets it wrong, the cost is lives
On the first morning of Operation Epic Fury, American forces struck the Shajareh Tayyebeh primary school in Minab, southern Iran, hitting the building at least twice during the morning session. Between 175 and 180 people were killed, most of them girls between the ages of seven and twelve.

The immediate public reaction was to blame AI. After weeks of headlines about Claude-powered targeting, it was an easy narrative. But the preliminary Pentagon investigation told a different story: the strike resulted from outdated targeting data in the system, a failure of the human intelligence pipeline that fed Maven, not a malfunction of the AI itself.

This distinction matters, but perhaps not in the way the Pentagon would like. As Military Times reported, the system "compresses kill-chain reasoning and decision making into the fastest timelines ever seen on the battlefield." When a process that once involved hundreds of analysts checking each other's work is compressed into something a small team handles in minutes, the opportunities for errors of exactly this kind multiply. The AI didn't choose the wrong target. Humans fed it bad data, and the system moved too fast for anyone to catch the mistake.

Craig Jones, an expert on modern warfare, put it this way on Democracy Now!: "You're reducing a massive human workload of tens of thousands of hours into seconds and minutes. You're reducing workflows, and you're automating human-made targeting decisions in ways which open up all kinds of problematic legal, ethical and political questions."
The "human in the loop" illusion
The Pentagon has consistently framed Maven as a "human-in-the-loop" decision support system. Humans approve every strike. The AI just recommends. But consider what this means in practice. When a system can nominate up to 1,000 targets per hour, each with automatically generated GPS coordinates, weapons recommendations, and legal justifications, what does "human approval" actually look like? A person clicking through a queue at machine speed is not exercising the kind of judgment we typically associate with decisions about life and death. As one researcher noted, "human in the loop" becomes a rubber stamp when the system produces targets faster than any human could meaningfully evaluate them. The arithmetic at the end of this section makes the point concrete.

This is a familiar pattern in software design. We see it in content moderation, where human reviewers approve or reject AI recommendations at rates that make genuine review impossible. We see it in autonomous driving, where "human oversight" means a distracted operator who is supposed to grab the wheel in a fraction of a second. The military version of this problem carries higher stakes than any of them.

The civilian AI world is consumed with debates about chatbot guardrails, prompt injection, and whether an AI assistant might say something offensive. Meanwhile, the same underlying technology has been given permissions over life and death, with "human oversight" conducted at a pace that makes meaningful oversight impossible.
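Here is the arithmetic. It uses only the nomination rate quoted above; the size of the hypothetical approval cell is an illustrative assumption, not a reported figure.

```python
# Illustrative arithmetic: how much attention can each reviewer give a
# nomination if the system really produces 1,000 targets per hour?
# The reviewer count is an assumption for illustration only.

targets_per_hour = 1_000
reviewers = 5  # hypothetical size of an approval cell

seconds_per_target = 3600 / targets_per_hour              # one person sees everything
seconds_per_target_shared = 3600 * reviewers / targets_per_hour

print(f"One reviewer: {seconds_per_target:.1f} seconds per target")
print(f"{reviewers} reviewers: {seconds_per_target_shared:.0f} seconds per target")
```

At 3.6 seconds per target for a lone reviewer, or 18 seconds spread across five people, "approval" is closer to triage than deliberation, which is the point the researchers quoted above are making.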
The revolving door of ethics
The Google-to-Palantir pipeline reveals something uncomfortable about the relationship between technology companies and military power. Google walked away from Maven under employee pressure and published ethical AI principles. Three years later, the company was cozying back up to the Pentagon, just more quietly. Palantir, which never pretended to have ethical qualms about defense work, took the contract and turned it into a multi-billion dollar business. The pattern repeats with Anthropic. A company builds the most capable AI on the market, sets boundaries on its military use, and gets blacklisted. The Pentagon's message to the rest of Silicon Valley is unmistakable: participate without conditions, or be shut out entirely. By March 2026, the Pentagon had formalized Maven as a core military system with multi-year funding, growing from $480 million in 2024 to $13 billion. Palantir's stock has soared. The defense-tech pipeline doesn't just reward companies willing to build weapons. It punishes companies that try to set limits.
What this reveals about AI governance
The story of Project Maven is not primarily an anti-war story. It's a story about systems design decisions and what they reveal about the state of AI governance. We worry endlessly about whether AI agents in our productivity tools have too many permissions. We debate whether a coding assistant should be able to execute shell commands. We build elaborate permission systems to prevent an AI from sending an email without approval (a minimal sketch of that pattern appears at the end of this piece). These are real and important concerns.

But while we fine-tune access controls for calendar apps, the U.S. military has deployed an AI system that recommends which human beings should be killed, generates the legal justification for doing so, and selects the weapon to do it, all within minutes. The "human in the loop" is a person clicking through recommendations at a pace that precludes genuine deliberation.

The civilian AI safety community and the military AI community are having two completely different conversations. One is about alignment, red-teaming, and constitutional AI. The other is about compressing the kill chain. The technology is the same. The stakes couldn't be more different.

Project Maven didn't start in 2026. It started in 2017. The concept isn't new. What's new is the scale, the speed, and the fact that an AI company that tried to draw a line was made an example of. The systems design choices that led to 11,000 targets in 30 days, to a school strike caused by stale data moving through a pipeline too fast to catch, to a "human in the loop" who approves targets at machine speed: those choices were made by people, over years, in conference rooms and contract negotiations. The AI didn't pick the targets. People designed a system that made it almost inevitable that it would.
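For contrast, here is a minimal sketch of the kind of approval gate the civilian AI world argues about: a hypothetical tool wrapper that refuses to act until a human explicitly says yes. The names and the email example are illustrative assumptions, not any particular product's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    """An action an AI agent wants to take, held until a human rules on it."""
    description: str
    execute: Callable[[], None]

def require_human_approval(action: ProposedAction) -> bool:
    """Block until a person explicitly types 'yes'. Anything else is a refusal."""
    answer = input(f"Agent wants to: {action.description}\nApprove? [yes/no] ")
    if answer.strip().lower() == "yes":
        action.execute()
        return True
    print("Action refused; nothing was executed.")
    return False

# Hypothetical usage: gate an email send behind a human decision.
send_email = ProposedAction(
    description="send a calendar follow-up email to the project list",
    execute=lambda: print("(email sent)"),
)
require_human_approval(send_email)
```

The sketch is trivial on purpose. A gate like this only means something when the queue is short enough, and the pace slow enough, for the person at the prompt to actually think, which is exactly the condition the Maven workflow is built to eliminate.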
References
- Katrina Manson, "'God, It's Terrifying': How the Pentagon Got Hooked on AI War Machines," Bloomberg Businessweek, March 12, 2026
- "The AI War on Iran: Project Maven, a Secretive Palantir-Run System, Helps Pentagon Pick Bomb Targets," Democracy Now!, March 31, 2026
- "Speeding Up the 'Kill Chain': Pentagon Bombs Thousands of Targets in Iran Using Palantir AI," Democracy Now!, March 18, 2026
- Tara Copp, Elizabeth Dwoskin, and Ian Duncan, "Anthropic's AI tool Claude central to U.S. campaign in Iran, amid a bitter feud," The Washington Post, March 4, 2026
- "Anthropic's Claude used by Pentagon in war with Iran, official confirms," The Hill
- "Deadly Iran school strike casts shadow over Pentagon's AI targeting push," Military Times, March 24, 2026
- "U.S. at Fault in Strike on School in Iran, Preliminary Inquiry Says," The New York Times, March 11, 2026
- Kevin T Baker, "AI got the blame for the Iran school bombing. The truth is far more worrying," The Guardian, March 26, 2026
- "Humans, not AI, are to blame for deadly Iran school strike, sources say," Semafor, March 18, 2026
- "This system may allow small Army teams to probe 1,000 targets per hour," Army Times, August 21, 2024
- "Palantir grabbed Project Maven defense contract after Google left the program," Business Insider, December 2019
- "'The Business of War': Google Employees Protest Work for the Pentagon," The New York Times, April 4, 2018
- "How Anthropic's Pentagon deal could get revived," Axios, March 26, 2026
- "Anthropic officially told by DOD that it's a supply chain risk even as Claude used in Iran," CNBC, March 5, 2026
- "AI targeting system doubles pace of US strikes in Iran," AZ Family, March 26, 2026
- Mike Brown, "The First AI War: How The Iran Conflict Is Reshaping Warfare," Forbes, March 30, 2026
- "Pentagon formalizes Palantir's Maven AI as a core military system with multi-year funding," Tom's Hardware
- "Palantir's Maven Smart System is an AI-powered Kanban board for killing people," The Verge, March 14, 2026
- "Project Maven," Wikipedia
- "AI in Iran: It's Not (Just) About Capabilities," Defense Security Monitor, March 27, 2026