Anthropic ate OpenAI's lunch
In April 2026, Anthropic's annualized revenue hit $30 billion, surpassing OpenAI's $24 billion run rate. For a company founded just five years ago by a handful of ex-OpenAI researchers, this is a staggering reversal. The underdog has overtaken the incumbent, and the way it happened tells us something important about what actually wins in AI. This isn't a story about one company being good and another being bad. It's a story about strategic choices, and how the right ones compound while the wrong ones quietly erode even the most dominant positions.
The numbers that matter
Anthropic's growth trajectory has no precedent in software history. The company went from $87 million in annualized revenue in January 2024 to $1 billion by December 2024. By the end of 2025, it was at $9 billion. Then came the acceleration that stunned even seasoned investors: $14 billion in February 2026, $19 billion in March, and $30 billion in April. That last stretch, from $14 billion to $30 billion in roughly eight weeks, is difficult to contextualize.

Meritech Capital's Alex Clayton reviewed the IPO trajectories of over 200 public software companies and said he had never seen growth like this. That assessment came in 2025; the curve has only steepened since.

OpenAI's numbers are impressive by any normal standard. Revenue grew from $2 billion in 2023 to $6 billion in 2024 to $13 billion in 2025, reaching a $24 billion run rate by April 2026. But "impressive by normal standards" doesn't cut it when your competitor is growing at twice your pace.
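To put that last stretch in perspective, a quick back-of-envelope calculation (using the reported run rates above, and treating the window as exactly eight weeks) shows what the implied growth rate looks like if you extrapolate it:

```python
# Back-of-envelope: implied growth from the reported run rates.
# $14B (Feb 2026) to $30B (Apr 2026), treated as an 8-week window.
start, end = 14e9, 30e9
weeks = 8

# Constant weekly growth rate that turns $14B into $30B in 8 weeks.
weekly_growth = (end / start) ** (1 / weeks) - 1

# What that rate would compound to over a full year, if sustained.
annualized_multiple = (1 + weekly_growth) ** 52

print(f"weekly growth: {weekly_growth:.1%}")               # ~10.0% per week
print(f"annualized multiple: {annualized_multiple:.0f}x")  # ~142x if sustained
```

No company sustains 10% weekly growth for a year, which is exactly the point: the run-rate jump is an anomaly even by hypergrowth standards.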
Enterprise-first versus consumer-first
The divergence comes down to a fundamental strategic choice: who are you building for?

OpenAI bet on consumers. ChatGPT became the fastest-growing consumer application in history, amassing hundreds of millions of weekly users. It became a household name, a cultural phenomenon, a verb. But consumer users are expensive to serve and hard to monetize. Free users cost real money in compute, and converting them to paying subscribers has proven difficult at scale.

Anthropic bet on enterprises and developers. Around 80% of Anthropic's revenue comes from business customers. The company focused on building tools that developers actually want to pay for, with Claude Code alone generating over $2.5 billion annually and accounting for more than half of enterprise spending on Anthropic products.

This distinction matters because enterprise customers have fundamentally different economics. They sign contracts, they pay predictable amounts, they integrate deeply into workflows, and they rarely churn once embedded. Consumer users open the app when they're curious and close it when they're bored.

OpenAI recognized this gap too late. By early 2026, reports emerged that the company was scrambling to reorient around enterprise customers, but Anthropic had already built the relationships, the tooling, and the reputation that enterprise buyers care about.
The compute cost crisis
In late April 2026, the Wall Street Journal reported that OpenAI's CFO Sarah Friar had raised internal alarms about the company's spending trajectory. The concern was stark: OpenAI might not generate enough revenue to cover its compute commitments.

The numbers behind the alarm are sobering. OpenAI has locked in roughly $600 billion in future data center spending through 2030. Its projected cash burn through that period exceeds $110 billion. Revenue projections keep getting revised upward, from $25 billion in 2026 to an ambitious $280 billion by 2030, but those projections require sustained hypergrowth that the company has recently struggled to deliver.

Friar reportedly wanted more spending discipline, which put her at odds with CEO Sam Altman, who has consistently pushed for aggressive investment. Both called the Wall Street Journal report "ridiculous" in a joint statement, but the market didn't share their confidence. Tech stocks dipped on the news.

The contrast with Anthropic is telling. According to SaaStr's analysis, Anthropic spent roughly one-quarter of what OpenAI spent on model training while overtaking it in revenue. Whether this efficiency gap is sustainable is an open question, but it suggests that Anthropic found a way to do more with less during a critical period of scaling.
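"Sustained hypergrowth" can be made concrete. Using the projection figures quoted above ($25 billion in 2026 to $280 billion in 2030), the implied compound annual growth rate works out as follows; this is a rough sketch, not an analysis from any of the cited reports:

```python
# Implied compound annual growth rate (CAGR) for OpenAI's projections:
# $25B in 2026 growing to $280B in 2030, i.e. four year-over-year steps.
start, end, years = 25e9, 280e9, 4

cagr = (end / start) ** (1 / years) - 1
print(f"required CAGR: {cagr:.0%}")  # roughly 83% per year, every year, for four years
```

For comparison, the best public software companies historically compound at 40-50% annually at far smaller revenue bases; the projections assume nearly double that, starting from $25 billion.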
Safety as a selling point
When Anthropic was founded in 2021, its emphasis on AI safety struck many observers as a liability. Who would pay a premium for "the safety company" when competitors were racing to ship features? It turned out that enterprises would. In a world where AI deployments touch customer data, regulated industries, and high-stakes decisions, safety isn't a tax on performance. It's a purchasing requirement. Anthropic's safety positioning, which skeptics dismissed as idealistic, became its most effective enterprise sales pitch. The company's Constitutional AI approach, its investments in interpretability research, and its public commitment to responsible scaling gave CIOs and CTOs the cover they needed to approve large deployments. When you're signing a seven-figure annual contract to embed AI into your engineering workflows, you want to work with a vendor that takes risk seriously.
Claude Code changed the game
If there's a single product that tipped the balance, it's Claude Code. Launched as an agentic coding tool that reads codebases, makes changes across files, runs tests, and commits working code, Claude Code captured developer loyalty in a way that ChatGPT's coding features never quite managed.

The key difference is architectural. Claude Code isn't autocomplete. It's an agent that operates on your actual codebase, understands context across files, and executes multi-step workflows. Developers who adopted it reported that it fundamentally changed their relationship with coding, not just speeding up individual tasks but restructuring how they think about software development.

Anthropic's partnership with NEC, announced in April 2026, illustrates the scale of enterprise adoption. NEC deployed Claude Code across 30,000 employees, established a Center of Excellence, and built sector-specific packaging around Anthropic's tools. This isn't experimentation. It's institutional commitment.
The Musk trial overhang
While Anthropic was posting record revenue numbers, OpenAI was fighting a $134 billion lawsuit from its co-founder Elon Musk. The trial, which began in late April 2026, centers on Musk's claim that OpenAI "betrayed the mission" by converting from a nonprofit to a for-profit structure.

The legal merits are debatable. Musk's argument that OpenAI was contractually bound to remain a nonprofit is legally shaky, and his own competing AI company, xAI, undermines his claims of purely altruistic motivation.

But the trial's impact on OpenAI goes beyond the courtroom. The discovery process has surfaced embarrassing internal communications. Questions about whether Altman and president Greg Brockman were planning the for-profit conversion long before it was publicly announced have created narrative headwinds at the worst possible time, right as OpenAI is preparing for an IPO.

Some OpenAI investors are already getting cold feet. TechCrunch reported that OpenAI's $852 billion valuation is facing skepticism from its own backers, with some noting that Anthropic's $380 billion valuation looks like a relative bargain given its superior growth trajectory.
Does this mean Anthropic wins?
Not necessarily. Markets at this scale are rarely winner-take-all, and OpenAI retains significant advantages.

The Microsoft partnership gives OpenAI distribution that Anthropic can't match. Azure's enterprise customer base, Copilot's integration into Office 365, and Microsoft's sales force provide channels that would take years to replicate. When a Fortune 500 company already has a Microsoft enterprise agreement, adding OpenAI capabilities is frictionless in a way that switching to Anthropic is not.

OpenAI also has brand recognition that money can't buy. ChatGPT is synonymous with AI for hundreds of millions of people. That consumer mindshare has value, even if it hasn't fully translated into enterprise revenue yet.

And OpenAI's revenue projections, if even partially realized, would dwarf current numbers. The company is forecasting $62 billion in revenue by 2027 and $184 billion by 2029. These are speculative projections, but they reflect the scale of opportunity that OpenAI is positioned to capture.
What actually happened here
The real lesson isn't that Anthropic is smarter or that OpenAI made fatal mistakes. It's that the AI market is maturing, and the skills that win during maturation are different from the skills that win during launch.

OpenAI won the launch phase. It built the product that introduced the world to large language models, captured global attention, and established the category. That achievement is extraordinary and shouldn't be diminished by what came next.

Anthropic is winning the maturation phase. It built the product that enterprises trust, developers rely on, and CFOs can justify. It did this by making less glamorous but more durable choices: focusing on API quality over consumer features, investing in safety as a product differentiator, and building tools that generate measurable ROI for paying customers.

The "inevitability" narrative around OpenAI is dead. Six months ago, it was taken for granted that OpenAI would dominate the AI industry the way Google dominates search. That assumption no longer holds. The market is contested, the competition is real, and the outcome is genuinely uncertain.

For anyone building in this space, investing in it, or just trying to understand where AI is headed, that uncertainty is the most important thing to internalize. The AI industry is not settled. The leaders can change. And the choices that determine who wins are not always the ones that generate the most headlines.