Anthropic's $30B says nothing
Anthropic's run-rate revenue just passed $30 billion. Up from $9 billion at the end of 2025. The headlines are breathless. The investor decks are glowing. And none of it tells you whether this is a good business. Revenue is the easiest number to celebrate and the least useful number to interrogate. In an industry where every major player is subsidizing adoption, burning cash to acquire users, and racing to outspend competitors on infrastructure, topline growth is table stakes. The real question, the one nobody seems eager to answer, is what happens when the music stops. We've seen this movie before. It was called WeWork.
The number without a denominator
Thirty billion dollars in annualized revenue sounds massive until you place it next to the costs required to generate it. Anthropic reportedly burned through $5.2 billion in cash against roughly $9 billion in annualized revenue at the end of 2025. That's a burn rate that would make a Series A founder wince, except Anthropic is valued at $380 billion. The company has raised $30 billion in its Series G alone, led by GIC and Coatue. That capital isn't sitting in a savings account. It's being poured into compute, into partnerships with Google and Broadcom for "multiple gigawatts" of next-generation infrastructure, into the raw materials required to keep Claude running at scale. Revenue that requires more capital to sustain than it generates isn't revenue in any meaningful economic sense. It's a subsidy dressed up as a business model.
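The gap is easy to make concrete. A back-of-the-envelope sketch using the reported figures above (illustrative arithmetic only, not company disclosures):

```python
# Reported 2025 figures from the article, in billions of dollars.
burn = 5.2       # cash burned
revenue = 9.0    # annualized revenue at the time

# Cash consumed per dollar of revenue generated.
burn_per_dollar = burn / revenue
print(f"Cash burned per revenue dollar: ${burn_per_dollar:.2f}")
# Roughly 58 cents of cash out the door for every dollar of revenue in.
```

On that ratio, the business spends more than half of every revenue dollar in cash just to keep the machine running, before any question of profit.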
The infrastructure tax nobody talks about
To understand why AI revenue is structurally different from software revenue, you need to look at what's happening upstream. Alphabet has guided $175 to $185 billion in capital expenditure for 2026, roughly double what it spent in 2025. Microsoft, Amazon, and Meta are on similar trajectories. Combined, the four major hyperscalers are on track to spend somewhere between $650 billion and $700 billion this year on AI infrastructure. That's chips, data centers, networking, cooling, and power.

These aren't speculative bets. They're the direct cost of serving the demand that companies like Anthropic are generating. Every API call, every Claude Code session, every enterprise deployment runs on infrastructure that someone has to pay for. And right now, the economics suggest that customers aren't paying enough. Goldman Sachs has noted that AI services gross margins sit around 50 to 60 percent, compared to 77 percent or higher for traditional cloud computing. That gap matters enormously at scale. It means every dollar of AI revenue consumes roughly twice the infrastructure cost of a dollar of cloud revenue.
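That "roughly twice" follows directly from the margins Goldman cites. A quick sketch, using the midpoint of the reported AI range (the midpoint choice is mine):

```python
# Gross margins per the Goldman Sachs figures cited above.
ai_margin = 0.55      # midpoint of the 50-60% range for AI services
cloud_margin = 0.77   # traditional cloud computing

# Infrastructure cost consumed per dollar of revenue is 1 minus margin.
ai_cost = 1 - ai_margin
cloud_cost = 1 - cloud_margin

print(f"AI services: ${ai_cost:.2f} of cost per revenue dollar")
print(f"Cloud:       ${cloud_cost:.2f} of cost per revenue dollar")
print(f"Ratio: {ai_cost / cloud_cost:.1f}x")  # about 2x
```

Forty-five cents of cost per AI dollar against twenty-three cents per cloud dollar: at hundreds of billions of dollars of spend, that ratio is the whole story.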
The WeWork parallel
WeWork's story is instructive not because the companies are identical, but because the financial pattern rhymes. WeWork grew revenue from $886 million in 2017 to $1.8 billion in 2018 to $3.5 billion in 2019. The growth was real. The demand was real. The customers were real. What wasn't real was the business model. WeWork was signing long-term leases and offering short-term memberships, subsidizing the difference with investor capital. Revenue growth was value-destructive because margins were negative.

Anthropic's version of this dynamic looks different on the surface but follows the same logic. The company is purchasing massive amounts of compute (the equivalent of signing long-term leases) and selling AI services at prices that don't yet cover the full cost of delivery (the equivalent of subsidized memberships). More than 1,000 business customers now spend over $1 million annually on Claude, a count that has doubled since February. That's impressive adoption. It says nothing about whether those contracts are profitable.

The critical lesson from WeWork wasn't that flexible office space was a bad idea. Regus had been running that business profitably for years. The lesson was that revenue growth funded by negative margins is a treadmill, not a trajectory.
The Jevons paradox problem
Here's where optimists typically object: costs will come down. Inference is getting cheaper. Models are getting more efficient. Just wait. They're not wrong about the trend. Inference costs for GPT-4-class performance have dropped roughly 50x in three years, according to Epoch AI. Gartner predicts another 90 percent reduction by 2030. Custom silicon from Google's TPU program offers a 40 percent cost advantage over Nvidia GPUs. The efficiency curve is real.

But efficiency gains in computing have a well-documented tendency to increase total spending rather than decrease it. This is the Jevons paradox, named after the economist who observed in 1865 that more efficient coal engines led to more coal consumption, not less. The AI version plays out like this: cheaper inference makes new use cases viable. New use cases drive more demand. More demand requires more compute. More compute requires more infrastructure spending. The unit cost goes down, but the total bill goes up.

Anthropic's own trajectory illustrates this perfectly. Revenue went from $14 billion in February to $30 billion in April, roughly doubling in eight weeks. That kind of demand acceleration doesn't happen without a corresponding acceleration in compute consumption. The company's partnership with Google and Broadcom for gigawatt-scale infrastructure isn't a luxury. It's a necessity to keep serving the growth that the market is celebrating. Cheaper tokens don't solve the margin problem if usage scales faster than costs decline.
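A toy model makes the paradox concrete. The 90 percent unit-cost reduction is Gartner's projection cited above; the 20x usage multiple is a purely hypothetical assumption for illustration, not a forecast:

```python
# Toy Jevons-paradox model with assumed numbers.
unit_cost_now = 1.0                          # baseline cost per unit of inference
unit_cost_2030 = unit_cost_now * (1 - 0.90)  # Gartner: 90% cheaper by 2030
usage_growth = 20                            # hypothetical demand multiple

total_now = unit_cost_now * 1
total_2030 = unit_cost_2030 * usage_growth

print(f"Unit cost: down {1 - unit_cost_2030 / unit_cost_now:.0%}")
print(f"Total spend: {total_2030 / total_now:.0f}x the baseline")
```

If demand grows 20x while unit costs fall 90 percent, the total bill still doubles. Efficiency alone doesn't shrink the check; it only changes what the check buys.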
What a sustainable AI business would actually look like
None of this means Anthropic is doomed. It means the current narrative is incomplete. A sustainable AI business probably looks quite different from what the industry is building today. It might involve smaller, more efficient models tuned for specific verticals rather than massive general-purpose systems. It might prioritize owned distribution, where the AI company controls the customer relationship and the margin stack, rather than competing on raw API pricing. It might focus on workflow integration, where AI is embedded deeply enough into business processes that switching costs create durable pricing power.

The cloud computing industry went through a similar reckoning. AWS ran for years before becoming consistently profitable. The early cloud companies burned through billions before finding sustainable unit economics. The difference is that cloud infrastructure costs were declining on a predictable curve driven by Moore's Law, and the workloads being served (storage, compute, networking) had well-understood cost structures. AI inference doesn't have that luxury yet. The workloads are novel, the demand curves are unpredictable, and the models themselves keep getting larger and more expensive to run even as per-token costs fall. Training costs alone are staggering. GPT-4 reportedly cost over $100 million to train. The next generation of models will cost multiples of that.
Celebrate the research, question the business model
Let me be clear about something: Anthropic's research is genuinely excellent. Claude is a remarkable product. The company's work on AI safety, on constitutional AI, on understanding the economic impacts of its technology, is among the most thoughtful in the industry. Its Economic Index reports are more honest about AI's real-world impact than most of what comes out of its competitors.

But research excellence and business viability are different things. Bell Labs produced some of the most important inventions of the 20th century and still needed AT&T's monopoly profits to fund it. DeepMind has advanced the frontier of AI research more than almost any other organization and has never turned a profit as a standalone entity.

The $30 billion revenue number tells you that demand for AI is enormous. It tells you that Claude has found product-market fit across consumer, enterprise, and developer segments. It tells you that the AI industry is growing faster than any technology sector in history. What it doesn't tell you is whether Anthropic can turn that demand into a self-sustaining business before the capital runs out or the market loses patience. And right now, that's the only question that matters.

The current trajectory requires one of two things: either a breakthrough in inference economics that fundamentally changes the cost structure, or a reckoning with the gap between what customers are willing to pay and what it actually costs to serve them. The first is possible. The second is inevitable if the first doesn't arrive in time. Thirty billion dollars in revenue is a big number. But in an industry spending $700 billion a year on infrastructure, it's a rounding error with good PR.
References
- Anthropic raises $30 billion in Series G funding at $380 billion post-money valuation, Anthropic, February 2026
- Anthropic expands partnership with Google and Broadcom for multiple gigawatts of next-generation infrastructure, Anthropic, April 2026
- Anthropic Tops $30 Billion Run Rate, Seals Broadcom Deal, Bloomberg via Yahoo Finance, April 2026
- Alphabet says capital spending in 2026 could double, cloud business booms, Reuters, February 2026
- Big Tech set to spend $650 billion in 2026 as AI investments soar, Yahoo Finance, February 2026
- Tech AI spending may approach $700 billion this year, but the blow to cash raises red flags, CNBC, February 2026
- The AI Profit Map: Why the Biggest Winners Aren't Who You Think, Long Yield, 2026
- Gartner Predicts That by 2030, Performing Inference on an LLM With 1 Trillion Parameters Will Cost GenAI Providers Over 90% Less Than in 2025, Gartner, March 2026
- Anthropic could surpass OpenAI in annualized revenue by mid-2026, Epoch AI, February 2026
- Why the AI world is suddenly obsessed with a 160-year-old economics paradox, NPR Planet Money, February 2025
- 5 Lessons From WeWork's $40 Billion Meltdown, Investopedia
- Anthropic: The $380 Billion Powerhouse Hiding In Plain Sight, Forbes, February 2026