Meta blinked
In March 2026, Meta quietly delayed the release of its next frontier AI model, codenamed Avocado. The model, which was supposed to ship this month, has been pushed to at least May after falling short on internal benchmarks for reasoning, coding, and writing compared with the latest models from Google, OpenAI, and Anthropic. This isn't just a scheduling hiccup. It's a signal that something deeper is going on at one of the most aggressive AI spenders on the planet.
The delay, in context
Avocado outperformed Meta's previous models and beat Google's Gemini 2.5 from earlier this year. But it hasn't matched Gemini 3.0, which shipped in November 2025, or OpenAI's GPT-5.4. For a company that has publicly committed to "pushing the frontier" by mid-2026, that gap is a problem. The New York Times reported that leaders within Meta's AI division discussed temporarily licensing Google's Gemini to power Meta's AI products while Avocado catches up. No decisions were reached, but the fact that this was even on the table tells you how wide the gap has become. When you're considering renting your competitor's engine to keep the lights on, confidence in your own roadmap has taken a hit.
The spending doesn't lie
Meta spent $72 billion on capital expenditures in 2025. For 2026, the company has guided between $115 billion and $135 billion, nearly doubling its spend in a single year. The bulk of that money is going toward data centers, GPUs, and the infrastructure needed to train and serve frontier AI models. To put that in perspective, Meta's projected 2026 capex exceeds the GDP of most countries. And unlike Google, Amazon, or Microsoft, Meta doesn't have a cloud business generating direct revenue from AI infrastructure. Google can justify its massive AI spend because Cloud, Search, and Ads all benefit directly. Amazon has AWS. Microsoft has Azure and its OpenAI partnership. Meta's AI revenue story runs almost entirely through its ad platform and social apps, which means the return on every dollar spent has to flow through a narrower set of products. Barron's recently reported that Meta's free cash flow is expected to fall from $43.8 billion in 2025 to roughly $6.25 billion in 2026. Barclays projects the company could swing to negative free cash flow by 2027 or 2028. These are not the financials of a company with room for extended delays.
The open source question
Meta's AI strategy has been built on a compelling premise: open-source the models, commoditize the intelligence layer, and keep the value in the platform. Llama became the default foundation model for thousands of developers and companies. The ecosystem that grew around it gave Meta influence, talent attraction, and a seat at the frontier table without having to win every benchmark. But that strategy depends on one critical assumption: the models have to be competitive. Developers build on Llama because it's good enough, and free. If Llama falls meaningfully behind the frontier, the ecosystem starts to erode. Developers and companies that need cutting-edge performance will migrate to whatever works best, regardless of licensing. There's an additional wrinkle. CNBC reported in December 2025 that Avocado could be proprietary rather than open source, a significant departure from Meta's established playbook. If true, it suggests Meta may be reconsidering the economics of giving away its most expensive models. Training a frontier model costs hundreds of millions of dollars. Open-sourcing it when it's competitive is a strategic move. Open-sourcing it when it's behind is just charity.
Billions don't buy breakthroughs
Meta has been on an aggressive hiring spree, bringing in top AI researchers and engineers. The company reportedly spent $14.3 billion to recruit the founder of Scale AI and a handful of senior researchers and engineers. The talent pool at Meta's AI division is world-class by any measure. But talent alone doesn't guarantee results. Research direction, data quality, training infrastructure, and organizational coordination all matter as much as, if not more than, the people writing the code. OpenAI has fewer researchers than Meta but has consistently shipped models that set the benchmark. Google DeepMind has decades of research depth and the infrastructure to match. The Avocado delay suggests that Meta's challenge isn't a shortage of smart people or compute. It's something harder to solve: the translation of resources into results at the frontier.
What this reveals about the race
The narrative for most of 2024 and 2025 was that five or six companies were in a tight race for AI supremacy: OpenAI, Google, Anthropic, Meta, xAI, and possibly a few Chinese labs. The Avocado delay quietly narrows that field. Google has Gemini 3.0 in production and is spending $175 to $185 billion on AI infrastructure in 2026, backed by diversified revenue streams across Cloud, Search, and Ads. OpenAI continues to ship aggressively, with GPT-5.4 setting new benchmarks and enterprise adoption growing. Anthropic has carved out a strong position in safety-conscious enterprise deployments. Meta, despite spending more than almost anyone, is falling behind on the metric that matters most: model performance at the frontier. The company still has massive distribution through Facebook, Instagram, WhatsApp, and Messenger, with over 3 billion daily active users across its platforms. That distribution is a genuine moat. But distribution of what? If the underlying AI isn't competitive, the product experiences built on top of it won't be either.
Don't count them out
It would be a mistake to declare Meta dead in AI. The company has the resources, the talent, and the distribution to remain a major player for years. Llama 4, released last year with its Mixture of Experts architecture, showed genuine innovation. Meta AI is embedded in products used by billions of people daily. The ad targeting improvements driven by AI have helped push Meta's annual revenue past $200 billion. But there's a difference between being a major AI player and being at the frontier. Meta's bet was that it could do both simultaneously: spend aggressively, open-source generously, and still keep pace with the labs that do nothing but build frontier models. The Avocado delay is the first clear sign that this bet may not be paying off as planned. When you're spending up to $135 billion a year and still considering licensing your competitor's technology, the gap between ambition and execution is showing. The question isn't whether Meta can build good AI. It clearly can. The question is whether it can build the best AI, fast enough, while spending more than almost anyone else in history. And right now, the answer is less certain than Mark Zuckerberg would like.
References
- Meta Delays Rollout of New A.I. Model After Performance Concerns, The New York Times, March 12, 2026
- Meta pushes AI model 'Avocado' rollout to May or later, Reuters, March 12, 2026
- Meta's New AI Model Is Reportedly Delayed Again. Is 'Avocado' Toast?, CNET, March 13, 2026
- Meta Stock Slips After AI Model Delay, Yahoo Finance, March 13, 2026
- Meta Forecasts Spending of at Least $115 Billion This Year, The New York Times, January 28, 2026
- Here's Why Meta's $135 Billion AI Bet in 2026 Could Backfire on Shareholders, Yahoo Finance (The Motley Fool), March 12, 2026
- Meta Hits Pause on 'Avocado': AI Ambitions Face Seasonal Setback as Execution Hurdles Mount, Chronicle Journal / MarketMinute, March 17, 2026
- From Llamas to Avocados: Meta's shifting AI strategy is causing internal confusion, CNBC, December 9, 2025
- Mark Zuckerberg Won't Be Deterred From AI Spending, Barron's, March 2026
- Meta Platforms: The AI Spending Spree Is Out Of Control, Seeking Alpha, 2026