Anthropic keeps breaking things
Anthropic had one of the most turbulent months in AI history. In March 2026, the company racked up 14+ product launches, suffered at least five significant outages, accidentally leaked the existence of its next-generation model through an unsecured data store, and then, on the last day of the month, published the entire source code of Claude Code to the public npm registry. All while positioning itself as the company building the safest AI in the world. The contrast is the story: not whether Anthropic makes good models (it clearly does), but whether the organization behind those models can keep up with its own pace.
The outages
On March 2, thousands of users reported they couldn't access Claude.ai or Claude Code. The outage hit just days after Anthropic's app climbed to the top of the App Store charts, driven by a surge of attention following the company's dispute with the Pentagon. The company's status page confirmed "elevated errors" across consumer-facing surfaces, while the API remained functional. Anthropic attributed the disruption to unprecedented demand, but the root cause appeared to be basic infrastructure scaling, specifically authentication paths that buckled under load. That outage lasted over two hours. It was acknowledged, resolved, and mostly forgotten.

Then March 23 happened. Users on Max plans, the $100 to $200 per month tiers, reported hitting session limits within 10 to 15 minutes. Usage meters advanced even when users had stopped all active work; one user documented his usage indicator jumping from baseline to 91% in three minutes while running zero prompts. Over 2,140 unique reports hit Downdetector by midday. Anthropic's status page? "All Systems Operational." The community reaction was predictable and justified: Reddit threads filled with reports of full weekly limits exhausted in a single afternoon, and paying customers were locked out for hours with no information about when limits would reset. For days, Anthropic said nothing.

The outages continued through the final week of March. On March 25, elevated connection reset errors hit Claude Cowork sessions. On March 27, both Sonnet 4.6 and Opus 4.6 showed elevated errors. The status page logged incident after incident, a pattern The New Stack would later summarize as "14+ launches, 5 outages" in a single month.
The silent rate limit cut
The March 23 usage crisis wasn't just an outage. It was the visible edge of a deliberate policy change that Anthropic implemented without telling anyone. Earlier in March, the company had rolled out a "usage promotion" that doubled limits during off-peak hours for Free, Pro, Max, and Team plans. The promotion ran from March 13 through March 28. It sounded generous. But when the promotion expired, users noticed their baseline limits felt dramatically worse, not back to normal, but reduced to what felt like a fraction of what they'd been paying for.

On March 26, after Forbes, MacRumors, and TechRadar had all picked up the story, Anthropic's Thariq Shihipar finally posted a statement on X: "To manage growing demand for Claude we're adjusting our 5 hour session limits for free/Pro/Max subs during peak hours." The statement's key phrase was the warning that users would "move through your 5-hour session limits faster than before," which is a polished way of saying "we're giving you less." During peak hours, session limits could drain in under five minutes of actual usage. The timing looked calculated: if you're about to cut baseline limits during peak hours, offering a temporary off-peak boost is a convenient way to soften the blow, or at least delay the outrage until the promotion ends.
The Mythos leak
While the infrastructure was buckling under existing demand, Anthropic was simultaneously preparing to announce its most ambitious model yet. It didn't get to make that announcement on its own terms. On March 26, Fortune reported that nearly 3,000 unpublished assets had been discovered in a publicly accessible, unsecured data store. Among them: details about Claude Mythos, a model positioned in a new tier called "Capybara," above the existing Opus tier. Leaked draft blog posts described it as having approximately 10 trillion parameters, achieving "dramatically higher scores" than Claude Opus 4.6 on coding, academic reasoning, and cybersecurity benchmarks, and being "by far the most powerful AI model we have ever developed."

The leak wasn't a sophisticated hack. It was a CMS misconfiguration: Anthropic uploaded the assets but failed to mark them as private, leaving them in a publicly searchable data lake. Two cybersecurity researchers, Roy Paz of LayerX Security and Alexandre Pauwels of the University of Cambridge, independently discovered the cache before Fortune's inquiry prompted Anthropic to lock it down. Anthropic confirmed the model's existence, calling it "a step change" in AI performance and "the most capable we've built to date," and said it was being trialed by early-access customers. The leaked documents also revealed plans for an invite-only CEO summit in Europe, part of Anthropic's push to sell AI models to large corporate customers.

The irony was hard to miss. The leaked draft blog post described Claude Mythos as posing "unprecedented cybersecurity risks" and being "currently far ahead of any other AI model in cyber capabilities." A company warning the world about cybersecurity threats couldn't secure its own content management system.
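It's worth making "publicly searchable" concrete. The reporting doesn't identify Anthropic's storage backend, so the sketch below is an assumption-laden illustration: it supposes an S3-compatible object store and uses a hypothetical bucket name. On S3, a bucket whose policy permits anonymous listing answers a plain unauthenticated GET with a ListBucketResult XML document, which is exactly the kind of check a security researcher, or an automated crawler, can run at scale.

```typescript
// Probe for the misconfiguration class described above: does an object-store
// bucket answer an anonymous listing request? The bucket name is hypothetical
// and an S3-compatible endpoint is assumed. Runs on Node 18+ as an ES module.
const bucketUrl = "https://example-press-assets.s3.amazonaws.com/";

const res = await fetch(bucketUrl); // no credentials attached
const body = await res.text();

if (res.ok && body.includes("<ListBucketResult")) {
  // Public listing enabled: every object key in the bucket is enumerable.
  console.warn(`PUBLIC: ${bucketUrl} lists its contents to anonymous callers.`);
} else {
  // A 403 AccessDenied (or similar) is what a correctly configured bucket returns.
  console.log(`OK: anonymous listing denied (HTTP ${res.status}).`);
}
```

The point of the sketch is how little it takes: a single unauthenticated request distinguishes a private store from a public one, which is why an unchecked "private" flag gets found by strangers before it gets found by the owner.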
The source code spill
Then, on March 31, someone at Anthropic shipped the entire source code of Claude Code to the public npm registry. All of it. 512,000 lines of code across 1,906 TypeScript files, including 44 hidden feature flags and various internal assets that were never meant to see the light of day. The cause was reportedly a misconfigured debug file. One configuration error, and the complete internals of one of the most widely used AI coding tools were public knowledge. Cybernews and multiple outlets covered the incident, and cached copies circulated before Anthropic could pull the package. This wasn't a model safety failure. It wasn't an alignment problem. It was someone not double-checking a publish configuration. The kind of operational error that any growing software company might make, except most growing software companies aren't simultaneously telling the world they should be trusted with the most powerful AI systems ever built.
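Anthropic hasn't published a post-mortem, so the specifics of the misconfiguration aren't public. But the general class of mistake, publishing more than you meant to, is exactly what npm's own tooling can catch: `npm pack --dry-run --json` reports the precise file list `npm publish` would ship. The sketch below (the allowlist and file names are illustrative, not Anthropic's) fails a build whenever that list contains anything unexpected.

```typescript
// Hypothetical pre-publish guard: abort if `npm pack` would include files
// outside an expected allowlist. Patterns here are illustrative only.
import { execSync } from "node:child_process";

const ALLOWED = [/^dist\//, /^package\.json$/, /^README\.md$/, /^LICENSE$/];

// `npm pack --dry-run --json` reports exactly what `npm publish` would ship,
// without creating a tarball. The JSON output is an array with one entry.
const [report] = JSON.parse(
  execSync("npm pack --dry-run --json", { encoding: "utf8" })
);

const unexpected = report.files
  .map((f: { path: string }) => f.path)
  .filter((p: string) => !ALLOWED.some((re) => re.test(p)));

if (unexpected.length > 0) {
  console.error("Refusing to publish; unexpected files in package:");
  for (const p of unexpected) console.error(`  ${p}`);
  process.exit(1);
}
console.log(`OK: ${report.files.length} files, all within the allowlist.`);
```

Wired into a `prepublishOnly` script, the same check runs before every publish, local or CI, which is cheap insurance against a debug configuration quietly widening what a package ships.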
The pattern
Each of these incidents, taken individually, is understandable. Outages happen when demand surges. Rate limits get adjusted as infrastructure strains. CMS misconfigurations happen. npm publishes go wrong. Every fast-growing technology company has experienced some version of these problems. AWS had similar growing pains in its early years. So did every major cloud provider. But the pattern matters. Five significant outages in one month. A silent rate limit reduction that was only acknowledged after media pressure. A data leak caused by failing to click a "private" checkbox. A source code spill caused by a debug configuration. These aren't exotic failure modes. They're the mundane kind, the ones that good operational practices are supposed to catch before they reach users. Anthropic shipped more product in February and March 2026 than most AI companies ship in a year. Extended thinking, voice mode, computer use, Claude Code improvements, Opus 4.6, and more. The pace was genuinely impressive. But pace without operational discipline is just velocity toward the next incident.
The trust problem
Here's where it gets uncomfortable. Anthropic's entire pitch is built on trust. The company's founding story centers on AI safety. Its Responsible Scaling Policy is the most detailed framework any major AI lab has published for self-governance. Dario Amodei has repeatedly made the case that Anthropic is the company that takes the risks seriously. None of that is necessarily wrong. Anthropic's safety research is substantive, and the Responsible Scaling Policy is a genuine contribution to the field. But safety isn't just about model alignment. It's also about whether the organization can execute reliably on the basics. Every outage erodes the case for AI agents handling critical workflows. Every silent policy change undermines the claim of transparency. Every operational leak, whether it's a CMS misconfiguration or an npm publish error, raises the question: if they can't secure a data store, how do we trust them with models they describe as potentially dangerous? This isn't about piling on. It's about the gap between aspiration and execution. Anthropic aspires to build AI that's safe enough to trust with high-stakes decisions. But trust is built in the boring parts: uptime, clear communication, operational rigor, the stuff that doesn't make for exciting product announcements but determines whether anyone should actually depend on your infrastructure.
The industry mirror
Anthropic's March isn't unique to Anthropic. It's a compressed version of what the entire AI industry is experiencing. Every major lab is racing to ship bigger models, more features, and more ambitious capabilities while infrastructure, operations, and reliability lag behind. The dynamic is familiar from previous technology waves: early cloud computing was plagued by outages that would be unacceptable today, and mobile platforms launched with security holes that took years to close. The pattern is always the same: capability runs ahead, reliability catches up later, and the companies that survive are the ones that invest in the boring stuff before the market forces them to. Anthropic is readying a model that reportedly weighs in at roughly 10 trillion parameters. That's an extraordinary technical achievement. But parameters without uptime are a demo, not a product. And the customers who are supposed to trust Claude with their critical workflows need a product, not a demo.
What needs to happen
The fix isn't complicated; it's just unglamorous.

Anthropic needs to communicate changes before they happen, not after Reddit figures them out. If rate limits are being adjusted, tell users first, in plain language, with actual numbers. The phrase "move through your limits faster" should never appear in an official communication again.

The company needs operational investment that matches its product ambition. If you're going to ship features that consume more tokens, like extended thinking, voice mode, and expanded context windows, you need the infrastructure to support them at the service levels you're advertising. Selling $200 per month plans that drain in five minutes during peak hours is not a viable long-term strategy.

And the data hygiene issues need to stop. A company that describes its own models as posing "unprecedented cybersecurity risks" cannot afford CMS misconfigurations and accidental npm publishes. These aren't edge cases. They're the fundamentals.

The AI race is accelerating. OpenAI, Google, and a growing number of well-funded competitors are all fighting for the same developer and enterprise mindshare. Anthropic's edge has always been the perception that it's the thoughtful option, the lab that cares about getting things right. March 2026 tested that perception harder than any benchmark ever could. The models are excellent. The ambition is real. But ambition without operational discipline is just a more impressive way to break things.
References
- "Anthropic's Claude reports widespread outage," TechCrunch, March 2, 2026. Link
- "Anthropic's Claude hit by widespread service outage," Help Net Security, March 2, 2026. Link
- "Is Claude Down? Anthropic Says It's Resolved the AI Tool's Outage," CNET, March 2, 2026. Link
- "A compiled timeline and detailed reporting of the March 23 usage limit crisis," r/ClaudeCode, March 2026. Link
- "Claude outages lay bare software developers' growing reliance on AI," Business Insider, March 2026. Link
- "Anthropic's madcap March: 14+ launches, 5 outages, and an accidental Claude Mythos leak," The New Stack, March 28, 2026. Link
- "Exclusive: Anthropic 'Mythos' AI model representing 'step change' in power revealed in data leak," Fortune, March 26, 2026. Link
- "Anthropic accidentally leaked details of a new AI model that poses unprecedented cybersecurity risks," Fortune, March 27, 2026. Link
- "The Most Powerful AI Ever Built, Claude Mythos (Leaked)," Reliable Data Engineering, March 2026. Link
- "The Great Claude Code Leak of 2026," DEV Community, April 1, 2026. Link
- "Spud and Mythos: New Models Break on the Shore of 2026," Forbes, April 1, 2026. Link
- "Claude Mythos (Opus 5) Leaked: What We Know So Far," WaveSpeedAI Blog, 2026. Link
- "Is Claude Down? March 2026 Anthropic Outage and Failover Tips," Deployflow, March 2026. Link
- "Claude Outages Surge as Anthropic Chases 2026 Revenue Lead Over OpenAI," Trending Topics, March 2026. Link
- Anthropic Claude status page, March 2026 incidents. Link