Everyone is shipping and breaking
Something strange is happening in tech right now. Every tool I use, every platform I depend on, ships updates at a pace that would have been unthinkable a few years ago. And almost all of them are breaking things in the process. I saw a chart floating around on X recently showing how many releases various AI companies had pushed in just a two-week window. Anthropic was at the top. And that tracks, because they shipped 74 product releases in 52 days earlier this year. Fourteen of those came in March alone, alongside five outages. They were, quite literally, releasing faster than they could stabilize. This isn't just an Anthropic thing. It's everywhere.
The new normal is daily releases
AI has fundamentally changed the speed at which software gets built. Ninety-two percent of US developers now use AI coding tools daily. Forty-one percent of all code being written globally is AI-generated. The vibe coding market, which barely existed in 2023, is estimated at $4.7 billion in 2026. Ramp shipped 270 features in the first half of 2025, more than it shipped in all of 2024. They moved roughly three times faster across the organization after getting 99.5% of employees onto regular AI use. Their internal coding agent, Inspect, now writes over half of all merged pull requests. Anthropic's own engineers reportedly use Claude for about 60% of their work, which means every release makes the next one faster to build. It's a compounding loop. Ship, learn, ship again. The gap between companies that embrace this and those that don't grows every single day. OpenAI pushes ChatGPT updates constantly. Notion ships feature after feature. Claude Code has had over 170 updates since launch. The changelogs are so long that by the time you finish reading one, there's already a new version.
Faster at creating problems
Here's the thing nobody in the hype cycle wants to talk about: we're not just faster at building. We're faster at breaking. A study by Uplevel tracked around 800 developers after Copilot adoption. The result? No significant improvement in pull request cycle time. But a 41% increase in bugs. Faros AI saw something similar across 10,000 developers: individual output went up 21% and PR volume increased 98%, but review times ballooned 91% and PR sizes inflated 154%. At the company level, any correlation between AI adoption and actual performance metrics vanished. DX's research across 43,000 engineers at roughly 100 companies paints a volatile picture. Quality outcomes range from big gains to serious declines. The common thread? AI accelerates whatever engineering culture you already have. If your practices are solid, AI makes them better. If they're shaky, AI makes them worse, faster. This matches what I see as a user. Something that works today might genuinely not work tomorrow, because there's a new update shipping before the last one was properly tested. Notion Calendar gets an update practically every day. Claude has outages that coincide with massive feature drops. Even OpenAI has had to pause features and focus on backend stability.
Vibe coding and the testing gap
The term "vibe coding" has gone mainstream. It describes the practice of describing what you want in plain language and letting AI generate the code. It's powerful for prototyping. It's transformative for non-technical builders. And it's producing a mountain of code that nobody fully understands. A controlled study of 16 experienced open-source developers found that experienced developers were actually 19% slower when using AI tools on mature codebases. The kicker? The developers themselves predicted they'd be 20% faster, and still believed they had been faster even after the study ended. There's a perception gap between how productive AI makes us feel and how productive it actually makes us. When companies are vibe coding their way to production, the testing gap becomes enormous. You can generate code in minutes that would have taken days. But the review, testing, and stabilization process hasn't sped up at the same rate. The bottleneck has shifted from writing code to understanding code, and that's a much harder problem to solve with AI.
The trust cost of moving fast
Facebook's old motto was "move fast and break things." In 2026, Forbes published a piece arguing it's time to retire that philosophy entirely. Their point was sharp: what we break isn't just code or processes. We break trust. We break reliability. We break the confidence users have that the tool they depend on will actually work when they need it. There's a Reddit thread titled "Notion's constant updates are so annoying" that captures the user side of this perfectly. Another one says "Anthropic: Stop shipping. Seriously." Users are paying hundreds of dollars for tools that feel less reliable with each update. Anthropic's status page showed 98.73% uptime, not even two nines, which is genuinely unacceptable for production tooling. The irony is that the companies shipping fastest are the ones building the tools that enable everyone else to ship fast. It's turtles all the way down. Anthropic ships Claude updates that break Claude Code, which developers use to ship their own updates that break their own products.
What actually needs to change
I don't think the answer is to slow down. The companies that pause to perfect everything are getting outcompeted by the ones that iterate in public. That's the reality of the market right now. But there's a meaningful difference between shipping fast with guardrails and shipping fast without looking. The companies getting the best results from AI aren't just adopting tools. They're investing in engineering culture, code review processes, automated testing, and the kind of infrastructure that catches problems before users do. Notion itself published a detailed post about how they built CI guardrails to catch breaking schema changes automatically (there's a sketch of that pattern at the end of this piece). That's the right instinct. You don't fight speed with slowness. You fight speed with better systems. The pattern I keep seeing is this: the first wave of AI adoption makes everything faster. The second wave, which we're entering now, has to make everything faster and more reliable. The companies that figure out that second part will win. The ones that just keep shipping and breaking will eventually lose the trust that made their products valuable in the first place. We're in an awkward middle period where the tools to build have outpaced the tools to verify. AI can write code in seconds, but it can't yet reliably tell you whether that code will break something three services away. Until that gap closes, every update is a gamble, and users are the ones placing the bets.
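To make the guardrail idea concrete, here's a minimal sketch of what a schema-compatibility check in CI might look like. To be clear, this is not Notion's implementation; the file names, the JSON-schema shape, and the breaking-change rules are all assumptions for illustration. The check compares the schema on the main branch against the proposed one and fails the build if a change would break existing readers or writers.

```python
# schema_guardrail.py - minimal sketch of a CI check for breaking schema
# changes. Illustrative only, not Notion's actual implementation: it
# compares two JSON schema snapshots and fails the build if a change
# would break existing consumers.
import json
import sys


def breaking_changes(old: dict, new: dict) -> list[str]:
    """Return human-readable descriptions of backward-incompatible changes."""
    problems = []
    old_fields = old.get("properties", {})
    new_fields = new.get("properties", {})

    # Removing a field, or changing its type, breaks any consumer
    # that still reads the old shape.
    for name in old_fields:
        if name not in new_fields:
            problems.append(f"field removed: {name}")
        elif old_fields[name].get("type") != new_fields[name].get("type"):
            problems.append(
                f"type changed: {name} "
                f"({old_fields[name].get('type')} -> {new_fields[name].get('type')})"
            )

    # Making a previously optional field required breaks old writers.
    newly_required = set(new.get("required", [])) - set(old.get("required", []))
    for name in sorted(newly_required):
        problems.append(f"field newly required: {name}")

    return problems


if __name__ == "__main__":
    # Usage in CI: python schema_guardrail.py old_schema.json new_schema.json
    with open(sys.argv[1]) as f:
        old_schema = json.load(f)
    with open(sys.argv[2]) as f:
        new_schema = json.load(f)

    found = breaking_changes(old_schema, new_schema)
    for p in found:
        print(f"BREAKING: {p}")
    sys.exit(1 if found else 0)  # nonzero exit fails the CI job
```

Wired into a pipeline that runs it on every pull request, a check like this turns the build red before a breaking change ships instead of after users find it. That's what fighting speed with better systems looks like: the check costs milliseconds and never gets tired.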