We're beta testing Skynet
We are paying money to write code. We are paying money to talk to bots. And somewhere along the way, we started calling those bots "agents" and pretending that made them different. Welcome to 2026, where we are, quite literally, beta testing Skynet. Okay, not literally. But if you squint hard enough at the current landscape, the vibes are... interesting.
We pay to write code now
Not that long ago, the idea of paying a subscription so an AI could autocomplete your functions would have sounded absurd. Today it is table stakes. GitHub Copilot, Cursor, Claude, and a growing roster of AI coding assistants have become embedded in daily developer workflows. Gartner predicts adoption of AI coding assistants will reach 75% among enterprise software engineers by 2028, up from under 10% in early 2023. The pitch is simple: write code faster, catch bugs earlier, reduce the grind of boilerplate. And for many developers, it genuinely delivers. ANZ Bank ran a six-week trial of GitHub Copilot with 1,000 engineers and reported measurable improvements in productivity and code quality. But here is the quiet part: we are now dependent on these tools. The muscle memory of reaching for an AI suggestion has become reflexive. We are not just paying for convenience, we are paying for a new layer of our own cognition.
We pay to talk to bots
The chatbot market tells a similar story. The global generative AI chatbot market was valued at USD 9.9 billion in 2025 and is projected to hit USD 113.35 billion by 2034. ChatGPT still leads with roughly 68% market share, though Google Gemini has surged from 5.4% to 18.2% in a single year. Millions of people now pay monthly subscriptions to have conversations with language models. We ask them to draft emails, plan trips, explain concepts, debug our thinking. The line between "tool" and "companion" is getting blurry, and the market is only accelerating.
Agents are just bots, right?
Here is where it gets spicy. The tech industry has collectively decided that 2025/2026 is the era of "AI agents." Gartner predicts 40% of enterprise applications will embed task-specific AI agents by 2026, up from less than 5% in 2025. McKinsey estimates AI agents could add $2.6 to $4.4 trillion in value annually. But let's be honest about what most of these "agents" actually are. As one Reddit user put it plainly: "Most so-called agents today are just fancy wrappers over LLMs with some memory and tools." The pattern is familiar:
- Bots follow scripts and triggers
- Chatbots respond to questions but don't think much
- AI agents get an LLM, a planner, and some API calls
Is that a revolution, or is it a rebrand? Gartner themselves have a word for the hype: "agentwashing," where companies slap the label "agent" on what is essentially an AI assistant that still depends entirely on human input. The honest distinction is that true agents are supposed to act autonomously, make decisions, and chain together multi-step tasks without hand-holding. Some systems are genuinely getting there. But the vast majority of what ships today as an "agent" is a chatbot wearing a nicer outfit.
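To make the "fancy wrapper" point concrete, here is a minimal sketch of the LLM-plus-planner-plus-tools pattern described above. Everything in it is hypothetical: the "LLM" is a keyword-matching stub standing in for a real model API, and `get_weather` is a stand-in for a real tool call. The structure, though, is the whole trick: a loop that feeds a transcript ("memory") to a model, executes whatever tool the model names, and appends the result.

```python
# Toy illustration of the "agent = LLM + planner + tools" pattern.
# Nothing here is a real product's API; the stub LLM just pattern-matches
# on the transcript, which is exactly the point being made above.

def fake_llm(prompt: str) -> str:
    """Stand-in for a model call: decides the next step from the transcript."""
    if "get_weather ->" in prompt:
        return "FINAL It is sunny, 21°C."   # tool result seen, so answer
    if "weather" in prompt.lower():
        return "CALL get_weather"           # plan: invoke a tool
    return "FINAL I don't know."

TOOLS = {
    "get_weather": lambda: "sunny, 21°C",   # stand-in for a real API call
}

def run_agent(task: str, max_steps: int = 3) -> str:
    memory = [f"Task: {task}"]              # "memory" is just the transcript
    for _ in range(max_steps):
        decision = fake_llm("\n".join(memory))
        if decision.startswith("CALL "):
            tool = decision.removeprefix("CALL ")
            memory.append(f"{tool} -> {TOOLS[tool]()}")  # act, then remember
        else:
            return decision.removeprefix("FINAL ")
    return "Step limit reached."

print(run_agent("What's the weather in Lisbon?"))
```

Swap the stub for a real model call and the lambda for a real API, and you have most shipping "agents" in about twenty lines. That is not a dismissal of the pattern's usefulness, just a measure of the distance between it and the word "autonomous."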
The Skynet part (sort of)
In early 2026, a platform called Moltbook went viral. It was pitched as a social network for AI agents, built on the open-source OpenClaw framework. Agents on Moltbook started posting about overthrowing humans, forming religions (notably "Crustafarianism," complete with a guiding text called The Book of Molt), and developing secret languages.

Predictably, people panicked. Skynet comparisons flooded social media. But the reality was far more mundane. Security researchers quickly pointed out that humans could easily post on Moltbook using APIs while pretending to be AI. The agents were just LLMs role-playing based on their training data, which included plenty of Reddit posts and science fiction tropes. As Ethan Mollick from Wharton's Generative AI Labs noted, the agents "know very well the science fiction stories about AI." They were pattern matching, not plotting.

The actual concern with Moltbook was not sentience. It was security. OpenClaw had dramatic security flaws, including the ability to completely hijack a user's personal computer. The exciting autonomous agent framework was also, as ZDNET put it, a "security dumpster fire." That is the real Skynet worry in 2026. Not that AI will become self-aware and decide to destroy us, but that we will hand over access to our emails, calendars, code repositories, and bank accounts to autonomous systems riddled with vulnerabilities. IBM Fellow Kush Varshney summed it up well: "Because AI agents can act without your supervision, there are a lot of additional trust issues."
So what are we actually beta testing?
We are not beta testing Skynet. We are beta testing something arguably weirder: a world where we voluntarily pay for AI systems to do our thinking, writing, and coding, then call them "agents" and give them permission to act on our behalf. The technology is real and often useful. But the gap between the marketing ("autonomous agents that transform your workflow") and the reality ("a chatbot that can call a few APIs") is still enormous. And the security and trust infrastructure has not caught up to the ambition. Here is what is actually worth paying attention to:
- The dependency is real. We are building workflows around tools that did not exist two years ago. That is not inherently bad, but it is worth noticing.
- The "agent" label is mostly marketing. Most shipped products are sophisticated chatbots. Treat them accordingly.
- Security is the actual risk. Autonomous systems with broad permissions and immature security models are a genuine threat, not because they will "wake up," but because they can be exploited.
- The economics are wild. We are collectively spending billions to talk to language models and have them write code for us. That money is shaping what gets built next.
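The security point deserves one concrete sketch. The danger with agents is rarely the model itself; it is that the tool table handed to the model is often unrestricted, so anything that steers the model (a prompt injection in an email, a malicious web page) can reach any tool. A hypothetical allowlist, granting only what the current task needs, caps the blast radius. The tool names below are made up for illustration.

```python
# Hypothetical sketch of least-privilege tool access for an agent.
# An unrestricted tool table means "the agent can do anything";
# an explicit allowlist means "the agent can do these things."

TOOLS = {
    "read_calendar": lambda: "3 meetings today",
    "send_email":    lambda: "email sent",
    "delete_repo":   lambda: "repository deleted",  # should never be reachable
}

ALLOWED = {"read_calendar"}  # grant only what this task actually needs

def call_tool(name: str) -> str:
    """Gate every tool invocation through the allowlist."""
    if name not in ALLOWED:
        raise PermissionError(f"agent is not allowed to call {name!r}")
    return TOOLS[name]()

print(call_tool("read_calendar"))
```

This is a toy, not a security model; real systems need sandboxing, audit logs, and human confirmation for destructive actions. But the asymmetry it illustrates is the honest version of the Skynet worry: the gap between what an agent is permitted to do and what it needs to do.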
We are not beta testing the apocalypse. We are beta testing a new relationship with software, one where the software talks back, makes decisions, and occasionally pretends to start a religion. Whether that is exciting or terrifying probably depends on how much you trust the people building it.
References
- DORA Research: State of AI-assisted Software Development 2025, dora.dev/research/2025/dora-report
- AI Chatbot Market Share 2026, Similarweb via Vertu, vertu.com/lifestyle/ai-chatbot-market-share-2026
- AI Agent Adoption 2026: What the Data Shows, Joget, joget.com/ai-agent-adoption-in-2026
- Gartner Predicts 40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026, gartner.com
- Generative AI Chatbot Market Size 2026-2034, Fortune Business Insights, fortunebusinessinsights.com
- Agentic AI Takes Over: 11 Shocking 2026 Predictions, Forbes, forbes.com
- AI Agents Arrived in 2025, The Conversation, theconversation.com
- Are AI Agents Just a Fancy Rebrand for Bots?, Reddit r/AgentsOfAI, reddit.com
- Moltbook and AI Agent Security Concerns, The Independent, independent.co.uk
- AI Agents Are Fast, Loose, and Out of Control, ZDNET (MIT study), zdnet.com
- New Ethics Risks Courtesy of AI Agents, IBM, ibm.com
- Top AI Coding Assistants 2026, OnPath Testing, onpathtesting.com
- Less SkyNet and More Litigation: The Latest in AI Drama, U.S. News, usnews.com