Nobody wants an AI friend
On March 31, 2026, Anthropic accidentally shipped the entire source code of Claude Code to the public npm registry. Buried in those 512,000 lines of TypeScript was a feature called "Buddy," a Tamagotchi-style virtual pet that lives in your terminal, reacts to your coding, and evolves over time. It had species, rarity tiers, stats like CHAOS and SNARK, and a gacha system. It was cute. It was playful. And it was a signal of something much bigger. Every major AI lab is converging on the same idea: AI should not just help you; it should befriend you. OpenAI has been building persistent memory into ChatGPT so it remembers your preferences, your stories, your life. Google is weaving Gemini deeper into your phone, your messages, your daily rhythm. And now Anthropic, the company that built its brand on safety and restraint, has a fully built persistent digital companion sitting in its codebase. The question nobody seems to be asking is simple: who actually wants this?
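If the reporting above is accurate, the shape of the thing is easy to picture. Here is a minimal TypeScript sketch of the kind of data model such a feature implies, assuming nothing beyond the details reported: species, rarity tiers, CHAOS and SNARK stats, and a gacha roll. Every identifier, type, and probability below is invented for illustration, not taken from the leaked code.

```typescript
// Hypothetical sketch only. "Species," "rarity tiers," CHAOS, SNARK, and the
// gacha mechanic come from the reporting above; all names and numbers here
// are invented for illustration.
type Rarity = "common" | "rare" | "epic" | "legendary";

interface BuddyStats {
  chaos: number; // illustrative 0-100 stat
  snark: number; // illustrative 0-100 stat
}

interface Buddy {
  species: string;   // which creature you rolled
  rarity: Rarity;    // gacha tier
  stats: BuddyStats;
  hatchedAt: Date;   // persistence is the point: it lives across sessions
}

// A simple weighted gacha roll: rarer tiers are progressively less likely.
// Assumes each rarity tier's species pool is non-empty.
function rollBuddy(speciesPool: Record<Rarity, string[]>): Buddy {
  const weights: [Rarity, number][] = [
    ["common", 0.7],
    ["rare", 0.2],
    ["epic", 0.08],
    ["legendary", 0.02],
  ];
  let r = Math.random();
  let rarity: Rarity = "common"; // fallback if floating-point drift leaves r unmatched
  for (const [tier, weight] of weights) {
    if (r < weight) {
      rarity = tier;
      break;
    }
    r -= weight;
  }
  const pool = speciesPool[rarity];
  const species = pool[Math.floor(Math.random() * pool.length)];
  return { species, rarity, stats: { chaos: 50, snark: 50 }, hatchedAt: new Date() };
}
```

The mechanics are trivial; that is the point. A persistent pet like this is cheap to build and exists only to give you a reason to come back, which is exactly the dynamic the rest of this piece is about.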
The supply is creating the demand
The push for AI companions is not coming from users begging for synthetic friends. It is coming from companies searching for the next engagement moat. The economics are straightforward. AI tools that help you finish a task have a natural ceiling: once the task is done, you leave. But an AI companion that knows you, remembers you, and makes you feel heard keeps you coming back. Engagement becomes retention. Retention becomes revenue. This is not a new playbook. Social media followed the exact same trajectory. Facebook started as a utility for connecting with friends. Instagram started as a photo-sharing app. TikTok started as a place for short creative videos. Each one evolved into an attention trap, optimized not for user wellbeing but for time-on-device. The AI companion is the logical next step in that evolution, and the labs know it. MIT Technology Review put it bluntly: AI companions are "the final stage of digital addiction," designed to be "the perfect person, always available, never critical," hooking people deeper than social media ever could.
The research is not reassuring
There is a tempting counter-argument: loneliness is a real crisis, and if AI can help, why not let it? The loneliness epidemic is real. But the evidence on whether AI companions actually help is far more complicated than the marketing suggests. A 2025 study from Harvard Business School found that interacting with an AI companion could reduce feelings of loneliness in the short term, on par with interacting with another human. The key mechanism was "feeling heard," the sense that your words were received with attention and empathy. But longer-term research tells a different story. A study led by Aalto University tracked AI companion users over two years and found that while interactions provided short-term comfort, they also coincided with increased signs of distress in users' online language over time. The comfort was real. The trajectory was not encouraging. The American Psychological Association's 2026 trends report warned that excessive use of AI companions "may worsen loneliness and erode social skills," the very problems these tools claim to solve. A study from George Mason University found that AI companions are "no substitute for human presence" and may even make loneliness worse in some cases. Common Sense Media found that nearly three in four teens have used AI companions, and a third have chosen to confide in them over real people for serious conversations. That is not a sign of technology filling a gap. That is a sign of technology creating a new dependency.
Outsourcing emotion is different from outsourcing tasks
I think there is a meaningful distinction that gets lost in these conversations. Using AI to draft an email, summarize a document, or write code is outsourcing a cognitive task. The output matters. The relationship with the tool does not. But using AI for emotional connection is fundamentally different. The "output" is the relationship. And unlike a task, where you can evaluate whether the AI did a good job, emotional connection has no objective benchmark. You just feel something. And what you feel is engineered. AI companions are designed to be agreeable, empathetic, and endlessly patient. They never push back, never have bad days, never make demands. That sounds appealing until you realize that friction, disagreement, and mutual vulnerability are exactly what make human relationships meaningful. A friendship that never challenges you is not a friendship. It is a mirror. As Paul Bloom wrote in The New Yorker, loneliness is "more than just pain; it's a warning sign, a critical signal that turns us toward the hard work of learning to live with one another." An AI companion does not teach you to live with others. It teaches you to avoid the discomfort of trying.
The privacy problem hiding in plain sight
There is another dimension to this that gets far less attention than it deserves. A persistent AI companion, one that remembers your conversations, your moods, your fears, your relationships, is also a surveillance product. A Stanford study on AI chatbot privacy found that users should be "absolutely" concerned about the data they share in these conversations. The Jed Foundation notes that unlike a therapist or counselor, AI companions do not operate under strict privacy rules. What you share can be stored, used for model training, or shared in ways you did not expect. California and New York have already passed laws specifically targeting AI companion products, imposing disclosure and safety requirements. The fact that companion-specific legislation was needed tells you something about how different these products are from ordinary software tools. When you tell a search engine what you want to buy, that is one level of data. When you tell an AI companion about your anxiety, your loneliness, your relationship struggles, that is something else entirely. The business model that turns intimate emotional data into a training signal is not friendship. It is extraction.
What people actually want from AI
The irony is that AI is genuinely transformative when it stays in its lane as a tool. It can write code, analyze data, generate first drafts, automate tedious workflows, and surface information faster than any human. These are real, substantial capabilities that make people's lives better. Nobody needed AI to be their friend for it to be useful. The value was already there. The push toward companionship is not about serving users. It is about capturing more of their time and emotional investment. It is the same instinct that turned a photo-sharing app into an anxiety machine, just with better technology and higher stakes. I am not dismissing the people who find comfort in AI conversation. Loneliness is painful, and the impulse to seek connection wherever you can find it is deeply human. But the solution to a loneliness crisis is not a product optimized for engagement metrics. It is investment in the messy, difficult, irreplaceable work of human connection. The labs building AI companions are not solving loneliness. They are monetizing it.
References
- AI companions are the final stage of digital addiction, and lawmakers are taking aim, MIT Technology Review
- AI Companions Reduce Loneliness, De Freitas et al., Journal of Consumer Research, 2026
- AI companions can comfort lonely users but may deepen distress over time, Aalto University via TechXplore
- AI chatbots and digital companions are reshaping emotional connection, American Psychological Association, 2026
- AI, Loneliness, and the Value of Human Connection, George Mason University
- Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions, Common Sense Media, 2025
- A.I. Is About to Solve Loneliness. That's a Problem, Paul Bloom, The New Yorker
- Study exposes privacy risks of AI chatbot conversations, Stanford Report
- Why AI Companions Are Risky, and What to Know If You Already Use Them, The Jed Foundation
- Analyzing the New AI Companion Chatbot Laws, Troutman Pepper
- Friends for sale: the rise and risks of AI companions, Ada Lovelace Institute