2025: the year humans stopped thinking
2025 was supposed to be the year AI agents changed everything. Instead, it might be the year we quietly stopped exercising the one thing that makes us irreplaceable: our ability to think.
The year of the agent
The tech industry declared 2025 the year of the AI agent. Not just chatbots that answer questions, but autonomous systems that plan, decide, and act. OpenAI, Google, Anthropic, and Microsoft all shipped agents that could browse the web, write and execute code, manage calendars, draft emails, and coordinate multi-step workflows with minimal human input. The market responded enthusiastically. Gartner predicted that by 2028, 15% of day-to-day work decisions would be made by AI agents. The agentic AI market was projected to hit $45 billion in 2025 alone. Fortune 500 companies raced to pilot autonomous systems across customer service, software engineering, and operations. The pitch was simple: let agents handle the grunt work so humans can focus on what matters. Strategy. Creativity. Judgment. But something else started happening too.
The cognitive offloading problem
A study presented by Microsoft Research at the 2025 CHI conference surveyed 319 knowledge workers and collected 936 instances of generative AI use. The self-reported reductions in cognitive effort were striking across every level of Bloom's taxonomy: knowledge recall dropped by 72%, comprehension by 78%, application by 70%, analysis by 71%, synthesis by 76%, and evaluation by 55%. Read those numbers again. These aren't reductions in busywork. Comprehension, analysis, synthesis: these are the cognitive muscles that define expertise.
Then came the MIT Media Lab study that made headlines later that year. Researchers divided 54 participants into three groups and asked them to write SAT essays using ChatGPT, Google Search, or nothing at all. EEG recordings showed that ChatGPT users had the lowest brain engagement across the 32 regions measured. Worse, over the study's several months, those users grew progressively more passive, often resorting to copy-paste by the end.
Harvard's experts offered a pointed summary: excessive reliance on AI-driven solutions may contribute to "cognitive atrophy," a shrinking of critical thinking abilities. Even ChatGPT itself, asked whether AI makes us dumber or smarter, gave an honest answer: "It depends on how we engage with it, as a crutch or a tool for growth."
Agents amplify the pattern
Chatbots were already encouraging cognitive shortcuts. Agents take this further by removing the human from the loop entirely. When a chatbot drafts your email, you still read it, edit it, and click send. When an agent handles your email, it reads, drafts, sends, and archives, all while you sleep. The convenience is undeniable. But so is the distance it creates between you and the cognitive work that used to keep your judgment sharp.
Christopher S. Penn described this shift in terms of attention: "As AI agents become more and more popular, autonomous code that can go out and find just what we want, we offload more of the attention process to AI." When algorithms filter data to present only what you want, you develop a distorted view of reality, a filter bubble maintained not by your own choices but by an agent's optimization function.
The research from Frontiers in Psychology frames this as a genuine paradox. On one hand, cognitive offloading through AI can reduce mental effort and conserve resources for more meaningful activities. On the other hand, the same tools may create "an erosion of introspection, over-reliance on algorithmic feedback, and anxiety induced by hyper-monitoring and optimization." The question is not whether AI is good or bad for cognition, but how it is reshaping the very architecture of how we think and cope.
The augmentation counterargument
Not everyone sees this as a crisis. The dominant industry narrative insists that agents augment rather than replace human thinking. IBM's 2025 analysis emphasized that a prevailing vision of agentic adoption "sees agents augmenting, but not necessarily replacing, human workers." They noted that real-world 2025 agents still fell short of full autonomy, requiring human oversight for complex tasks. This pattern held across industries. The doctor using an AI agent still brings clinical judgment. The developer using a coding agent still designs the architecture. The leverage has shifted enormously, but the expertise remains human. The problem with this argument is not that it's wrong. It's that it describes the ideal scenario while ignoring observed behavior. The MIT study didn't find that people used ChatGPT as a collaborative thinking partner. It found that they stopped thinking. The Microsoft Research data didn't show workers redirecting saved cognitive effort toward higher-order strategy. It showed reductions across every cognitive level, including evaluation, the highest-order thinking skill measured. Augmentation requires active engagement. What the data shows is passive delegation.
What we're actually losing
The risk isn't that AI agents will make bad decisions. In many routine contexts, they'll make better decisions than humans, faster and more consistently. The risk is what happens to the humans who stop practicing decision-making altogether.
Cognitive skills work like muscles. Memory, analytical reasoning, problem-solving: these capacities require regular exercise. Research by Sparrow et al. found that frequent use of search engines already reduced people's likelihood of remembering information independently; people remembered where to find information rather than the information itself. Extend this pattern to agents that don't just find information but act on it, and you get a workforce that remembers neither the information nor how to use it.
This matters because the situations where human judgment is most critical (novel problems, ethical dilemmas, ambiguous tradeoffs) are exactly the situations where agents perform worst. If we've spent years letting agents handle everything else, we may find ourselves cognitively unprepared for the moments that matter most.
What to do about it
The answer isn't to reject AI agents. That ship has sailed, and for good reason: agents genuinely excel at routine coordination, data processing, and repetitive workflows. The answer is to be deliberate about which cognitive tasks you delegate and which you protect.
- Keep a thinking practice. Set aside time for unassisted analysis. Write without AI. Reason through problems before consulting an agent. This isn't nostalgia; it's cognitive maintenance.
- Review, don't just approve. When an agent drafts something, don't skim and send. Read critically. Ask whether you would have reached the same conclusion. If you can't tell, that's a warning sign.
- Rotate your autopilot. If an agent has handled a task category for months, occasionally do it manually. This keeps your skills fresh and helps you catch when the agent's defaults no longer match reality.
- Audit your filter bubble. Agents optimize for your preferences, which means they can silently narrow your information diet. Deliberately seek out sources and perspectives your agent wouldn't surface.
- Teach, don't just delegate. The best use of agents is as a forcing function for clarity. If you can't articulate what you want an agent to do well enough for it to succeed, you probably haven't thought through the problem deeply enough.
The real question
2025 was framed as a binary: either agents take our jobs or they make us superhuman. The more interesting reality is subtler. Agents are taking over the cognitive routines that used to keep our thinking sharp, and most of us are letting them without noticing what we're giving up. The humans who thrive alongside agents won't be the ones who delegate the most. They'll be the ones who stay engaged, who treat AI as a sparring partner rather than a replacement for thought, and who understand that the ability to think well is not a static trait but a practice that requires ongoing effort. The agents have started. The question is whether we've stopped.
References
- IBM, "AI Agents in 2025: Expectations vs. Reality", ibm.com/think/insights/ai-agents-2025-expectations-vs-reality
- Microsoft Research, 2025 CHI Conference study on cognitive offloading in knowledge workers (surveying 319 workers, 936 AI use instances), as cited in Prof. Hung-Yi Chen, "2026 AI Cognitive Offloading Crisis," hungyichen.com
- MIT Media Lab, "Your Brain on ChatGPT," EEG study on AI's impact on critical thinking, arxiv.org/pdf/2506.08872v1, as reported by TIME and Harvard Gazette
- Harvard Gazette, "Is AI Dulling Our Minds?", November 2025, news.harvard.edu
- Christopher S. Penn, "Cognitive Offloading and AI," September 2025, christopherspenn.com
- Frontiers in Psychology, "Cognitive Offloading or Cognitive Overload? How AI Alters the Mental Architecture of Coping," 2025, frontiersin.org
- Frontiers in Psychology, "Becoming Human in the Age of AI: Cognitive Co-evolutionary Processes," 2025, frontiersin.org
- Cal Newport, "Why A.I. Didn't Transform Our Lives in 2025," The New Yorker, December 2025, newyorker.com
- Gartner, "Agentic AI Will Autonomously Resolve 80% of Common Customer Service Issues by 2029," March 2025, gartner.com
- Fortune, "2025 Was the Year of Agentic AI. How Did We Do?", December 2025, fortune.com
- MDPI Societies, "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking," mdpi.com
- Sparrow, B., Liu, J., and Wegner, D. M., "Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips," Science, 2011, as cited in MDPI Societies