We stopped reading
Something strange has happened to reading. Not the ability to decode words on a page, but the willingness to sit with them long enough for anything to actually land. AI summaries are everywhere now. Your email client condenses threads into bullet points. Slack highlights "key messages" so you can skip the rest. Articles arrive pre-digested. Meeting notes auto-generate before the conversation has even cooled. We have, without quite noticing, shifted from reading the thing to reading about the thing. And it feels fine. That's the problem.
The compression trap
Every summary is a lossy compression. It preserves the what and discards the how, the why, the texture that made the original worth writing. The nuance that gets dropped is often the most valuable part, the part that challenges your assumptions or introduces a complication you hadn't considered. A research team at the University of Pennsylvania ran seven experiments with nearly 10,500 participants and found that people who learned about a topic through LLM summaries developed shallower knowledge than those who used traditional web search, even when the core facts were identical. The summary group felt less invested in what they learned, produced sparser and less original advice, and were less likely to have that advice adopted by others. The researchers attributed this to an inherent feature of AI syntheses: by presenting pre-packaged answers rather than individual sources, they remove the need for users to discover and synthesize information themselves. This isn't just about AI getting things wrong. It's about what happens when the right answer arrives too easily.
The GPS problem, but for your mind
A few years ago, researchers at McGill University found that habitual GPS users had measurably worse spatial memory when asked to navigate on their own. The more people relied on turn-by-turn directions, the less they could form cognitive maps of their environment. GPS didn't just supplement navigation, it replaced the underlying skill. The same dynamic appears to be playing out with how we consume information. A 2025 study from MIT's Media Lab measured brain activity during essay writing and found that participants who used ChatGPT showed significantly lower neural engagement than those who wrote without AI assistance. The ChatGPT group couldn't accurately recall what they had "written" just minutes earlier. They showed reduced critical thinking, less sense of ownership, and a pattern the researchers called "cognitive debt," a deficit that deepened with repeated use. Just as GPS eroded spatial reasoning, AI summaries may be eroding deep comprehension. Not because the technology is broken, but because it works exactly as designed.
Reading is not information transfer
There's a common assumption that the point of reading is to extract information, and that any method delivering the same facts is functionally equivalent. But this misses what reading actually does to a brain. When you sit with a long argument, you're not just absorbing conclusions. You're following the author's reasoning, testing it against your own experience, noticing where you agree and where something feels off. You're holding multiple ideas in tension. You're developing what psychologists call "tolerance for ambiguity," the ability to sit with complexity without rushing to resolve it. Summaries collapse all of that into a clean takeaway. The messy middle, where real understanding lives, gets optimized away. This matters more for some people than others. If your job is to process high volumes of routine information, summaries are a genuine productivity tool. But if your work depends on depth, if you're a strategist, researcher, writer, or leader making decisions in uncertain conditions, the shortcut might be costing you the thing you're paid to provide.
The difference between filtering and replacing
Here's the counter-argument worth taking seriously: we were already drowning. The volume of information produced daily is genuinely unmanageable. Email, Slack, reports, articles, meeting transcripts: the firehose was already overwhelming before AI showed up. Summaries are a necessary filter. That's fair. But there's a meaningful difference between using a summary to decide what deserves your full attention and using a summary instead of giving anything your full attention. The first is triage. The second is atrophy. The concern isn't that summaries exist. It's that they're becoming the default, quietly replacing the longer, harder engagement that builds understanding. When "I read the summary" becomes synonymous with "I read it," we've lost something we haven't fully accounted for.
Intentional friction
The practical response isn't to swear off AI tools or pretend they aren't useful. It's to be deliberate about where you allow compression and where you don't. Read one long thing per day without summarizing it first. An essay, a chapter, a detailed report. Something that takes more than five minutes and requires you to hold a thread of thought across multiple pages. Not because it's efficient, but because it exercises a cognitive muscle that atrophies without use. Notice when you're reading about something instead of reading it. The AI-generated meeting summary, the article digest, the Slack highlight reel. Ask yourself whether the compressed version is enough, or whether the original deserves your actual attention. Protect your ability to be bored by a text. Boredom in the middle of something long is often the moment right before insight. Summaries eliminate that moment entirely, and with it, the chance for something unexpected to click. Thinking for yourself requires reading for yourself. Not always, not for everything, but often enough that the skill stays sharp. In an environment optimized for speed, choosing to slow down isn't inefficiency. It's independence.
References
- Melumad, S. & Yun, J. H. (2025). Experimental evidence of the effects of large language models versus web search on depth of learning. PNAS Nexus, 4(10), pgaf316. https://doi.org/10.1093/pnasnexus/pgaf316
- Dahmani, L. & Bohbot, V. D. (2020). Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Scientific Reports, 10, 6310. https://www.nature.com/articles/s41598-020-62877-0
- Kosmyna, N. et al. (2025). Your Brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. MIT Media Lab. https://arxiv.org/abs/2506.08872
- Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://www.mdpi.com/2075-4698/15/1/6
- Brinsa, M. (2025). Too long, must read: Gen Z, AI, and the TL;DR culture. Medium. https://medium.com/@markus_brinsa/too-long-must-read-gen-z-ai-and-the-tl-dr-culture-ea10d2e1195d