The dependency on AI
We go to AI for everything now. Need to write an email? Ask AI. Stuck on a coding problem? Ask AI. Want to understand a complex topic? Ask AI. It has become our default, the same way Google became our default two decades ago. We stopped memorizing phone numbers once contacts apps could remember them for us. We stopped remembering directions once GPS took over. And now, we are beginning to stop thinking when AI can do it for us. This is not a hypothetical. It is already happening, and the consequences run deeper than most of us realize.
The new Google
When Google launched, it felt like a superpower. Any question, any topic, instantly searchable. Over time, we restructured our cognition around it. We stopped retaining facts because we knew we could look them up. The phrase "just Google it" became a reflex, not a suggestion.

AI is following the same trajectory, but along a much steeper curve. Google helped us find information. AI helps us process it. That is a fundamentally different kind of dependency. When we outsource search, we lose recall. When we outsource reasoning, we risk losing the ability to reason at all.

A 2024 study published in Societies found a significant negative correlation (r = -0.68) between AI tool usage and critical thinking scores. Frequent AI users showed a diminished ability to critically evaluate information and engage in reflective problem-solving. Researchers at MIT found that using ChatGPT significantly reduced "relevant cognitive load," the intellectual effort required to transform information into knowledge. In plain terms, the more we lean on AI to think, the less our brains practice thinking.

Microsoft's own research echoed this. A survey of knowledge workers found self-reported reductions in cognitive effort and confidence among people who used generative AI regularly. They were not just offloading tasks; they were offloading judgment.
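To make the headline statistic concrete: r = -0.68 is a Pearson correlation coefficient, where -1 means a perfect negative relationship and 0 means none. Here is a minimal sketch of how such a coefficient is computed, using invented illustrative numbers, not the study's actual data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented illustrative data: weekly hours of AI tool use vs. a
# critical-thinking assessment score. As usage rises, scores fall,
# so the coefficient comes out strongly negative.
ai_hours = [2, 5, 8, 12, 15, 20, 25, 30]
ct_score = [88, 85, 80, 74, 76, 65, 60, 55]

print(round(pearson_r(ai_hours, ct_score), 2))
```

A value of -0.68 sits well short of this toy example's near-perfect trend, but it is still a strong negative association by social-science standards.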
The blackout scenario
Now imagine a more extreme version of this dependency. Not a gradual erosion, but a sudden collapse: a prolonged, widespread power outage lasting not hours or days, but years or decades. It sounds dramatic, but it is worth sitting with. If the grid went dark tomorrow and stayed dark, what would we actually lose?

We would lose access to AI systems, obviously. But we would also lose the vast digital infrastructure that stores and serves our collective knowledge. Cloud servers, data centers, interconnected networks: all of it runs on electricity. A total blackout would not just silence our AI assistants. It would sever our connection to decades of accumulated digital information.

Gartner has warned that by 2028, misconfigured AI embedded in cyber-physical systems could shut down national critical infrastructure in a G20 country. The risk is not just theoretical. As AI becomes more deeply woven into power grids, water systems, financial networks, and supply chains, a failure in one layer can cascade through the physical world, damaging equipment, forcing shutdowns, and destabilizing entire economies.

The Al Habtoor Research Centre explored this scenario directly, noting that a global AI shutdown could wipe trillions off stock markets and trigger national security crises across the globe. We have built systems that assume AI will always be available. That assumption is a vulnerability.
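The cascade mechanism can be made concrete with a toy model. The sketch below walks a hypothetical dependency graph (all system names are invented for illustration) and shows how a failure in an AI layer can propagate back into physical infrastructure through a feedback loop:

```python
from collections import deque

# Hypothetical dependency graph: deps[a] lists the systems that depend
# on a, so if a fails, they fail too. Note the feedback loop:
# the power grid feeds the data centers that host the AI that
# balances the power grid.
deps = {
    "power_grid": ["data_centers", "water_pumps"],
    "data_centers": ["ai_services", "cloud_storage"],
    "ai_services": ["grid_balancing", "fraud_detection"],
    "grid_balancing": ["power_grid"],
}

def cascade(failed, deps):
    """Return every system knocked out when `failed` goes down (BFS)."""
    down = {failed}
    queue = deque([failed])
    while queue:
        node = queue.popleft()
        for dependent in deps.get(node, []):
            if dependent not in down:
                down.add(dependent)
                queue.append(dependent)
    return down

# A failure confined to the AI layer still takes down the whole graph,
# because the grid itself depends on AI-driven balancing.
print(sorted(cascade("ai_services", deps)))
```

The design point is the cycle: once a physical system depends on an AI layer that depends on that physical system, there is no "safe" place for the failure to start.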
We forgot how to do things without it
The deeper problem is not the infrastructure. It is us.

When calculators became common, we worried people would forget arithmetic. That mostly did not happen, because schools kept teaching the fundamentals. But AI is different. It is not replacing one narrow skill. It is replacing a whole way of engaging with problems. Writing, analyzing, diagnosing, planning: these are broad cognitive activities, and we are handing them over wholesale.

A study from Harvard Medical School found that AI assistance improved performance for some clinicians but actually damaged it for others. The researchers did not fully understand why, but the implication is clear: AI does not uniformly help. Sometimes it makes us worse at the very thing we are using it for.

Professor Rose Luckin at University College London has called this "cognitive atrophy": the gradual weakening of skills we stop practicing because AI handles them. Students who lean on AI for coursework show increased procrastination, memory loss, and poorer academic performance. Professionals who rely on AI for analysis find their own analytical muscles weakening.

If we woke up one day and AI was gone, could we still diagnose patients, debug code, write coherent arguments, navigate without GPS, or troubleshoot a broken system? For a growing number of people, the honest answer is: not as well as we used to.
Building resilience, not rejection
None of this means we should stop using AI. That ship has sailed, and AI genuinely makes many things better, faster, and more accessible. The point is that dependency without resilience is dangerous. A few things worth thinking about:

- Preserve core skills deliberately. Just as pilots still train for manual flight even though autopilot handles most of the work, we should keep practicing the fundamentals. Write without AI sometimes. Solve problems from scratch. Do the hard thinking before reaching for the shortcut.
- Treat AI as a collaborator, not a replacement. The best outcomes happen when humans and AI work together, each contributing what they do best. Use AI to accelerate your thinking, not to replace it. Check its work. Challenge its outputs. Stay in the loop.
- Build systems that degrade gracefully. Infrastructure should be designed so that if AI components fail, there are fallback mechanisms: manual overrides, human-readable documentation, analog backups. The more critical the system, the more important it is that it can function without AI.
- Invest in knowledge preservation. Not everything should live exclusively in the cloud. Physical libraries, printed documentation, and offline archives are not relics of the past. They are insurance for the future.
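The "degrade gracefully" idea is a pattern you can build directly into software. A minimal sketch, where `ai_summarize` stands in for a hypothetical remote AI service (here it always fails, to simulate an outage) and `naive_summarize` is a crude offline fallback, both invented for illustration:

```python
def ai_summarize(text: str) -> str:
    # Stand-in for a call to a remote AI service. It always fails here,
    # simulating a network outage or a downed provider.
    raise ConnectionError("AI service unreachable")

def naive_summarize(text: str, max_sentences: int = 2) -> str:
    # Crude offline fallback: keep the first few sentences verbatim.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

def summarize(text: str) -> str:
    # Try the AI path first, but never assume it is available.
    try:
        return ai_summarize(text)
    except (ConnectionError, TimeoutError):
        return naive_summarize(text)

print(summarize("The grid failed. Backups held. Manual overrides worked. Recovery began."))
# → "The grid failed. Backups held."
```

The fallback is worse than the AI path by design. The point is not parity; it is that the system keeps producing something usable when the AI layer disappears.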
The real question
The dependency on AI is not a future problem. It is a present reality. Every time we default to an AI tool without thinking first, we make a small trade: convenience now for capability later.

The question is not whether AI is useful. It clearly is. The question is whether we are maintaining the ability to function without it. Because the moment we cannot, we are no longer using a tool. We are leaning on a crutch. And the defining feature of a crutch is that, without it, you cannot walk on your own.

The best time to think about this is before the blackout, not after.
References
- M. Abbas, "From tools to threats: a reflection on the impact of artificial-intelligence chatbots on cognitive health," PMC, 2024.
- H.-P. Lee et al., "The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers," CHI Conference on Human Factors in Computing Systems, 2025.
- "Is AI dulling our minds?" Harvard Gazette, November 2025.
- "Are these AI prompts damaging your thinking skills?" BBC News, 2025.
- "Generative AI: the risk of cognitive atrophy," Polytechnique Insights, 2025.
- "AI will likely shut down critical infrastructure on its own, no attackers required," CIO, 2025.
- "What If: Global AI Systems Collapsed Overnight?" Al Habtoor Research Centre, 2025.
- "A Misconfigured AI Could Trigger Infrastructure Collapse," BankInfoSecurity, 2026.
- "The human side of AI: The growing risks of ubiquitous use of AI on talent," Thomson Reuters, 2025.