AI literacy won't save you
Today is National AI Literacy Day. At The Tech Interactive in San Jose, nearly 1,000 Bay Area students are exploring how AI systems learn, how machines interpret information, and how to think critically about the role of AI in their daily lives. Across the country, educators, industry leaders, and community members are gathering for the third year running to "demystify AI and spark curiosity." It sounds great on paper. But there's a question nobody at these events seems to be asking: what happens when the tool you're teaching people to understand is actively reshaping how they think?
The literacy gap nobody talks about
AI literacy, as it's currently taught, focuses on comprehension. What is a large language model? How does training data work? What is algorithmic bias? These are useful questions. But they are the wrong starting point. The real challenge isn't that people don't understand AI. It's that AI is changing the way people process information, make decisions, and engage with complexity, and no amount of definitional knowledge prepares you for that.

A 2025 MIT Media Lab study found that "excessive reliance on AI-driven solutions" may contribute to "cognitive atrophy," a measurable erosion of critical thinking abilities. A survey by Oxford University Press found that six in ten schoolchildren felt AI had negatively affected their schoolwork skills. Meanwhile, 89% of school principals worry that AI use could make students dependent on technology for basic tasks, and 87% say the tools could make it less likely that students develop critical thinking at all.

AI literacy programs don't address any of this. They teach you what AI is. They don't teach you what AI does to you.
We've been here before
In the 1990s, "computer literacy" was the phrase on every educator's lips. Schools rolled out programs to teach students how to use Microsoft Office, navigate the internet, and type efficiently. These programs were considered essential preparation for the digital age. They were also completely inadequate preparation for what actually happened.

Computer literacy didn't prepare anyone for social media addiction, algorithmic radicalization, attention fragmentation, or the misinformation crisis. It taught people how to operate the tools without ever questioning what the tools were doing to their habits, relationships, and capacity for sustained thought.

The parallel to today's AI literacy movement is almost eerie. We are once again teaching people to use a transformative technology without equipping them to resist its most insidious effects. As Carl Hendrick wrote in The Learning Dispatch, the term "AI literacy" is "capacious enough to include technical knowledge, ethical reasoning, and a generalised scepticism about Silicon Valley, often in the same breath." A concept defined loosely enough can accommodate any agenda.
Cognitive outsourcing is the real threat
The deeper problem isn't ignorance about AI. It's dependency on it. Researchers at the University of the Basque Country found that students who over-rely on generative AI show diminished critical thinking, reduced originality in written work, and a growing tendency to accept AI-generated outputs without verification. That last tendency has a name: "automation bias," the human inclination to trust machine-generated information over one's own judgment. When AI outputs sound fluent and confident, errors, hallucinations, and biased reasoning go unchallenged.

A paper published in Learning and Instruction described two converging pathways to cognitive fatigue in AI-assisted learning. On one hand, cognitive underload occurs when students delegate too much thinking to AI, leaving too little mental stimulation. On the other, information overload from the sheer volume of AI-generated content taxes processing capacity. Both pathways lead to the same place: a diminished ability to think independently.

AI literacy that doesn't address dependency is just teaching people to use the tool that will erode their thinking. It's like running a smoking education program that only covers how cigarettes are manufactured.
What real AI education would look like
If we were serious about preparing people for a world shaped by AI, the curriculum would look nothing like what's being offered today. It would start with critical thinking, not as a buzzword, but as a practiced discipline. Students would learn to identify when they're outsourcing cognition, to sit with uncertainty instead of reaching for an instant answer, and to evaluate claims probabilistically rather than accepting whatever sounds most confident.

It would include what you might call "friction training," deliberate practice in doing hard things slowly. Writing without autocomplete. Researching without summarization tools. Reasoning through problems step by step before consulting an AI for verification. The goal wouldn't be to reject AI, but to build the cognitive muscle that makes AI use genuinely augmentative rather than substitutive.

It would also teach people when not to use AI. This is perhaps the most radical idea in the current landscape, and the one most absent from corporate-sponsored literacy programs. There are contexts where AI actively harms the thinking process, where the struggle itself is the learning, and where outsourcing to a model means losing the thing you were trying to gain. As a researcher at the University of Utah argued, we need to ask whether we want reading, writing, and reasoning to be things students do or things students outsource, "and whether we can even tell the difference between students who need the scaffold and students who are skipping a step they could take on their own."
The Singapore question
Singapore offers an instructive case study. The city-state has invested heavily in digital literacy through its Smart Nation initiative, launched in 2014 and expanded into Smart Nation 2.0. On paper, it's one of the most digitally advanced nations in the world, ranking 9th in the 2025 IMD Smart City Index. But when COVID-19 forced rapid digital adoption, it immediately exposed gaps in access and capability among vulnerable groups, including low-income households, the elderly, and migrant workers. Universal digital availability didn't translate to universal digital competence. The infrastructure was there, but the deeper skills weren't.

This is the pattern that AI literacy risks repeating. You can teach a nation what AI is. You can give everyone access to AI tools. But if the underlying capacity for independent thought, skepticism, and cognitive resilience isn't cultivated, the literacy is just a veneer. It's checkbox compliance, not genuine preparedness.
The paradox at the center
Here's what makes AI literacy particularly tricky: the people who need it most are the least equipped to benefit from it. Casual users who interact with AI through search engines and chatbots rarely seek out formal education about how these systems work. Elderly populations, who are increasingly targeted by AI-generated scams and misinformation, face both access barriers and motivation gaps. Children, who are forming their cognitive habits in an AI-saturated environment, are learning to use AI before they've developed the critical thinking skills to use it wisely.

Meanwhile, the people who do engage with AI literacy programs tend to be those who already have strong technical foundations, the professionals and students who were going to figure it out anyway. The gap between those who understand AI and those who are shaped by it without understanding it is widening, not closing.
Who actually benefits
It's worth asking who profits from the AI literacy movement. The 2025 White House Executive Order on AI education has spurred corporate pledges worth millions of dollars from the same companies selling AI tools to schools and businesses. Google, Microsoft, Amazon, and others have committed to "supporting AI education," which in practice means funding programs that teach people to use their products. As a New York Times opinion piece noted, more and more colleges are "eagerly partnering with A.I. companies, despite decades of evidence demonstrating the need to test education technology, which often fails to deliver measurable improvements in student learning." AI companies are using educational settings as training grounds to further their commercial goals.

This doesn't mean AI literacy is bad. It means the incentive structures around it are misaligned. When the primary funders of AI education are the companies that benefit from widespread AI adoption, the programs are unlikely to include modules on "when to turn the AI off" or "how to recognize that this tool is making you less capable."
Literacy is necessary but not sufficient
None of this is an argument against education. Understanding how AI works is genuinely important, and programs like National AI Literacy Day do valuable work in making that knowledge accessible. The 1,000 students in San Jose today will leave with a better understanding of the technology reshaping their world, and that matters.

But literacy without cognitive resilience is like swimming lessons that never put you in the water. Knowing the mechanics doesn't protect you from the current. The real gap isn't knowledge. It's the ability to think independently when a very persuasive machine is offering to think for you. Until AI literacy programs grapple with that, they'll keep teaching people about the wave without teaching them how to stay afloat.
References
- The Tech Interactive, "National AI Literacy Day Summit," March 27, 2026. https://www.thetech.org/misc-pages/national-ai-literacy-day-summit/
- National AI Literacy Day, "Events," 2026. https://ailiteracyday.org/events
- Harvard Gazette, "Is AI dulling our minds?," November 2025. https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling-our-minds/
- Oxford University Press, survey on AI impact on schoolchildren's skills, as reported by BBC News, "Are these AI prompts damaging your thinking skills?," 2025. https://www.bbc.com/news/articles/cd6xz12j6pzo
- EdWeek, "Teachers Worry AI Will Impede Students' Critical Thinking Skills," October 2025. https://www.edweek.org/technology/teachers-worry-ai-will-impede-students-critical-thinking-skills-many-teens-arent-so-sure/2025/10
- Carl Hendrick, "Does 'AI Literacy' Actually Mean Anything?," The Learning Dispatch. https://carlhendrick.substack.com/p/does-ai-literacy-actually-mean-anything
- Bao et al., "Learners' AI dependence and critical thinking," Acta Psychologica, 2025. https://www.sciencedirect.com/science/article/pii/S0001691825010388
- "Self-regulation and overreliance on AI," Computers in Human Behavior, 2026. https://www.sciencedirect.com/science/article/pii/S0747563226000828
- "The role of over-reliance on AI in negative consequences of student learning," Cogent Education, 2025. https://www.tandfonline.com/doi/full/10.1080/2331186X.2025.2591503
- Nick Potkalitsky, "The Literacy Gap Behind Cognitive Offloading," Educating AI. https://nickpotkalitsky.substack.com/p/the-literacy-gap-behind-cognitive
- Nancy Butler Songer, "Learning with AI: Losing critical thinking at the worst time," Open Access Government, January 2026. https://www.openaccessgovernment.org/article/learning-with-ai-losing-critical-thinking-at-the-worst-time/203341/
- Smart Nation Singapore. https://www.smartnation.gov.sg/
- PMC, "Making universal digital access universal: lessons from COVID-19 in Singapore," 2022. https://pmc.ncbi.nlm.nih.gov/articles/PMC9010197/
- The White House, "Major Organizations Commit to Supporting AI Education," September 2025. https://www.whitehouse.gov/articles/2025/09/major-organizations-commit-to-supporting-ai-education/
- The New York Times, "A.I. Companies Are Eating Higher Education," February 2026. https://www.nytimes.com/2026/02/12/opinion/ai-companies-college-students.html