Nobody reads the terms
You clicked "I agree" this morning. Probably more than once. Maybe before your first cup of coffee. You didn't read what you agreed to, and neither did anyone else. A 2017 Deloitte survey found that 91% of people consent to legal terms and conditions without reading them. Among 18 to 34-year-olds, it's 97%. This isn't laziness. It's a rational response to an irrational system. The entire digital economy runs on a consent mechanism that everyone, companies included, knows is fiction.
The absurdity by the numbers
Researchers at Carnegie Mellon estimated that it would take the average person roughly 76 eight-hour working days per year just to read the privacy policies they encounter. Not the full terms of service, just the privacy policies. That's nearly a third of a working year spent reading legal text before you even open an app. The numbers get worse when you look at individual platforms. A study by The Biggest Lie on the Internet project found that the average social media platform's combined terms of service and privacy policy runs over 34,000 words, taking about 2.7 hours to read. Twitter (now X) alone clocks in at 83,432 words, a 6.7-hour commitment. Microsoft Teams takes 2 hours and 27 minutes. Even Candy Crush demands nearly two hours of legal reading. NYU law professor Florencia Marotta-Wurgler has pointed out that many of these documents contain language "more complex than articles in peer-reviewed scientific journals." They're not just long; they're deliberately impenetrable.
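Those hour figures are straightforward arithmetic: word count divided by reading speed. Here's a minimal sketch in Python, assuming a pace of about 207 words per minute; the rate is an assumption (it's the pace implied by the 2.7-hour and 6.7-hour figures above, and a plausible speed for dense legal prose), while the word counts are the ones cited in this article:

```python
# Back-of-the-envelope math behind the reading-time figures above.
# The reading speed is an assumption; the word counts are from the
# studies cited in this article.
WORDS_PER_MINUTE = 207  # assumed pace for dense legal prose

documents = {
    "Average social platform (ToS + privacy policy)": 34_000,
    "Twitter/X": 83_432,
}

for name, word_count in documents.items():
    hours = word_count / WORDS_PER_MINUTE / 60
    print(f"{name}: {word_count:,} words ≈ {hours:.1f} hours")

# Prints:
# Average social platform (ToS + privacy policy): 34,000 words ≈ 2.7 hours
# Twitter/X: 83,432 words ≈ 6.7 hours
```

Read just one average policy a week and you've burned more than a full eight-hour working day on legal text each month.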
A design choice, not a bug
Here's the thing most people miss: this isn't an accident. Companies benefit enormously from terms that nobody reads. Unread terms are a blank check. They let companies claim broad rights over your data, limit their liability in ways you'd never consciously accept, and shift risk entirely onto you, the user. The complexity isn't a side effect of legal thoroughness. It's a feature. The harder a document is to read, the less likely anyone will challenge what's in it. A 2016 study by researchers Jonathan Obar and Anne Oeldorf-Hirsch demonstrated this perfectly. They created a fictitious social networking site with terms of service that included a clause requiring users to surrender their firstborn child as payment and another stating all shared data would be forwarded to the NSA. The vast majority of participants clicked "I agree" without noticing either clause. A similar experiment by Security.org found that 98% of over 1,000 respondents agreed to a consent form with an absurd clause, including 100% of those who claimed they "typically read" such agreements. The lesson isn't that people are careless. It's that the system is designed to make careful reading practically impossible.
The psychology of giving up
There's a term in psychology that maps almost perfectly onto this behavior: learned helplessness. Coined by Martin Seligman in the 1960s, it describes a state where, after repeated exposure to situations you can't control, you stop trying to exert control at all, even when you could. That's exactly what's happened with terms of service. After years of encountering walls of incomprehensible legal text, people have learned that reading doesn't help. The terms are non-negotiable. You can't cross out a clause or ask for amendments. Your only options are "I agree" or "I don't use the service." And since the service is often essential to your work or social life, there's really only one option. This is learned helplessness meets sunk-cost fallacy. You're already on the platform. Your friends are there. Your files are there. Your professional identity might be there. What are you going to do, leave? So you click "I agree" the same way you breathe: automatically, without thinking, because the alternative isn't worth considering.
We say we care about privacy, then prove we don't
The Pew Research Center found in 2019 that only 9% of adults say they always read privacy policies before agreeing to them. And yet, in the same surveys, large majorities express concern about how their data is collected and used. This gap between stated values and actual behavior, what researchers call the privacy paradox, is the heart of the problem. We say we value privacy. We hand it away ten times before lunch because the alternative is not using the service. This isn't hypocrisy. It's a market failure. When every product demands the same non-negotiable consent, and when opting out means opting out of modern life, "choice" stops being meaningful.
The regulation that backfired
GDPR was supposed to fix this. The European Union's General Data Protection Regulation, implemented in 2018, required websites to obtain informed consent before collecting data. The idea was to empower users, to give them real control over their information. The result was cookie banners. Millions of them. On every website, on every visit. Researchers now have a term for what happened next: consent fatigue. Users are so overwhelmed by the constant barrage of consent pop-ups that they click "Accept All" reflexively, the exact opposite of what the regulation intended. A 2023 study by France's Interministerial Directorate for Public Transformation found that users spend an average of 4.1 seconds looking at a cookie banner, compared to the 40 minutes it would take to actually read the legal text behind it. The EU's Digital Omnibus Proposal, introduced in late 2025, attempts to address this with browser-level consent tools and standardized settings. But critics argue it's more of the same: technical patches on a fundamentally broken consent model. More clicking, no more reading than before. The regulation didn't fix the problem. It just added another layer of performative consent.
When governments clash with terms nobody read
In January 2026, India's IT ministry issued a 72-hour ultimatum to Elon Musk's X, ordering the platform to take immediate action after its AI chatbot Grok was used to generate obscene and sexualized images of women. The ministry demanded that X restrict the generation of such content and submit a detailed compliance report, warning that failure could jeopardize X's safe-harbor protections under Indian law. Here's the interesting part: X's terms of service almost certainly contained clauses that gave the platform broad discretion over what its AI tools could generate. Users who interacted with Grok had, in theory, agreed to those terms. But when the output became a matter of public outrage, the terms became irrelevant. No government official pointed to the ToS as a defense. No user was told "well, you agreed to this." This reveals something important about the nature of these agreements. Terms of service operate in a strange legal limbo. They're technically binding, but practically unenforceable in the ways that matter most. When something genuinely harmful happens, we don't turn to the ToS for answers. We turn to governments, regulators, and public pressure. The "agreement" was never the real mechanism of accountability.
The AI paradox: contracts all the way down
And now we've arrived at the strangest chapter yet. AI is increasingly being used to both write and read terms of service. On the drafting side, companies use AI to generate legal documents faster and at lower cost. On the reading side, AI contract readers promise to scan, summarize, and flag key clauses in seconds, and the legal tech market is crowded with products claiming to review contracts 85% faster than a human lawyer. Think about what this means. We're approaching a world where one AI writes terms of service that another AI reads on your behalf, and you, the human supposedly giving "consent," are removed from the loop entirely. The agreement exists between two machines. This isn't science fiction. It's already happening in enterprise contract management, where AI reviews thousands of vendor agreements without a human reading a single clause. Consumer-facing versions are close behind. The philosophical question is unavoidable: if no human reads or understands an agreement, is it still consent?
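To make the consumer-facing side concrete, here is a minimal sketch of what an "AI reads the terms for you" tool might look like, using the OpenAI Python client. The model name, prompt, and function are illustrative assumptions, not any real product's implementation:

```python
# A minimal sketch of an "AI reads the terms so you don't have to" tool.
# Assumes the OpenAI Python client (openai>=1.0); the model name and
# prompt are illustrative, not taken from any real product.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flag_risky_clauses(terms_text: str) -> str:
    """Summarize a terms-of-service document and flag clauses users
    commonly object to (data sharing, arbitration, liability waivers)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You review consumer terms of service. Summarize the "
                    "document in plain language, then list any clauses "
                    "covering data sharing, mandatory arbitration, or "
                    "liability waivers."
                ),
            },
            {"role": "user", "content": terms_text},
        ],
    )
    return response.choices[0].message.content

# Usage: print(flag_risky_clauses(open("terms_of_service.txt").read()))
```

Notice what never appears in this loop: a human reading the document. The summary becomes the "informed" in informed consent, and the summary itself is the output of a model trained and prompted by someone else.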
The question we're avoiding
The real issue isn't that people are lazy or that companies are evil. It's that informed consent at scale may simply be impossible. The digital economy requires billions of consent transactions every day. Each one is supposed to represent a meeting of minds, a conscious choice by an informed individual. But we know that's not happening. We've known it for decades. And instead of confronting that truth, we've built increasingly elaborate fictions around it: longer documents, more banners, better regulations, smarter AI, all in service of maintaining the illusion that someone, somewhere, is reading. Maybe the honest thing to do is admit what everyone already knows. Nobody reads the terms. The "I agree" button is a polite fiction, a social contract that says: I know you're taking my data, you know I haven't read how, and we're both going to pretend this is fine because the alternative, admitting that we have no meaningful control, is worse. The question isn't whether we can fix this system. It's whether we're ready to stop pretending it works.
References
- Deloitte, "2017 Global Mobile Consumer Survey" (via Business Insider)
- A. McDonald and L. F. Cranor, "The Cost of Reading Privacy Policies," Carnegie Mellon University (via Time)
- The Biggest Lie on the Internet, "A Policy Length Analysis for 70 Digital Services"
- thinkmoney, "What Phones Know About You" (via PCMag)
- F. Marotta-Wurgler, cited in Los Angeles Times
- J. Obar and A. Oeldorf-Hirsch, "The Biggest Lie on the Internet" (via Science / AAAS)
- Pew Research Center, "Americans' Attitudes and Experiences with Privacy Policies and Laws," 2019
- DITP (France), "Impact of Cookie Banner Design on Users," 2023 (via Axeptio)
- EU Digital Omnibus Proposal, 2025 (via Osborne Clarke)
- TechCrunch, "India orders Musk's X to fix Grok over 'obscene' AI content," January 2026
- M. Seligman, "Learned Helplessness," 1967 (via Simply Psychology)
- LegalOn Technologies, "AI Contract Review Software: Complete 2025 Buyer's Guide"