We stopped reading terms of service
Nobody reads terms of service. You know it. I know it. The companies writing them definitely know it. We scroll to the bottom, click "I agree," and move on with our lives. This has been true for decades, but in the age of AI, what we're blindly consenting to has gotten significantly more consequential, and the entire consent model is starting to crack.
The scale of the problem
A widely cited Carnegie Mellon study found that the average American would need roughly 76 eight-hour work days per year just to read all the privacy policies they encounter online. Nationally, that adds up to about 53.8 billion hours of reading time, with an estimated opportunity cost exceeding $780 billion. That cost actually surpasses what consumers spend to access the internet in the first place.

And these policies aren't getting shorter. A 2019 analysis of 70 popular digital services found that social media platforms alone averaged over 34,000 words of combined terms of service and privacy policy text, requiring nearly three hours to read per service. Twitter (now X) topped the list at a staggering 83,432 words.

A Brookings survey put it plainly: about 32% of people never read terms of service, another 39% only sometimes do, and just 20% claim to read them most of the time. Nearly three-quarters of the population are non-readers or partial readers. This isn't laziness. It's a rational response to an irrational system.
What changed with AI
For years, skipping terms of service mostly meant agreeing to let a company send you marketing emails or share anonymized analytics. The stakes were low enough that ignoring the fine print felt reasonable. That's no longer the case.

In 2024, a New York Times investigation revealed that companies including Google, Snap, and Meta were quietly rewriting their terms and conditions to include language about "artificial intelligence," "machine learning," and "generative AI." Some changes were as small as a few words. Others added entire new sections explaining how AI models would access user data.

The pattern has become widespread. xAI's terms grant the company "full rights to use any data" submitted by users who aren't logged in. Google's free-tier terms allow user content to be used to "provide, improve, and develop Google products and services and machine learning technologies." Starlink updated its privacy policy to permit AI training on customer data, though it at least offers an opt-out toggle. Adobe faced a social media backlash in 2024 after a routine terms update sparked fears that user content would feed AI models, prompting the company to hastily rewrite its agreement in clearer language.

The U.S. Federal Trade Commission took notice. In February 2024, the FTC published a pointed blog post warning that "any firm that reneges on its user privacy commitments risks running afoul of the law," specifically calling out companies that adopt more permissive data practices through "surreptitious, retroactive amendment" to their terms.

What we're agreeing to now includes training data rights for AI models, voice and likeness usage, conversation logging, and broad content licensing. The gap between what people think they're consenting to and what they're actually consenting to has never been wider.
Consent theater
Here's the uncomfortable truth: the "I agree" button is legal fiction, not informed consent. If it takes 76 work days to read every policy you encounter in a year, then no one, not a single person, is giving informed consent. The system relies on the fact that you won't read it. The length and complexity aren't a bug. They're a feature.

This creates what you might call consent theater. The ritual looks like consent. It has the legal structure of consent. But it lacks the substance of consent entirely. You can't meaningfully agree to something you haven't read, and the system is specifically designed to ensure you won't read it.

Bloomberg Law reported that companies like LinkedIn, Zoom, and eBay have all faced backlash, and even litigation, over opaque policy updates. But the backlash is always temporary, and the underlying dynamic never changes. Companies write longer, more complex terms. Users click through faster. The gap widens.
If consent is impossible, the system needs to change
When individual responsibility becomes structurally impossible, the problem stops being about individual behavior and starts being about system design. We don't blame people for not reading the ingredients list on every food item they buy, because we built a better system: nutrition labels.

The idea of applying this approach to privacy isn't new. Researchers at Carnegie Mellon proposed a "privacy nutrition label" back in 2009, designed to present data collection practices in a standardized, scannable format. Apple actually implemented a version of this in 2020, requiring developers to disclose data practices through privacy labels on App Store listings.

The results have been mixed. A study by UNSW found that apps experienced an average 14% drop in weekly downloads after Apple mandated the labels, suggesting they do influence behavior. But research from the Privacy Enhancing Technologies Symposium found that the labels "currently play a limited role in informing or empowering participants," partly because the information is self-reported by developers and not independently audited.

Then there's the AI-powered approach. Tools like ToS;DR (Terms of Service; Didn't Read) use community review and scoring to rate terms of service from Grade A to Grade E. Browser extensions now use AI to summarize terms on the fly. ZDNET tested several AI tools for summarizing terms and found that ChatGPT and Perplexity offered the clearest results. There's a delicious irony here: we're increasingly relying on AI to understand what AI companies are doing with our data.
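To make that concrete, here's a minimal sketch of what these summarizer tools roughly do under the hood: hand the raw terms to a language model with instructions to flag the clauses that matter. It uses the OpenAI Python SDK purely for illustration; the model name and prompt are my assumptions, not what any particular extension actually ships.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_terms(terms_text: str) -> str:
    """Summarize a terms-of-service document and flag AI-related clauses.

    A toy sketch: real tools chunk long documents, cache results, and
    cross-check against curated clause databases like ToS;DR's.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize these terms of service in plain language. "
                    "Flag any clauses covering AI training on user data, "
                    "voice or likeness usage, conversation logging, or "
                    "broad content licenses, and note any opt-out offered."
                ),
            },
            {"role": "user", "content": terms_text},
        ],
    )
    return response.choices[0].message.content

# Usage: print(summarize_terms(open("terms.txt").read()))
```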
Where regulation is heading
The EU AI Act, which entered into force in August 2024, requires providers of general-purpose AI models to publish summaries of the data used to train their systems. The transparency obligations became applicable from August 2025. It's a step in the right direction, though summaries of training data are very different from clear, enforceable limits on what companies can do with your personal information.

Singapore's approach through the Personal Data Protection Act (PDPA) offers a different model. The PDPC released advisory guidelines in 2024 specifically addressing the use of personal data in AI systems. These guidelines cover the development, testing, deployment, and procurement stages of AI implementation, and clarify when consent is required versus when exceptions like the Business Improvement Exception or Research Exception might apply. It's a more structured framework, but it still places the burden of understanding on the individual.

The deeper issue is that regulation tends to lag technology by years, sometimes decades. By the time rules catch up, the data has already been collected, the models have already been trained, and the terms have already been agreed to.
What could actually work
Fixing this probably requires a combination of approaches rather than any single solution:

- Standardized disclosure labels. Not the self-reported kind, but audited, regulated labels that tell you in plain language what data is collected, whether it's used for AI training, and whether you can opt out. Think nutrition facts, but with teeth.
- Meaningful defaults. Instead of opt-out buried in account settings, the default should be that your data isn't used for AI training unless you explicitly choose otherwise. The current approach of "you agreed because you didn't find the toggle" isn't real consent.
- AI-readable terms. If humans can't read these documents, maybe the solution is to write them in both human language and a machine-readable format that personal AI agents can parse on your behalf. Your AI could flag concerning clauses before you click "I agree" (a sketch of what that might look like follows this list).
- Regulatory floor. Some uses of personal data should simply be prohibited regardless of what any terms of service says. If a practice is harmful enough, no amount of "I agree" buttons should make it legal.
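As a thought experiment, here's what an "AI-readable terms" disclosure and a personal agent's clause-flagging pass might look like. The schema, field names, and thresholds are entirely hypothetical; no such standard exists today.

```python
import json

# Hypothetical machine-readable disclosure, published alongside the
# human-readable terms. The schema is invented for illustration.
disclosure = json.loads("""
{
  "service": "example.com",
  "data_collected": ["email", "usage_analytics", "uploaded_content"],
  "ai_training": {"used": true, "opt_out_available": true, "opt_out_by_default": false},
  "content_license": {"scope": "broad", "perpetual": true},
  "retention_days": 730
}
""")

def flag_concerns(d: dict) -> list[str]:
    """Return plain-language warnings an agent could surface before
    the user clicks 'I agree'. Thresholds here are arbitrary."""
    warnings = []
    ai = d.get("ai_training", {})
    if ai.get("used") and not ai.get("opt_out_by_default"):
        warnings.append("Your data trains AI models unless you opt out.")
    if d.get("content_license", {}).get("perpetual"):
        warnings.append("You grant a perpetual license to your content.")
    if d.get("retention_days", 0) > 365:
        warnings.append(f"Data is kept for {d['retention_days']} days (over a year).")
    return warnings

for warning in flag_concerns(disclosure):
    print("WARNING:", warning)
```

The point isn't this particular schema. It's that a standardized, machine-readable layer would let software do the reading that humans demonstrably won't.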
The real question
We built a consent system for an era when the worst-case scenario was getting spam emails. Now we're in an era where clicking "I agree" might mean your conversations train an AI model, your writing style gets absorbed into a language model, or your likeness gets used in ways you never imagined. The system isn't broken. It's working exactly as designed, just not for you. The question is whether we'll update the design before the gap between what we think we're agreeing to and what we're actually agreeing to becomes completely unbridgeable. Because right now, the biggest lie on the internet isn't in any terms of service. It's the two words we click every single day: "I agree."
References
- McDonald, A.M. & Cranor, L.F. "The Cost of Reading Privacy Policies," Carnegie Mellon University (2008). Available at: https://techland.time.com/2012/03/06/youd-need-76-work-days-to-read-all-your-privacy-policies-each-year/
- "A Policy Length Analysis for 70 Digital Services," The Biggest Lie on the Internet (2019). Available at: https://www.biggestlieonline.com/policy-length-analysis-2019/
- "Brookings Survey Finds Three-Quarters of Online Users Rarely Read Business Terms of Service," Brookings Institution. Available at: https://www.brookings.edu/articles/brookings-survey-finds-three-quarters-of-online-users-rarely-read-business-terms-of-service/
- "When the Terms of Service Change to Make Way for A.I. Training," The New York Times (June 2024). Available at: https://www.nytimes.com/2024/06/26/technology/terms-service-ai-training.html
- "AI (and Other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive," U.S. Federal Trade Commission (February 2024). Available at: https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/02/ai-other-companies-quietly-changing-your-terms-service-could-be-unfair-or-deceptive
- "The Fine Print of AI Terms of Service: What Most Businesses Miss," TermsFeed. Available at: https://www.termsfeed.com/blog/ai-terms-service-fine-print/
- "Privacy, AI Fears Jolt Companies to Rewrite Legal Terms of Use," Bloomberg Law. Available at: https://news.bloomberglaw.com/privacy-and-data-security/privacy-ai-fears-jolt-companies-to-rewrite-legal-terms-of-use
- Kelley, P.G. et al. "A 'Nutrition Label' for Privacy," Carnegie Mellon University (2009). Available at: https://cups.cs.cmu.edu/soups/2009/proceedings/a4-kelley.pdf
- "How Mobile App Data Privacy Concerns Impact Firm Performance," UNSW Business Think. Available at: https://www.businessthink.unsw.edu.au/articles/data-privacy-mobile-apps-downloads-labels
- "How Usable Are iOS App Privacy Labels?," Proceedings on Privacy Enhancing Technologies (2022). Available at: https://petsymposium.org/popets/2022/popets-2022-0106.pdf
- ToS;DR (Terms of Service; Didn't Read). Available at: https://tosdr.org/
- "I Used AI to Summarize Boring ToS Agreements, and These Two Tools Did It Best," ZDNET. Available at: https://www.zdnet.com/article/i-used-ai-to-summarize-boring-tos-agreements-and-these-two-tools-did-it-best/
- "AI Act," European Commission Digital Strategy. Available at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
- "Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems," Singapore PDPC (2024). Available at: https://www.pdpc.gov.sg/guidelines-and-consultation/2024/02/advisory-guidelines-on-use-of-personal-data-in-ai-recommendation-and-decision-systems
- "Starlink Is Using Your Personal Data to Train AI. Here's How to Opt Out," CNET. Available at: https://www.cnet.com/home/internet/starlink-updates-privacy-policy-for-ai-training/