Meta is watching you type
On April 21, 2026, Reuters broke a story that should have been bigger than it was. Meta is installing tracking software on every U.S. employee's work computer. The tool, called the Model Capability Initiative (MCI), captures mouse movements, clicks, keystrokes, and periodic screenshots across a designated list of work apps and websites. The stated goal: generate real-world training data so Meta's AI agents can learn to do office work autonomously.

Two days later, Meta announced it would lay off 10% of its workforce, roughly 8,000 people, effective May 20. The timing is not a coincidence. It's a playbook.
The surveillance pipeline
MCI was disclosed through an internal memo posted in a channel belonging to Meta's Superintelligence Labs team. According to Reuters, the software runs on work-related apps and websites, including Google, LinkedIn, Wikipedia, GitHub, Slack, and Atlassian products. Meta's own properties like Threads and Manus are also on the list, which was originally slated to include AI tools like ChatGPT and Claude before being revised.

The memo framed MCI as a collective effort. "This is where all Meta employees can help our models get better simply by doing their daily work," it read. Meta CTO Andrew Bosworth was more direct about the endgame in a separate communication: "The vision we are building towards is one where our agents primarily do the work and our role is to direct, review and help them improve."

A Meta spokesperson told the BBC that if the company is "building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them." The company insists the data won't be used for performance reviews and that safeguards are in place to protect sensitive content. There is no opt-out.
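To make the reporting concrete, here is a minimal sketch of what a capture pipeline like the one Reuters describes might record. Meta has not published MCI's actual schema or code; every class, field, and function name below is hypothetical and exists only to illustrate how keystrokes, clicks, and screenshots reduce to a structured event stream.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical event record -- Meta has not disclosed MCI's real format.
@dataclass
class ActivityEvent:
    timestamp: datetime
    app: str                      # e.g. "Slack", "GitHub"
    kind: str                     # "keystroke" | "click" | "mouse_move" | "screenshot"
    detail: Optional[str] = None  # key pressed, element clicked, screenshot path

def summarize_session(events):
    """Roll raw events up into per-app activity counts, the kind of
    aggregate a downstream training pipeline might consume."""
    counts = {}
    for e in events:
        per_app = counts.setdefault(e.app, {})
        per_app[e.kind] = per_app.get(e.kind, 0) + 1
    return counts

events = [
    ActivityEvent(datetime.now(timezone.utc), "Slack", "keystroke", "h"),
    ActivityEvent(datetime.now(timezone.utc), "Slack", "keystroke", "i"),
    ActivityEvent(datetime.now(timezone.utc), "GitHub", "click", "merge-button"),
]
print(summarize_session(events))
# {'Slack': {'keystroke': 2}, 'GitHub': {'click': 1}}
```

Even this toy version shows the asymmetry: a few lines of code on the collection side, and every application an employee touches becomes labeled training data on the other.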
Track, train, terminate
Here's the sequence that matters:
- Meta installs software that records how employees navigate their computers, solve problems, and complete tasks.
- That behavioral data feeds into AI models designed to replicate those exact workflows.
- Meta lays off 8,000 people to "offset other investments."
The internal memo from Janelle Gale, Meta's chief people officer, framed the layoffs as efficiency measures: "We're doing this as part of our continued effort to run the company more efficiently and to allow us to offset the other investments we're making." Those other investments? Meta raised its 2026 capital expenditure forecast to between $125 billion and $145 billion, up from an already staggering $72.2 billion in 2025.

Nearly all of it is going to AI infrastructure, data centers, and the Superintelligence Labs division that created MCI in the first place. So Meta is spending more on AI infrastructure in a single year than most countries spend on their entire defense budgets, while simultaneously harvesting employee behavior to train the systems that will replace them. The employees are, quite literally, building the tools of their own obsolescence.
The consent problem
Meta says MCI isn't optional. If you work at Meta in the U.S., your keystrokes are being logged. Your mouse movements are being captured. Your screen is being periodically photographed. And all of it is being fed into a training pipeline.

Legally, this is probably fine. U.S. federal law offers remarkably little protection against employer surveillance on company-owned devices. The Electronic Communications Privacy Act, written in 1986, predates the modern internet, let alone AI agents. Most state laws only require disclosure, not consent. California has stronger protections than most states, but even there, the bar for workplace monitoring on employer hardware is low. Fast Company put it bluntly: "U.S. law is woefully behind in an age when most office work is done on a computer, and 'consent' is virtually meaningless when jobs are on the line."

This is the core tension. Employees technically "consent" by continuing to work at Meta. But when the alternative is losing your job in a market where your employer just announced 8,000 layoffs, what does consent actually mean? It's the illusion of choice dressed up in employment agreements.
The security angle nobody's discussing
If Meta can capture granular workflow patterns from every employee, so can anyone who breaches their systems. MCI doesn't just record keystrokes. It records how people navigate applications, which tools they use in sequence, how they structure their problem-solving workflows, and what appears on their screens at random intervals. That's not just productivity data. It's a comprehensive behavioral fingerprint of how an entire organization operates.

Consider what a breach of this dataset would look like. An attacker wouldn't just get credentials or documents. They'd get a detailed map of how Meta's workforce interacts with internal tools, third-party services, and each other. That's a social engineering goldmine.

Meta claims safeguards are in place to protect sensitive content. But the history of large-scale data collection initiatives is not encouraging on this front. The more data you collect, the more surface area you create for compromise. And the more granular the data, the more dangerous a breach becomes.
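A toy example makes the "behavioral fingerprint" point concrete: even a bare sequence of app-switching events, with no content at all, yields a distinctive per-user transition profile. All the users, apps, and logs below are invented for illustration; nothing here reflects Meta's actual data.

```python
from collections import Counter

def transition_profile(logs):
    """For each user, count app-to-app transitions and return the
    most frequent ones. Navigation habits alone are identifying."""
    profile = {}
    for user, apps in logs.items():
        bigrams = Counter(zip(apps, apps[1:]))  # consecutive app pairs
        profile[user] = bigrams.most_common(3)
    return profile

# Fabricated session logs: just the ordered list of focused apps.
logs = {
    "alice": ["Slack", "GitHub", "Slack", "GitHub", "Jira", "Slack", "GitHub"],
    "bob":   ["Gmail", "LinkedIn", "Gmail", "LinkedIn", "Gmail"],
}

for user, top in transition_profile(logs).items():
    print(user, top)
```

An attacker holding months of such transitions doesn't need screenshots to know who merges code after Slack pings, who approves what, and when. That's the map, and it's a byproduct of the metadata alone.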
Everyone is watching
Meta isn't doing this in a vacuum. As Business Insider reported, the use of employee monitoring data for AI training represents an "evolution of workplace surveillance." The pattern is emerging across the industry: companies that invested heavily in productivity monitoring tools during the remote work era are now realizing that same data pipeline can feed AI training. The difference is that Meta is being explicit about it. Most companies doing similar things are quieter.

The employee tracking software market has grown substantially since 2020, and the shift from "monitoring productivity" to "training AI" is a natural, if unsettling, evolution. Every company with AI ambitions is looking at internal data as a competitive advantage. Your workflow patterns, your decision-making sequences, your navigation habits, all of it is potential training data. Meta is just the first major tech company to say the quiet part out loud.
The AI-washing of layoffs
There's a familiar pattern in how tech companies frame workforce reductions in the AI era. The language is always about "efficiency" and "optimization" and "investing in the future." The layoffs are never presented as what they often are: a bet that AI systems can eventually do the work cheaper.

Meta's framing is textbook. The layoffs are to "offset other investments." The tracking software is to "help our models get better." The AI agents are designed so employees can "direct, review and help them improve." Every statement positions the change as collaborative, as though the employees being surveilled and laid off are willing participants in a shared mission. But the numbers tell a different story. Meta is cutting 8,000 human jobs while increasing its AI capital expenditure by tens of billions. The company's own CTO described a future where "agents primarily do the work." The investment thesis is clear: human labor is being replaced, and the humans are being asked to train their replacements on the way out.

On Blind, the anonymous workplace forum, Meta employees have been blunt. Fast Company reported that negative posts about AI at Meta have more than quadrupled since 2024. One employee described the atmosphere as "dead and depressing."
What this means for everyone else
Meta's MCI program is a preview of a broader shift. As AI agents become more capable, the incentive to capture high-quality human workflow data will only grow. And the cheapest, most comprehensive source of that data is the employees who are already doing the work.

The uncomfortable truth is that this dynamic isn't limited to Meta. Any employer with access to your work computer, your email, your Slack messages, and your browsing patterns is sitting on a potential AI training dataset. The question isn't whether other companies will follow Meta's lead. It's whether they'll tell you when they do.

For individual workers, the calculus has changed. The old fear was that AI would automate your job. The new fear is that your employer will use you to automate your job, and that the distinction between "using AI tools" and "training AI tools" will quietly disappear. The surveillance-to-replacement pipeline isn't hypothetical. It's running in production at one of the largest employers in tech. And the rest of the industry is taking notes.
References
- Reuters: Meta to start capturing employee mouse movements, keystrokes for AI training data (April 21, 2026)
- CNBC: Meta tracks employee usage on Google, LinkedIn as part of AI training project (April 22, 2026)
- CNN: Meta to cut 10% of staff as it pours billions into AI (April 23, 2026)
- Fast Company: Meta tracking employee keystrokes to train AI is probably legal, experts say that doesn't make it ethical (April 23, 2026)
- Fast Company: Meta staff vent about AI and layoffs on Blind (April 29, 2026)
- Inc: Meta's AI experiment shows why monitoring employees backfires (April 28, 2026)
- Reuters: Eyes on me, monitoring the way through employee privacy (April 27, 2026)