Meta's New Surveillance Program Raises Serious Privacy Questions

Meta has reportedly begun installing tracking software on the computers of its US-based employees, recording mouse movements, clicks, and keystrokes. The program, internally called the Model Capability Initiative (MCI), has a specific purpose: harvesting detailed behavioral data to train AI models that can autonomously perform work tasks by mimicking how real humans interact with software.

The logic is straightforward from an AI development standpoint. If you want a model to navigate a computer interface the way a person does, you need to show it exactly how people actually navigate. That means capturing every hesitation before a click, every cursor path across a screen, every sequence of keystrokes in a workflow. Employees become, in effect, unwitting performance coaches for the next generation of AI agents.

But the implications reach well beyond one company's internal AI project.

What the Model Capability Initiative Actually Captures

Keystroke and mouse tracking at this level is far more revealing than it might initially sound. This is not simply logging which applications someone opens or how many hours they work. Granular input data can expose how a person thinks through a problem, where they hesitate, what they delete and retype, and even emotional states inferred from typing rhythm and error rates.
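To make that concrete, here is a minimal sketch of how revealing even a tiny raw input stream can be. Everything here is assumed for illustration: the event format, the threshold, and the metric names are hypothetical, since Meta's actual MCI data format is not public.

```python
from statistics import mean

# Hypothetical event log: (timestamp_seconds, key) pairs, the kind of raw
# stream a keystroke logger captures. Values are invented for illustration.
events = [
    (0.00, "d"), (0.12, "e"), (0.25, "f"),   # fluent typing
    (2.80, "Backspace"),                     # long pause, then a correction
    (2.95, "g"), (3.07, "e"), (3.20, "t"),
]

def typing_profile(events, hesitation_threshold=1.0):
    """Summarize inter-keystroke gaps and corrections from a raw event stream."""
    gaps = [b[0] - a[0] for a, b in zip(events, events[1:])]
    return {
        "mean_gap": round(mean(gaps), 3),                          # typing rhythm
        "hesitations": sum(g > hesitation_threshold for g in gaps),  # pauses
        "corrections": sum(key == "Backspace" for _, key in events), # delete-and-retype
    }

print(typing_profile(events))  # → {'mean_gap': 0.533, 'hesitations': 1, 'corrections': 1}
```

Seven keystrokes are enough to surface a hesitation and a correction; scale that to every workday and the behavioral picture becomes extremely detailed.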

For employees, this creates a surveillance environment that goes considerably deeper than a basic productivity monitor. The data being collected could theoretically be used to evaluate individual performance, identify behavioral anomalies, or inform decisions about roles and responsibilities, even if none of that is the stated intent of the MCI program.
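How trivially such data could be repurposed is worth spelling out. The sketch below flags "behavioral anomalies" with nothing more than a z-score over a per-employee daily metric; the readings and the metric itself are invented, but the technique is the standard one a monitoring pipeline could apply to aggregates derived from raw input logs.

```python
from statistics import mean, stdev

# Hypothetical daily "mean keystroke gap" readings for one employee, in
# seconds. Values are invented; the last day is a deliberate outlier.
daily_gap = [0.21, 0.19, 0.22, 0.20, 0.23, 0.55]

def flag_anomalies(series, z_cutoff=2.0):
    """Return indices of days whose value sits more than z_cutoff standard
    deviations from the series mean -- a crude behavioral-anomaly detector."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > z_cutoff]

print(flag_anomalies(daily_gap))  # → [5]
```

A few lines of statistics turn training data into a performance-evaluation tool, which is exactly why the stated intent of a collection program offers limited reassurance.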

Meta's workforce is also, notably, among the most technically sophisticated anywhere. If this approach is normalized at a company like Meta, it sets a significant precedent for less technically literate workforces at smaller companies. Corporate adoption of AI training programs built on employee behavioral data could become routine without workers fully understanding what is being recorded or how it might be used.

The Broader Trend of Workplace Data Harvesting

Meta's program did not emerge in a vacuum. The push toward AI agents, systems capable of performing multi-step computer tasks autonomously, has created enormous appetite for behavioral training data across the technology industry. Companies need examples of real human computer use to build these systems, and employees represent a convenient, captive source.

This sits within a longer trend of expanding workplace monitoring. Remote work accelerated the adoption of employee monitoring tools throughout the early 2020s, normalizing the idea that employers have legitimate interests in observing how workers spend their time on company hardware. What Meta is doing extends that logic into new territory: the data is not primarily about measuring productivity. It is about building a commercial AI product.

That distinction matters. Employees generating training data for a product that will be sold or deployed externally raises questions about compensation, consent, and intellectual contribution that standard employment agreements were never designed to address.

What This Means For You

Even if you do not work at Meta, this story is relevant to how you think about privacy at work and beyond.

First, if you work in any technology-adjacent role, it is worth reviewing what monitoring software your employer has installed on company-issued devices. Many organizations have broad rights to monitor activity on hardware they own, but the scope of that monitoring is not always clearly communicated to employees. Asking HR or IT for a plain-language explanation of what is tracked is a reasonable and increasingly necessary step.

Second, the separation between work devices and personal devices matters more than ever. Using a personal laptop or phone for any activity you consider private, rather than a company-issued machine, is one practical way to maintain a boundary. A VPN on your personal device adds an additional layer of protection for your home network traffic, particularly if you work remotely and want to keep personal browsing activity separate from anything that might be visible to employer-controlled network infrastructure.

Third, consider what behavioral data you generate on any platform, not just at work. The instinct driving Meta's MCI program, that detailed human behavioral patterns have significant commercial value for AI development, is not unique to the workplace. It reflects how consumer platforms have operated for years.

Actionable takeaways:

  • Ask your employer what monitoring software, if any, is installed on company devices
  • Keep personal activity on personal devices, not work-issued hardware
  • Use a VPN on personal devices when working remotely to separate your private traffic from employer-visible network activity
  • Review employment agreements for language about data ownership and AI training use
  • Stay informed about your rights under applicable state privacy laws; several US states have enacted or are considering workplace privacy protections

Meta's Model Capability Initiative is a reminder that the line between being an employee and being a data source is becoming harder to see. Understanding where that line sits, and what tools exist to maintain some control over your own behavioral data, is now a practical necessity rather than an abstract concern.