Meta installs tracking software on U.S. employees' computers to train AI models
Meta is recording employees’ mouse movements, clicks, keystrokes and occasional screenshots to train office-focused AI, turning everyday workplace activity into model training data.

Meta has begun installing tracking software on U.S.-based employees’ computers that records mouse movements, clicks and keystrokes, and can also capture occasional screenshots, to train its artificial intelligence models. The tool, called the Model Capability Initiative, is limited to a list of work-related apps and websites, but it pushes the company deeper into a model of AI development built on detailed traces of everyday office behavior.
Employees were told the data would not be used for performance reviews, a point meant to ease one of the most obvious fears around workplace monitoring. Meta framed the effort as part of a broader push to make its AI products better at day-to-day office work and to build agents that can perform tasks more autonomously. The internal memos describing the rollout were posted in a channel for Meta’s model-building team by a staff AI research scientist, placing the project squarely inside the company’s AI operation rather than a separate workplace analytics program.
The tracking plan arrives as Meta retools its artificial intelligence strategy around Meta Superintelligence Labs, which says it rebuilt the company’s AI stack from the ground up over the last nine months and is aiming toward personal superintelligence. Meta recently introduced Muse Spark, the first model in its new Muse series, and chief AI officer Alexandr Wang has been tied to that broader model push. The company’s bet is clear: if its systems can learn from real employee interactions with software, they may get better at the exact tasks AI agents still struggle with, from dropdown menus to keyboard shortcuts.
The workplace reaction has already been uneasy, with reports of employee backlash and privacy concerns over a mandatory system that records activity on work devices. The larger issue reaches beyond Menlo Park and into a national debate over how far employers and AI companies can go in collecting labor data for commercial training. If mouse movements, keystrokes and screen images become ordinary inputs for AI development, the line between productivity measurement and surveillance could blur fast, especially in white-collar workplaces where employees may be told the data is limited, even as the monitoring grows more invasive.