This week handed us two surveillance stories that the news cycle treated as separate beats. They are not. Together they sketch the full architecture of a world in which being watched is no longer an exception, a crime, or a dystopian premise. It is simply the condition of participation.
From State Spyware to Corporate Keystroke Logging
TechCrunch reports that the UK's cybersecurity chief confirmed that 100 countries now possess commercial spyware capable of hacking phones. The same week, The Verge broke the news that Meta is installing monitoring software on employee computers, collecting behavioral data to train its AI agents. The differences between these two stories come down to jurisdiction and branding. The mechanism is identical: observe human behavior at granular resolution, extract patterns, use the patterns for power. A 2025 arXiv paper on regulating artificial intimacy by Fraser, Szczuka, and Ciriello notes that surveillance systems tend to normalize fastest when they arrive through channels coded as helpful rather than threatening. Meta's framing is productivity. The spyware operators' framing is security. Neither asks for consent in any meaningful sense.
The Governance Gap Is the Product
The UK government's warning lands in a context where the businesses being warned are simultaneously running their own surveillance regimes on their own workers. There is no longer a clean separation between corporate and state surveillance. They share infrastructure, often share personnel, and share the same epistemological ambition: predict behavior before it happens. The OpenAI-Infosys deal announced this week, covering workflow automation and AI deployment, sits on the same continuum. Every system that learns from human behavior in order to automate it is, structurally, a surveillance system with a productivity dashboard glued to the front. The watched employee, the hacked dissident, the automated workflow: these are three points on the same graph.