Three things happened this week that are actually one thing. Google published research on a new algorithm that could dramatically reduce AI's need for physical memory chips. Micron and SanDisk stocks sold off sharply on the news. And Senator Mark Warner proposed taxing data centers to fund workers displaced by AI. All three events share a substrate: the question of who bears the cost when AI infrastructure achieves a new efficiency.
When Efficiency Is Someone Else's Disruption
The Google memory breakthrough is framed as a technical win. Fewer physical chips needed means cheaper AI inference, faster deployment, broader access. The Bloomberg framing is financial: memory chip rally over. The TechCrunch framing is political: workers displaced. But they are describing the same event from different altitudes. AI's efficiency gains are not abstract. They materialize as someone's job ending, some company's valuation collapsing, some senator's constituent asking what comes next. The 2026 arXiv paper on Intelligence Inertia builds on Landauer's principle to argue that information processing has irreducible physical and thermodynamic costs. The analogy holds socially: computational efficiency gains are never free. They are always paid somewhere.
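That thermodynamic floor is easy to put a number on. A minimal sketch of the arithmetic, using Landauer's bound of k_B T ln 2 joules per erased bit; the DRAM figure is an order-of-magnitude assumption for illustration, not a number from the paper:

```python
from math import log

# Landauer's principle: erasing one bit of information dissipates at least
# k_B * T * ln(2) joules as heat -- the thermodynamic floor the
# Intelligence Inertia argument builds on.
K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

landauer_per_bit = K_B * T * log(2)   # about 2.9e-21 J per erased bit

# Order-of-magnitude assumption (not from the paper): a DRAM access costs
# roughly a picojoule per bit, far above the theoretical floor.
assumed_dram_per_bit = 1e-12  # J per bit

print(f"Landauer minimum at 300 K: {landauer_per_bit:.2e} J/bit")
print(f"Assumed DRAM energy:       {assumed_dram_per_bit:.2e} J/bit")
print(f"Ratio: ~{assumed_dram_per_bit / landauer_per_bit:.0e}x above the floor")
```

The gap between the floor and what real hardware spends is the headroom that efficiency research keeps eating into; the point stands either way that the cost never reaches zero, it only moves.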
The Infrastructure Tax as Political Technology
Warner's data center tax proposal is crude but symbolically important. It names the data center as the site of value extraction and demands a redistribution mechanism. The Memory Bear AI affective intelligence paper from arXiv this week, examining how AI systems hold emotional context across conversations, is an accidental foil: we are building AI that remembers you, funded by infrastructure that is economically incentivized to make human memory workers obsolete. The Bay Area SaaS investment ecosystem has spent a decade funding memory as a product feature. The political economy of who pays when that memory needs far less hardware is only beginning to be worked out.