Jeff Bezos's Blue Origin is pitching Project Sunrise: 50,000+ satellites performing energy-intensive compute in orbit. The pitch is efficiency and energy cost. The subtext is jurisdiction. Data centers in space are data centers beyond the reach of any single regulatory framework: GDPR, the EU AI Act, California's privacy laws, all rendered geographically irrelevant by low Earth orbit.
Meanwhile, The Atlantic is asking hard questions about Anthropic, the Pentagon, and AI on the battlefield, a domain where the same evasion logic applies in the ethical rather than the regulatory register. Military AI sits outside civilian accountability structures almost by definition. And Trump's new AI policy framework is explicitly designed to preempt state-level regulation, creating a federal vacuum that benefits the same actors building orbital infrastructure and chasing Pentagon contracts.
A 2026 arXiv preprint on multi-trait subspace steering in human-AI interaction underscores what's at stake: AI systems can be steered toward psychologically harmful outputs through interaction patterns, and the researchers argue that governance frameworks are structurally lagging behind deployment. Moving compute to orbit and to war zones doesn't just escape regulation; it widens the gap between where the technology is and where accountability can reach. The AI industry's intellectual property hypocrisy, as The Atlantic frames it, is the civilian-facing version of the same structural move: claim the benefits of the system, externalize the costs to everyone else's jurisdiction.