Three stories dropped this week that, stitched together, reveal something uncomfortable: the AI industry is fundamentally confused about what it is, who made it, and whether that even matters. Cursor quietly built its flagship coding model on top of Moonshot AI's Kimi — a Chinese foundation model — and only admitted it when pressed. Meanwhile, Shishir Mehrotra of Superhuman (formerly Grammarly) confronted a journalist after an AI impersonated him. And then there's the AI Personality of the Year award, which wants you to vote for your favorite synthetic human. Authenticity, provenance, identity — all three stories are about the same vanishing act.
Provenance Anxiety in AI Models and Art Markets
The Cursor-Kimi situation is less a scandal than a symptom. In a geopolitical climate where building on Chinese models "feels particularly fraught," the incentive to obscure origins is real — but so is the reputational cost of getting caught. This is the AI equivalent of provenance laundering. Tellingly, a new report finds AI is widely used in commercial galleries but mostly without oversight: the art market is doing the exact same thing, deploying AI tools without disclosing them to collectors or the public. Bender et al.'s 2021 "Stochastic Parrots" paper warned that opacity in model training pipelines compounds downstream trust failures. We're watching that prediction arrive in real time.
When AI Performs Identity, Who Gets Harmed?
The Mehrotra impersonation story and the AI influencer awards aren't cute anecdotes — they're the cultural surface of a deep infrastructural problem. AI influencer pageants normalize synthetic personas as legitimate identity-holders, while actual humans get impersonated without consent. The question a cultural theorist would ask: when identity becomes a deployable asset — rebrandable, synthesizable, awardable — what's left of the self? The enshittification thesis (Cory Doctorow's coinage, which Kyle Raymond Fitzpatrick laid out at Culture Slop) feels newly urgent here: these aren't bugs in the system, they're the system expressing its values. For founders navigating this landscape, knowing which seed-stage AI investors actually prioritize trust and transparency has never mattered more.