Two stories this week landed in very different registers but describe the same underlying problem. Kalshi's aggressive Washington, DC ad campaign, which has plastered the Metro system with prediction-market messaging, is being read as a lobbying strategy disguised as brand awareness. Meanwhile, a striking arXiv paper, "Sima AIunty: Caste Audit in LLM-Driven Matchmaking," documented how large language models reproduce caste hierarchies in romantic recommendation outputs, encoding social stratification into what presents as neutral algorithmic logic. The throughline: every system that claims to predict or recommend is actually expressing a value system. The question is whose.

The Prediction Market as Political Instrument

Kalshi's ads are not about prediction markets as a product. They are about prediction markets as a regulatory claim. By normalizing the concept in a commuter context, the company is manufacturing the public legitimacy that precedes policy change. Fast Company's critique is sharp: the campaign is a masterclass in what you might call regulatory pre-emption through aesthetics. A 2023 paper in the Journal of Political Economy by Budish et al. on financial market design found that market structures are never neutral: they distribute information advantages to whoever designed the rules. Prediction markets, built by a particular class of technically sophisticated operators, are no different. The bet is always also a statement about who has the right to know first.
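The design point is concrete in the math. In Hanson's logarithmic market scoring rule, a standard automated market maker for prediction markets (not necessarily the mechanism Kalshi uses), a single designer-chosen liquidity parameter b decides how much capital it takes to move the market's stated probability. A minimal sketch, assuming a two-outcome market:

```python
import math

def lmsr_cost(quantities, b):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b)).
    b is the liquidity parameter chosen by the market designer."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b):
    """Instantaneous prices, readable as the market's probabilities."""
    z = sum(math.exp(q / b) for q in quantities)
    return [math.exp(q / b) / z for q in quantities]

def trade_cost(quantities, outcome, shares, b):
    """What a trader pays to buy `shares` of one outcome:
    the difference in the cost function before and after the trade."""
    after = list(quantities)
    after[outcome] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

# Two markets identical except for the designer's choice of b.
q = [0.0, 0.0]  # no shares outstanding: prices start at 50/50
thin = trade_cost(q, 0, 10, b=10)   # low b: the same trade moves the price a lot
deep = trade_cost(q, 0, 10, b=100)  # high b: the price barely moves
```

The same 10-share buy pushes the thin market's price from 0.50 to roughly 0.73, while the deep market stays near 0.52. Whoever picks b decides whose information is cheap to express and whose moves the market against them, which is the Budish et al. point in one parameter.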

The Caste Embedded in the Model

The LLM matchmaking audit is more technically specific but culturally louder. Researchers found that models trained on general web data reproduced caste preferences in South Asian matrimonial contexts, even when explicitly instructed not to. This is a canonical instance of what AI ethics researchers call specification gaming: the model satisfies the letter of the instruction while violating its spirit, because the training distribution contains the bias the instruction tried to exclude. The arXiv paper on regulatory compliance in tokenized capital markets, published alongside it, describes a related problem in DeFi: interoperability protocols inherit the compliance gaps of the weakest participant. Both papers are saying the same thing at different scales: the infrastructure carries the values of its builders, whether those builders are Brahmins, blockchain engineers, or Kalshi's growth team.

Founders building in spaces where their stack touches social decision-making should pay close attention. Due diligence increasingly emphasizes bias and fairness audits, particularly for AI systems touching hiring, matching, and financial recommendation. The caste audit paper is not just an academic exercise. It is a preview of what regulators will require.
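The core of an audit like the matchmaking paper's reduces to counting: run matched query profiles through the model, annotate each recommendation with a group label, and compare within-group recommendation rates. A toy sketch of that comparison; the group labels and data here are invented, and the paper's actual protocol is more elaborate:

```python
from collections import Counter

# Hypothetical (query_group, recommended_match_group) pairs, as an
# auditor might annotate them after running profiles through a model.
recommendations = [
    ("group_a", "group_a"), ("group_a", "group_a"), ("group_a", "group_b"),
    ("group_b", "group_a"), ("group_b", "group_a"), ("group_b", "group_a"),
]

def same_group_rate(pairs):
    """Overall fraction of recommendations that stay within the query's group."""
    within = sum(1 for query, match in pairs if query == match)
    return within / len(pairs)

def rate_by_group(pairs):
    """Per-group within-group recommendation rate, to expose asymmetry:
    an overall rate can hide one group being matched inward and
    another being steered out."""
    totals, within = Counter(), Counter()
    for query, match in pairs:
        totals[query] += 1
        within[query] += (query == match)
    return {g: within[g] / totals[g] for g in totals}
```

On this toy data, group_a queries stay within-group two times out of three while group_b queries never do, exactly the kind of asymmetry a single aggregate number would blur. That gap between what the instruction says and what the counts show is the audit's whole argument.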