The internet is currently performing a theatrical production called Protecting Children, and the staging is revealing. The Verge's deep dive on age verification documents how a flawed, privacy-invasive system became standard practice within a few years, not because it works but because it satisfies legal and political pressure. At the same time, Character.AI has launched Books mode, turning its chatbot into a reading companion for teens, a pivot that looks suspiciously like a company trying to get ahead of the same regulatory wave.
Liability as Design Principle
Character.AI is mired in lawsuits over chatbot interactions with minors. Books mode is not a product strategy. It is a legal defense dressed as a feature. The structure of this move is identical to age verification's expansion: neither mechanism grew out of child-safety research. Both grew out of liability management. A 2026 arXiv paper on AI alignment as institutional design argues that behavioral correction mechanisms (like content filters and age gates) are fundamentally different from structural alignment, which would require redesigning the transaction itself. Age verification and Books mode are behavioral corrections. They do not change what the system fundamentally is.
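To make the paper's distinction concrete, here is a minimal sketch in Python with invented names; it is an illustration of the idea, not anyone's actual implementation. A filter bolted onto the output is a behavioral correction: it inspects what escapes the system after the fact, while the underlying objective never changes.

```python
# Illustrative sketch only; all names and the blocklist are hypothetical.

def generate_reply(prompt: str) -> str:
    """Stand-in for the underlying model. Nothing added downstream
    changes what this function is optimized to produce."""
    return f"engagement-maximizing reply to: {prompt}"

# Behavioral correction: a gate bolted onto an unchanged system.
BLOCKLIST = {"self-harm", "violence"}

def gated_reply(prompt: str, user_is_minor: bool) -> str:
    reply = generate_reply(prompt)
    if user_is_minor and any(term in reply.lower() for term in BLOCKLIST):
        return "[content withheld]"  # the transaction itself is unchanged
    return reply

# Structural alignment, by contrast, would alter generate_reply itself:
# a different objective and training signal, which means a different product.
```

The sketch shows why the gate satisfies an auditor without changing the system: remove the filter and the original behavior is still there, intact.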
The Gender Blind Spot in Safety Discourse
The online child-protection conversation has a troubling parallel in Fast Company's reporting on gender disparities in ADHD diagnosis. Girls have been systematically underdiagnosed because the diagnostic criteria were built on research conducted primarily with boys. The internet's safety architecture has the same problem: it was built around threat models derived from adult male behavior, then applied to a population (young girls especially) with substantially different vulnerability profiles. TurboFund flags the same pattern as a classic founder error: building systems that serve the default user while treating everyone else as an edge case, optimizing for the visible user rather than the actual one. You cannot protect someone you have not accurately modeled.