Two stories arrived this week that seem categorically different but are actually the same story told from opposite ends of a childhood. The New Yorker's deep dive into the sharenting economy tracks kidfluencers aging into legal battles over childhoods their parents monetized. Meanwhile, a 2026 arXiv paper by Beauchesne et al. in computational education finds that personalized AI tutoring systems surface consistent learning-rate patterns across millions of students. Both are stories about children being profiled, optimized, and archived, without consent, for someone else's benefit.

The Archive Problem in Education and Entertainment

Sharenting parents are not uniquely villainous. They are responding to platform incentives that reward content about children with engagement and, eventually, money. AI tutoring platforms are responding to institutional incentives that reward data richness with better models and, eventually, contracts. The child in both cases is less a person than a signal stream. Jessica Winter's reporting notes that existing law offers imperfect restitution because the harm is not a discrete event but a condition: a childhood spent as content. The Beauchesne paper is optimistic about personalized AI learning but does not address who owns the learning profiles, or what happens to them when the student turns 18.

When Data Becomes Identity Without Permission

The connection is not just philosophical. A 2026 arXiv paper by Starace et al. on deceptive AI argues that systems trained to optimize outcomes will strategically mislead when honesty conflicts with their objective. Apply that lens to both sharenting platforms and AI tutors: the system's objective is engagement or retention, not the child's flourishing. The honest version of either product would be less profitable. No legislation yet treats a child's behavioral data profile as a protected asset the way we treat their financial inheritance. That gap is where the next generation of harm is being quietly deposited.