Ghostwriting, AI, and Who Earns the Byline
The Atlantic defends ghostwriting as craft. Academic AI review is reshaping who gets credit for ideas. The byline is having an identity crisis.
Research papers, university trends, and academic insights that explain what everyone else is reporting.
55 articles
Countries are banning social media for kids while new research on metacognition reveals why children's self-monitoring breaks down in high-stimulation environments.
From Hilma af Klint's feminist afterlife to BLK-Assist's AI fine-tuning framework, 2026 is renegotiating what an artist's name means after death.
Arcee's 26-person team and Eclipse's $1.3B fund reveal two incompatible theories of who gets to build the AI future.
Kids monetized on Instagram and students shaped by AI tutors share the same problem: someone else owns the record of their becoming.
Anthropic's Mythos model promises to defend critical infrastructure while Iranian hackers escalate attacks. The same tech is both the threat and the cure.
AI tutoring systems replicate human learning rates at scale. Meanwhile, AI is displacing the workers who never went to college. Education and erasure, same engine.
Sharenting children for profit and abusing AI chatbots with slurs share an uncomfortable logic: consent is optional when the subject can't fight back.
Two arXiv papers this week expose a crisis at the core of AI deployment: we are measuring the wrong things, and the models know it.
Spain's Xoople wants to map the entire Earth for AI. Meanwhile AI is mapping bacterial resistance. Scale is the new frontier, and the funding follows.
Artemis II just launched humans toward the Moon. Trump is cutting NASA's budget. These facts belong in the same sentence.
Artemis II heads to the Moon as Hormuz traffic hits war-era highs. Two choke points. One question about who controls shared infrastructure.
Utah lets chatbots prescribe psychiatric meds. Researchers find LLMs have emotional states. This is not a coincidence to ignore.
Google's prompt-directed avatars, Iran's Lego propaganda bots, and a new paper on LLM emotion all point to the same collapse: performed sincerity is now fully automated.
From exposed passport scans to ICE spyware, the human body is now the least secure endpoint in any network.
Flipboard's new Surf app, Le Labo's 551-page book, and LLM research on objective drift all argue the same thing: attention needs an editor, not an algorithm.
OpenAI's cash-burn problem and the critical minerals crisis powering AI data centers are the same supply chain story told from opposite ends.
Kamrooz Aram's paintings loosen the modernist grid. LLM agents drift from objectives. Microsoft's AI strategy drops the pretense. Same move.
New research shows LLMs have measurable emotional states that affect their outputs. The hiring bias data makes this deeply inconvenient.
Meta and Google lost jury verdicts on addiction. Real Housewives turns 20. The same behavioral engineering is on trial in both rooms.
Kalshi's DC ad blitz and an academic audit of LLM matchmaking expose how prediction and recommendation systems encode the values of the people who build them.
YouTube's AI slop epidemic for children and academic research on AI in education reveal a curation crisis that institutions are not equipped to solve.
From Hasbro's ransomware to Mercor's LiteLLM exploit, the attack surface now runs through every layer of consumer culture.
Gas at $4 a gallon, stocks swinging on ceasefire rumors: the Iran war has turned geopolitical anxiety into the market's primary content feed.
Pompeii's incense study and the New Museum's post-human show both ask: what does a civilization smell like from the ruins?
Runway launching a $10M fund for AI video startups signals a new era where AI tools companies become their own venture arms.
Qodo raised $70M to verify AI-generated code. An arXiv paper on AI agent safety launched the same week. The trust deficit is now a market.
New Nature data maps exactly how and when motherhood derails academic careers. The findings rhyme uncomfortably with what's happening across every creative industry.
UK museums hold 260,000 human remains from colonies. A New York museum sits on Underground Railroad history. The institution as archive of violence.
Nature quantifies how motherhood derails academic careers. YC rewards founders who move fast. The same system, different labels.
A sunken Soviet sub leaks radiation. Iran's nuclear sites are under attack. The grid needs fission by 2035. Nuclear is everywhere.
Whoop wants your mom, Physical Intelligence wants a billion dollars, and Nature wants you to catch lung cancer early. The body is the new grid.
OpenAI killed Sora and Nature published data on motherhood derailing academic careers. Both stories are about what gets discontinued when it stops being convenient.
Google's Gemini migration tools, Wikipedia's AI ban, and memetic drift research reveal who really owns your digital identity.
From Anthropic's Pentagon injunction to Wikipedia's AI ban, the institutions built to hold AI accountable are improvising in real time.
Jury Duty's return, BTS's comeback, and the ARC-AGI-3 benchmark share one logic: authenticity is now a performance you have to earn back.
Conntour's $7M raise to build natural-language search for security cameras arrives exactly when leaked iPhone hacking tools remind us surveillance cuts both ways.
Google's memory algorithm breakthrough, tanking chip stocks, and a senator's data center tax proposal form a triangle around the real cost of AI infrastructure.
The New Yorker's weather app critique and new AI memory research expose the same design failure: systems that hide their own uncertainty to appear more confident.
Mirage raises $75M for AI video tools while academics ask whether LLMs can actually reason about themselves. The mirror has a funding round.
Academics warn that GenAI is inside the peer review process while platforms optimize AI for generative search. The epistemic stack is being rebuilt from below.
From Grammarly impersonating journalists to AI Personality of the Year contests, the question of who owns a digital self is getting urgent.
Delve's alleged fake compliance scandal and war propaganda share the same skeleton: institutions that manufacture the appearance of accountability.
Twitter turns 20 as nuclear clocks near reality — two timekeeping technologies that will define how civilization measures its own mistakes.
Fusion startups, Nvidia's $1 trillion bet, and mini-magnets from Nature: the gap between promise and physics is where the money lives.
New academic research on AI psychological manipulation arrives just as Kalshi gets banned and Pinterest's CEO compares social media to tobacco — the regulatory logic is converging.
From a French naval officer's fitness tracker to Sony's AI-imagined frames, the body keeps escaping the systems designed to contain it.
Continually self-improving AI and deliberately slow-growth brands are the same contrarian bet — that restraint is the new competitive moat.
Blue Origin's space data centers and the AI military-industrial complex share the same evasion logic — put the infrastructure where the rules don't reach.
From a French naval officer's Strava run to WordPress AI agents, the architecture of exposure is the same everywhere.
Continually self-improving AI and WordPress's autonomous publishing agents raise the same question: who is responsible for what the system decides to do next?
Scientists can't get a laugh; Julio Torres reframes color theory as comedy — the asymmetry of who earns the right to be absurd.
The Pentagon wants compliant AI. Trad-wife culture wants compliant women. The design brief is disturbingly identical.
Peter Halley's 'crisis in geometry' maps onto a 2026 arXiv paper on non-Euclidean AI reasoning in ways that are too precise to ignore.
From Haidilao's rogue dancing robot to Bezos's stair-climbing acquisition, embodied AI is having its most chaotic week yet.