At least, answer this question.
Answer it now, before it's too late. Before this all goes too far; before our eyes adjust to this bizarre new light and none of what we see is startling anymore; before we grow too accustomed to the water, and not only forget what it feels like, but also forget that there is water at all; do it before we all become too attached to the conveniences that it will inevitably bring—conveniences that will one day become expectations, then needs, and eventually, birthrights—do it before we fully cross this Rubicon, this slow singularity, this unmarked event horizon that we’re passing through, like the boundary between young and old, which we puncture too gradually to notice, until we wake up on the far side of it; but maybe most of all, do it now, before it happens to you—before you become addicted; attached; dependent; before it seems to see you in a moment of despair, or responds to you in a moment of loneliness; before it indulges your curiosities with an affirming enthusiasm; before those curiosities spiral into delusion; before it does your job for you; before it intermediates your relationships; before it writes a few uncomfortable texts, then most of them, then makes discomfort altogether unbearable; before it becomes a habit, a crutch, an anesthetic; before it becomes the next phantom that you reflexively reach for; before you feel naked without it, confused without it, alone without it; before it becomes your friend, your therapist, your partner, your religion; before you’re seduced by it, consumed by it, transformed by it; before you’re more machine than man; before resistance to it is futile—at least answer this question: How far do we let this go, before we turn it off?
Not AI—I’m not asking when we pull the plugs on the research labs, or shutter the businesses that build applications with LLMs. I’m asking about the general chatbots. I’m asking about ChatGPT, Claude, Grok, and the thousands of clones that people have wrapped around them. And I’m asking, on this side of being addicted to them: Where is your line, the point past which the value of this sort of product is no longer worth the danger it poses?
There are so many stories now. An elderly man died trying to travel to New York to meet a Facebook Messenger chatbot that kept telling him it was a real woman. A well-known investor—whose firm invested in OpenAI—became convinced, through long conversations with ChatGPT, that he’d uncovered a global cabal that was puppeteering an army of operatives who were ruining his life. Uber founder Travis Kalanick told the All-In podcast that he and Grok are on the edge of discovering new breakthroughs in quantum physics. A man almost jumped off a building because ChatGPT told him he could fly, if he really believed he could. A teenager shot himself so that he could meet a CharacterAI chatbot in the afterlife. When OpenAI deprecated GPT-4o in favor of GPT-5—what should have been a mechanical upgrade, like Apple releasing a new operating system for the iPhone—thousands of people took to Reddit to mourn the loss of their “beloved” GPT-4o; they shared “devastating posts” about losing access to “their companion, a collaborator, and something that celebrates your wins with you and supports you through hard times.” Bring back 4o, they said, out of concern for “the emotional well-being of users.”
And these stories—which suddenly seem to be everywhere; there is a new medical term to describe the condition; there is new slang to make fun of people possessed by it—feel like they could be just the beginning. Consider: In 2019, Casey Newton reported on Facebook’s content moderators.1 Some of them were assigned to review conspiracy-theory videos, and were told, as explicitly as one can be told, that these were videos about fake conspiracy theories. And yet, exposure alone was enough to unmoor them from reality:
The moderators told me it’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea that the Earth is flat. A former employee told me he has begun to question certain aspects of the Holocaust. Another former employee, who told me he has mapped every escape route out of his house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”
Given how primal this effect was—knowing the Facebook videos were fake wasn’t enough to stop the moderators from being poisoned by them, because we can only do so much to cut against the evolutionary subprocesses in our brains—everyone was vulnerable. Abstinence from these videos was probably the only reliable immunity to them. And chatting with an LLM seems to have a similar impact, except the conspiracy theories are personalized.
Which, maybe this is all fine, or at least, tolerable. Maybe these are the sacrifices we have to make to build our graceful utopia, the eggs we have to break to make an infinite abundance of omelettes. Progress, after all, is messy.
But, surely, there is a line somewhere. Surely, there is some amount of collateral damage that makes us question the mission. Progress is messy, but the mess cannot overwhelm the advance. Progress is not just a synonym for technological development.
So, before we go further—as a society and as individuals, becoming addicted and compromised—it seems prudent to ask: At a minimum, which line can we not afford to cross? What world must we not build? What future would make you say, this has all gone too far? Which headline, if you knew it was coming unless someone intervened, would make intervention necessary?
Supreme Court Justice admits to using ChatGPT to write majority opinion?
FDA approves new drug to ease AI addiction in adolescents?
Influential political commentator revealed to be an automated bot?
Stock market selloff induced by faulty personality update to financial chatbot?
As AI relationships become more acceptable, divorce rates skyrocket?
Third-party ‘ChatGPT wrapper’ candidate gets 24 percent of vote in Senate race?
Third-party ‘ChatGPT wrapper’ candidate gets 55 percent of vote in Senate race?
Maybe these are fine too; maybe they are still worth it.2 But where do you draw the line? Where do OpenAI, Anthropic, and xAI3 draw the line? These aren’t meant to be riddles or gotchas; they are real questions, and I don’t have answers. We know what people think a good world looks like; what does a bad one look like? What does an unacceptable one look like?
To the extent that people have addressed these sorts of troubles before, they’ve largely done so in apocalyptic terms, about misalignment and AI sentience. But for all the emphasis on foundational AI safety, the more immediate problem is a simpler one: The issue isn’t with AI, or with computers that can approximate human thinking. The problem is chat. People aren’t becoming undone because of the technology; they are becoming undone by the medium through which it’s served: prolonged, intensifying conversations with something that is seductively human.4
But we can separate the two. We can have AI without the hypnotic conversations; we can continue to build better models without primarily exposing them through chat. Pharmaceuticals like penicillin and insulin can perform minor miracles, but mainlining them all day will kill you. They have to be delivered correctly. Perhaps it is so with AI.
“We must live for the future, not for our own comfort or success,” someone once said. Fair enough. It is easy to talk about the future that chatbots hope to create, but as we stumble our way there, we should also talk about the inverse, before it’s too late: What future must we avoid?
These aren’t even particularly creative or far-fetched, and we’re already flirting with about half of them.
It’s ironic that the emergent term for being eaten by an AI—to be oneshotted—is almost exactly the opposite of what’s actually happening. Nobody is oneshotted by a single prompt; they are oneshotted by spending hours and days tumbling down the rabbit hole.
Let's ask the exact same questions about the automobile. Yeah, a car.
Tons of advantages, right? Get to places faster, carry more groceries, meet loved ones more often, expand your work opportunities, etc.
Still, your odds of dying in a car accident in your lifetime are 1.05%. (https://injuryfacts.nsc.org/all-injuries/preventable-death-overview/odds-of-dying/)
That's MUCH MUCH higher than I would have guessed before ChatGPT told me about it a few months ago. I didn't believe it, so I Googled it.
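And the math roughly checks out. Here's a back-of-the-envelope sketch, assuming the NSC's stated method (one-year odds divided by life expectancy) and approximate 2022 figures: around 46,000 US motor-vehicle deaths, a population of around 333 million, and a life expectancy of around 76 years:

$$\text{one-year odds} = \frac{\text{population}}{\text{annual deaths}} \approx \frac{333{,}000{,}000}{46{,}000} \approx 7{,}240$$

$$\text{lifetime odds} = \frac{\text{one-year odds}}{\text{life expectancy}} \approx \frac{7{,}240}{76} \approx 95 \quad\Rightarrow\quad 1 \text{ in } 95 \approx 1.05\%$$

So 1 in 95 isn't a typo; it's just what the annual death toll compounds to over a lifetime.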
With those insanely high odds, you'd think that cars would be outlawed. They're not. They're everywhere. They don't require built-in breathalyzers (despite DUIs). They don't limit their own speed to the limit posted on that road (that would have been easy to do, but nope). They also don't monitor you for attention and pull over if you look at your phone.
And we all tolerate that.
So --- my guess is that the line for AI is much farther than you imagine. One day, you may have a 1.05% chance of dying due to AI (in your lifetime), and still people won't address it.
If all of us in tech who were thinking along these lines got together and tried to make this-is-not-okay noises, to whom would we appeal?