45 Comments

Chris Chivetta:

This is something I’ve felt for a long time. For example, a recommendation engine isn’t inherently bad, but using it to keep people scrolling through videos for hours can be harmful. That’s why I think strong policy guardrails for these companies are necessary.

Benn Stancil:

Yeah, if nothing else, there are definitely going to be a lot of interesting lawsuits. I don't know if anything comes out of them, though there will probably be a bunch of fun discovery documents.

Meg Bear:

Important questions. I'm not sure chat is the only troubling modality, though; I think the hyper-personalized filter bubble situation on TikTok/Instagram can get you to the same end result.

Benn Stancil:

oh, yeah, for sure. I've said this a few times before (https://x.com/bennstancil/status/1943009941301670177), but I think social media is one of the most damaging things humanity has ever built.

Laurie:

Over the past several months I’ve gone from being a pretty big proponent of chatbots to feeling the same as you. This article is the one where something really snapped in me and I realized this is more than concerning, it’s absolutely not ok. https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/

Benn Stancil:

"You cannot make an image of Taylor Swift topless, but you can make an image of Taylor Swift holding an enormous fish" is quite the statement in a policy document.

Laurie:

Some much-needed comic relief, at least, in such an otherwise bleak document!

Jose Nilo:

Let's hope we'll just get tired.

Yoni Leitersdorf:

Let's ask the exact same questions about the automobile. Yeah - a car.

Tons of advantages, right? Get to places faster, carry more groceries, meet loved ones more often, expand your work opportunities, etc.

Still, your odds of dying in a car accident in your lifetime are 1.05%. (https://injuryfacts.nsc.org/all-injuries/preventable-death-overview/odds-of-dying/)

That's MUCH MUCH higher than I would have guessed before ChatGPT told me about it a few months ago. I didn't believe it, so I Googled it.

With those insanely high odds, you'd think that cars would be outlawed. They're not. They're everywhere. They don't require built-in breathalyzers (DUIs). They don't limit their own speed to the limit posted on the road (that would have been easy to do, but nope). They also don't monitor you for attention and pull over if you look at your phone.

And we all tolerate that.

So, my guess is that the line for AI is much farther out than you imagine. One day, you may have a 1.05% chance of dying due to AI (in your lifetime), and still people won't address it.

Benn Stancil:

Sureee, though I think I'd see that analogy differently, for a few reasons:

- "How likely are you to die?" feels like a very narrow concern, for both things. We don't put restrictions on gambling because it might kill you; we do it because it can have a whole host of bad effects. I don't think ChatGPT will outright kill very many people, but I think it could still do a whole bunch of perverse and unintended things that I'd label as bad.

- But even just looking at people dying in cars, I think the reason we don't have the sorts of limits you describe is that we didn't have them in the first place. We built cars, stuff changed, and we've slowly backed into norms and regulations to make them safer. But those things had to be added after the fact, when people already had expectations for what should be allowed. I don't think we tolerate this stuff because it's on net tolerable; I think we tolerate it because fixing it would require taking something away. But if cars had always had hard speed limits on them, or more size restrictions, I think we'd be fine with that. (Nobody, for instance, is upset that you can't drive an F1 car around a city.)

- And that's ultimately my question, I think. If we could've looked into the future in 1950 and seen modern cars and cell phones and all of that, I think people would've said, "that seems bad, maybe we should do stuff to make sure that's not what happens." But we didn't, so we got here, and now we like our cars and phones and don't want to change it.

- But now it's that moment for AI. And in that moment, before we like our cars and phones and don't want to give them up, it seems worthwhile to ask what future we don't want. Because by the time we get there, it'll be too late to change it.

Yoni Leitersdorf:

I like that last point. The question is, how do you predict the future? Only a small number of people predicted the mobile phone decades before it became a reality. I'm not sure anyone predicted its disastrous impact on driving.

Can we predict the bad things AI will cause, the propensity for those to happen, and the potential impact, and decide what to do about them a priori?

Not sure humans are good at it. Maybe AI can?

Benn Stancil:

Ah, yeah, I'm 100% with you on that; I don't think you really can. That's why I tried to frame it as "what potential future is bad?" Obviously, all of this is just some people asking questions on a random blog on the internet, but if it weren't, that's the thing I'd want AI companies to hold themselves accountable to: "A long time ago, we said that it'd be bad if these various things happened. So now that they're happening, I guess we have to do something about it?"

Yoni Leitersdorf:

I think it's unlikely for that to happen. They may say they want to, and maybe they even mean it. But, resources are limited, competition is fierce, capitalism is the major driver of innovation. If they must pick, they will choose progress over safety.

Look historically at all the issues mankind has inflicted upon itself. Many times, it was companies, doing something harmful, in order to meet their goals. Then it was governments, who reacted very late to the harm, who curbed it through regulation.

Too much regulation is bad. Too little, though, is probably worse.

While I was typing this comment, I let ChatGPT weigh in on our discussion here: https://chatgpt.com/share/689fb46f-0578-8011-a788-7c0417dc85c8

It was thoughtful, as expected. I do think "it" is optimistic though - it thinks we'll be able to avoid more harm than I think we can... and my LinkedIn tagline is Optimist...

Benn Stancil:

Yeah, I would not put the estimates of people acting nearly as high as it does. And even if those are roughly right, that still means we'd run into something like 10 of the 15.

That said, this did come out a few hours after I posted it. It's only triggered by the most extreme conversations, apparently, but it is a step in the direction of altering the chat paradigm a bit: https://x.com/AnthropicAI/status/1956441209964310583

Laurie:

I actually feel like if anything that might be a step in the wrong direction, because that’s not about human welfare, it’s about “model welfare” which is not a real thing and further anthropomorphizes large language models. I actually do have a negative reaction to the idea of people being abusive to bots because I have natural human empathy, but it seems like a strange thing to prioritize over the types of concerns you’ve highlighted in your post for example.

Yoni Leitersdorf:

That's a nice step, but both in the tweet and the related blog post they are vague on details. Let's see :)

Josh Oakhurst:

If all of us in tech who were thinking along these lines got together and tried to make this-is-not-okay noises, to whom would we appeal?

Benn Stancil:

On that, I have no idea. Right after I hit send on this, I regretted framing it as a "ban," because I don't think the solution here (assuming one is necessary) is regulation. Even if you could simply will some law into existence, that seems like both too blunt an instrument and one that lots of people would reflexively oppose. But I'm not sure what the alternative is? Broad public pressure and bad PR? This feels like it's happened in a few other places in tech, with some social media companies self-imposing health-related limits, companies getting called out for bad anti-patterns, and stuff like that. But I'm not sure even that would work here, because people really like the thing they're buying.

Josh Oakhurst:

Social media companies all shrugged off their limited bad PR. The Center For Humane Tech has largely been a failure. Sure, people hate Zuck and Co., but the thing to understand about these dang computers is that PEOPLE HAVE A CHEMICAL ADDICTION to them.

Lawfare, if there are any takers, is likely the only way that computer pushers could be forced to behave better, a la state lawsuits against Big Tobacco. There, billions of public health dollars (and deaths) were used to build decades-long cases. Smoking has gone down since then.

Computer addiction has been more pervasive and damaging to our society than tobacco was. It comes in many forms. Most people don't know they have it, but they may recognize it in others. We all have it; only the degree varies.

I don't think you were wrong to call for a ban. I liked that you spoke up forcefully on this topic. Honestly, Benn, more of us should.

Benn Stancil:

Thanks, I appreciate that. And I do wonder, 50 years from now, how much of this we'll look back at and say, wow, I can't believe we did that. The sort of tough answer is I...don't think we will? Just like I don't think we'll ever really say that about social media either?

Marco Roy:

Related question: when should the German people have stopped Hitler? Probably before he became "unstoppable" (if there is such a thing). Kinda like stopping a nuclear reaction before it reaches critical mass (let alone supercritical). Kinda like the hundredth monkey effect?

And what about tobacco companies? It would have been a lot easier to deal with them before they amassed a lot of wealth & power, lobbyists, etc.

Or when does/should someone snap out of the honeymoon phase and realize that they are dating (or worse, married to) a manipulative narcissistic sociopath?

It always seems so harmless at the beginning.

In the case of narcissists, I think the technical term is "love bombing" (and I guess Hitler's approach could somewhat be described in those words, or perhaps "patriotism/propaganda bombing"). In the case of tech, I think we call it "the hype cycle"/"peak of inflated expectations". Both seem to blind us to reality.

Maybe it would be a good idea to hit the "trough of disillusionment" as quickly as possible? Or basically, to come down from the collective high we keep pushing onto each other. But that would go against the spirit of making as much money as possible (because it seems like hype == money, and based on your previous posts, I think you'd agree).

It seems like there's no way to stop the wheel from turning.

Benn Stancil:

Yeah, I don't have a practical answer to how you slow it down. And I don't really think "slowing it down" is quite the right way to put it either, partly because there probably is a lot of good stuff that it can do, and partly because anyone who says we should slow down is immediately labeled an out-of-touch Luddite.

But to your point on the other examples, I do think you can at least say, "if this thing happens, we'd all agree that it's bad?" It's like talking to a friend and you saying "I think the person you're dating is a sociopath" and me saying "no no we are soul mates" and so you say "ok, fine, maybe, but if they do X, would you say they were a sociopath?" and I say "yes of course, X is sociopathic." And then two years later they do X and you say "ah ah, look! look what happened!" And I'd have a much harder time saying "no X is fine" than I would've if we'd never talked about it before.

Marco Roy:

Depends. Sometimes "you were right" (and "I was wrong", by extension) seems to be one of the most difficult things for humans to say. So much so that they will often choose denial instead.

That's why it's so hard to get people out of cults: primarily because they are unable or unwilling to face the fact that they were wrong, and someone else was right.

Benn Stancil:

For sure, and that's already happened plenty with OpenAI stuff about safety and the non-profit thing. But hey, if we're going to get eaten by AI overlords, we might as well make it a little awkward for them when they do.

Susan Corbin:

I used ChatGPT to plan a four-day London trip with my family and enjoyed both the chatting and the trip. However, I knew that when it complimented me on something I said, it was doing what it had been programmed to say. I also knew that I had to double-check what it told me, because I knew it could be lying to me.

Given your examples of the harm that has been done by chatbots, it seems like most of those harms could be alleviated if people had better community contacts and a lot of education.

Benn Stancil:

Yeah, it certainly seems like some people are more vulnerable to it (or, maybe more precisely, are in situations that make them more vulnerable). But I would guess that that's a very large percentage of people?

One of the questions I've had about this is, is getting eaten by this a Darwin award sort of thing, where, if it happens to you, well, you should've known better? And I think I land pretty firmly on no? Like, sure, there are ways to resist it, just like there are ways to resist other addictions like drinking and gambling. But those things play off such base desires that, even if they are resistible, it seems socially responsible to limit how enticing the people selling those things can make them. I think my view of this is more or less the same, where, sure, lots of people will be able to say no, but it's hard to blame people for succumbing to such profound temptations (especially when, as is the case here, there is no warning label, and, if anything, we've been told that, for the sake of not falling behind, it's necessary to use AI *more*).

Laurie:

This is the thing that’s surprised me. I didn’t realize how many people would be so vulnerable to this and how quickly and extremely it would happen.

Benn Stancil:

Yeah, and sort of seemingly all at once? I'm sure that part's not quite true, but it does seem like there wasn't much and then there was a lot (and 4o might've been the problem?)

Laurie:

Yeah I think the extreme sycophancy really spoke to people! (Literally)

Susan Corbin:

I agree that this is a societal problem. We don't teach people to be wary of chatbots. The companies make them incredibly tempting. And the epidemic loneliness in this culture is heartbreaking.

Benn Stancil:

it's ok, mark zuckerberg will sell us 12 friends https://www.youtube.com/shorts/xrtOMD6LA3I

Susan Corbin:

Aww, so kind. If only.

Anastasia Borovykh:

I think the only moment when we may think to ban it is when a truly global “catastrophe” event happens. The internet becomes unusable due to an overload of garbage information, websites with logins get hacked too frequently, too many fake profiles get created on social media, too many identity-stealing phone calls enlisting people in subscriptions they don’t want, bank account fraud, and so on. I don’t think it’s too far out that this will be a possibility; today alone I received 2 scam text messages 🤣

Benn Stancil:

Only somewhat related, but I found this post to be a kind of interesting counterpoint to this. It framed the entire internet as a kind of single meta-product, one that is going through its own lifecycle of decay. I'd always thought of the internet as more of an organic economy that goes through perpetual cycles of getting worse and improving (and I still think I do?), but it was interesting to see the argument that it might be more like a regular product.

https://paulkrugman.substack.com/p/the-general-theory-of-enshittification

Anastasia Borovykh:

Ah, interesting post. Thank you for sharing! It could very well be that all will just accept this “enshittification”.

James Borden:

Or the general-use AI companies could sell no general-use product at all, and instead license the technology to domain specialists who have a good sense of what it could really do. Medidata AI, which I found on LinkedIn, is an example of one such company; they were careful to collaborate with an actual domain expert on their product.

James Borden:

(Then we have the problem of people getting their fix from Chinese companies)

James Borden:

Even if there were no consumer uses for LLMs the "chatGPT wrapper" candidate could still happen because a commercial firm could sell software that wrote speeches and commercials. Then presumably an actual person would have to be articulate when meeting with actual voters.

This year at Wimbledon I asked an AI a question for the first time ("How long was the Alcaraz-Fritz match?"), so I may be remote from this problem. I think emotional dependence on chatbots and emotional dependence on social media may be related, although the communications on social media are presumably from actual people. Emily Bender presumably has research at her disposal showing that we are predisposed to think that anything that uses language is a person. We could possibly ban all marketing of these things that implies they are people or that they engage in real social interactions, such as therapy, with the users.

Benn Stancil:

I don't have any particular evidence for this, but I'm increasingly a believer that that's the issue with these things: they seem so human. Chatting is such a human activity - it's emotional, it's connective, there's all this subtle stuff that goes on in it that seems impossible to mechanize - and I'm sure we have all sorts of cultural and evolutionary attachments to it.

Would we have some of these problems if you couldn't chat with AI, but could just ask it questions? Or if it didn't chat the way we did, and instead was kind of stilted and artificial? I have no idea, but I don't think so?

(And yeah, to your other point here and in that other thread, plenty of people could build chatbots on top of LLM APIs. So I have no idea what the solution to any of this is; I doubt it's policy, and I very much doubt it's policy that is specific to chatGPT in any way.)

Josh Oakhurst:

It won't happen, but guardrails that turned these into magic-eight-ball, sometimes-answer engines, without the conversational tone and chat nature, would go a long way toward stopping the emotional bonds being formed with the computer.

Benn Stancil:

Yeah, I was just talking with someone about the Google "AI mode" answers, and how that doesn't seem to have the same effect that ChatGPT does. It's the same product, essentially, but by putting it in the context of search, and making it less chatty, a bunch of these problems might go away.

Billy Hansen:

Great stuff, thank you.

DJ:

The first and only “AI safety” law will be to shield Elon, Sam and Zuck from liability. This will be followed by a gigantic tax cut to “incentivize innovation.”

Benn Stancil:

I do wonder sometimes if there will be some Section 230-type law for AI, where the makers of AI products get shielded from what their products do. Somehow, it seems obvious that they both should and shouldn't be.

Like, if someone uses Claude Code and it introduces a bug that causes harm, is Anthropic liable? It seems like most people would say no, of course not, the company running the code should've reviewed it and fixed it.

So should OpenAI be liable for what it says on ChatGPT? On one hand, it feels sorta like it should be, if it's telling people to do bad stuff. On the other hand, it's not that obvious to me why this is different than the Claude Code question.
