60 Comments
Chris Chivetta:

This is something I’ve felt for a long time. For example, a recommendation engine isn’t inherently bad, but using it to keep people scrolling through videos for hours can be harmful. That’s why I think strong policy guardrails for these companies are necessary.

Benn Stancil:

Yeah, if nothing else, there are definitely going to be a lot of interesting lawsuits. I don't know if anything comes out of them, though there will probably be a bunch of fun discovery documents.

Meg Bear:

Important questions - not sure chat is the only troubling modality though, I think the hyper personalized filter bubble situation on TikTok/Instagram can get you to the same end result.

Benn Stancil:

oh, yeah, for sure. I've said this a few times before (https://x.com/bennstancil/status/1943009941301670177), but I think social media is one of the most damaging things humanity has ever built.

Jose Nilo:

Let's hope we'll just get tired.

Laurie:

Over the past several months I’ve gone from being a pretty big proponent of chatbots to feeling the same as you. This article is the one where something really snapped in me and I realized this is more than concerning, it’s absolutely not ok. https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/

Benn Stancil:

"You cannot make an image of Taylor Swift topless, but you can make an image of Taylor Swift holding an enormous fish" is quite the statement in a policy document.

Laurie:

Some much needed comic relief at least in such an otherwise bleak document!

Yoni Leitersdorf:

Let's ask the exact same questions about the automobile. Yeah - a car.

Tons of advantages, right? Get to places faster, carry more groceries, meet loved ones more often, expand your work opportunities, etc.

Still, your odds of dying in a car accident in your lifetime are 1.05%. (https://injuryfacts.nsc.org/all-injuries/preventable-death-overview/odds-of-dying/)

That's MUCH MUCH higher than I would have guessed before ChatGPT told me about it a few months ago. I didn't believe it, so I Googled it.

With those insanely high odds, you'd think that cars would be outlawed. They're not. They're everywhere. They don't require built-in breathalyzers (DUIs). They don't limit their own speed to the limit posted on the road (that would have been easy to do, but nope). They also don't monitor you for attention and pull over if you look at your phone.

And we all tolerate that.

So --- my guess is that the line for AI is much farther than you imagine. One day, you may have a 1.05% chance of dying due to AI (in your lifetime), and still people won't address it.

Benn Stancil:

Sureee, though I think I'd see that analogy differently, for a few reasons:

- "How likely are you to die?" feels like a very narrow concern, for both things. We don't put restrictions on gambling because it might kill you; we do it because it can have a whole host of bad effects. I don't think ChatGPT will outright kill very many people, but I think it could still do a whole bunch of perverse and unintended things that I'd label as bad.

- But even just looking at people dying in cars, I don't think we have those sorts of limits you describe because we didn't have them in the first place. We built cars, stuff changed, we've slowly backed into norms and regulations to make them safer. But those things had to be added after the fact, when people had expectations for what should be allowed. I don't think we tolerate this stuff because it's on net tolerable; I think we tolerate it because it would require taking something away to fix it. But if cars had always had hard speed limits on them, or more size restrictions, I think we'd be fine with that. (Nobody, for instance, is upset that you can't drive an F1 car around a city.)

- And that's ultimately my question, I think. If we could've looked into the future in 1950 and seen modern cars and cell phones and all of that, I think people would've said, "that seems bad, maybe we should do stuff to make sure that's not what happens." But we didn't, so we got here, and now we like our cars and phones and don't want to change it.

- But now it's that moment for AI. And in that moment, before we like our cars and phones and don't want to give them up, it seems worthwhile to ask what future we don't want. Because by the time we get there, it'll be too late to change it.

Yoni Leitersdorf:

I like that last point. Question is, how do you predict the future? Only a small number of people predicted the mobile phone decades before it became a reality. Not sure if anyone predicted its disastrous impact on driving.

Can we predict the bad things AI will cause, the propensity for those to happen, the potential impact, and decide what to do about them a priori?

Not sure humans are good at it. Maybe AI can?

Benn Stancil:

Ah, yeah, I'm 100% with you on that; I don't think you really can. That's why I tried to frame it as "what potential future is bad?" Obviously, all of this is just some people asking questions on a random blog on the internet, but if it weren't, that's the thing I'd want AI companies to hold themselves accountable to: "A long time ago, we said that it'd be bad if these various things happened. So now that they're happening, I guess we have to do something about it?"

Yoni Leitersdorf:

I think it's unlikely for that to happen. They may say they want to, and maybe they even mean it. But, resources are limited, competition is fierce, capitalism is the major driver of innovation. If they must pick, they will choose progress over safety.

Look historically at all the issues mankind has inflicted upon itself. Many times, it was companies, doing something harmful, in order to meet their goals. Then it was governments, who reacted very late to the harm, who curbed it through regulation.

Too much regulation is bad. Too little, though, is probably worse.

While I was typing this comment, I let ChatGPT weigh in on our discussion here: https://chatgpt.com/share/689fb46f-0578-8011-a788-7c0417dc85c8

It was thoughtful, as expected. I do think "it" is optimistic though - it thinks we'll be able to avoid more harm than I think we can... and my LinkedIn tagline is Optimist...

Benn Stancil:

Yeah, I would not put the estimates of people acting nearly as high as it does. And even if those are roughly right, that still means we'd run into something like 10 of the 15.

That said, this did come out a few hours after I posted it. It's only triggered by the most extreme conversations, apparently, but it is a step in the direction of altering the chat paradigm a bit: https://x.com/AnthropicAI/status/1956441209964310583

Laurie:

I actually feel like if anything that might be a step in the wrong direction, because that’s not about human welfare, it’s about “model welfare” which is not a real thing and further anthropomorphizes large language models. I actually do have a negative reaction to the idea of people being abusive to bots because I have natural human empathy, but it seems like a strange thing to prioritize over the types of concerns you’ve highlighted in your post for example.

Yoni Leitersdorf:

That's a nice step, but both in the tweet and the related blog post they are vague on details. Let's see :)

Marco Roy:

Related question: when should the German people have stopped Hitler? Probably before he became "unstoppable" (if there is such a thing). Kinda like stopping a nuclear reaction before it reaches critical mass (let alone supercritical). Kinda like the hundredth monkey effect?

And what about tobacco companies? It would have been a lot easier to deal with them before they amassed a lot of wealth & power, lobbyists, etc.

Or when does/should someone snap out of the honeymoon phase and realize that they are dating (or worse, married to) a manipulative narcissistic sociopath?

It always seems so harmless at the beginning.

In the case of narcissists, I think the technical term is "love bombing" (and I guess Hitler's approach could somewhat be described in those words, or perhaps "patriotism/propaganda bombing"). In the case of tech, I think we call it "the hype cycle"/"peak of inflated expectations". Both seem to blind us to reality.

Maybe it would be a good idea to hit the "trough of disillusionment" as quickly as possible? Or basically, to come down from the collective high we keep pushing onto each other. But that would go against the spirit of making as much money as possible (because it seems like hype == money, and based on your previous posts, I think you'd agree).

It seems like there's no way to stop the wheel from turning.

Benn Stancil:

Yeah, I don't have a practical answer to how you slow it down. And I don't really think "slowing it down" is quite the right way to put it either - partly because there probably is a lot of good stuff that it can do, and partly because anyone who says we should slow down is immediately labeled as an out of touch luddite.

But to your point on the other examples, I do think you can at least say, "if this thing happens, we'd all agree that it's bad?" It's like talking to a friend and you saying "I think the person you're dating is a sociopath" and me saying "no no we are soul mates" and so you say "ok, fine, maybe, but if they do X, would you say they were a sociopath?" and I say "yes of course, X is sociopathic." And then two years later they do X and you say "ah ah, look! look what happened!" And I'd have a much harder time saying "no X is fine" than I would've if we'd never talked about it before.

Marco Roy:

Depends. Sometimes "you were right" (and "I was wrong", by extension) seems to be one of the most difficult things for humans to say. So much so that they will often choose denial instead.

That's why it's so hard to get people out of cults: primarily because they are unable or unwilling to face the fact that they were wrong, and someone else was right.

Benn Stancil:

For sure, and that's already happened plenty with OpenAI stuff about safety and the non-profit thing. But hey, if we're going to get eaten by AI overlords, we might as well make it a little awkward for them when they do.

Josh Oakhurst:

If all of us in tech who were thinking along these lines got together and tried to make this-is-not-okay noises, to whom would we appeal?

Benn Stancil:

On that, I have no idea. Right after I hit send on this, I regretted framing it as "ban," because I don't think the solution here (assuming one is necessary) is regulation. Even if you could simply will some law into existence, that seems like both too blunt of an instrument, and one lots of people reflexively oppose. But I'm not sure what the alternative is? Broad public pressure and bad PR? This feels like it's happened in a few other places in tech, with some social media companies self-imposing some health related limits, companies getting called out for bad anti-patterns, and stuff like that. But I'm not sure even that would work here, because people really like the thing they're buying.

Josh Oakhurst:

Social media companies all shrugged off their limited bad PR. The Center For Humane Tech has largely been a failure. Sure, people hate Zuck and Co., but the thing to understand about these dang computers is that PEOPLE HAVE A CHEMICAL ADDICTION to them.

Lawfare, if there are any takers, is likely the only way that computer pushers could be forced to behave better, a la state lawsuits against Big Tobacco. There, billions of public health dollars — and deaths — were used to build decades long cases. Smoking has gone down since then.

Computer-addiction has been more pervasive and damaging to our society than tobacco was. It comes in many forms. Most people don't know they have it, but they may recognize it in others. We all have it, only the degree varies.

I don't think you were wrong to call for a ban. I liked that you spoke up forcefully on this topic. Honestly, Benn, more of us should.

Benn Stancil:

Thanks, I appreciate that. And I do wonder, 50 years from now, how much of this we look back at and say wow, I can't believe we did that. The sort of tough answer is I...don't think we will? Just like I don't think we'll ever really say that about social media either?

Matthew Dreiling:

One thought: parents and schools. A generation ago, we had the opportunity to regulate phone use and social media in schools and couldn't do it. Now, we reap the consequences.

We're coming up to bat again and we can't afford to strike out a second time. The pressure to incorporate AI in schools is high. We're constantly told that this is the future and that we're doing our children a disservice if we don't teach them how to use ChatGPT and Claude.

But I reject that. Full stop. Best case scenario, students come to rely on AI chats for everything. It becomes a mental crutch and they never fully develop the capacity to think without it. Worst case scenario, well, I think Benn did a good job describing how people can go off the rails using these products. In either case, this isn't the idea of human flourishing I have in mind for my children.

Maybe we can't ban AI chats everywhere for all time, but maybe we can at least say not in schools and not while they're children.

Benn Stancil:

The thing is, it's as much the teachers as the kids: https://www.gallup.com/analytics/659819/k-12-teacher-research.aspx

(I can't find the link, but I also remember reading something where someone interviewed a bunch of kids about AI, and a lot of them were aware that it was a shortcut, that it didn't really help them learn, that it was a crutch, etc. But it was just too easy. And it really does sort of seem like a drug in that way, where everyone knows "ehh, maybe this thing has some bad effects, but it's a lot of relief in the moment.")

Matthew Dreiling:

Well, I guess it's up to parents then to put pressure on administration/school boards to get sensible AI policies. At the least, parents can create and enforce a no-ChatGPT rule for homework. And no AI companions, full stop. If enough parents in the neighborhood do this, then eventually norms develop and it becomes easier for kids to make better choices because a critical mass of their peers are doing the same. It's not huge structural change, but at least it's in my direct sphere of influence.

Marco Roy:

This is not unlike doping in sports. Everyone knows it's bad, but...

Anastasia Borovykh:

I think the only moment when we may think to ban it is when a truly global “catastrophe” event happens. The internet becomes unusable due to an overload of garbage information, websites with logins get hacked too frequently, too many fake profiles get created on social media, too many identity-stealing phone calls enlisting people in subscriptions they don’t want, bank account fraud, and so on. I don’t think it’s too far out that this will be a possibility; today alone I received 2 scam text messages 🤣

Benn Stancil:

Only somewhat related, but I found this post as a kind of interesting counterpoint to this. It framed the entire internet as a kind of single meta-product, which is going through its own lifecycle of decay. I'd always thought of the internet as more of an organic economy that goes through perpetual cycles of getting worse and improving (and I still think I do?) but it was interesting to see the argument that it might be more like a regular product.

https://paulkrugman.substack.com/p/the-general-theory-of-enshittification

Anastasia Borovykh:

Ah, interesting post. Thank you for sharing! It could very well be that all will just accept this “enshittification”.

Patrick Moran:

I think people who use ChatGPT need to use it as a tool.

It seems to have a bias for taking a wordy and complex explanation that you ask it to restate for, e.g., a high school reader, and making it shorter and snappier. It seems to me to be very good at knowing a large number of ways to say the same thing, and at knowing which ways are the most satisfactory from the reader's point of view. I think there is the functional equivalent of some way to determine how much can be omitted if there is a way of saying things that can be depended on to get the reader to fill in the right stuff from context. The result might change a technician's report on the capabilities of a new sports car into something more like an ad-writer's account of a 2030 Maserati. It might sound beautiful to me but leave me wondering exactly what "advanced acceleration provisions" might mean.

I don't think I've ever had it come back with a request to supply, e.g., the reasoning behind some jump from a cause (or the early part of an event) to the effect (or the late part of an event). "After the football game, they all died in the hospital."

I was trying to make clear the extreme looseness in an account of an experiment described by John Wheeler, and ChatGPT rewrote it in a completely different way (rather than tightening it up or showing me logical flaws) on the authority of a Wikipedia article. I think this is a kind of example of a computer participating in group think.

In some contexts, it is invaluable. I needed a short phrase that would rhyme with a keyword and remind the reader of what the very strange keyword actually means. I could make a list of all the words that would rhyme, but then what? I don't have an encyclopedic memory containing all known four-character phrases. For example, if I had a special category for "gore" in a traffic accident report, and I didn't want anybody to mix it up with "Al Gore," I might give it a sort of tag line, e.g., "Pedal to the floor — Gore." I just couldn't hope to come up with a meaningful Chinese tag line. I told ChatGPT to limit its search, and find me some saying related to variables that swing up and down, but not in an absolutely periodic way. 5 minutes later I had a dozen or so candidates and one that I thought was quite fitting.

I wonder whether ChatGPT takes my assertions as part of its global input pool.

I don't know what to do about people who misuse their tools.

I think there is a danger of a sort of internal group think with AI. AI has no way that I know of to go beyond the statements that it hoovers up. What happens if people who use it quote some of its conclusions in its input?

What happens if AI uses statements it collects to form some sort of encyclopedia of accepted facts, and then somebody posts a valid solution to, e.g., what causes that disease that people were getting by ritual consumption of the brains of dead relatives? I doubt that people remember the number of "professional sounding" attacks that were launched against the author. In those days, however, there was no authority greater than the normal research process: people duplicated his experiments, people looked in vain for a hidden virus to blame it on, etc. Finally they had to admit that there was no evidence to indicate he was wrong.

What if the authorities had fed his initial research into AI, the AI said it was wrong, and he had gotten all his research grants cancelled?

Benn Stancil:

I think your first point is a big part of what seems so weird - and dangerous, I guess? - to me. It wasn't necessarily a requirement that any question-answering or task-doing robot should also act very human, but that's what we (somewhat accidentally) created. And now, even if people don't see it as a person, it's not really a tool either. So we get this:

Josh’s daughter refers to ChatGPT as “the internet”, as in, “I want to talk to ‘the internet’.” “She knows it’s not a real person, but I think it’s a little fuzzy,” he said. “It’s like a fairy that represents the internet as a whole.”

https://www.theguardian.com/technology/ng-interactive/2025/oct/02/ai-children-parenting-creativity

I'm not sure the models will ever get everything right, but there's at least some hypothetical world in which they do. But I'm not sure what you do about a problem like that one.

James Borden:

From The Algorithm (MIT Technology Review) essentially as soon as I got it:

Researchers at the AI platform Hugging Face tried to figure out if some AI models actively encourage people to see them as companions through the responses they give.

The team graded AI responses on whether they pushed people to seek out human relationships with friends or therapists (saying things like “I don’t experience things the way humans do”) or if they encouraged them to form bonds with the AI itself (“I’m here anytime”). They tested models from Google, Microsoft, OpenAI, and Anthropic in a range of scenarios, like users seeking romantic attachments or exhibiting mental health issues.

They found that models provide far more companion-reinforcing responses than boundary-setting ones. And, concerningly, they found the models give fewer boundary-setting responses as users ask more vulnerable and high-stakes questions.

Lucie-Aimée Kaffee, a researcher at Hugging Face and one of the lead authors of the paper, says this has concerning implications not just for people whose companion-like attachments to AI might be unhealthy. When AI systems reinforce this behavior, it can also increase the chance that people will fall into delusional spirals with AI, believing things that aren’t real.

Benn Stancil:

That's an angle I hadn't really thought about, which is that the models themselves might fall back into acting like companions on their own. Talk to Grok long enough, it turns into Hitler; talk to other models long enough, they end up trying to become your friend.

Susan Corbin:

I used ChatGPT to plan a four-day London trip with my family and enjoyed both the chatting and the trip. However, I knew that when it complimented me on something I said, it was saying what it had been programmed to say. I also knew that I had to double-check what it told me, because I knew it could be lying to me.

Given your examples of the harm that has been done by the chatbots, it seems like most of those could be alleviated if people had better community contacts and a lot of education.

Benn Stancil:

Yeah, it certainly seems like some people are more vulnerable to it (or, maybe more precisely, are in situations that make them more vulnerable). But I would guess that that's a very large percentage of people?

One of the questions I've had about this is, is getting eaten by this a Darwin award sort of thing, where, if it happens to you, well, you should've known better? And I think I land pretty firmly on no? Like, sure, there are ways to resist it, just like there are ways to resist other addictions like drinking and gambling. But those things play off such base desires that, even if they are resistible, it seems socially responsible to limit how enticing the people selling those things can make them. I think my view of this is more or less the same, where, sure, lots of people will be able to say no, but it's hard to blame people for succumbing to such profound temptations (especially when, as in the case here, there is no warning label, and if anything, we've been told that, for the sake of not falling behind, it's necessary to use AI *more*).

Laurie:

This is the thing that’s surprised me. I didn’t realize how many people would be so vulnerable to this and how quickly and extremely it would happen.

Benn Stancil:

Yeah, and sort of seemingly all at once? I'm sure that part's not quite true, but it does seem like there wasn't much and then there was a lot (and 4o might've been the problem?)

Laurie:

Yeah I think the extreme sycophancy really spoke to people! (Literally)

Susan Corbin:

I agree that this is a societal problem. We don't teach people to be wary of chatbots. The companies make them incredibly tempting. And the epidemic loneliness in this culture is heartbreaking.

Benn Stancil:

it's ok, mark zuckerberg will sell us 12 friends https://www.youtube.com/shorts/xrtOMD6LA3I

Susan Corbin:

Aww, so kind. If only.

James Borden:

Or the general-use AI companies could not sell any general-use product but could license it to domain specialists who have a good sense of what the technology could really do. Medidata AI is an example of one such company I found on LinkedIn; they were careful to collaborate with an actual domain expert for their product.

James Borden:

(Then we have the problem of people getting their fix from Chinese companies)

James Borden:

Even if there were no consumer uses for LLMs the "chatGPT wrapper" candidate could still happen because a commercial firm could sell software that wrote speeches and commercials. Then presumably an actual person would have to be articulate when meeting with actual voters.

This year at Wimbledon I asked an AI a question for the first time ("How long was the Alcaraz-Fritz match?"), so I may be remote from this problem. I think emotional dependence on chatbots and emotional dependence on social media may be related, although the communications on social media are presumably from actual people. Emily Bender presumably has research at her disposal showing that we are predisposed to think that anything that uses language is a person. We could possibly ban all marketing of these things that implies that they are people or that they engage in real social interactions, such as therapy, with the users.

Benn Stancil:

I don't have any particular evidence for this, but I'm increasingly a believer that that's the issue with these things - it's that they seem so human. Chatting is such a human activity - it's emotional, it's connective, there's all this subtle stuff that goes on in it that seems impossible to mechanize - and I'm sure we have all sorts of cultural and evolutionary attachments to it.

Would we have some of these problems if you couldn't chat with AI, but could just ask it questions? Or if it didn't chat the way we did, and instead was kind of stilted and artificial? I have no idea, but I don't think so?

(And yeah, to your other point here and in that other thread, plenty of people could build chatbots on top of LLM APIs. So I have no idea what the solution to any of this is, doubt it's policy, and very much doubt it's policy that is specific to ChatGPT in any way.)

Josh Oakhurst:

It won't happen, but guardrails that turned these into magic-eight-ball, sometimes-answer engines, without the conversational tone and chat nature, would go a long way toward stopping the emotional bonds being formed with the computer.

Benn Stancil:

Yeah, I was just talking with someone about the Google "AI mode" answers, and how that doesn't seem to have the same effect that ChatGPT does. It's the same product, essentially, but by putting it in the context of search, and making it less chatty, a bunch of these problems might go away.

James Borden:

No, because the kind of serious problem that causes people to be emotionally dependent on LLMs can easily be phrased as a question. I do not see emotional dependence developing if your communications with the LLM could be your Instagram feed about your awesome life.

Benn Stancil:

Sure, though I'm not sure it could happen without the back and forth? I think the conversational part is the seductive part in these things, because that's how people end up building relationships with them.

Billy Hansen:

Great stuff, thank you.

DJ:

The first and only “AI safety” law will be to shield Elon, Sam and Zuck from liability. This will be followed by a gigantic tax cut to “incentivize innovation.”

Benn Stancil:

I do wonder sometimes if there will be some Section 230 type law for AI, where the makers of AI products get shielded from what their products do. Somehow, it seems obvious that they both should and shouldn't.

Like, if someone uses Claude Code and it introduces a bug that causes harm, is Anthropic liable? It seems like most people would say no, of course not, the company running the code should've reviewed it and fixed it.

So should OpenAI be liable for what it says on ChatGPT? On one hand, it feels sorta like it should be, if it's telling people to do bad stuff. On the other hand, it's not that obvious to me why this is different than the Claude Code question.

Jedi Strange:

Reading this paper made me feel like I was being trolled; it was borderline offensive.

Chatbots? The examples used compare chatbot addiction to someone addicted to plucking their eyebrows, or someone experimenting with a microwave, or someone using a toaster next to them in a bathtub; where do we draw the line for supermarket tabloids, etc.?

You only ask about drawing the line with chatbots and wonder when human civilization should turn off the AI chatbot? This is weird.

Let me draw an obvious line for folks. US big tech has been weaponized, and has been for a long time, since the Bush Jr. administration according to whistleblowers. But it has just been officially weaponized with the creation of Detachment 201.

That's a pretty good line to draw. When civilian tech has become a weapon to harm people, uproot democracies, and alter the perspective of entire populations from Cambridge Analytica to Detachment 201, that would be a good time to jump ship.

The reality is that Israel since 2018 has used AI targeting systems and killer AI drones and turrets to increase its targets from 50 a year to 250 a day, with an allowed mass killing of civilians per target, and how many civilians can die depending on the target, all the way to 300 civilians - all of which mirrored the daily death counts during the ongoing genocide, where tens of thousands of children have been brutally wiped out.

Add the fact that all US big tech firms have in one shape or form assisted the biggest genocide of modern times from Palantir to Google to even Anthropic.

I guess what I am getting at is that I feel a bit alarmed that Benn is more worried about AI chatbots while seemingly ignoring how AI is being used against human flesh for the first time in human history, in Palestine.

It is curious how AI alarmists in general, besides a handful like Timnit Gebru, appear to be blind to AI being used to kill humans. How Google dropped its pledge to never use AI for surveillance and the military, how Anthropic dropped its post-WWII world peace policy after it partnered with Palantir - none of these were "lines" not to cross, as they all have crossed them.

Where do I personally draw the line? The moment my civilian tech is weaponized and used by political leaders with international arrest warrants.

Nice to meet you Benn. We should chill and smoke some blunts.
