11 Comments
Patrick Moran:

I don't want entities absorbing me like the Borg. Rather than telling me I'm right and oh so brilliant, I would rather have an AI that points out where I'm wrong, or perhaps points to an alternative hypothesis that seems to make sense. In trying to understand some phenomenon, it may be useful to learn that my model does o.k., but so does somebody else's model. This other researcher may have taken something into consideration that I never even thought of.

Suppose I buy an AI that is guaranteed to protect me from hacking attacks, and other dastardly exploits. Do you suppose that an AI could be suborned by another AI?

Benn Stancil:

On the first point about having something that points out where you're wrong, I think that makes sense, but it feels like it's not the version we're going to get. The analogy to social media fits too well there for me - even though it'd probably be better if our social media feeds gave us interesting ideas that helped us learn and see new things and all of that, people use the feeds that give us more of the same things we like. Wholesome content is too much Type 2 fun, and all the algorithms end up getting trained to give us Type 1 fun (https://www.rei.com/blog/climb/fun-scale).

Yoni Leitersdorf:

I was at a Google event earlier this week (as part of I/O), and someone who used to be high up in their ads business told me this:

"You know, we did an experiment. For several months, we shut off the ads' ability to target people based on past behavior, cookies, etc, for a very small percentage of traffic. About 1%. What we observed is that those users started using the Internet less. They browsed less. We then did more specific user research and found that people get really annoyed when they see ads that are irrelevant for them.

So... Google and Meta's need to target ads isn't just a profit-seeking thing. It also makes for a better Internet experience."

I somewhat agree with that. I know that when I visit a website that shows ads that are very irrelevant for me, I get annoyed. Go figure.

Benn Stancil:

Yeah, I buy that. I can imagine being inundated with random ads is a lot worse than being inundated with ads that, if nothing else, have a sort of aesthetic that matches things you're interested in. That said:

- The alternative to "lots of targeted ads" probably is "lots of irrelevant ads," but it'd be nice if it was "a few targeted ads."

- Is it a bad thing if we use the internet less? Maybe that's the real solution to our problems; we make the ads terrible and so everyone gets off TikTok or whatever.

Yoni Leitersdorf:

And then we all read books!

DJ:

In 2012 I went to a futurist conference where one of the speakers talked about AI's potential. He predicted that a future AI would be able to do all the thinking a person can do in a year in... seven seconds.

Benn Stancil:

I have no idea how you quantify that, but also, that ... doesn't seem wrong? Does that seem wrong?

DJ:

It’s very plausible. And I don’t know how even to think about what that means for society.

Benn Stancil:

Yeah, that's the only real take I have on any of this, which is, it seems like it's gonna all get very weird.

David Krevitt:

And yet! Here we are reading your unhinged writing, proof that there’s a future outside the model

Benn Stancil:

If this blog is the world on the other side of the wall, hoo boy, maybe living as servants to the AI overlords is a good thing after all.
