11 Comments
Dec 9, 2022 · Liked by Benn Stancil

I can't help but think of the Internet itself as an analogy for AI.

Yes, it brought on plenty of things the world would be a better place without, but it also brought so many wonderful things like working from home :)

We'll just have to adapt and accept that it's becoming easier and easier to influence the masses.

Dec 9, 2022 · Liked by Benn Stancil

Perhaps there is much to worry about; perhaps not.

The more there is to doubt, the more I suspect (hope) people will practice active skepticism about every single thing they consume, no matter the source. When it becomes obvious that anything can be manufactured and manipulated (it's not like AI was needed for this), people will need to adapt in order to avoid harm.

So I'm banking on the rise of AI to lead to a healthy rise in human skepticism and thoughtfulness. Or Skynet. 50/50.

author

Yeah, as I said in another comment, one possibility is that we end up defaulting to thinking the internet is mostly fake. I'm not sure we have any hope of reasoning through it and sorting fact from fiction; instead, it seems like we'd need very basic heuristics like, "everything that isn't from a known reliable source is basically the National Enquirer."

Dec 15, 2022 · Liked by Benn Stancil

My take on this is that these chatbots are producing very high entropy results. We should be able to detect this with statistical analysis: basically, apply Shannon entropy.
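For what it's worth, the measure itself is simple to compute. Here's a minimal, character-level sketch of Shannon entropy (purely illustrative; a real detector would work on token probabilities from a language model, not raw character frequencies):

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Shannon entropy of a string, in bits per character,
    estimated from the string's own character frequencies."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A repetitive string carries zero bits per character;
# eight distinct, equally frequent characters carry three.
print(shannon_entropy("aaaaaaaa"))  # 0.0
print(shannon_entropy("abcdefgh"))  # 3.0
```

Whether machine-generated text is reliably higher or lower entropy than human text is the empirical question the comment raises, not something this sketch settles.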

There's an old rule in logic that if the premise is false, anything can be implied: the implication as a whole is true, even though the implied conclusion may not be.

Put this together and we are entering dangerous times.

If you want an idea for an important startup, it's one that uses these principles to assess the veracity of content. It would also apply more generally to "fake news," etc.

Dec 9, 2022 · Liked by Benn Stancil

The most likely outcome is people will take more reputation-based shortcuts. A ChatGPT-generated blog post signed off by Benn? I'll take it. An almost identical post by user579257? Not a chance.

author
Dec 9, 2022 · edited Dec 9, 2022

I barely sign off on the things I actually write, so maybe the world won't be that different.

Dec 9, 2022 · edited Dec 9, 2022 · Liked by Benn Stancil

Benn - appreciate that you brought up this topic - that the mere existence of an unfounded argument gives it legitimacy.

I'm starting to think that we need defensive AI trained on our personal preferences (downvote content that doesn't have highly ranked backlinks, avoid sourcing from sites I don't like, etc.); our private corpus of data (how do I get a version of ChatGPT trained on the way I write and process data, not the average outcome, while preventing that model from reaching the outside world?); and delivery preferences (bring me a digest of interesting stuff 1x/week).

Without this layer of filtering (think Clay Shirky's information overload/filter failure dichotomy), it will be easy to be overwhelmed by a layer of AI-generated BS that hallucinates something that sounds familiar to us.

author

Yeah, it seems like we (and by we, I mean the internet) end up in one of three states:

1.) Nothing really changes, and handwringing like this is mostly attention-seeking hacks with a blog trying to come up with something to say for the likes.

2.) The internet starts to get messier, and it becomes harder to parse what's real and what isn't. In this case, it's basically Twitter - on net, real stuff rises to the top, but there's a lot of noise and chaos underneath that sometimes makes its way into the top tier.

3.) It's a cesspool of spam and distortions, and the whole of the internet turns into a purer reflection of the real world, where we inherently distrust anyone we don't really know.

My bet would be on 1.5?


1.5 isn't a bad bet. I'm also expecting AI content to show up in "helpful" places in lots of apps, where the level of interaction improves from single-field prompts ("Tell me your name") to multi-step qualifying and objection handling before you're transitioned from a virtual agent to a real live agent in CS conversations.

author

There are definitely ways it could make things better. In cases where it's fine to be kinda wrong - suggestions for various things, shortcuts to stuff, etc. - it seems solid. On things where you really want to be right? ehhhhhh.....


Agree, needs a lot more training data to get important decisions right, and perhaps human monitoring as well.
