11 Comments
Dec 9, 2022 · Liked by Benn Stancil

I can't help but think of the Internet itself as an analogy for AI.

Yes, it brought on plenty of things the world would be a better place without, but it also brought so many wonderful things like working from home :)

We'll just have to adapt and accept that it's becoming easier and easier to influence the masses.

Dec 9, 2022 · Liked by Benn Stancil

Perhaps there is much to worry about; perhaps not.

The more there is to doubt, the more I suspect (hope) people will practice active skepticism about every single thing they consume, no matter the source. When it becomes obvious that anything can be manufactured and manipulated (it's not like AI was needed for this), people will need to adapt in order to avoid harm.

So I'm banking on the rise of AI to lead to a healthy rise in human skepticism and thoughtfulness. Or Skynet. 50/50.

Dec 15, 2022 · Liked by Benn Stancil

My take on this is that these chatbots are producing very high-entropy results. We should be able to detect this with statistical analysis: basically, apply Shannon entropy.
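A minimal sketch of what "apply Shannon entropy" could mean in practice, assuming a simple character-level distribution (the function name and the character-level granularity are my choices for illustration; a serious detector would work at the token level against a language model's probabilities):

```python
from collections import Counter
from math import log2

def shannon_entropy(text: str) -> float:
    """Shannon entropy, in bits per character, of the text's character distribution."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A repetitive string carries zero entropy; a two-symbol coin flip carries one bit.
print(shannon_entropy("aaaa"))  # 0.0
print(shannon_entropy("abab"))  # 1.0
```

Comparing a document's entropy against a baseline corpus is one crude way to flag statistically unusual text, though in isolation it cannot distinguish human from machine writing.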

There's an old rule in logic: if the premise is false, the implication as a whole is (vacuously) true, no matter what is being implied.
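That rule, material implication, can be checked directly (the helper name `implies` is my own for illustration):

```python
def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# A false premise makes the implication true regardless of the conclusion:
print(implies(False, True))   # True
print(implies(False, False))  # True
# Only a true premise with a false conclusion falsifies it:
print(implies(True, False))   # False
```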

Put this together and we are entering dangerous times.

If you want an idea for an important startup it is one that develops technology using these principles to determine the veracity of proposals. It would also apply more generally to "fake news" etc.

Dec 9, 2022 · Liked by Benn Stancil

The most likely outcome is that people will take more reputation-based shortcuts. A ChatGPT-generated blog post signed off by Benn? I'll take it. An almost identical post by user579257? Not a chance.

Dec 9, 2022 · edited Dec 9, 2022 · Liked by Benn Stancil

Benn - appreciate that you brought up this topic - that the mere existence of an unfounded argument gives it legitimacy.

I'm starting to think that we need defensive AI trained on our personal preferences (downvote content that doesn't have highly ranked backlinks; avoid sourcing from sites I don't like, etc.); on our private corpus of data (how do I get a version of ChatGPT trained on the way I write and process data, not the average outcome, while preventing that model from reaching the outside world?); and on our delivery preferences (bring me a digest of interesting stuff once a week).

Without this layer of filtering (think Clay Shirky's information overload/filter failure dichotomy), it will be easy to be overwhelmed by a layer of AI-generated BS that hallucinates something that sounds familiar to us.
