Jesus Christ (complimentary)
been a big week for him
Big miss not linking this on the self-driving portion: https://www.tiktok.com/@zachandhailee/video/7496229410526465326
See, I'm probably supposed to say the reason I write this blog is to learn, or to have important conversations about meaningful things, and the biggest misses are when you realize your ideas are wrong. But the real reason I write this blog is to link to good tiktoks, and nothing is worse than blowing a chance like this.
(Also, I grew up about 30 minutes from Mooresville)
Outstanding piece. It makes me think of how audience capture has polarized so many political commentators. Joe Rogan at least has "Jamie, pull that up" as an occasional check, but what happens when he's just asking his favorite AI instead?
So this felt like it was opening up too long of a sidebar for this post, but I had a whole section along these lines. It went essentially like this:
- If you hang out with a five-year-old, you can basically win every argument you have with them. It doesn't matter if you're right or wrong; you can just outsmart them.
- But as AIs get smarter (or really, more persuasive), do we become the five-year-old relative to them?
- And if that happens, like, what then? When Joe Rogan says "tell me this thing," are we sure that it won't be so good at that, and so convincing, that we won't really be able to disagree with it? And not because it's hallucinating or lying, but just because it is smarter than us?
I'm positive that there will be "good" AIs that are distinct from "woke" AIs. The market demand is there. If an American company won't provide it, China will.
Wait, when you say "good," I'm not sure I follow? Like, good as in what?
I guess what I’m really getting at is that I don’t think most of super AI’s capacity will be spent correcting our misperceptions. I think it will be used to build ever more powerful versions of Stuxnet.
Ah, yeah, for sure. I posted this this morning on, urp, linkedin: https://www.linkedin.com/posts/benn-stancil_a-new-invisible-hand-activity-7322633225468542976-7SH_
And then someone sent me this: https://x.com/emollick/status/1916905103358931084
So, yeah, I'm not so optimistic.
By "good" I mean that's what political actors will call the ones that agree with their biases. We've already seen Elon claim community notes and Wikipedia are biased. This same thing happened in 2016 when a lot of Republicans started saying Snopes is biased.
Just for lolz, check out https://www.conservapedia.com
It was started all the way back in 2006. Hyperpartisans (*cough* nation states) are willing to play a very long game.
Great post. There will always be profit in superior understanding of how information is produced, displayed, and interpreted. Each step potentially corrupts the true knowledge that can be gained. Even something as simple as your trendline. What really is it? It’s probably a least squares fit. But why not a fit that minimizes the sum of the residuals raised to the fourth power instead of the square? The “trend line” would then look different because of the higher emphasis on outliers.
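To make that concrete, here's a minimal sketch (assuming Python with numpy and scipy, on made-up data, since none of this is specified in the thread) of how an ordinary least-squares line and a fourth-power fit diverge once there's an outlier:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up data for illustration: a linear trend, some noise, one big outlier.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 1, size=x.size)
y[45] += 15  # the outlier the two fits will treat differently

def fit_line(x, y, power):
    """Fit y = slope * x + intercept by minimizing sum(|residual| ** power)."""
    def loss(params):
        slope, intercept = params
        return np.sum(np.abs(y - (slope * x + intercept)) ** power)
    return minimize(loss, x0=[1.0, 0.0]).x

l2 = fit_line(x, y, power=2)  # ordinary least squares
l4 = fit_line(x, y, power=4)  # fourth-power loss: outliers dominate the fit

print(f"least squares: y = {l2[0]:.2f}x + {l2[1]:.2f}")
print(f"fourth power:  y = {l4[0]:.2f}x + {l4[1]:.2f}")
```

The fourth-power line gets pulled noticeably harder toward the outlier, which is the whole point: the "trend" depends on a loss function almost nobody reports.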
Andrew Gelman has this great bit about this sort of thing, about how so much analytical "hacking" comes from looking at the trend in five different ways, and then reporting on the interesting one. If more and more analysis is done by something very smart, very diligent, and very intent on finding something interesting, it seems like we're gonna get a lot more of that.
https://en.wikipedia.org/wiki/The_Garden_of_Forking_Paths
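And a toy version of the forking-paths problem, again a hypothetical Python sketch: five independent tests standing in for five ways of looking at the same trend, run on pure noise, keeping whichever result looks interesting:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_datasets, n_points, n_looks = 1000, 30, 5
interesting = 0

for _ in range(n_datasets):
    # Pure noise: there is no real effect to find.
    noise = rng.normal(0, 1, size=(n_looks, n_points))
    # "Look at it five different ways" = run five tests, keep the best p-value.
    p_values = [stats.ttest_1samp(look, 0).pvalue for look in noise]
    if min(p_values) < 0.05:
        interesting += 1

# One test gives ~5% false positives; five looks gives roughly 1 - 0.95**5 ≈ 23%.
print(f"'interesting' findings in pure noise: {interesting / n_datasets:.0%}")
```

Which is more or less Gelman's point: nobody has to cheat for the false-positive rate to quadruple; they just have to look more than once.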