Great take, Benn! I feel like there is tremendous potential in going deeper with applied AI, and we're about to see it unleashed by the latest advancements and mass adoption of the technology over the next few years. That said, doubling down on math to solve business problems might be as limiting, if not more so, than modeling past what's conceivable to humans. There are infinite games, problems with no (perfect) solution, and endless cases of irrational behavior. If only we could act consistently in our best interest, behavioral economists would rule the world.
Continuing your analogy about all of us being stuck in a prime number maze, there seem to be plenty of mazes built around real-world problems that can still perplex even the most powerful of computers. At least until there is a true AGI that solves the universe.
Fair warning: The Jared Kushner rating of this comment is about 7/10.
For sure - I have no idea how you would actually apply this, and it seems entirely reasonable that business problems will prove too hard for it to make any sort of contribution for quite some time. But, on the other hand, I can't shake the idea that, surely, someone will figure out a way to apply this somewhere? This feels very tin-hatty to say, but it seems like, relative to what's possible, we're all pretty bad at stuff.
Which, I guess, implies something like: there's a huge looming possibility out there that we may never find - but it's at least out there.
No doubt there are still plenty of unknown unknowns out there, and AI is helping us expand our horizons. The question is whether we should place more bets on it as a tool or let it loose and see if it can figure things out on its own. I have a feeling that we are pretty far from anything that can outperform a good old combo of human intelligence paired with some intense computing power. I also feel like when it finally happens, we'll be the last to get the news.
Definitely thought-provoking. Agree with everything, even the potential contradiction. Rely more on simple models and frameworks and reason from there. At the same time, do those frameworks keep us pinned in local maxima without our even being aware of it? How do we break out of them? Is it purely a computational problem, where we just let the AI do its thing? How would one validate the proposed AI solution in these more complex scenarios that aren't chess games? Can't help but feel like it's some form of gambling or faith.
Yeah, so I'm not at all an expert on this stuff, though in reading a bunch of things for this, it seems like these generalized solutions that teach themselves are much less inclined to get stuck in local maxima than methods where we try to teach them how to play. Of course, I guess we don't know how good the perfect player could be, though at least in chess, the gap between the best human and an AI that learns like this is pretty staggering. (E.g., in Elo ratings, which are how you score chess players, decent amateurs are around 1500, the best players in the world are around 2800, and AlphaZero is something like 3800. Which is a huge gap.)
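For a sense of what that gap means in practice, here's a minimal sketch using the standard Elo expected-score formula; the specific ratings plugged in are just the rough figures from the comment above, not precise numbers.

```python
# Rough illustration of what an Elo gap implies, using the standard
# expected-score formula E_A = 1 / (1 + 10**((R_B - R_A) / 400)).
# The ratings below are the rough figures mentioned in the comment.

def expected_score(r_a: float, r_b: float) -> float:
    """Expected score (win probability, counting draws as half) for player A."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

amateur, world_champ, alphazero = 1500, 2800, 3800

# A ~1300-point gap: the amateur scores well under 1% against the champion.
print(f"amateur vs. champion:   {expected_score(amateur, world_champ):.4f}")

# A ~1000-point gap: the champion scores roughly 0.3% against AlphaZero.
print(f"champion vs. AlphaZero: {expected_score(world_champ, alphazero):.4f}")
```

In other words, if those rough ratings are anywhere near right, the best human player stands about as much chance against the machine as a club amateur does against the world champion.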
Benn, just love your writing. Great article. Lots to think about.
Thanks Lloyd!
We often swing our proverbial "hammer" at the problems that face us, and it limits the possibilities of solving the hard problems more effectively. This is why random approaches like Oblique Strategies, which Brian Eno conceived to compose new music, were so effective (see the book "Messy" by Tim Harford), and why author and prolific patent inventor Cliff Pickover leveraged an ancient random strategy of divination called stichomancy - reading random passages from a book to spur ideas (see "Sex, Drugs, Einstein & Elves" by Pickover).
I just googled that Messy book and found this story, and it was great: https://youtu.be/HvoYwoCI7Mk?t=278
So I'm into it.