
Here's a question I sometimes think about: If you wrote down every memory and fact and thought inside of the head of a CEO and gave it to an AI, which one would make better decisions?
You could have two theories:
Being a CEO requires some impossible-to-quantify human judgement that can never be replicated by a computer, so the best way to run a business is by giving the CEO more information. Show her a useful chart; recommend to her the right management book; give her a timely snippet of a few customer calls, or a summary of all of them. It is much better to give her the right presentation of facts, and let her combine them with her inscrutable sense of judgement, than it is to leave important decisions up to an AI.
Being a CEO requires context. The average CEO isn’t smarter than ChatGPT, but the average CEO has a lot more information—decades of personal experiences; memories from thousands of unrecorded conversations; proprietary knowledge about her business that wasn’t in the millions of ChatGPT’s training documents. Though the world’s most powerful tech companies are now great vampire squids wrapped around the face of humanity, relentlessly jamming their blood funnels into anything that smells like text, they can’t yet reach all the text. And perhaps the difference between us and them—and, in particular, the reason why we might make better decisions than they do—is not our divine sense of taste, but simply because we have some text that they do not. Perhaps human intuition is just a large context window and a creative RAG process. So rather than using AI to make the CEO more informed, maybe the best way to run a business is by reversing the roles: The AI should ask her questions to make itself smarter, so that it can make better decisions.
On one hand, this is obviously a dumb hypothetical. Our heads are not folders full of Word documents that can be exported onto a USB drive.1 We make decisions based on an uncountable number of beliefs and emotions that nudge our thinking in mysterious ways. We have preferences, and opinions, and inexplicable gut feelings whose origins we can't even explain ourselves. "Write down everything you know" is a ridiculous thing to ask someone to do.2 So any advocate of the second option can always say that an AI that makes worse decisions than the CEO is no true Scotsman: Not only would a fully informed AI make better decisions than she would, but the fact that it didn’t is itself proof that it wasn't fully informed.
On the other hand, the question doesn’t have to be so metaphysical. There are degrees. Does a CEO make better decisions than an LLM when she asks, “Sales are down, what do I do?” I mean, I hope so. But what if she tells it her business type or shows it a few metrics (like traffic, conversion, CAC)? What if she spends five minutes explaining why she thinks sales are down? What if she spends an hour? What if she gives it the deck that her vice president of sales presented to the board about why he thinks sales are down? What if she gives it a transcript of that board meeting, in which everyone talked about the vice president of sales’ presentation? What if she gives it the emails that she sent back and forth between the board and the vice president of sales? What if she gives her Slack messages and Google docs? What if she gives it every Slack message, and every email, and every Google doc? What if she gives it everything that’s ever been typed, recorded, or transcribed inside of her company? What if, when it needs to make a decision, the AI can ask her questions, about her preferences and opinions and inexplicable gut feelings, and weigh those in its decisions as it sees fit? What if it straps itself to her head and records her entire life?
What if the question isn’t about the CEO—who’s making enigmatic strategic decisions and engaging in the dark art of the deal—but the vice president of sales? A sales rep? A content marketer writing corporate blog posts? An analyst? An intern?
I don’t know. But historically, we’ve built organizations and products around the first theory.3 Companies hired consultants to produce giant reports full of research; they employed data teams to meticulously log every sale and click, and aggregate them into giant binders full of numbers; they convened weekly business reviews in which departments presented giant decks full of charts; they recruited spies at rival companies who could send them giant texts full of gossi–um, no, I mean, giant boxes of watches to London.
In the first couple years of this whole AI thing, it seemed like that was how we’d use it too: As another source of information. Transcription tools like Gong and Granola log all of your sales calls and make them searchable; “enterprise search” products like Glean tentacle through your Gmail and Google Drive, and summarize them for you; PwC and Walmart and McKinsey built chatbots that answer “common tax questions” and “summarize large documents” and “synthesize vast stores of knowledge.” OpenAI’s deep research reads industry blogs and checks Twitter, and turns what it finds into a tidy book report. A gazillion SQL chatbots look for insights in your corporate databases. And on the other end of all these tools, someone reads the transcripts and the reports and the reports about the transcripts, blends what they learn with their inscrutable sense of judgement, and makes a decision.
But without the history that came before it, would we design our use of AI this way? Between the two of us—an AI that, in some approximate sense, knows everything that has ever been known, and me, who has a smattering of specialized experiences and meaty hands—who should be the agent and who should be the executive? Who would be the labor and who would be the management?
For the last few weeks, I’ve owed the readers of this blog a final summary of The White Lotus Power Rankings, which was a silly weekly survey I ran while the show was on the air. I posted updates on the results each week, and some patterns emerged: Some characters were loved and then did bad things and became hated; some were loved and then did bad things and were loved even more. There were interesting splits by gender. Nobody knew who was going to die, though most people agreed on who was going to kill them. It was all very good fun.
Anyway, since the season ended in April, instead of writing the last update, I rewrote the same story this blog now tells every week: Something something, the bitter lesson; something something, the great weirding; something something, we’re cooked.
Those are lazy opinions though. If I want to trundle out more recycled doomerism, I should probably try to cook myself first. Earned secrets about AI aren’t found at the bottom of an iced tiramisu latte4 had at the top of an ivory tower; they are found in the arena, down in the dirt.
Squabble up, I guess. So I gave the survey results to a few different “AI analysts”, and told them to tell me what's interesting. Do the work for me; find what I couldn't; get me fired.5
The bots did not deliver. They did a good job of shortcutting some mechanical tasks: they quickly trimmed the responses down to just characters’ names; they converted episode descriptions into episode numbers; they made naive charts of things that seemed like they might be interesting, like votes by character over time. But they wrote hapless commentary, like “votes vary over time.” They told me many things, but none of it was interesting.
But how could it have been? To extend Randy Au’s great line that, in analytics, the data in production is the data in people's heads, “insight” is relative. It is dependent on what people already know. You can't tell people something surprising without knowing what is expected; you can't tell them something interesting unless you know what they think is boring. As my first-grade art teacher Ms. Hunt said, “art is in the negative space.”6 Insight isn't the data; it is what's not in our heads.
When I gave the bot that information—the posts from previous weeks, essentially—it got much better. Its technical work was sloppier, but it thought more creatively. It pivoted off of existing ideas: It ran variants of previous analyses; it focused on characters who’d been discussed in prior posts; it asked itself more novel questions, like “is there a divergence in how viewers cast votes when considering these seemingly inverse roles [of the murderer and the “body”]?”
Though it was crude and imperfect, the improvement was stark. I quickly found myself spending more time on—and getting more return from—the contextual prompting than I did on asking quantitative questions and looking at the results. And if I needed to do this sort of thing more often, that’s the tool I’d be willing to spend money on: The one that extracted the hard-to-gather qualitative context, and not the one that made better charts.
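For what it’s worth, the mechanical part of this experiment is almost embarrassingly simple. A minimal sketch, with entirely hypothetical function and variable names and no real tool’s API, of what “contextual prompting” amounted to: front-load the qualitative background (here, prior write-ups) before the question, so the model knows what’s already been said.

```python
# A hypothetical sketch of contextual prompting: prepend prior qualitative
# context to the question, so the model can aim for what's NOT already known.
# Nothing here is a real product's API; it's just string assembly.

def build_contextual_prompt(question: str, prior_posts: list[str]) -> str:
    """Assemble a prompt that front-loads context before the question."""
    context_block = "\n\n---\n\n".join(prior_posts)
    return (
        "You are an analyst. Here is what has already been written about "
        "this dataset, so you know what readers would find boring:\n\n"
        f"{context_block}\n\n"
        "Given that context, answer the question below, preferring findings "
        "that are not already covered above.\n\n"
        f"Question: {question}"
    )

# Toy stand-ins for the weekly posts; the real context was much longer.
prior_posts = [
    "Week 3: Saxon was voted most deplorable by half of respondents.",
    "Week 5: A big gender split emerged on Rick after the finale.",
]
prompt = build_contextual_prompt(
    "What's interesting in the final survey results?", prior_posts
)
print(prompt)
```

The point isn’t the code, which is trivial; it’s that the expensive input is the context itself, which somebody has to have written down.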
When people talk about the idealized future of analytics—and more aspirationally, the future of making decisions—they often say things like this:
Imagine creating business dashboards by simply describing what you want to see. No more clicking through complex interfaces or writing SQL queries - just have a conversation with AI about your data needs. This is the promise of Generative Business Intelligence.
It’s an easy enough world to imagine: The computer, instantly manifesting the answers to our questions on a screen. And indeed, as technical power tools like Hex become BI, this is largely what they say they are building:
I’m so excited to announce that Hex has acquired Hashboard! …
Our teams are already hard at work building toward our shared vision for the future of data. [This includes:]
Building the best product for deep-dive data work in the age of AI: our combined teams are hard at work on a next generation of our Magic features – including some experiences we can uniquely build on Hex’s platform and context.
Making it easy for everyone to use data: customers loved Hashboard’s self-serve BI offering, and we’re excited to incorporate a lot of what they got right, including semantic modeling, visualization, and AI into Hex.
But this was not the only acquisition in last week’s Cambrian implosion.7 And the others, though likely motivated by simpler ambitions and some basic financial realities, gesture in another direction.
Census, which was acquired by Fivetran, plans to send data off to AI agents:
When you integrate the entire data lifecycle, you can build richer and more accurate semantic models. These in turn can unlock real AI-powered automation. After all, how can an autonomous marketing agent decide if a campaign change was successful if it can’t retrieve the results of its actions?
Eppo, which was acquired by Datadog, plans to combine its A/B test results with Datadog’s monitoring data, and cut humans out of the decision-making loop:
We originally envisioned a human-in-the-loop process of tech workers implementing faster, better. But with the rise of AI agents, it has become clear that some types of product development will become fully closed-loop. Instead of engineers bussing tickets through a queue, AI agents can identify an issue, find its root cause, and implement a fix. And with flags and experiments, the fixes can be safely rolled out with all appropriate metrics measured statistically.
And data.world, which was acquired by ServiceNow, will also become a source for AI agents:
The new Workflow Data Network is a broad ecosystem of data platforms, applications, and enterprise tools that enhance Workflow Data Fabric and connect, understand, and take action from any data source…
data.world’s simple, smart, and powerful data catalog and data governance platform will be brought into the ServiceNow AI Platform, allowing customers to enrich data with meaning, context, and relationships — all while enabling AI agents and workflows to operate.
Look, I get it—these are press releases; agents, so hot right now, agents;8 don’t make too much of any of this.
The outlines are there, though. If Business Intelligence 1.0 was a pivot table, Business Intelligence 2.0 became a pivot table on the internet, and Business Intelligence 3.0 is a chatbot with semantic modeling, visualization, and AI, Business Intelligence 4.0 is…Slack? Our email? Us?
Because we’re what’s missing from all of these agents. How much room is there for Claude to improve as a competent SQL or Python engineer? Not much, I’d say. Even giving it more information about schemas and semantic layers can only go so far. But how much could it improve if it knew more about the presentation we gave last week? About the feedback to that presentation? About the anxious email we sent a coworker about that feedback? We’ll find out, it seems.
Everything becomes BI, we sometimes say around here, though I always meant it to mean that every data tool becomes BI. But at some point, the vampire squids will run out of books to read, and turn to our heads for new sources of text. And when we become a data tool, we become BI too.
The White Lotus Power Rankings
—
There are spoilers! With names!
—
Rick turned in a truly virtuosic performance: He is the third-most deplorable character, the most charming, the dead person, and the person who killed the dead person:
Still, Saxon is the season’s real success story. After the first two episodes—the “Initial points” column in the tables above—Saxon was voted the show’s most deplorable character by half of the respondents. Not only did he finish the season with zero votes, but he also came in second as the most charming (behind, obviously, the guy who killed everyone).
Also, the finale created the biggest gender splits yet. After the last episode, women cooled on Rick in favor of Saxon; men abandoned Saxon and loved Rick.
Why? Maybe this—though both men and women largely agreed that Rick was the murderer, women identified Chelsea as the dead character, whereas men said it was Rick.
Finally, who won? Who predicted that all of this was going to happen? After much discussion with the benn.substack’s judicial advisory committee, these are the winning numbers:
By overwhelming consensus, Rick is the murderer. Lots of people voted for Rick, including a bunch of people after the first episode. Well done.
Though Rick and Chelsea are tied as the dead person, Rick is more fun. So the winner would’ve been the first person who voted for Rick as both the murderer and the person the murderer murdered. But nobody did, because, what, why would you.
So, the winners are the first people to vote for Rick as the murderer and Chelsea as the dead person. This combination happened three times: Once after the fourth episode, once after the third, and once after the first episode.
We have our podium! We have our gold medalist! Prepare the national anthems!
Thank you everyone for playing along. And if you want to do your own analysis on all of this, go crazy folks, go crazy:
As everyone knows, they are shelves of delightful marbles.
“Write down everything you know” feels like the beginning of some bizarro Rumpelstiltskin fairy tale:
A wise man is kidnapped by an evil witch. The witch tells the wise man, “You are the wisest and most beloved man in the village. I want to be as wise and beloved as you.” She gives the wise man an inkwell and a book full of blank pages, and says to him, “By tomorrow morning, you must write everything you know in this book. Otherwise, I will keep you in this dungeon forever.”
That night, the wise man writes and writes and writes, until his hand hurts from writing so much. When the sun comes up, the witch returns.
“Did you write down how to bake a loaf of bread?,” asks the witch. “Yes,” says the wise man. “Did you write down that the sun rises in the east and that the ocean waves break on the shore of the beach?” “Yes,” says the wise man. “Did you write down how sweet you thought your first taste of honeysuckle was when you were a little boy?”
“No,” cries the wise man. “I had forgotten about my first taste of honeysuckle, until you asked me about it just now.”
The witch is angry, but she still wants to be as wise as the wise man. “I will give you one more chance,” she says, “or I will keep you in this dungeon forever.”
That night, the wise man writes and writes and writes, until he is so tired from writing that he falls asleep. When the sun comes up, the witch returns.
“Did you write down how it sounded when you met your wife?” “Yes,” says the wise man. “Did you write down how it feels to remember her?” “Yes,” says the wise man. “Did you write down all the shades of orange that you saw in the sky at sunset on the day that she died?”
“No,” cries the wise man. “Because I was trying to remember these beautiful things that I know about, I had forgotten about all of the painful things too.”
The witch is angry again, but she desperately wants to be as wise as the wise man. “I will give you one last chance,” she says. “But this time, I will give you a special book to write in. Everything you write in this book will become a memory, and everything you leave out will be forgotten. You can write whatever you want—real memories of your children, fake stories about distant towns you have never been to and lovers you did not have. And you can erase whatever you want—thoughts of broken bones and broken hearts. You will remember everything you write down as though it were real, and it will be as vivid as last night’s dream.”
“But there is one thing you have to write. You have to write down that you were given this choice. You will not know what you wrote down in this book, but you will know that you could have written down things that were not real.”
The witch leaves the wise man with the book. The wise man thinks and thinks and thinks. He writes and he erases. He writes and he erases, all night until the sun comes up.
“Well,” says the witch excitedly. “What did you write? Did you write down the most important proverbs? Did you write down your hardest-earned wisdom? Did you leave out your biggest regrets?”
The wise man hands her the book. The witch eagerly opens it.
Every page is blank.
“If I wrote down that I was a wise man,” he says, “I would always wonder if I was a fool who lied about being a wise man. If I wrote down how sweet a summer pie tastes, I would always be hungry for it. If I wrote down memories about my wife, I would wonder if our love was real or just a mirage. The only way I can ever know that what I believe to be real is real, is by knowing nothing at all. I can never repair doubt. But I was a wise man once, and I can be a wise man once more. I was in love once, and I can be in love once more. I replaced a black hole in my chest with a heart of gold once, and I can replace it once more.”
And with that, the ordinary man left, to fill his jars again.
I mean, obviously we have; commercial AI didn't exist in modern form until a few years ago.
But those rolled up wafer cookies are found at the bottom of an iced tiramisu latte, and isn’t that basically what we’re all really looking for anyway?
Did I do this to avoid having lazy opinions, or was I just being lazy? Why not both? Why not sYnErGiEs?
Why Ms. Hunt was trying to teach first graders abstract concepts of artistic composition is beyond me (though apparently it stuck?). I guess she was trying to impersonate the character played by Jack Black in School of Rock, who was trying to impersonate the character played by…Mike White.
Is it a funeral, or an exciting new chapter with Hansel, with so much more to come?
dang, footnote 2 goes hard.
Machines becoming our managers was always the plan!
Did you ever read “Manna” by Marshall Brain?
https://marshallbrain.com/manna1