All you can do is play the game
I don’t know what’s coming next.
When someone says “I don’t know,” what does it mean? There are levels to it:
The dad. You say “I don’t know;” you mean “I don’t care.” What do you want to eat for dinner? Where do you want to meet for coffee? When will you be home from work? What’s your favorite type of horse? It would be rude to tell a five-year-old who loves horses that you don’t care about horses, so you politely say that you don’t know which type is your favorite, and that there are simply too many wonderful types of horses to choose just one.
The Jeremy. The same as the dad, but said by a teenager. They don’t know when they’ll be home; they don’t care when they’ll be home; they are annoyed; they want to get off the phone; why are you still here, in their ear, in their head, in their life. They say “I don’t know,” they mean “go away.”
The VC. Actually, they do know. They think they know the exact answer, and they’re pretty sure they’ve known it the whole time. They want to say it so bad; they’ve been waiting the entire meeting for you to ask them what they think. But they also want to be humble; they want to be liked; they want to prove to you that they’re a more sophisticated thinker than a sycophantic ChatGPT. So they say “Well, I don’t know, it’s a tough decision, but…” and then tell you what they think the answer is anyway.
The VC, when it matters. They again think that they know what to do, but this time, they don’t want to be responsible for it. Yes, they grandstanded for ten minutes about their idea, and then defended it for ten minutes more, but they started their lecture by saying “Look, I don’t know what’s best here.” And they can’t be held accountable for anything they say after that—unless, of course, things go well, in which case, remember how this was their idea?
The Warren Buffett. The enlightened “I don’t know.” The serenity “I don’t know.” The “I don’t know” from people who don’t know, and are actually comfortable with that. The troll asks the question; the wise man says, “I don’t know, but I know how to find out.” The audience erupts.
The Gregory Olinovich. I don’t know, and I’m panicking. Nothing adds up. The ground is getting closer; the sirens are growing louder. I’m sitting at the controls of an incomprehensible machine, and I don’t know how any of it works. I’m barely hanging on, with an everlasting fire underneath me. The question demands an answer, and the only thing I know is that my head is flatlining and I have absolutely no idea what to do about it.
Anyway, this is a blog that’s ostensibly about tech, and to be a blog in Good Standing in Silicon Valley—or a podcast, or a Twitter talking head, or a useful guest on a conference panel—we’re supposed to talk about the future. That’s the preferred direction of our discourse: Forward. Looking ahead. What’s newly dead? What’s coming; what’s next? What will be the new new thing? What big change is happening in the world?
This is also a blog that’s ostensibly about data, so here are some things that could happen in the data industry:
The next huge business is a really good analytical copilot. It lives in SQL editors or Python notebooks, and quickly handles all the rote tasks that data analysts do—it writes tediously long queries; it generates boilerplate joins; it correctly does time zone math.1 Suddenly, questions that took an analyst a day to answer take an hour.
The next huge business is a really good chatbot for answering analytical questions. You ask it which marketing campaigns produced the most valuable leads; it writes a query and gives you an answer. It consults with you to refine your questions;2 it offers suggestions; it takes feedback to make its answers better. It delivers on what we’ve been promising for decades: A true self-serve experience. Suddenly, everyone is answering quantitative questions on their own; no analyst required.
The next huge business is a really good analytical agent.3 You give it an ambiguous, open-ended question—what are good ideas for a new marketing campaign? How can our sales team improve their close rates? How do we make more money?4—and it swarms a database with every possible question it can think of. The robots brute-force their way through thousands of curiosities; they hammer the database with mostly fruitless questions. But eventually, they find an interesting blip and ruthlessly pick it apart, until something useful falls out. It is the holy grail of analytics: Automated “insight” discovery, delivered by a relentless army of robots and a bonfire of AI tokens.
The next huge businesses are specialized analytical agents. Want to analyze your company’s finances? There’s a bot for that. How do people use your product? There’s a bot for that. Who should you draft in your fantasy football keeper league? There’s even a bot for that.
The next huge business is a really good conversational chatbot on top of documents, emails, Slack messages, and customer call transcripts. Executives stop asking for quantitative reports, and instead just ask the chatbot for advice. YC funds dozens of startups to collect data to feed into its maw: Customer interviews, screen recordings, videos of people browsing the aisles in their stores. Traditional data pipelines begin to decay, because why should we bother converting questions like “what are our customers upset about?” into streaming click events and a statistics problem when we can just ask a bot that’s read every support ticket?
The next huge business is a really good context layer that collects the same documents and emails, and exposes them to ChatGPT or Claude. There’s no money in building another chatbot, people say; all the money is in giving existing chatbots better data. There is no AI strategy without a data strategy, people say; the way to get the most out of AI is to comprehensively integrate all of your data together, in a carefully defined semantic ontology.
The next huge business is a huge bucket of loose text files. Ontologies and semantic layers get bitter lessoned, and the companies that try to cleverly integrate data are steamrolled by those that stuff every corporate communication into one giant folder that LLMs can read from.
Or, none of this works. Copilots make analysts marginally more efficient; the agents never answer meaningful questions. Companies use LLMs to search through their messages and notes, but the models never do productive analysis. Nobody buys the chatbots and everything stays BI, because charts were all that people ever wanted anyway.
Though these stories aren’t exactly mutually exclusive, they aren’t quite compatible either. It doesn’t make sense to build specialized chatbots if we feed everything into Claude; it doesn’t make sense to give analysts a blinged-out IDE if nobody hires analysts to answer questions anymore.
So which one wins? I don’t know, and man, it’s a full Gregory Olinovich.
Because if there is a single story that explains how AI changes the world, it is that it happens by accident. ChatGPT—the most valuable AI product in the world, and the one that is turning the entire internet into a chatbot—was a last-minute project hacked together by a group of volunteers at a research lab. Nobody thought too hard about use cases or ideal customers; nobody asked how big Gartner predicted the market for chatbots would be; there was no two-by-two matrix of competitors. To the extent that there was a grand plan for ChatGPT, it was to launch it, and then shut it down.
Almost certainly, whatever changes happen in the data world over the next five years will also happen because of similar accidents. People have already tried to build that entire list of products in the previous section, with varying degrees of success. But will the next model release suddenly make one of them work? Will the next startup that tries to make an analytical chatbot make an interface for it that people unexpectedly love? Will someone hack together an automated business analyst with just the right incantations in their prompts, and it starts to work? As soon as one of these dominoes falls, an army of new startups will rush towards that timeline, like moths towards a flame.
If the swarms of agents work, suddenly dozens of companies will try to build ways to make it cheaper to run thousands of queries all at once. If semantic ontologies and context engineering make chatbots useful, the context layer will become the next powder keg in IT software. If someone finds answers in messy buckets of text and video files, a hundred data pipeline companies will chase that heat. Or, more generally, the entire analytical world—and probably, every other software vertical—depends on which random breakthrough happens first.
So what do you do, when the winners are chosen by lottery?
There is a lesson I learn every Thursday night, and forget by Saturday morning: To figure out what a blog post is about, you have to write it first. You might think that it goes the other way—an idea starts in your head, it becomes an outline on a piece of paper, and then a blog post on the internet—but that’s never how it works. Epiphanies come from typing, not thinking.
Analogously, in a way—Cursor was founded by recent college graduates. They were not hardened engineers who’d Seen Things; they were not part of some corporate mafia. They probably didn’t have a detailed corporate business plan about wedges, growth strategies, marketing channels, or their second step. Instead, they were a few engineers who primarily differentiated themselves among the millions of other engineers who had the same idea—“what if someone put a chatbot inside of VSCode?”—by being the ones who built it. Though Cursor no doubt did some clever things, the most important thing was that they did it at all.
Generational riches, it turns out, also start with typing, not thinking.
There’s a scene in Margin Call where Jeremy Irons, who’s playing the hardened CEO of a titanic bank on the cusp of collapse in the acute hours of the 2008 financial crisis, contemplates his mortality:
Do you know why I’m in this chair? I’m here for one reason, and one reason alone: I’m here to guess what the music might do one week, one month, one year from now. That’s it. Nothing more. And standing here tonight, I’m afraid that I don’t. hear. a. thing. Just...silence.
If you work in tech for long enough, you, like Jeremy Irons, will begin to hear the music. You learn the realities of life in the trenches. You see the mistakes that everyone repeats. You talk to customers in whatever market you specialize in; you hear of their troubles. You begin to see the patterns; you pick up on the chorus and common refrains; you hear the way it all rhymes. You develop an intuition, and that intuition becomes your edge. The kids might work harder, but if you can hear the music, you can work smarter.
Unfortunately, there is no music right now. The fog of AI—the wild randomness of today’s technological developments and of which products catch a viral updraft and which don’t—has silenced it. No IDC report on market sizing matters; no engineering fundamentals will save you when engineering becomes industrialized; no SaaS playbook works when nobody can say for sure that SaaS will even be around in ten years. Perhaps, no experience even matters. There is no such thing as a long-term plan. There is just step one, and how you respond when the market tilts under your feet, and some new technical change punches you in the face.
An odd fact about the internet is that we’re all a few clicks or keystrokes from incomprehensible power and wealth. Right now, if you log into Robinhood and click on a few buttons in the right order, you could retire next week. Type a few thousand of the right characters into a code editor, and you’ll end up pulling the technical strings that control the world. Sure, the odds of that happening are small, but it is still strange—we are all one fifteen minute fugue state from owning an island.
What are those characters though? Obviously, nobody knows. But now, more than ever, it seems like the only way to find them—no matter who you are, whether that’s a grizzled veteran or a college student—is to start typing.
Lol, no, it can’t do that; this isn’t AGI.
By which I mean, it refuses to answer your questions until you tell it why you’re asking them.
I have no idea if either of these tools is good, but one of them lists a bunch of trending Kalshi bets underneath its chatbot that promises “faster insights and smarter decisions,” so, you know.
This is presumably what Thinking Machines is trying to do; according to The Information, “the company aims to produce models customized to key performance indicators, specific business metrics that companies track, typically related to revenue or profit growth.” And they should know how to do this, because Thinking Machines clearly figured out that the best way to make tons of money is to promise to build a robot that tells you how to make tons of money.