Go crazy, folks, go crazy
I’m not saying it’s right. I’m just saying it might work.
Right now, millions of engineers are using AI to do their job. “Top engineers at Anthropic, OpenAI say AI now writes 100% of their code,” says Fortune. Claude is now effectively writing itself, says the person building Claude.1 “When AI writes almost all code, what happens to software engineering?,” asks a software engineer. This is all a very well-known phenomenon at this point.2
By contrast, data analysts, who also write a lot of code, are not using AI to do their jobs. Though most use chat applications like ChatGPT, a 2025 survey from dbt Labs found that less than a third are using dedicated development tools. Things may have changed since that survey—it’s from early 2025, which was years ago these days—but by most accounts, AI seems to be upending analysts’ lives much less than it’s upending engineers’.
You could have two theories about this:
Analysts do a job that is uniquely hard for AI. We’ve talked about this theory a lot. Software projects are relatively contained—there is a codebase; there are users who give feedback on what that codebase does; there can be specifications for how you want to update that codebase to improve it; all of these things can be written down. Software is also relatively testable—change the code; push the new button; does it work? Data analysis is neither of these things. To solve an analytical problem, you have to know about a codebase, but also a business, a market, the thoughts inside of people’s heads, and the location of nearby electrical substations. You cannot write all of this down. Moreover, analysis isn’t testable. You find out if your recommendation was good after the recommendation plays itself out.
Or, analysts are cowards.
I mean, no, not exactly. But here is a history of popular generative AI products, and there is a pattern:
Google invented transformers, which were foundational to the development of large language models. Putting a chatbot on top of transformers was a fairly obvious idea, but Google was cautious about releasing a product like ChatGPT, because, in part, they were “too scared” that “chatbots say dumb things.” So, they didn’t; OpenAI eventually did—not because they knew it was going to work, but because, eh, why not?—and practically overnight, ChatGPT became one of the most used products in the world and OpenAI became one of the most valuable companies in the world.
Then, people quickly realized that AI is good at writing code. Initially, most AI-powered coding products, like GitHub Copilot or Cursor, were fundamentally about asking for permission: They proposed changes in code editors, and engineers were asked if they wanted to accept or reject the updates. Simply accepting all of the model’s edits was a fairly obvious idea, but that made people nervous. So most tools didn’t encourage it, until Anthropic said, eh, why not?,3 and released a fully autonomous coding app. Practically overnight, Claude Code became one of the most influential products in the world, and Anthropic became one of the most valuable companies in the world.
At its core, Claude Code is a bunch of looped requests to Claude. A user says “add a button to my website;” that is turned into a prompt to Claude; Claude’s response is fed back into another Claude; and again; and again; and so on. But why stop there, many people wondered. Could you have a manager Claude tell the first Claude to add a button to the website? Could you have a director Claude tell the manager Claude what problem it needs to solve, and have the manager Claude decide to add a button on its own? Could you have a CEO Claude tell the director Claude to hit their quarterly targets? Could you have a board of Claude tell the CEO Claude to sharpen their pencil?4 Which is all to say, Gas Town—i.e., an army of Claudes, telling each other what to do—was a fairly obvious idea. Still, most people didn’t try to build it—not in its unhinged, explosive form, anyway—because it sounds dangerous and expensive. But then, someone did, and it got a bunch of attention, because it was unhinged and explosive.
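If you want to picture what “a bunch of looped requests” means mechanically, here’s a minimal sketch. It is not Anthropic’s actual implementation; `call_model` and `run_tool` are hypothetical stand-ins for an LLM API call and whatever executes the model’s actions.

```python
# A hypothetical agent loop: the model proposes an action, something runs it,
# and the result is fed back into the next request until the model stops.

def call_model(messages):
    """Hypothetical stand-in: send the conversation to an LLM, get back a reply dict."""
    raise NotImplementedError

def run_tool(action):
    """Hypothetical stand-in: execute whatever the model asked for (edit a file, run a test)."""
    raise NotImplementedError

def agent_loop(task, max_turns=20):
    messages = [{"role": "user", "content": task}]   # "add a button to my website"
    for _ in range(max_turns):
        reply = call_model(messages)
        if reply.get("done"):                         # the model decides it's finished
            return reply["summary"]
        result = run_tool(reply["action"])            # do the thing it asked for...
        messages.append({"role": "assistant", "content": str(reply)})
        messages.append({"role": "user", "content": f"Result: {result}"})  # ...and loop
    return "Ran out of turns."

# A "manager Claude" is the same loop one level up: its action is kicking off
# another agent_loop with a narrower task.
```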
Of course, if a bunch of Claudes are good at managing our software projects, maybe they’d be good at managing our personal lives? Our lives aren’t that complicated; they’re just scattered. They’re in our personal emails, and our work emails, and texts, and calendars, and in our documents, and our bank statements, and our forgotten Banana Republic Rewards Credit Card accounts. Giving Claude access to all of these things and telling it to be a personal assistant is a fairly obvious idea, but it’s a horrifying one. So, most companies that tried to build personal AI assistants did so “responsibly,” by carefully gating what the assistant could see and do. And then an engineer said, eh, why not?, and yippee-ki-yayyed together Clawdbot, an AI assistant with access to absolutely everything. It became, in a month, the world’s sixth-most popular open source software project.5
Look, this is a responsible blog that believes in doing responsible things. It believes that it is correct for AI data products to focus on delivering “trusted insights on your enterprise data.” It believes that, “as AI agents evolve from experimental sidekicks to productive team members,” of course “enterprise leaders must design systems that are not only powerful but trusted, governed, and simple to use.” It believes that if the world were right and just, the product “that helps data teams deploy analytics agents they can trust” would be the product that earns everyone’s business. We should be rigorous. We should measure twice and cut once. We should be data stewards, and master data managers. We should not pursue the fairly obvious—and obviously irresponsible—idea of giving an AI agent unfettered access to our databases, our documents, our emails, our Slack messages, our Zoom calls, our meeting notes, and our customer support messages, and telling it, “Go find me something useful, and don’t come back until you do.” We should not launch a hundred Claude Code sessions and instruct them all to chase whatever hunches they have about how we could make more money. We should not have Codex test a new hypothesis every three seconds, until one finds a billion-dollar needle in a haystack.
But someone will. Someone will make a product that does that. And given this environment—and our recent history—which product are you betting on? The slow and steady one that carefully audits its structured context stores and tells users it doesn’t have enough information to answer their question? Or the one that cranks the AI dial to 12? Will it be the product that worries itself with governance and keeping inference costs low, or the one that believes that a dollar spent on Opus is probably a lot more productive than a dollar spent on an analyst, and tries to set a data center on fire on your behalf?6 Is it the AI agent that’s optimized to oh-so-precisely answer mundane questions like, “How many shirts did we sell last week?” over and over again via a Slack integration? Or is it a battalion of Codexes and Claudes that are all told to relentlessly and recklessly find ways to make more money?
Yes yes yes, I know, I know. That product is wrong. It doesn’t always work. It makes stuff up. It’s not reliable. It’s not secure. It’s dangerous.
Tell that to Google. Tell that to Copilot. Tell that to a graveyard of AI personal assistant startups that stood on the same righteous soapbox.
When you’re on the inside, you forget that most people don’t care about the details that you do. You spent your life carefully researching AI safety inside of a cleanroom at Google; how could the public ever want to use a chatbot that doesn’t meet your exacting standards? Your entire job is double-checking the numbers; how could anyone ever trust an AI that isn’t writing queries through a version-controlled semantic layer? Up close, we can’t just do it; we have to do it right.7
But outside of your particular domain, how many terms of service do you blindly accept? How many defaults do you change? How often do you YOLO your way through the warnings and fine print? How regularly do you say, “this is too long, I ain’t reading all that, just show me something good already?”
It’s a form of the Gell-Mann amnesia effect: Within our area of expertise, the more we worry about the details, the more we forget that other people don’t. But outside of it, we’re like everyone else—we just want to see something cool.
These days, people spend a lot of time talking about the future of software. From an earlier post, here’s one way you could think about it:
Before we all had computers and phones and Instagram, making art was hard. You had to have a fancy camera, or painting skills, or the ability to stitch together film strips into a video. Because art was expensive and somewhat scarce, we valued the art itself.
Then it became easy to make. You can create great art in seconds, sometimes without even meaning to. And as the cost of making it fell, the value and notoriety of each individual piece of art fell too.
So we started to care more about the creators than their specific creations. Like: Name that one great Kai Cenat stream. What’s your favorite Mr. Beast video? What’s Charli D’Amelio’s masterpiece? Some things might be more memorable than others, but there is no opus. Very little stands on its own. Popularity comes from a personality and an amorphous body of work.
Now, the cost of creating software is also going to zero, as they say. So would we not expect to see the same patterns here? While that doesn’t mean big software businesses will go away—there will always be workhorse products that do accounting and manage warehouses and fly airplanes, just as there are still big-budget Hollywood movies—could there not also be an ecosystem of influencers who make software that is popular because they made it? …
Are Nikita Bier’s apps products or content? Is he an entrepreneur or an influencer? Is signull, an anonymous tech commentator, creating a product studio or a hype house? Is there even a difference?
There is another parallel, perhaps. When we are drowning in content, the only way to get people’s attention is by being crazy. Software may not be so different. Software must be disciplined, many people will say. It must be made by well-trained teams of thoughtful professionals, because that is the right way to do it.
Sure, maybe. But the right way and the winning way aren’t necessarily the same thing. And maybe the future of software is stuff that’s made by one person who was willing to try something crazy.
Overseeing Claude? Observing Claude?
Even if you aren’t aware of it, your retirement account is.
“Our goal with Claude Code is to better understand how developers use Claude for coding to inform future model improvements.”
Could you have the manager Claude tell the board Claude that they’re lowering their growth targets this quarter, but they’ll make it up in the back half of the year?
Twenty projects have more stars on GitHub than OpenClaw. Fifteen are lists of engineering resources. The other five are React, Python, Linux, Vue, and TensorFlow.
I once asked people how often in their careers they found a truly meaningful “insight” in their data. The average answer was once every two years—or, if measured by an analyst’s salary, once every few hundred thousand dollars. How many Gas Towns of Claudes could you run with that? How many different moonshots could it explore? How many useful things would it find? Do you think it would be less than one?
When we launched Mode, we had to build a way for it to connect to customers’ databases. A lot of people used cloud databases, which we could connect to directly, if people gave us their passwords. But nobody would ever do that, we thought; you can’t expect people to just paste important passwords into a form on a random startup’s website. So we spent several months building a tiny application that people could install on their own servers, which made it possible for them to use Mode without ever sharing their passwords with us.
Almost immediately, everyone complained. “I have a password,” they said, “can’t you just use that?” “Here it is,” some said, in a support ticket, “please get me connected.”

I spent some time doing contract work with a team of analysts that almost exclusively used AI for analysis. I don't mean that they asked AI to write complex SQL queries or occasionally asked it to label unstructured data. I mean that they uploaded the dataset and told it to "act as a senior analyst" and write the report, and then they copy and pasted the report into a doc and sent it.
They weren't being lazy or irresponsible; that's quite literally what we were instructed to do, and the workload didn't allow for anything much more 'bespoke' than that. I used AI to write Python scripts or Google Apps scripts or to help me finesse the wording of a tricky paragraph, but I refused to use it for wholesale analysis because every time I tried, the result was absolute nonsense. Or at least it was like 30% nonsense, which I personally believe is too much nonsense.
But what I realized over time is that it actually didn't matter. The people charged with reading the reports were mostly not reading them anyway, and the ones that did mostly didn't care if the data was even accurate, let alone if it was statistically significant or used a "rigorous methodology." They just wanted a stat they could bring to their boss to say "something I did worked" or to say "here's why we should do this idea I have."
As you said: “Claude is a bunch of loops.”
Isn’t that everything in life and business? A loop of gathering information to guide the next loop (that gathers more info)?
Analysis is literally this. And as these things become more widely adopted, the tentacles of those loops will reach into every single department, regardless of our concern.
The loops don’t scare me. The people who don’t want this do. They concern me greatly. How did they learn anything? A feedback loop. An imperfect one that was refined over time that eventually built habits, beliefs, and intuition (we call this “taste” now).
It’s how any child learns anything.
AI won’t eat your lunch; it’ll just make it far less interesting than the kid who brought a full course meal to their grade school cafeteria and then shared it with others.
And we all know what it’s like to show up with a soggy ham sandwich.