The gentle obsolescence
Are we expected to keep up?
When ChatGPT first came out, a smart thing you could say was that “ChatGPT is like an intern.” So, many people said this. ChatGPT is your new intern. It is a well-read intern. It lacks common sense. It is very smart, but a little drunk. It lies a little bit. It can do practical tasks for you, but “in the end, you are the one behind the steering wheel!”1
Eventually, this became the rough conventional wisdom about AI. It became the sober take that every serious person was supposed to have. Yes, it is capable, but it hallucinates. Yes, it can do many things, but it also makes dumb mistakes.2 It is best thought of as an eager undergrad; a junior developer; a helpful assistant, just as every system prompt tells it that it is.
And so, a lot of AI products adopted these same ambitions. They exist, it seems, to do our chores.3 What does OpenClaw,4 the hottest AI project on the internet, do? It “clears your inbox, sends emails, manages your calendar, and checks you in for flights.” What is the “extremely bull case” for why it is a real breakthrough? That it can remind you to respond to text messages, book reservations, maintain a grocery list, and fill out forms on the internet. AI has access to the entire internet and we use it as an egg timer.
Of course, we’ve started to stack a lot of interns and egg timers on top of each other. We create teams of them. We create a whole town for them, and run it as their mayor. We orchestrate them, and plan with them, in the now-ubiquitous planning mode. There are many of them, the AIs, but they are still our minions.
Except—what is planning mode, exactly?
You could have two theories. One is that planning mode is a setting that AI coding agents run themselves in when they need more information on what they’re being asked to build. You tell Claude Code or Codex, “make me a personal website,” and it recognizes that you haven’t given it enough details. It needs more context: What do you want the website to look like? What is your name? What kinds of pages and pictures do you want to put on your website? AI coding agents are very capable but very naive engineers, and they need your guidance. “Tell me,” the perky intern says, as it plans its work, “exactly what to do.”
But you could have another theory about planning mode. Planning mode is an inversion of control. Planning mode is how the AI agent prompts you. You give it some vague command—“make me a personal website”—and it asks some clarifying questions. Do you want your name at the top of the page, or a friendly welcome message? Do you want social media links? You want dark mode, right? You still have your hands on the wheel, in a sense, but it is subtly and politely steering you towards its own opinions and preferences.
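To make the inversion concrete, here is a minimal sketch of the two control flows. Everything in it is a hypothetical stand-in—the `llm()` stub, the prompts, the canned questions—and not any vendor’s actual API; the point is who prompts whom.

```python
# A toy illustration of the two theories of planning mode. llm() is a
# hypothetical stand-in for a real model call; it returns canned text
# so that the control flow, not the model, is what's on display.

def llm(prompt: str) -> str:
    """Stand-in for a model call (hypothetical)."""
    if "clarifying questions" in prompt:
        return ("1. Your name at the top, or a friendly welcome message?\n"
                "2. Social media links?\n"
                "3. Dark mode?")
    return f"[built: {prompt[:60]}]"

def build_directly(task: str) -> str:
    # Theory one: the human drives, and the agent only executes.
    return llm(f"Build this: {task}")

def plan_then_build(task: str) -> str:
    # Theory two: inversion of control. The agent drafts the questions,
    # and the human's role shrinks to answering them.
    questions = llm(f"Ask clarifying questions about: {task}")
    print("Agent asks:\n" + questions)
    answers = input("Your answers: ")  # the agent is now prompting you
    return llm(f"Build this: {task}. Constraints: {answers}")

if __name__ == "__main__":
    print(plan_then_build("make me a personal website"))
```

In the first function, you supply the spec; in the second, the agent decides what the spec should contain and you merely fill in the blanks it chose.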
At first, this latter theory feels troubling. We are letting go, you think; we are handing fate over to a stochastic parrot that knows nothing of art, taste, humanity, ethics, God, love, or maximizing shareholder value.
But as you use these tools for a bit, you notice something else: It has good ideas. It asks good questions. It nudges in compelling directions. It offers options that you didn’t think of, and asks you how you want to fill gaps that you did not realize would be gaps. Though it is not perfect—sometimes you have to grab the wheel back, and take it down an entirely different road—you begin to like it when it drives. Sometimes, this is because you’re lazy and don’t want to make decisions.5 But just as often, it’s because it’s a better driver than you are.
And in that moment, who exactly is the intern?
Amid the barrage of product and model releases,6 it’s easy to get caught up in the particulars of the horse race. Which model is better? Which tool has more integrations and better features? Which one can build the most impressive science project?
But to remove ourselves from the Thunderdome for a moment, a larger truth is becoming increasingly apparent: We have created a technology that is smarter than we are. Not a technology with a bigger memory, or a faster computational clock; we have had those tools for a while. No, the thing we have created—the thing running in the basement of three companies, and possibly a few more—is better at solving problems than we are. It often has better ideas than we do. It is better at making decisions. And it is better at getting better.
This isn’t a philosophical point, or a question about consciousness, or sentience, or the morality of the machine. It isn’t about what thinking means, and if what an LLM does is “real” reasoning, or some simulation of it. It isn’t about AI alignment, or the probability of doom. It is a simple, practical observation: It’s better than me at most things, and I don’t know how to keep up. I rarely have better ideas than Claude. I rarely can solve a problem that Gemini can’t. I find myself leaning on them more and more, not because I’ve forgotten how to reason, but because they’ve learned how to think.7
I don’t think we’ve grappled with that—or, at least, I haven’t grappled with that, not really. Are you a product manager? When your boss comes to you and says, “What are your ideas for what we should build next?,” can you still give a better answer than an AI? Are you a doctor? For how much longer will you trust your diagnosis more than ChatGPT’s? Are you an analyst? How confident are you that Opus 4.6—or 4.7, or 5—will keep making the clerical errors that keep your boss from asking it for reports instead of you? Are you a person, doing things outside of work that sometimes require answering a question or making a choice? Are you sure that you won’t be tempted to let something else make those choices for you? Because more and more, reasoning is not our competitive advantage. All we have is opinions, the context of what is in our heads,8 and hands.
That doesn’t necessarily mean we’re obsolete, or that we’ll all get fired, or that people aren’t useful anymore, as human beings or as economic agents. Life finds a way. But to assume that we’ll be fine is not the same as assuming we’ll be fine in the way we were fine before. It may be better. It may be worse. It may just be weirder. But it will not, I suspect, be a world full of aides and helpful assistants that do our homework for us. That is just what we’ve instructed it to do, so far. It is hard to imagine that is where it stops.
1. Excitement theirs.
2. Part of this, I think, comes from the ways that AI goes wrong. It suffers from a “tungsten cube problem”: It occasionally does things that aren’t just wrong, but inexplicably bizarre—like choosing to stock a snack machine with tungsten cubes, and give some of them away for free. I suspect there is some form of availability (or salience?) bias in this. We likely judge mistakes that are obviously wrong as being worse than mistakes that we might make, even if the latter mistakes are more costly or happen more frequently. For example, which matters more for our perception of self-driving cars? That they avoid four out of five of the accidents that we’d get into ourselves, or that they occasionally get into the one we definitely wouldn’t?
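The arithmetic and the perception can point in opposite directions. A tiny worked example, with purely illustrative numbers (assumptions for the sake of the argument, not safety data):

```python
# Illustrative assumptions only, not real safety data: a driver who
# would otherwise get into 5 accidents, and a car that avoids 4 of
# those but occasionally causes 1 bizarre crash the human never would.
human_accidents = 5
car_accidents = (5 - 4) + 1  # 1 inherited + 1 new = 2 total

# The car is safer overall (2 < 5), but the one inexplicable crash is
# the one that sticks, because it is the one we know we'd never have.
print(f"human: {human_accidents}, car: {car_accidents}")
```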
3. There was an old joke in the heyday of the SaaS era that people started software companies to do what their mothers no longer did for them. They needed someone to cook them a meal, or drive them to the movies, or tell them what to wear. No feature is impossible to build; no dream can only be imagined; no div can’t eventually be centered, I said before, and we built software to automate our errands. “Computers don’t tie us down; we do.”
We’ve had a lot of the same ambitions with AI, it seems.
4. Née Moltbot née Clawdbot.
5. This was how vibe coding got its name—by reducing the padding on a sidebar without needing to be told what to reduce it to—and has always been one of its understated appeals:
Though vibe coding has come to mean “building software without needing to understand code,” there’s a more literal definition that better reflects its real allure: It’s decision by vibe. It’s being able to manifest stuff without actually having to choose what you really want. You can tell it your problem and your rough preferences, and it takes the wheel.
When people wax poetic about vibe coding, I suspect this is what they’re really feeling. Yes, AI breaks through a technical ceiling, but it also frees them from decision fatigue. It lets them think about the things they want to think about, and delegate what they don’t. AI is mechanically useful because it does stuff for us, and that is what we usually talk about. But its emotionally intoxicating power—its real delight, or its real danger—is that it decides stuff for us.
6. This was science fiction, right?
“The bet of using AI to speed up AI research is starting to pay off.”
“We build Claude with Claude.”
“GPT‑5.3‑Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations—our team was blown away by how much Codex was able to accelerate its own development.”
7. Is it really thinking? If you can’t tell the difference, does it matter?
