Something good
Could it be? Is it she?
What if Anthropic did it? What if Anthropic did it, and we refuse to believe it?
We believe half of it. Man, do we believe that half. That half is all we can talk about: Look at how much more we can do with AI. Look at how productive we are. Sure, you can quibble about the details: That AI can be sloppy; that some of the enthusiasm about Claude Code is theatrics and bluster; that some of what people are building is needless, directionless, or, most damning of all, tasteless. But at this point, it seems naive or willfully retrogressive to say AI does not help us do more.
We’re obsessed with this half. Wall Street torched the stock market because it thought SaaS businesses couldn’t keep up with this half. The productivity gap “between ‘great traditional SaaS’ and ‘AI-native’ is a full order of magnitude,” says one venture capitalist. AI can make engineers “30x more productive,” says another. “Now that generative AI is here, your definition of speed has to increase 10x,” says a third. In a recent survey, 98.2 percent of engineers said AI saves them time. More than 50 percent say it makes their work better. We believe this half with a religious fervor: Look at how much we’re shipping; look at how much Anthropic is shipping. Look at how many features we’re adding; at how many agents we’re running; at how many tokens we’re burning. Look at how much money we’re spending.
Maybe that last one is part of the problem. Anthropic did it, and they’re making so much money because of it. There must be a catch; it must be a trap; this must be the pump before the inevitable dump. They will turn on us, because capitalism at that scale always does: “First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves.” If anyone is making that much money, best to be suspicious.
Or, maybe it’s that our employers are making so much money. Our productivity is not our gain. Engineers who ship ten times more software get rewarded with a 10 percent raise and nine fewer colleagues. “Companies are capitalistic extraction machines and literally don’t know how to ease up”, and “startup founders are out there draining people at a faster rate than at any time in history,” says engineer Steve Yegge. “You need to push back. … You need to educate [your company’s leaders] about sharing the AI value capture between the company and the employees, and how to strike a good balance of sustainability and competitiveness.”
But even that is incomplete, because it is not our employers who are telling us to do it. The accelerated pace is voluntary; the call to do more is coming from inside the house. People are burning themselves out because they like it. For example: “I’ve been having such an amazing time with Claude Code.” And: “ive done more personal coding projects over christmas break than i have in the last 10 years.” And: “I have NEVER worked this hard, nor had this much fun with work.” And: “Using Claude Code has a weird side effect: You don’t just get more productive, you actually want to work more.”
So we give it the language of addiction. It is a drug; they are our dealers. We are gamblers, tweaking for the next hand; for one more pull on the slot machine. We are leaving our friends’ parties to spend time with it. We go to bed thinking about it, and wake up eager to use it again.
That is one way to tell this story: The capital entrapping the labor. For the last twenty-five years, Silicon Valley spent billions of dollars trying to trick employees into believing they wanted to work long hours. Companies filled their offices with beer fridges and ping pong tables; they recruited people with “Cheetos, Fritos, and Doritos!”; they dared everyone to take time off. A spoonful of sugar, to help the medicine go down.
Coding agents are the next sleight of hand. “‘It is now one of the recruiting tools in Silicon Valley: How many tokens come along with my job?’ [Nvidia CEO Jensen] Huang said. ‘And the reason for that is very clear, because every engineer that has access to tokens will be more productive.’” And what is AI for, if not to make us more relentlessly productive?
But that is not the only way you could describe what is happening. Because there could be another half to the story: That AI makes the actual work actually fun.
Ever since people have had jobs, we’ve fantasized about leaving them behind. We daydream about vacations; we celebrate our retirements; we idolize the four-hour workweek. We imagine worlds in which work is done for us, and we live lives of infinite leisure; or worlds that sever us from our jobs, and we send some other consciousness to our offices. Progress—social progress; personal progress; technological progress—is a life with less work.
The last few months have been a different sort of science fiction. Anthropic is not freeing people from the burden of having a job; it is freeing people from feeling like those jobs are a burden. It is a drug that makes us like to work—not the stuff around the work, like the sugar high of an office full of toys or the actual high of an office full of drugs, but the authentic, honest work.
Yes, it is early. Yes, people might be thrilled by the novelty of AI, and eventually tire of it. Yes, the products and companies that people built with AI may collapse, and people may find out that they only thought they were working. Yes, it might be less of a productivity tool, and more of a fidget spinner that occasionally throws off something useful. Yes, a world in which everything is built by one shared brain could steamroller our culture. Yes, a drug that makes us like work is not without its mortal dangers. Yes, it is almost incomprehensible to imagine a world where work generally makes people happy, and it is even more incomprehensible to imagine that there wouldn’t be some dystopian catch. Yes, the AI companies could be invaders.1 Yes, there is more to life than your job; yes, there is a life beyond this.
But jobs are (probably?) an inevitable fact of life. And right now, thousands of tech companies are trying to reinvent all of them, just as Anthropic is reinventing software engineering. They could see two things. One is that productivity sells. The future is industrialization. It is turning everyone into a factory foreman, the anxious chaperone overseeing lines of AI employees. It is an inbox full of chirpy agents; it is an inbox for monitoring the situation. It is a future optimized for output.
The other is that fun sells. It is that productivity can be a side-effect, and jobs we tolerate can be turned into jobs we want to do. It is that the best question to ask is not, “How do I make this person ten times more productive?,” but “How do I make this job ten times better?” It is optimizing for what people like.
The story we believe is the future we’ll build. And what has happened over the last few months—it is something. Maybe it is something big; stories like that get a lot of clicks. Maybe it is something bad; that gets even more clicks. But what progress it would be—what a genuinely better world we could make—if we allowed ourselves to believe that it could be something good?
Sacré bleu!

I guess this is a polarizing take, but I’m just shocked at all these people saying that it makes coding fun. If anything I have far less fun using Claude code. My job changes from thinking deeply about the craft to reviewing and correcting AI slop that’s like 80% right.
I definitely agree that the Claude web UI is super useful as an enhanced Google search (e.g., “give me an overview of Linux cgroups API”), or for generating one-off data analysis scripts (“build a histogram of request durations from this CSV”). But for real work in large existing repos with Claude Code, I’m not so sure. I tried using it for a medium-sized project at work recently, and I’m fairly convinced that I spent more time cajoling it than I would have writing it de novo myself.
I don’t find it fun at all. At first, yes it made everything easy. I could spend 15 mins on something that used to take an hour or two.
But now I’m still burning that same hour or two, now having to intensely review and fix the outputs. AI generated content looks great on the outside but the inside is empty.
Having to fix workslop all day is more mentally draining than just producing the product myself.