29 Comments
Joe Jordan:

I think a major issue with this perspective is that floating point operations don't power the world. Fertilizer, concrete, and steel, all ultimately reliant on fossil fuels, power the world. Software is just one means among many to try to redirect the surpluses generated on the back of the real foundation of the world. And redirecting surpluses is about power at least as much as it's about knowledge. That is why I am skeptical about AI.

Benn Stancil:

Sure, as a broader philosophical point, there's probably an argument to be had about who has more raw, bend-the-world-to-their-will power: the CEOs of Google and social media companies, or Saudi Arabia and Exxon. Though I don't think that's necessarily an argument for skepticism about AI as a transformative thing (with no comment on whether that's a desirable thing). Even if computers don't run the world, they definitely changed it in really dramatic ways.

John Wessel:

I mostly don't want you to be right about this topic, but I think you are...

"ACPE—average compute per employee" - is that a Benn original?

Also, I have been thinking about this topic: Do you think there is an aspect where we have a sort of "death of expertise?"

Humans are generalist managers, and AI agents are the ones who know the specifics?

But maybe we have a few experts in the business of repairing, tuning, and maintaining AI agents, just like in a factory.

Maybe that is the next thing a lot of engineers will be doing: pretty much only designing, repairing, and tweaking agents?

Benn Stancil:

It's a derivative of a conversation with a friend, which is where most things come from, tbh.

And I could see that, kinda? If I had to guess, I don't think it'd be as clean-cut as that, where we become mechanics who know nothing but how to operate machines, and the machines know all about design or whatever. But I'd certainly guess that a lot of specific skills atrophy, like our handwriting becoming terrible. Stuff like, we all forget how to write tests, because the machines become very good at that.

John Wessel:

Ya - and there will be levels of mechanics that know more and know less. Some will just know how to push buttons and others will understand in higher fidelity how things work - much like things are today.

Eirc Sebaro:

$192K for a Junior Engineer? Did cheap AI write this article ;)

Benn Stancil:

I was a bit surprised by that too, but levels.fyi is usually pretty good?

https://www.levels.fyi/t/software-engineer/levels/entry-level/locations/san-francisco-bay-area

Nicole Edmonds:

Maybe it's because I'm not a software engineer, but my existential dread with this whole premise is less about the human displacement and more about the environmental costs. I'm certainly not a Luddite by any stretch as technology evolves and marches on, but there's a certain tipping point not far on the horizon. I really feel that a "TCO" approach that includes the environmental cost should steer much of these conversations.

Benn Stancil:

no you don't understand, it'll be fine because we'll all just move to mars, duh

Jonah McIntire:

I very strongly agree with the line of thinking described. As CPTO of a large organisation (~1k people), I'm trying to transform all the software-creating roles rapidly towards a world where we get the diligence and volume of AI with the taste of our best human creators. You chose software engineering as a role in particular, and that makes sense due to its cost and its impact. Not every role has the same marginal returns to intelligence (human or artificial): engineering is nearly an apex role in this regard. The economics make less sense outside of your Bay Area salary estimates. Even before AI, the Bay Area was producing countless abstractions for engineers to use on the premise that they sped up and economized on scarce and expensive software engineers (e.g. Auth0 or Vanta or Stripe rather than home-grown versions of authentication, security, or payments). That never caught on in China, and barely in Europe: engineering labour is just less expensive. But AI's unit economics are on another scale: they will motivate anyone, anywhere, to change how they make software.

Anyway, good writing and appreciate that you took the time to publish it.

Benn Stancil:

I did wonder if the Bay Area was exactly the right comparison or exactly the wrong one. On one hand, that's where engineers make the most, and where pricing pressure is the highest. On the other hand, they are presumably the most talented, or at least viewed that way. When people open large engineering offices in other places, that already feels particularly industrial, where they see hires as a little more fungible. Maybe the cost savings aren't as large there, but (rightly or wrongly) those engineers also seem to be hired to build to a spec.

Jonah McIntire:

Exactly.

Matt:

Great read! At least, maybe programming will survive as an artisan craft. Maybe some day people will proudly say they're using "hand-coded" software, just as we now romanticise hand-painted portraits, show off hand-woven clothes, or celebrate live music. Outcompeted in our day-to-day lives, but still reserved for when we crave something premium or just authentically human.

That said, I don't think people care about programming nearly as much as proper arts and crafts. Like, music and paintings are the final "product" in and of themselves, whereas programming is largely just a tool - a means to a separate end. So any romanticisation of programming itself will probably be a niche amongst developers, whereas programmed products may get more love, e.g. games made by hand vs games made with AI. Hand-made guitars vs live music.

Benn Stancil:

I wondered that too. I'm sure there will be some people who market themselves that way, and I bet there will be some amount of it, like how some movies are marketed as being authentic because they're shot on 35mm film.

Still, I guess the real question is, will fully human-made things have a different character in some identifiable way? I would guess no, honestly, but if we do end up industrializing things more, maybe all the other software starts to become more generic and lifeless? I don't know.

Evan Gray:

What becomes the point of all this code/software though?

As humans, there is some limit to what we want to happen outside of our understanding or control. Traditionally, the limit in B2B is how fast an organization is willing to change/adopt new processes. Does more code change that? I think we're already close to the limit; there's already more code than organizations can handle - witness the thousands of AI agent companies. I have a hard time imagining how an organization increases this threshold for change - how do we get from today to a CEO just saying "Hey AI, please run the company"?

A dystopian aside:

So much innovation from computers and data has already gone towards analyzing and manipulating humans at massive scale, and I'm not sure what happens when we accelerate that further. I imagine a world with every human action and thought captured as data, then instantly fed into a model and used to generate a reality most likely to get us to take some action (e.g., buy something, say something, etc.)... Maybe too sci-fi.

Benn Stancil:

Yeah, so that was one place where the industrial analogy doesn't really work for me - when you make cheap clothes, you might want to make 2x or 10x or 50x more of them, especially if you can do it cheaply. Do you want to do that with software? Like, does Microsoft want to make 50x more software?

I think the answer is...yes? For one, they would definitely want to make it 50x cheaper. Second, I suspect there are lots of things that they would do if they could make it 50x cheaper, where they'd make more custom software, more things for very specific problems, and so on. And third, I bet they'd make a lot more versions of stuff, as prototypes and ways to test ideas. Why make one new version of a feature if you can make 50 and play with all of them?

Ultimately, I think that last point is where the limit is - it's around how fast people can make decisions on it. The product leader who can say "give me ten attempts at this," get them in a few minutes, and spend most of their time approving or rejecting improvements feels like about as fast as you can feasibly go. And it feels like we're a ways away from that.
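To make that concrete, here's a rough sketch of that fan-out-and-review loop. (generate_variant is a made-up stand-in for whatever model call actually produces the drafts; nothing here is a real API.)

```python
from concurrent.futures import ThreadPoolExecutor

def generate_variant(spec: str, seed: int) -> str:
    # Hypothetical stand-in for a real codegen call (an LLM API, an agent, etc.)
    return f"draft #{seed} of: {spec}"

def ten_attempts(spec: str, n: int = 10) -> list[str]:
    # Fan out n independent attempts; the human only ever sees finished drafts.
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(lambda seed: generate_variant(spec, seed), range(n)))

for draft in ten_attempts("a new onboarding flow"):
    print(draft)  # the product leader's whole job: approve or reject each one
```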

Michael:

I found myself discussing multiple times this week that our job profile as software people is being changed into something completely different yet unknown. But your take on us as mechanical foremen does have a point - especially when you mix in the "bitter lesson" reference. That one was huge - I have seen it first hand in the computer vision domain already. Yet I make the same kind of mistake Rich Sutton talks about... What a great reference, Benn.

Benn Stancil:

I've increasingly become a believer that there are two dynamics that define all of Silicon Valley - the bitter lesson, and wanting to make a ton of money all at once: https://www.youtube.com/watch?v=BzAdXyPYKQo

Those are our two gods; everything else is just a distraction.

Larry Stewart:

This is a fascinating comparison. I’m curious, though: how might the cost of verifying, securing, and integrating all that AI-generated code affect the overall savings you’ve outlined?

Benn Stancil:

I'm sure it adds to it, but all of that seems relatively small in comparison? Or at least something that could definitely be optimized? And even if those systems aren't that efficient - you do a lot of duplicative stuff, or have to rewrite a lot of code - it seems like the volume of production would still overwhelm all of that. That hypothetical six-bot factory would write 12 trillion lines of code. That is *so much.* (This random article said that Google's codebase was 2 billion lines of code in 2017: https://www.facebook.com/techinsider/posts/googles-entire-code-base-is-two-billion-lines/681289028736123/)

So sure, all that coordination will cost some money, but there's such an unbelievable amount of energy in a system that could produce that much approximately useful code (as opposed to, say, 12 trillion lines of random numbers) that it seems like you can still lose a lot and produce a ton with it.
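For a sense of scale, just dividing the two numbers above:

```python
# Back-of-the-envelope, using only the figures from this thread
factory_loc = 12_000_000_000_000  # the hypothetical six-bot factory's output
google_loc = 2_000_000_000        # Google's reported codebase size, circa 2017

print(factory_loc / google_loc)   # 6000.0 -- six thousand Google-sized codebases
```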

Lucas J:

Really interesting perspective

Alex Khrustalov:

While I do agree that, with the increasing power of AI, software engineering will change, I don't see companies turning into software factories. At that volume and speed of generated code, there is no way to verify it the way we do now - with pull requests - because humans will still remain the bottleneck. And this brings a couple of problems.

The first problem I see is security. How do we make sure that there is no backdoor in the generated code? Who will be responsible for that - the machine or the operator?

The second problem is complexity: the complexity of the code will skyrocket, and we will eventually get to the point where, when something goes wrong, nobody will be able to fix it.

The third problem is motivation. If this trend continues, fewer and fewer people will feel the need to learn, and eventually we could face a serious skills and knowledge gap.

At the same time, I understand that these problems might be irrelevant from a future perspective.

Alex Stenlake:

I think there's a false assumption buried in here (and the article): that what we'd want to do is code code code. If we have that capability on tap, maybe what we'd do is get real judicious about how we deploy it. At the moment we code code code because there are more ideas/problems than coding time to address them. If that changes, maybe we spend more time designing and experimenting, judiciously selecting the right problem to solve.

I could be wildly wrong, but to my mind, spending less time in the execution/action loop will leave more time for strategic thinking and validating that you're doing things correctly. Doubly so if we build our systems to be "machine-to-machine". Maybe the spaghetti issue goes away when we have more brainpower to make sure we're solving the problems we intend to solve? Probably not completely, but there'd certainly be less of it than with the 'thousand monkeys' approach to engineering!

Performative Bafflement:

> I could be wildly wrong, but to my mind, spending less time in the execution/action loop will leave more time for strategic thinking and validating that you're doing things correctly.

Yeah, I was going to point out, this is a matter of both unit tests and verified UI use, and both of *those* can be massively parallelized / automated, too.

You can have fully automated "works as intended" loops, fully automated UI testing loops, fully automated Chaos Monkey style "testing for breakage" environments, and more.

Like, we can basically tell Operator "click every button on this site and log what happens" today. Imagine that, but better and smarter. Websites have a mini combinatorial explosion, but it's usually only combinatorial for a human mind - throw a hundred AI minds at it, and you can probably explore the whole UI space in a way we never can today, and only surface the problems. Dynamically created pages or apps or software are a bit of a problem there, but you can probably reduce the search and eval space with categorization, which, again, is something the AI minds can do.
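Even today, a crude version of that "click every button and log what happens" loop is only a few lines of Playwright. (A sketch, assuming Playwright is installed; example.com stands in for the real site, and a real explorer would add parallel workers, deduplication, and smarter state resets.)

```python
from playwright.sync_api import sync_playwright

START_URL = "https://example.com"  # hypothetical target site

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(START_URL)
    n_buttons = len(page.query_selector_all("button"))
    for i in range(n_buttons):
        page.goto(START_URL)  # reset state before each click
        button = page.query_selector_all("button")[i]
        label = button.inner_text()
        try:
            button.click()
            print(f"clicked {label!r} -> now at {page.url}")  # log what happened
        except Exception as e:
            print(f"clicked {label!r} -> failed: {e}")
    browser.close()
```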

And once best practices are established in these "one level up" spaces, which they surely will be, you'll basically have the fully automated environments Benn was positing.

Benn Stancil:

On whether or not the machines *can* do it (will they introduce huge security problems? is it performant? is it incomprehensible spaghetti code?), I'm with you that the answer right now is mostly...kinda? maybe? in some ways? But not really?

But, to the point about them being good at some things but not others, that seems like what changes - we figure out ways to make them good at the things they aren't currently good at. I'm sure there were people who said at some point, you can't trust the machines to make cars, because how will you know if they miss a screw, or fail to notice some defect? And they were probably right, in the sense that machines never could notice that stuff the way a person could. But over time, people built the factories so that the machines could solve those same problems, or could alert you when they failed to put in a screw, or whatever. And that's the bigger point to me - if you have industrial power to do that, I think people will inevitably build around that power.

Also, I do think it's pretty reasonable to ask if we actually want to *ship* a bunch more software, and I think the answer is arguably no. But I think people undoubtedly would want to *build* a bunch more if they could, for testing, for clicking tons of buttons, for brainstorming new ideas. That feels like the sort of capacity that, once it exists, people will figure out all sorts of ways to use.

(That came up in this other thread too: https://benn.substack.com/p/the-industrialization-of-it/comment/108674448)

Alex Khrustalov:

Yeah, kind of makes sense. I agree that the feedback loop is super slow and that is the main reason why good software is slow and expensive to produce. I think AI really could help with that, but letting it cook the code completely on its own, with humans just supervising the process - that's still a far cry.

Alex Stenlake:

I've built some small-scale prototypes using high-automation LLM approaches. While writing on brownfield codebases is still routinely a nightmare, refactoring (if you have testing in place) is normally pretty achievable. "Entire codebase in a day" achievable, if you'd believe it. I think you'd still need some more tools to make it scale right - things like module reuse, to ensure hand-rolled (and flawed) implementations can't hide anywhere. But personally I'd chalk that up to "5 years of improvements + experience". If I had to place a bet on where our issues will live, I think this stuff will struggle most with low-level/high-performance applications. For the meat and potatoes of modern programming - webapps in declarative-ish frameworks and API plumbing - I think there's a fairly smooth automation path.
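The core loop that makes test-gated refactoring work is almost embarrassingly simple. (A sketch, assuming git for patches and a pytest suite as the acceptance gate; try_patch is my own made-up name, not a real tool.)

```python
import subprocess

def sh(cmd: list[str]) -> int:
    return subprocess.run(cmd).returncode

def tests_pass() -> bool:
    # The suite is the acceptance gate: exit code 0 means behavior held.
    return sh(["pytest", "-q"]) == 0

def try_patch(patch_file: str) -> bool:
    """Apply a model-generated refactor; keep it only if the tests still pass."""
    if sh(["git", "apply", patch_file]) != 0:
        return False  # the patch didn't even apply cleanly
    if tests_pass():
        return True  # keep the machine-written change
    sh(["git", "apply", "-R", patch_file])  # roll back the failed attempt
    return False
```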

Alex Khrustalov:

Yes - boilerplate like an API layer that's basically transforming JSON into SQL, and common UIs like forms and buttons.

Corneel:

Great, catching up on your reading.

1. As I said on copy-copy revolution, stuff like configuration-as-code and automatic deployment can work all the security and performance issues out. For now, you keep some frameworks in, in order to let your engineers get used to it. However, dependencies are currently a risk in software engineering as well, so for small, handy functions we already don't use packages anymore.

2. The amount of software will explode. You have the highly performant automated stuff of point 1, but you also have vibe coding + guardrails, where everyone can create their own software for extremely specific tasks.

3. Stuff like Cursor or ChatGPT - not those tools per se, but 'interaction points' between man and machine that score high on customer intimacy / brand - might be really profitable. The stuff behind them (the real tech, like Claude or Gemini) will find it difficult to get a sustainable competitive advantage; every month another model is better. The AI agents will keep creating better software in their own language, etc.

4. On this fast-changing road to 2030, vendor lock-in becomes much more of a problem and open standards become much more valuable. Being able to switch, and keep switching, is key.

5. Code architecture: many cross-functional repos that are complete. When you want to update something across the board, you just say to your agents: write a new test for element x and ensure that it works in all repos (see the sketch after this list). Or maybe the AI agents will reinvent frameworks to be more efficient - all well and good.

6. You refer to Brooks's Law. Interestingly, big open source projects somehow manage this: a lot of contributors working in parallel. So people running AI agents might learn something from big open source governance and collaboration to ensure security - or at least open source seems closer to it than company-managed hierarchical management.
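Sketching point 5, that cross-repo loop might look something like this. (agent_write_test is a hypothetical stand-in for the actual agent call, and the repo names are invented.)

```python
import subprocess

REPOS = ["repo-a", "repo-b", "repo-c"]  # hypothetical cross-functional repos

def agent_write_test(repo: str, element: str) -> None:
    # Hypothetical stand-in for "tell your agent: write a new test for element x"
    print(f"[{repo}] agent drafts a test for {element}")

for repo in REPOS:
    agent_write_test(repo, "element x")
    # "Ensure that it works in all repos": each repo's suite is the gate.
    ok = subprocess.run(["pytest", "-q"], cwd=repo).returncode == 0
    print(f"[{repo}] tests {'pass' if ok else 'FAIL'}")
```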
