Our brains aren’t very good at comprehending how much knowledge can fit into a 200k token context window. In terms of football fields and 747s, each question to an LLM can contain nearly two complete copies of The Lord of the Rings.
https://open.substack.com/pub/drewbeaupre/p/give-the-ai-the-full-picture
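For a rough sense of scale, here's one way to sanity-check that comparison yourself. A minimal sketch, assuming `tiktoken` is installed and you have a plain-text copy of the book locally; the file path is a placeholder, and `cl100k_base` is just a stand-in encoding (the real tokenizer depends on the model):

```python
# Back-of-envelope: how much of a book fits in a 200k-token context window?
import tiktoken

WINDOW = 200_000  # context window size, in tokens

# Stand-in encoding; swap in whatever tokenizer your model actually uses.
enc = tiktoken.get_encoding("cl100k_base")

with open("lotr.txt", encoding="utf-8") as f:  # placeholder path
    text = f.read()

n_tokens = len(enc.encode(text))
print(f"book: {n_tokens:,} tokens")
print(f"copies that fit in the window: {WINDOW / n_tokens:.2f}")
```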
So what if we embrace this? What changes if we can 10x the context size?
Yeah, I think there's some underappreciation of this, where "this won't work because the thing doesn't have enough context" will before long become "ok but what if we just give it the context?"
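Taken literally, "just give it the context" is mostly plumbing. A hedged sketch of what that might look like; the directory name, the `*.py` filter, and the budget are all made up for illustration:

```python
# Naive context-stuffing: concatenate source files until a token budget is hit.
import pathlib
import tiktoken

BUDGET = 200_000  # hypothetical context budget, in tokens
enc = tiktoken.get_encoding("cl100k_base")  # stand-in encoding

parts, used = [], 0
for path in sorted(pathlib.Path("my_project").rglob("*.py")):  # made-up repo
    chunk = f"# --- {path} ---\n{path.read_text(encoding='utf-8')}\n"
    cost = len(enc.encode(chunk))
    if used + cost > BUDGET:
        break  # out of budget; smarter file ranking/selection would go here
    parts.append(chunk)
    used += cost

prompt = "".join(parts)
print(f"packed {len(parts)} files: {used:,} of {BUDGET:,} tokens")
```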
Using the thing we currently call AI to build things full-on feels a lot like trying to scale with hardware vs. logic/design.
I'm not sure I follow? (But also, at some point, hardware does scale really really well?)
Hardware does scale, and it's undoubtedly critical. It's also rarely the best means of scaling. By "best" I mean with respect to finite resources (being responsible consumers/polluters), hardware architecture limitations, and cost. It has its place, but not at the expense of (or completely turning a blind eye to) finding more efficient ways through methodology/approach/first principles.
I'm not saying it's an either/or situation. I am saying that the whole "vibe coding" thing is not software development, will never be software development, and unless the tool's limitations are respected, this, like any approach grounded in brute force (and a healthy dose of cognitive dissonance), will not end well.
eeeeehhhhh I think that's where I disagree. "This isn't software engineering, and brute force is a bad strategy" is exactly the point of the bitter lesson idea - people want to hold to the craft of a thing, but in the long run, *craft* is often the losing strategy.
Which, sure, vibe coding may not be software engineering the way we imagine it today; I think that's fair. But do we have to build software with software development the way it is today? Like, "what you're doing isn't financial analysis," said the careful stock trader who's been put out of business by quants.
I haven't seen any evidence that AI is capable of anything I would consider "craft." Maybe a lot of software development isn't, either, at least these days. And given a choice between AI and some of the mess I have seen from supposed professionals, I will take AI. But that's trying to distinguish between subpar and mediocre, and I certainly hope (I guess I am not as jaded as I thought) we'd aspire to something higher - which I would call "craft."
I have no issues with AI as a tool, and that is how IMO it should be viewed. Having access to robotic surgical equipment - even if I manage to remove an appendix - doesn't mean I performed surgery, nor should the outcome be taken as any form of evidence in support; it's all great until I encounter an issue any real medical professional would have anticipated.
I don't think it's craft either, but that's the trap to me. It's tempting to think, "ah ha, the AI can't do the craft, therefore it will never replace me!"
But people like cheap stuff. Mass production almost always wins. Sure, there are niche markets for craft, but the big winners are Temu, Ikea, McDonalds, Starbucks, etc etc etc. We're not safe just because the machines can't do our craft; we're only safe as long as the machines can't be turned into an assembly line that makes 10x what we can make for 1/10th of the cost. And I think that's what will really happen here: It's not that the machines become geniuses; it's that someone figures out how to make them run like a factory.
(To your point on surgery, I see that differently actually. People who want their appendix removed don't care if they have artisan surgery; they just want their appendix removed. Sure, they want to know that the complications will get handled and all of that, but that's how you turn it into a factory: You don't make expensive surgeons do all the surgery; you figure out which patients can get the robotic surgery, and crank through those at an industrial scale.)
I'll concede that there will be mass market shifts due to AI, and likely to a more disruptive extent than we're accustomed to - but it's still the same (IMO) fundamental pattern/adoption lifecycle, where you can't get to the majority of your potential user base without some concessions.
My point re: surgery was mainly to emphasize the risk of unknown unknowns as it relates to the false confidence sophisticated tooling tends to elicit within fields where ignorance has serious negative impact. Sure, most software isn't anything close to that level of criticality, but my position is that I don't see indiscriminate use of AI (in its current state) being a viable path to building things that require some level of SLA-type guarantees.
I also fully recognize the likelihood of the assembly line future. That feels like a totally different thing from <random humans> vibe coding. Anyone who figures out the assembly line business model will be forced to build an engineered solution that accounts for all the standard overhead - and unless we figure out some sort of near-limitless, pollution-free energy solution, I can't see how brute force would be a viable strategy.
I agree with this (the AI part). Most software doesn't really matter, though, so most software is very poorly engineered, by people who barely if ever scratch beneath the surface of abstractions and glue. So even though AI can't understand the big picture (yet) and has all these problems you're describing, it's still good enough to replace a sea of bad or middling developers, and that's all you need for there to be a massive change in what the world of programming is like.
When ChatGPT first came out, there were all these jokes about how people were writing "artisanal" code and "handmade" blog posts and stuff. Which honestly feels a lot less like a joke now, and more like how things might actually go.
The unreproducible smell of a handcrafted P tag…
yes, but it's *authentic*
+1 for Krazam
+2