The marginal cost of LLM inference is many many orders of magnitude lower than human labor, and unlike human labor is nonrivalrous -- i.e., if I use an LLM it doesn't mean someone else can't also use it at the same time. You could argue that datacenter capacity is a real constraint, but it's much easier to grow the supply of datacenter capacity than to grow the supply of human labor. Because of those fundamental differences, I'm not so sure how useful the discipline of labor economics will be when applied to this new generation of software.
I think LLMs will revolutionize the software industry, and agree with the wider sentiment that this is the equivalent of the industrial revolution for knowledge work. Building software today is insanely expensive. There's SO MUCH terrible software out there because building high quality software is such a labor intensive endeavor and the supply of humans who can do it well is minuscule relative to demand. The economic implications of "good software is relatively cheap to build" are enormous.
On the relative costs of running LLMs vs human labor, of course, I don't disagree with that. My point is about the relative costs of different LLMs. I do think it's possible that they'll all be cheap enough that it doesn't matter, though history would suggest otherwise. Nearly everything in computing has become unbelievably cheaper and faster over the last several decades, and yet, we keep pushing it to its limit, and there continues to be market differentiation between low and high cost components. So to me, LLMs may be really low cost, but still operate like a labor market. (I agree with you that the non-rivalrous element is different than labor markets, which is why the analogy isn't perfect.)
On producing way higher quality software, maybe? That still seems like a pretty big if to me, both in that I'm not sure it'll happen and in that I'm not sure it'll matter that much if it does. It could, for sure. But we've made huge strides in making good software much cheaper to produce over the last couple decades already, and it barely shows up in the economic statistics. Maybe those numbers are misleading, or maybe this is different, but it seems a long way from a given to me.
"if Bard doesn’t harvest us to be batteries for Google’s data centers."
Hysterical 🤣.. but with a name like Bard, I doubt it is a threat. 😬😬
That’s how they get you. If it was called Delta TX10, we’d never trust it. But Bard? Harmless. Sounds like a musical theater fan who just loves a good night of charades.
OMG. You are right. I would be ready for Delta TX10. But, Bard 🦄, 😬I fell for it.
🤖🔪😵
It seems like every major technology advancement has all of us humans fretting about being replaced by machines. Thinking this way grossly underestimates millions of years of evolution and the masterpiece that is the human mind. Rather than "replace" us, technology allows us to do what we need to do faster so we can work on other, usually higher-valued things. This may mean that some low-skilled jobs may eventually be automated away, but is ordering a burger from a kiosk versus a human so bad after all?
So I think that’s a tangential question. My point here was more that how we use different models could start to look like a labor market that’s independent of the labor market for people. Obviously, the two are intertwined, but to me, it’s interesting to think about how we might start “hiring” different models based on their skills. (Like, will companies evaluate LLMs by “interviewing” them as much as they score their various technical attributes? That doesn’t actually seem that crazy?)
On the point of being replaced and all that, my general read is that the macro is very different than the micro. Broadly, sure, we automate some things, create new jobs, and, in the aggregate, it’s an improvement. But a lot of individuals might lose. And that can cause a whole lot of social disruption, even if their loss was more than made up for by someone else’s gain.
"ChatGPT might already be a halfway decent lawyer"
Talking to an AI lawyer would be crazy. But I do wonder if people are going to use AI as their lawyer.
I think a lot already do: https://en.m.wikipedia.org/wiki/DoNotPay
Which really raises the question - is the world better or worse off with everyone having their own personal McKinsey consultant?