26 Comments

I echo what James said, which I think you completely missed in your analysis: GPT-4o now also memorizes your questions to get to know you better. The more I use it, the more difficult it becomes for me to switch, since it knows all my history: the questions I asked, what I care about, when I asked them, just like a partner in life.

I’m already personally at a point where I don’t see myself moving to any other LLM, unless it’s materially 10x better, or GPT raises its prices by 5x or something.

I’d encourage everyone in the comments who is an active GPT user to ask the following question:

“Can you tell me about myself in 400 words”

You’ll be surprised how well it already knows you.

Moving into a thought experiment on how the future could look:

I believe everyone will land with their core LLM, which will become their trained life coach or advisor, and the centrepiece of all digital interactions, similar to how social media accounts became the key online credentials.

E.g., expect to be able to log into Salesforce using my ChatGPT login details (as I do with Google today), with all GenAI features/capabilities in Salesforce using my own personalised token.
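
(Purely as an illustration of that thought experiment: no "Sign in with ChatGPT" identity provider exists today, but if one did, it would presumably look like the standard OAuth/OIDC redirect sketched below. Every endpoint, client ID, and scope here is invented.)

```python
from urllib.parse import urlencode

# Hypothetical "Sign in with ChatGPT" flow: none of these endpoints, client IDs,
# or scopes exist today. This mirrors the standard OAuth/OIDC redirect used by
# "Sign in with Google", applied to an imagined LLM identity provider.
AUTHORIZE_URL = "https://auth.chatgpt.example.com/oauth/authorize"

params = {
    "client_id": "salesforce-demo-app",          # the relying party's app ID (made up)
    "redirect_uri": "https://example.salesforce.com/oauth/callback",
    "response_type": "code",
    "scope": "openid profile personalization",   # imagined scope granting the personalised token
    "state": "random-anti-csrf-value",
}

# The user would be sent to this URL to authenticate with their LLM account.
print(f"{AUTHORIZE_URL}?{urlencode(params)}")
```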

author

I'm kinda skeptical of this, for the reasons here:

https://benn.substack.com/p/do-ai-companies-work/comment/69198632

If personalization really is the moat, and Anthropic starts falling behind OpenAI specifically because people think OpenAI knows them and Claude doesn't, what's stopping Claude from building a bot that asks OpenAI 20 questions about you to learn about you really fast?

Like, sure, people get locked in to their therapist because their therapist knows all this stuff about them. But if your therapist was always available to answer any question about you, and a new therapist could ingest everything they said really fast and remember all of it, it seems like you might be willing to move. If the new therapist has billions of dollars of incentives to figure out a way to ask those questions, they're probably gonna do it.

Oct 1 · Liked by Benn Stancil

The critical point is that under GDPR you can take your data, including what ChatGPT knows about you, out of the service and have it removed; I assume that importing this data into a different service would be a breeze.

So you don't need something 10x better to switch, just 1.1x better, and you'll be able to switch seamlessly.
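
(As a rough sketch of what that portability could look like: this assumes the export archive contains a conversations.json file with the field names guessed at below; the actual schema may differ.)

```python
import json
from pathlib import Path

# Assumption: ChatGPT's "export data" archive contains a conversations.json file,
# and the field names below ("title", "mapping", "message", "author", "content",
# "parts") reflect its schema. Treat all of them as guesses, not a documented format.
export = json.loads(Path("conversations.json").read_text())

lines = []
for convo in export:
    lines.append(f"# {convo.get('title', 'Untitled')}")
    for node in convo.get("mapping", {}).values():
        msg = node.get("message")
        if not msg:
            continue
        role = msg.get("author", {}).get("role", "unknown")
        parts = msg.get("content", {}).get("parts", [])
        text = " ".join(p for p in parts if isinstance(p, str))
        if text.strip():
            lines.append(f"{role}: {text}")

# A flat transcript a competing assistant could ingest as context or training data.
Path("portable_history.txt").write_text("\n".join(lines), encoding="utf-8")
```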

author

Plus, if the specialization stuff gets really important, I assume new LLM vendors would try to make it really easy for you to import all of your "history" into them.


I think it's a decent moat until we see more specialization. Do you own the right to the actual data, and can therefore transplant it as a database package...or do you only own the right to decide whether the data exists? If it's the latter, and you have no right to data portability, companies will make your account a proprietary layer of their product.

I think the competition over the personal data profiles that LLMs create is going to be an area of legal focus over the next decade. As people start to define identities that are valuable to markets, there will also be a reckoning with these identities being entirely unreliable.


This is a great write-up. I think there is a good thought-process evolution here, from LLMs being the cloud providers of 2024 to them not having the same business model. The question of moat remains. To my mind, the current moat (while it lasts) is who can build a better narrative to raise more money. And I am not saying that in a bad way. When you are in an industry like, say, semis, which needs huge upfront investment, sometimes all that matters is how much more money you can raise versus your competitors. Once the dust settles, we will probably have a couple of LLMs standing, which will be closely integrated with existing cloud providers like AWS or Azure for GTM.

author

Yeah, I don't think that's wrong, or that bad of a strategy (or really, once it becomes an arms race like this, it might be the _only_ strategy). Which, in some ways, is exactly the point: it's definitely not a good strategy for the average company, because it's a strategy that means the average company is going to die.

It's Squid Game: The individual contestants are smart to play it differently knowing that only one person can win. But if you're a VC who's invested in some random subset of those contestants, you probably don't like the strategy of everyone vying to be the one winner out of thousands.


Agreed. I would theorize that this is precisely the reason why, as an early-stage venture fund, it makes a lot more sense to invest in so-called wrapper cos (moat = customer retention) or picks and shovels (moat = features that keep pace with LLM progress) than in LLMs. The odds of more than one winner are higher in these layers than in LLMs. What do you think?

author

I think that mostly makes sense? The foundational LLM companies seem 1) extremely expensive, 2) high risk (but high reward), and 3) like the winners will probably be very pedigreed, and hard rounds to get into. The wrapper companies seem lower variance (so long as you avoid the very obvious, extremely thin wrapper companies).

Oct 2 · Liked by Benn Stancil

I imagine the wrapper/downstream companies have even *less* moat than the LLMs. Any cool feature or app could be pretty easily replicated upstream in ChatGPT, etc. But WDYT?

author

Yes and no?

In an absolute sense, yeah, I think that's true. They're thinner and smaller products, and in theory, they're like SaaS apps: there's no *real* moat with any of them, other than just slogging through whatever engineering work it takes to make them.

But in a relative sense, I think they're actually safer, because they often aren't that big, and slogging through engineering work is actually a pretty good moat. Plus, the stuff that gets popular seems to get popular for softer, harder-to-replicate reasons: It's trendy; it's ergonomic; it's got a cool brand; etc.

So, I don't think any really basic wrappers that get big quickly and spawn a bunch of copycats (eg, Jasper) are in good shape. But the smaller wrappers that build up a userbase the way a non-AI product builds up a userbase seem like they could be pretty sticky.

Sep 14 · Liked by Benn Stancil

A possible moat could be personalisation. If a particular model knows you to the extent that it is significantly more valuable to interact with than a vanilla model then it will be harder to shift to a competitor.

Subject to user consent, personalisation could cover a range of factors such as preferred language style, key interests and knowledge levels so the LLM is closely attuned to how you interact and learn, and can even become a more pro-active partner. I guess this can be captured from questionnaires and learning from interactions.

author

I...guessss? Like, in theory, sure, that seems true enough. But I think I'm pretty skeptical of that actually mattering that much, for a few reasons:

- I suspect these things can learn pretty quickly. It's a very different thing, but take Tiktok. The algorithm learns your preferences really really fast. If some new social media app came along that had better content, I don't think the Tiktok algorithm - which I suspect is really finely tuned to the preferences of its power users - would provide much protection.

- If training on preferences really does matter that much, new models will probably figure out ways to make it much easier to jumpstart that. They'd have a really strong incentive to personalize for you quickly, so they'd probably build stuff like "import your email" or whatever.

- Personalization like that feels like a consumer thing, and I'd guess that big AI companies will need to be enterprise businesses more than consumer ones. And there, businesses probably don't want usage-based personalization, but something that trains or fine-tunes the core model with proprietary data (basically, import your email, at scale; a rough sketch of that is below).
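
(A minimal sketch of that "fine-tune the core model on proprietary data" idea, using OpenAI's fine-tuning API. The training file name and the choice of base model are assumptions, not anything from the post.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical file: proprietary data (say, emails) already converted into
# chat-formatted training examples ({"messages": [...]} per line, JSONL).
training_file = client.files.create(
    file=open("company_emails.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tuning job. The base model here is an assumption; any model
# the vendor allows fine-tuning on would work the same way.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)
```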


Good post and I think this explains why OpenAI's next year or two will be very interesting to watch. They get ~2/3 of their revenue from ChatGPT, and more than 2/3 of the company is non-research employees. Obviously they are making a bet on owning the relationship with the consumer which means cannibalizing a lot of their API users' businesses. At this point with all the major researcher departures, it is starting to feel that research is more of a facade and the race is not to AGI but to find a market with some staying power before the money runs out.

author

Thanks. If I had to guess, that's basically where I'd think they end up? Essentially as a big SaaS platform with lots of services on top of a good-but-not-always-state-of-the-art model, and their moat, more than anything, is brand and inertia.

Oct 1 · Liked by Benn Stancil

This may be true, but you are probably one of a very few people who actually fall into this category. I'm in the tech market, and no one else I know has developed a relationship with their LLM like this.

author

I know a few folks who've worked really hard to do it (to make them like a therapist, basically), but yeah, that seems very much the exception.

Oct 1 · Liked by Benn Stancil

Yes, and the point I should have added is that this small subset of people is not enough to make much of a financial difference to a company with such huge capital needs.

Sep 30 · edited Sep 30 · Liked by Benn Stancil

I think this is true but slightly myopic. Real user data has always been the moat for software, and I imagine it will continue to be so, both personal (as James says) and at scale. OpenAI and Anthropic both seem to have _very good_ reward models derived from real-world user data for training their foundation models; from basic experimentation, I think they're at least 3-5x better than the best open-source reward models (NVIDIA's, probably). While I could certainly see someone who already has distribution (say, Meta or Google) going out and fetching data at similar scale, it would be nontrivial for a new entity to do so. This will likely be even more true in 2-3 years, when half the human population is using these models.

This is probably similar to Google in its early years. Employees could, and likely did, go to other companies and try to use similar algorithms, but Google could fine-tune search signals better than anyone else thanks to data from their traffic.
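
(To make the reward-model comparison concrete, here's a minimal sketch of scoring a response with an open reward model via Hugging Face transformers. The model name is a placeholder, and it assumes a sequence-classifier reward model with a single scalar output and a chat template.)

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder model name: substitute any open reward model published as a
# sequence classifier with a single scalar output and a chat template.
MODEL = "some-org/open-reward-model"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def score(prompt: str, response: str) -> float:
    """Return the reward model's scalar preference score for a candidate response."""
    text = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt},
         {"role": "assistant", "content": response}],
        tokenize=False,
    )
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits[0, 0].item()

# Higher score = the reward model prefers that response over lower-scoring ones.
print(score("What is the capital of France?", "Paris."))
```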

author

I've struggled to figure out how much that matters, to be honest. There is a tendency for companies to talk about their data as being a moat because they're selling that story. A lot of early stage startups will say stuff like, "we'll build a product, and people will use it, and the more they use it, the better it will get, and then nobody can ever catch us." And that story is almost never real, so I'm pretty skeptical of anyone who tries to sell it.

But, at a certain scale, it probably does work. My guess is that scale is very big - like, 10s to 100s of millions of users big. That's well past what most startups will ever get to, but might actually work for OpenAI and maybe Anthropic.

Sep 16 · Liked by Benn Stancil

Not sure if you've read this but reading your post reminded me of this - https://www.thediff.co/archive/software-is-the-new-hardware/

Nice framing and perspective on what's happening.

author

Huhhh. I'm not sure I agree with this piece? Maybe more on this this week, but I'm starting to develop a theory that people see AI as a software-writing bot in the wrong way. Though it might make engineers more productive, I'm skeptical that it'll make commercial-grade products that much cheaper to build. Eg, Slack and Github make engineers faster, but don't fundamentally change the economics. For real software, I could see the same thing happening.

But what it could do is make it easier for everyone to build non-commercial software. In this sense, it's similar to Excel: Excel didn't make us all accountants or mathematicians, but it made a lot of people capable of building small "apps" that do arithmetic. AI seems like it could enable the same thing, but for a more general class of app.

Oct 3 · Liked by Benn Stancil

It's pretty wild how AI companies have to keep innovating at breakneck speed, spending tons of money, and dealing with the fact that their tech will become outdated super fast, almost like a race to the bottom. It's a whole different ballgame compared to regular cloud services. Also, loved the callback to "It's a Concept of Plan" :D

author

Yeah, it seems like the bet is there has to be some point at which it either plateaus, or you become more efficient than everyone else, or there's just a default and nobody thinks about using anything else. But even in those cases, it's hard to see how most companies don't flame out.

Sep 30 · Liked by Benn Stancil

It's difficult to agree with you on point three; it's not human nature that compels one to build faster or stronger models, but rather shareholders and funding sources.

author

Well, yeah, for sure, that too.
