10 Comments

I echo what James said, which I think you completely missed in your analysis: GPT-4o now also memorizes your questions to get to know you better. The more I use it, the more difficult it becomes for me to switch, since it knows all my history: the questions I asked, what I care about, when I asked them, just like a partner in life.

I’m personally already at a point where I don’t see myself able to move to any other LLM, unless it’s 10x materially better, or OpenAI raises its prices by 5x or something.

I’d encourage everyone in the comments who is an active ChatGPT user to ask it the following question:

“Can you tell me about myself in 400 words”

You’ll be surprised how well it already knows you.

Moving on to a thought experiment about how the future could look:

I believe everyone will land with their core LLM, which will become their trained life coach or advisor, and the centrepiece of all their digital interactions, similar to how social media accounts became the key online credentials.

E.g., expect to be able to log into Salesforce using my ChatGPT login details (as I do with Google today), with all the GenAI features in Salesforce using my own personalised token.
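To make that concrete, here's a hypothetical sketch of what a "Sign in with ChatGPT" flow could look like, modeled on a standard OAuth2 authorization-code flow like Google's. None of these endpoints, scopes, or client IDs exist today; they're all placeholders for the idea.

```python
# Hypothetical sketch: a "Sign in with ChatGPT" OAuth2 flow, if OpenAI
# offered identity the way Google does. Every URL and ID below is a
# placeholder; only the OAuth2 shape of the flow is real.
import secrets
from urllib.parse import urlencode

import requests

AUTH_URL = "https://auth.openai.example/oauth/authorize"   # hypothetical
TOKEN_URL = "https://auth.openai.example/oauth/token"      # hypothetical
CLIENT_ID = "salesforce-genai"                             # hypothetical
REDIRECT_URI = "https://salesforce.example/oauth/callback"

def build_login_url() -> str:
    """Step 1: send the user to the identity provider to approve access."""
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        # The interesting part of the idea: a scope that grants the app
        # access to the user's personalization profile, not just identity.
        "scope": "openid profile personalization:read",
        "state": secrets.token_urlsafe(16),  # CSRF protection
    }
    return f"{AUTH_URL}?{urlencode(params)}"

def exchange_code(code: str) -> dict:
    """Step 2: trade the authorization code for a user-scoped token that
    downstream GenAI features could attach to their model calls."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()  # {"access_token": ..., "id_token": ...}
```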

author

I'm kinda skeptical of this, for the reasons here:

https://benn.substack.com/p/do-ai-companies-work/comment/69198632

If personalization really is the moat, and Anthropic starts falling behind OpenAI specifically because people think OpenAI knows them and Claude doesn't, what's stopping Claude from building a bot that asks OpenAI 20 questions about you to learn about you really fast?

Like, sure, people get locked in to their therapist because their therapist knows all this stuff about them. But if your therapist was always available to answer any question about you, and a new therapist could ingest everything they said really fast and remember all of it, it seems like you might be willing to move. If the new therapist has billions of dollars of incentives to figure out a way to ask those questions, they're probably gonna do it.
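To be concrete about what I mean, here's a rough sketch of that bot. The OpenAI and Anthropic Python SDK calls are real, but the premise is the big assumption: today ChatGPT's memory lives in the consumer app, not the API, so the "interview" step wouldn't actually return anything personal yet.

```python
# Sketch of the "20 questions" migration bot described above. Assumes the
# incumbent model's memory of you is reachable via the API, which it isn't
# today; the SDK calls themselves are the real Python clients.
from openai import OpenAI
from anthropic import Anthropic

openai_client = OpenAI()        # reads OPENAI_API_KEY
anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY

QUESTIONS = [
    "What does this user care about most?",
    "What is their profession, and what are they working on?",
    "How do they like explanations: terse, detailed, examples-first?",
    # ...17 more along these lines
]

def interview_incumbent() -> str:
    """Ask the old assistant everything it knows, one question at a time."""
    notes = []
    for q in QUESTIONS:
        resp = openai_client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": q}],
        )
        notes.append(f"Q: {q}\nA: {resp.choices[0].message.content}")
    return "\n\n".join(notes)

def seed_new_assistant(dossier: str, first_message: str) -> str:
    """Hand the dossier to the new assistant as a system prompt."""
    resp = anthropic_client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        system=f"Here is everything known about this user:\n\n{dossier}",
        messages=[{"role": "user", "content": first_message}],
    )
    return resp.content[0].text

if __name__ == "__main__":
    dossier = interview_incumbent()
    print(seed_new_assistant(dossier, "Pick up where my old assistant left off."))
```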


This is a great write-up. I think there is a good evolution of the thought process here, from LLMs being the cloud providers of the 2020s to LLMs not having the same business model. The question of the moat remains. To my mind, the current moat (while it lasts) is who can build a better narrative to raise more money. And I am not saying that in a bad way. When you are in an industry that needs huge upfront investment, like semis, sometimes all that matters is how much more money you can raise vs your competitors. Once the dust settles, we will probably have a couple of LLMs left standing, closely integrated with existing cloud providers like AWS or Azure for GTM.

author

Yeah, I don't think that's wrong, or that bad of a strategy (or really, once it becomes an arms race like this, it might be the _only_ strategy). Which, in some ways, is exactly the point - it's definitely not a good strategy for the average company, because it's a strategy that means the average company is going to die.

It's Squid Game: the individual contestants are smart to play it, knowing that only one person can win. But if you're a VC who's invested in some random subset of those contestants, you probably don't like the strategy of everyone vying to be the one winner out of thousands.


Agreed. I would theorize that this is precisely why, as an early-stage venture fund, it makes a lot more sense to invest in so-called wrapper companies (moat = customer retention) or picks-and-shovels companies (moat = features that keep pace with LLM progress) than in LLMs. The odds of more than one winner are higher in those layers than among LLMs. What do you think?

author

I think that mostly makes sense? The foundational LLM companies seem 1) extremely expensive, 2) high risk (but high reward), and 3) likely to have very pedigreed winners, with rounds that are hard to get into. The wrapper companies seem lower variance (so long as you avoid the very obvious, extremely thin wrapper companies).

Sep 16 · Liked by Benn Stancil

Not sure if you've read this but reading your post reminded me of this - https://www.thediff.co/archive/software-is-the-new-hardware/

Nice framing and perspective on what's happening.

author

Huhhh. I'm not sure I agree with this piece? Maybe more on this this week, but I'm starting to develop a theory that people see AI as a software-writing bot in the wrong way. Though it might make engineers more productive, I'm skeptical that it'll make commercial-worthy products that much cheaper to build. E.g., Slack and GitHub make engineers faster, but don't fundamentally change the economics. For real software, I could see the same thing happening.

But what it could do is make it easier for everyone to build non-commercial software. In this sense, it's similar to Excel: Excel didn't make us all accountants or mathematicians, but it made a lot of people capable of building small "apps" that do arithmetic. AI seems like it could enable the same thing, but for a more general class of app.

Sep 14 · Liked by Benn Stancil

A possible moat could be personalisation. If a particular model knows you to the extent that it is significantly more valuable to interact with than a vanilla model, then it will be harder to shift to a competitor.

Subject to user consent, personalisation could cover a range of factors, such as preferred language style, key interests, and knowledge levels, so the LLM is closely attuned to how you interact and learn, and can even become a more proactive partner. I guess this can be captured from questionnaires and from learning from interactions.

author

I...guessss? Like, in theory, sure, that seems true enough. But I think I'm pretty skeptical of that actually mattering that much, for a few reasons:

- I suspect these things can learn pretty quickly. It's a very different thing, but take TikTok. The algorithm learns your preferences really, really fast. If some new social media app came along that had better content, I don't think the TikTok algorithm - which I suspect is really finely tuned to the preferences of its power users - would provide much protection.

- If training on preferences really does matter that much, new models will probably figure out ways to make it much easier to jumpstart that. They'd have a really strong incentive to personalize for you quickly, so they'd probably build stuff like "import your email" or whatever.

- Personalization like that feels like a consumer thing, and I'd guess that big AI companies will need to be enterprise businesses more than consumer ones. And there, businesses probably don't want usage-based personalization, but something that trains or fine-tunes the core model on proprietary data (basically, import your email, at scale - a rough sketch of which is below).
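For what it's worth, that enterprise version already has a shape today. This sketch uses OpenAI's actual fine-tuning endpoints; the training file, its name, and its contents are hypothetical stand-ins for "proprietary data."

```python
# Minimal sketch of the enterprise version of "import your email":
# fine-tuning a base model on proprietary data. The endpoints are OpenAI's
# real fine-tuning API; the file and its contents are hypothetical.
from openai import OpenAI

client = OpenAI()

# Proprietary data formatted as chat examples, one JSON object per line:
# {"messages": [{"role": "user", "content": ...},
#               {"role": "assistant", "content": ...}]}
upload = client.files.create(
    file=open("company_emails.jsonl", "rb"),  # hypothetical file
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=upload.id,
    model="gpt-4o-mini-2024-07-18",  # a base model that supports fine-tuning
)

# Poll client.fine_tuning.jobs.retrieve(job.id) until the job completes,
# then call the resulting fine-tuned model like any other.
print(job.id, job.status)
```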
