
How much is chatgpt.com worth?
Not OpenAI, or the various large language models that run behind ChatGPT, or even the boxes and buttons on the website—how much is the domain worth? Like, if Sam Altman forgot to update his credit card on GoDaddy, lost the domain, and someone tried to sell it on eBay, how much would it go for?
I mean, it is not worth very much to me. I don’t own a chatbot1 or a large language model. If people started going to my version of chatgpt.com, they would quickly realize that it was not the ChatGPT they were looking for and leave. The most I could get out of it would be a few days of redirected traffic to my SoundCloud, and a mildly viral LinkedIn post. That is not worth very much.
But how much would chatgpt.com be worth to, say, Google?
Google, unlike me, owns a chatbot and a large language model. If Google secretly bought chatgpt.com, they could replace OpenAI’s GPT models with their own Gemini models. They could build a new website that looks a lot like the old website. They could keep serving a chatbot at chatgpt.com.
And if they did that, would anything happen? Would anyone even notice?2 Would chatgpt.com continue to be the dominant chatbot website, and Gemini would, almost overnight, become the dominant chatbot model?
I have no idea. But that at least seems plausible? Most people aren’t rigorously evaluating the quality of a chatbot’s responses, and the people who are don’t agree on how to do it. Which isn’t to say the model isn’t important—of course it is; that’s why chatgpt.com isn’t worth very much to me. But the technical edge that OpenAI’s models arguably have over Gemini isn’t why chatgpt.com has a commanding share of today’s consumer AI market. OpenAI is winning because they built the first good chatbot product and because it became the default. As Nan Yu, who runs product at Linear, has argued, people adopt AI products that offer great user experiences, not because they’re powered by marginally better models:
Those products won because they made powerful, highly technical tools accessible through thoughtful design. The biggest barrier to mass AI adoption is not capability or intelligence; we have those in spades. It's UX.
The magic of something like Cursor is that there's a workflow which is heavily orchestrated to help users utilize the power that LLMs can provide. Sure — at its core, there's a series of prompts and calls to base models that generates the code... but this is marshaled through a UI that keeps users continuously flowing through the prompt > generate > eval > test loop.
…
We're still barely scratching the surface. For all of its success, tools like Cursor are still built for a highly technical audience. AI adoption won't come from more powerful models or CEO mandates — it will come from thoughtfully designed interfaces that make intelligence accessible to everyone.
Moreover, for sufficiently large companies like OpenAI, product advantages can become self-reinforcing: The more people use ChatGPT, the faster it can be refined; owning a wildly popular product helps OpenAI attract more talent; what OpenAI builds becomes an ecosystem standard; ChatGPT becomes a generic trademark, a verb. From this blog last year:
[Some people might argue that] OpenAI is on track to make 11.6 billion dollars next year because people want to use the best model.
Except—ChatGPT might not be the best model. Satya Nadella, who might own OpenAI, said that leading models aren’t all that essential anyway. And if nobody can tell the difference between human paintings and AI paintings, I’m skeptical that many people can tell if their book report on The Great Gatsby was written by GPT-4o or Gemini 1.5 Flash.
Instead, it seems much more likely that OpenAI is going to make 11.6 billion dollars because ChatGPT is popular. It became synonymous with AI, the leading company that no CIO gets fired for buying, and the website that every high schooler has bookmarked. It’s going to make 11.6 billion dollars because it’s got the best brand.
While it’s easy to get distracted by benchmarks and claims about which model is smartest, horsepower alone only gets you so far. You also need a product that people both like to use and think to use. And OpenAI’s biggest edge today might be that when people think to use a chatbot, they think to go to chatgpt.com.
—
This—ironically; so, so ironically—is potentially Google’s biggest problem as an AI vendor: They haven’t figured out how to get people to use their models. They own several of the world’s most popular productivity tools; they own the world’s most popular websites; they own the world’s most popular browser and control the most popular mobile operating system; they arguably have the best large language models. And yet, so far, it hasn't entirely added up. OpenAI dominates the consumer chatbot market, and Anthropic’s Claude is becoming the preferred model for code-writing apps and agents. Some people assume it’ll all eventually come together—their current products, their existing distribution, their models, their very big bank account—and we’ll all begin spending vast amounts of money on Google’s AI services. But how?
—
We know the story by now, or at least, we know of the story. A couple months ago, Windsurf, one of the more popular AI-powered coding applications, agreed to be acquired by OpenAI, which had previously tried to acquire Cursor, the most popular AI-powered coding application. Someone changed their mind, the deal between Windsurf and OpenAI fell apart, and Windsurf sold itself to Google instead.3
Well, sort of. Google bought the executive team and a few dozen AI engineers for $2.4 billion, paid off investors, and left the company, the product, and a couple hundred employees behind. Windsurf’s smoking husk was then immediately bought by Cognition, yet another AI-powered coding application.
Most of the conversation about the whole drama has been understandably focused on the circus: Who got paid? Who knifed whom? Will there ever be normal acquisitions anymore? Are these sorts of bizarro acquisitions shrewd or stupid? Is Silicon Valley broken?
All fair and fun questions. But there’s another question buried in all of this too: Did Google buy the right thing?
On one hand, of course they did. The prize for building the best LLM is somewhere between hundreds of billions of dollars and complete hegemonic domination over all of humanity. So, in 2025, AI engineers are worth an infinite amount of money, and fast-growing AI wrappers and a nice brand are worth much less. Google bought the valuable thing—the engineers—for a very big number; Cognition bought the pedestrian things—a business and a product—for a much smaller number.
On the other hand, the product is what Google needs! Google already has a lot of AI engineers, and they already have very good models.4 They have unrivaled channels for distribution. What they’re missing is something that convinces people to use those models, as often as possible.
And the talent that Google appears to have left behind at Windsurf—the application engineers and product designers, among others—seems to be the talent that can do exactly that. Because one thing that is undeniably true about Windsurf, which grew to a $100 million business in a matter of months, is that they built a good product. They made “thoughtfully designed interfaces that make intelligence accessible to everyone.” They solved the UX problem.
These days, it’s common for people to dismiss a lot of AI applications as wrappers around the major LLM providers. These businesses have no moat because they’re thin, cheaply addictive products; they have terrible margins because they use a ton of foundation model compute. But if you’re Google, isn’t that exactly what you want?5 Aren’t products like that what your existing models need? Who is actually more valuable to Google: some AI engineers who can make Gemini a little bit better, or the people who can make a thin, addictive product that pushes massive amounts of traffic to Gemini?6
That is, after all, what chatgpt.com really is—an addictive wrapper around OpenAI’s LLMs. And today, isn’t that wrapper, and all of the habits and bookmarks that come with it, just as valuable to OpenAI as their models?7
People would lose things like their chat histories and memories, and they would probably notice that, so don’t take this question too literally. The more precise question is, “would people notice if the model answering their questions on ChatGPT was Gemini instead of GPT-4o?”
Obviously, models can always be better; having more talent is better than less talent; maybe Windsurf’s AI engineers are uniquely good. DeepMind certainly knows what sort of technical talent they need much more than I do.
And bizarrely, isn’t that exactly what a coding app like Cognition doesn’t want?
Are the people who were left behind at Windsurf people who can build this sort of product? I don’t know; it’s possible Google simply decided that they weren’t. But thousands of startups have tried to build AI products on top of the same handful of models, and Windsurf did it better than nearly all of them.
Or, if you’re Google, here’s another idea for a better and cheaper wrapper around Gemini: make the Google search box bigger.
People already like Google’s AI Mode more than ChatGPT! But one-line boxes are for search! Two-line boxes are for chat! Even if you can use AI Mode from google.com, nobody’s gonna chat in a one-line box! So just make it a two-line box! That’s the thoughtfully designed interface that makes Gemini accessible to everyone. Google doesn’t need a $2 billion acquisition. Google just needs some new CSS.
You hit the nail on the head on the commoditization of AI, and the importance of non-technical specs to how companies are successful. Apple users pay a lot more for less computing power because they want the ease of an Apple product. Likewise, if there were a "better" search engine, would people stop using Google? Probably not. OpenAI has the first-mover advantage, with "I chat-gpt'd something" becoming a verb like "I googled something." That being said, these days, when you Google something, you get an AI summarizing the answer for you. That's their way of getting people to use their AI tools and keeping Google Search relevant. The key problem these days (it seems) is taking all these LLMs and implementing them, without the chat interface, directly into a workflow, which is what Cursor/Windsurf do (and why people pay for them). Understanding the use cases and how to integrate the LLM into them—so-called "agentic AI."
Years ago my youngest was struggling in kindergarten. I paid a fortune to all kinds of doctors, and while the diagnosis in retrospect was obvious (ADHD), the recommendation cracked me up - the child needed 2 squares of carpet at story time, not 1 - to give space for their body to wiggle and not disturb others. I always thought of that extra reading-carpet real estate as the most expensive in all of the Bay Area. The bigger text box suggestion is #GOLD and should go down as an innovation breakthrough - give this concept a name, update your Wikipedia, and trademark it right away.