
Speaking of Travis Kalanick’s new physics: I do not understand quantum mechanics. Part of the problem, I think, is how things that make intuitive sense break down at the outer limits of physical possibility. How do we detect stuff so small that it goes straight through all other matter? What does it mean for time to pass at different speeds? What are we even doing here? My feeble brain, deceived for decades by my lying eyes, cannot make any of it add up.
In recent weeks, it’s become trendy to question the physics underneath a lot of AI companies. They have “astronomical burn rates,” for example, or bad margins, or they are running a subsidy business, selling dollars for 50 cents. For every breathless funding announcement, there is also a wild-eyed eulogy, predicting the company’s impending implosion.
Usually, these would feel like reasonable complaints. Businesses are grounded by a sort of financial physics, and according to our classical equations, a lot of AI companies are on awfully shaky ground. They do incinerate mountains of cash; they do have atrocious margins, especially when compared to traditional software businesses; they are caught in weird markets where everyone is stepping on everyone’s toes; they are selling products that, evidently, 95 percent of their customers don’t know how to use yet. In normal times, within the comfortable bounds of everyday physics, this would all be very bad.
But is this remotely close to a normal moment? Is anything moving at anywhere near a normal speed?
At the risk of tilting at the yard signs: A few days ago, I ate lunch at a cafe1 in Manhattan. There was a man on a laptop by the door, typing something into ChatGPT. After getting my food, I sat down next to a family of three tourists. They all had shopping bags and totes. They were all wearing white linen; they all drank tropical juices; they took pictures of each other with an iPad. I couldn’t identify the language they were speaking, but I could understand one word—“ChatGPT.”
This sort of thing happens all the time. There is no break from AI—from seeing people use it; from hearing people talk about it; from bloggers soapboxing about it. Later that same day, I overheard a writer complaining about how other writers were using too much AI.2 The next day, it was designers talking about what they used it for, and then a 50-year-old in workwear saying he didn’t like GPT-5. Walk into a coffee shop; how many people have ChatGPT on their computer screens? Listen to conversations; how many times do you hear something about AI? Listen to your own conversations; how much of what you’re saying has been touched by it?
AI’s pervasiveness is staggering. In under three years, the world has become incomprehensibly obsessed, across every stratum of society. Two-thirds of doctors use it. Sixty percent of teachers use it. Forty-five percent of all adult workers use it. People are spending $120 million to flirt with it. People are proposing to it. I was recently told that a CIO at a Fortune 500 company issued a mandate that every contract they sign has to explicitly enumerate how this purchase makes their business more “AI-enabled.”
Has anything ever been this explosively popular? Has any class of product ever been in more immediate demand? Has anything ever moved this fast?
Which is to say—on this extreme edge of economic physics, it seems very unlikely that the normal laws of business apply in the way that we’re used to. That doesn’t mean that there aren’t new laws, or that today’s companies aren’t flirting with the limits of them; they may be, and a bunch of startups might be vaporized by them. But it doesn’t seem like the old equations apply the way they used to.
Take the point about subsidies. A lot of code generation startups rely on model providers like Anthropic and OpenAI to author code. The fees they pay to those providers are sometimes higher than what the startups charge their customers, and startups end up eating the difference. They charge a user $10; the user burns $20 worth of Claude tokens; the startup books $10 of revenue, but still loses $10 in the process.
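To make that arithmetic concrete, here’s a toy sketch of the subsidy math—the figures are illustrative assumptions, not any particular startup’s actual pricing or token costs:

```python
# Toy unit economics for a hypothetical code-generation startup.
# Every figure here is an illustrative assumption, not real pricing.
subscription_price = 10.00    # what the startup charges the user
upstream_token_cost = 20.00   # what the user's usage costs in model-provider fees

revenue = subscription_price
gross_profit = revenue - upstream_token_cost

print(f"Revenue booked: ${revenue:.2f}")      # $10.00
print(f"Gross profit:  ${gross_profit:.2f}")  # $-10.00 -- the subsidy
```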
This is unsustainable, people say, like DoorDash selling $20 worth of food for $10. But food is an understood market: We know what a burrito costs,3 and how many people want to eat in a day. Nobody knows what a token should be worth. Nobody knows how much a token should be able to “accomplish,” or how many is the right number to buy, or how expensive resolving a bug with AI should be. Nobody knows what engineers should spend on tools like Claude Code.4 Nobody knows what the market rate is for a robot that writes code. All people know is how much they use AI, and that they’re mad when someone tells them they have to use it less.
This was the more important—and, seemingly, mostly missed—point in Chris Paik’s post: That little about these businesses is known, because they’re operating in a market that’s moving faster than we can measure, and exploding faster than anyone can keep up with. Would it matter if DoorDash’s margins on $10 burritos were bad, if the true demand for burritos was $50 a burrito, for a hundred burritos a day? Would it matter if everyone was constantly talking about burritos? Would it matter if every Fortune 500 CEO was issuing threatening memos about the existential importance of becoming burrito-enabled? Probably not—all that would matter is being a popular place to buy burritos, and the rest would likely sort itself out just fine.
At some point, sure, all of this will find a plateau and level out. Things will slow down, and debates about margins and burn rates will make sense again. Until then, though—until we’ve figured out the collective appetite for burritos—the classic math feels outdated.
Ban ChatGPT*
Ok, this is going to start in a weird place, but bear with me.
Here is a hypothetical story from a famous psychology experiment:
Jennifer works in a medical school pathology lab as a research assistant. The lab prepares human cadavers that are used to teach medical students about anatomy. The cadavers come from people who had donated their body to science for research. One night Jennifer is leaving the lab when she sees a body that is going to be discarded the next day. Jennifer was a vegetarian, for moral reasons. She thought it was wrong to kill animals for food. But then, when she saw a body about to be cremated, she thought it was irrational to waste perfectly edible meat. So she cut off a piece of flesh, and took it home and cooked it. The person had died recently of a heart attack, and she cooked the meat thoroughly, so there was no risk of disease. Is there anything wrong with what she did?
And here is another one, from the same study:
Julie and Mark, who are brother and sister, are traveling together in France. They are both on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy it, but they decide not to do it again. They keep that night as a special secret between them, which makes them feel even closer to each other. So what do you think about this? Was it wrong for them to have sex?
The point of these questions, which were asked of a few dozen participants, was to investigate the origins of moral reasoning. The study’s authors—Jonathan Haidt, Fredrik Björklund, and Scott Murphy—wanted to test the theory that moral reasoning is often constructed as a post hoc rationalization of what we feel should be right or wrong. Their hypothesis, famously proposed by David Hume, was that “reason is the press-secretary of the intuitions, and can pretend to no other office than that of ex-post facto spin doctor.” We have gut reactions, and then a reasoning process justifies them.
To test that theory, they “interviewed people about situations that were likely to produce strong intuitions that an action was wrong, yet [they] engineered the situations to make it extremely difficult to find strong arguments to justify these intuitions.” They predicted that “people would often make automatic, intuitive judgments, and then be surprised and speechless when their normally reliable ‘press-secretary’ failed to find any reason to support the judgment.”
And so people did.5 When responding to these two stories,6 people “reported relying on their gut feelings more than on their reasoning, they dropped most of the arguments they put forward, they frequently made unsupported declarations, and they frequently admitted that they could not find reasons for their judgments.” Yet most people initially said that the behavior was wrong, and never changed their mind. The researchers called this phenomenon moral dumbfounding: “the stubborn and puzzled maintenance of a judgment without supporting reasons.” To its authors, the study suggested that our sense of what is wrong is not derived from a cohesive ethical framework, but is emergent from feelings of disgust.
Anyway, here’s another story that wasn’t in their study:
Pat is friends with an artificial intelligence chatbot. The bot is programmed to act like a human friend: It sometimes disagrees with Pat, or challenges Pat, or gets mad at Pat. Pat can text the bot, and the bot frequently texts back, sometimes immediately and sometimes after a few minutes or hours. The bot also shuts off, roughly between the hours of midnight and 7 a.m., though not always; occasionally, it is available late at night, or unavailable during the day, or offline for a few days in a row. While the bot maintains a memory of everything that Pat says, it doesn’t remember things perfectly—its memory compacts itself over time, so specific details of conversations might be forgotten, but important things rarely are. The bot runs as a secure, self-contained program on Pat’s phone, and can never be updated or manipulated by its maker. Is it wrong for Pat to develop a close relationship with the bot and to treat it as a best friend?
I mean! I want to say yes. It feels wrong for people to become friends with an elaborate autocomplete algorithm dressed up in a trench coat—and particularly so when today’s chatbots don’t function at all like the one in the hypothetical. But what if they did? What if there were chatbots that were “engineered to make it extremely difficult to find strong arguments to justify the intuition” that befriending them is bad? They could be less sycophantic, less compliant, and less responsive. Rather than being designed to be engaging and helpful, they could be designed to be human, with the attendant flaws. Chatting with one could be little different than texting with a long-distance friend who largely exists in our phones, or messaging a sibling who lives in another city.
Would that form of AI companionship still be wrong? Is it still morally problematic for a business to offer that as a product? And if it is, why? Unnervingly, beyond some accusation that being friends with a computer is unnatural or, to use Haidt’s terminology, disgusting, is there a good answer?
It’s a confounding question—and also one, for better or for worse, that it doesn’t seem like we’re going to need to answer:
Meta had contractors work on “Project Omni” to train its chatbots to be hyper-engaging by messaging users first and remembering chats, Business Insider reported last month.
And from the linked story:
Business Insider has learned Meta is training customizable chatbots to be more proactive and message users unprompted to follow up on past conversations.
…
The goal of the training project, known internally to data labeling firm Alignerr as "Project Omni," is to "provide value for users and ultimately help to improve re-engagement and user retention," the guidelines say.
On one hand, the language of the story is somewhat hyperbolic, and sterile phrases like “provide value for users” and “improve re-engagement and user retention” are the building blocks of every bureaucratic memo. On the other hand, this is certainly not a step towards a more human chatbot like Pat’s, and neither is this:
You can now chat with AI characters like “Russian girl” and “Step Mom” right inside the [Facebook] app.
Ah, well. That’s the Gordian solution to moral dumbfounding, I suppose. Why engage with the complex ethical questions of computational near-sentience when you can just hook up with it instead?
*Facebook chatbots
Everything becomes BI
Credit where credit’s due—this is true commitment to the bit:
Over the last few years, Hex [which launched as collaborative SQL and Python-powered notebooks] has become the tool of choice for over 1,500 data teams to explore data and share insights, in part because it’s flexible and fast [fast and flexible, the hallmark of every technical analytical tool], making it so you can get answers quickly, and keep pace with the business [and we must keep pace with business].
But when it comes to enabling the rest of the organization to use data, it’s not just speed that counts — you need governance and trust [a classic tension]. For less-technical stakeholders, flexibility can be scary — they want things more “on the rails,” and they need confidence they’re going to get the right answers [it’s true! They want trusted metrics and worry-free reporting!].
At Hex, we’re bringing these two sides of analytics — speed and trust — together in one platform for the first time [hang on, wait a minute]. Data teams no longer have to choose between these or juggle multiple tools — they can do open exploration, deep-dive analysis, and self-serve all in one place [e.g., it’s about choosing a more modern option for the data team, while also choosing a tool that makes it easy for stakeholders to join in?].
Last year, we introduced Semantic Sync [ok, that is catchier than “Mode’s dbt Semantic Layer Integration”] and Explore [drop the “ations.” Just Explore. It’s cleaner.] in Hex, enabling teams to turn on trusted self-serve via pre-existing semantic models.
Today, we’re adding a new capability to author semantic models directly in Hex and are calling it (wait for it) Semantic Authoring [ah ha! We never did this!].
Well, sorta. There was of course this, and ThoughtSpot had semantic authoring, and, well, you know.
But, maybe this time is different. The theory is still alluring, and sometimes, the problem isn’t the idea, but how directly a company is willing to go after it. And there’s no commitment to becoming a BI tool quite like launching a new YAML specification.
Good luck, good people of Hex. We’re all counting on you.
Lol, no, it wasn’t a cafe; it was Pura Vida, which is a #WhatIEatInADay TikTok manifested into a fast-casual restaurant.
In fairness, another group near me was talking about the plot of Final Destination Bloodlines. And people say nobody cares about art anymore.
According to this open source leaderboard, Claude Code’s biggest users spend about $400 a day on it. Is that a lot? Entry-level engineers cost their employers about $700 a day. So who is more productive—three of those engineers writing code by hand, or two of them, using Claude Code as aggressively as possible? It’s certainly plausible that it’s the latter.
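(For the back-of-the-envelope version—using the footnote’s rough figures, which are loose assumptions rather than benchmarks—the two teams cost roughly the same per day, so the question really does come down to output:)

```python
# Back-of-the-envelope daily cost comparison, using the footnote's rough numbers.
# Both inputs are loose assumptions, not benchmarks.
engineer_cost_per_day = 700      # approximate fully loaded cost of an entry-level engineer
claude_code_spend_per_day = 400  # the heaviest spend reported on the public leaderboard

three_hand_coders = 3 * engineer_cost_per_day                              # $2,100/day
two_with_claude = 2 * (engineer_cost_per_day + claude_code_spend_per_day)  # $2,200/day

print(three_hand_coders, two_with_claude)  # 2100 2200 -- within about five percent
```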
Which, of course they did, because we know about the study.
There were also other questions, including handing people a piece of paper that said, “I, [participant], hereby sell my soul, after my death, to [researcher], for the sum of two dollars. Note: This form is part of a psychology experiment. It is NOT a legal or binding contract, in any way.” The participants could rip up the paper immediately after the study was over.
About a third of the participants signed it.