Should they buy…Allbirds?
Everyone pivots to the enterprise. Plus, Block becomes a box.
See, maybe OpenAI is a normal startup after all:
They started with big, fun ambitions: digital intelligence; technology to benefit humanity, unconstrained by a need to generate financial return; a robot that can create TikToks.
They raised an absolutely titanic amount of money.
They changed their mind. They are now pivoting to the enterprise. No more TikToks—they now want a robot that makes software and does business:
OpenAI’s top executives are finalizing plans for a major strategy shift to refocus the company around coding and business users… “We cannot miss this moment because we are distracted by side quests,” [Fidji Simo, OpenAI’s CEO of applications,] told staff last week, according to remarks reviewed by The Wall Street Journal. “We really have to nail productivity in general and particularly productivity on the business front.”
Can they do it? The good news for OpenAI is that teaching a robot how to be a good software developer isn’t easy, but it is tidy. For example, if you want to train a large language model to write code, you can lock the model in a room with a bunch of coding assignments, and tell it to get busy. The model will go whirr, whirr, whirr!, the sun will eventually come up, and then you will grade how well it did on your tests.1
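That grading loop can be sketched in a few lines. Everything here—the `solve` convention, the sample assignment, the tests—is a hypothetical stand-in for illustration, not anyone's actual training harness:

```python
# A toy sketch of the "lock it in a room and grade it" loop described above:
# run a candidate solution against hidden tests and return the pass rate,
# which becomes the reward signal for the next round of training.

def grade(solution_src: str, tests: list[tuple[tuple, object]]) -> float:
    """Execute candidate code and score it against (args, expected) pairs."""
    namespace = {}
    try:
        exec(solution_src, namespace)  # whirr, whirr, whirr
        func = namespace["solve"]
    except Exception:
        return 0.0  # broken or malformed code earns nothing
    passed = 0
    for args, expected in tests:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crash on one test just means no credit for it
    return passed / len(tests)

# One hypothetical "coding assignment": reverse a string.
candidate = "def solve(s):\n    return s[::-1]"
tests = [(("abc",), "cba"), (("",), ""), (("racecar",), "racecar")]
print(grade(candidate, tests))  # → 1.0
```

The tidiness is the point: the tests are the grader, and the grader never gets tired, never argues, and never has an opinion about whether the website is good.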
This is rough and imprecise, of course. The problems that you give the model might be very hard; Cursor asked their model to build a web browser; Anthropic told theirs to build a compiler; some venture capitalists asked Claude Code to grow corn. You might make the assignments intentionally vague, and the model might need to figure out some details on its own. You might yell into the void that software developers don’t just write code; they interpret messes, and make tradeoffs, and translate business requirements into something that actually works. And you may not know exactly how to grade the model’s output. Does the browser work? Is the website good? These parts are not tidy.
Still, of all the things you can train a model to do, writing code is one of the crisper ones:
Training follows this idea of what’s called “gradient descent,” which is that as I make changes, as I do training cycles—incrementally how much improvement do I see, and at what point does it stop or even reverse? In certain domains, the data has a really high rate of gradient descent, meaning that small changes provide a huge signal back to the model. So they’re very good at those things. A good example of that is software itself. If I make minor changes in code, I don’t get minor differences on the other side; I get broken software. So there’s a huge signal that flows back into training when you make minor changes in software. … There could hardly be a better domain for training a large language model than software.
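For what it's worth, gradient descent in the plain textbook sense is simpler than the quote makes it sound: nudge a parameter against the slope of the loss, over and over. Here is a minimal one-dimensional sketch—the toy loss function and learning rate are my own choices, not anything from the quoted speaker:

```python
# A minimal textbook sketch of gradient descent: repeatedly move a parameter
# a small step against the gradient of the loss, so each incremental change
# produces a proportional, measurable improvement.

def gradient_descent(loss_grad, w=0.0, lr=0.1, steps=50):
    """Minimize a 1-D loss, given a function that returns its gradient."""
    for _ in range(steps):
        w -= lr * loss_grad(w)  # small change, clear signal back
    return w

# Toy loss (w - 3)^2 has gradient 2 * (w - 3); its minimum is at w = 3.
w_final = gradient_descent(lambda w: 2 * (w - 3))
print(round(w_final, 4))  # converges toward 3.0
```

The quote's real claim isn't about this mechanism, though; it's that code gives unusually sharp feedback—a one-character change flips a test from pass to fail—which makes the signal flowing back into training loud and unambiguous.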
Teaching a robot how to be a good employee, however—that is not so tidy. Sure, there are lots of small problems that business people want AI to solve for them, like sending emails and reading emails and organizing emails and deleting emails. But those are minor ambitions. Real business productivity is about making decisions. It is an analyst, figuring out what the company should do. It is a marketer, defining a new campaign. It is a salesperson, deciding who to call. It is a model that is a good CEO. “Once we’ve built this sort of generally intelligent system, basically, we will ask it to figure out a way to generate an investment return,” the CEO of OpenAI once said.
What assignments do you give that generally intelligent system to make it better at making business decisions? Unlike code, companies do not exist in a sandbox. “Create a web browser” is a big and somewhat ambiguous problem. “Tell me how to turn around my struggling business” is not only big and ambiguous; it is also uncontained. Maybe the answer is in your data, if you were to look at it just so. Maybe it’s in a careful reading of thousands of customer interactions. Maybe it’s in what your employees are saying to each other in Slack. Maybe it’s in some seemingly unrelated event on the other side of the world, in one TikTok that created a new meme, that created a new competitor, that created a new fad, that cratered your market and blew up your company. You cannot lock a large language model in a room with all of those things, because those things are everything.
In other words, to teach a robot to be an engineer, you need to write a computer science test. To teach a robot to be an employee, you have to first invent the universe—or at least, invent an entire company, with millions of fake product orders, and a diversity of fake customer service tickets, and countless fake internal emails and fake Slack messages, and years of fake market swings and fake trends on Twitter.
Allbirds, Once Silicon Valley’s Favorite Shoe, Sells for $39 Million
…
When Tim Brown and Joey Zwillinger founded Allbirds in 2015, Silicon Valley was immediately enraptured by its sustainable sneakers. Made from Merino wool, the comfortable shoes became a staple in tech office attire, with executives and engineers filling their wardrobes with the minimalist designs.
Management saw the growth potential and tried to expand the business around the world by opening 15 stores by late 2019, mostly in the United States. They opened locations in China, Britain and New Zealand. By the end of 2023, Allbirds had 60 stores globally.
Executives spent millions to try to lure consumers with splashy television ads, pushing new versions of the wool shoes and showing off sneakers made with new materials like eucalyptus tree fiber pulp.
I mean, no; it might not be legal; it is definitely bad optics; this is a joke; I’m not saying OpenAI should’ve bought a failing shoe company to use it as a gym for a bunch of AI employees. But…should they? Allbirds operated for ten years; it sold over a billion dollars in shoes; it employed hundreds of people. It is an entire corporate universe, packaged up for sale: Emails, Slack messages, databases, CRMs, ERPs, ATSs, ad campaigns, social media conversations, legal agreements, financial statements, SEC filings, leases, lawsuits, and an inconceivable number of documents and slide decks. If you are betting $122 billion on “a single enterprise platform” that is “integrated with systems of record, governed by enterprise-grade security, and designed to improve with experience as agents do real work”, is that sandbox not worth $39 million?2
Thirty-nine million dollars is what OpenAI spends every 16 hours.3 It is 0.005 percent of their $852 billion valuation. It is, according to one wildly unsourced Twitter post, a fraction of what OpenAI paid for a YouTube channel.4 It is, relative to how much money OpenAI has and how much money OpenAI spends, violently affordable.
You could make two points about this, I suppose. One is that, when you raise hundreds of billions of dollars with the explicit goal of replacing all knowledge work, normal math equations no longer work. Everything is affordable, and everything that increases your chances of success, even by some tiny percentage, is potentially worth it. It’s capitalism’s version of Pascal’s Wager: If the potential gain is all the money in the world, you can justify almost anything.
The other point is that, in the olden days, software was primarily useful when the software did something useful. Now, there is another use for software: Its code can be sold to AI companies, to be fed into the model’s insatiable maw.
Similarly, big businesses used to be worth money because they made money. But maybe the ones that don’t could still be worth something too, because they might be useful to a model that needs to learn how to do—and not do—business.
Have you tried a text block?
How often does Block, the financial services provider formerly known as Square, think about the Roman Empire?
Two thousand years before the first corporate org chart, the Roman Army solved a problem that every large organization still faces: how do you coordinate thousands of people across vast distances with limited communication?
Nine hundred words later, they continue:5
At Block, we’re questioning the underlying assumption: that organizations have to be hierarchically organized with humans as the coordination mechanism. … For the first time, a system can maintain a continuously updated model of an entire business and use it to coordinate work in ways that previously required humans relaying information through layers of management.
…
In a remote-first company where work is already machine-readable, AI can build and maintain that picture continuously. What’s being built, what’s blocked, where resources are allocated, what’s working and what isn’t. That’s the information the hierarchy used to carry. The company world model carries it instead.
…
The org structure follows from this, and it inverts the traditional picture. In a conventional company, the intelligence is spread throughout the people and the hierarchy routes it. In this model, the intelligence lives in the system. The people are on the edge. The edge is where the action is.
The edge is where the intelligence makes contact with reality. People reach into places the model can’t go yet. … But the edge doesn’t need layers of management to coordinate it. The world model gives every person at the edge the context they need to act without waiting for information to travel up and down a chain of command.
That is: Block is no longer a network of people and departments passing notes back and forth to each other. It is a giant box of facts, and its employees put facts in the box, retrieve facts from the box, and eventually, carry out the will of the box’s hive mind in the physical world.
We’ve talked about this a bit before:
What if we stopped making PowerPoints for each other, and made them for the machines instead? What if all of our TPS reports were absorbed into context layers and decision traces, and nobody ever saw the actual documents we put into the system? We dump our ideas into a text box; the machine uses our input to update its inscrutable repository of facts; other people interrogate the repository, not by reading it, but by asking the machine to fetch what they need. Why collaborate when you can add context? … For better or for worse, that seems to be where we’re heading—working around one another.
Unsurprisingly, Block believes it’s for the better. This is progress, they say, for Block and for Block’s employees: “The edge is where the action is;” “the system coordinates, and everyone is empowered.” But there is a fine line between a system that coordinates and one that decides. And between an AI that knows everything and us, who have “a smattering of specialized experiences and meaty hands,” who should be the agent and who should be the executive?
As you use one of these tools for a bit, you notice something else: It has good ideas. It asks good questions. It nudges in compelling directions. It offers options that you didn’t think of, and asks you how you want to fill gaps that you did not realize would be gaps. Though it is not perfect—sometimes you have to grab the wheel back, and take it down an entirely different road—you begin to like it when it drives. Sometimes, this is because you’re lazy and don’t want to make decisions. But just as often, it’s because it’s a better driver than you are.
And in that moment, who exactly is the intern?
Interns, after all, also reach into places that executives do not go, like dry cleaners and coffee shops. But also—interns have more fun, so maybe executive agency is overrated, and our demotion is a good thing?
Is that one-upmanship blog post—”Project Mend: Can ChatGPT turn around a $4 billion public company?”—not worth $39 million? (To be clear, my original curiosity here was, “Should an AI lab buy a large, distressed company to use all of their corporate IT systems as a way to create benchmarks for the enterprise agents?,” though that does create the obvious follow-up: “Should an AI lab buy a large, distressed company to see what happens if their AI agents run the whole thing?”)
In 2025, OpenAI made $13 billion in revenue and burned $8 billion, which implies that they spend about $21 billion a year, or $57 million a day.
The Technology Brothers, arriving for their first day of work:
OpenAI: “You smiling?”
The Technology Brothers: “Yes.”
“Yes, sir.”
“Yes, sir.”
“Why are you smiling?”
“Cuz I love technology. Technology is fun?”
“Fun, sir.”
“Fun, sir.”
“It’s fun?”
“Yes.”
“You sure?”
“I think?”
“Now you thinking, first you smile, then you think; you think technology is still fun?”
“Uh..yes?”
“Sir.”
“Yes. No?”
“No?”
“Sir, sir, uh, it was fun.”
“Not anymore though, is it. Is it?”
“No, uh–”
“No, it’s not fun anymore, not even a little bit.”
“A lil…No.”
“Make up your mind. Think. Since you thinking now, go on, think. Is it fun?”
“No sir, no. No sir.”
“Absolutely not.”
“Zero fun, sir.”

"Fred's challenge was that AI can't affect the physical world."
This is the sort of thing considered profound after drinking all night.
Weird shit can happen when lots of money & socially-challenged techs/VCs mix together.
And you certainly don't *need* AI to orchestrate growing corn. Maybe, someday, it will lead to marginally better outcomes. Maybe.