Post-money values
A furious ride uphill—and then, what?
I thought a lot about space as a kid, but I never wanted to be an astronaut. I wanted to play for the Atlanta Braves. I wanted to be Chipper Jones; then, when I realized I couldn’t hit,1 Andruw Jones; then, when I realized I couldn’t field, Greg Maddux.2
Then, I realized I should probably find another calling. It happened, all at once, when I found out, from a Sports Illustrated profile on him, that Alex Rodriguez started attracting the attention of professional scouts when he was in eighth grade. I was in seventh grade—and I hadn’t so much as attracted the attention of my middle school coach. I wanted to play with the best, and it was then that I realized that the gap between me and them was much, much further than I had imagined.
No matter. The world offered other ambitions, in other arenas. There were classrooms; there were colleges; there were internships to apply for; there were grad schools to try to get into.3 There were careers in Washington, D.C., and then in San Francisco—and scoreboards hang over those fields, too.
That is growing up, I suppose. You start out wanting to be good at what you think is fun, and eventually, you find yourself in other, more circumstantial ponds, pulled by other, more memetic ambitions: Status, notoriety, and—that most universal gravity—money.
Torque
Generative AI’s playful phase did not last long. In 2023, we had emails in the style of C-3PO; in 2025, everything in the style of Studio Ghibli. In 2026, it’s adapt or die.
Large companies are reconstructing themselves in AI’s shadow. We must do the same, people say; we must future-proof ourselves too. The economy is K-shaped: Some of us will adjust, learn, and climb the hill towards abundance. The rest of us will tumble into a permanent underclass.
That’s how it’s always been, though. There have always been new skills to learn; jobs have always come and gone. People have been worried about K-shaped economies for years. The careening, combustible boom of AI—and of our manic obsession with it—is simply compressing the letter’s angles. Get good at something, be the best, and make your money, before the walls close in. Meet the new boss, same as the old boss; this one is just pushing us faster through the turn. Meet the new gravity, same as the old gravity; this one is just pulling harder through the takeoff.
Meet the next new boss. Three days ago, Anthropic published the performance specs for Mythos, their latest large language model. But they did not publicly release the model, because it was deemed too dangerous:
Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout—for economies, public safety, and national security—could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes. …
We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale.
According to Anthropic, Mythos wasn’t built to hunt for these vulnerabilities. It got smart enough to find them on its own:
We did not explicitly train Mythos Preview to have these capabilities. Rather, they emerged as a downstream consequence of general improvements in code, reasoning, and autonomy.
For better or for worse, it’s all going as planned. The models have a handful of intellectual skills. Those skills generalize. They’re getting better. And they’re getting better, faster: “Anthropic’s capability trajectory bent upward in the period leading to Claude Mythos Preview.” Could it go faster still? Could the curve tilt back further? Is this the steepest part of the climb uphill—is this max q?—or is it just the beginning?4
Either way, flipping through the system card, the feeling comes back: A middle schooler versus a freak; the sense of a gap that might be much, much further than I had imagined. Except this time, it isn’t a single sport, but all of them. Be an engineer? No. A product manager? Doubtful. A doctor? Perhaps not. A lawyer? Try again. This Substack? For now.5
In an early description of its corporate structure, OpenAI warned its investors that “it may be difficult to know what role money will play in a post-AGI world.” It was a line that always felt like it went too far—the usual AI hocus-pocus, the sort of utopian myth-making that venture capitalists love, and that AI companies love to sell. Because scarcity is relative. Though we’ve long earned money with our wits and work ethic, if intelligence is abundant and workers are tireless, something else will take their place. There will always be another bottleneck; there will always be money for those who clear it. Society will always have a scoreboard.6 We can take off, but we cannot escape that gravity.
Up
You already know the metaphor. Last week, aboard a literal rocket ship, four kids who grew up wanting to be astronauts went to space. After an amazing ride uphill, they went farther into the distant frontier than anyone else ever has.7 Now, they are barreling back to earth, a mote of dust blasting through the void, hours from searing a final incision in the sky, and coming home.
What do they teach us? That even up there, from the pinnacle of professional achievement, what matters most is what—and who—is down here:
“And we would like to call it Carroll.”
But we know this already. We know it and rarely live it. We try to tell ourselves to slow down; we chase things anyway. We become our own sort of astronaut: We get lost in our games,8 our ambitions, or our own heads, and let the good thing go. The overview effect cannot fit on an iPhone either. You can only feel it.
Perhaps, then—how privileged we are for this moment of real existential weightlessness. We can ask ourselves, what would you do if the gravity were actually gone? Not: What would you do if you no longer needed to make money? But: What would you do if you were free from the tyranny of being able to make money? What ponds would you want to swim in? What ambitions would you want to manifest? Where would you go, if you were in space, where there is no such thing as “up?”
But before you answer, notice it: We are made anxious by those who have the new skills we’re supposed to have, like taste, judgment, and agency. We are jealous of those who are winning the games we’ve long played. But we are moved by those who have the courage to leave all of those old gravities behind.
Later, I briefly wanted to be Mark Lemke, because if there were ever a position that never feels entirely out of reach, it’s being a knuckleballer. (Also, man, this did not go where I thought it would.)
Emphasis on try.
Anthropic claimed that prior AI models didn’t meaningfully contribute to making newer models better. “It does not seem close to being able to substitute for Research Scientists and Research Engineers;” Mythos’ “advances were made without significant aid from the AI models available at the time.” On one hand, that means the sci-fi predictions about AI improving itself are still just sci-fi predictions; on the other hand, it means that all of this is happening without models being able to accelerate their own development.
A venture capitalist? Absolutely!
I think about this question a lot:
Is the promise of AGI and universal abundance incompatible with social media? No matter how much that machine makes for us, will we ever be satisfied if we can’t stop ourselves from doing the comparisons? If we all stare into a global feed of what the richest among us have, will we ever stop doing the math? If we build a machine that can give us everything, when do we dismantle the machine that makes us doubt that it is enough?
Machines, of course, have gone much farther. But it still matters when people do it.
I mean, it’s not the right energy, but almost all of this actually works pretty well here?
