The labor of little decisions
And the intoxicating power of things that make them for us. Plus, LinkedIn! And White Lotus Power Rankings!
If you are an 11-year-old fifth grader writing a book report, you spend a lot of time thinking about fonts. The right font makes you look smart. The right font sets the mood—Papyrus for a report on ancient Egypt; Lucida for Hernán Cortés; Copperplate Gothic for Edgar Allan Poe; Stencil for World War II. The right font (and the right plastic cover) is how you get extra credit. So when you open up Microsoft Word to write your report, you think very carefully about the font, because it is very important to get it right.1
On the other hand, if you are an AI company, or a venture capitalist who invests in AI companies, you do not think very hard about fonts. You are doing important and challenging things that take all of your attention and energy. You read technical papers and solve the hard math problems at the sharp edge of computer science. You do not have time for trivial aesthetic dilettantes; fonts are frivolous distractions; they are tinsel for children; they are opiates for the unoccupied brain. So your website is simple: A plain white page that uses a safe default like Inter, or one of the original Windows fonts like Courier New, or, if you are engaged in especially serious work, does not even specify a font at all.2
Ah, hahaha, no, that’s not true. We all care about the fonts that we use, because fonts are brand, and brand is identity. The font on our website, or the look of our adult book report PowerPoint busywork, tells a story, even if that story is, “I want you to think I do not care about this.” The absence of a choice is itself a choice—a statement, even. There is no true neutral.
When you build anything on the internet, like a website or a software product, you have to make tons of choices like this. You have to pick the fonts. You have to decide what the buttons look like—should they have rounded corners? How do they animate when you click on them? Should they use words or icons? You have to decide on background colors, and border colors, and highlight colors, and hover colors. You have to decide on padding and margins and how much space to add between lines of text.
You have to make a lot of bigger decisions too. Do you put the menu at the top of the page or on the side? Do you open things in modals or in new pages? Do you automatically save people’s updates, or do you ask them to do it manually? Do you organize things with tags or folders? Can you create nested folders? When people search for stuff, how should the results get ordered?
Most of the time, you don’t care that much about this stuff. Nobody builds a new app because they want to make Salesforce with prettier fonts, or because they want Google Sheets with the formatting ribbon on the side.3 People build new products because they have some novel innovation—Salesforce with AI, or Google Sheets with interactive dashboards, or Evernote with some bizarro twist. All these other details are just necessary scaffolding.
Necessary, and time-consuming. Because, even if you don’t care about this stuff, you have to make decisions about it.
There’s this common bit that people use when they’re teaching kids how to use computers. They tell the kid to give them instructions about how to make a peanut butter and jelly sandwich, and say that they will follow the instructions exactly. “Put jelly on the bread,” the kid says. The teacher puts the jar of jelly on the bagged loaf of bread. “Nooo, open the jelly first!” The teacher smashes the jar open on the counter.
The point is that computers follow precise instructions. Which is great—that’s why they can fly rockets and run stock markets4—but it’s also exhausting. Because, in addition to making them pedantic chefs, it makes them demanding designers. All of the necessary decisions that go into making something have to be specified. Even if you don’t care about the particular color of your app’s background, you have to choose the exact shade. When you want to add a picture to a webpage, you have to choose exactly how big to make it, and the exact number of pixels to put in the margins around it. If you don’t tell the computer precisely how to make a button, the button won’t exist.
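To make that concrete, here is roughly the pile of decisions hiding inside a single button. This is a minimal sketch in CSS-in-JS style, and every value in it is an arbitrary illustration, not a recommendation:

```typescript
// One button, a dozen decisions. None of these choices make themselves;
// all of them exist the moment the button does.
const buttonStyle = {
  backgroundColor: "#2f6fed",      // not just "blue": this exact shade of blue
  color: "#ffffff",
  fontFamily: "Inter, sans-serif", // yes, the font again
  fontSize: "14px",
  lineHeight: 1.4,
  padding: "8px 16px",             // breathing room, vertical then horizontal
  margin: "0 0 12px 0",            // space below the button, none anywhere else
  border: "1px solid #1f4fc4",
  borderRadius: "6px",             // rounded corners, but only slightly
  cursor: "pointer",               // even the mouse cursor is a choice
};
```

And that is before anyone asks how the button should animate when you click it.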
Of course, there are shortcuts—unstyled HTML; pre-made design frameworks like Bootstrap; picking a website you like and literally copying every color—but they’re only partial shortcuts. First, you can’t offload every decision, because there is no Bootstrap for choosing between modals and pages, or deciding if your app will use folders or tags. If you want to organize the notes in your bizarro note-taking app, you have to decide how to organize them.
Second, the defaults are boring! Things like Bootstrap are generic fads.5 And, though we can fully embrace the stock settings of raw HTML, that is both a severe choice—to pretend to exist beyond fashion—and hard to maintain.
And so, we fiddle. We think about all of this stuff because we have to make some sort of decision, and we want to make a good one. We get distracted, reverting to our 11-year-old selves, scrolling through the font list on Microsoft Word looking for our favorite. We spend time making thousands of mostly insignificant choices—partly because computers require us to define everything in line by tedious line of code, and partly because, when we’re sitting in front of a control panel full of knobs to turn, we can’t help ourselves.6
That’s life, in a way. So much of it is the distracting labor of little decisions. It is having to think about small things of relatively little consequence, because you can’t eat an unchosen meal, wear an unstyled outfit, board an unbooked flight, or categorize your application’s documents with an unspecified organizational unit. And the decision is often harder than the doing.
AI, we’re told, is coming for software engineers’ jobs. But most warnings include a caveat, of sorts: It’s coming for junior engineers. The robots are good enough to do the mechanical work of translating English instructions into performant code. But they aren’t good enough to make strategic architectural decisions, or see several steps ahead of the mistake they’re about to make. They will accumulate tech debt faster than they repair it, and engineers need to be their guiding editors.
Though this all seems true enough (for now), there’s at least one way in which AI coding agents are much more like senior engineers than junior ones: They don't need precise specs. They don't need everything spelled out for them. You can chuck vague requests at Cursor, to add loading animations or to put a button at the bottom of a page, and they do it, with pretty good results. They won’t come up with completely original solutions—their aesthetic is some rough average of everyone else’s style—but that’s often the point. Not everything needs to be innovative; it just needs to work without being a pixel-perfect copy of something specific. You don’t want invention, but an adjustable default: Make it denser; make it look steampunk; make it sound expensive. You want to be expressive without the painful specificity of layers upon layers of tangled CSS. You want three words, not hex codes and pixel counts.
It also works for more structural changes: “Add nested folders for the notes;” or, “add folders;” or, even, “add a way to organize my bizarro notes.” Though their choices here aren’t groundbreaking either, they are often good enough—good enough to fill the supporting roles that they need to, and, maybe more importantly, good enough to settle our compulsion to tinker.
In my recent attempt to Build Something, that feature of the experience was as striking as anything. Though vibe coding has come to mean “building software without needing to understand code,” there’s a more literal definition that better reflects its real allure: It’s decision by vibe. It’s being able to manifest stuff without actually having to choose what you really want. You can tell it your problem and your rough preferences, and it takes the wheel.
When people wax poetic about vibe coding, I suspect this is what they’re really feeling. Yes, AI breaks through a technical ceiling, but it also frees them from decision fatigue. It lets them think about the things they want to think about, and delegate what they don’t. AI is mechanically useful because it does stuff for us, and that is what we usually talk about. But its emotionally intoxicating power—its real delight, or its real danger—is that it decides stuff for us.
Ah. But. Here’s a paradox I think about a lot: Why don’t people use AI to decide what to eat for dinner?
It seems like such an obvious thing. AI systems—both the vintage machine learning algorithms and today’s generative varietals—are very good at recommending things. People have to decide what to cook or where to go to eat all the time, and they hate doing it. The decision barely matters, and if we make a bad one, we get to try again tomorrow. And yet, most people (I think?) still scroll through Google reviews and cooking apps looking for restaurants and recipes. As god is my witness, I will not pick the restaurant—but I will not let the computer pick it either.
So, if you want to build a product that makes decisions for people—or even want to claim that AI can—it seems important to first understand why people are still reluctant to let it make this one.
My best guess is that there are two problems. First, the recommendation apps are missing a bunch of enigmatic context—I’m very hungry; I just ate pizza and don’t want it again; I’m in the mood for grilled chicken and a sweet peanut sauce. And second, the choices feel arbitrary. They tell you to make Japanese curry chicken, or to eat at a random Italian restaurant, and don’t tell you why. Though they ostensibly make a choice for you, they also make it very easy to tell them no. “Pass; I don’t like that; it’s not the best.” These apps aren’t decisions; they’re just more options.
But I think it’s solvable: Have the apps convince us. Tell us why we should eat there—it’s not usually crowded on Wednesdays; it’s been saved to your favorites for a while and you still haven’t gone; this review says that they serve ridiculous cocktails in novelty glassware, and we all know how you feel about ridiculous cocktails in novelty glassware. Make me think about what I’m giving up if I swipe left.
The thing is, this is easy? If LLMs are good at anything, it’s reverse-engineering a persuasive argument. That’s kind of their whole thing: Being confident and convincing, about everything, including stuff that they just made up.
Sometimes, that’s bad, and most people who build AI products try to solve it by working really hard to make the robots right. But, outside of a few very important situations—flying rockets and running stock markets and stuff like that—I’m not sure that’s what we actually want. We want decisions. We want to choose a font, and be as confident as the AI that it was right.
There are lots of examples of where this might work. A restaurant recommendation app that sells us on its suggestions. A dating app that doesn’t try to find marginally better matches, but tells us why this match is worth asking out. A movie recommender that tells us what movie we should watch and why we’ll like it.7 Tools for buying gifts and designing your apartment and buying clothes that aren’t just algorithmic boxes of cottagecore dresses and chambray shirts, but are apps that tell you why you’d look great in cottagecore dresses and chambray shirts.
And, for the data people here, analytical apps that sell you on their conclusions. There is a small army of text-to-SQL startups that promise a future of instant answers and automatic insights. Most of these companies are hammering away at the technology, trying to get LLMs to conduct more accurate analyses. But this might be pushing on the wrong part of the problem. A better chart won’t convince us to do things; more persuasive words will. Don’t build text-to-SQL; build text-to-SQL-to-argument, and give an executive the emotional support to make a decision.
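As a sketch of what that shape might look like: the `llm` and `runQuery` helpers below are hypothetical stand-ins for whatever model API and warehouse you actually use, not real library calls. The point is just that the second model call, the one that makes the argument, is the one doing the convincing:

```typescript
// A sketch of "text-to-SQL-to-argument." Both helpers are assumed, not real.
type Row = Record<string, unknown>;

async function answerWithAnArgument(
  question: string,
  llm: (prompt: string) => Promise<string>,  // your model call goes here
  runQuery: (sql: string) => Promise<Row[]>, // your warehouse call goes here
): Promise<string> {
  // Step one: the part most text-to-SQL startups are hammering on.
  const sql = await llm(`Write one SQL query that answers: ${question}`);
  const rows = await runQuery(sql);

  // Step two: the part that matters more here. Don't hand back a table;
  // make the case, and say what it costs to ignore it.
  return llm(
    `Results: ${JSON.stringify(rows.slice(0, 50))}\n` +
      `Answer "${question}" as a short, persuasive recommendation: ` +
      `what to do, why, and what we give up by doing nothing.`
  );
}
```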
After all, we're not really looking for the perfect font, because we never wanted to think about the font in the first place. We're just looking for permission to make a choice, and move on.
Truman
See, this is how it starts:
Introducing Reach
For the first time in history, you can create an AI simulation of your own LinkedIn audience. Reach is your LinkedIn brand-building co-pilot, helping you test and refine your posts before you publish.
We sign up for the simulation to test our LinkedIn posts. We don't care about the simulation at first, because the simulation is not real. We can't sell our software in the simulation; the simulation does not need our recruiting services or our eight-week data engineering bootcamp. The simulation cannot subscribe to our newsletter.
But the simulation can like our newsletter. The simulation can talk about our newsletter. The simulation can make our newsletter seem important, and give us the thrill of our newsletter going viral. The simulation can become our connections, our fans, our parasocial suitors. The simulation can't buy our stuff, but it can give us what we really want: followers and fame.
The simulation is not real, but really, is Brandon Ellis | I Turn Founders into Fortune 500 CEOs | Author of the Zero-to-Unicorn Playbook | Blitzscaler real either? We never talk to Brandon. We don’t want to talk to Brandon. Is Brandon a machine? Is Brandon from the simulation? It does not matter. Brandon is just a plausibly sentient thumb hovering over a like button. Brandon has less rizz than that flirty Sesame bot. We’d rather talk to Sesame. We’d rather get a like from Sesame.
“Imagine seeing your post go viral before you post it,” Reach says. Sure, for now. But at some point, if you’ve already gone viral on the simulation, why bother trying to go viral on an even more lifeless one?
The White Lotus Power Rankings
After the second episode, Saxon is holding strong, though Greg is climbing fast. The outlook is grim for Gaitok and Mook. And Piper seems more innocent, but the rest of the Ratliffs look a lot more sus.
Episode two also smoothed out the demographic divide a bit:
After the first episode, 88 percent of women found the male characters deplorable and zero percent of women found the men likable. Though men also thought that the men in the show were deplorable, nearly half of the male viewers thought the male characters were the most charming. After episode two, however, both men and women found the male characters a lot more likable.
But people’s murder suspicions flipped? After episode one, female viewers were twice as suspicious of the women as they were of the men, and male viewers were twice as suspicious of the men. After episode two, the women are now twice as suspicious of the men, and the men are somewhat more suspicious of the women. Suspicious.
You know the drill. Vote early; vote often; vote for episode 3, which is the episode that already aired last Sunday.
It is also very important to distract yourself from having to actually write about Hernán Cortés.
Introducing a new Chatbot Arena: The Chatbot Website Arena.
This isn’t entirely true. Some products really are Google Docs with prettier fonts and the navigation bar on the side.
There’s a parallel here with the decisions people make about how to run a company. When you start a startup, it’s tempting to tinker with everything: To invent new organization structures and management philosophies and compensation plans. It’s almost always a bad idea. Most companies would be better off focusing on their one unique thing, and just doing the boring version of everything else.
You should watch the new Griff music video that came out today, and you will like it because it’s a Griff music video.