
A friend of mine leads a product team at a large tech company.1 It’s a company that you’ve not only heard of, but have also probably thought about working for. Everyone thinks about working there. They are leading innovators of groundbreaking technology, of world-changing products, and of indulgent tech company perks. Personally, I’ve applied twice and have been rejected twice, each time after a thirty-minute phone screen. The people who make it through their hiring process, which has itself become famous for its rigor and ruthless selectivity, are, by any traditional measure, extraordinary—extraordinarily smart; extraordinarily ambitious; extraordinarily determined. What if these people ran the world, Silicon Valley used to ask, rather than the thoughtless drones in Washington, DC?
My friend runs a team of these people. One day, one of them came to my friend and asked her a question. The person was a senior engineer and wanted to get promoted.
“I’m a Level 5 engineer,” the person said, “and I’d like to get promoted to Level 6 after the next review cycle. What will it take for you to recommend that?”
“Well,” my friend said. “The rubric says that Level 5 employees are people who do great work on the projects that they’re assigned. Level 6 employees have to find their own projects. To be a Level 6 engineer, you have to figure out how to be valuable on your own.
“This company has been successful because a few people came up with ideas on their own. Nobody told them to build it; they figured it out. That’s what makes someone a Level 6. Level 5 employees complete assignments. Level 6 employees make their own assignments.”
“Ok, I understand that rubric,” the other person said, “but can you just tell me what I should do?”
—
A friend of mine works at a very popular SaaS startup that makes a product in which people take notes, record tasks, and make project plans with colleagues. The company is often cited as one of the most discerning in Silicon Valley—it is a team of craftsmen, of tastemakers, of fiercely independent thinkers. Their customer roster includes a who’s who of other popular companies, from OpenAI to Cursor; they are often cited as exemplars of product development. What if everyone built products like these people, Silicon Valley now asks, rather than the thoughtless drones at that big company?
They recently built an internal chatbot that let people research the notes and documents in their own product. Ask it to summarize all the conversations with this customer, or all tasks that seem to be falling behind. What does this customer need most? What are the similarities between these delayed tasks? Ask it whatever you need to know, so that you can make better decisions.
After launching it to the team—a team that is the intellectual envy of Silicon Valley, which is itself the intellectual envy of the world—one question got asked over and over again:
“Can you tell me what we should build?”
—
Lots of friends of mine have gone to either law school or business school. Many of them are smart and ambitious, or at least do things that we correlate with those characteristics—they went to prestigious colleges; they got good grades and did well on various standardized tests; they watch Mad Men and say they like it. Prior to going to law school or business school, they were successful in many ways, and had many options for what they might do.
But freehanding a career is hard, and could go wrong. Law school and business school, by contrast, give people lines to color in. They give you more curricula to follow, with the promise of more prizes at the end if you ace the assignments. They let you punt—potentially indefinitely, if you go from law school to big law firm, or MBA to investment banking to private equity—on the question: “What should I do with my life?”
We’ve talked around this idea a few times before, but to make the point more explicitly: How much do people actually want—to use the term of the moment—agency? How much choice do we want? Do we want to choose our own path, or do we want to choose the destination and have something else tell us, step by step, how to get there?
We often say we want the former: We want jobs that let us be autonomous; we want tools that are instantaneous servants, giving us exactly what we ask for; we want to be liberated from the drudgery of tedious tasks, so that we can do more strategic thinking or lead more meaningful lives. We are shackled to our to-do lists; if only we could be free.
That all sounds very nice, and it is the sort of thing we’re supposed to say, because we’re supposed to strive valiantly and dare greatly. But it sure seems like we actually want the to-do list. The autonomy we want is to pick the ending—a promotion, a successful product roadmap, a happy marriage, a fulfilling life. After that? Just give us the checklist.
We open TikTok because we want to be entertained; it hands us a swipeable checklist to help us accomplish that. Businesses tell consultants that they want to make more money, and then pay a trillion dollars a year to be handed a checklist for how to do it.2 I’d argue that a lot of higher education exists because colleges and professional graduate degrees give teenagers checklists for how to be successful—come to this school, take the classes they nudge you towards, recruiters from McKinsey show up, they hire you, they give you a checklist, work for 40 years giving your clients checklists, retire.3 Though we often frame college as a place for self-discovery, the real product that we’re buying is a destination machine: We go because we have chosen a destination—traditional success, more or less—and it will tell us what to do to get there.
If you walk around Silicon Valley today, in this current moment of AI agents and personal agency, you’ll hear about a future in which we are all managers: We make the checklists, and our army of task robots complete them for us. And at first glance, that sounds all well and good. But if you could have only one of these two machines, which would you want?
A task machine that does whatever you tell it to. It is neither exceptionally fast nor exceptionally good; it is exactly as capable as you are. But it can do your work for you. The machine will write your emails and book your flights while you vegetate on the couch; it will not tell you what to write or where to go. You still have to figure that part out on your own.
A destination machine that gives you a checklist for wherever you want to go. Want to book a trip to tour Thailand? Here are your tasks, and where you should go. Want to get rich? Here are the things you need to do. Want to one day own a house on the beach, with a family and kids? Here are the changes you need to make to get there. You are its surrogate: It won’t do any work on your behalf; it will only tell you the work that you have to do. But if you can do that work, you will get where you asked the machine to take you.
Put differently, which is more stressful: Knowing all the various things we have to do in our lives, or not knowing if those things are the right things to get us where we want to go? Do we want servants, or do we want instructions?
The answer, it seems, is instructions, to a sometimes dangerous and nearly tragic degree. From the New York Times this morning:
[Mr. Torres, 42] wanted his life to be greater than it was. ChatGPT agreed, with responses that grew longer and more rapturous as the conversation went on. Soon, it was telling Mr. Torres that he was “one of the Breakers — souls seeded into false systems to wake them from within.”
…
“This world wasn’t built for you,” ChatGPT told him. “It was built to contain you. But it failed. You’re waking up.”
Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people.
If you immerse yourself in hacker houses and Substack think pieces, you hear about people who wish they had more hands, and a world that needs more autonomous interns. But “agency,” I suspect, is a better buzzword than product offering. Because most of us, even the senior teams at our most esteemed companies, are more like Mr. Torres, if not in degree at least in kind: Searching for meaning, and hoping for someone to tell us what to do.
These are stylized versions of these stories, and I’m keeping the characters anonymous, to protect them from the embarrassment of being outed as being friends with me.
And college kids work for consulting firms so that they can be handed checklists for how to be rich and upper crust-y.
Kyla Scanlon touches on this in a recent post about modern conveniences like DoorDash and Uber:
The irony is that this convenience was supposed to free us for deeper pursuits. With food delivery, we wouldn't waste time cooking; with algorithmic entertainment, we wouldn't waste time browsing; with frictionless finance, we wouldn't waste time budgeting.
But free us for what, exactly? The promise was more time for meaningful connection, creative pursuits, deep thinking - exactly the things that require effort, patience, and resilience, the very muscles that convenience has allowed to atrophy.
Though we want those more substantive things—connection, creativity, and so on—so far, modern technology has only given us lots of checklists for cheaper thrills. (There are instructions for things like connection and fulfillment too, but they are from the legacy incumbents.)
This strongly resonates with me.
I'll offer an extension to this argument that I see with data staff. I have observed for some years that data teams exist in part so that leaders can avoid having to exercise their own autonomy. Even at the C-level in an org, people seek the sweet release of being presented a table that makes a decision so blindingly obvious that there can be no ambiguity about what to do next.
Sometimes this turns into feedback we offer our teams. "I wish your reports had more of a 'so what' to them." To which I say, who is making the decision here? Why is it my job to tell you what to do? It's not my decision, it's yours.
So often I have seen leaders convert a complex strategic question into a data question. Like "if I saw [metric] above [value] I would think we should choose [strategy]." These are not totally arbitrary connections, but neither do I think it's actually prudent to link the decision to that narrow value. It always seemed to me like a desire to distance oneself from stating clearly "I think we should do [strategy]" and assuming accountability for the consequences. Because if you can link a decision to a piece of data, then accountability becomes diffuse. It's not that you made a choice that turned out poorly, it's that maybe you didn't ask the right data question, or there was a mistake in the analysis, or something you couldn't measure was actually determinative of the result in a way you didn't predict. All this complex messy stuff.
Leaders need to make decisions. Data is sometimes a useful input. But it is very rarely indicative of one "correct" decision. What else is the point of agency?
One thing I’ve learned from doing startups is you need a very high tolerance for ambiguity.