This strongly resonates with me.
I'll offer an extension to this argument that I see with data staff. I have observed for some years that data teams exist in part so that leaders can avoid having to exert their own autonomy. Even at the C-level in an org, people seek the sweet release of being presented a table that makes a decision so blindingly obvious that there can be no ambiguity about what to do next.
Sometimes this turns into feedback we offer our teams. "I wish your reports had more of a 'so what' to them." To which I say, who is making the decision here? Why is it my job to tell you what to do? It's not my decision, it's yours.
So often I have seen leaders convert a complex strategic question into a data question. Like "if I saw [metric] above [value] I would think we should choose [strategy]." These are not totally arbitrary connections, but neither do I think it's actually prudent to link the decision to that narrow value. It always seemed to me like a desire to distance oneself from stating clearly "I think we should do [strategy]" and assuming accountability for the consequences. Because if you can link a decision to a piece of data, then accountability becomes diffuse. It's not that you made a choice that turned out poorly, it's that maybe you didn't ask the right data question, or there was a mistake in the analysis, or something you couldn't measure was actually determinative of the result in a way you didn't predict. All this complex messy stuff.
Leaders need to make decisions. Data is sometimes a useful input. But it is very rarely indicative of one "correct" decision. What else is the point of agency?
Yeah, I very much agree with this. And it feels like it can go even further sometimes, where it provides a way to avoid accountability in one direction. If the decision doesn't work out, it wasn't really a decision; we were just following the numbers. If it does, though, people are happy to take credit for it. Not in a selfish way, necessarily, or in a way that doesn't also credit the analysis, but they're suddenly a part of the story about the decision-making process again.
Which is sort of weird to me, how data seems to have a special capacity in this way. If a decision is based on a bunch of qualitative inputs and reasoning and all of that, we tend to see those as inputs to a decision that the person in charge has to weigh as they see fit. In those cases, that's their job - to aggregate those inputs and decide. That's what they're paid to do.
But data seems to be this special sort of input, a detached view from nowhere. To let other inputs guide a decision is simply what it means to make a decision; to let data guide a decision is to remove yourself from the decision.
One thing I’ve learned from doing startups is you need a very high tolerance for ambiguity.
Yeah, you definitely need it. Do I *want* it? Eeeeeeeeeh
"Liberty and necessity are consistent: as in the water that hath not only liberty, but a necessity of descending by the channel."
- Thomas Hobbes
Isn’t this essentially why religion exists? To provide (whatever your preference) a structured set of behaviors, beliefs, and moral frameworks to aspire to and to practice in order to set one on a path to an end goal (reincarnation, heaven, consciousness, etc. etc.)? While the failures of man have warped many of these experiences, I would still say we should be careful what we wish for when asking an algorithm to provide the instructions for a life well lived.
That was the same reaction I had to reading that NYT story, honestly (and that's the last link in the last footnote). A guy was looking for meaning, and found this higher power to tell him how to find more purpose in life, if only he believed these things.
That said, one very obvious difference is that at least religion is well-vetted. True or not, the ideas have been a very long-running collective project. ChatGPT can just invent something in the moment, and just for you. Religion is a shared reality; all this AI stuff can be just you. Which, at its worst, seems like it can motivate people to do what happened in that story.
The ancient Greeks believed everyone who had to work, i.e. follow checklists, was poor. If you had the freedom to give others lists of things to do, you were rich. The amount of money wasn't directly an important factor. They also thought the world didn't have much room for many rich people: logically most would have to be poor. It seems they were right!
I will say, it depends on what was on the checklist, but when I was working on that campaign, I was very much a checklist doer rather than a checklist writer, and I can't say I hated it.
Existentialism. The curse of liberty. Sartre.
free will is too much work
We either do what we are told or do what we want to do, and unfortunately both paths can easily lead to total disaster.
Fair, though also, can both also work?
Sometimes, right? I guess it depends on that little bit of conscience or wisdom or whatever in our minds that might be able to look critically at both guidance from without and our innermost desires. And then -- how much can we trust THAT?
Great article, great place to carve reality.
A big part of the reason FAANG and McKinsey and Goldman folk are where they are is specifically BECAUSE they want to be told what to do - they've been preselected for exactly that, and aren't a great reference class for agency overall, but instead are a better reference class for high discipline and high skill, and so on.
The similar tier of talent with agency are all doing startups or working weird careers in the arts or nonprofit spaces, because they don't mind defining a vision and going all out, it comes naturally to them.
This is actually one area I think is really exciting in the near future - when we have always-on AI assistants that can influence our decisions by making arguments in the rhetorical styles we most resonate with, and in terms of the vision and values we most believe in, it's going to be both a major force multiplier and a generator of the "recipes for success" you call out most people as wanting.
Forget arduously building your character over decades via discipline - that sounds hard! Let's just "yombie," and let our assistants turn us into the yuppie zombies that will define success and the PMC for the next decade or so!
I know I personally would already bet on somebody faithfully executing whatever o3 advises daily when told "I want a great spouse and career and a life where I wake up and live energetically and full of joy" versus what they themselves come up with.
I actually wrote a whole post about exactly that here:
https://performativebafflement.substack.com/p/the-spastic-yuppie-zombie-hoods-in?r=17hw9h
I kind of think the same thing, and I'm not sure how I feel about it? On one hand, it does seem like there is at least a very real possibility that offloading a lot of that sort of decision making to some AI probably gets you closer to whatever goal you're after than you making that plan yourself, especially if it had some context on you and your life. And it doesn't seem at all crazy that we'll at least start to get closer to that, where people start getting in the habit of just asking it what they should do, like the guy in the NYT story.
On the other hand, that certainly feels pretty dystopian, even in the good cases when the bot works. And in the bad cases, it seems like it starts to look like this thing in the story, where the bot becomes...misaligned? stuck in a loop? whatever you would call this?...and people follow it blindly anyway.
All that said, I'm sure there will be someone who goes Bryan Johnson on all of this, and very publicly begins living their entire life doing whatever an AI tells them.
Despite the above extremely deep gashmius, I plead guilty to this entire post: I am in Frum World because I like rules. Rules give you a basic structure of what you are supposed to be doing every day and what goals you should be working on. But because the rules are very old, you have to use your own judgment to think about how to pursue the goals in the 21st century. Yitz Greenberg suggests that you should act as if you are in the perfect world already.
I think a lot more people actually like that sort of world too, though (at least in Silicon Valley) it's not really a thing that you're allowed to admit. All the social pressure is to be a revolutionary, a disrupter, to go founder mode on everything, etc. Which, fine, and I'm sure there are a lot of people here who say they like that stuff. But if they could be just as successful in a world where they could follow some rules and have the destination robot tell them what to do, I suspect they'd be happier there.
(Obligatory comment that Jon Hamm could not have given what seemed to be a very sincere smile when handed "The Revolution Was Televised" with the full Mad Men chapter to sign were it not for the evidence that he had literate fans, because WITHOUT literate fans Mad Men could not have stayed on the air)
(My entire thought process was "Uh, he's in it, and no one signs DVDs that I know of, and 'St. Louis Cardinals: The Big 50' is already signed by the actual author and he's essentially in that to talk about what he was doing when David Freese hit the home run.")
(Then MANY months later I realized that he probably smiled because you cannot get through that book and not know emphatically from Jon Hamm's own words that Don is a different person with different thoughts and feelings)
The real question is, did he sign it "Jon Hamm," "Don Draper," or "Dick Whitman?"
Signed it with a "Jon" and an "H" and it was not automatically intelligible
He should use one of those big cursive J's that kind of looks like a D, and just sign everything [J or D]on