14 Comments

Loved, "Don’t use data to get good at something we do a lot; use data to figure out what we’re good at, and then do it more often."

Loved the article, everything from the sports to "how do we measure good decisions?". I purchased the Strengths Analyzer. Heard you on a DataCamp podcast, and now I'm going through all your content and devouring it!

Thanks, I really appreciate that. And glad you like it!

The “eternal regular season” rings true. I have always taken issue with the “burn the ships,” risk-everything mentality. Sure, you have to make decisions and move forward and not waffle back and forth - but I think risking everything with no way of return is often not necessary.

I don't know anything about finance, but I remember reading about this a while ago, couldn't remember what it was called when I wrote this piece, and then was reminded of it right after I published it. But I think this is basically the idea? https://en.wikipedia.org/wiki/Kelly_criterion
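(For anyone curious, the basic Kelly rule is simple enough to sketch. This is a generic illustration, not anything from the post: for win probability p and net payout b per unit wagered, Kelly says to bet the fraction f* = p - (1 - p)/b of your bankroll.)

```python
def kelly_fraction(p, b):
    """Fraction of bankroll to wager under the Kelly criterion.

    p: probability of winning
    b: net odds on a win (you gain b units per unit staked)
    """
    return p - (1 - p) / b

# An even-money bet with a 53% edge: stake about 6% of your bankroll.
print(kelly_fraction(0.53, 1.0))

# A fair coin: stake nothing.
print(kelly_fraction(0.50, 1.0))
```

Notably, the fraction only hits 1 ("burn the ships") when p = 1, so on any uncertain bet you never risk the whole bankroll, which seems like the connection to the comment above.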

Oh cool - I like that! Yup!

Great post! Footnote #1 seems to run a little counter to one of the key points of the entire post, though, doesn't it? I feel like Annie Duke has done a great job of trying to describe this by referring to the fallacy of “resulting”—using the benefit of hindsight to declare if a decision was a good one. You made a lot of points in the post that seem to support this, right? If you could flip the 50% heads coin or the 53% heads coin, it's entirely possible that, on one flip, the 50% coin would come up heads and the 53% coin would come up tails. You can regret that you didn't select counter to the odds (although you might be better off appreciating your momentary omniscience, in that you suddenly have knowledge of the counterfactual), but you still made the "right" *decision* based on the incomplete information that you had. No?

Thanks! And yeah, that all makes sense, but it also seems weird to not care about the result at all. In things like poker, it's one thing - they're all repeated games, there are odds to play, etc. But when you really do have one shot (one opportunity...), it seems off to me to say that you definitively did the right thing if the thing you did caused you to lose.

Maaaybe that doesn't matter on something like a coin flip, because it's just a random number generator and you can't possibly have been any more insightful about the problem. You had to gamble; sometimes you win; sometimes you lose. But the more uncertain the odds (e.g., football stuff), the more it seems like you have to ask yourself whether your analysis of the situation was wrong.

Maybe it's like this: The right decision is playing the odds. And for a coin flip, or poker, you know the odds. There is no, like, Bayesian adjustment of the "real" odds of the game based on the result. But for something like football, the odds are unknown. We can estimate them, but we don't really know. So if we play what we think the odds are and we lose, it seems like it should compel us to adjust our assumptions of what the real odds were by at least a little bit, and potentially cause us to change our mind that the decision was right.
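(To make that kind of adjustment concrete, here's a toy sketch with made-up numbers: treat the pregame estimate as a Beta prior over the play's true success rate, then fold in the one result you observed.)

```python
# Toy Bayesian update (made-up numbers): treat a 70% pregame estimate
# as a Beta(7, 3) prior over the play's true success rate.
prior_a, prior_b = 7, 3
prior_mean = prior_a / (prior_a + prior_b)  # 0.70

# The play fails once, so add one to the failure count.
post_a, post_b = prior_a, prior_b + 1
post_mean = post_a / (post_a + post_b)  # about 0.64

# The estimate drifts down a little, but one result alone is nowhere
# near enough to conclude the call was wrong.
print(prior_mean, post_mean)
```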

The odds are 70% that this thread could spiral endlessly...but I'll reply anyway. :-)

"...for something like football, the odds are unknown. We can estimate them, but we don't really know." True...but...? Along with our estimate of the odds, we can get even closer to a "truth" by estimating how close to the truth our estimate is. That's the basic point of a confidence interval, isn't it? I suspect those laminated sheets the coaches consult don't include confidence intervals, but I can amuse myself by thinking they do.

For an extreme example, say that a particular play is estimated to be successful 70% of the time, and the 95% confidence interval says the "true" odds very likely fall between 60% and 80% of success. The *true* (unknowable, but let's say we're omniscient) odds are 62%: if the identical play were run in the identical circumstances 100 times, it would be successful 62 times and unsuccessful 38 times. But, the coach has no way of knowing which of those 100 circumstances he is experiencing right now.

He runs the play, and it's unsuccessful. It still feels like he made the "right" call. But, that's back to a point in your post—making the odds-on favorite call once may turn out poorly. Making the odds-on favorite call repeatedly should show better results, but that doesn't mean every play turns out in your favor. And, second-guessing decisions based on the results needs to acknowledge that you're working with more information after the fact than you had at the point the decision was made.
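(That single-call vs. repeated-call gap is easy to simulate; a quick sketch, assuming the hypothetical 62% true success rate from the example:)

```python
import random

random.seed(0)
TRUE_ODDS = 0.62  # the omniscient "true" success rate in the example

# One play: even the right call fails 38% of the time.
one_play_succeeded = random.random() < TRUE_ODDS

# The identical call made 100 times: successes land near 62.
successes = sum(random.random() < TRUE_ODDS for _ in range(100))
print(one_play_succeeded, successes)
```

Run it with a few different seeds and the single-play result bounces around while the 100-play count stays near 62; the results only vindicate the decision in the aggregate.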

This does feel like re-hashing familiar ground, though. I'll go ahead and throw in "Russell Wilson goal line pass in the 2015 Super Bowl" to get that reference out of the way!

Yeah, so maybe my question is a bit different than what I thought it was. If we define right as "choosing the thing with the best odds," sure, I think that makes sense. But the Russell Wilson "run the ball!" example is a really good one because that's a play call you can only ever run once. There's no repeating at all; it's once in a lifetime. Was that the "right" call? I think the numbers actually support it, but it seems really hard to say it was? Just, like, you lost the Super Bowl on the goal line! *Something* was wrong! So I'm not sure how to think about that.

I think I'll land on, "I wish for a world where analysts and their business partners realize that this is actually not a simple question to answer." That would then ripple out, I think, to better management of expectations and better collaboration when it comes to putting data (and the probabilistic nature of what it produces) to productive use. (And hopefully not to analysis paralysis.)

Okay, but the thing about going for it on 4th is that coaches seem to be ceding their decision-making to the models when the data is useful for heuristics but doesn’t actually tell you exactly what you should do. Quite obviously, there’s not enough data to get to stat sig on that exact score/time/teams. You can say “the data seems to say teams should go for it more when down by 4 with 5 mins left,” but if you have a great kicker and you’re playing the best 4th down defense in the league and you have a good record of making stops and getting to the 35-yard line, you specifically might not have that edge in this situation. Like, you still have to think. I don’t know much about that specific Lions situation, but I think there’s still plenty of room to shout at NFL coaches even if you respect data.

For sure, and I don't think most coaches (including Campbell) are that mechanical about it. When they make some sort of decision like this and get asked about it in press conferences afterward, they'll often say stuff like, "we were really confident in our kicker" or "we felt like we had an edge because we'd been beating their nickel corner all day" or whatever.

Still, to the point about these small edges being hard to find, the conclusion that teams should go for it more often has historically been one of those things where the numbers are overwhelmingly on one side. Is the league now doing it an optimal amount? I have no idea. But punting all the time was kind of like NBA teams shooting a bunch of midrange jumpers - no matter how you ran the numbers, the conclusion was always pretty definitive that, at least in the aggregate, it was a very bad strategy (which, of course, doesn't say anything about individual decisions being right or wrong).

Great application of your perspective, Benn!
