15 Comments
Nov 23, 2021 · Liked by Benn Stancil

Cool insight! Gen. Patton came to a similar conclusion. I might have to start running a timer for myself now. While this is definitely a great way to reframe self-evaluation, or maybe to compare individuals, do you think it would be a good metric for a company to measure its analytics team's performance? Could it introduce something like response-time quotas that get decoupled from realistic expectations, the way sales quotas often do?
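For what it's worth, that timer could be as simple as logging two timestamps per question and summarizing them; a minimal sketch, with invented data and field names:

```python
from datetime import datetime
from statistics import median

# Hypothetical question log: (asked_at, decided_at) pairs, pulled from
# wherever requests get tracked (a ticket queue, Slack threads, etc.).
question_log = [
    (datetime(2021, 11, 1, 9, 0), datetime(2021, 11, 1, 15, 30)),
    (datetime(2021, 11, 2, 10, 0), datetime(2021, 11, 4, 11, 0)),
    (datetime(2021, 11, 3, 14, 0), datetime(2021, 11, 3, 16, 45)),
]

# Median hours from question asked to decision made; the median resists
# being skewed by the occasional weeks-long research project.
hours = [(done - asked).total_seconds() / 3600 for asked, done in question_log]
print(f"Median time to decision: {median(hours):.1f} hours")
```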

Nov 22, 2021 · Liked by Benn Stancil

One issue I can think of is each team's data maturity. For example, if a business is just starting to analyze its data and doesn't have much reporting, the questions tend to be simpler and there are tons of things to optimize (because nobody was looking). As the business's complexity grows, a lot of the simpler decisions get automated (dashboards, data cubes, etc.) and the questions become more and more complex.

If I measure how long the data team takes to reply to a question, I could be measuring the increase in business complexity instead of the data team's performance. Also, how can I measure the efficiency gained from questions that are answered automatically via dashboards, alerts, or algorithms?
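One way to partially control for that, assuming you're willing to tag each question with a rough complexity tier, might be to compare medians within a tier and track the self-serve share separately. A sketch with made-up labels and numbers:

```python
from collections import defaultdict
from statistics import median

# Hypothetical records: (complexity_tier, hours_to_decision, answered_by),
# where "answered_by" separates ad-hoc analyst work from self-serve tools.
questions = [
    ("simple", 0.1, "dashboard"),
    ("simple", 4.0, "analyst"),
    ("complex", 30.0, "analyst"),
    ("complex", 45.0, "analyst"),
]

# Compare medians within a tier, so a shift toward harder questions
# doesn't read as the team getting slower.
by_tier = defaultdict(list)
for tier, hours, _ in questions:
    by_tier[tier].append(hours)
for tier, tier_hours in by_tier.items():
    print(f"{tier}: median {median(tier_hours):.1f} hours to decision")

# The share answered by dashboards is one crude proxy for automation's payoff.
self_served = sum(1 for _, _, who in questions if who == "dashboard")
print(f"Self-served: {self_served / len(questions):.0%}")
```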

Nov 14, 2021 · Liked by Benn Stancil

I've come across your writing recently and it's great! Consider me a rabid subscriber.

Some interesting points here, and while I generally agree with the speed-to-decision metric, I've also found that "too fast" generates more suspicion than "too slow". It's almost as if my stakeholders equate time spent with higher quality, even when the questions are very simple...


Hi Benn - thanks for this piece! Thoughtful, and I agree that any conversation with Boris will be wide-ranging and interesting. I'm wondering how you propose to annotate decisions/events in metrics and reports so that later on ... you can point at labels for decisions made by past you and get additional context into what the movie of the metrics looked like then. Just looking at the metrics and comparing period over period sometimes obscures the nuances of "this thing looks weird because we made another decision over here to prioritize X and then stopped that after 5 months".
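One low-tech version of this could be a dated decision log that the reporting layer joins against, so every odd-looking data point carries its context. A rough sketch, with hypothetical names:

```python
import datetime as dt

# Hypothetical decision log: each entry annotates a window of a metric's
# history with the decision that was in effect. All names are illustrative.
decision_log = [
    {"decision": "Prioritize X", "start": dt.date(2021, 1, 4), "end": dt.date(2021, 6, 1)},
    {"decision": "Stopped X after 5 months", "start": dt.date(2021, 6, 1), "end": None},
]

def annotations_for(day: dt.date) -> list[str]:
    """Return the decisions in effect on a given date."""
    return [
        entry["decision"]
        for entry in decision_log
        if entry["start"] <= day and (entry["end"] is None or day < entry["end"])
    ]

# When a period-over-period comparison looks weird, join it to its context:
print(annotations_for(dt.date(2021, 3, 15)))  # ['Prioritize X']
```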

Nov 5, 2021 · Liked by Benn Stancil

Hi Benn. Always love reading your work. Thanks for the article. Any thoughts on the process for tracking decisions? Should we think of it in terms of an 'ask them' measurement (which introduces biases)?

Nov 5, 2021 · Liked by Benn Stancil

Two points: (1) the decision-quality vs. outcome-quality nuance you're trying to bring out is very elegantly captured by Annie Duke in 'Thinking in Bets', and (2) speed to decision trumps sophistication of decision for most use cases where the cost of a wrong decision is low. But there's an inherent challenge: with this metric you aren't comparing two analytical methodologies. Most of the time your baseline is decision-making with no formal analysis at all, just a gut reaction, which is by definition the fastest way to a decision. How do you handle that?
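A toy back-of-the-envelope framing of that tradeoff, with entirely invented numbers, might look like:

```python
# Toy expected-cost comparison, all numbers invented: a gut call is
# instant but wrong more often; analysis is slower but more accurate.
def expected_cost(hours_to_decide, hourly_cost, p_wrong, cost_of_wrong):
    return hours_to_decide * hourly_cost + p_wrong * cost_of_wrong

gut = expected_cost(hours_to_decide=0, hourly_cost=200, p_wrong=0.4, cost_of_wrong=1_000)
analyzed = expected_cost(hours_to_decide=8, hourly_cost=200, p_wrong=0.1, cost_of_wrong=1_000)

# With a $1,000 downside, gut wins ($400 vs $1,700); raise the downside
# to $20,000 and analysis wins ($3,600 vs $8,000).
print(f"Gut call: ${gut:,.0f}, analyzed: ${analyzed:,.0f}")
```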
