Cool insight! Gen. Patton came to a similar conclusion. I might have to start running a timer for myself now. While this is definitely a great way to reframe self-evaluation, or maybe compare individuals, do you think this would be a good metric for a company to measure its analytics team's performance? Do you think it could introduce something like response-time quotas that get decoupled from realistic expectations, as sales quotas often do?
No, not really. I think using it as a performance metric introduces all sorts of complications. It encourages people to take on simple questions; it encourages sloppy work; and so on. To me, the value of it is as a guiding motivation for how to answer the questions you need to answer. If it starts to influence the questions you get asked, then I think it becomes problematic.
One issue I can think of is each team's data maturity. For example, if I have a business that is just starting to analyze its data and doesn't have much reporting, the questions tend to be simpler, and there are tons of things we can optimize (because nobody was looking). As the business complexity grows, a lot of the simpler decisions get automated (dashboards / data cubes / etc...) and the questions tend to get more and more complex.
If I measure how long the data team takes to reply to a question, I could be measuring the increase in business complexity instead of measuring the data team's performance. Also, how can I measure the efficiency provided by questions that are answered automatically via dashboards/alerts/algorithms?
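To make the concern concrete, here's a rough sketch of the kind of segmentation I'm imagining: tag each question with a rough complexity tier and look at time-to-decision within each tier, so a harder business doesn't read as a slower team. All the column names and numbers here are made up.

```python
# Hypothetical question log: one row per request to the data team,
# tagged with a rough complexity tier. Columns and values are made up.
import pandas as pd

questions = pd.DataFrame({
    "asked_at":   pd.to_datetime(["2024-01-03", "2024-01-10", "2024-02-02", "2024-02-20"]),
    "decided_at": pd.to_datetime(["2024-01-04", "2024-01-18", "2024-02-03", "2024-03-05"]),
    "complexity": ["simple", "complex", "simple", "complex"],
})

# Hours from the question being asked to a decision being made.
questions["hours_to_decision"] = (
    questions["decided_at"] - questions["asked_at"]
).dt.total_seconds() / 3600

# Segmenting by complexity keeps "the business got harder" from showing
# up as "the data team got slower."
print(questions.groupby("complexity")["hours_to_decision"].median())
```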
I think that's fine. The goal isn't to lower the number in absolute terms; it's to think about how, given the question you're being asked and the context it's being asked in, you can get someone to a decision quickly.
As for the dashboards, the point isn't to measure every question exactly. It's meant as a guiding light. If building dashboards helps people make decisions quickly, build them. If they don't (because they're just metrics to put on a screen for the sake of having numbers, for instance), don't build them.
I've come across your writing recently and it's great! Consider me a rabid subscriber.
Some interesting points here, and while I generally agree with the speed-to-decision metric, I've also found that "too fast" generates more suspicion than "too slow". It's almost as if my stakeholders equate time spent with higher quality, even though the questions can be very simple...
Thanks! And there's actually a really interesting phenomenon about that: https://thefinancialbodyguard.com/the-locksmiths-paradox/
Coinstar machines apparently make artificial sounds to counteract it, and I've heard that kayak.com slows down results to make it appear like it's having to search really hard for low prices. https://90percentofeverything.com/2010/12/16/adding-delays-to-increase-perceived-value-does-it-work/index.html
Haha - very funny and thought provoking articles, thanks for sharing these!
Hi Ben - thanks for this piece! It's thoughtful, and I agree that any conversation with Boris will be wide-ranging and interesting. I'm wondering how you propose to annotate decisions/events in metrics and reports so that later on ... you can point at labels for decisions made by past you and get additional context into what the movie of the metrics looked like then? Just looking at the metrics and comparing period over period sometimes obscures nuances like "this thing looks weird because we made another decision over here to prioritize X and then stopped that after 5 months."
Thanks! And I don't know of any great solutions for this, to be honest. Some people just do this in Google Sheets, which seems to work...ok? I also know someone who's trying to build a lightweight app to help with this: it makes it easy to flag these sorts of things and records them in a place where you can mix them in with other dashboards and reports. They're still very early, but it's at least an area people are thinking about.
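For what it's worth, the Google Sheets version of this can stay pretty simple: a metric time series plus a small log of decision dates, overlaid on one chart. This is just a sketch of the idea, not anyone's actual tool; the file names and columns are placeholders.

```python
# Minimal sketch: overlay a decision log on a metric chart so "this period
# looks weird" carries its own context. File names and columns are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

metric = pd.read_csv("weekly_signups.csv", parse_dates=["week"])    # columns: week, signups
decisions = pd.read_csv("decision_log.csv", parse_dates=["date"])   # columns: date, decision, note

fig, ax = plt.subplots()
ax.plot(metric["week"], metric["signups"])

# One labeled vertical line per decision.
for _, row in decisions.iterrows():
    ax.axvline(row["date"], linestyle="--", alpha=0.5)
    ax.annotate(row["decision"], xy=(row["date"], ax.get_ylim()[1]),
                rotation=90, va="top", fontsize=8)

ax.set_title("Weekly signups, annotated with decisions")
plt.show()
```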
Hi Ben. I always love reading your work. Thanks for the article. Any thoughts on the process to track decisions? Should we think of it in terms of an 'ask them' measurement? (That introduces biases.)
Thanks! I think that's the best we can do, honestly. I know some folks (unfortunately, I can't find the link) who had a lightweight process of just writing things down in a Google Sheet to keep track of the reasons they made the decisions they did. It took some time to build the habit, but it worked once it was there.
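The lightweight version I mean is really just one sheet, a handful of columns, and as little friction as possible. Something like this, roughly; the columns and the helper here are only an illustration, not the actual process those folks used.

```python
# A bare-bones decision log: one CSV, one helper to append a row.
# The columns are only a suggestion, not anyone's real schema.
import csv
import os
from datetime import date

LOG_PATH = "decision_log.csv"
FIELDS = ["date", "decision", "reason", "owner", "revisit_on"]

def log_decision(decision, reason, owner, revisit_on=""):
    """Append one decision, and the reason it was made, to the shared log."""
    write_header = not os.path.exists(LOG_PATH)
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "decision": decision,
            "reason": reason,
            "owner": owner,
            "revisit_on": revisit_on,
        })

log_decision("Pause paid ads in EMEA", "CAC doubled last quarter", "growth team", "2025-01-15")
```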
For bigger decisions, something like this might work (though I've never tried it personally): https://barmstrong.medium.com/how-we-make-decisions-at-coinbase-cd6c630322e9
It's a bit of a hassle. It's good if everyone writes down why they chose something (that's probably the only way of thinking critically about it), but many folks simply don't do it.
Yeah, so far at least, I don't think we've got any good ways around that. Some stuff is just going to get lost, I suspect.
Two points: (1) the nuance between decision quality and output quality that you're trying to bring out is very elegantly captured by Annie Duke in 'Thinking in Bets', and (2) speed to decision always trumps sophistication of decision for most use cases where the cost of a wrong decision is low. But there is an inherent challenge you face: you are not comparing two analytical methodologies when you use that as a metric. Most of the time your baseline is decision making with no formal analysis, just a gut reaction, which will by definition be the fastest way to a decision. How do you handle that?
I've heard good things about Annie Duke's book before, so I'll have to check it out.
On the second point, I think that's the best objection to this. What happens when a decision is going to be made on gut feel anyway, and any analytical input necessarily slows it down? I think any guidance there is tough, because you can't interject in everything.
My answer, I think, is something along the lines of, "convince people of what you think is right, as fast as you can."