In some ways what you are asking is not really about analytics, but about how to have a fact-driven / truth-seeking culture. Ray Dalio would have a field day with you on this.
The best organizations (Google, Amazon) supposedly had this early on.
Science-focused universities used to have this.
The easy answer is that this should come from leadership. But I don't think that's enough.
I think that to have a truth-seeking culture, you also need to be exclusionary and hire, as an example, the type of Sales people who prefer Truth over Fiction, regardless of how that truth is born - even if it happens through the process of discovering errors.
Kinda? I only know a bit about what that specifically means (and I had some interactions with Bridgewater back in the day), but I think that's a bit different. Even within those sorts of cultures, you have to figure out what to do when what you thought was true turns out to be wrong. And I don't think we can say "well, if everyone is always truth seeking or whatever, corrections are always good," because at some point, you stop trusting the data at all.
Which is why I landed where I did - maybe the best balance is to treat data as a bunch of estimates, and say it's all true-ish, rather than forcing it to be right or wrong.
To bring a bit of Buddhism into this, one does not stop trying to be mindful just because the mind gets distracted. In the right cultures, trusting data is not about data being correct. It is about relying on data, but also working towards making things more correct over time. And failures are to be expected.
But I agree that this is a high bar, and one that is ultimately unrealistic beyond like the first 50-100 employees who get to interact with Founders.
I think the culture stuff has to transition into org-level processes. In your sales example, an analyst who discovers errors ought to bring their Data Team leadership and Sales leadership together. Expose the issue. And then Sales leadership should find a way to communicate the issue to the rest of the Sales org so that trust remains.
And estimates are actually part of the org aspect as well. Ultimately, you earn trust by communicating data in a way that's relevant to the audience. That 543,891 number is 540k to Finance, 0.5mil to Investors, 110% to Sales, and 543,891 to engineers. Figuring out how to communicate each is a matter of aligning communication with each team's/org's leadership.
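To make that concrete, here's a toy sketch of audience-specific framing. The function name, the 494,000 sales target, and the format choices are all made up for illustration; the only real anchor is the 543,891 example above:

```python
# Toy sketch: one metric, framed differently per audience.
# frame_metric, the audiences, and the 494,000 target are invented for illustration.

def frame_metric(value: float, audience: str, target: float | None = None) -> str:
    if audience == "finance":
        return f"{round(value, -4) / 1000:.0f}k"    # 543,891 -> "540k"
    if audience == "investors":
        return f"{value / 1_000_000:.1f}M"          # 543,891 -> "0.5M"
    if audience == "sales" and target:
        return f"{value / target:.0%} of target"    # vs. a 494,000 target -> "110%"
    return f"{value:,.0f}"                          # engineers get the raw number

print(frame_metric(543_891, "finance"))         # 540k
print(frame_metric(543_891, "investors"))       # 0.5M
print(frame_metric(543_891, "sales", 494_000))  # 110% of target
print(frame_metric(543_891, "engineers"))       # 543,891
```

The code is trivial on purpose - the hard part is getting each team's leadership to agree on the framing rule, not implementing it.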
And rounding is a good one too, but it's not enough on its own - errors can span several orders of magnitude.
As usual, Benn asks the hard questions most of us would prefer not to think about…
Hellz Yeah Benn! 🤘
PS- I love rounded numbers, the precision of data to the last penny is nonsense! Round it to the last thousand for me. 👍
I wonder if Substack were previously counting fake email opens from Apple's Mail Privacy Protection (MPP).
Actually, that's something else that's potentially interesting: the email platform industry as a whole likes to report on "opens" as if they're some sort of objective truth. But the more you speak to people who understand how those systems work, the more quickly you realise how deeply flawed those metrics are. Those flaws are frequently either not communicated at all, or communicated incredibly poorly, to customers who've been taught that this number somehow matters.
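For anyone unfamiliar: an "open" is usually just a tracking pixel being fetched, and Apple's MPP pre-fetches those pixels whether or not a human ever reads the email. Here's a rough sketch of the kind of heuristic discount people apply - the event schema and the marker string are invented stand-ins, since real systems typically match against Apple's published proxy IP ranges instead:

```python
# Rough sketch: discounting likely automated-prefetch "opens" from pixel logs.
# The schema and PREFETCH_HINTS values are invented for illustration;
# they are not any email platform's actual detection logic.

PREFETCH_HINTS = ("apple-mail-proxy",)  # hypothetical marker string

def looks_like_prefetch(event: dict) -> bool:
    """Guess whether a pixel fire came from an automated fetch, not a human."""
    user_agent = event.get("user_agent", "").lower()
    return any(hint in user_agent for hint in PREFETCH_HINTS)

def human_open_count(events: list[dict]) -> int:
    """Count opens after filtering out events that look machine-generated."""
    return sum(1 for e in events if not looks_like_prefetch(e))
```

Even a filter like this is a guess, which is kind of the point: the metric was never the objective truth it's sold as.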
How do you start getting folks to stop looking at vanity metrics when they've been taught to look at them for years? 🤔
That story about email opens (which I didn’t know about) is exactly the kind of thing that makes this all so hard. It seems so simple to count, but as soon as you start getting into the details, you realize it’s all a giant mess. You could probably do the same thing with a hundred other common metrics, where what we think is easy and truthful ends up being some giant knot of complexity and vague definitions.
+1 for data = confidence game. Nevertheless, the stickiness issue has more to do with our tendency to hide behind data (or so-called experts) instead of acknowledging uncertainty.
In the 2032 version, we can do a few things when things go wrong: a) comment on the impact of decisions already made based on the data (if we can ever measure it), and recalculate the current and future ones; b) check whether the corrected data confirms or rejects our biases and learnings; c) comment on the revised targets or measurements that need to be made based on the new, corrected baseline.
Sometimes, the targets are literally basis points (a 50 bps project for acquisition marketing in the mobile channel, say) - which is the reason I think estimates won't do.
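To put that scale in perspective, a quick worked example (the conversion numbers are made up):

```python
# Made-up numbers, just to show the scale a 50 bps target operates at.
baseline_rate = 0.0200                      # 2.00% conversion in the mobile channel
target_rate = baseline_rate + 50 / 10_000   # +50 bps -> 2.50%

# Rounding either figure to "roughly 2%" erases the entire target,
# which is why coarse estimates can't support goals this fine-grained.
print(f"{baseline_rate:.2%} -> {target_rate:.2%}")  # 2.00% -> 2.50%
```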
Accountability for opinions based on data may be how future CDOs or directors are measured. I think we'll have chief analytics officers by then. Currently, blame-finding is a mess in modern-data-stack-based roles.
That last point is an interesting thought. Will accountability ever come for data teams? And if it does, what does that look like?
You’re a really good writer.
Thanks - I really appreciate that, and glad you enjoy it!
At my company (and my previous two companies), I've been the annoying person in the room demanding that we define our metrics the same way across the organization, that we have a data dictionary (or, if we have one, an up-to-date one), and that we require our clients to provide us THEIR definitions of their fields. I've been known to spend days tracking down the reason for a mistake. I've been reprimanded for spending too much time on data prep and exploration. People hate it. They respond in a professional manner, but they don't really want to talk about it. It's like everyone wants to pretend the numbers are right until we get caught with them being wrong. I like your rounding idea, and I also want to help stakeholders align their expectations with reality and educate them about what real data is really like.
There's an interesting element in that. How much nuance, how many caveats, etc., are people willing to tolerate? People can get worn down by constant reminders about the limits of interpreting analysis in particular ways. If you compound that with a bunch of naysaying about how much you can trust the data itself, I could see people eventually just throwing their hands up in the air and saying, what's the point?
But that's probably overthinking the whole thing. The actual answer may be: do the best you can, most people don't really care about the small caveats, and it'll all be fine.
I think that's a reasonable question--how many caveats are "tolerable" to people? Perhaps a data team should define how "serious" or impactful some inaccuracy or mistake we made needs to be before we communicate it.
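One way to pin that down is a materiality rule the team agrees on up front. A minimal sketch, assuming a relative-change test (the 5% threshold is an arbitrary placeholder, not a recommendation):

```python
# Sketch of a materiality rule for deciding when to announce a correction.
# The 5% relative threshold is an arbitrary placeholder.

def worth_announcing(old_value: float, new_value: float, threshold: float = 0.05) -> bool:
    """Return True if a correction moves a metric enough to tell stakeholders."""
    if old_value == 0:
        return new_value != 0
    return abs(new_value - old_value) / abs(old_value) > threshold

# e.g. restating 543,891 down to 538,200 (~1%) gets fixed quietly;
# a drop to 480,000 (~12%) triggers a communication.
```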
You could also make that part transparent, I suppose, where you tell people we might make changes to small things and not tell you, which gives you some cover to do it without it seeming all shady.
Exactly. WE SOLVED IT. lol