Let me tell you about the time we built a data warehouse and dashboards to discover our red-yellow-green dot system could be reduced to just red and green.
Seriously though, the best analytics project I ever worked on was designing a data model to understand and improve the efficiency of a state court system.
Questions like 'with case types of X, are certain judges consistently more efficient than others?' and 'how fast on average does each judicial district clear cases?'
Lots more I can't remember, but it was fascinating stuff and no one had done anything like it at the time. The guy in the state court system spearheading it was a pioneer. Designing that data model was a blast.
ah, i love this. this is the sort of stuff that I think is fascinating, both as a data thing and as a “look at this interesting fact about how the world works” thing. (It also seems like this is how most “insight” happens - not by doing complicated analysis, but by looking at something that’s never been looked at before.)
100%
In my experience, a lot of insights happen like many scientific discoveries: by accident. Someone notices something possibly interesting (or asks an interesting new question, or looks at a problem from a new perspective), does a little bit of digging, and discovers a little (or big) treasure. As if they noticed some extra gold in a portion of the river, did some digging upstream, and found a vein.
Meanwhile, the complicated analyses manage to unlock another 0.005% of additional revenue -- or was that just a gust of wind? Or a new product release? Or they start digging all over the place (wherever there is a little more gold than average) in the hope of finding another vein. Trying to use science in order to repeat the successes of the past, instead of venturing into unknown territories in order to discover new treasures (since science doesn't know anything about the unknown, it cannot really help us in unknown territory any more than a good sense of observation can). Or tip-toeing into new territories using A/B tests, as if that's how anyone ever discovered anything. When's the last time an "optimized" thumbnail on Netflix got you to watch something you didn't want to watch? It's like changing the cover of a book I guess... maybe it can increase sales by half a percent. Maybe. And then if we discover some kind of hidden formula, we can just make all of them look the same. But are we still increasing sales by that point? Insights quickly become trends, and by then any competitive edge is lost.
It seems like going from no data to some data is what provides the greatest insights we will ever get. Beyond that, it's mostly just small optimizations/diminishing returns. Because if there was something obvious hidden in the data, just waiting to be discovered, wouldn't we have discovered it when we analyzed the data for the first time? (if we did so thoroughly)
So maybe that's what we should focus on: what do we _not_ have data about? Or what data have we never really analyzed (or even looked at)? Or maybe new ways to look at the data we already have.
Another hard-hitting piece, thank you Ben.
I definitely share this sentiment. I am a data product manager. Every time our execs have a hard time trying to figure out use cases for something we are building, my manager would say, 'we are building a car, but you guys only know how to ride horses, so you probably won't understand what we are building until you see it.' In my mind I would ask myself, 'are we really building a car here? Or are we just churning out yet another horse that will eventually be forgotten, just like all the previous cars that we promised to build?'
Thanks! And I don't know that I'm totally convinced of this, but it seems like the data version of that is:
- we get asked by execs to build a horse, because that's what people are used to
- we say, "we'll give you a horse, but what you really want from us is a car; if we had more time, we could give you a car"
- they say, "ok, i need the horse now but sure, this car sounds nice later"
- but then we do not, in fact, know how to make a car.
Like, I don't know that there's no demand for the car; the problem to me seems like we don't actually know how to *supply* the car.
I love the “axes we fell in love with, rather than the gold we initially hoped to find”. I’m going to steal that. I’ve got an example for you: a client once asked me to figure out why their project margins went from 13% to 3%. The data informed, but the people informed more. Turned out the customer had change-ordered EBITDA into the ground. Not complicated to figure out (could have done it with Excel), but correctly identified, it was worth millions. So yeah, don’t be cynical; there can be gold in them hills.
Oh nice, thanks! And by all means, steal away.
And yeah, I guess I'm not so cynical as to say that there's nothing to be learned from data (and certainly don't think that there's nothing to be learned from asking questions and talking to people). But it does seem like a lot of that comes from finding fairly basic (though sometimes deeply buried) stuff, like "we accidentally broke the signup flow for Android users." It seems like we fancy ourselves detectives who piece together all these clues, but we're more like foragers, where the valuable stuff is just lying on the ground, and the problem is that we need to look in the right place.
Which isn't bad? It's just that it's a different job than we typically frame it as.
I think this analysis misses the part that some of the extras on set _do_ eventually become the main actors. Most do not, but that is life. Most thought leadership is window dressing, but some is not, and those are the bits that make it through. This does not make _all_ thought leadership unimportant, just most of it. So it is with most things. It's the usual power-law distribution.
I think that's true, or at least how it often plays out. My theory (which I didn't say here, but maybe should've) is that most data people do have some story about some "insight" that they found, but it happens once or twice in their careers. They have their big moment, find that thing that lands on the CEO's desk, and really does change things. It's just very uncommon. Which isn't bad, necessarily, though I'm not sure it's quite the narrative that we usually tell people when we talk about analytics.
While investors are okay with only 5% of startups succeeding, saying that analysts find an insight in only 5% of instances will raise some eyebrows, I guess.
Yeah, it's not a great pitch. (Though I once heard someone say that analysis should be thought of like looking twice before you cross the street. It usually doesn't matter, but when it does, it's priceless.)
Do you use analytics as part of your workflows, Benn? If you do, then why the question? If you don't, then why not?
I mean, today, no? My current "work" is this blog, and I notice when some posts get more traffic than others, or more people unsubscribe. But I don't track it closely, and I've never attempted to do any sort of proper analysis on it.
I don't, for two reasons. The first is soapbox-y: I think chasing engagement on blogs and "content" tends to make everything formulaic and manufactured, like the Netflix aesthetic (https://www.vice.com/en/article/why-does-everything-on-netflix-look-like-that/). So I'd rather do this particular thing based on instinct than numbers.
The second is that I don't think I'd find that much in it. Maybe there is some wisdom there, or maybe someone better at this could find it. But I'm not so sure I believe that there will be any "insight" in me trying to analyze performance metrics of different blog posts, or whatever.
(That said, I *do* think there could be something in using more qualitative analysis - and, ugh, AI - in this case. Which is increasingly becoming my view of the whole analytics thing, to be honest. That the numbers are more of a distraction.)
eg, https://x.com/hiattb/status/1862906461031960586
Right. So, how would your perspective have been different if you worked in, say, Finance or Healthcare? Would you trade on instinct? If you were a doctor, would you say- no need for blood tests/MRI/such, let's just go with my gut? I can fully see how applying analytics to content creation/distribution is mind-numbing, but there are still lots of areas that can't operate without the insights that analytics provides. Perhaps a stint in one of those might provide the insight you are looking for?
I mean, I’m not saying doctors should be out there doctoring entirely on instinct. Or that there aren’t cases where data is useful. But that’s sort of my point - if you have to work in particular industries for data to be useful, that’s still pretty damning of the general promise of analytics, which was effectively “with data, find insight.” It wasn’t caveated with the industry you work in, or what you’re looking for. If anything, that promise was sold more to the tech industry, because they had “web scale big data.”
It wasn't sold to the tech industry as much as it was evolved by the tech industry. Bezos worked in trading before he started Amazon and was data-driven in all ways. He used product ratings to determine inventory levels. He used customers' location data to determine where the warehouses need to be. Being data driven about all those decisions enabled AMZN to scale/grow the way it did. Google and Meta applied insights to advertising, and turned click-bait into a formula. Netflix did the same for movies. Spotify for music/podcasts.
Data is useful no matter which industry you're in, but finding insight in certain industries/situations feels detrimental to the soul. Take dating, for example. If you know from data analytics and the person's psyche that saying the right things at the right time is going to get you laid, that is still insight, although acting on the insight makes one feel like a soulless sociopathic creep.
If you want to paint your heart out, but have insights on the right colour combinations that will make your paintings sell, what would you choose?
The problem with the tech space today is that it is all about people's attention, rather than delivering anything concrete to people. As soon as you work in any real space (supply chain, logistics, health, food production, manufacturing), insights start mattering again.
Ehh, I think that's where I disagree. I agree that a lot of the data hype came out of those companies, but I think that's largely because the data they had was inherently more valuable to their businesses than most other companies' data is to theirs. In some cases, sure, using data is icky, but more than that, I think it's that using data is often just hard. (A longer version of this: https://benn.substack.com/p/day-of-reckoning)
I am not sure I follow. The data that these companies had was more valuable to their businesses than other companies **in their industries**? So, Google's data was more valuable than Yahoo's or Excite's? Was this because of their focus on data or their business model?
Yes, using data is hard - because one can't use the data unless one understands the operations it is meant to support. Most people in the "data industry" think that the work involves cleaning data until one can run XGBoost on it. :)
It seems like Ben believes "insight" means earth shattering revelation, as opposed to better understanding. I feel like I'm in bizarro world reading these latest posts.
I am still trying to figure out what Ben(n?) means, because it seems to me that he is seeing data as somehow separate from the operations/decisions they support. Having read his other posts (and the insights they carry), this seems very strange indeed.
My point is more about the marketing and brand of analytics, not what I think is actually valuable or what is or isn't an "insight."
Because regardless of how you define insight, the language the industry uses when it talks about the value of data very much suggests that there are earth shattering revelations out there, or at least big, unintuitive discoveries (eg, Snowflake's language of finding "previously unimagined insight"). Buried in every big dataset, there are these sorts of discoveries, if only we could unearth them.
But are there? Do those things exist? Do we actually find them? Maybe the other operational stuff is insight; maybe not. My question isn't about those things at all; it's about the earth shattering stuff, and how much of the marketing around those things is reality vs selling a pot of gold at the end of a rainbow.
Depends on who built the dataset. Some datasets contain zilch.
But let’s say that, tomorrow, there is a dataset from NASA/SETI/whoever that shows that, 400 light years away from us, there is a civilization at the same stage of technological development as us that wants to connect with us (I am obviously ripping off a famous sci-fi plot here). Would you consider this earth-shattering, because this insight proves that alien life exists and changes our view of the universe, or would you go: meh, how does this change my actual day-to-day life at all?
Maybe this is just our nature (for tech people at least). Like how most software engineers dream of building the foundation for large scale sites & apps (https://highscalability.com/, http://highscalability.squarespace.com/), as if they were building the next big thing, but actually work on stuff most people don't really care all that much about. Same thing with business/startups: most of them fail (presumably because they fail to build something people care about).
Maybe we just don't know what we're doing, or where we're going. Or maybe there's a small percentage of us which is just much better than everyone else. That's what the data says: https://en.wikipedia.org/wiki/Normal_distribution.
In the end, the best we can do is to focus on our own individual part, and hope that others do so as well, and that it all works out in the end. I think that's what most data people are trying to do (even if it often looks like our work doesn't really matter).
One cool thing with data, though (from a data engineering perspective, at least), is that diligent analysis & optimization can directly affect the bottom line, so dealing with technical debt is much easier to explain to stakeholders, as it has a direct impact on the business in terms of cost savings. Although in my experience, many data engineers struggle to get to a point where they can make these kinds of contributions.
In traditional software development, most stakeholders cannot understand the true cost of technical debt until no one wants to touch a codebase anymore (at least, not without wearing a hazmat suit). And no one wants to wear a hazmat suit to work every day (unless the pay is there, maybe).
I do suspect that a big part of it is that 1) analysis is hard, and 2) some people are just better at it. I've said this a few times before, but it feels like we gauge what's possible with data by looking at what superstars do in advantageous environments, say, "yes, we want that too," and think we can get it if only we build the right systems. But there is no factory that produces LeBron Jameses. You can manufacture good high school and college basketball players - reporting and mechanics and cost savings - but you can't manufacture the profound stuff.
And I very much agree on the cost saving thing (or "business process" optimizations in general). Every company is sloppily run; if that was your goal - make everything move a little faster and smoother - I suspect the job is a lot easier.
That's what gets me up in the morning: "how can I make things a little better/faster/cheaper/smoother today?" At least it's tangible. No matter what, it's still a win.
But I'd be pretty depressed if it were: "how can I find some insights today?" (i.e. "will I manage to find a needle in the haystack?"). And otherwise, build more dashboards... that maybe someone will use... and maybe will make some kind of difference.
I feel like a lot of the time, our work is like trying to predict the waves of the ocean ("oh look, there's another one coming!", which will always continue endlessly), when it is really just detecting/predicting the tsunamis that matters -- and those don't happen very often, and are even harder to cause (otherwise, everyone would be causing them all the time, and then they would just be waves again).
Like trying to optimize a conversion funnel where every 0.1% improvement is a "victory", even though the conversion ratio naturally fluctuates by 1% to 2% pretty arbitrarily (and who knows what external forces are at work behind those fluctuations; maybe people are in a better mood on Thursdays? Or is it the moon? Or COVID?). And then trying to repeat those "victories" never works consistently. But we're doing "science"! The numbers prove it, and they are statistically significant.
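(To put rough numbers on that point: here's a minimal simulation, with invented traffic figures, in which the true conversion rate never changes at all. The day-to-day binomial noise alone swamps a 0.1% "victory".)

```python
import random

random.seed(42)

TRUE_RATE = 0.05   # the "real" conversion rate, held constant on purpose
VISITORS = 2_000   # hypothetical daily traffic

# Simulate 30 days of conversions with a truly unchanging rate.
daily_rates = []
for _ in range(30):
    conversions = sum(random.random() < TRUE_RATE for _ in range(VISITORS))
    daily_rates.append(conversions / VISITORS)

spread = max(daily_rates) - min(daily_rates)
print(f"observed daily rates: {min(daily_rates):.2%} to {max(daily_rates):.2%}")
print(f"spread from noise alone: {spread:.2%}")  # typically 1-2%, dwarfing a 0.1% lift
```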
We run a pretty large network of sites, and things that kinda work on one site don't work on the others. Or they work one week/month and not the next. Go figure.
It's like we're looking for a few more berries to gather instead of hunting something worthwhile. But then again, we can't only rely on hunting. And we can't be flying completely blind either. But how much blood can you possibly squeeze out of a stone?
Maybe our job is mostly just to land the plane safely.
I once heard someone say analytics is like looking twice before crossing the road - doesn't usually matter, but saves your life when it does.
I was into it when I first heard it, and it makes sense to me. Though now, I think I'd have the same question about that that I do about "insights" - does that actually happen? Are there that many examples of it? Of course, it doesn't take many to make it worthwhile - one, really - but it still feels a little theoretical.
Really interesting. The truth can hurt, and brother I think you’re speaking it. More system 2 thinking, less conditional formatting!
It’s indeed like finding gold within data. One can’t just hope to get lucky. Hence I concentrated on building a tool which acts as a multi-faceted insight (?) extraction toolkit… shovel, extraction unit, and cleaning equipment all rolled into one. You explain what you want from the data in terms of what you’re interested in (inputs expressed in terms of the data elements: filters, partitions, patterns with header/detail matching elements, input and output elements, pre-processing steps, post-processing steps, and calculated KPIs of interest)… then the tool is set loose on the beach, or dataset, and it finds what it’s been asked to find in an exact and deterministic way.
It’s used to find data (subsets or KPIs therein) which are of interest, and which would presumably also contain insights, if you didn’t already know the information/content present in the newly discovered/transformed outputs.
If you can describe what you want in vague or precise terms (not in free-form NLP form, but against a pattern data structure composed of multiple bells and whistles), then it will find what you want... and the way to iterate and explore more is to modify the ask to reflect the new pattern one is interested in... in a sort of ask, tweak, and repeat sense.
I mean, that's sort of the holy grail of this whole data thing, where the machine just takes raw sand and outputs all the diamonds. And that sounds...too good to be true?
holy grail of data/analytics.... I don't know :-) but I have been guilty of doing LinkedIn searches of the phrase periodically, to keep track of what qualifies as such from an analytics perspective.
The machine can take in raw sand info... and split it into partitions by attribute1 (pin_code, say) and operate on them to output either all jewels (overall count), or counts split by jewel_type (overall, 5 rows for 5 types of jewels), or even output split by attribute8 (a region field with cardinality = 15, say), giving partition-wise split counts across (15x5 = 75 maximal) output records. Input processing helps in sessionization, or categorizing 1000 buckets of sand into 1 unit and counting units per pin_code (say). Or, with a different dataset, identify/detect a type of crime and aggregate input to a single instance per pin_code (a single record of sufficient weightage reflecting the number of times that type of crime has occurred: a record to indicate existence, special attributes/fields of the record to indicate prevalence or weightage, etc).
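(If I'm following this right, the partition-and-count mechanic is essentially a grouped aggregation. A rough pure-Python sketch of just that part, where pin_code, region, and jewel_type are the toy names from the comment, not a real schema:)

```python
from collections import Counter

# Toy records standing in for the "sand": each row carries a partition key
# (pin_code), a second attribute (region), and a found item (jewel_type).
records = [
    {"pin_code": "560001", "region": "south", "jewel_type": "ruby"},
    {"pin_code": "560001", "region": "south", "jewel_type": "pearl"},
    {"pin_code": "110001", "region": "north", "jewel_type": "ruby"},
    {"pin_code": "110001", "region": "north", "jewel_type": "ruby"},
]

# Overall jewel count per partition (pin_code).
per_pin = Counter(r["pin_code"] for r in records)

# Counts split by (region, jewel_type) -- the "15 x 5 = 75 maximal" style output.
per_region_type = Counter((r["region"], r["jewel_type"]) for r in records)

print(per_pin)          # Counter({'560001': 2, '110001': 2})
print(per_region_type)  # Counter({('north', 'ruby'): 2, ('south', 'ruby'): 1, ('south', 'pearl'): 1})
```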
This is doable, and has been done in a proof-of-concept sense... We need to pivot the data to fit into a header-details paradigm... even including potential header fields as artificial "products". Everything under one column (or two, for id/value).

The pattern structure/idea is borrowed from the humble Market Basket (MB) rule... If (conditions/antecedents) then (condition/consequent), acting as a sort of proxy context for the current analysis. The rule structure allows for flexibility via user actions like drillup (reduction), drilldown (extension: p,q => d to p,q,r => d), hypothesis assertion/validation, modification of a hypothesis, or a new hypothesis (from p,q => r to p,q,r => d), etc.

Also, if persisted as an explicit dimension of analysis... we can have multiple independent rules co-existing, each of which can be a single strand/thread of analysis... all against the same dataset. The co-existence of multiple rules, with the potential for many/all being in play, allows us to piece together many small steps into a substantial whole (e.g. a user behavior funnel or a recommendation scenario can be thought of as a composite set of related/competing patterns). There is a lot of literature on calculating very many KPIs from groups of products/itemsets, involving conditional probabilities etc. One can transform single-row datasets with only header fields into a basket-of-items/fields scenario, thereby making single-row datasets a special edge case of header-detail or Trx-LineItem datasets where there are no "detail type products".
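(And the rule mechanics being described -- an if-antecedents-then-consequent pattern that can be drilled down by extending the antecedents, then re-scored against the same dataset -- might look something like the sketch below. This is a guess at the shape of the idea, not the actual tool:)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    antecedents: frozenset  # the "if" itemset, e.g. {p, q}
    consequent: str         # the "then" item, e.g. d

    def drilldown(self, item: str) -> "Rule":
        """Extend the antecedents: p,q => d becomes p,q,r => d."""
        return Rule(self.antecedents | {item}, self.consequent)

def score(rule: Rule, baskets: list) -> tuple:
    """Classic market-basket KPIs for one rule over transaction 'baskets'."""
    matching = [b for b in baskets if rule.antecedents <= set(b)]
    hits = [b for b in matching if rule.consequent in b]
    support = len(hits) / len(baskets)
    confidence = len(hits) / len(matching) if matching else 0.0
    return support, confidence

baskets = [["p", "q", "d"], ["p", "q", "r", "d"], ["p", "q"], ["q", "d"]]
rule = Rule(frozenset({"p", "q"}), "d")
print(score(rule, baskets))                 # (0.5, 0.666...) for p,q => d
print(score(rule.drilldown("r"), baskets))  # (0.25, 1.0) for p,q,r => d
```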
I’d be lying if I said I entirely followed this, though I think my immediate reaction is that this sort of thing does seem to work on proofs of concept, and on kind of dummy datasets, but is very hard to apply to messy real world stuff. It’s essentially this problem: https://benn.substack.com/p/the-design-flaw-at-the-heart-of-every
One interesting (public) example: Caroline Hoxby’s work to uncover links between student socioeconomic status, College Board scores, and admissions. Paul Tough writes about her work in his book, The Inequality Machine.
Oh nice. And yeah, there are probably some good examples from academic research where people have found some very interesting things like this.
While basically everything you say here seriously resonates, I'd offer a silver lining. It seems to me that you can draw a very straight line from working on that newsletter to this substack - one that I can attest has a much shorter path to producing actual value. And I'm sure that is just one way in which that work seeded something that has borne fruit on a much longer timeline.
I think about The Ringer - a place where podcasts are obviously the cash cow, but a group of super talented writers are getting to sneak really excellent work through, because it probably accounts for less than a percent of Spotify's operating expenses.
Nearly every venture ever feels to me like a lot of bluster to support the 5% of the leading actors. But the 95% are just in a slow-cooker where some amount of them will go on to become the 5% somewhere else years later. Farming the farm system out to AI bums me out, but it also feels likely to me that we will adapt this dynamic to some new tasks, and the cycle will continue.
Or that's what I'm telling myself to pull out of the tailspin this may or may not have sent me on so I can get something done this Monday morning lol. Cheers!
Another comment mentioned something similar, about how the background work can be a stepping stone to the thing that people actually pay attention to. I think I'm skeptical of there being much of a linear path between the two - extras don't become movie stars, I don't imagine, and no matter how many blog posts I wrote for the think tank, I was never going to matter a fraction as much as Ben Bernanke did. But I think both of you bring up something I didn't think about, which is how being an extra can help a lot of people experiment and discover new things that ultimately lead them to becoming stars in some adjacent area. The beat writer becomes a periodic podcast guest; they get a job as a sideline reporter; they get a job in the broadcast booth. The farm system seems like a good analogy - the farm system matters, not because that many people want to watch minor league baseball, but because major leaguers have to come from somewhere.
I have no idea what AI does to that. On one hand, it seems like economic pressures would make a bunch of people burn down their farm system for an army of bots. Get rid of the junior investment bankers who make decks, or the beat writers, or the law office interns. On the other hand, those industries do need to get talent from somewhere. I have no idea what they replace those jobs with, but it seems like they'll have to replace them with *something*.
I was being interviewed for a job and I asked a similar question: "in your career, have you ever had an 'ah ha' moment where you found some great insight through data analysis?" The interviewer recalled a time when he found that retail store employees were selling fewer high-margin accessories when they were paired with a lower-tier cell phone plan. They were being instructed to do so, because if the monthly plan was cheaper, presumably the customer would have more $$ for accessories. However, the reverse was actually true, so retail store reps were instructed to lead with higher-tier monthly plans, and accessory sales rose significantly.
I like your analogy of "axes we fell in love with." I agree, we are foraging for basic things, and usually there is not that much deeply buried stuff that makes a massive impact. However, basic things in a business break or get off track all the time. You fix a process today; it breaks again or decays over time. In a large company, there are nearly infinite things that can break or decay, so you need tooling that gives a lot of people a lot of leverage to cover as many of these things as possible. It falls to the analytics industry to monitor these things so that, when they do break, it can notify the right people and get them back on track quickly.
So yes, we are just foraging through the sand for tiny bits of gold, but analytics tooling (axes) gives you special vision glasses so you can see the gold more easily and "hopefully" pick it up faster. Are you likely to find a 3lb chunk of gold - no - or maybe once every 5 years, so that you can submit it as a story to Benn's blog post :). But presumably, the bits and pieces of gold you accumulate - and the pace at which you accumulate them - does matter, and ideally covers the cost of the big beautiful axes and the people who wield them.
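(For what it's worth, the "watch the basics for breakage" part of this is pretty mechanizable. A minimal sketch of the idea -- the metric, window, and threshold here are all invented:)

```python
def looks_broken(history: list, latest: float, window: int = 14, tolerance: float = 3.0) -> bool:
    """Flag a metric whose latest value drifts far from its recent baseline."""
    recent = history[-window:]
    mean = sum(recent) / len(recent)
    std = (sum((x - mean) ** 2 for x in recent) / len(recent)) ** 0.5
    return std > 0 and abs(latest - mean) > tolerance * std

# e.g. daily signups: ordinary fluctuation vs. a quietly broken signup flow
signups = [510, 495, 502, 523, 488, 505, 512, 498, 507, 490, 515, 500, 493, 508]
print(looks_broken(signups, 496))  # False -- normal noise
print(looks_broken(signups, 310))  # True -- go notify someone
```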
I think I very much agree with this. Or like, this is exactly what we should be doing - looking for the bits and pieces of gold lying on the ground, and making sure the machine hasn't decayed or gotten off track. And it seems like you're more likely to find the 3lb chunks of gold doing that (by noticing something is broken, or finding some fairly basic thing like what the cell phone person found) than you are by trying to sleuth out some complex story.
But, it doesn't seem like we market the job as that? My impression of analytics jobs - and of "analytics" as an industry - is that they're frequently sold as being about finding these grand insights. Moneyball, Facebook, the famous Target pregnancy story - this seems to be what everyone is chasing, and every vendor is selling. Which makes some sense at least in the latter case, because it sounds very cool and space age, and probably does sell better than "we help you find some basic stuff that you should know but probably don't." But I do wonder if we marketed the whole industry that way, if things would be any different.
Maybe this is just the symptom of a mature industry. I recall when business friendly visual analytics was very new, you had a lot of these incredible stories. We transitioned from not even looking at data (because it was too hard) to actually looking at it.
Simply loading a boring Excel file into Tableau or some other tool and dragging and dropping produced all kinds of eye-popping findings, because we had never looked at data like that before. In its time, this was really incredible. But that was then and this is now. We've gotten so good at dashboards, data, and analysis techniques that it is hard to squeeze more juice out of the orange. I think we still sell that as an industry because 1 - we are selling, and 2 - it DID happen a lot, and it still happens, even if less often.
I could see that, where the difference between 0 and 1 is big, but 1 to 10 isn’t that much. Or, more precisely: going from 0 to 1 is hard, and a lot of companies don’t have data that can actually go up to 10. They have data that can tell them what they’re selling, sure, but they don’t have data that tells them all of their company’s hidden secrets.
> In other words, if AI replaces all our jobs, it might not be because what we do is easy, or because AI is so good that it can do as well as we do. It may be because it never mattered if we were particularly good at it in the first place.
This idea of "background jobs" is really insightful, kudos!
Makes me think -- if we outsource today's background jobs to AI, where will the human stars of tomorrow grow and hone their skills?
Or maybe the path to stardom has even historically been less about rising through the ranks, and more about mavericks appearing out of left field, gathering momentum, and ultimately being co-opted by a respected institution?
Dunno. Again though -- food for thought, thanks :)
> the only data podcast I’ve ever really wanted: One in which people tell stories about the interesting things they learned about how the world works while working with data
I just heard about this book the other day, it's still very much WIP and I only lightly perused the first chapter, but sounds like it might be up your alley? https://calmcode.io/data-science-fiction
I think my instinct - based on absolutely nothing, but, what is this blog if not that - is that most stars are exactly as you describe. Which isn't to say they show up right away, but I'd guess that most people who are at the top of most fields don't slowly rise through the ranks, starting as "extras," but instead are on a fast track from the beginning. They may not be in that industry right away (eg, they might go from being in one field to another, or whatever), but the very slow climb to stardom doesn't seem to happen that much. (Which, I guess I would categorize differently than the very slow climb to power. That does seem to happen, as there are probably a lot of people who slowly ascend to power by outlasting people and getting promotions.)
And this book does look up my alley; definitely gonna read this.
ooh I like this. Though it feels...incomplete? Or like a good starting point? "People don't just make mechanical decisions based on statistics and logical reasoning" is probably something that this whole industry doesn't talk about nearly enough. But what do you do about that? How *do* they make decisions?
Trying to solve that by reading "Thinking Fast and Slow" feels somewhat circular, like trying to turn the human problem into another type of science problem. Maybe this tilts too far, but to me, decisions are emotional, and the only way to truly affect them is by operating in that realm. And there's no science-ing your way out of that.
Great link - super comprehensive. I love the framing of "decision literacy" as a first principle of our own self-awareness. I believe we need to think about this not only as a means of making (or influencing - re the article) a decision, but also as a way of reflecting on and changing the parts of a decision that no longer work. This becomes extra important when we end up in a world where past data is not going to predict the future situation, which I think is the crux of your bigger set of questions. If we know we don't know, THEN what...
Situations where people are asked to evaluate their own performance, especially when there are attached incentives, are ripe for analysis from outside the system. Goodhart mixed with human nature.
very good stuff. there are folks out there who deliver on this, the PhD DS w/ the analyst's context, the savant data person w/ skills across all data domains, but they are the exception.
our value is not sexy. it's a boolean field that you can now use in looker. Colin Zima linked the venmo blog in your linkedin post (https://venmo.github.io/blog/2014/08/28/data-driven-design-at-venmo/). i lol'd at this, "So using Looker, we created a table with custom dimensions and measures to identify these payments with just one boolean flag."
they made it sound so simple, sometimes it is. but other times that boolean flag is 4 meetings, 2 incremental tables and 4 disgusting CTEs.
that's our value, not sexy analytics, boolean fields in looker.
Let me tell you about the time we built a data warehouse and dashboards to discover our red-yellow-green dot system could be reduced to just red and green.
Seriously though, the best analytics project I ever worked on was designing a data model to understand and improve the efficiency of a state court system.
Questions like 'with case types of X, are certain judges consistently more efficient than others?' and 'how fast on average does each judicial district clear cases?'
Lots more I can't remember, but it was fascinating stuff and no one had done anything like it at the time. The guy in the state court system spearheading it was a pioneer. Designing that data model was a blast.
ah, i love this. this is the sort of stuff that I think is fascinating, both as a data thing and as a “look at this interesting fact about how the world worlds” thing. (It also seems like this is how most “insight” happens - not by doing complicated analysis, but by looking at something that’s never been looked at before.)
100%
In my experience, a lot of insights happen like many scientific discoveries: by accident. Someone notices something possibly interesting (or asks an interesting new question, or looks at a problem from a new perspective), does a little bit of digging, and discovers a little (or big) treasure. As if they noticed some extra gold in a portion of the river, did some digging upstream, and found a vein.
Meanwhile, the complicated analyses manage to unlock another 0.005% of additional revenue -- or was that just a gust of wind? Or a new product release? Or they start digging all over the place (wherever there is a little more gold than average) in the hope of finding another vein. Trying to use science in order to repeat the successes of the past, instead of venturing into unknown territories in order to discover new treasures (since science doesn't know anything about the unknown, it cannot really help us in unknown territory anymore than a good sense of observation can). Or tip-toeing into new territories using A/B tests, as if that's how anyone ever discovered anything. When's the last time an "optimized" thumbnail on Netflix got you to watch something you didn't want to watch? It's like changing the cover of a book I guess... maybe it can increase sales by half a percent. Maybe. And then if we discover some kind of hidden formula, we can just make all of them look the same. But are we still increasing sales by that point? Insights quickly become trends, and by then any competitive edge is lost.
It seems like going from no data to some data is what provides the greatest insights we will ever get. Beyond that, it's mostly just small optimizations/diminishing returns. Because if there was something obvious hidden in the data, just waiting to be discovered, wouldn't we have discovered it when we analyzed the data for the first time? (if we did so thoroughly)
So maybe that's what we should focus on: what do we _not_ have data about? Or what data have we never really analyzed (or even looked at)? Or maybe new ways to look at the data we already have.
Another hard hit piece, thank you Ben,
I definitely share this sentiment. I am a data product manager. Every time when our execs have a hard time trying to figure out use cases for something we are building, my manager would say' we are builing a car but you guys only know how to ride horses, so you probably won't understand what we are building until you see it.' In my mind I would ask myself 'are we trying building a car here? Or are we just churning out yet another horse that will eventually be forgotten about just like all the previous cars that we promised to build.
Thanks! And I don't know that I'm totally convinced of this, but it seems like the data version of that is:
- we get asked by execs to build a horse, because that's what people are used to
- we say, "we'll give you a horse, but what you really want from us is a car, if we had more time, we could give you a car"
- they say, "ok, i need the horse now but sure, this car sounds nice later"
- but then we do not, in fact, know how to make a car.
Like, I don't know that there's no demand for the car; the problem to me seems like we don't know actually how to *supply* the car.
I love the “axes we fell in love with, rather than the gold we initially hoped to find”. I’m going to steal that. I’ve got an example for you, a client once asked me to figure out why their project margins would go from 13% to 3%. The data informed, but the people informed more. Turned out the customer changed ordered ebitda into the ground. Not complicated to figure out (could have done that with excel), but correctly identified, worth millions. So yeah, don’t be cynical, there can be gold in them hills.
Oh nice, thanks! And by all means, steal away.
And yeah, I guess I'm not so cynical as to say that there's nothing to be learned from data (and certainly don't think that there's not thing to be learned from asking questions and talking to people). But it does seem like a lot of that comes from finding fairly basic (though sometimes deeply buried) stuff, like "we accidentally broke the signup flow for Android users." It seems like fancy ourselves as detectives, who piece together all these clues, but we're more like foragers, where the valuable stuff is just lying on the ground, and the problem is that we need to look in the right place.
Which isn't bad? It's just that it's a different job than we typically frame it as.
I think this analysis misses the part that some of the extras on set _do_ eventually become the main actors. Most do not, but that is life. Most thought leadership is window dressing, but some is not, and those are the bits that make it through. This does not make _all_ thought leadership unimportant, just most of it. So it is with most things. This reflects a normal power law distribution.
I think that's true, or at least how it often plays out. My theory (which I didn't say here, but maybe should've) is that most data people do have some story about some "insight" that they found, but it happens once or twice in their careers. They have their big moment, find that thing that lands on the CEO's desk, and really does change things. It's just very uncommon. Which isn't bad, necessarily, though I'm not sure it's quite the narrative that we usually tell people when we talk about analytics.
While investors are okay with only 5% startups succeeding, saying that analysts find an insight only in 5% instances, will raise some eyebrows, I guess.
Yeah, it's not a great pitch. (Though I once heard someone say that analysis should be thought of like looking twice before you cross the street. It usually doesn't matter, but when it does, it's priceless.)
Do you use analytics as part of your workflows, Benn? If you do, then why the question? If you don't, then why not?
I mean, today, no? My current "work" is this blog, and I notice when some posts get more traffic than others, or more people unsubscribe. But I don't track it closely, and I've never attempted to do any sort of proper analysis on it.
I don't for two reasons. The first is soapbox-y, because I think chasing engagement on blogs and "content" tends to make everything formulaic and manufactured, like the Netflix aesthetic (https://www.vice.com/en/article/why-does-everything-on-netflix-look-like-that/). So I'd rather do this particular things based on instinct than numbers.
The second is because I don't think I'd find that much in it. Maybe there is some wisdom there, or maybe someone better at this could find it. But I'm not so sure I believe that there will be any "insight" in me trying to analyze performance metrics of different blog posts, or whatever.
(That said, I *do* think there could be something in using more qualitative analysis - and, ugh, AI - in this case. Which is increasingly becoming my view of the whole analytics thing, to be honest. That the numbers are more of a distraction.)
eg, https://x.com/hiattb/status/1862906461031960586
Right. So, how would your perspective have been different if you worked in, say, Finance or Healthcare? Would you trade on instinct? If you were a doctor, would you say- no need for blood tests/MRI/such, let's just go with my gut? I can fully see how applying analytics to content creation/distribution is mind-numbing, but there are still lots of areas that can't operate without the insights that analytics provides. Perhaps a stint in one of those might provide the insight you are looking for?
I mean, i’m not saying doctors should be out there doctoring enitrely on instinct. Or that there aren’t any cases where data isn’t useful. But that’s sort of my point - if you have to work in particular industries for data to be useful, that’s still pretty damning of the general promise of analytics, which was effectively “with data, find insight.” It wasn’t caveated with the industry you work in, or what you’re looking for. If anything, that promise was sold more to the tech industry, because they had “web scale big data.”
It wasn't sold to the tech industry as much as it was evolved by the tech industry. Bezos worked in trading before he started Amazon and was data-driven in all ways. He used product ratings to determine inventory levels. He used customers' location data to determine where the warehouses need to be. Being data driven about all those decisions enabled AMZN to scale/grow the way it did. Google and Meta applied insights to advertising, and turned click-bait into a formula. Netflix did the same for movies. Spotify for music/podcasts.
Data is useful no matter which industry you're in, but finding insight in certain industries/situations feels detrimental to the soul. Take dating, for example. If you know from data analytics and the person's psyche that saying the right things at the right time is going to get you laid, that is still insight, although acting on the insight makes one feel like a soulless sociopathic creep.
If you want to paint your heart out, but have insights on the right colour combinations that will make your paintings sell, what would you choose?
The problem of the tech space today that it is all about people's attentions, rather than delivering anything concrete to people. As soon as you work in any real space (supply chain, logistics, health, food production, manufacturing), insights start mattering again.
Ehh, I think that's where I disagree. I agree that a lot of the data hype came out of those companies, but think that's largely because the data they had was inherently more valuable to their businesses than most other companies. In some cases, sure, using data is icky, but more than that, I think it's using data is often just hard. (A longer version of this: https://benn.substack.com/p/day-of-reckoning)
I am not sure I follow. The data that these companies had was more valuable to their businesses than other companies **in their industries**? So, Google's data was more valuable than Yahoo's or Excite's? Was this because of their focus on data or their business model?
Yes, using data is hard- because one can't use the data unless one understands the operations it is meant to support. Most people in the "data industry" think that the work involves cleaning data until one can run XGBoost on it. :)
It seems like Ben believes "insight" means earth shattering revelation, as opposed to better understanding. I feel like I'm in bizarro world reading these latest posts.
I am still trying to figure out what Ben(n?) means, because it seems to me that he is seeing data as somehow separate from the operations/decisions they support. Having read his other posts (and the insights they carry), this seems very strange indeed.
My point is more about the marketing and brand of analytics, not what I think is actually valuable or what is or isn't an "insight."
Because regardless of how you define insight, the language the industry uses when it talks about the value of data is very much suggestive that there are earth shattering revelations out there, or at least big, unintuitive discoveries (eg, Snowflake's language of finding "previously unimagined insight"). Buried in every big dataset, there are these sorts of discoveries, if only we could unearth them.
But are there? Do those things exist? Do we actually find them? Maybe the other operational stuff is insight; maybe not. My question isn't about those things at all; it's about the earth shattering stuff, and how much of the marketing around those things is reality vs selling a pot of gold at the end of a rainbow.
Depends on who built the dataset. Some datasets contain zilch.
But, let’s say, that tomorrow, there is a dataset from NASA/SETI/whoever that shows that 400 light years away from us, there is a civilization that is at the same stage of technological development as us and that wants to connect with us (I am obviously ripping off a famous sci-fi plot here). Would you consider this earth-shattering because this insight proves that alien life exists and changes our view of the universe or will you go- meh, how does this change my actual day-to-day life at all?
Maybe this is just our nature (for tech people at least). Like how most software engineers dream of building the foundation for large scale sites & apps (https://highscalability.com/, http://highscalability.squarespace.com/), as if they were building the next big thing, but actually work on stuff most people don't really care all that much about. Same thing with business/startups: most of them fail (presumably because they fail to build something people care about).
Maybe we just don't know what we're doing, or where we're going. Or maybe there's a small percentage of us which is just much better than everyone else. That's what the data says: https://en.wikipedia.org/wiki/Normal_distribution.
In the end, the best we can do is to focus on our own individual part, and hope that others do so as well, and that it all works out in the end. I think that's what most data people are trying to do (even if it often looks like our work doesn't really matter).
One cool thing with data though (from a data engineering perspective, at least) is that diligent analysis & optimization can directly affect the bottom-line, so dealing with technical debt is much easier to explain to stakeholders as it has a direct impact on the business in terms of cost savings. Although in my experience, many data engineers struggle to get to a point where they can make these kinds of contributions.
In traditional software development, most stakeholders cannot understand the true cost of technical debt until no one wants to touch a codebase anymore (at least, not without wearing a hazmat suit). And no one wants to wear a hazmat suit to work every day (unless the pay is there, maybe).
I do suspect that a big part of it is that 1) analysis is hard, and 2) some people are just better at it. I've said this a few times before, but it feels like we take what's possible with data by looking at what superstars do in advantageous environments, say, "yes, we want that too," and think we can get it if only we build the right systems. But there is no factory to produce Lebron Jameses. You can manufacture good high school and college basketball players - reporting and mechanics and cost savings - but you can't manufacture the profound stuff.
And I very much agree on the cost saving thing (or "business process" optimizations in general). Every companies is sloppily run; if that was your goal - make everything move a little faster and smoother - I suspect the job is a lot easier.
That's what gets me up in the morning: "how can I make things a little better/faster/cheaper/smoother today?" At least it's tangible. No matter what, it's still a win.
But I'd be pretty depressed if it were: "how can I find some insights today?" (i.e. "will I manage to find a needle in the haystack?"). And otherwise, build more dashboards... that maybe someone will use... and maybe will make some kind of difference.
I feel like a lot of the time, our work is like trying to predict the waves of the ocean ("oh look, there's another one coming!", which will always continue endlessly), when it is really just detecting/predicting the tsunamis that matters -- and those don't happen very often, and are even harder to cause (otherwise, everyone would be causing them all the time, and then they would just be waves again).
Like trying to optimize a conversion funnel where every 0.1% improvement is a "victory", even though the conversion ratio naturally fluctuates by 1% to 2% pretty arbitrarily (and who knows what external forces are at work behind those fluctuations; maybe people are in a better mood on Thursdays? Or is it the moon? Or COVID?). And then trying to repeat those "victories" never works consistently. But we're doing "science"! The numbers prove it, and they are statistically significant.
We run a pretty large network of sites, and things that kinda work on one site don't work on the others. Or they work one week/month and not the next. Go figure.
It's like we're looking for a few more berries to gather instead of hunting something worthwhile. But then again, we can't only rely on hunting. And we can't be flying completely blind either. But how much blood can you possibly squeeze out of a stone?
Maybe our job is mostly just to land the plane safely.
I once heard someone say analytics is like looking twice before crossing the road - doesn't usually matter, but saves your life when it does.
I was into it when I first heard it, and it makes sense to me. Though now, I think I'd have the same question about that that I do about "insights" - does that actually happen? Are there that many examples of it? Of course, it doesn't that many to make it worthwhile - one, really - but it still feels a little theoretical.
Really interesting. The truth can hurt, and brother I think you’re speaking it. More system 2 thinking, less conditional formatting!
It’s indeed like finding gold within data. One cant just hope to get lucky. Hence i concentrated on building a tool which acts as a multi faceted insight (?) extraction toolkit … shovel/ extraction unit/cleaning equipment all rolled into one. Hold it close to you and explain what you want from the data in terms of what you’re interested in (inputs expressed in terms of the data elements like filters, partitions, pattern with header/details matching elements, input and output elements, pre-processing steps, transaction header/detail pattern matching to ask, post-processing steps and calc kpis of interest)… the tool is set loose on the beach or dataset and it finds what it’s been asked to do so in an exact and deterministic way.
Its used to find data (subsets or kpis therein) which are of interest and which would presumably also contain insights if you didn’t already know the information/content present in the newly discovered/transformed outputs.
If you can describe what you want in vague or precise terms (not in free form NLP form but against a pattern data structure composed of multiple bells and whistles) then it will find what you want ... and the way to iterate and explore more... is to modify the ask to reflect the new pattern one is interested in... in a sort of ask, tweak and repeat ask and so on.... sense.
I mean, that's sort of the holy grail of this whole data thing, where the machine just takes raw sand and outputs all the diamonds. And that sounds...too good to be true?
holy grail of data/analytics.... I dont know :-) but I have been guilty of doing LinkedIn searches of the phrase periodically to keep track of what qualifies as such from an analytics perspective.
The machine can take in raw sand info... and split into partitions by attribute1 (pin_code say) and operate on them to output either all jewels (overall count) or split count by jewel_type (ovreall, 5 rows for 5 types of jewels) or even split output by attribute8 (region field with cardinality = 15, say) and give output partition wise split counts across (15x5 = 75 maximal) output records. Input processing helps in sessionization or categorizing 1000 buckets of sand into 1 unit and count units per pin code (say).. Or with a different dataset, identify/detect a type of crime and aggregate input to single instance per pin_code (single record of sufficient weightage reflecting the number of times that type of crime has occurred.. A record to indicate existence, special attributes/fields of the record to indicate prevalence or weightage etc).
This is do-able and has been done in a Proof of Concept sense... We need to pivot the data to fit into header-details paradigm ... even including potential header fields as artificial "products". Everything under one column (or two for id/value). The pattern structure/idea is borrowed from the humble Market Basket Business (MB) Rule ... If (conditions/antecedents) then (condition/consequent) acting as a sort of proxy context for the current analysis. The rule structure allows for flexibility via user actions like drillup (reduction) /drilldown (extension ... p,q => d to p,q,r => d ), hypothesis assertion/validation, modification of hypothesis or new hypothesis (from p,q => r to p,q,r => d ) etc. Also if persisted as an explicit dimension of analysis... we can have multiple independent Rules co-existing each of which can be a single strand/thread of analysis.... all against the same dataset. Co-existence of multiple rules with potential for many/all being in play allows us to piece together many small steps into a substantial whole (for e.g user behavior funnel or recommendation scenario can be thought of as a composite set of related/competing patterns). There is a lot of literature on calculating very many KPIs from groups of products/itemsets involving conditional probabilities etc. One can transform single row datasets with only header fields into a Basket of items/fields scenario thereby making single row datasets a special edge case of head-detail or Trx - Lineitem datasets where there are no "detail type products".
I’d be lying if I said I entirely followed this, though I think my immediate reaction is that this sort of thing does seem to work on proofs of concept, and on kind of dummy datasets, but is very hard to apply to messy real world stuff. It’s essentially this problem: https://benn.substack.com/p/the-design-flaw-at-the-heart-of-every
One interesting (public) example: Caroline Hoxby’s work to uncover links between student socioeconomic status, College Board scores, and admissions. Paul Tough writes about her work in his book, The Inequality Machine.
Oh nice. And yeah, there are probably some good examples from academic research where people have found some very interesting things like this.
While basically everything you say here seriously resonates, I'd offer a silver lining. It seems to me that you can draw a very straight line from working on that newsletter to this substack - one that I can attest has a much shorter path to producing actual value. And I'm sure that is just one way in which that work seeded something that has born fruit on a much longer timeline.
I think about The Ringer - a place where podcasts are obviously the cashcow but a group of super talented writers are getting to sneak really excellent work through because it probably accounts for less than a percent of Spotify's operating expenses.
Nearly every venture ever feels to me like a lot of bluster to support the 5% of the leading actors. But the 95% are just in a slow-cooker where some amount of them will go on to become the 5% somewhere else years later. Farming the farm system out to AI bums me out, but it also feels likely to me that we will adapt this dynamic to some new tasks, and the cycle will continue.
Or that's what I'm telling myself to pull out of the tailspin this may or may not have sent me on so I can get something done this Monday morning lol. Cheers!
Another comment mentioned something similar, about how the background work can be a stepping stone to the thing that people actually pay attention to. I think I'm skeptical that of there being much of a linear path between the two - extras don't become movie stars, I don't imagine, and no matter how many blog posts I wrote for the think tank, I was never going to matter a fraction as much as Ben Bernanke did. But I think both of you bring up something I didn't think about, which is how being an extra can help a lot of people experiment and discover new things that ultimately lead them to becoming stars in some adjacent area. The beat writer becomes a periodic podcast guest; they get a job as a sideline reporter; they get a job in the broadcast booth. The farm system seems like a good analogy - the farm system matters, not because that many people want to watch minor league baseball, but because major leaguers have to come from somewhere.
I have no idea what AI does to that. On one hand, it seems like economic pressures would make a bunch of people burn down their farm system for an army of bots. Get rid of the junior investment bankers who make decks, or the beat writers, or the law office interns. On the other hand, those industries do need to get talent from somewhere. I have no idea what they replace those jobs with, but it seems like they'll have to replace them with *something.*
I was being interviewed for a job and I asked a similar question: "In your career, have you ever had an 'aha' moment where you found some great insight through data analysis?" The interviewer recalled a time when he found that retail store employees were selling fewer high-margin accessories when they were paired with a lower-tier cell phone plan. They were being instructed to do so because, if the monthly plan was lower, presumably the customer would have more $$ for accessories. However, the reverse was actually true, so retail store reps were instructed to lead with higher-tier monthly plans, and accessory sales rose significantly.
I like your analogy of "axes we fell in love with." I agree: we are foraging for basic things, and usually there is not that much deeply buried stuff that makes a massive impact. However, basic things in a business break or get off track all the time. You fix a process today; it breaks again or decays over time. In a large company, there are nearly infinite things that can break or decay, so you need tooling that gives a lot of people a lot of leverage to cover as many of these things as possible. It falls to the analytics industry to monitor these things so that when they do break, the tooling can notify the right people and get them back on track quickly (sketched below).
So yes, we are just foraging through the sand for tiny bits of gold, but analytics tooling (the axes) gives you special vision glasses so you can see the gold more easily and "hopefully" pick it up faster. Are you likely to find a 3lb chunk of gold? No - or maybe once every 5 years, so you can submit it as a story to Benn's blog :). But presumably the bits and pieces of gold you accumulate - and the pace at which you accumulate them - do matter, and ideally cover the cost of the big beautiful axes and the people who wield them.
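To make the monitoring point concrete, here's a toy sketch of that kind of tooling (the metric, numbers, and threshold are all invented):

```python
import statistics

def looks_broken(history, today, z_cutoff=3.0):
    """Flag a daily metric that drifts well outside its recent range."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_cutoff

# e.g. two weeks of daily orders, then a sudden drop
orders = [520, 498, 505, 530, 515, 490, 510, 525, 500, 495, 512, 507, 518, 503]
if looks_broken(orders, today=210):
    print("alert: daily orders look broken -- notify the owner")
```

Real tooling is mostly this check repeated across thousands of metrics, plus the routing that gets the alert to the person who can actually fix the thing.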
I think I very much agree with this. Or like, this is exactly what we should be doing - looking for the bits and pieces of gold lying on the ground, and making sure the machine hasn't decayed or gotten off track. And it seems like you're more likely to find the 3lb chunks of gold doing that (by noticing something is broken, or finding some fairly basic thing like what the cell phone person found) than you are by trying to sleuth out some complex story.
But, it doesn't seem like we market the job as that? My impression of analytics jobs - and of "analytics" as an industry - is that they're frequently sold as being about finding these grand insights. Moneyball, Facebook, the famous Target pregnancy story - this seems to be what everyone is chasing, and every vendor is selling. Which makes some sense at least in the latter case, because it sounds very cool and space age, and probably does sell better than "we help you find some basic stuff that you should know but probably don't." But I do wonder if we marketed the whole industry that way, if things would be any different.
Maybe this is just the symptom of a mature industry. I recall when business-friendly visual analytics was very new, you had a lot of these incredible stories. We transitioned from not even looking at data (because it was too hard) to actually looking at it.
Simply loading a boring Excel file into Tableau or some other tool and dragging and dropping produced all kinds of eye-popping findings, because we had never looked at data like that before. In its time, this was really incredible. But that was then and this is now. We've gotten so good at dashboards, data, and analysis techniques that it is hard to squeeze more juice out of the orange. I think we still sell that as an industry because 1) we are selling, and 2) it DID happen a lot, and it still happens, even if less often.
I could see that, where the difference between 0 and 1 is big, but 1 to 10 isn't that much. Or, more precisely, going from 0 to 1 is hard, and a lot of companies don't have data that can actually go up to 10. They have data that can tell them what they're selling, sure, but they don't have data that tells them all of their company's hidden secrets.
> In other words, if AI replaces all our jobs, it might not be because what we do is easy, or because AI is so good that it can do as well as we do. It may be because it never mattered if we were particularly good at it in the first place.
This idea of "background jobs" is really insightful, kudos!
Makes me think -- if we outsource today's background jobs to AI, where will the human stars of tomorrow grow and hone their skills?
Or maybe the path to stardom has even historically been less about rising through the ranks, and more about mavericks appearing out of left field, gathering momentum, and ultimately being co-opted by a respected institution?
Dunno. Again though -- food for thought, thanks :)
> the only data podcast I’ve ever really wanted: One in which people tell stories about the interesting things they learned about how the world works while working with data
I just heard about this book the other day, it's still very much WIP and I only lightly perused the first chapter, but sounds like it might be up your alley? https://calmcode.io/data-science-fiction
I think my instinct - based on absolutely nothing, but, what is this blog if not that - is that most stars are exactly as you describe. Which isn't to say they show up right away, but I'd guess that most people who are at the top of most fields don't slowly rise through the ranks, starting as "extras," but instead are on a fast track from the beginning. They may not be in that industry right away (eg, they might go from being in one field to another, or whatever), but the very slow climb to stardom doesn't seem to happen that much. (Which, I guess I would categorize differently than the very slow climb to power. That does seem to happen, as there are probably a lot of people who slowly ascend to power by outlasting people and getting promotions.)
And this book does look up my alley; definitely gonna read this.
Not exactly an answer to your core question but a related thought - https://www.linkedin.com/posts/wernicke_if-we-want-value-from-data-its-not-enough-activity-7268557502106017792-oKBG?utm_source=share&utm_medium=member_desktop
ooh I like this. Though it feels...incomplete? Or like a good starting point? "People don't just make mechanical decisions based on statistics and logical reasoning" is probably something that this whole industry doesn't talk about nearly enough. But what do you do about that? How *do* they make decisions?
Trying to solve that by reading "Thinking Fast and Slow" feels somewhat circular, like trying to turn the human problem into another type of science problem. Maybe this tilts too far, but to me, decisions are emotional, and the only way to truly affect them is by operating in that realm. And there's no science-ing your way out of that.
(Somewhat related, this is an old piece, but maybe a good chapter two to that linkedin post: https://review.firstround.com/master-the-art-of-influence-persuasion-as-a-skill-and-habit/)
Great link - super comprehensive. I love the framing of "decision literacy" as a first principle of our own self-awareness. I believe we need to think about this not only as a means of making (or influencing, per the article) a decision, but also as a way of reflecting on and changing the parts of a decision that no longer work. This becomes extra important when we end up in a world where past data is not going to predict the future situation, which I think is the crux of your bigger set of questions. If we know we don't know, THEN what...
One of your best. Thank you!
Situations where people are asked to evaluate their own performance, especially when there are attached incentives, are ripe for analysis from outside the system. Goodhart mixed with human nature.
very good stuff. there are folks out there who deliver on this - the PhD DS w/ the analyst's context, the savant data person w/ skills across all data domains - but they are the exception.
most data folk just deliver the ability to cleanly count shit (credit to John Cutler: https://cutlefish.substack.com/p/tbm-252-start-by-counting-things). and that's valuable! we should own that.
our value is not sexy. it's a boolean field that you can now use in looker. Colin Zima linked the venmo blog in your linkedin post (https://venmo.github.io/blog/2014/08/28/data-driven-design-at-venmo/). i lol'd at this, "So using Looker, we created a table with custom dimensions and measures to identify these payments with just one boolean flag."
they made it sound so simple, and sometimes it is. but other times that boolean flag is 4 meetings, 2 incremental tables, and 4 disgusting CTEs.
that's our value, not sexy analytics, boolean fields in looker.
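for what it's worth, that kind of flag might look something like this in pandas (the schema and the rule here are hypothetical, not from the actual venmo post):

```python
import pandas as pd

# Hypothetical payments table; the real one lived behind Looker.
payments = pd.DataFrame({
    "payment_id": [1, 2, 3],
    "note": ["pizza night", "rent", "thanks for dinner"],
})

# The flag itself is one line. The 4 meetings, 2 incremental tables,
# and 4 disgusting CTEs are what it takes to know this is the right
# definition.
payments["is_pizza_payment"] = payments["note"].str.contains("pizza", case=False)
```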
Knowing what to measure, and how to do it, is incredibly valuable in business. 80% of companies can't even get there.