35 Comments

I should note that Xmrit is now open source, and would encourage everyone to go steal the code ;-) https://xmrit.com/opensource/

Hmmm, I feel like there's gotta be a way to make some XmR charts in Mode.

Control charts (one type of which is X-MR) seriously need to become part of the analytics lexicon. Also, Pareto charts, which are a common tool in root cause analysis: finding the 20 percent of things with 80 percent of the impact.

Here are a couple of toolboxes in this space:

- https://en.wikipedia.org/wiki/Seven_basic_tools_of_quality

- https://www.amazon.com/Lean-Six-Sigma-Pocket-Toolbook/dp/0071441190
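For anyone who hasn't worked with XmR charts, the math behind them is simple enough to sketch: the natural process limits are the mean of the individual values plus or minus 2.66 times the average moving range (2.66 is the standard XmR scaling constant). A minimal Python sketch, with a function name of my own choosing:

```python
def xmr_limits(values):
    """Return (center, lower, upper) natural process limits for an XmR chart."""
    n = len(values)
    center = sum(values) / n
    # Moving ranges: absolute differences between consecutive points.
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR constant for converting the average
    # moving range into three-sigma-equivalent limits.
    lower = center - 2.66 * avg_mr
    upper = center + 2.66 * avg_mr
    return center, lower, upper
```

Points falling outside the upper or lower limit are the ones worth treating as signals rather than routine noise.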

Yeah, I'm not a control chart absolutist, but I think these sorts of interpretively suggestive charts (or other methods) are probably way underused. Surely something like this exists though? Where instead of choosing a line chart or a bar chart or whatever, you choose the method of comparison (or something)?

Yes please! I would LOVE that, and would love to link to your default chart type!

Unfortunately I have no say anymore in the chart types. But I feel like I should be able to at least make one.

cool! here's another open source tool for control charts: https://github.com/carlosqsilva/pyspc

this makes a lot of sense when technical people choose the tools and are more concerned about screwing up inputs than outputs - because the screwed up outputs are "not their fault."

to think we do all this just to pay for our timeshares down in Destin... ;-)

There's definitely a dynamic there with data people (and consultants, etc) where they see their job as telling people the facts and all that, and the results of that are someone else's problem. Which, maybe? I have mixed feelings about that.

I think technical executives get paid to care about the inputs and outputs and should very much be concerned about the outputs just as much if not more than the inputs.

I think for executives, yes, but actually, only sorta kinda? Like, in theory, execs should definitely be output focused. But in practice, a lot of execs care more about politics and optics than anyone. A lot of them care about outputs as much so they can say that if something goes wrong, it's not their fault, as they do for things to actually go well.

ya - you are right. Only in theory. My actual experience with some rare exceptions is the CYA approach as detailed above. :-(

As usual, there are too many great topics to address them all! So, I'll focus on two:

First, the balance between tool flexibility and structured workflows. This challenge extends far beyond BI. I champion flexible tools or products with strong defaults (which is tough to do right!). Apple's OS X is a decent example – usable by anyone, yet adaptable for (IT) professionals.

Secondly, I want to highlight the semantic layer. They aim to balance flexibility with standardized definitions. While not a perfect solution to the post's problem, I believe it is a positive step…

Yeah, on the first point, the really tough part there is most customers want flexibility. You've got to do a really good job of making something with good defaults to convince people that that's good enough. It's probably not a coincidence that Apple is the example there.

I agree this is where progress needs to be made, and I am not sure if a more interpretive "BI" will get us there. The "actionable insights" promise of BI always hinged on who generated the actions. The who determined the what. I think a lot of people don't act correctly not because they misinterpret, but because they don't know of a good set of possible next steps. To me that is the low-hanging fruit that our AI agent overlords will fix!

So I think I agree with that, but I think I disagree with what "actionable" means. A lot of BI or BI-adjacent tools will release stuff to make dashboards "actionable," and they tend to be kind of operational actions, like "push this button to send an email to the customer whose package is delayed" or whatever. Which is fine, I guess, but I don't think that's what "actionable" data really is. To me, it's much more what the XmR chart stuff is about - it's about learning how your business works, and understanding the dynamics of the system a little better. We don't go data -> individual action; we go data -> theory of the world -> lots of actions (that's what this post was about https://benn.substack.com/i/143303690/theory-over-action).

That's why I think these sorts of interpretive BI tools might be useful. They aren't about trying to identify the immediate one-off thing we need to do from this dashboard that has a spike; they're about helping you develop theories. And if you have those theories (eg, most of our growth comes from holiday promotions), it's easy to come up with actions.

I think it is more than a little interesting that not only does ~"no one" use the power of technology directly for making the world a better place for the less well off, but even more so that ~"no one" *even discusses* the phenomenon.

Like seriously, there are soooo many weak points in the empire's armour, why does no one attack? Fear, or maybe everyone is in on the game, despite constantly proclaiming (what appears to be sincerely) the contrary.

Or...maybe everyone is confused - maybe the Hindus were right, maybe humans do live in Maya. That would be funny!

On the first part of this, I have some very long winded theories (and mostly written blog post drafts) about this. My guess is that it's mostly status - so many people in technology are ambitious, and I think most of that ambition is about social status and hierarchy. And Silicon Valley is a place that elevates people who build stuff, make money, etc, and not people who "make the world a better place." So I don't think it's really about everyone being greedy and wanting money; I think it's about people wanting to do something important - but we've collectively defined important in a way that's not about doing "good" things.

I suspect there have been plenty of think pieces about why technology hasn't been used as a force for good, or for "making the world a better place". If I had to guess, it has partly if not entirely to do with late-stage capitalism. Silicon Valley and its comparable global antecedents tend to operate within a world that you won't see anywhere else. I agree with the statement that how "important" is collectively defined drives the ways in which technology is developed. It's something that is *usually* written into a company's pitch deck, and it rarely aligns with "good things". I'm pretty cynical, but generally "monetisation" or "acquisition" come before "leading to impactful change".

Agreed, though I think the (somewhat unintentional, probably) misdirection goes a bit deeper, at least in tech. In finance, for instance, I think people generally understand they're doing this to make money. In tech, it feels more like people believe that technological progress - and the money and success that comes with it, and getting people to use a thing - is good. I don't think it's always so cynical that people use the whole "make the world a better place" line as a con; I think they see "making the world a better place" and "making something that lots of people use and pay money for" as the same thing. Which, it might be sometimes? Occasionally? But often not?

> I agree with the statement that how "important" is collectively defined drives the ways in which technology is developed.

Or more accurately, how *it is not* collectively defined, or even really *defined* in the first place, strictly speaking.

Well, sure, it's never explicitly defined. But if you work in tech long enough (or for about six months, really), everyone learns what's treated as important.

Right, but that is a function of "important" not being collectively defined in our society, and the "the dog that didn't bark" effect (people cannot notice that which isn't present, or notice that they do not).

I guess, though I don't think anything like that can ever really be defined. It's just social norms and whatnot.

You're surely right to a non-trivial degree, but I bet the reality of it is something more like this:

https://en.wikipedia.org/wiki/The_Adventure_of_Silver_Blaze

Gregory (Scotland Yard detective): Is there any other point to which you would wish to draw my attention?

Holmes: To the curious incident of the dog in the night-time.

Gregory: The dog did nothing in the night-time.

Holmes: That was the curious incident.

Very insightful - always enjoyed reading your thinking around this topic. What AI ideas are you incubating? 😉

Thanks! And I have no good ideas, sadly, other than the little things that I want for myself, like an AI that automatically organizes my giant list of blog post ideas and tells me how to make sense of them in some way (which might also just be an 8-line script that uses the OpenAI API, which barely counts as an idea).

Benn, this was very well written. I enjoyed it. Will read it again, just so I get all the information you are trying to convey, but can relate to all parts except the Florida stuff :)

You mean it's not your lifelong dream to own a timeshare in Destin? https://genius.com/31477710
