13 Comments

Just read your other post on Minerva. I agree with your preference for SQL over an API.

Regarding YAML - I am not sure that YAML is expressive enough to compute metrics. What we did on the flaskdata.io metrics layer was to wrap SQL inside functions with a standard calling convention in order to compute the metrics from the time-series data. I won't pretend that it's ideal - the debug process is not pleasant - but once it works, the job is done.

To get some more flexibility, we use YAML to define how to extract the actual metrics into the BI, borrowing a page out of Transform's playbook.
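Roughly, that calling convention can be sketched in a few lines of Python (the table, column, and metric names here are invented for illustration - the actual flaskdata.io implementation surely differs):

```python
import sqlite3

# Hypothetical standard calling convention: every metric function takes a
# connection plus a time window and returns (timestamp, value) rows.
def metric_daily_enrollments(conn, start, end):
    sql = """
        SELECT day, COUNT(*) AS enrolled
        FROM enrollments
        WHERE day BETWEEN ? AND ?
        GROUP BY day
        ORDER BY day
    """
    return conn.execute(sql, (start, end)).fetchall()

# Demo on an in-memory database with toy time-series data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE enrollments (day TEXT, subject_id INTEGER)")
conn.executemany(
    "INSERT INTO enrollments VALUES (?, ?)",
    [("2022-01-01", 1), ("2022-01-01", 2), ("2022-01-02", 3)],
)

rows = metric_daily_enrollments(conn, "2022-01-01", "2022-01-02")
print(rows)  # [('2022-01-01', 2), ('2022-01-02', 1)]
```

Because every metric function has the same signature, a YAML file only needs to name the function and its window to tell the extraction job what to pull into the BI.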

For our next product, I'm seriously considering OpenTelemetry.

author

Yeah, agreed on YAML, at least on its own. It's like defining metrics via a drop-down formula builder to me. It can be useful for some things, but you need a higher ceiling somewhere.


Given that computation of metrics is usually not part of the data access layer, it seems to me that metrics is a layer unto itself. This lets us hide from the BI the computational complexity of deriving metrics from time-series data. It also enables a design where we can compute and expose live metrics from inside the operational process (as is commonly done with operating systems). I believe this makes sense in particular when you have a slow-moving process (like clinical trial data).
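As a rough sketch of "metrics as a layer unto itself" (all names here are illustrative, not an actual implementation): the BI asks for a metric by name and never sees how it is derived from the underlying time series.

```python
# A minimal metrics registry: metric logic lives behind a name-based
# interface, so callers (e.g. a BI tool) never touch the computation.
class MetricsLayer:
    def __init__(self):
        self._metrics = {}

    def register(self, name):
        def wrap(fn):
            self._metrics[name] = fn
            return fn
        return wrap

    def compute(self, name, *args):
        # The BI-facing surface: metric name in, value out.
        return self._metrics[name](*args)

layer = MetricsLayer()

@layer.register("dropout_rate")
def dropout_rate(events):
    # Derived from raw time-series events; the caller never sees this logic.
    dropped = sum(1 for e in events if e == "dropout")
    return dropped / len(events)

print(layer.compute("dropout_rate", ["enroll", "dropout", "enroll", "enroll"]))  # 0.25
```

The same registry interface could just as well be served from inside the operational process, exposing live values the way an operating system exposes its own counters.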

Aug 27, 2022 · Liked by Benn Stancil

Great article as always, Benn. I've been a long-time reader of your posts from Pakistan and I have learned a lot from your writing. I'm only four years into a career in analytics.

Just a thought: building on your conception of a) a consumption-only and b) an all-consumption-encompassing BI tool, it should also contain the "metrics layer" that you talk about in some other articles. I feel the metrics can (and arguably should) be defined "inside" the BI layer. This would make it very easy to work with consistent, single-source metric definitions for ad hoc analysis as well as self-serve AND periodic reporting.

I do feel that a separate metrics layer would make the ecosystem a bit too fragmented and would add what is, in my opinion, an unneeded middle layer.

Thanks for your time!

author

Thanks! There's definitely a case to be made for defining metrics in a BI tool, which is how it's historically been done. And with as many tools as there are in the space, it's probably good to be careful about where we add more fragmentation. But, my take is that given the value of having centralized logic (both semantics and metrics), this is an area where fragmentation is worth it.

May 31, 2022 · Liked by Benn Stancil

Excellent article. I was starting to feel crazy for thinking that the term "headless BI" makes no sense, thanks for making me feel somewhat normal. I've been pondering what to call BI with no semantic layer. Legless, I like it. I ran through metadata-less, meta-less, the head, before finally landing on semantic-free. Anyhow, I'm with you 100% on the move toward lighter BI. First, we'll get rid of the semantic layer. Next, the query engine.

Oct 3, 2021 · Liked by Benn Stancil

Great breakdown of the topic! To enable "A better, more universal BI tool would combine both ad hoc and self-serve workflows, making it easy to hop between different modes of consumption..." and your point about making deeper analytical work more integrated with self-serve dashboards: what kinds of requirements are you picturing here? E.g., should BI tools be written in and expose the underlying Python/JavaScript/SQL/R (making it easier to compare and integrate methods used by those doing deeper ad hoc analysis), or what do you picture?

author

Yeah, that's close to what I think works. Right now, to build a self-serve dashboard, you have to configure a tool for that specific purpose (build a Tableau extract, write LookML, etc). And most ad hoc tools are kind of dead ends for non-analysts. They provide some basic interactivity, but nothing as rich as a traditional BI tool.

Ideally, you could have both. Specifically, I think this means that analysts should be able to write SQL queries and Python code, and then put a rich visualization and exploration tool on top of those results. Rather than writing LookML to build a self-serve tool, analysts should just be able to write a query. Instead of creating a new Tableau extract, they should be able to put those sorts of visualizations directly on top of their ad hoc queries and Python results.
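As a toy illustration of that flow - a plain SQL query with a generic exploration layer sitting directly on top, and no extract or LookML-style modeling step in between (the "chart" here is just text, and all names are invented):

```python
import sqlite3

# A stand-in for a rich exploration tool: render whatever rows come back
# from an ad hoc query, treating column 1 as the label and column 2 as the
# measure. A real BI layer would offer pivots, filters, and charts instead.
def explore(rows):
    return "\n".join(f"{label:<8} {'#' * int(value)}" for label, value in rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signups (region TEXT, n INTEGER)")
conn.executemany("INSERT INTO signups VALUES (?, ?)",
                 [("east", 3), ("west", 5)])

# The analyst writes only this query; the exploration layer does the rest.
rows = conn.execute("SELECT region, SUM(n) FROM signups GROUP BY region").fetchall()
print(explore(rows))
```

Swapping the query swaps the "dashboard" - nothing downstream has to be reconfigured, which is the point of putting the exploration tool directly on query results.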

Sep 29, 2021 · Liked by Benn Stancil

Very well articulated, Benn. Decision making as a science and a process is still in its infancy, and part of the issue is that our decision models and tools are derivatives of enterprise org models, versus putting the decision-making process at the center and creating the org around it.

Sep 28, 2021 · Liked by Benn Stancil

I have been out of running an enterprise BI service for a while, and this has been really helpful for orienting to the current model. Thank you!

Sep 20, 2021 · Liked by Benn Stancil

Just uh...wow. So much good stuff in here. Lots to talk about in the next issue of the AER :P

author

Thanks! And excited to read it.
