16 Comments

As with Data Contracts and many aspects of the Data Mesh, this model is already in place in many upmarket, larger teams. It's called "IT Chargebacks" which turns the IT/Data team into a mini business within the broader business.

https://journal.uptimeinstitute.com/it-chargeback-drives-efficiency/

It's used for headcount charging (if marketing wants 10 hours of an analyst's time, they pay the Head of Data for 10 billable hours), computing resources (if marketing drives 80% of the Snowflake bill, they cover that, at least in part), licenses (marketing has 50 Looker users, so marketing pays IT for 50 Looker seats), and as a baseline against external product and services vendor RFPs.
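The compute-resources case above is just proportional allocation. A minimal sketch, assuming usage can be metered per department (the function name, departments, and dollar figures here are hypothetical, not from any real chargeback system):

```python
# Illustrative chargeback sketch: split a shared bill by usage fraction.
# All names and numbers are made up for the example.

def chargeback(total_bill, usage_by_dept):
    """Allocate total_bill to each department in proportion to its usage."""
    total_usage = sum(usage_by_dept.values())
    return {
        dept: round(total_bill * usage / total_usage, 2)
        for dept, usage in usage_by_dept.items()
    }

# Hypothetical monthly Snowflake bill, split by credits consumed per team
bill = chargeback(10_000, {"marketing": 800, "finance": 150, "ops": 50})
# marketing pays 8000.0, finance 1500.0, ops 500.0
```

Real chargeback models layer policy on top of this (shared-cost pools, showback-only periods, negotiated rates), but the core accounting is this simple.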

This is very common in 1000+ headcount companies that run on EBITDA and net operating profit incentives over growth-first incentives.

For further info on how to crawl, walk, and run toward this, and for a chargeback and showback 101, you can read the following:

1) https://www.finops.org/framework/capabilities/chargeback/

2) https://www.softwareone.com/en-us/blog/all-articles/2022/06/20/how-to-establish-a-finops-culture-of-accountability

3) https://www.nicus.com/blog/showback-vs-chargeback-which-should-drive-your-bill-of-it/


Chargebacks, at least as they were structured at IBM when I was there (i.e., "Blue Dollars"), had the unfortunate consequence of limiting innovation: potential new products that might otherwise have taken a dependency on other offerings avoided doing so, and there was no room for experimentation unless an immediate derivative value could be determined. Granted, these were software products and not data(sets?), and I think you're referring to charging internally for data professionals' time and outputs, not necessarily for datasets directly. Still, it presents a scenario where only work of immediate utility gets performed, and as a result no incubating projects that could move the needle long term ever get funded. Maybe a good thing, maybe not. Dunno.

Oct 31, 2022 · Liked by Benn Stancil

In the case where I used quotas, I was migrating a bunch of data scientists and processes from on-premises SAS to open source Python on AWS. Previously, productivity was measured by model accuracy alone. I switched it to the number of models in production, with a minimum (a quota). The SAS programmers had to migrate to keep up.

Oct 29, 2022 · Liked by Benn Stancil

Benn, I would love to do this at my company. I'm sharing an older article on an extreme version of this model applied across an entire company. I would love to see the stats of their internal app:

https://www.bloomberg.com/news/articles/2019-06-20/charging-employees-for-conference-rooms-helps-disco-boost-profit

Oct 28, 2022 · Liked by Benn Stancil

Nearly every data team I've seen win had to fundamentally change the economics of data in their organization. I've done it by rewarding disparate data teams with talent and headcount, and by setting high quotas. Craig Martell, a Silicon Valley veteran of LinkedIn, Dropbox, and Lyft, is doing it now at the largest organization in the world... the U.S. Department of Defense. See a good recent interview with Craig here: https://www.youtube.com/watch?v=bfmZ8Iv0uEQ


I'll share a story from a team I was previously on. During quarterly planning, while other teams were wrangling over their next quarter's commitments, our team would drag everyone into a room together and have product/eng/UX sit at tables, one table for each subteam that would be working together for the quarter.

Then everyone got a stack of post-its to write down project/work items, plus 13 physical wooden people figures representing the 13 weeks of a quarter. The teams had to negotiate and horse-trade with those physical tokens over how much research, design, and eng time could go to each project. It had some shortcomings, but it was a surprisingly good system.
