It’s too many options
Burn the bike shed and sign some ten-year contracts. Plus, remembering that <LEAD.FAVORITE_COFFEE_BRAND> cup in Game of Thrones.

I sometimes wonder if companies that buy software would be better off signing ten-year contracts rather than one-year contracts.
Typically, it’s buyers who want to sign short-term deals and sellers who want long-term ones. Go to almost any pricing page for a modern SaaS company, and you'll see two prices:1 A big, bold one, which you'll pay if you're billed annually, and a small, higher one, which you'll pay if you're billed monthly.2 And if you talk to any software vendor about signing an annual contract, they'll probably offer a third option for a multi-year deal, which will have the lowest price of the three.
The logic behind this sort of discounting is straightforward. For SaaS vendors, cash commitments are king. The entire business model is built around customers staying customers for a while and, to a lesser degree, up-front payments. Long-term deals offer both of these things.3 Moreover, in some sense, annual contracts are less work to maintain than monthly ones. If a company knows that a customer will be a customer for the next year, they don’t have to renew that customer’s business every month. For all these reasons, getting $10,000 today is better than probably getting $1,000 a month for the next 12 months, and SaaS companies often offer people that deal.
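To make that trade concrete, here’s a back-of-the-envelope sketch of the deal from the vendor’s side. The $10,000 and $1,000 figures come from the paragraph above; the 3 percent monthly churn rate is an assumption I made up for illustration.

```python
# Back-of-the-envelope: an up-front annual payment vs. a month-to-month
# deal the customer might cancel. Prices come from the example above;
# the churn rate is an assumed, illustrative number.
ANNUAL_UPFRONT = 10_000  # discounted annual contract, paid today
MONTHLY_PRICE = 1_000    # month-to-month price
MONTHLY_CHURN = 0.03     # assumed chance the customer cancels in any month

def expected_monthly_revenue(months: int = 12) -> float:
    """Expected revenue from a monthly deal, given the churn assumption."""
    survival, total = 1.0, 0.0
    for _ in range(months):
        total += survival * MONTHLY_PRICE  # paid only if still a customer
        survival *= 1 - MONTHLY_CHURN
    return total

print(f"Annual, paid up front: ${ANNUAL_UPFRONT:,.0f}")
print(f"Monthly, expected:     ${expected_monthly_revenue():,.0f}")  # ~$10,200
```

At those numbers, the two deals are worth roughly the same in expectation; the annual one just converts a probability into cash today.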
They also offer the deal because buyers prefer short-term contracts. Buyers don’t want commitment; they want flexibility. What if they discover some hidden flaw in the product they bought? What if something about their business changes, and they no longer want to use it? What if a competitor launches a shiny new thing that they really like? If they’re on a month-to-month contract, they can simply walk away. But if they’re on a long-term contract, they can’t. They can’t credibly demand that the vendor fix whatever thing they don’t like, “or else.” They can stop using the product, but they can’t stop paying for it. They’re locked in.
And so, people say, when possible, avoid it. Avoid long-term contracts. Month-to-month is better than an annual commitment; annual is better than multi-year; literally anything is better than ten years.
—
But is it? One of the common complaints that data people have about the state of the data industry today is that it’s too focused on tooling. We speculate on the implications of Snowflake’s launches and Databricks’ acquisitions; we waste time trying to define the endless flavors of business intelligence; we talk about Modern Data Stack this and AI that. As the memes say, this is all a fun distraction from what we actually need to do: Fix people and process problems. Teams don’t need to optimize the handles on their hammers or the nuances of their nails; they need to become better carpenters. The magic we’re looking for is in the work we’re avoiding.
I'm not totally sold on this argument in the aggregate; we won’t all collectively become better craftspeople unless someone invents better hammers and nails. For individuals, though, tooling probably is a much lower-order bit than skill, and talking about tooling is a form of technological bikeshedding. It's a science project; a hobby; an indulgence; an escape from the hard and often unpleasant work of dealing with people and corporate politics.
So, to summarize, there are two pieces of learned wisdom about technology and skills:
Don't get locked into one vendor or tool, because you might want to use a different one later.
Don’t worry too much about tools, because they aren’t the real solutions to your problems.
But do you see the contradiction? The first piece of advice says to maintain optionality—but the second says that the options you’re protecting are choices about what color to paint your bike shed. The first point implicitly encourages us to always be optimizing our tooling; the second says not to get distracted by optimizing our tooling.
However, if the second point is the more important one—which I think it is—there’s an easy way to be good at it: Do the opposite of the first point. Choose lock-in. Buy the BI tool; burn the boats behind you; set your tools in stone. Force yourself to worry about people and process problems by taking away your other temptations.
If you’re in the market for some software product and you whittle your options down to a handful of reasonable finalists, all of those tools are probably good enough. Databricks, Snowflake, BigQuery, Synapse—does it really matter which you choose? Will “business outcomes” be materially different because the data team chooses a BI tool that its users give 4.60 stars versus 4.58? Or 4.58 versus 4.56?4 How much are those 0.02 stars worth? Are they worth anything at all? Because the ability to move between those products, and upgrade from a 4.56 to a 4.60, is the optionality we’re paying for, in both foregone discounts and in the time we lose to distracting debates and product evaluations.
It’s worse than that though, because optionality has an even bigger cost: Stability. If you invest in a single tool, you can probably wring more value out of it than you can by constantly trying to find something that’s marginally better. But people are unlikely to make that investment if they’re always window shopping for an upgrade.
Imagine a luxury apartment that’s leased by two people who are only committed to it until a better place comes on the market, and only committed to each other until they move. Now imagine a more pedestrian condo that a married couple owns. Which space is better? Imagine a college basketball team of one-and-done freshmen who will be NBA lottery picks. Now imagine one full of future insurance salesmen who’ve been playing together for several years. Which team is better?
In both cases, the stability of the latter option is at least arguably better than the talent of the former. And these are extreme cases, pitting teams of 1- and 2-star recruits against teams of 4- and 5-star recruits. When we buy software, we often sacrifice that stability so that we can upgrade from a 4.56-star recruit to a 4.60-star recruit.
There are business examples too. In his 14 key principles for management, W. Edwards Deming recommends that companies “move toward a single supplier for any one item, on a long-term relationship of loyalty and trust.” This is important, Deming says, because quality requires consistency. Multiple suppliers create variability and inconsistency; variable and inconsistent businesses struggle to improve. The same principle likely applies to multiple software “suppliers.” Migrating between tools—or even just questioning whether a tool is right, and always evaluating new ones—is a lot more disruptive than making the most of something that might be missing a few of the latest features.
The two main arguments against these sorts of long-term deals are, one, products can stagnate, and two, vendors can exploit long-term customers because they don’t need to earn their business anymore.
Though the first risk is real, it happens more slowly than most people imagine. MicroStrategy, a 35-year-old BI tool, has a rating of 4.4 stars.5 Barring some extreme event, like a business collapse or a broken acquisition, most products don’t die overnight.
The second risk actually seems backwards. Like Deming suggests, loyal long-term customers have more influence over their suppliers than fickle ones. Google may not need to win Snapchat’s business every year anymore, but Google really needs to win it every five years. As anyone who’s ever sold software before will tell you, it’s not the aggressive negotiators that keep you awake at night; it’s long-term customers who are gradually getting unhappy. Those are the accounts you can’t afford to lose, and the ones you bend over backwards to serve.
—
One piece of software that companies usually commit to for a while is their database. They may not sign long-term contracts, but databases often become so integrated with other systems that they're impractical to change. To replace BigQuery with Databricks, you have to move all of your data; you have to rewrite thousands of SQL queries written in a BigQuery accent into queries written in a Databricks accent; you have to update all the things that talk to Google’s APIs to now talk to Databricks’ APIs. It’s a pain, and people really try to avoid doing it.
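To give a flavor of the accent problem, here’s a small sketch using sqlglot, an open-source SQL transpiler (using it here is my choice of illustration, not something either vendor recommends). A real migration is thousands of these, plus all the queries no tool can translate cleanly.

```python
# pip install sqlglot -- transpile a query from BigQuery's SQL dialect
# into Databricks' dialect. One query down, thousands to go.
import sqlglot

bigquery_sql = """
SELECT user_id, TIMESTAMP_TRUNC(created_at, DAY) AS day
FROM `project.dataset.events`
"""

for statement in sqlglot.transpile(bigquery_sql, read="bigquery", write="databricks"):
    print(statement)
```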
That might be changing? Over the last couple of weeks, Databricks and Snowflake announced new commitments to Iceberg,6 which could make it easier to swap one database for another:7
If you eliminate data lock-in and allow workloads to “travel” between platforms based on cost / performance characteristics, you create a more efficient market for workloads. This allows competition to naturally push prices down over time.
Architecturally, seems great! The parts are more modular. Economically, also great! Lower prices; cheaper migrations. Wonderful.8
Practically, I’m not so sure. More optionality means more suppliers, more wandering eyes, and less stability. Optimizing compute engines and shuffling between databases could be a revolutionary innovation that shifts paradigms by creating seamless integrations across scalable B2B SaaS products—or it could just become another bike shed for us to think about.
A few months ago, I said that the biggest failing of the modern data stack was its ambition to be modular. The theory sounds nice—a bunch of specialized Lego pieces, assembled to meet the exact needs of every customer, working in tightly integrated harmony around standard protocols and languages—but the reality is it’s too many options:
The economics never worked and the experience was dysfunctional. Worse still, the belief in modularity helped inflate the bubble, by convincing people—and VCs—that there was space in the market for every specialized wedge.
Maybe we get this one right, and mix-and-match compute engines don’t suffer from these same problems. But the modern data stack shows that optionality creates plenty of problems too: Indecision, second-guessing, and lots of lower-order bits to fiddle with. Sometimes, we might be better off just picking one thing and sticking with it.
The ads are fake
This used to be one of my favorite interview questions:
Imagine that you work for Starbucks. A few weeks into the final season of Game of Thrones, an HBO executive comes to you with an offer.
“Look,” they say, “the show has gone to hell.9 We’ve decided to throw all artistic integrity out the window and to milk it for as much money as we can. So we’re here to offer you a deal. If you pay us enough money, we’ll leave a Starbucks cup in a shot in one of the final episodes. Ten million people will see it. It will go viral. It will become a meme. Everyone will be talking about Starbucks. Imagine the promotional possibilities. Starkbucks. The winter menu is coming. I don’t know, we obviously can’t write anymore, you’ll think of something good. Anyway. How much would you be willing to pay us to do this?”
What do you offer HBO?
I have no idea what the right answer is (the point of the question was to see how people think through it, not to calculate an actual number), but it doesn’t seem like it’d be very much? Product placement ads aren’t very targeted, so the average impression for something like this would likely be a good bit less valuable than the average impression of a search or social media ad.
But what if it could be targeted? What if HBO said they’d only show the Starbucks cup to people who recently went to Starbucks? What if they could offer the same deal to Dunkin’ Donuts, and promised to only show a Dunkin’ cup to people who live within ten miles of a store?
We may be getting there. From the New York Times:
On Monday, TikTok announced a new set of tools that will allow brands to create ads using avatars generated by artificial intelligence that look like real people. There will be two types of avatars, TikTok said in a release. Brands can choose from an array of stock avatars “created from video footage of real paid actors that are licensed for commercial use,” or they can opt for a customizable avatar that could be designed to look like a specific creator.
Right now, TikTok is only using generative AI to help advertisers create ads before they serve them. But eventually, surely the same technology will get used to create ads when they serve them. If we can animate a fake avatar to read a transcript, we can animate the same avatar to read a customized transcript that speaks directly to whoever is seeing it. Talk about a product they recently searched for. Speak in their native language. Adjust the avatar’s age and appearance to make them appear as trustworthy as possible to the specific person they’re talking to.
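As a toy illustration of what a customized transcript might look like (every name here is hypothetical; this is not a real TikTok API), the ad stops being a fixed script and becomes a function of the viewer:

```python
# A hypothetical sketch: instead of one script per campaign, one script
# per impression. Viewer and personalized_script are invented names.
from dataclasses import dataclass

@dataclass
class Viewer:
    name: str
    recent_search: str
    language: str  # in the full fantasy, the avatar would speak this too

def personalized_script(viewer: Viewer) -> str:
    """One ad template, filled in per viewer at serve time."""
    return (
        f"Hey {viewer.name}! Still thinking about that "
        f"{viewer.recent_search}? It's 20% off today."
    )

print(personalized_script(Viewer("Sam", "espresso machine", "en")))
```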
And if we do this, why would we stop with content that’s explicitly an ad? Watch a James Bond movie on Netflix, and transform his car into whatever make the people on that account might buy. It’s no longer an Aston Martin; it’s a Toyota Sienna. Do the same for his watch; his suit; his sunglasses.
Each of us lives in our own version of the internet. First, it was because the internet gave us more options: Rather than everyone getting news from a couple of broadcast networks and major newspapers, we could choose the sites and blogs that we liked. Then, it was the ads: Our individual browsing histories followed us around, and we all saw ads that were personalized to our presumed preferences.10 And more recently, it’s become our feeds: TikTok, Twitter, and Netflix show us posts and videos that their algorithms think we’ll like.
Still, though we might each see a different kaleidoscope of content, each individual piece is consistent. Our online mosaics are manufactured, but other people share some of the same tiles.
But maybe not for much longer? After all, why show people a real tile, when a modified one might get them to buy an extra cup of coffee?
This is the software equivalent of a gas station marquee having a cash price and a credit card price. It’s unclear how much Google would charge you if you tried to pay in cash.
That said, most multi-year deals are paid for one year at a time. If you sign a two-year contract in July of 2024, you’re committing to pay for the first year of your contract that month and for the second year of your contract in July of 2025.
Ok, but how on earth is G2 calculating the average scores on these pages? They show you the individual scores; you can do the math yourself. And their math is…different.
On one hand, companies game their G2 scores by harassing their best customers into writing reviews, so this is a pretty suspect source. On the other hand, if there’s any signal in these scores at all, MicroStrategy is still decaying remarkably slowly. It’s a month older than Taylor Swift!
Libraries have three parts:
Physical books, on shelves.
A scheme for organizing the books, like the Dewey Decimal System.
Librarians, who use the system to find the books on the shelves.
A database is roughly the same thing. There is physical data, stored in files on a drive. The database organizes those files in some manner that makes them easy to retrieve. And when you ask the database for data, a computer uses its organizational scheme to look up the files you asked for. Historically, different databases have used different systems for organizing their shelves—e.g., Redshift might arrange its books alphabetically by author, while Snowflake might order them by length. This meant that compute engines—the librarians—couldn’t share the same physical library of books, because those librarians needed the books to be organized according to whatever version of the Dewey Decimal System they knew how to use.
Iceberg is an open-source Dewey Decimal System. If everyone agrees to organize their books that way, then different database compute engines could all share one set of books. That would make it much easier for people to choose the right librarians for the right jobs, because they wouldn’t have to rearrange their entire library first.
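In code, the promise looks something like this sketch, which assumes pyiceberg (the reference Python client for Iceberg) and a configured catalog named "default"; the table name is made up:

```python
# pip install pyiceberg -- a small client acting as one of many possible
# "librarians." The catalog name and table name are assumptions.
from pyiceberg.catalog import load_catalog

catalog = load_catalog("default")               # the shared card catalog
table = catalog.load_table("analytics.events")  # one shared set of books

# Spark, Trino, Snowflake, or this little Python process can all scan the
# same underlying files, because they agree on how the shelves are organized.
print(table.scan().to_arrow())
```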
By database, I mean “database compute engine,” because I don’t know what a database is anymore.
Cynically, none of this will actually matter that much, because Databricks and Snowflake aren’t collaborating on a new standard; they’re each trying to become the center of the new standard. Though that may create some overlap today, with everyone making a big show about how much they love Iceberg, it seems inevitable that their real goals are to add their own features on top of Iceberg, and then try to convince everyone that their version is the version everyone should standardize around.
People are still signing this! Ten people a day! Five years later!
For example, if you clicked on any of those pricing pages in the first section, buckle up. Nothing gets a retargeting campaign fired up quite like a pricing page visit.
The crazy part is this can apply WITHIN products… try doubling down on Power BI/Fabric… the paradox of choice will get you, given the vast array of ways of doing things within that “product” alone… not to mention cloud means you can’t hold back the tide by staying on an old version… you just come in to work one day and the product has changed.
The proliferation of tools and patterns has screwed a lot of people career-wise as well… Imagine you’re 40, just got out of a 15-year gig at a conservative company that did everything on-prem in SQL Server, SSAS, and maybe this newfangled Power BI thing… then you’re interviewed by a 26-year-old head of analytics with three years of experience who has the hots for their flavour of the MDS, half of which you’ve never heard of…
My favorite Deming quote: “Ninety-five percent of changes made by management today make no improvement.”
Does this apply to BI and other tooling? Yeah, probably so.
Rather than actually being tied into long-term contracts, I think trying to make decisions for ten years might be a good step. I don’t want to be locked in for ten years, but I should ask: does this decision still work ten years from now?