We were hired to do the grunt work
If solving strategic problems is the more valuable thing to do, why aren't we doing it?
If you regularly talk to people who work at technology companies, you will discover something surprising: Everyone is doing the wrong job. Most engineers are fixing bugs, migrating services between clouds, or upgrading some frontend framework from version 7.1.12 to 9.3.[1] Most data analysts are answering the same dumb[2] questions that they answered last month, living every week at the intersection of Groundhog Day and LMGTFY. Most marketers are rewriting blog posts as LinkedIn posts and reconciling lists of leads. Most product managers are writing tedious specs and following up; most lawyers are converting everything to .docx and tracking tedious changes; most finance directors are conditionally formatting tedious Excel workbooks. Most managers are making decks; they are coordinating, aligning, reviewing; they are in back-to-back meetings that should’ve been emails.
Have a drink with a person who works in technology, and they will eventually tell you how they feel about their job: They are stuck in a white-collar salt mine. The majority of their day is spent slogging through grunt work that is, if not beneath them, beneath their potential. They were hired to do higher-impact work; more strategic work; more valuable work.
Product managers will say they should be dreaming up groundbreaking features, not sending status updates. Engineers will say they should be building those features, not bespoke integrations for a big customer. Analysts will say they want to be looking for strategic insights, not making yet another dashboard. Marketers will say they should be designing the next great brand; lawyers should be engineering the next great tax shenanigans; finance directors should be engineering the next great Bitcoin shenanigans.[3] Managers will wonder if they should be managers at all.
But we are stuck doing these things—these inconsequential, minor tasks—because something is in the way. Our organization is too dysfunctional. Or we haven’t hired the right people yet. The tools we use are too brittle and need to be upgraded; the right infrastructure hasn’t been built; our tech debt hasn’t been repaid. Or our corporate overlords simply do not appreciate us for what we’re good at, and they keep asking us to do other stuff instead.
And so, if you talk to hopeful people in technology, they will say that they are waiting: Waiting for the reorg that will change their responsibilities. Waiting for the migration to be over. Waiting for the self-serve platform to be done; waiting for the open roles to be filled. Waiting to be saved.
You could have two theories about all of this:
It is generally correct. We are all using only twenty percent of our professional capacity, and there is some organizational configuration—a team structure, a technology suite, an operational process—that would unlock the remaining eighty percent. And if our job is currently doing a bunch of undesirable grunt work, our meta job is to figure out how to do less of it, and more of something else.
It is generally incorrect. If the headline work was, in fact, the valuable work, why aren’t we already doing it? The bugs don’t have to be fixed; the dumb questions don’t have to be answered; the spreadsheets don’t have to be formatted. It is possible that all of us are constantly making irrational choices, and we are all asking each other to do distracting administrivia—but it is also possible that the mundane tasks are the important tasks. There is only so much strategy to discuss, and only so many banner projects to be done, and most of what most companies need is tedium and organizational caulk.
The argument for the first point is that it sounds very nice, and that we want it to be true. The argument for the second point is that it is empirically true. The rapture never comes. Reorgs come and go, and new problems land in our lap. The migration to 9.3 rolls into the migration to 11.2.1.[4] People keep asking us questions instead of using the dashboards we built for them. The new hire needs to be onboarded; the new hire is starting to help; the new hire just quit to take a “more strategic” role somewhere else. Everything changes, and the indignities remain.
Anyway, things that sound nice are easier to sell, and this week, Microsoft promised to be our next savior:
Every organization has an innovation agenda. Whether it’s building AI-native applications, creating more engaging customer experiences, or unlocking new efficiencies—ambition is never the problem. But for many teams, it’s their technical debt that stands in the way. Legacy systems, outdated codebases, and fragmented infrastructure slow progress and drain resources. In fact, over 37% of application portfolios require modernization today—and that number will remain high over the next three years. Developers want the freedom to innovate, but migration and modernization is often slow, complex, and hard to start. These delays translate into lost opportunities and stalled transformation.
Generative AI changes the game.
This is a common refrain these days: AI will finally be the thing to deliver us from evil.[5] It will automatically fix our bugs, or upgrade our databases; it will create tweets from blog posts; it will answer the dumb questions; it will follow up. It will track changes and format spreadsheets. It might be our manager, so that we don’t have to be. And that, of course, will give us the “freedom to innovate.”
But does having the former—robots to do rote work—necessarily imply the latter—that our jobs will be better? I mean. From Dan Shipper at Every:
You won’t be judged on how much you know, but instead on how well you can allocate and manage the resources to get work done. …
Even junior employees will be expected to use AI, which will force them into the role of manager—model manager. Instead of managing humans, they’ll be allocating work to AI models and making sure the work gets done well. They’ll need many of the same skills as human managers of today do (though in slightly modified form).
And more recently, from Julie Zhuo:
The old boundary lines are blurring. Just as we’ll see fewer “pure managers,” we’ll also see fewer “pure ICs.” Instead, more people will live in the messy middle: sometimes executing, sometimes designing processes, sometimes coordinating.
And just this week, Deena Mousa published a new study in Works in Progress investigating how AI was infiltrating radiology, where “there are over 700 FDA-cleared radiology models, which account for more than three-quarters of all medical AI devices:”
A radiologist can’t just “hand off” a scan to AI. To cover a typical day, they’d need to pick from dozens of different models, run each one separately, and stitch the answers together.
Even platforms that bundle multiple models still spit out a list of disconnected yes/no answers.
Do these things sound fun? Does coordinating robots and comparing AI-generated diagnostic reports sound like strategic work? Or is this just the next generation of an email job?
If you talk to technologists about how AI might change how we work, they will eventually tell you about the Jevons paradox:[6] As technology makes some resources more efficient, demand for those resources goes up. More efficient coal-burning engines make us burn more coal, because we use engines for more things. As computers get smaller and cheaper, we first put one in every home and then in every pocket. And, the extrapolation goes, if AI helps radiologists—or engineers, or analysts, or any other job—work faster, a lot more demand shows up.
In any economic discussion about AI, this story is our blunt instrument of choice: Demand increases; life gets better; QED. But there’s also a bizarro, Waluigi version of the Jevons paradox that can also be true: No matter how many things technology makes convenient, the supply of inconvenient things seems to remain constant. If managers don’t have to write letters or send faxes, they become bogged down with more email. If accountants don’t have to do math by hand, they create dozens of mismatched spreadsheets. If engineers don’t have to program computers with punchcards, they create an avalanche of bugs that need to be fixed.
Likewise, when AI answers annoying analytical questions, our jobs will be to review its work. When AI resolves engineering tickets, we will spend more time shuffling them around a kanban board. When AI cranks out 200,000,000 personalized Taco Bell ads, we have that many more email lists to reconcile. When AI diagnoses medical images, we spend our time stitching it all together.
What will people who work at technology companies tell you, if you talk to them in ten years? I have no idea. But I suspect they will tell you that they are doing too much tedious grunt work, and they are hopeful that they will soon be saved.
[1] The latest release is 11.2.6.
[2] “Dumb” is a verbatim quote from the last conversation I had about this.
[4] 11.2.6 requires Python 3.16, but we’re on 3.11, and that upgrade is blocked.
[5] Though really, it will probably lead us to temptation (e.g., Crispy Chicken Nuggets with Diablo Sauce and a Mountain Dew Baja Midnight):
In Taco Bell US, 41 percent of our orders are digital, fueled by loyalty offers and unique digital activations like Mike’s Hot Honey Tuesday Drop and Feed the Beat Record Club box. Taco Bell’s unique activations helped grow active loyalty consumers nearly 45 percent year-over-year. Across the organization, AI is supercharging our marketing. Over 200 million AI-generated communications have been sent this year, delivering up to five times incrementality compared to traditional approaches.
[6] And Jevon’s paradox states that, as a concept becomes more widely known, we will all inevitably become more confused by it, because we will assume it was created by someone named Jevon, and is therefore called Jevon’s paradox, or maybe even Jevons’ paradox, when it is, in fact, apparently, the Jevons paradox.

I think you're right, because the fundamental challenge of knowledge work is coordination.
As organizations scale larger, coordination gets harder. The impact of AI on coordination is not likely to be that high: unless it displaces humans in the productive effort, the limiting factor is still the human context & comprehension window.
I do think that there are companies and scenarios better (or worse) at solving this problem, but they rely on humans who are effective at reducing the noise.
I can tell you (for example) that many business analyst roles at Capital One are relatively high impact.
I'm torn, like...I run a relatively modest nonprofit. There are no dashboards because I can just see what everyone is up to in the day to day. There are very few meetings because we're good at async chat-first interaction.
To me, most dashboards are failures of an org - specifically, leadership wants to make decisions based on whether Number Go Up or Number Go Down, except it's not that simple, so they ask for more dashboards (to figure out if Enough Numbers Go Up or Too Many Go Down).
It's an endless quest to surface more information that wouldn't be necessary if they had deeper domain knowledge. And then shit rolls downhill.
That said, I don't understand complaining about fixing bugs, but I'm the sort for whom bug = 'Let's fix this first' because they make me itch.
And like... I think most people would say they don't feel their work is valuable. Or that of their colleagues. But they have to do it anyway, because monopolies abound, and monopolies /can/ stay inefficient.
What used to safeguard against this, I would argue, is competition - you had to get more efficient constantly or the competitors would steal your customers. As we've allowed endless consolidation, that /need/ to do better has gone away, and become instead a shell game of fake productivity in an effort to get promoted or avoid a layoff/firing.
A classic case of everyone chasing the measurement instead of the underlying goal behind the measurement.