Estimating Invisible Work

Why are we so bad at estimating and what can we do about it?

Issue 40

Hi friends 👋 Welcome to February. Can you believe 2025 is already 9% complete? Well, not to brag, but my goals for this year and New Year's resolutions are already 100% abandoned.

I want to start with a big thank you to everyone who replied to the last newsletter with encouraging words about the new "Untitled Interviews Project" I'm working on. I really appreciate your support and I'm excited to share more about it soon. I even purchased a new domain name for it, so I guess I'm fully committed now.

Also, some of you reached out wanting to learn more about the note-taking and organization system I briefly mentioned last month. I'll be writing something about that too, so keep an eye on a future issue of the newsletter for the link.

This week we'll talk about estimating software projects, why we are so bad at it, and how we can more accurately predict the future without a crystal ball.

Let's dive in.

MAXI'S RAMBLINGS

Estimating Invisible Work

Photo by Matheus Bertelli

You know that feeling when your boss asks you for an estimate on a project, you tell them it'll take two weeks, and then exactly two weeks later, you deliver the project fully up-to-spec, free of bugs, and with a complete test suite?

No? That really never happened to you?

Yeah, I know. It never happened to me either. When I say something is going to take two weeks, what I usually mean is that in two weeks I'll be about 80% done… and all I'll have left is the other 80%.

There's a good chance this will resonate with your own experience because, well, we're all equally bad at estimating projects—especially the large, messy ones that ask us to do things we've never done before. Of course, we do get our estimates right sometimes, but unless we somehow develop the ability to predict the future, estimating is not a skill we can consistently rely on.

It's like I always say: there are only two kinds of people, those who are bad at estimating and those who are just lucky.

OK, I never actually said that. But still, it kinda rings true, doesn't it? It's like there's a layer of invisible work that always goes unaccounted for, and unless we get lucky with our estimates, we're pretty much guaranteed to miss the mark.

So today I want to spend a bit of time talking about what this invisible work looks like, and what, if anything, we can do to account for it in our estimates.

But first, let's answer a more fundamental question—why are we so bad at estimating software projects?

Why are we so bad at estimating?

Invisible work and our inability to predict the future surely get in the way of coming up with accurate estimates, but there's one more thing that explains why we're all so bad at it—our overconfidence.

In a 1994 study, a group of researchers asked students to estimate how long it would take them to finish their senior theses—the average estimate was about 34 days. They also asked the students for a "best case" estimate if everything went as smoothly as possible and a "worst case" estimate if everything went as poorly as possible, to which the students estimated 27 and 48 days, respectively, on average.

What was the actual average completion time of the theses? 55 days, with the majority of students taking longer than their "worst case" estimate.

Scientists call this phenomenon the planning fallacy, and it's just one of our many overconfidence biases. The planning fallacy causes us to be overly optimistic with our estimates even when there's no reward for it, and it explains something we instinctively know: all of us, from psychology students to software engineers, are absolutely terrible at estimating complex projects.

I'm sure this isn't shocking news to anybody, but there's one aspect of this study (and the many others that have researched this fallacy over the years) that is worth calling out: it's not just that we're bad at estimating tasks and projects—it's that we're very good at underestimating them. Or in other words, we have a strong tendency to overestimate our ability to complete projects on time.

To compensate for this overconfidence, a common piece of advice is to add some "padding" to our estimates. Some might suggest doubling, tripling, or even quadrupling whatever time we initially think it will take. And, to be fair, these inflated estimates tend to be more accurate than our original ones (especially for larger projects), but they can suffer from an equally damaging and much sneakier problem—Parkinson's Law.

According to Parkinson's Law, "work expands so as to fill the time available for its completion." If we give ourselves four weeks to complete something we could have easily done in three, chances are it will take us the full four weeks.

Sure, a super-inflated estimate would technically be "accurate" if we manage to deliver the project on time, but that doesn't mean we used that time as efficiently as possible. Plus, there's a limit to how much padding we can add to a project or task without raising some eyebrows. If every time our boss asks us to update the color of a button, our reply is "You got it, boss—it'll take me about 10 weeks," our estimates might be accurate, but we're going to have a hard time keeping that job for much longer.

Parkinson's Law — Source: DTU

This is why estimating projects is so hard. It's like we have two opposing forces working against us—one encouraging us to give optimistic estimates and another punishing us with inefficiency when we try to compensate for our optimism. No matter what estimate we give, it feels like it's always going to be wrong.

But not all is lost. Our estimates might never be accurate 100% of the time, but there are things that can help us come up with more realistic ones. It all starts by taking a deeper look at that invisible work we talked about earlier.

Making the invisible visible

Invisible work isn't really invisible—it's just that we tend to ignore or underestimate it when planning our projects.

Adding some padding to our estimates works well because it recognizes that this invisible work exists, that unexpected things will eventually come up, and that tasks will always take longer than we initially thought. But how do we know how much padding to add? How do we know when to double and when to triple our estimates? And how can we avoid adding too much padding and suffering the consequences of Parkinson's Law?

To answer these questions, we need to get more specific about what this invisible work looks like. And my favorite way to do that is by using Dave Stewart's framework explaining why The work is never just "the work".

In his great 2022 essay, Dave helps us make invisible work more tangible by breaking down the total amount of work that goes into a software project into eight categories:

  • 1ļøāƒ£ The work around the work ā€” meetings, reviews, project management
  • 2ļøāƒ£ The work to get the work ā€” research, experimentation, scoping, quoting, pitching
  • 3ļøāƒ£ The work before the work ā€” configuration, setup, services, infrastructure
  • 4ļøāƒ£ The work ā€” the actual build, design, and testing of the product
  • 5ļøāƒ£ The work between the work ā€” iteration, debugging, refactoring, maintenance, tooling
  • 6ļøāƒ£ The work beyond the work ā€” changes, omissions, nice-to-haves, scope creep
  • 7ļøāƒ£ The work outside the work ā€” surprises, contingency, disasters, mission creep
  • 8ļøāƒ£ The work after the work ā€” hosting, deployment, security, support, updates, fixes
The work is never just "the work" — Illustration by Dave Stewart

Not every project has all of these different types of work, of course. But the larger and more complex the project, the more different types of "work" it'll require.

Even small projects will likely involve at least three or four different types of work. Say we think a project will take us about 40 hours of coding; giving an estimate of "a week" assumes that we'll be able to code for 8 hours a day without any meetings, code reviews, interruptions, context switching, or endless emails and Slack messages to reply to. Unless you work completely by yourself, that's a very unlikely scenario.
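To put rough numbers on that, here's a minimal back-of-the-envelope sketch. All of the hour figures are made-up assumptions for illustration, not data from any study:

```python
# Illustrative sketch: how many calendar days "40 hours of coding"
# really takes once invisible work eats into each 8-hour day.
# Every number below is an assumption chosen for the example.

CODING_HOURS_ESTIMATED = 40  # the "it's a week of work" estimate

hours_per_day = 8.0
meetings_and_reviews = 1.5        # the work around the work
slack_and_email = 1.0             # interruptions, messages
context_switching_loss = 0.5      # refocusing after each interruption

# Hours of actual focused coding left in a day
focus_hours_per_day = hours_per_day - (
    meetings_and_reviews + slack_and_email + context_switching_loss
)

calendar_days = CODING_HOURS_ESTIMATED / focus_hours_per_day
print(f"{focus_hours_per_day} focus hours/day -> {calendar_days:.0f} working days")
# With these numbers: 8 working days instead of 5, a 60% overrun
# before a single surprise has even shown up.
```

Tweak the assumed overhead to match your own calendar; the point is that the gap between "hours of coding" and "calendar time" exists before anything goes wrong.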

What I like most about Dave's framework is that we can use his breakdown as a checklist anytime we sit down to estimate a project. We might have a clear idea of how long "the work" (coding, design, testing) will take, but what about the rest?

What research do we need to do before tackling this project? What new infrastructure is required? Do we have any dependencies on other teams? What's our plan for when bugs start to come in?

This exercise clearly isn't an exact science, but asking ourselves these questions will shed some light on areas we might have previously ignored, resulting in an estimate we'll feel much more comfortable with.
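If you like turning checklists into tooling, the eight categories translate naturally into a tiny script. The category names are Dave's; the hour figures are placeholders I invented for the example:

```python
# A minimal estimation checklist based on Dave Stewart's eight
# categories of work. Replace the placeholder hours with your own.

estimate_hours = {
    "the work around the work":   8,  # meetings, reviews, project management
    "the work to get the work":   0,  # research, scoping (none needed here)
    "the work before the work":   6,  # configuration, setup, infrastructure
    "the work":                  40,  # the actual build, design, and testing
    "the work between the work": 12,  # iteration, debugging, refactoring
    "the work beyond the work":   6,  # changes, nice-to-haves, scope creep
    "the work outside the work":  4,  # contingency for surprises
    "the work after the work":    4,  # deployment, support, fixes
}

total = sum(estimate_hours.values())
visible = estimate_hours["the work"]
print(f"Visible work: {visible}h, total estimate: {total}h "
      f"({total / visible:.1f}x the naive estimate)")
```

With these made-up numbers the total lands at exactly 2x the "visible" coding estimate, which is roughly the padding people reach for by gut feel, except now every hour of it is accounted for instead of hidden inside a multiplier.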

So next time you have to do the very hard work of trying to predict the future and come up with a project estimate, I encourage you to give Dave's checklist a try. It'll help you come up with a more accurate estimate and maybe, just maybe, deliver your next project on time.

ARCHITECTURE SNACKS

Links Worth Checking Out

We're Good At Writing Software by Kent Beck & Beth Andres-Beck
  1. Kent Beck & Beth Andres-Beck gave a keynote presentation at the Øredev Conference called We're Good At Writing Software. I don't think they would have named it that way if they saw some of the software I wrote, but I appreciate their optimism. In their talk, Kent and Beth explain their Desert and Forest analogy and share tons of great advice for getting out of the desert and reaching the bug-free, always-productive forest.
  2. Jimmy Miller wrote about the benefits of Discovery Coding, the practice of writing some of the code first instead of starting with a complete design.
  3. You might be surprised to learn that the modern way of writing JavaScript servers involves using something that has been around for almost a decade—the Request/Response API.
  4. Brenley Dueck wrote an article about server functions, why they matter, and how they differ from server actions and server components. I appreciate his article, but I honestly don't get why we need an explanation because all of these names aren't confusing at all.
  5. Great overview by Sandro Maglione on all the different methods we have to manage state in React applications before reaching for third-party libraries.
  6. Laurie Voss, the VP of Developer Relations at LlamaIndex, wrote about everything he learned about writing AI apps so far. Laurie's conclusion is that "LLMs are awesome and limited," which very much resonates with my own experience working with them.
  7. Kelly Sutton shares his experience a year after moving away from React and replacing it with the Ruby on Rails library StimulusJS.
  8. Kind of random, but interesting if you're a Pokémon nerd like me: last year, a bunch of early prototypes of Pokémon trading cards started showing up in public auctions and sold for tons of money. Turns out that many of these prototypes weren't printed in the '90s as originally thought—but in 2024.

That's all for today, friends! Thank you for making it all the way to the end. If you enjoyed the newsletter, it would mean the world to me if you'd share it with your friends and coworkers. (And if you didn't enjoy it, why not share it with an enemy?)

Did someone forward this to you? First of all, tell them how awesome they are, and then consider subscribing to the newsletter to get the next issue right in your inbox.

I read and reply to all of your comments. Feel free to reach out on Twitter, LinkedIn, or reply to this email directly with any feedback or questions.

Have a great week 👋

– Maxi

Is frontend architecture your cup of tea? 🍵

Level up your skills with Frontend at Scale—your friendly software design and architecture newsletter. Subscribe to get the next issue right in your inbox.

    “Maxi's newsletter is a treasure trove of wisdom for software engineers with a growth mindset. Packed with valuable insights on software design and curated resources that keep you at the forefront of the industry, it's simply a must-read. Elevate your game, one issue at a time.”

    Addy Osmani
    Engineering Lead, Google Chrome