Why are software estimates so often underestimates?
A comment from michaelochurch on Hacker News:
Let's say that you have 20 tasks. Each involves rolling a 10-sided die. If it's a 1 through 8, wait that number of minutes. If it's a 9, wait 15 minutes. If it's a 10, wait an hour.
How long is this string of tasks going to take? Summing the median time expectancy, we get a sum of 110 minutes, because the median time for a task is 5.5 minutes. The actual expected time to completion is 222 minutes, with 5+ hours not being unreasonable if one rolls a lot of 9's and 10's.
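A quick Monte Carlo simulation (my own sketch, not part of the original comment) makes the gap concrete: the per-task median is 5.5 minutes, so summing medians gives 110 minutes, but the per-task mean is (1+2+...+8+15+60)/10 = 11.1 minutes, so the expected total is 222.

```python
import random

random.seed(0)

def task_minutes() -> int:
    """One task: roll a 10-sided die; 1-8 -> that many minutes, 9 -> 15, 10 -> 60."""
    roll = random.randint(1, 10)
    if roll <= 8:
        return roll
    return 15 if roll == 9 else 60

TASKS = 20
TRIALS = 100_000

totals = [sum(task_minutes() for _ in range(TASKS)) for _ in range(TRIALS)]

mean_total = sum(totals) / TRIALS
over_5h = sum(t > 300 for t in totals) / TRIALS

print(f"sum of medians:            {TASKS * 5.5:.0f} minutes")
print(f"simulated mean total:      {mean_total:.0f} minutes")  # close to 222
print(f"fraction of runs over 5h:  {over_5h:.1%}")
```

The long right tail (the 15- and 60-minute outcomes) is what drags the mean well above the median, and why a noticeable fraction of runs blows past even the expected 222 minutes.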
There's a piece called "Yet another ls option" describing the rabbit hole that was uncovered by just trying to add an option to display commas in large numbers to ls.
What's the point?
Before trying to produce estimates, I think it's worth asking why estimates are even needed. If you're taking an iterative approach to product development, then you don't even know what you're going to be building in the future, so what is there to estimate? On the other hand, the value is often in the planning rather than the plan. Producing estimates makes us think about the problem and potential solutions, and spot potential hazards before we stumble into them. In any case, I like understanding why we're producing estimates and how they're going to be used. (See also: the #noestimates discussion on Twitter and elsewhere.)
Explaining estimation, prioritisation and velocity (oh my!)
A few years ago, I wrote an e-mail to some non-technical folk explaining how estimation, prioritisation and velocity work. Or rather, one way in which they can be applied. One thing I left out was that in some situations, especially when doing work in unfamiliar areas, even relative estimates can be wildly inaccurate. Sometimes it's best to give rough size estimates – will something take hours, days, weeks, or months? – to avoid giving a false impression of high accuracy. In any case, if you have a deadline, planning to finish well before the deadline strikes me as quite sensible.