Project Estimates and Understanding Success
By Dr. Philip Mann
Organizations often doom projects to fail right from the start. Whether for relatively small agile projects or massive megaprojects, a combination of poor estimating and poor understanding of estimates contributes significantly to failures. Plenty of clever tricks and complex calculations enter the scene to deal with the potential for failure; some organizations even create a string of sacrificial “loss leader” projects to absorb the fallout from poor estimating practices. However, little has been written that walks leaders through what estimates are and how to use them to establish project baselines and expectations – how to define success or failure in terms of the tolerances and techniques we use to develop them.
That’s where this article comes in.
The Nature of Estimates
When we examine the reasons projects fail, it quickly becomes clear how many of the problem areas converge on a single theme: misunderstanding the nature and purpose of the estimates involved. First, and most obviously, we must remember that an estimate is a calculated guess, not a guarantee, with an acceptable level of accuracy (a tolerance range, or ±%) based on the methodology used. Estimates are not upper limits or extremes; they describe what we can expect, based on what we know of the project, in performance, cost, and time.
Estimates are a product of the amount of detail provided, with better estimates resulting from greater detail. Estimating ranges can vary by industry and organization, and the following illustrates standard estimate accuracy by estimate type, based on the level of detail in scope and other cost factors:
- Rough Order of Magnitude (ROM) Estimate: -25% to +75% (Lowest detail required)
- Preliminary Estimate: -15% to +50%
- Budget Estimate: -10% to +25%
- Definitive Estimate: -5% to +10% (Highest detail required)
It’s essential to understand that the nominal value from which the listed tolerance ranges extend is not the “real” estimate with an expected deviation. Instead, the nominal value is a reference figure for the range within which we expect the actual cost to fall. Think of it as painting a circle big enough that we are pretty sure we can hit it, while acknowledging that the whole circle is the target, not just the center!
- Given a ROM of $500k (the nominal value) for a project, we can see that the actual cost should be somewhere between $375k and $875k. That is, anything in that range is within the estimate.
- Likewise, a budget estimate of the same nominal $500k gives us a range of $450k to $625k, which means that an actual cost that falls anywhere within that range is right on target.
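The arithmetic behind these ranges is straightforward. As a minimal sketch (in Python; the names `TOLERANCES` and `estimate_range` are illustrative, not from any standard library), using the tolerance percentages listed above:

```python
# Standard estimate-type tolerances from the list above, expressed as
# (low, high) fractional deviations from the nominal value.
TOLERANCES = {
    "rom": (-0.25, 0.75),
    "preliminary": (-0.15, 0.50),
    "budget": (-0.10, 0.25),
    "definitive": (-0.05, 0.10),
}

def estimate_range(nominal, estimate_type):
    """Return the (low, high) bounds implied by a nominal value and estimate type."""
    low_pct, high_pct = TOLERANCES[estimate_type]
    return nominal * (1 + low_pct), nominal * (1 + high_pct)

# A $500k ROM spans $375k to $875k; a $500k budget estimate spans $450k to $625k.
```

Any actual cost that lands inside the returned bounds is, by the logic above, within the estimate.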
Next, because estimates become more accurate with more information, the accuracy ranges reflect the amount of risk inherent in the project based on the amount of data used in the estimates, or lack thereof. In essence, the better we understand what we are doing, the more we can pin down what the costs are to do it. Projects that have a lot of well-understood details have relatively close tolerances, while estimates for projects that are still on the back of a napkin accurately reflect the high degree of uncertainty.
Before we move on to methods, notice how every estimate type listed here has a larger upward range than downward range. Technically, any value within the tolerances for the estimate type is equally likely, but the fact that there are more possible outcomes above the nominal value than below it should bias us away from low-end optimism and toward higher-value baselines.
Better Targets Through Better Baselines
You can already see where the main problem is: we tend to develop project baselines from the nominal values instead of considering the whole estimate range and what it means. If we baseline our $500k nominal project at that nominal value and it comes in at $600k, we would say it was over budget by 20% even though it was solidly within our estimate range – and thus right on target by any reasonable understanding of the estimate. It makes no sense to lock down a target for success that supposes a much higher fidelity of information than was possible when the estimate was created.
There are, however, a few approaches that can address this disparity, at least in part, though it will depend on the culture and industry of your organization.
Upper Limit Baseline
One tactic is to use the best estimating available to you and baseline to the upper limit of the estimate. The merit of this method is that it bakes in the information-based risk at the time of estimation. The downside is that the baseline value appears unreasonably high to some decision-makers, especially when developed from lower-fidelity estimates. For our $500k nominal project, this means baselining at $625k (the maximum of the budget estimate range).
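In code, the upper-limit baseline is simply the nominal value pushed to the top of the tolerance range (a Python sketch; the function name is illustrative):

```python
def upper_limit_baseline(nominal, high_pct):
    """Baseline at the top of the estimate's tolerance range (high_pct as a fraction)."""
    return nominal * (1 + high_pct)

# Budget estimate (+25% upper tolerance) on a $500k nominal project yields $625k.
```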
Weighted Average Baseline
A more palatable approach is to use a weighted average within the estimate range. Any suitable technique, such as PERT, can provide a baseline figure that leans a little more toward the higher end of the tolerance range while still treating the nominal number as “most likely.” Using this method to determine the baseline value does diminish the impact of the variability in lower-fidelity estimate types. Its strength, however, is that it makes an above-nominal baseline easier to negotiate. For our $500k nominal project, this means baselining at $512.5k (the PERT-weighted budget estimate).
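The PERT figure in this example comes from the modified formula (L + 4N + H)/6 (see the notes at the end), with the budget-estimate bounds supplying the low and high values. A quick Python sketch (the function name is illustrative):

```python
def pert_baseline(low, nominal, high):
    """Modified PERT weighted average: (L + 4N + H) / 6."""
    return (low + 4 * nominal + high) / 6

# Budget estimate of a $500k nominal project: low = $450k, high = $625k,
# which yields the $512.5k baseline quoted above.
```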
Take the Middle Road
Because of how weighted estimates are calculated, they are only slightly better than the nominal value in most cases. A third approach is to split the difference between the nominal value and the upper limit of the estimating method. This leaves us with a more representative baseline than a nominal value or weighted average while being somewhat easier to sell to management than the true upper-limit baseline. For our $500k nominal project, this means baselining at $562.5k (the mean of the nominal and maximum values for a budget estimate).
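The middle-road baseline is just the midpoint of the nominal value and the upper limit (a Python sketch with an illustrative name):

```python
def middle_road_baseline(nominal, upper_limit):
    """Split the difference between the nominal value and the estimate's upper limit."""
    return (nominal + upper_limit) / 2

# $500k nominal with a $625k budget-estimate upper limit yields $562.5k.
```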
Some organizations may find it useful to develop business rules for the nominal-to-baseline relationship. For more bureaucratic organizations, it may be reasonable to add a fixed percentage to the nominal value to establish the baseline for all projects (e.g., nominal + 25% = baseline). While essentially arbitrary at the start, fixed adjustments appeal to such organizations because they appear less situational, and they still produce more reasonable project baselines than nominal values alone.
Likewise, more entrepreneurial organizations may choose to create a heuristic that incorporates risk variables into an adaptive nominal-to-baseline calculation. For example, low-risk projects may add 10% to the nominal, while high-risk projects add 50%.
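Such a heuristic might be sketched as follows (Python; the risk tiers and uplift percentages are illustrative assumptions, not prescriptions):

```python
# Hypothetical risk-tiered uplifts applied to the nominal value; an organization
# would calibrate these to its own aggregate risk assessments.
RISK_UPLIFT = {"low": 0.10, "medium": 0.25, "high": 0.50}

def heuristic_baseline(nominal, risk_level):
    """Adaptive nominal-to-baseline rule keyed to aggregate project risk."""
    return nominal * (1 + RISK_UPLIFT[risk_level])

# A $500k low-risk project baselines at $550k; a high-risk one at $750k.
```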
In any case, the focus is to align project baselines with the fidelity of the estimates and techniques used to develop them within the operational context, culture, and industry of the organization.
Organizations that want meaningful long-term project success need to reinforce the basics of estimates. We need to complete the conceptual equation by connecting the methods of calculating our expected costs to the way we use those figures to baseline projects and define success or failure. Finally, we need to eliminate the inconsistent views of estimates that create varied perceptions of “success” and “failure” so that everyone involved has the same understanding of what it means to be “on target.”
While this article focused only on estimates in terms of cost, schedule estimates are subject to the same errors, biases, and misuses. Everything said applies to all project baseline estimates in principle.
Notes
1. “Accuracy” as used in PMI documentation would more correctly be labeled “precision” or “confidence level” in other measurement discussions, but the PMI usage appears throughout this post. PMI uses “precision” to reflect numerical rounding practices.
2. “Nominal” is the middle or assigned value that serves as a reference for the tolerances of the estimate, typically the median or mean value of the estimate curve.
3. Modified Program Evaluation and Review Technique (PERT): (L + 4N + H)/6, using (L)ow, (N)ominal, and (H)igh estimate values.
4. Aggregate risk as determined by the organization.