After I had reviewed how Monte Carlo simulation can help a team understand the distribution of potential schedule or cost outcomes for a release or project, one of the learners in the class I was teaching pointed out that while such techniques can be helpful, they are only as good as the inputs provided.
When Monte Carlo is used in project contexts, these inputs are usually expected duration or cost ranges for the activities. If those ranges are inaccurate, the resulting outputs will be useless, and all it takes is one poorly estimated activity to skew the entire distribution curve. Even when reasonable estimate ranges are provided, selecting the wrong probability distribution type for an activity can make the output unusable. For example, if the default is a normal (i.e. bell curve) distribution and we are evaluating an activity that is primarily people-driven, a log-normal distribution might be more appropriate, since the worst case can tail off towards infinity!
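A minimal sketch of the idea, using only Python's standard library: each activity gets a three-point duration estimate, each trial samples one duration per activity and sums them, and the resulting totals give us percentiles for the schedule. The activity figures are entirely hypothetical, and a triangular distribution is assumed for simplicity; a people-driven activity might instead warrant `random.lognormvariate`.

```python
import random
import statistics

random.seed(42)  # reproducible sketch

# Hypothetical activity estimates: (optimistic, most likely, pessimistic) in days.
activities = [
    (3, 5, 9),
    (2, 4, 7),
    (5, 8, 15),
]

def simulate_schedule(n_trials=10_000):
    """Sample a duration per activity per trial and sum them."""
    totals = []
    for _ in range(n_trials):
        total = sum(random.triangular(lo, hi, mode)  # swap in lognormvariate
                    for lo, mode, hi in activities)  # for long-tailed activities
        totals.append(total)
    return totals

totals = simulate_schedule()
p10, p50, p90 = (statistics.quantiles(totals, n=10)[i] for i in (0, 4, 8))
print(f"P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f} days")
```

Widening just one activity's pessimistic value visibly drags the P90 outward, which is exactly the sensitivity to a single poor estimate described above.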
Such garbage-in, garbage-out challenges are not restricted to simulation tools. Take Expected Monetary Value (EMV), which can be used to quantify the expected impact of uncertainties. EMV can help define contingency amounts and can be combined with decision trees to support decision-making. While it is a powerful tool, if the estimates for risk impact and probability are invalid, the outputs will be useless.
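The arithmetic itself is trivial, which is why the inputs matter so much. A sketch with a hypothetical risk register (all probabilities and impacts invented for illustration):

```python
# Hypothetical risk register: (probability, cost impact in $000s).
# A negative impact models an opportunity rather than a threat.
risks = [
    (0.30, 50),    # vendor delay penalty
    (0.10, 200),   # rework after a failed audit
    (0.25, -40),   # early-delivery bonus (opportunity)
]

# EMV = sum of probability-weighted impacts.
emv = sum(p * impact for p, impact in risks)
print(f"EMV-based contingency: ${emv:.0f}k")  # -> $25k
```

Note how a single bad guess propagates directly: doubling the audit-rework probability from 0.10 to 0.20 adds a full $20k to the contingency.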
Even a tried-and-true technique such as parametric estimation will produce the wrong results when the input data is suspect or the assumptions underlying the model are invalid. For example, if we use a formula to calculate how many cans of paint we will need based on wall measurements, unless that formula also incorporates factors such as the desired number of coats or what the wall's surface is made of, it will produce bad results.
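To make the assumption sensitivity concrete, here is a sketch of that paint formula. The coverage rate and surface factor are illustrative values, not real product figures:

```python
import math

def cans_needed(wall_area_m2, coats=1, coverage_m2_per_can=10, surface_factor=1.0):
    """Parametric estimate: cans = area * coats * surface_factor / coverage.

    surface_factor > 1.0 models porous surfaces that absorb more paint;
    all default parameter values here are illustrative assumptions.
    """
    return math.ceil(wall_area_m2 * coats * surface_factor / coverage_m2_per_can)

naive = cans_needed(40)                                 # ignores coats and surface
better = cans_needed(40, coats=2, surface_factor=1.15)  # two coats, porous wall
print(naive, better)  # -> 4 10
```

The model is the same in both calls; only the assumptions differ, and the estimate more than doubles.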
How do we overcome such challenges?
Expert judgment on the part of the people performing the activities helps. Your own past experience can help to sanity check the estimates you receive. Independent consultants with deep experience in the work being done, or published historical data for similar projects, might offset knowledge gaps left by the first two approaches.
But all of these methods are predicated on the assumption that the context of our new project is similar to what we or others have experienced before. Unfortunately, the accelerated pace of contextual change resulting from factors such as advancements in technology or environmental disruptions means that, increasingly often, past results are not indicative of future performance.
We might still be able to use the same tools as before, but we will need to design and implement experiments that tell us whether the underlying assumptions supporting their use remain valid, build flexibility into our projects, and coach our stakeholders to increase their resilience to plan pivots.
Faced with complexity, trust but verify (as early as possible) becomes our mantra.