Saturday 11 August 2012

Why EVM Fails

Some recommended reading now for anybody attempting to introduce Earned Value in their organisation. Follow the link below to a very candid paper on the pitfalls of poor EV management and why it is important to get the foundation of processes and culture in place before EV can ever hope to be successful.

http://www.icoste.org/LukasPaper.pdf

Joseph makes a few great points, particularly in the latter part of the paper in his 'Top Ten Mistakes' section. I've seen all of these at some point in the past, and I've seen first-hand what effect they have on the quality of EVM outputs.

I would also like to add my own 11th 'Top Mistake' which relates to granularity.

Break It Down

Even if you find yourself in the privileged position of having avoided Joseph's 'Top Ten Mistakes', there is a further, less obvious mistake that can, at best, limit the value of the process and, at worst, mislead those reviewing the project into a false impression of project performance.

Successfully running Earned Value is only worth the effort if it is performed at a sufficient level of granularity. Obviously doing so requires an equal level of granularity in cost collection, which comes with its own cultural and procedural issues. Many companies shy away from more detailed cost collection because it increases the burden on direct bookers and increases the likelihood of misbookings.

But these reservations are minimal when compared to the resulting increase in project control. The increased burden on bookers will be small provided the organisation has a well-established and robust booking system integrated with its ERP software. The potential for reduced booking accuracy is a more serious concern, though, and can only be mitigated via:

- Efficient lines of communication to ensure that project teams are aware of the correct booking numbers;
- A well designed booking process that encourages precision;
- Good shop-floor control by the PM and CAMs;
- Thorough analysis of recorded actuals to spot any errors, both with labour bookings and materials.

Once in place, a more broken-down EVM system will provide far better clarity of the condition of the project and will present Project Managers with greater precision of performance data.

This is important because progress on Work Packages (WPs) with smaller budgets will be open to scrutiny, rather than swallowed up in the figures of a much larger area of the WBS; particularly important where these smaller elements exist on the critical path or are otherwise subject to narrow constraints.
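To illustrate why this matters, here is a minimal sketch in Python (all budget and actuals figures are invented) showing how a significant overrun on a small Work Package can all but vanish when performance is only reported at the level above it:

```python
# Hypothetical Work Packages under one parent WBS element:
# (name, budgeted cost of work performed, actual cost of work performed)
work_packages = [
    ("WP-010 Structural design", 100_000, 98_000),   # slightly under-spent
    ("WP-020 Test rig build",     12_000, 18_000),   # 50% over-spent, on the critical path
]

# Cost Performance Index (CPI) = earned value / actual cost
for name, bcwp, acwp in work_packages:
    print(f"{name}: CPI = {bcwp / acwp:.2f}")

# Rolled up to the parent WBS element, the overrun all but disappears
total_bcwp = sum(wp[1] for wp in work_packages)
total_acwp = sum(wp[2] for wp in work_packages)
print(f"Parent WBS element: CPI = {total_bcwp / total_acwp:.2f}")
```

Reviewed only at the parent level, a CPI of roughly 0.97 looks perfectly healthy, yet the critical-path item sitting underneath it is running 50% over cost.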

Sunday 5 August 2012

Monte-Carlo Analysis Example - A Guide

In my last post I commented on how simple Monte-Carlo analysis was to run without the need for expensive Risk Management software. I posted an example of a simple spreadsheet to do just that in the 'Toolbox' page of this blog. I wanted to post a short guide for it to get you started, should you wish to use it.

Begin by navigating to the 'Toolbox' page using the links on the right-hand side of the PM Shed homepage. Select the link and download the spreadsheet.

Once you have it open, start entering your risk information in the table in the top right. A brief risk description should be put in column B, and the Cost Impact and Probability figures should go in columns C and D respectively (feel free to add extra rows if necessary, but the sheet's formulae will need to be updated).

Column E will then calculate the 'Factored Value' (Impact x Probability), and the sum of this in cell E14 will show the basic 'Management Reserve (MR)' or 'Technical Contingency' for the list of risks.
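For anyone who wants to sanity-check the arithmetic outside Excel, the same calculation looks like this in Python (the risk descriptions and figures below are purely illustrative, not the ones in the spreadsheet):

```python
# Illustrative risk register: (description, cost impact in £, probability of occurrence)
risks = [
    ("Supplier delivers late",     20_000, 0.30),
    ("Rework of control software", 45_000, 0.15),
    ("Test facility unavailable",  10_000, 0.50),
]

# Factored Value = Impact x Probability; their sum is the basic Management Reserve (MR)
factored_values = [impact * probability for _, impact, probability in risks]
management_reserve = sum(factored_values)
print(f"Basic MR / Technical Contingency: £{management_reserve:,.0f}")
```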

Columns H onwards display 10,000 simulated runs through our project using MS Excel's 'RANDBETWEEN' function (a random number generator, shown in column G for information only). The total risk cost for each 'run' is shown in row 12, and row 13 then shows whether the current MR was sufficient to cover this cost and displays a YES or a NO accordingly.

Cell G15 simply shows the percentage of 'YES's across all runs. This percentage tells us what confidence we should have that the MR funding is appropriate for the level of risk on the project. Simply summing all the Cost x Probability figures for each risk will likely give a confidence of around 50%. As you will see with the example figures I have entered, the percentage is around 53%. If you press F9, new random numbers will be generated for all 10,000 runs, but you will notice that the percentage changes very little due to the large sample size of the analysis.
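A rough Python equivalent of what columns H onwards and cell G15 are doing - again with illustrative figures rather than those in the spreadsheet - is shown below:

```python
import random

# Same illustrative risk register as in the earlier sketch
risks = [("Supplier late", 20_000, 0.30),
         ("Software rework", 45_000, 0.15),
         ("Facility unavailable", 10_000, 0.50)]
management_reserve = sum(impact * p for _, impact, p in risks)

runs = 10_000
covered = 0
for _ in range(runs):
    # Equivalent of RANDBETWEEN per risk: the risk 'occurs' if the draw falls within its probability
    run_cost = sum(impact for _, impact, p in risks if random.random() < p)
    if run_cost <= management_reserve:   # row 13's YES / NO test
        covered += 1

print(f"Confidence: {covered / runs:.0%}")   # cell G15's percentage
```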

The last step is to add some additional 'confidence funding' in cell E13. As you increase the figure in this cell, you will see the confidence percentage increase. It would be fairly simple to augment the spreadsheet such that the desired percentage could be entered and the required confidence funding calculated for you - but I'll leave that to you!
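If you do fancy having a go, one simple approach (sketched below in Python with the same illustrative figures, stepping the funding up in £1,000 increments) is to keep re-running the simulation until the target confidence is reached:

```python
import random

risks = [("Supplier late", 20_000, 0.30),
         ("Software rework", 45_000, 0.15),
         ("Facility unavailable", 10_000, 0.50)]
basic_mr = sum(impact * p for _, impact, p in risks)

def confidence(reserve, runs=10_000):
    """Percentage of simulated runs whose total risk cost stays within the reserve."""
    covered = sum(
        sum(impact for _, impact, p in risks if random.random() < p) <= reserve
        for _ in range(runs)
    )
    return covered / runs

target = 0.80
funding = 0
while confidence(basic_mr + funding) < target:
    funding += 1_000   # add confidence funding in £1,000 steps

print(f"Confidence funding needed for ~{target:.0%} confidence: £{funding:,}")
```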

Use the spreadsheet as you see fit, and please post comments if you have any questions, suggestions or... well, comments!

Friday 3 August 2012

Risk Contingency - Are You Confident?

The Need for Risk Management

The current financial crisis has provided a poignant reminder of why prudent, honest risk management is so important. Unlike the great casino of the banking world, very few private sector organisations can rely on being bailed out by the state should any venture go awry, so a sound understanding of the risks inherent in any commercial endeavour is crucial to project success.

Firstly, to give confidence in project viability, a full and comprehensive risk review must be undertaken before any competitive bids are submitted. Many organisations have a well-established risk assessment process, involving representatives from an array of disciplines and reviewing all elements of the project by stepping through the Statement of Work (SoW) or the WBS task by task.

Having identified the risks, though, the register is often devalued by vaguely defined, "finger-in-the-air" cost and probability estimates. There is a wide range of estimating techniques available (which I will hopefully touch upon in another post), and these should be fully deployed to reach accurate cost predictions; probability estimates should be derived from past experience or via mathematical reasoning where applicable. Where a risk impact involves extra work (labour), costs should be calculated as with any other task - estimating man-hours, choosing the most appropriate resource or resource type, and calculating the cost parametrically. Where the range of potential impact is particularly large, three-point estimating should be used along with PERT analysis to provide a single figure.
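For reference, the PERT expected value is simply a weighted average of the optimistic, most likely and pessimistic estimates - (O + 4M + P) / 6. A quick illustration with made-up figures:

```python
# Hypothetical three-point estimate for a risk impact (could be man-hours or £)
optimistic, most_likely, pessimistic = 200, 320, 600

# PERT (beta distribution) expected value: (O + 4M + P) / 6
expected = (optimistic + 4 * most_likely + pessimistic) / 6
print(f"PERT expected impact: {expected:.0f}")   # ~347 for these figures
```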

An oft-forgotten element of assessing risk, and a major reason for late project delivery, is the evaluation of the schedule impact. You may have ensured that sufficient funds are laid by, but is your customer more focused on a timely completion? As I sit here watching Olympic swimming out of the corner of my eye, I would imagine that LOCOG had a far keener interest in any risks to facility delivery dates in the build-up to the Games than in potential cost overruns.

Once an accurately costed risk register has been developed, the well-established practice is to calculate the 'technical contingency' or 'management reserve' by adding up 'Gross Cost Impact x Probability %' for each risk. But why do so many project teams stop there?

Monte-Carlo Analysis

Simple Cost Risk, or 'Monte Carlo', analysis is a critical element of any risk management process. Without it, a project can have no confidence that its contingency is sufficient. I imagine the reason many organisations shy away from it is that it is seen as overly complicated - probably because it is usually something performed by a dedicated Risk Management tool, which brings its own costs and complications. But Monte Carlo analysis is simpler than most people realise.

For the uninitiated, Monte Carlo analysis effectively 'rolls the dice' on a project to calculate a total risk impact cost using the probabilities defined for each risk. On each simulated 'run', the mathematical dice, which is effectively a random number generator, 'decides' whether each risk occurs. Any of the risks might be selected to occur, or all of them, or indeed none at all. But by doing the same calculations over hundreds or thousands of iterations, a picture begins to develop of a 'typical' outcome. A simple calculation can tell you what percentage of runs produced total risk impact costs that fell within the contingency budget (i.e. the percentage of runs in which the project had sufficient contingency to cover the total cost of all impacted risks). This percentage essentially tells you how likely you are to have enough money to cover your risks.
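For readers who would rather see this in code than in a spreadsheet, the sketch below (Python with NumPy, and an invented three-risk register) runs 10,000 such iterations and reports both the confidence given by the basic contingency and the budget that would be needed for 80% confidence:

```python
import numpy as np

# Invented risk register: cost impact (£) and probability of occurrence
impacts = np.array([20_000, 45_000, 10_000])
probs   = np.array([0.30,   0.15,   0.50])

runs = 10_000
rng = np.random.default_rng()

# 'Roll the dice' once per risk per run: each risk either occurs (True) or not (False)
occurs = rng.random((runs, len(impacts))) < probs
run_costs = (occurs * impacts).sum(axis=1)   # total risk cost of each simulated run

basic_mr = float(impacts @ probs)            # sum of Impact x Probability
confidence = (run_costs <= basic_mr).mean()
print(f"Basic MR of £{basic_mr:,.0f} covers the outcome in {confidence:.0%} of runs")

# The cost covered in 80% of runs - i.e. the contingency needed for 80% confidence
print(f"80% confidence budget: £{np.percentile(run_costs, 80):,.0f}")
```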

Typically, the basic contingency figure (calculated as described above) will produce a fairly low confidence figure, usually in the region of 50%, which is precisely why this analysis is so valuable. By adding some additional 'confidence funding' to the contingency budget and re-running the simulation, higher confidences can be achieved. Depending on the nature of the project and any risk/benefit analysis carried out, a particular desired confidence level can be defined by the organisation and the 'confidence funding' set appropriately.

In the 'Toolbox' section of this blog, I have uploaded a simple Monte-Carlo analysis spreadsheet to illustrate how easy it is to run a basic simulation. Feel free to make use of it.