Saturday, 11 May 2013

You may speak 'Project Management', but you still need a dictionary...

There has been a lot written over the years about the humble Work Breakdown Structure (WBS) and its value at the centre of any planning/pricing operation. Producing an agreed hierarchy of the project work gives greater confidence of full coverage of the project deliverables and the Statement of Work (SoW). It also forms the basis of the project reporting and project management structure by allowing for the definition of control accounts. Finally, the WBS facilitates the assignment of responsible persons, thereby ensuring that every activity, and indeed every high-level rollup, has an owner and nothing gets neglected.

Many people differ over the methodology behind the WBS: some opt for a deliverables-based approach, some for a Product Breakdown Structure type hierarchy, while others prefer a discipline-based breakdown (Systems Engineering, Hardware Engineering etc.). More often than not, though, it is a combination of the above, usually dictated by the nature of the project, and indeed that of the organisation. For this reason, I would never recommend being overly prescriptive with WBS best practice; a simple WBS template with some accompanying guidance is often sufficient.

What is far more important, and what often gets neglected when forming a WBS, is the definition and supporting information behind each work package.

Far too often when I ask to see a project's WBS I am presented with the usual WBS numbering breakdown with a list of work package titles. If I'm lucky, there is an Organisation Breakdown Structure with a Responsibility Assignment Matrix (RAM) showing the responsible parties for each area of the work. But a WBS does not end there. A Work Breakdown Structure, without a WBS dictionary, is incomplete.

WBS Dictionary

In essence, a WBS Dictionary is a core planning document that records all the information required to plan, implement and monitor each work package. This should include, but not necessarily be limited to:

Booking Codes
Scheduled Start and Finish Dates ("Period of Performance")
Responsible Person(s) (Output from RAM)
Resource Requirements
Budget
Basis of Estimate
Requirements/SoW mapping
Task objectives
Definition of work (what work is, and is not, included)
Key external dependencies (technical information, supplier deliveries, customer furnished equipment)
Quality Control information
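
To illustrate (and purely as a sketch, not a prescribed standard), a single dictionary entry could be captured as a structured record. A minimal example in Python, with all field names assumed for illustration:

    from dataclasses import dataclass
    from datetime import date

    # A hypothetical shape for one WBS dictionary entry; the field names
    # simply mirror the list above and are illustrative only.
    @dataclass
    class WBSDictionaryEntry:
        wbs_code: str                     # e.g. "1.2.3"
        title: str
        booking_codes: list[str]          # cost collection/booking numbers
        start: date                       # period of performance: start
        finish: date                      # period of performance: finish
        responsible: str                  # owner, as output from the RAM
        resources: list[str]              # required resources or resource types
        budget: float
        basis_of_estimate: str
        sow_references: list[str]         # requirements/SoW mapping
        objectives: str
        work_included: str                # definition of work in scope
        work_excluded: str                # work explicitly out of scope
        external_dependencies: list[str]  # e.g. supplier deliveries, CFE
        quality_control: str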

The WBS should be at the heart of any planning operation, at all stages in a project's lifecycle. During a proposal/bid process, the WBS Dictionary will likely be developed at a higher level (due to the relative lack of definition at this stage), but is no less crucial for it. The dictionary gives confidence that all elements of work have been included in the proposal and supports all estimates by documenting the assumptions made within the basis of estimate.

Once work commences and the usual project analysis systems are put in place, the dictionary becomes an invaluable tool to support them. Taking schedule status and calculating/assessing earned value is made much easier if 'task completion' is well defined and individual work package budgets are available.

The WBS dictionary is an essential element of a Project Management Plan, so don't leave home without one!

..unless it needs to be protectively marked, in which case you should probably keep it locked in a drawer or something, you know the drill...


Sunday, 20 January 2013

Performance Measurement Metrics and "The 99% Syndrome"

Scheduling best practice dictates that work should be broken down sufficiently that it can be logically linked. This is particularly important when applying Earned Value, as it aids the determination of progress. Larger, longer-duration tasks require task completion to be approximated at the status date, whereas a more granular breakdown makes this easier: each small element of work is either complete or not (such tasks are often referred to as "0/100" tasks, where value is earned upon task completion and not before).

Occasionally, though, it is necessary to maintain longer-duration tasks for various reasons. In these cases, it is desirable to allow for the accumulation of value whilst the activity is in progress. Where the task spans more than two reporting periods, standard practice is to allow the use of a "% Complete" EVM method (or Performance Measurement Technique (PMT)). However, this leaves the project exposed to the risk of "the 99% syndrome": the task owner reports "almost complete", thereby claiming the majority of the task's value, but holds off from declaring final completion until some considerable time later.
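
To make the distinction between the two techniques concrete, here is a minimal sketch in Python (with an invented budget figure) of how value is earned under each:

    def earned_value_0_100(budget, complete):
        # 0/100 method: all value is earned at completion, none before
        return budget if complete else 0.0

    def earned_value_percent(budget, percent_complete):
        # % Complete method: value accrues with assessed progress
        return budget * percent_complete

    # A task with a 10,000 budget, reported as "99% complete":
    print(earned_value_0_100(10_000, complete=False))  # 0.0 - no value until done
    print(earned_value_percent(10_000, 0.99))          # 9900.0 - nearly all value claimed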

The 99% Syndrome can mask potential delays or bottlenecks by allowing unjustified value to be claimed. To avoid this, all tasks assigned a '% Complete' method should be given supporting metrics that define stages of progression through the work. Very poor applications of EVM have insufficient detail in the WBS, leading to a schedule full of large, long-duration, poorly defined tasks without supporting metrics. The compound effect of this across the whole plan is devastating to the integrity of the EVM data and, ultimately, its usefulness.

However, defining and establishing task metrics can often be very difficult in even the most robust EVM systems. For certain types of work it can seem impossible to agree a sensible method for tracking progress. A particularly complex example I came across recently related to the 'burndown' of bugs found in a software product. The task duration and work was based upon an estimated total number of bugs, including those not yet found, and the aim was to have all the bugs cleared prior to a customer testing event. Since the effort required to clear these bugs represented a major proportion of the project, it was imperative that progress on this task was tracked accurately. Initially it was suggested that progress could be attributed to the number of bugs 'fixed' since the last status date, but this doesn't take into account the rate at which new bugs are being raised. If 100 bugs were fixed in a given period, but in the same period another 100 were raised, it would not be appropriate to claim any earned value since there was no progression to a bug-free system.

Instead, I suggested that the reduction in the total number of 'open' bugs should be used to calculate progress. If we take a snapshot of the total number of known bugs at the beginning of the task (say, 500), progress can be attributed as this number declines (Period 1: Bugs Fixed = 100; Bugs Raised = 50. Delta = 100 - 50 = 50. Percentage Complete = 50/500 = 10%). This approach seems appropriate so long as the number declines. However, the typical profile of such a fault burndown activity is one where the total number of open faults rises early on, then plateaus, and finally falls in the latter stages. This would mean that no earned value could be accrued until late in the task - perhaps not a fair assessment of progression.
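
The suggested calculation is trivial to express; a short Python sketch using the figures from the example above:

    def burndown_percent_complete(baseline_open, open_now):
        # Progress as the net reduction in open bugs against the baseline
        # snapshot taken at task start. Note this goes negative if raises
        # outpace fixes early on - the very weakness discussed above.
        return (baseline_open - open_now) / baseline_open

    # Period 1: baseline of 500 known bugs; 100 fixed, 50 raised -> 450 open
    print(burndown_percent_complete(500, 450))  # 0.1, i.e. 10% complete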

The search for a sensible way to track progress on these tasks continues. If you have any suggestions, I would welcome them - or indeed any further examples you may have come across that have proved difficult.


Saturday, 11 August 2012

Why EVM Fails

Some recommended reading now for anybody attempting to instigate the use of Earned Value in their organisation. Follow the link below to a very candid paper on the pitfalls of poor EV management and why it is important to get the foundation of processes and culture in place before EV can ever hope to be successful.

http://www.icoste.org/LukasPaper.pdf

Joseph makes a few great points, particularly in the latter part of the paper in his 'Top Ten Mistakes' section. I've seen all of these at some point in the past, and I've seen first-hand what effect they have on the quality of EVM outputs.

I would also like to add my own 11th 'Top Mistake' which relates to granularity.

Break It Down

Even if you find yourself in the privileged position of having avoided Joseph's 'Top Ten Mistakes', there is a further, less obvious mistake that can, at best, limit the value of the process and, at worst, give those reviewing the project a false impression of project performance.

Successfully running Earned Value is only worth the effort if it is performed at a sufficient level of granularity. Obviously doing so requires an equal level of granularity in cost collection, which comes with its own cultural and procedural issues. Many companies shy away from more detailed cost collection because it increases the burden on direct bookers and increases the likelihood of misbookings.

But these reservations are minor when compared to the resulting increase in project control. The increased burden on bookers will be small provided the organisation has a well-established and robust booking system integrated with its ERP software. The potential for reduced booking accuracy is a more serious concern, though, and can only be mitigated via:

- Efficient lines of communication to ensure that project teams are aware of the correct booking numbers;
- A well designed booking process that encourages precision;
- Good shop-floor control by the PM and CAMs;
- Thorough analysis of recorded actuals to spot any errors, both with labour bookings and materials.

Once in place, a more granular EVM system will provide far better clarity on the condition of the project and present Project Managers with more precise performance data.

This is important because progress on Work Packages (WPs) with smaller budgets will be open to scrutiny, rather than being swallowed up in the figures of a much larger area of the WBS; this is particularly important where these smaller elements sit on the critical path or are otherwise subject to tight constraints.

Sunday, 5 August 2012

Monte-Carlo Analysis Example - A Guide

In my last post I commented on how simple Monte-Carlo analysis was to run without the need for expensive Risk Management software. I posted an example of a simple spreadsheet to do just that in the 'Toolbox' page of this blog. I wanted to post a short guide for it to get you started, should you wish to use it.

Begin by navigating to the 'Toolbox' page using the links on the right-hand side of the PM Shed homepage. Select the link and download the spreadsheet.

Once you have it open, start entering your risk information in the table in the top right. A brief risk description should be put in column B, and the Cost Impact and Probability figures should go in columns C and D respectively (feel free to add extra rows if necessary, but the sheet's formulae will need to be updated).

Column E will then calculate the 'Factored Value' (Impact x Probability), and the sum of this in cell E14 will show the basic 'Management Reserve (MR)' or 'Technical Contingency' for the list of risks.

Columns H onwards display 10,000 simulated runs through our project using MS Excel's 'RANDBETWEEN' function (a random number generator, shown in column G for information only). The total risk cost for each 'run' is shown in row 12, and row 13 then shows whether the current MR was sufficient to cover this cost and displays a YES or a NO accordingly.

Cell G15 simply shows the percentage of 'YES's across all runs. This percentage tells us what confidence we should have that the MR funding is appropriate for the level of risk on the project. Simply summing the Cost x Probability figures for each risk will likely give a confidence of around 50%; as you will see with the example figures I have entered, the percentage is around 53%. If you press F9, new random numbers will be generated for all 10,000 runs, but you will notice that the percentage changes very little due to the large sample size of the analysis.

The last step is to add some additional 'confidence funding' in cell E13. As you increase the figure in this cell, you will see the confidence percentage increase. It would be fairly simple to augment the spreadsheet such that the desired percentage could be entered and the required confidence funding calculated for you - but I'll leave that to you!
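
If you would rather see the logic outside of Excel, the whole simulation boils down to a few lines. A minimal sketch in Python, with the risk figures invented for illustration (they are not the ones in the example workbook):

    import random

    risks = [  # (cost impact, probability of occurrence)
        (40_000, 0.50),
        (25_000, 0.40),
        (60_000, 0.20),
        (10_000, 0.70),
    ]

    # Basic MR: the sum of the factored values (the equivalent of cell E14)
    management_reserve = sum(cost * prob for cost, prob in risks)
    confidence_funding = 0.0  # extra funding, the equivalent of cell E13

    runs = 10_000
    covered = 0
    for _ in range(runs):
        # 'Roll the dice' for each risk: it occurs if the random draw
        # falls within its probability
        total = sum(cost for cost, prob in risks if random.random() < prob)
        if total <= management_reserve + confidence_funding:
            covered += 1

    print(f"Confidence: {covered / runs:.1%}")  # roughly 47% for these made-up figures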

Use the spreadsheet as you see fit, and please post comments if you have any questions, suggestions or.. well, comments!

Friday, 3 August 2012

Risk Contingency - Are You Confident?

The Need for Risk Management

The current financial crisis has provided a poignant reminder of why prudent, honest risk management is so important. Unlike the great casino of the banking world, very few private sector organisations can rely on being bailed out by the state should any venture go awry, so a sound understanding of the risks inherent in any commercial endeavour is crucial to project success.

Firstly, to give confidence of project viability, a full and comprehensive risk review must be undertaken before any competitive bid is submitted. Many organisations have a well-established risk assessment process, involving representatives from an array of disciplines and reviewing all elements of the project by stepping through the Statement of Work (SoW) or the WBS task by task. Once the risks have been identified, though, the register is often devalued by vaguely defined, "finger-in-the-air" cost and probability estimates. There is a wide range of estimating techniques available (which I will hopefully touch upon in another post) that should be fully deployed to reach accurate cost predictions, and probability estimates should be derived from past experience or via mathematical reasoning where applicable. Where a risk impact involves extra work (labour), costs should be calculated as with any other task: estimating man-hours, choosing the most appropriate resource or resource type, and calculating the cost parametrically. Where the range of potential impact is particularly large, three-point estimating should be used along with PERT analysis to provide a single figure.
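
For reference, the standard PERT expected value is a weighted average of the optimistic (O), most likely (M) and pessimistic (P) estimates: E = (O + 4M + P) / 6. A one-function Python sketch with invented figures:

    def pert_estimate(optimistic, most_likely, pessimistic):
        # Standard PERT expected value: E = (O + 4M + P) / 6
        return (optimistic + 4 * most_likely + pessimistic) / 6

    # e.g. a labour impact estimated at 80/100/150 man-hours:
    print(pert_estimate(80, 100, 150))  # 105.0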

An oft-forgotten element of assessing risk, and a major reason for late project delivery, is the evaluation of schedule impact. You may have ensured that sufficient funds are laid by, but is your customer more focused on timely completion? As I sit here watching Olympic swimming out of the corner of my eye, I would imagine that LOCOG had a far keener interest in any risks to facility delivery dates in the build-up to the Games than in potential cost overruns.

Once an accurately costed risk register has been developed, the practice of calculating the 'technical contingency' or 'management reserve' by adding up 'Gross Cost Impact x Probability %' for each risk is well established. But why do so many project teams stop there?

Monte-Carlo Analysis

Simple cost risk, or 'Monte Carlo', analysis is a critical element of any risk management process. Without it, a project can have no confidence that its contingency is sufficient. I imagine the reason many organisations shy away from it is that it is seen as overly complicated - probably because it is usually performed by a dedicated Risk Management tool, which brings its own costs and complications. But Monte Carlo analysis is simpler than most people realise.

For the uninitiated, Monte Carlo analysis effectively 'rolls the dice' on a project to calculate a total risk impact cost using the probabilities defined for each risk. On each simulated 'run', the mathematical dice, which is effectively a random number generator, 'decides' whether each risk occurs. Any of the risks might be selected to occur, or all of them, or indeed none at all. But by repeating the same calculation over hundreds or thousands of iterations, a picture begins to develop of a 'typical' outcome. A simple calculation can then tell you what percentage of runs produced total risk impact costs that fell within the contingency budget (i.e. the percentage of runs in which the project had sufficient contingency to cover the total cost of all impacted risks). This percentage essentially tells you how likely you are to have enough money to cover your risks.

Typically, the basic contingency figure (calculated as described above) will produce a fairly low confidence figure, usually in the region of 50%, which is precisely why this analysis is so valuable. By increasing the contingency budget with some additional 'confidence funding' and re-running the simulation, higher confidence levels can be achieved. Depending on the nature of the project and any risk/benefit analysis carried out, a particular desired confidence level can be defined by the organisation and the 'confidence funding' set appropriately.
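
As an aside, rather than iterating on the funding figure by hand, the required budget can be read straight off the simulated outcomes as a percentile. A small sketch in Python, assuming 'totals' holds the total risk cost from each simulated run:

    def budget_for_confidence(totals, confidence):
        # The smallest budget that would have covered the given fraction
        # of simulated runs (e.g. confidence = 0.80 for 80%)
        ordered = sorted(totals)
        index = min(int(confidence * len(ordered)), len(ordered) - 1)
        return ordered[index]

    # confidence_funding = budget_for_confidence(totals, 0.80) - management_reserve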

In the 'Toolbox' section of this blog, I have uploaded a simple Monte-Carlo analysis spreadsheet to illustrate how easy it is to run a basic simulation. Feel free to make use of it.

Friday, 27 July 2012

The Myth of Hindsight

I wanted this blog to be about interesting and relevant observations from the world of Project Management and Project Controls. Over time I hope to develop this into a 'Toolbox' (hence the 'PM Shed') of helpful tips, guidance and some templates and examples to aid you on your way.

To some, project controls and Earned Value principles are a mystery; to others they are an inconvenient 'box' that project managers are made to 'tick' by micro-managing directors; but to me they are a fundamentally important aspect of all successful projects and, done right, an incredibly rewarding field. That said, the point of my first post is a more general one that can be applied throughout business, or indeed life.

The assessment of decision-making

"The decision to close the roof on centre-court turned out to be a good one after torrential rain hit Wimbledon this evening"

This statement, heard on the radio during the tennis tournament back in June, is logically incorrect. The 'wisdom' of any decision cannot, and should not, be assessed based upon hindsight because this information was not available to the decision-maker at the time. And yet it is something we hear all the time - particularly in Project Management.

A risk review, for instance, should be judged upon the thoroughness of the process, the range of opinions consulted and the technical understanding of the deliverables and their requirements. The 'quality' of the resulting risk register can therefore be assessed before the project even begins. Should an unknown risk affect the programme, absorbing all contingency funds and eating into the margin, this should not, by default, black-mark the PM as a poor risk manager.

Sceptics might suggest that this gives PMs a get-out-of-jail-free card. How can failure or poor performance result in a thumbs-up for the choice that delivered it? Besides, the impact of a decision is often far easier to gauge than the diligence of the decision-making process, but does that make it a better measure of judgement?

I would argue that it is dangerous to review decisions, and to develop decision-making processes, purely on the basis of hindsight. All projects, and indeed all choices in life, carry an element of risk, a 'known unknown', which is to say that they could end in failure. But who is to say that the alternative choice or choices would have produced more favourable results?