There has been a lot written over the years about the humble Work Breakdown Structure (WBS) and its value at the centre of any planning/pricing operation. Producing an agreed hierarchy of the project work gives greater confidence of full coverage of the project deliverables and the Statement of Work (SoW). It also forms the basis of project reporting and the project management structure by allowing control accounts to be defined. Finally, the WBS facilitates the assignment of responsible persons, ensuring that every activity, and indeed every high-level rollup, has an owner and nothing gets neglected.
Many people differ over the methodology behind the WBS: some opt for a deliverables-based approach, some for a Product Breakdown Structure type hierarchy, while others prefer a discipline-based breakdown (Systems Engineering, Hardware Engineering, etc.). More often than not, though, it is a combination of the above, usually determined by the nature of the project, and indeed that of the organisation. For this reason, I would never recommend being overly prescriptive with WBS best practice; a simple WBS template with some accompanying guidance is often sufficient.
What is far more important, and what often gets neglected when forming a WBS, is the definition and supporting information behind each work package.
Far too often when I ask to see a project's WBS I am presented with the usual WBS numbering breakdown with a list of work package titles. If I'm lucky, there is an Organisation Breakdown Structure with a Responsibility Assignment Matrix (RAM) showing the responsible parties for each area of the work. But a WBS does not end there. A Work Breakdown Structure, without a WBS dictionary, is incomplete.
WBS Dictionary
In essence, a WBS Dictionary is a core planning document which records all the relevant information required to plan, implement and monitor each work package. This should include, but not necessarily be limited to:
Booking Codes
Scheduled Start and Finish Dates ("Period of Performance")
Responsible Person(s) (Output from RAM)
Resource Requirements
Budget
Basis of Estimate
Requirements/SoW mapping
Task objectives
Definition of work (what work is, and is not, included)
Key external dependencies (technical information, supplier deliveries, customer-furnished equipment)
Quality Control information
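To make the list above concrete, here is a minimal sketch of how a single WBS dictionary entry might be captured as a structured record. The class and field names are my own illustration, not any standard schema or tool's format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WBSDictionaryEntry:
    """One work package's dictionary entry (hypothetical field names)."""
    wbs_code: str                 # position in the WBS numbering, e.g. "1.2.3"
    title: str
    booking_code: str             # cost-collection/booking code
    start: str                    # scheduled start ("period of performance")
    finish: str                   # scheduled finish
    responsible: str              # responsible person, output from the RAM
    budget: float                 # work package budget
    basis_of_estimate: str
    sow_references: List[str] = field(default_factory=list)   # Requirements/SoW mapping
    objectives: str = ""
    scope_included: List[str] = field(default_factory=list)   # what work IS included
    scope_excluded: List[str] = field(default_factory=list)   # what work is NOT included
    external_dependencies: List[str] = field(default_factory=list)

# Illustrative entry only - all values invented for the example.
entry = WBSDictionaryEntry(
    wbs_code="1.2.3",
    title="Systems Integration Testing",
    booking_code="SIT-001",
    start="2013-06-01",
    finish="2013-08-30",
    responsible="J. Smith",
    budget=45000.0,
    basis_of_estimate="Bottom-up estimate from test procedure count",
    sow_references=["SoW 4.2", "SoW 4.5"],
)
print(entry.wbs_code, entry.responsible)
```

Holding the entries in a structure like this (rather than free text) makes it straightforward to check that every work package has a budget, an owner and a SoW mapping before the baseline is set.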
The WBS should be at the heart of any planning operation, at all stages in a project's lifecycle. During a proposal/bid process, the WBS Dictionary will likely be developed at a higher level (due to the relative lack of definition at this stage), but is no less crucial for it. The dictionary gives confidence that all elements of work have been included in the proposal and supports all estimates by documenting the assumptions made within the basis of estimate.
Once work commences and the usual project analysis systems are put in place, the dictionary is an invaluable tool to support this. Taking schedule status and calculating/assessing earned value is made much easier if 'task completion' is well defined and individual work package budgets are available.
The WBS dictionary is an essential element of a Project Management Plan, so don't leave home without one!
...unless it needs to be protectively marked, in which case you should probably keep it locked in a drawer or something, you know the drill...
PM SHED
Saturday, 11 May 2013
Sunday, 20 January 2013
Performance Measurement Metrics and "The 99% Syndrome"
Scheduling best practice dictates that work should be broken down sufficiently such that it can be logically linked. This is particularly important when applying Earned Value as it aids the determination of progress. Larger, longer duration tasks require the approximation of task completion at the status date, while a more granular breakdown makes this easier; each small element of work is either complete or not (often referred to as "0/100" tasks where value is earned upon task completion and not before).
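The "0/100" technique above can be sketched in a few lines: a task earns its full budgeted value only on completion, and nothing before. This is a minimal illustration, not any scheduling tool's API:

```python
# Earned value under the "0/100" technique: each task contributes its
# budgeted value only once it is complete; in-progress tasks earn nothing.
def earned_value_0_100(tasks):
    """tasks: list of (budgeted_value, is_complete) pairs."""
    return sum(value for value, complete in tasks if complete)

tasks = [
    (10.0, True),   # complete: earns its full 10
    (15.0, True),   # complete: earns its full 15
    (20.0, False),  # in progress: earns nothing until done
]
print(earned_value_0_100(tasks))  # 25.0
```

The attraction of breaking work down to this granularity is that no subjective assessment is needed at the status date: each small task is either done or it is not.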
Occasionally, though, it is necessary to retain longer-duration tasks for various reasons. In these cases, it is desirable to allow value to accumulate whilst the activity is in progress. Where the task spans more than two reporting periods, standard practice is to allow the use of a "% Complete" EVM method (or Performance Measurement Technique (PMT)). However, this leaves the project exposed to the risk of "the 99% syndrome": the task owner reports "almost complete", thereby claiming the majority of the task's value, but holds off declaring final completion until some considerable time later.
The 99% Syndrome can mask potential delays or bottlenecks by allowing unjustified value to be claimed. To avoid this, all '% Complete' tasks should be assigned supporting metrics to define stages of progression through the work. Very poor applications of EVM have insufficient detail in the WBS, leading to a schedule full of large, long-duration, poorly defined tasks without supporting metrics. The compound effect of this across the whole plan is devastating to the integrity of the EVM data and, ultimately, its usefulness.
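One common form such supporting metrics take is a set of weighted interim milestones: value can only be claimed against objectively verifiable milestones, so "99% done" cannot be reported without evidence. A minimal sketch, with invented milestone names and weights:

```python
# Weighted-milestone alternative to a raw subjective "% Complete":
# the claimed percentage is the sum of the weights of milestones that
# have verifiably been achieved - nothing in between can be claimed.
def percent_complete(milestones):
    """milestones: list of (weight, achieved) pairs; weights sum to 1.0."""
    return sum(weight for weight, achieved in milestones if achieved)

milestones = [
    (0.25, True),   # design review passed
    (0.35, True),   # first article built
    (0.30, False),  # test report issued
    (0.10, False),  # customer acceptance
]
print(f"{percent_complete(milestones):.0%}")  # 60%
```

Because the final milestone carries its own weight, the task owner cannot drift at "almost complete": the last tranche of value is only released by the completion event itself.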
However, defining and establishing task metrics can often be very difficult in even the most robust EVM systems. For certain types of work it can seem impossible to agree a sensible method for tracking progress. A particularly complex example I came across recently related to the 'burndown' of bugs found in a software product. The task duration and work were based upon an estimated total number of bugs, including those not yet found, and the aim was to have all the bugs cleared prior to a customer testing event. Since the effort required to clear these bugs represented a major proportion of the project, it was imperative that progress on this task was tracked accurately. Initially it was suggested that progress could be attributed to the number of bugs 'fixed' since the last status date, but this doesn't take into account the rate at which new bugs are being raised. If 100 bugs were fixed in a given period, but in the same period another 100 were raised, it would not be appropriate to claim any earned value since there was no progression to a bug-free system.
Instead, I suggested that the reduction in the total number of 'open' bugs should be used to calculate progress. If we take a snapshot of the total number of known bugs at the beginning of the task (say, 500), progress can be attributed as this number declines (Period 1: Bugs Fixed = 100; Bugs Raised = 50. Delta = 100 - 50 = 50. Percentage Complete = 50/500 = 10%). This approach seems appropriate so long as the number declines. However, the typical profile of such a fault burndown activity is one where the total number of open faults rises early on, then plateaus, then finally falls in the latter stages. This would mean that no earned value could be accrued until late in the task - not perhaps a fair assessment of progression.
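The open-bug delta calculation above can be sketched directly, with the claimed percentage floored at zero so that the early rise in open bugs never produces negative earnings. A minimal illustration of the approach as described, not a recommendation:

```python
# Progress as net reduction in open bugs against the baseline snapshot.
# The result is floored at zero: if raised bugs outpace fixes early on,
# no value is claimed (and none is un-claimed).
def burndown_percent_complete(baseline_open, fixed, raised):
    """Cumulative % complete from per-period fixed/raised bug counts."""
    open_bugs = baseline_open
    for f, r in zip(fixed, raised):
        open_bugs += r - f
    return max(0.0, (baseline_open - open_bugs) / baseline_open)

# Period 1 from the example: 100 fixed, 50 raised, 500-bug baseline.
print(burndown_percent_complete(500, [100], [50]))  # 0.1
```

Running the same function over the typical rise-plateau-fall profile shows the weakness noted above: the floor holds the metric at zero through the early periods, deferring all earned value to the latter stages of the task.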
The search for a sensible way to track progress on these tasks continues. If you have any suggestions, I would welcome them. Or indeed any further examples you may have come across that have proved difficult.