
Sunday 20 January 2013

Performance Measurement Metrics and "The 99% Syndrome"

Scheduling best practice dictates that work should be broken down sufficiently that it can be logically linked. This is particularly important when applying Earned Value, as it aids the determination of progress. Larger, longer-duration tasks require an approximation of completion at the status date, whereas a more granular breakdown makes this easier: each small element of work is either complete or not (often referred to as "0/100" tasks, where value is earned upon task completion and not before).
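The 0/100 rule described above can be sketched in a few lines. This is an illustrative example only; the task names and budget figures are hypothetical, not taken from any particular project.

```python
# Sketch of the 0/100 Performance Measurement Technique:
# a task earns its full budgeted value on completion, and nothing before.

def earned_value_0_100(tasks):
    """Sum the budgeted value of tasks that are fully complete."""
    return sum(t["budget"] for t in tasks if t["complete"])

tasks = [
    {"name": "Design review",   "budget": 40, "complete": True},
    {"name": "Prototype build", "budget": 60, "complete": False},
]

print(earned_value_0_100(tasks))  # 40: only the finished task earns value
```

The in-progress task contributes nothing until it is declared complete, which is precisely why the technique only works well on short, granular tasks.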

Occasionally, though, it is necessary to maintain longer duration tasks for various reasons. In these cases, it is desirable to allow for the accumulation of value whilst the activity is in progress. Where the task spans more than two reporting periods, standard practice is to allow the use of a "% Complete" EVM method (or Performance Measurement Technique (PMT)). However, this leaves the project exposed to the risk of "the 99% syndrome": the task owner reports "almost complete", thereby claiming the majority of the task's value, but holds off declaring final completion until considerably later.

The 99% Syndrome can mask potential delays or bottlenecks by allowing unjustified value to be claimed. To avoid this, all '% Complete' tasks should be assigned supporting metrics that define stages of progression through the work. Very poor applications of EVM have insufficient detail in the WBS, leading to a schedule full of large, long-duration, poorly defined tasks without supporting metrics. The compound effect of this across the whole plan is devastating to the integrity of the EVM data and, ultimately, its usefulness.
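One common way to express such supporting metrics is as weighted interim milestones within the task: value is earned only as each agreed stage is verifiably achieved, which caps what the task owner can claim at "nearly done". A minimal sketch, with hypothetical milestone names and weights:

```python
# Sketch of a weighted-milestone supporting metric for a '% Complete' task.
# Each milestone carries an agreed share of the task's value and is earned
# only once it is demonstrably achieved - nothing is claimed in between.

def percent_complete(milestones):
    """Return fractional completion as the sum of achieved milestone weights."""
    return sum(weight for weight, achieved in milestones if achieved)

milestones = [
    (0.25, True),   # e.g. test rig assembled
    (0.50, True),   # e.g. first article inspected
    (0.25, False),  # e.g. final report signed off
]

print(percent_complete(milestones))  # 0.75
```

Because the final milestone holds a fixed, non-trivial weight, the task cannot sit at "99%" indefinitely: the remaining value stays unclaimed until completion is actually declared.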

However, defining and establishing task metrics can often be very difficult in even the most robust EVM systems. For certain types of work it can seem impossible to agree a sensible method for tracking progress. A particularly complex example I came across recently related to the 'burndown' of bugs found in a software product. The task duration and work was based upon an estimated total number of bugs, including those not yet found, and the aim was to have all the bugs cleared prior to a customer testing event. Since the effort required to clear these bugs represented a major proportion of the project, it was imperative that progress on this task was tracked accurately. Initially it was suggested that progress could be attributed to the number of bugs 'fixed' since the last status date, but this doesn't take into account the rate at which new bugs are being raised. If 100 bugs were fixed in a given period, but in the same period another 100 were raised, it would not be appropriate to claim any earned value since there was no progression to a bug-free system.

Instead, I suggested that the reduction in the total number of 'open' bugs should be used to calculate progress. If we take a snapshot of the total number of known bugs at the beginning of the task (say, 500), progress can be attributed as this number declines (Period 1: Bugs Fixed = 100; Bugs Raised = 50. Delta = 100 - 50 = 50. Percentage Complete = 50/500 = 10%). This approach seems appropriate so long as the number declines. However, the typical profile of such a fault burndown activity is one where the total number of open faults rises early on, then plateaus, then finally falls in the latter stages. This would mean that no earned value could be accrued until late in the task - not, perhaps, a fair assessment of progression.
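The open-bug calculation above can be sketched as follows, using the example figures from the post (a 500-bug baseline, with 100 fixed and 50 raised in the first period). The clamp at zero is my own assumption: it encodes the behaviour described, where no value can be earned while open bugs sit at or above the baseline.

```python
# Sketch of the open-bug burndown metric: progress is the net reduction
# in open bugs relative to the baseline snapshot, floored at zero.

def burndown_percent_complete(baseline, total_fixed, total_raised):
    """Fractional completion from cumulative fixed/raised bug counts."""
    net_reduction = total_fixed - total_raised  # fall in open-bug count
    return max(0.0, net_reduction / baseline)

# Period 1 from the post: 100 fixed, 50 raised, against a baseline of 500.
print(burndown_percent_complete(500, 100, 50))   # 0.1, i.e. 10% complete

# Early in a typical burndown, raising outpaces fixing, so nothing is earned.
print(burndown_percent_complete(500, 100, 150))  # 0.0
```

The second call illustrates the weakness discussed below: during the early rise-and-plateau phase the metric reports zero progress, even though genuine fixing effort is being expended.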

The search for a sensible way to track progress on these tasks continues. If you have any suggestions, I would welcome them - or indeed any further examples you may have come across that have proved difficult.

