Velocity Signature Analysis
December 10, 2008
Most agile projects track their velocity. For the past 10 years or so I have been studying the velocity profiles of my projects and any other projects I can get data from. Velocity profiles tell a story about the project and, like signatures, are unique.
Tracking stories, or more commonly points completed per iteration, gives the classic velocity graphs such as the one shown below.
Here we can see the Projected Velocity shown by the dotted blue line and the Actual Velocity shown in dark blue.
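To make the mechanics concrete, here is a minimal sketch in Python (the iteration numbers are invented for illustration): velocity is simply the points completed per iteration, compared against a projection.

```python
# Invented data for illustration: story points completed per iteration.
actual = [18, 22, 19, 25, 21, 23]

# A naive flat projection, e.g. the team's initial per-iteration estimate.
projected = [24] * len(actual)

# Per-iteration delta between actual and projected velocity.
deltas = [a - p for a, p in zip(actual, projected)]

for i, (a, d) in enumerate(zip(actual, deltas), start=1):
    print(f"Iteration {i}: actual={a}, delta={d:+d}")
```

Plotting `projected` and `actual` over the iteration index gives the dotted-versus-solid comparison described above.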
From observing 15-20 projects I have noticed the following recurring patterns. Am I the only one, or are these common?
The Over Promise – when first asked for velocity projections, the team produces overly optimistic numbers. Then the inevitable complications and delays of new environments and start-up slip (lack of traction) mean that less is delivered than was projected.
Fortunately, most projects pay little attention to these early iteration projections since they are notoriously variable. I tend to base estimates-to-complete on more traditional estimation measures to begin with, and incorporate velocity feedback progressively as it becomes more reliable.
Ramp to Nirvana – given promising initial progress, surely we will keep getting better and better as the team gels and more reusable frameworks are developed? Unfortunately, reality has a way of interfering: as the code base grows, so do support work (assuming regular go-lives), refactoring load, and complexity, all of which offset the gains from continually getting faster. So the Ramp to Nirvana does not occur (on my projects, yet).
Balanced Force Flat-Lining – related to the Ramp to Nirvana not happening is the phenomenon of flat-lining. We continue to improve technically and implement process enhancements, yet velocity stays about the same. My guess is that the incremental team improvements are being cancelled out by increasing application complexity, refactoring load, and support effort. (Either that, or the team is simply content at its current level!) The net result is that velocity continues at a flat rate.
Roller Coaster – here Balanced Force Flat-Lining is in play and no major up or down trends are happening, but from iteration to iteration there is some oscillation. This is happening on my current project and is driven by at least two factors I know of, maybe more. First, batches of uncompleted stories are occasionally carried forward to the next iteration and completed there. Since we do not claim point credit until our business reps have approved the story, some iterations receive completion credit for work started in a previous iteration. The second factor is review queues: our business reps test the application, and some things take longer to test than others. Occasionally queues of “Ready for SME Review” stories build up, robbing the current iteration of credit and depositing a larger amount of credit in the iteration where the review catch-up occurs.
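One way to read through this kind of oscillation (my own illustration, not a practice from the project described above) is to smooth velocity with a rolling average over a few iterations, so that carried-forward credit and review catch-ups wash out. A minimal sketch with invented numbers:

```python
# Invented velocities showing roller-coaster oscillation caused by
# carried-forward stories and batched review catch-ups.
velocities = [14, 30, 16, 28, 15, 29]

def rolling_average(values, window=3):
    """Average each value with up to window - 1 preceding values."""
    result = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        result.append(sum(chunk) / len(chunk))
    return result

smoothed = rolling_average(velocities)
print([round(v, 1) for v in smoothed])
```

The raw series swings by 14-16 points per iteration; the three-iteration average settles into a much narrower band, which is usually a better input for forecasting.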
Should we be worried by this variation? We all should, and we all shouldn’t. Some variation is natural and just to be expected. W. Edwards Deming tells us that Common Cause Variation is variation within the historical range: it is normal, and individual high or low values carry no particular significance. It is just “noise” within the system.
Special Cause Variation, however, is a trend or a new, unanticipated, emergent, or previously neglected phenomenon within the system. It is any variation outside the historical experience base, or evidence of some inherent change in the system. It is the “signal” within the system.
Classic Manager Mistakes
Deming warns us of two classic mistakes managers make:
1) Interfering with Common Cause Variation – some stuff just varies, so let it go.
2) Not intervening with Special Cause Variation – if some big shift happens we should take action.
He also recommends the use of Control Charts to track variation and attempt to identify Special Cause Variation that requires intervention.
Here we see a measured characteristic varying with Upper and Lower Control Limits (UCL and LCL). Anything inside these tolerances is deemed Common Cause Variation and beyond these limits Special Cause Variation.
We can easily do this on agile projects with Velocity Tolerances.
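As a minimal sketch of that idea, assuming the conventional three-sigma limits and invented historical velocities, we can compute velocity tolerances and classify each iteration:

```python
import statistics

# Invented historical velocities once the team has settled down.
velocities = [20, 23, 19, 22, 21, 24, 20, 22]

mean = statistics.mean(velocities)
sigma = statistics.pstdev(velocities)

ucl = mean + 3 * sigma  # Upper Control Limit
lcl = mean - 3 * sigma  # Lower Control Limit

def classify(velocity):
    """Inside the limits is Common Cause noise; outside is a Special Cause signal."""
    return "common cause" if lcl <= velocity <= ucl else "special cause"

print(f"UCL={ucl:.1f}, LCL={lcl:.1f}")
print(classify(21))  # a typical iteration: leave it alone
print(classify(8))   # a big drop: investigate and intervene
```

This mirrors Deming's two warnings directly: an iteration inside the tolerances is left alone, and only an iteration outside them triggers intervention.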
What else is out there?
These are some velocity signatures I have observed, but I would be interested to know whether anyone else experiences them, or is it just me? Also, what else crops up, and how do we best interpret it?
If you've got a decent set of data, can't you use it to better predict the projected velocity if you're always over promising?
Posted by: Andrew Bullock | December 12, 2008 at 03:20 AM
Yes, we use prior iteration velocities as a guide for future ones. This is known as “Yesterday’s Weather”. The Over Promise occurs early in the project, when we have little prior data to rely on. Sorry if I did not make that clear; thanks for your question.
Posted by: Mike Griffiths | December 12, 2008 at 06:26 AM