I recently wrote a post on Velocity Signature Analysis and have been looking at how undertaking large chunks of work as a complete team impacts velocity. We are currently three quarters of the way through a major (4-month) piece of functionality and velocity is finally rising. This seems to be a pattern: for the early portion of a new area of work we spend a lot of time understanding the business domain and checking our interpretation using mock-ups and discussions. Velocity, in terms of functionality built and approved by the business, is down during this time, since many of the team members are involved in understanding the new business area rather than cranking out code.
As project manager I can get jittery: did we estimate this section of work correctly? Our average velocity for the last module was 60 points per month and now we are only getting 20! Weeks and weeks go by as whiteboards get filled and designs get changed, but the tested story count hardly moves. Compounding this Discovery Drain phenomenon is the Clean-up Drain pattern. During the early portion of a new phase, fixing the niggling issues hanging over from the last phase seems to take a long time. This makes perfect sense; if they were easy, they would probably have been done earlier. It is always the difficult-to-reproduce bug, or the change request that necessitates reworking an established workflow or collaborating with multiple stakeholders, that seems to bleed into the next development phase. While there may only be 3 or 4 bugs or change requests hanging over, they take a disproportionate amount of time to resolve.
I sometimes use a booster rocket analogy for illustrating team cohesion and vision. When team members are not aligned with a common project goal, their individual motivations can result in a suboptimal team vector. By aligning team members' efforts behind common goals, and by giving people a way to grow and gain something valuable for themselves from making the project successful, we align the individual vectors and produce a much greater project vector.
There is a parallel with project velocity too. If 30% of the team's capacity is consumed on better understanding a complex business domain, and 30% of the team's capacity is spent fixing bugs and change requests for which we may earn little velocity credit, then that only leaves 40% for raw velocity-earning development.
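The arithmetic above can be sketched in a few lines. The percentages and the 60-point baseline are the illustrative figures from this post, not measured data:

```python
# Capacity-split sketch: how Discovery Drain and Clean-up Drain
# depress early-phase velocity. Figures are illustrative.
full_velocity = 60    # points/month when the whole team is delivering
discovery_pct = 30    # % of capacity spent understanding the new domain
cleanup_pct = 30      # % of capacity spent on carried-over bugs/changes

dev_pct = 100 - discovery_pct - cleanup_pct     # capacity left for development
expected_velocity = full_velocity * dev_pct // 100

print(dev_pct)             # 40 (% of capacity left)
print(expected_velocity)   # 24 points/month -- in line with the ~20 observed
```

The point is not precision; it is that a drop from 60 to roughly 20 points per month is what the capacity split predicts, not a sign of a bad estimate.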
As these tasks are completed, effort can be returned to development and velocity increases. The process leads to lumpy throughput, but seems preferable to the alternatives. We could let our BAs run ahead with analysis, filling the hopper with story outlines ready for consumption by the development team. We do this to a small degree, but are careful not to let it go too far, since we would lose whole-team focus on tasks and experience Pipelining Problems.
If the QA staff and developers are not present for at least the major analysis conversations, we lose valuable insights and time-saving suggestions, and create the need to reiterate points later. If business users and BAs are too far ahead, then when development questions or bugs arise that need their input, there is a task-switching overhead as they "park" their current work, reorient themselves in the task in question, and help solve the problem. So instead, work is undertaken in vertical slices, conducted by the majority of the team.
Like everything, it is a balancing act: we want to exploit role specialization when it brings advantages, but we also see the benefit of a multi-disciplinary team tackling discrete units of work and driving them through to user acceptance. So, rather than a smooth flow of stories through the production process, we get some slow-downs and speed-ups as the team collectively takes on chunks of learning and then delivery.
Lean production systems teach us that smaller batches can be a way to smooth throughput. If we could find a way to structure the project into smaller chunks, rather than 3- or 4-month-long modules, these peaks and troughs would be smoothed out and velocity as a whole increased. Either this is not possible in our project domain or, more likely, I have not been able to find a way to do it yet. Our business domain is complex and naturally divides into chunks. We are replacing a suite of legacy applications, and as we finish replacing one application, disconnect its interfaces, and move our focus to the next one, we experience the learning cycle and tidy-up issues described earlier.
I suspect this is a function of our project, which is really a program of application replacements. So, rather than get overly concerned with the oscillations in velocity, we can just zoom out some more and say that overall our velocity averages 45 points per month. Yet given this is a 4-year program, there are millions of dollars of difference in forecasted end date and spend between the best, average, and worst velocities experienced per module.
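To make the sensitivity concrete, here is a small forecast sketch. The 60/45/20 velocities echo the figures in this post, but the remaining scope and monthly team cost are purely hypothetical assumptions for illustration:

```python
# Forecast sensitivity sketch: how the assumed velocity swings the
# projected duration and spend of a long program.
remaining_points = 2000        # hypothetical remaining scope, in points
team_cost_per_month = 100_000  # hypothetical fully loaded team cost

for label, velocity in [("best", 60), ("average", 45), ("worst", 20)]:
    months = remaining_points / velocity
    cost = months * team_cost_per_month
    print(f"{label:7s}: {months:5.1f} months, ${cost:,.0f}")
```

Even with these made-up numbers, the spread between the best-case and worst-case scenarios runs to years of schedule and millions of dollars, which is why the per-module oscillation still matters at program scale.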
So is the XP term "yesterday's weather" really a good indicator? Can we use recent velocity to predict future velocity? I believe so; we have to allow for explainable variations, but estimation based on what has been proven achievable seems fairer than speculation on what we expect or would like to happen (traditional planning). It is just that sometimes the weather is a little changeable. Like here in Calgary at the moment, where last Tuesday we were able to go running in shorts on a sunny +12C day, and by Thursday we were wrapped up running in the snow with a -25C wind-chill. However, on average, I predict the weather for February to be about -5C to -10C. Probably.
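"Yesterday's weather" in its simplest form is just a recent average used as the forecast. A minimal sketch, using a hypothetical velocity history that echoes the 20-to-60-point swings described above:

```python
# "Yesterday's weather" sketch: forecast next month's velocity from a
# rolling average of the recent past. The history is hypothetical.
velocities = [60, 55, 20, 25, 40, 60, 55]  # points/month, oldest first

window = velocities[-3:]                   # last three months
forecast = sum(window) / len(window)

print(window)    # the recent months the forecast is based on
print(forecast)  # the rolling average smooths the month-to-month swings
```

The averaging window is the judgment call: too short and the forecast whipsaws with every Discovery Drain dip; too long and it hides a genuine trend. Either way, the forecast is grounded in what the team has actually achieved rather than in what anyone hopes will happen.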