When Outsourcing Makes Sense

Disclaimer: This article is based on my personal experience of software project development work over a 25-year period running a mixture of local projects, outsourced projects and hybrid models. The data is my own and subjective, but supported by thousands of industry peers I question while delivering training courses for the PMI. I do not work for a local-development or outsourcing company and have nothing to gain from favoring either approach, but I hope these thoughts are useful for determining some of the pros, cons, true costs and circumstances when outsourcing is better or worse than local development.

To the uninitiated, outsourcing seems like a great idea. Software engineers are expensive in many countries but much cheaper in other parts of the world. So, since software requirements and completed software can be shipped free of charge via email and web sites, why not get it developed where labor rates are much lower?

Coding vs Collaboration Costs

The flaw in this plan appears during execution, when it becomes apparent that software development projects typically entail more than just the development of software. Writing code is certainly part of it, but understanding the problem, agreeing on a design, discovering and solving unforeseen issues, and making smart decisions and compromises to optimize value and schedule are big parts of it too. This is the collaboration effort part of a project. Also, while the coding part might represent 30-50% of the overall project costs, this shrinks to 20-30% when a 3-year ownership cost view is considered that includes support, maintenance and enhancements.

Sticking with just development costs for now, let’s examine a scenario. The business case pitched to executives by outsourcing companies initially seems very compelling: Project Alpha needs 9 months of software development by a team of 5 people. If you work in an expensive labor market, like North America, we can assume fully-loaded rates of $100 per hour, yet highly qualified consultants from our fictional outsourcing country of TechLand cost only $25 per hour. So, the project for 9 months x 160 hours per month x 5 people at $100 per hour in an expensive market costs $720,000. For a TechLand team this would cost 9mths x 160hrs x 5pl x $25hr = $180,000 – a cool $540,000 saving, right?

Let’s revisit this scenario based on the acknowledgment that the actual software writing part of a project is closer to 30-50% of the total effort. This leaves the remaining 50-70% of the work as the communications-heavy collaboration part. It should come as no surprise that separating people via distance, time zones, and potentially language and cultural barriers increases communications effort and propagates issues up the cost-of-change curve.

So, when 50-70% of the communication-heavy collaboration work takes longer, how do we quantify that? Agile methods recommend face-to-face communication because it is the quickest, conveys body language and provides an opportunity for immediate Q&A for just the issues that need it. Switching from face-to-face to video, conference call, email or paper creates barriers and adds significant time and opportunity for confusion. A 2-3 X increase in effort likely downplays the true impact when considering the costs of fixing things that go awry because of inevitable misunderstandings, but let’s use that number.

Redoing our Project Alpha costs with, say, 40% as the actual coding effort and 60% as communications-heavy collaboration work that takes 2.5 X as much effort we get: 9mths x 160hrs x 5pl x $25hr x 40% = $72K Coding + 9mths x 160hrs x 5pl x $25hr x 60% x 2.5 = $270K Collaboration, giving $342,000 in total. This is still less than half the cost of the $720,000 locally developed project, so we are still good, right?
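The two scenarios above can be checked with a quick calculation (a sketch only; the rates, hours, 40/60 split and 2.5x collaboration penalty are the article's illustrative figures):

```python
# Project Alpha cost model from the scenarios above.
HOURS = 9 * 160 * 5          # total person-hours: 7,200

local_cost = HOURS * 100                 # expensive-market estimate
naive_outsourced = HOURS * 25            # the outsourcing pitch

coding = HOURS * 0.40 * 25               # coding portion at TechLand rates
collaboration = HOURS * 0.60 * 25 * 2.5  # collaboration with 2.5x penalty
adjusted_outsourced = coding + collaboration

print(f"Local:               ${local_cost:,.0f}")         # $720,000
print(f"Naive outsourced:    ${naive_outsourced:,.0f}")   # $180,000
print(f"Adjusted outsourced: ${adjusted_outsourced:,.0f}")  # $342,000
```

Even with the communication penalty, the adjusted figure still looks attractive on paper, which is why the schedule impact discussed next matters so much.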

The Compounding Costs of Delay

An error in the logic applied so far is the assumption that this 2.5 X communication and collaboration penalty on 60% of the work can somehow be magically absorbed into the same 9-month timeline. In reality these outsourced projects take longer because of the increased communication and collaboration effort, and 2.5 x 60% = 1.5 X as long is consistent with my experience from 25 years of mixed local and outsourced projects.

Continue reading "When Outsourcing Makes Sense" »

Agile Innovation

Psst, this is your conscience; I am here to remind you about something you have thought about, but then hid away in the back of your mind. Lots of this agile stuff is hypocritical: it preaches evolution and change, but then we ask the same old three questions at standup every day. Also, why must we have standup every day? Isn’t that kind of prescriptive? Agile methods are supposed to facilitate innovation through iterative development followed by inspection and adaptation. They practice the scientific method of measurement and feedback on products and teamwork; so why are the agile practices themselves magically exempt from this precious evolution?

I believe there are two main reasons; first off, it is to protect inexperienced agile practitioners from themselves. With a free rein to morph product and process there is a strong likelihood that by six months into a project the practices followed by the team would have deviated from the proven and tested methods of most successful teams. The risk of failure would increase and every project in a company would be using a radically different approach making integration, scaling and team member transfers a major problem.

The other reason is a little more sinister. Most of the creators, proponents and promoters of agile methods have an interest in keeping the methods pure vanilla. This is so they can create training courses, certifications and web sites for them. Because Scrum, as one example, has its specialized ceremony names and products, you can neatly market services for it. If you allow or encourage people to change it then the result is not so proprietary and is more difficult to defend, promote and assert ownership over.

I am not suggesting we should be changing agile methods willy-nilly, I think a basic suggestion to try them out-of-the-box for a couple of years is sound advice. However, beyond that I believe there are great opportunities for growth and deviation outside the standard agile models for stable teams who want to evolve further. This article tells the story of one team that did just that and what other people can learn from it.

Continue reading "Agile Innovation" »

Quality Project Management

How do we define quality as a project manager? Is it managing a project really well, or managing a successful project? How about managing a successful project really well? That sounds pretty good. However, it poses the next question: What is a successful project? Let’s look at some examples of project success, failure and ambiguity.


Apollo 13
Apollo 13 was the third manned NASA mission intended to land on the moon; it experienced electrical problems 2 days after liftoff. An explosion occurred, resulting in the loss of oxygen and power and the "Houston, we've had a problem" quote from astronaut James Lovell (widely misquoted as "Houston, we have a problem").

The crew shut down the Command Module and used the Lunar Module as a "lifeboat" during the return trip to Earth. Despite great hardship caused by limited electrical power, extreme cold, and a shortage of water, the crew returned safely, and while the main moon-landing scope was missed, it was a very successful rescue, allowing for future missions. Clearly, this was a remarkable achievement, but the original project goals were not met. Lovell now recounts this story at PMI conferences under the very apt title of “A Successful Failure”.


Continue reading "Quality Project Management" »

Go Talk To Your Stakeholders

As a PM, what is the most effective thing you can do for your project in the next hour? (After finishing this article, of course!) I would suggest speaking to your project team members and business representatives about where their concerns lie and what they believe the biggest risks for the project are. While tarting up a WBS or re-leveling a project plan might be familiar and comfortable (where you are a master of your own domain), it amounts to nothing if your project is heading for trouble. Like rearranging deck chairs when you should be looking for icebergs, there are better uses of your time.

The frequency and magnitude of IT project failures are so prevalent and epic that people can appear in denial of their ability to influence, or “in acceptance” that a certain percentage of projects just go south. Does it need to be that way? If we spent more time asking people where stuff could go wrong rather than making ever more polished models of flawed project plans, could we change the statistics (even a little bit)?

According to research by Roger Sessions of ObjectWatch, 66 percent of projects are classified as “at risk” of failure or severe shortcomings. Of these, between 50 and 80 percent will fail. So, if 66 percent of projects are at risk, let’s say 65 percent of these projects will fail; that’s .66 x .65, meaning 43 percent of projects fail. (Despite the grim projection, these numbers are actually slightly better than the Standish Chaos report findings.) What would happen if we could prevent just, say, 5 percent of those from failing?

The impacts would be huge, because the amount of money spent on IT projects now is truly monumental. Of all these failing projects, there must be many that flirt on the edge of success versus failure--wobbling between being able to be saved and past the point of no return. These are my targets--not the doomed-from-the-start death marches to oblivion but the appropriately staffed, well-intentioned projects that just don’t quite make it. I bet there was a point in the path to failure when some more dialogue around risks and issues could have provided the opportunity to take corrective action.

The trouble is that we don’t know whether we are on an ultimately successful or unsuccessful project until its path may already be irreversible. So we need to act, at regular intervals, as if we could be heading for trouble. We should also examine the economics behind this suggested change in PM behavior. How much is it really worth to maybe sway the outcome of just 5 percent of the projects that are headed for failure?

According to The New York Times, industrialized countries spend 6.4 percent of GDP on IT projects. Of that, 57 percent typically goes to hardware and communication costs and 43 percent to software development. Of these projects, 66 percent are classed as “at risk” and 65 percent of them ultimately fail. The cost of failed projects is two-fold: There is the direct project cost, but also a series of related indirect costs. These include the cost of replacing the failed system, the disruption costs to the business, lost revenue due to the failed system, lost opportunity costs, lost market share and so on.

An investigation into a failed Internal Revenue Service project showed a 9.6:1 ratio of indirect costs to direct project costs. For our purposes, we will use a more modest ratio of 7.5:1. Let’s see how these figures pan out:

IT Failure Costs
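As a sketch of the arithmetic behind these figures (the GDP values are assumed round numbers for illustration only; the percentages and the 7.5:1 ratio come from the article):

```python
# Failure-cost arithmetic: GDP -> IT spend -> software -> failures -> total cost.
def failure_cost(gdp_trillions):
    it_spend = gdp_trillions * 0.064          # 6.4% of GDP goes to IT
    software = it_spend * 0.43                # 43% of that is software development
    direct_failures = software * 0.66 * 0.65  # 66% at risk, 65% of those fail
    # 7.5:1 indirect-to-direct ratio => total = direct x (1 + 7.5)
    return direct_failures * 8.5

world = failure_cost(57.0)   # assumed world GDP ~ $57T
us = failure_cost(13.3)      # assumed US GDP ~ $13.3T
print(f"World: ${world:.2f}T, US: ${us:.2f}T")
print(f"5% of US wobblers: ${us * 1000 * 0.05:.0f}B per year")
```

With these assumed GDP figures the model lands near the article's totals of roughly $6 trillion worldwide and $1.3 trillion in the US.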

So it turns out that failing software projects cost the world about $6 trillion annually, and over $1.3 trillion in the United States alone. That’s a chunk of change, and saving just the 5 percent of projects wobbling on the edge of failure in the States would amount to $1,336B x 0.05 = $66.8B (or $1.28B per week).

How do we do it? Well, socializing the problem is a start. Let’s talk about project risks more often and raise them from the clinical world of reviews and audits up to the more human, approachable world of predictions and wagers. Ask team members to predict why the project may fail or get stuck. Ask our sponsors where they think the biggest obstacles lie. Follow up with “How do we avoid that?” and “What would have to happen to prevent that?” type questions, and follow through on the recommendations.

Just the act of discussing these issues can influence behavior. Armed with knowledge of where the really large icebergs are, people tend to steer and behave differently. To reiterate, we are not trying to prevent all project failures; just keep an extra 5 percent on track through frequent, honest dialog about the issues and a broader stakeholder awareness of the major project risks is a great way to start. So what are you waiting for? 

(I wrote this article first for Gantthead, here)


Collaborative Games for Risk Management - Part 2

This is the last post in a series on agile risk management. The first looked at the opportunities agile methods offer for proactive risk management, while the second examined the benefits of engaging the whole team in risk management through collaborative games. The previous instalment walked through the first three games, covering:

1. Risk management planning
2. Risk Identification
3. Qualitative Risk Analysis

This month we look at the final three sets of collaborative team activities that cover:

4. Quantitative Risk Analysis
5. Risk Response Planning (and Doing!)
6. Monitoring and Controlling Risks

The exercises we will examine are:

  • Today’s Forecast -- Quantitative Risk Analysis
    • Dragons’ Den -- next best dollar spent
    • Battle Bots -- simulations
  • Backlog Injector -- Plan Risk Responses
    • Junction Function -- choose the risk response path
    • Dollar Balance -- Risk/Opportunity EVM to ROI comparison
    • Report Card -- Customer/Product owner engagement
    • Inoculator -- inject risk avoidance/mitigation and opportunity stories into backlog
  • Risk Radar -- Monitoring and Controlling Risks
    • Risk Burn Down Graphs -- Tracking and monitoring
    • Risk Retrospectives -- Evaluating the effectiveness of the risk management plan
    • Rinse and Repeat -- Updating risk management artifacts, revisiting process

Continue reading "Collaborative Games for Risk Management - Part 2" »

Agile Risk Management – Harnessing the Team

Last month we looked at how agile methods provide multiple opportunities for embracing proactive risk management. The prioritized backlog, short iterations, frequent inspections and adaptation of process map well to tackling risks early and checking on the effectiveness of our risk management approach.

We want to overcome many of the correct-but-not-sufficient aspects of risk management seen too often on projects:

    Poor engagement – dry, boring, academic, done by the PM, does not drive enough change
    Done once – typically near the start, when we know least about the project
    Not revisited enough – often “parked” off to one side and not reviewed again
    Not integrated into project lifecycle – poor tools for task integration
    Not engaging, poor visibility – few stakeholders regularly review the project risks

This month’s article extends risk management beyond the project manager role and introduces the benefits of making it more of a collaborative team exercise. Next week we will walk through the team risk games one by one.

First of all, why collaborative team games? Just as techniques like Planning Poker and Iteration Planning make estimation and scheduling a team activity and gain the technical insights of the people closer to the work, so too do collaborative games for risk management. After all, why leave risk management to the person who is furthest from the technical work – the project manager?

"...why leave risk management to the person who is furthest from the technical work – the project manager?"

Before I upset project managers worried about erosion of responsibilities, we need to be clear on what the scope is here. I am advocating the closer and more effective engagement of the team members who have insights into technical and team-related risks. I am not suggesting we throw the risk register to the team and tell them to get on with it. Instead we are looking for better-quality risk identification and additional insights into risk avoidance and mitigation, not the wholesale displacement of the risk management function.

So why should we bother to engage the team? Why not let them get on with doing what they are supposed to be doing, namely building the solution? Well, there are some recurring problems with how risk management is attempted on projects. Most software projects resemble problem-solving exercises more than plan-execution exercises. It is very difficult to separate out the experimentation and risk mitigation from the pure execution. Team members are actively engaged in risk management every day. We can benefit from their input in the risk management process, and if they are more aware of the project risks (by being engaged in determining them), how they approach their work can be more risk-aware and successful.

The benefits of collaboration are widely acknowledged; a study by Steven Yaffee from the University of Michigan cites the following benefits:

Continue reading "Agile Risk Management – Harnessing the Team" »

PMI-ACP Book Discount

I picked up a copy of my PMI-ACP℠ Exam Prep book on a visit to RMC Project Management over the weekend. It was good to see it printed up for the first time, and with all the exercises and 120 sample exam questions, it was thicker than I expected at over 350 full-size pages.

The extra weight also comes from the case studies of agile projects I have worked on over the years and the additional materials I included to link the exam topics together. These items that are not in the exam are clearly marked so you can skip over them if you want. However, I am sure some people will find they add value by making the ideas more real. These additional materials also supply useful information to allow readers to fully understand the topics, rather than just memorize the information for the exam.

I am very grateful to the staff at RMC for pulling together my thoughts and ideas into this book, and for the people who reviewed it. Alistair Cockburn and Dennis Stevens were particularly helpful, and after reviewing it, they wrote the following quotes for the cover:

“As one of the authors of the Agile Manifesto, I am delighted to see this book by Mike Griffiths. It is great that such an exam guide was prepared by someone with a deep understanding of both project management and Agile development. Personally, I hope that everyone reads this book, not just to pass the PMI-ACP exam, but to learn Agile development safely and effectively!”

– Dr. Alistair Cockburn, Manifesto for Agile Software Development Co-Author, International Consortium for Agile Co-Founder, and Current Member of the PMI-ACP Steering Committee.

“This is a VERY enjoyable book to read, due to Mike's firm grasp of the underlying concepts of Agile, and his articulate and entertaining writing style. My favorite part is the fact that it is organized into a framework that helps all of the Agile concepts hang together, so they will be easier to recall when taking the PMI-ACP exam.

But Mike's book is more than just the best PMI-ACP prep book out there. It is also the best consolidated source of Agile knowledge, tools, and techniques available today. Even if you are not planning on sitting for the PMI-ACP exam in the near future you need to buy this book, read it, and keep it as a reference for how to responsibly be Agile!”

– Dennis Stevens, PMI-ACP Steering Committee Member, PMI Agile Community of Practice Council Leader, and Partner at Leading Agile.

Thanks to you both; working with you over the years has been a blast. I would also like to thank the visitors of my blog here for reading my posts and submitting insightful comments that kept me motivated to write. RMC has provided me a limited-time promotion code that gives readers a further $10 off their currently discounted price for the book. If you follow this link and enter promo code “XTENMGBD”, you can get the additional $10 discount up until May 18th 2012. This is a 25% reduction on the retail price.

PMI-ACP Book Coverage

I finished my PMI-ACP Exam Preparation book a couple of weeks ago and now it is with the publishers for reviews and final edits. It turned out larger than expected, but I think better for the extra exercises and sample exam questions.

When designing the PMI-ACP℠ exam, we needed to base the content outline on existing books and resources so that candidates would understand what the exam would test them on. When choosing the books, we went back and forth on which to include, since there are so many good resources available. And while we recommend that people learn as much as they can, we also had to recognize the need to keep the exam content—and the preparation process for the exam—reasonable. In the end, we selected the following 11 books:

  1. Agile Estimating and Planning, by Mike Cohn
  2. Agile Project Management: Creating Innovative Products, Second Edition, by Jim Highsmith
  3. Agile Project Management with Scrum, by Ken Schwaber
  4. Agile Retrospectives: Making Good Teams Great, by Esther Derby and Diana Larsen
  5. Agile Software Development: The Cooperative Game, Second Edition, by Alistair Cockburn
  6. Becoming Agile: ...in an Imperfect World, by Greg Smith and Ahmed Sidky
  7. Coaching Agile Teams, by Lyssa Adkins
  8. Lean-Agile Software Development: Achieving Enterprise Agility, by Alan Shalloway, Guy Beaver, and James R. Trott
  9. The Software Project Manager’s Bridge to Agility, by Michele Sliger and Stacia Broderick
  10. The Art of Agile Development, by James Shore and Shane Warden
  11. User Stories Applied: For Agile Software Development, by Mike Cohn

Reading all of these books takes some time, since the 11 books add up to more than 4,000 pages. The books also cover a lot more material than you need to know for the exam. From each book, we extracted the portions that best covered the exam content outline topics, and the exam questions were then targeted at those specific sections.

Continue reading "PMI-ACP Book Coverage" »

PMBOK v5 Update

PMBOK Guide - Fifth Edition

I am overdue in providing an update on how my work on the PMBOK v5 Guide is going. Well, it is on its way. The process is slow (sometimes painfully slow), but this is because of the number of people involved and the review process used. To give an idea, here is the plan for the next 6 months:

*  17 February – 20 March 2012: the exposure draft PMBOK® Guide – Fifth Edition will be open for comments

*  Late February 2012: team training for our adjudication processes

*  20 March 2012: our exposure draft period closes and comment adjudication begins

*  20 March – 28 April 2012: teams adjudicate exposure draft comments 

*  Early May – mid May 2012: core committee reconciles any comment adjudications that cut across chapters or where consensus has not yet been obtained

*  Mid May – early June 2012: appeals period for adjudication decisions; final draft QC and integration reviews

*  Early June – mid June 2012: appeals adjudication and resolution

*  Mid June – late June 2012: final draft cleanup and incorporation of QC comments

*  28 June 2012: core committee vote on finalized draft

The Exposure Draft process is a great mechanism for allowing members to review and comment on new material, but it is likely to generate a ton of review work for us. The Fourth Edition update, back in 2008, received over 4,400 comments during its exposure draft. Since the membership of the PMI has increased significantly since 2008, we could be looking at close to double that figure.

That is a lot of suggested changes to review and I think March and April will be a busy time for me. Of course they will not arrive in one Word document, but I wonder what the PMBOK Guide would look like if we just did an “Accept All”? Right now it is the calm before the storm; I am going to make the most of it.

PMI-ACP Value Stream Mapping

I have been away attending the excellent “Agile on The Beach” conference recently, but when I returned I had an email waiting requesting some PMI-ACP study help on Value Stream Mapping. So here is a quick outline of the topic.

Value Stream Mapping is a lean manufacturing technique that has been adopted by agile methods. It is used to analyze the flow of information (or materials) required to complete a process and to determine elements of waste that may be removed to improve the efficiency of the process. Value stream mapping usually involves creating visual maps of the process (value stream maps) and progresses through these stages:
1)    Identify the product or service that you are analyzing
2)    Create a value stream map of the current process identifying steps, queues, delays and information flows
3)    Review the map to find delays, waste and constraints
4)    Create a new value stream map of the future state optimized to remove/reduce delays, waste and constraints
5)    Develop a roadmap to create the future state
6)    Plan to revisit the process in the future to continually tune and optimize

To illustrate, let’s optimize the value stream for buying a cake to celebrate passing your PMI-ACP exam with a friend. Let’s say this involves choosing a cake, waiting at the bakery counter to get the cake, paying for the cake at the checkout, then unpacking and slicing before enjoying the benefit of the process (the cake).
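As a sketch, the cake example can be mapped as a current-state value stream. The step durations below are invented for illustration; the waiting steps are the waste a future-state map would target:

```python
# Each step: (name, minutes, value-adding?). Queues are non-value-adding waste.
steps = [
    ("Choose a cake",           5,  True),
    ("Queue at bakery counter", 10, False),  # waiting = waste
    ("Receive the cake",        2,  True),
    ("Queue at checkout",       8,  False),  # waiting = waste
    ("Pay",                     2,  True),
    ("Unpack and slice",        3,  True),
]

total_lead_time = sum(minutes for _, minutes, _ in steps)
value_add_time = sum(minutes for _, minutes, va in steps if va)
efficiency = value_add_time / total_lead_time  # process cycle efficiency

print(f"Lead time: {total_lead_time} min, value-add: {value_add_time} min")
print(f"Process cycle efficiency: {efficiency:.0%}")
```

Removing or shortening the two queues is exactly the kind of optimization a future-state value stream map would propose.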

Continue reading "PMI-ACP Value Stream Mapping" »

Agile Prioritisation

A uniting thread through the variety of different agile methods is the fact that they all employ work prioritisation schemes. While the terminology varies--Scrum for instance has a “product backlog”, FDD a “feature list” and DSDM a “prioritised requirements list”--the concept is the same. The project works through a prioritised list of items that have discernible business value.

Prioritisation is required because it enables the flexing of scope to meet budget or timeline objectives while retaining a useful set of functionality (Minimum Marketable Release).

Agile prioritisation1

Prioritisation also provides a framework for deciding if/when to incorporate changes. By asking the business “What is it more important than?” and then inserting the new change into the prioritised work list at the appropriate point, it is possible to include changes.

  Agile prioritisation2

It is worth explaining that while agile methods provide tremendous flexibility through the ability to accept late-breaking changes, they cannot bend the laws of time and space. So if your project time or budget was estimated to be fully consumed with the current scope, then adding a new change will inevitably force a lower priority feature below the “cutoff point” of what we expect to deliver. So, yes, we can accept late-breaking changes…but only at the expense of lower-priority work items.
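The cutoff mechanic just described can be sketched in a few lines (the feature names and the capacity of four items are invented for illustration):

```python
# A prioritised backlog where budget/time only covers four items.
backlog = ["Feature A", "Feature B", "Feature C", "Feature D"]
CAPACITY = 4

# The business says the new change is more important than Feature C,
# so it is inserted just above it:
backlog.insert(backlog.index("Feature C"), "New change")

in_scope, below_cutoff = backlog[:CAPACITY], backlog[CAPACITY:]
print("Deliver: ", in_scope)      # A, B, New change, C
print("Deferred:", below_cutoff)  # D drops below the cutoff
```

The insert does not expand capacity; accepting the change simply pushes the lowest-priority item below the cutoff.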

A single prioritised work list also simplifies the view of remaining work. Rather than having separate “buckets” of work for change requests, defect fixes and new features, by combining them into a single prioritised list, people get a clear, complete view of all remaining work. Many teams miss this point to begin with and retain separate bug-fix and change budgets. This muddies velocity; a single prioritised list of work to be done, regardless of origin, offers better transparency and trade-off control.

Behind Every Scheme is a Schemer
How we go about prioritisation (and the prioritisation schemes used) varies from method to method--and sometimes project to project--based on what works for the business. A simple scheme is to label items as “Priority 1”, “Priority 2”, “Priority 3”, etc. While straightforward, the problem is that everything tends to become a “Priority 1”--or at least too many things are labeled “Priority 1” for the scheme to be effective. It is rare that a business representative asks for a new feature and says it should be a Priority 2 or 3, since they know that low-priority items risk missing the cutoff. Likewise, “high”, “medium” and “low” prioritisations fall victim to the same fate. Without a shared, defendable definition of “high”, we end up with too many and a lack of true priority.

DSDM popularized the MoSCoW prioritisation scheme, which derives its name from the first letters of its labels: “Must…”, “Should…”, “Could…”, “Would like to have, but not this time.” Under MoSCoW, we have some easier-to-defend categories. “Must have” requirements or features are fundamental to the system; without them, the system does not work or has no value. “Should haves” are important--by definition, we should have them for the system to work correctly; if they are not there, then the work-around will likely be costly or cumbersome. “Could haves” are useful additions that add tangible value, and “Would likes” are the also-ran requests that are duly noted--but unlikely to make the cut.
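Ordering a backlog by MoSCoW category amounts to a simple sort (the feature names and category assignments below are invented for illustration):

```python
# Lower rank = higher priority under MoSCoW.
MOSCOW_RANK = {"Must": 0, "Should": 1, "Could": 2, "Would like": 3}

backlog = [
    ("Audit log",      "Could"),
    ("User login",     "Must"),
    ("CSV export",     "Would like"),
    ("Password reset", "Should"),
]

ordered = sorted(backlog, key=lambda item: MOSCOW_RANK[item[1]])
for feature, category in ordered:
    print(f"{category:10s} {feature}")
```

Within each category, further ordering still has to come from the business, which is why the pure-list approach discussed later remains attractive.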

An approach I have seen work well is giving sponsors Monopoly money equalling the project budget and asking them to distribute it amongst the system features. This is useful for gaining general priority on system components but can be taken too far if applied to question perceived low value-add activities like documentation, so keep it for the business features.

If you do not have any Monopoly money available, the 100-Point Method originally developed by Dean Leffingwell and Don Widrig for use cases can be used instead. It is a voting scheme where each stakeholder is given 100 points that he or she can use for voting in favor of the most important requirements. How they distribute the 100 points is up to them: 20 here, 10 there or even all 100 on a single requirement if that is their sole priority.
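Tallying 100-Point Method votes is a simple aggregation (the stakeholder names and point allocations are invented for illustration; each ballot distributes exactly 100 points):

```python
from collections import Counter

# Each stakeholder spreads 100 points across the requirements they favor.
votes = {
    "Ana":   {"search": 50, "reports": 30, "export": 20},
    "Ben":   {"search": 100},                 # all-in on one requirement
    "Carla": {"reports": 60, "export": 40},
}

# Sanity check: every ballot must total exactly 100 points.
assert all(sum(ballot.values()) == 100 for ballot in votes.values())

totals = Counter()
for ballot in votes.values():
    totals.update(ballot)  # sum points per requirement across stakeholders

for requirement, points in totals.most_common():
    print(f"{requirement:8s} {points:4d}")
```

The summed totals give the priority order directly, without any category labels to argue over.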

The Requirements Prioritisation Model created by Karl Wiegers is a more mathematically rigorous method of calculating priority. Every proposed feature is rated for benefit, penalty, cost and risk on a relative scale of 1-9 (low to high). Customers rate the benefit score for having the feature and the penalty score for not having it. Developers rate the cost of producing the feature and the risk associated with producing it. After entering the numbers for all the features, the relative priority of each feature is calculated by dividing its share of the total weighted value by its shares of the total weighted cost and risk. (For more details and a link to an example, see Karl’s outline First Things First: Prioritizing Requirements; also, for more analysis of mathematical models for requirements prioritisation, try this US-CERT paper.)
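A sketch of the calculation, using commonly cited default weights (benefit weighted 2, penalty 1, cost 1, risk 0.5); the feature names and 1-9 scores are invented for illustration:

```python
# name: (benefit, penalty, cost, risk), each rated on a relative 1-9 scale
features = {
    "login":   (9, 7, 3, 2),
    "reports": (5, 3, 6, 4),
    "export":  (3, 2, 2, 1),
}

W_BENEFIT, W_PENALTY, W_COST, W_RISK = 2.0, 1.0, 1.0, 0.5

# Value = weighted benefit of having the feature + penalty of not having it.
values = {n: W_BENEFIT * b + W_PENALTY * p for n, (b, p, _, _) in features.items()}
total_value = sum(values.values())
total_cost = sum(c for _, _, c, _ in features.values())
total_risk = sum(r for _, _, _, r in features.values())

def priority(name):
    _, _, cost, risk = features[name]
    value_pct = values[name] / total_value * 100
    cost_pct = W_COST * cost / total_cost * 100
    risk_pct = W_RISK * risk / total_risk * 100
    return value_pct / (cost_pct + risk_pct)

for name in sorted(features, key=priority, reverse=True):
    print(f"{name:8s} priority={priority(name):.2f}")
```

High-value, low-cost, low-risk features float to the top, giving a defensible ordering rather than a gut-feel category label.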

No Scheme as a Scheme
At the end of the day, it is the prioritisation of features and not the scheme that we need to focus on. Sometimes refereeing the schemes themselves can detract too much time from meaningful discussions. For this reason, I am personally a fan of simply asking the business to list features in priority order. No category 1, 2, 3s; no high, medium, lows; no must-haves, etc.--instead, just a simple list (whether this is in Excel or an agile requirements management tool). This removes the categories that people tend to fixate upon from the debate and leaves the discussion around priorities.


By all means get creative and try different schemes to engage the business in the prioritisation. There is no single best way to always prioritise; instead, try to diagnose issues arising in the prioritisation process, be it “lack of involvement” or “too many priority 1’s”, and then try approaches such as Monopoly money, MoSCoW or a pure list to assist if the problems cannot be resolved via dialogue. The goal is to understand where features lie in relation to others as opposed to assigning a category label. By maintaining a flexible list of prioritised requirements and having opportunities to revisit and reprioritise, we maintain the agility to deliver the highest value set of features within the available time and budget.

(I wrote this article originally for Gantthead.com and it appeared in April 2011 here)

Agile as a Solution for "Miscalibration Errors"

Malcolm Gladwell (author of Blink and The Tipping Point) was in town a couple of weeks ago, and I enjoyed a great presentation he gave on what happens when we think we have complete information on a subject.

The Problem
Gladwell asserts that the global economic crisis was largely caused by “Miscalibration Errors”. These are errors made by leaders who become overconfident due to reliance on information. Those in charge of the major banks were smart, professional, and respected people at the top of their game who, as it turns out, are prime candidates for miscalibration errors.

People who are incompetent make frequent, largely unimportant errors, and that is understandable. They are largely unimportant errors because people who are incompetent rarely get into positions of power. Yet those who are highly competent are susceptible to rare, but hugely significant errors. 

Think of the global economic crisis where bank CEOs were seemingly in denial of the impending collapse of the sub-prime mortgage market. (I don’t mean close to the end when they were secretly betting against the market while still recommending products to their clients, but earlier on when they were happy to bet their own firms on “AAA” rated derivatives that they knew were really just a collection of highly suspect subprime mortgages.)

Anyway, this phenomenon of educated, well-informed leaders making rare but catastrophic errors is not new and is unlikely to go away soon; it seems to be a baked-in human flaw. When presented with increasing levels of information, our confidence in our judgement increases, when in reality that judgement may be very suspect. Let’s look at some examples:

Continue reading "Agile as a Solution for "Miscalibration Errors"" »

Training in New Orleans - Updated: Now Full

The next occurrence of my Agile Project Management class will be in New Orleans on February 28 and March 1st (Feb 18 update: now full). After that there is:

Savannah, GA - April 11, 12
Dallas, TX  - October 26, 27
Anaheim, CA - November 7,8

I enjoy delivering these courses and people enjoy attending them too; here are some feedback comments:

"Mike delivers an exceptionally well reasoned and effective presentation of agile. Thoroughly appreciated" - Bill Palace, El Segundo, CA
“The best PMI class I have ever taken.” - Scott Hall, Marriott International
"This was a very well executed course. Instructor (Mike Griffiths) was very engaging!" – Amelia White, Boeing
"The instructor was very knowledgeable, class well organized, content at the right level of detail and very comprehensive. One of the best classes I have taken regarding PM topics" – James Bernard, Scottsdale
"Excellent course with great information" – Tom Gehret, JNJ Vision Care
"Excellent facilitator. Mike is respectful and knowledgeable" - Nghiem Pauline, San Diego, CA
"The course was fantastic " - Kimberly Kehoe, San Diego, CA
"Mike is an excellent instructor and I really appreciated his organized and clear, well researched presentation. His domain and project management experience is evident from his talk. Also I appreciate his exposure/experience to multiple approaches like PRINCE2, PMBOK, Scrum, DSDM etc." - Sarah Harris, OpenText
"Great content and delivery" – Andrea Williams, Fed Ex
"Great stuff! Really enjoyed instructor and real-world examples" - Don Brusasco, Northridge, CA
"The instructor did an excellent job of keeping the pace, - clearly explaining topics and providing practical applications" - Cathy MacKinnon, Schering Plough Corp
"Excellent!" – Peter Colquohoun, Australian Defence

All of these classes sold out last year so if you want to attend I suggest you book early; I hope to see you in New Orleans!

High Performance Team

For the last 25 years I have been learning about and trying to create high performing teams. Well, I finally got to work with one for 3 years solid, and I totally loved it.

For the last 3 years I have been working on the IPS project at Husky Energy with the best team and project I have experienced or reviewed. More of a program than a project, we rewrote a number of legacy pipeline control and billing systems in .NET and removed a clutter of spreadsheets and Access applications that had sprung up to fill the gaps left by difficult-to-upgrade legacy systems.

I had worked with great people before, and I have seen the difference good executive support and engaged business representatives make. However, as the cliché goes, when you bring them together and add enough freedom to make big changes, the result is much more than the sum of its parts.

Why So Successful?

Freedom to reset and redo - This was a project restart. After several unsuccessful attempts to kick off the project and then a failed experience with a vendor to outsource it, the project was brought back in house and restarted. I was fortunate to join at a time when management was open to fresh approaches. With the project already $1M over budget and 2 years behind schedule, we were able to introduce changes into an organization receptive to hearing new ideas and changing process.

Executive Support and Business Champion
– These terms “Executive Support” and “Business Champion” are often just titles, mere nouns. For our project they were verbs that described everyday jobs. The sponsor fought to retain our budget during cutbacks; the business champion repeatedly went to battle to retain resources, gain exemptions from harmful processes and ensure we had access to the very best business resources.

An experienced, pragmatic team – Most of them had “been there and done that” with pure agile; they had seen the benefits and costs associated with pure TDD, XP, and Scrum. They knew all the theory, had heard the theological debates, and just wanted to work now. They were exceptionally strong technically, with mature use of design patterns and layer abstraction. Comprising mainly experienced contractors and some experienced full-time staff, humility was high and egos were low.

Great domain knowledge
– We had an insider on our team: an architect from some of the original systems we were replacing. The bugs, the flaws, and the big chunks of tricky logic from the original systems could all be highlighted and explained rather than rediscovered, a great time saver.

Embedded with our business users – Being away from the IT group and in with the business was critical in learning their day job and building rapport. By seeing their business cycles, busy days and deadlines we were better able to plan iterations, demos and meetings. Being face to face and sharing a kitchen helped with conversations and quick questions too.

Right Process – Way behind our people’s influence on success was our process. Using a relaxed interpretation of agile, we worked with two-week iterations, daily stand-ups, user stories, and empowered teams. Given we were replacing existing required functionality, prioritization and detailed task estimates were less valuable. From a perspective of minimizing waste we naturally gravitated to less story point estimation and lighter iteration planning sessions. It was reassuring to hear David Anderson in 2008 talking about Kanban and viewing estimation and iteration rigour as waste. We were not just being lazy; we needed a certain fidelity of estimation for planning, but beyond that got diminishing returns.

The Outcomes
High Productivity - The last release contained over 1400 function points that were developed in one month by 6 developers. This is approximately 12 function points per developer per day, over three times the industry average, and other releases had similar productivity. This was despite the domain being complex: we had a full-time PhD mathematician SME (subject matter expert) on the team to define and test the linear programming and iterative calculations being used.
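As a quick sanity check on that productivity figure (assuming roughly 20 working days in the month; the exact figure is my assumption, not from the project):

```python
# Rough productivity check: 1400 function points delivered in one month
# by 6 developers, assuming ~20 working days in the month.
function_points = 1400
developers = 6
working_days_per_month = 20

fp_per_dev_per_day = function_points / (developers * working_days_per_month)
print(round(fp_per_dev_per_day, 1))  # prints 11.7
```

Which lands close to the "approximately 12 function points per developer per day" quoted above.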

Continue reading "High Performance Team" »

Starting an Agile Project within a Traditional Framework

Agile methods typically don’t cover the early stages of projects very well. They often assume you are ready to start gathering requirements as user stories, or that you even have some candidate features or stories magically ready to go. This is shown by the scope coverage diagram below.

Yet for all but the smallest, most process-light organizations, there are stages of envisioning, feasibility, and setup that happen before we are ready to jump into requirements gathering. This is where many companies come unstuck with agile. Either they jump straight to stories, skip the early project work, and suffer disoriented stakeholders and disengaged project offices; or they go too heavy with the upfront chartering and scope definition, create brittle plans, and lose some of the advantages of adaptation and iterative development.

The following diagram is a good place to consider starting from. It combines some of the bare necessities from a traditional approach that will satisfy the project office. Yet it does not go so far into Big Design Up Front (BDUF) as to create problems or consume time that could be better spent working directly with the business to determine their requirements via collaborative development.

Start Up Activities

The overall duration is short, just 15% of the likely (guesstimated) project duration, but yields lightweight versions of the key deliverables familiar to the PMI process police. We get stakeholders informed and engaged, the project office placated, and the bulk of the project duration dedicated to iterative development.

Instead of a Work Breakdown Structure (WBS) I would recommend a candidate prioritised Feature List or prioritised Story list.

Agile Early Process

Kick-off meetings using the Design the Product Box or Shape the Product Tree exercises are a great way to start identifying candidate features; follow them up with Feature Discovery Workshops. The length and number of feature workshops will vary depending on the size and complexity of your project. A three-month web project with a team of three people might be scoped in a day or two, yet a three-year project with 30 people will take longer to drive out all the candidate features.

I am actually quite a fan of early project deliverables like Project Charters, since they explain the important W5+ (What, Why, When, Who, Where + How) elements of a project, but I believe these documents don’t have to be verbose or costly to create.

Instead, cover the basics and get people onside; the whole endeavour will go quicker than if you try to start without these things and have to catch people up later.

Six Project Trends Every PM Should be Aware Of

As we start 2010, the second decade of the 21st century, project managers really should be embracing 21st-century technologies and approaches. While developers and other project members have benefited from improved communication and collaboration via new technology over the last 10 years, project managers have been slower to adopt them.

The plus side of being a late adopter is that most of the kinks get ironed out before you experience them, and all the features you may need have probably already been developed. So, time to get with it. Perhaps it can be a New Year’s resolution to at least examine these tools and approaches if you are not already using them on your projects.

The World Has Changed – Why Haven’t Your PM Tools and Approaches?
In the last 10 years many changes have occurred in the world of managing IT projects, yet we still see the same tools and approaches being employed. Is this because they are classic and timeless? Are the traditional PM approaches so successful that they do not need to be dragged here and there following trends and immature technology fads? No, I fear it is more that people are creatures of habit, and the usually more mature project management community is worse than most at evaluating and adopting new approaches.

Also, project management is a largely individual activity; teams of developers and business analysts are far more common than teams of project managers, so peer-to-peer learning and tool support is almost nonexistent for project managers. Plus, project management can often be a reputation-based market, and to some people fumbling around as a beginner in a new approach is very uncomfortable. Well, it is time to get over it; this is how we learn anything. If you are concerned about looking foolish, just imagine how foolish you will look when everyone else has moved with recent trends and you are in the last stand of dinosaurs.

Continue reading "Six Project Trends Every PM Should be Aware Of" »

Project Progress, Optical Illusions, and the Simpsons

My last post was on modifying Parking Lot Diagrams to use the area of boxes to show the relative effort of differing parts of the system. The idea was that while traditional Parking Lot diagrams are a great way of summarizing progress at a project level, the customary approach of giving all the feature groups the same size box may lead people to believe these chunks of functionality are of equivalent size or complexity.

In the boxes below, the number in brackets describes the feature count, but this requires people to read the numbers and then consider the relative sizes. My goal was to create a new graphical summary that depicts the relative sizes via box area to make the summaries easier to understand.

So a standard Parking Lot diagram like this...
Parking Lot Diagram

becomes a scaled Parking Lot diagram like this...

Parking Lot Diagram Scaled

In the second example we can quickly see the relative estimated sizes of the areas of work, with, for example, “Enter Order Details” being three times the size (in area) of “Create New Order”. In this example, to keep things simple, I am using feature count (15 vs 5) to depict effort and area. Most projects will use development effort in points or person-days as the scaling factor.
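If you want to derive the box sizes programmatically rather than hand-tweaking shapes, the sizing rule is simple: give each feature group a share of the total drawing area proportional to its effort. A minimal sketch (the group names match the example above; the total area figure is arbitrary):

```python
# Size parking-lot boxes so that area is proportional to effort.
# Here feature counts stand in for effort, as in the example above.

def scaled_areas(efforts, total_area=600.0):
    """Return the drawing area each feature group should occupy."""
    total_effort = sum(efforts.values())
    return {name: total_area * effort / total_effort
            for name, effort in efforts.items()}

groups = {"Enter Order Details": 15, "Create New Order": 5, "Stock Search": 10}
areas = scaled_areas(groups)
# "Enter Order Details" gets three times the area of "Create New Order"
```

From each area you can then pick a width and height to suit the page layout; only the area ratios matter for the visual effect.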

Anyway, I have been using these scaled Parking Lot diagrams on my projects for a while, created quite laboriously in Excel and PowerPoint, and I was hoping that someone with better skills than I have could help me streamline the process. Well, in typical fashion, soon after posting I learned that:

A)    I may be asking the wrong question
B)    My invented solution has already been done before and far more elegantly
C)    It has been done on the Simpsons

Oh, the joy of the internet! Actually this is of course a great thing and now I can progress from a much firmer foundation, and far quicker than my slow evolution was taking me...

Continue reading "Project Progress, Optical Illusions, and the Simpsons" »

Parking Lot Diagrams Revisited – Using Area to Show Effort

Parking Lot diagrams are a great way of summarizing an entire project’s progress and status on a single page. They illustrate work done, work in progress, and work planned, and identify what is behind schedule.
In this example we can see that “Stock Search”, in red, is behind; it was due to finish in November. “Create New Order”, shown in green, is complete; the boxes in yellow are in progress and those in white have not been started yet. I have posted previously about their use and examples of how to produce them.

However, they miss an important element: the relative sizes (usually in terms of effort, but could be cost or risk) of the functional areas. So, in the example above, the fact that “Enter Order Details” might be 3 times the development effort of “Create New Order” is likely lost on the stakeholders reviewing the chart. Sure, they can look at the number of features (15 vs 5) and surmise “Enter Order Details” is more work, but it is not immediately apparent at first glance.

So, bring on scaled progress charts, where box size represents estimated development effort. I have been using these with my current Steering Committee for a while. I meet some of these people infrequently, and these charts provide a nice reminder of the overall project scope and where the project’s progress stands right now.

Continue reading "Parking Lot Diagrams Revisited – Using Area to Show Effort" »

PMBOK v4 and Agile mappings

For the attendees of my recent Las Vegas course, below is a link to the PMBOK v4 to Agile mappings we discussed. My previous course material mappings were based on PMBOK v3, and before that the 2000 edition, which are out of date now.


Quite a lot changed from PMBOK v3 to v4: all the processes were renamed into the new verb-noun format, six of the old processes were merged into four new ones, two processes were deleted, and two new ones were added. So it seemed like time to redo the mappings, and to post them online this time.



Process guidelines and templates are not an acceptable replacement for common sense, thought, dialog, or collaboration. A fool with a tool is still a fool, but an especially dangerous one, since they give the impression of having a potential solution to tricky problems. Beware of simply following any project guidelines that seem counter to your objectives.


So, why would you want to be mapping the PMBOK v4 to Agile techniques anyway?...

Continue reading "PMBOK v4 and Agile mappings" »

Non-Functional Requirements - Minimal Checklist

All IT systems at some point in their lifecycle need to consider non-functional requirements and their testing. For some projects these requirements warrant extensive work; for other project domains a quick check-through may be sufficient. As a minimum, the following list can be a helpful reminder to ensure you have covered the basics. Based on your own project characteristics, I recommend the topics be converted into SMART (Specific, Measurable, Attainable, Realisable, Timeboxed / Traceable) requirements with the detail and rigour appropriate to your project.

The list is also available at the bottom of the article as a one-page PDF document. While it is easy to make the list longer by adding more items, I would really like to hear how to make the list better while keeping it on one page (and readable) to share with other visitors here.

  • Login requirements - access levels, CRUD levels
  • Password requirements - length, special characters, expiry, recycling policies
  • Inactivity timeouts – durations, actions

  • Audited elements – what business elements will be audited?
  • Audited fields – which data fields will be audited?
  • Audit file characteristics - before image, after image, user and time stamp, etc

Continue reading "Non-Functional Requirements - Minimal Checklist" »

Agile in New Orleans

Next week I’ll be teaching a two-day Agile Project Management course for the PMI in New Orleans. The class sold out quickly; I only teach 3 or 4 times a year for the PMI and I wondered if registration numbers would be down this year. The fact that it filled up so quickly is very positive; perhaps more people are turning to agile as a way to get more work done with less budget.

This year’s Agile Business Conference in London has the theme of “Driving Success in Adversity” and I have submitted a presentation outline and plan to attend. Their submission system states: “This year we invite presentations and tutorials emphasising how Agile practices promote efficiency in project delivery, guarantee business value and optimise return on investment.” This seems a great theme; agile is all about maximizing business value, and I am looking forward to the conference.
Meanwhile, in New Orleans next week, I am keen to hear how organizations are currently using agile methods to add value. (I am also looking forward to sampling the food and feeling some warmer weather after a long Canadian winter!)

Lifecycle Variables

I have written a couple of posts now (here and here) about the new PMBOK v4 guide due out soon. One of the new graphs included helps describe how project characteristics change over the project lifetime.
The top blue line of the graph illustrates how Stakeholder Influence, Risk and Uncertainty start off high and then reduce as the project progresses. The escalating orange line illustrates how the Cost of Changes increases dramatically over the project timeline.

Quite a lot has already been written on flattening the Cost of Change curve within agile, so I will leave that for now and focus first on the top line.

Before discussing ideas such as how ongoing business input (for example, in prioritization of the remaining work) prolongs their ability to influence the project, we should take a moment to understand the PMBOK audience. The PMBOK is not just for software projects, or even IT projects; it is an industry-agnostic guide relevant to construction, engineering, and manufacturing among other disciplines.

As a general guide, I think these curves make sense, especially outside of software projects. The ability to influence does decline rapidly once designs are committed and construction begins. Likewise, Risk and Uncertainty generally reduce later in the project once technical obstacles have been overcome.

Software, though, is different. Actually, I would hazard a guess that every industry is different, but software is the one that I know about. Software exhibits a characteristic known as “Extreme Modifiability”, meaning we can make many changes, even late in the lifecycle, and still be successful. While it would be difficult to move a bridge 3 miles upstream when it was 75% complete, we could choose to move validation logic from the presentation layer to a middle tier or a database trigger late into a project.

Continue reading "Lifecycle Variables" »

Print Your Own Planning Poker Cards

Local APLN member Edgardo Gonzalez has kindly offered a Planning Poker card template for readers to download. (No longer are we tied to the decks generously given out by Mountain Goat Software at conferences; now we can print our own!) Based on the Fibonacci sequence, these cards print on standard Avery stationery and can be used by team members to estimate story points. (Post on Agile Estimation Techniques)

Thanks Edgardo for your gift to the community.

Download Planning Poker Cards.doc

Agile Project Leadership and More on Accreditation

Last week I taught the “Agile Project Leadership” course with Sanjiv Augustine in Manchester, UK. The course went really well and we were looked after by Ian and Dot Tudor, our hosts from TCC Training and Consultancy. They have a number of training facilities around the UK; ours was Aspen House, a converted church that retained all the arched doorways and high vaulted ceilings you would hope for.

It was a rare treat to teach in such nice surroundings, and the church setting made evangelising agile all the more fun. In truth we were “preaching to the choir”, as most of the delegates were already familiar with the benefits of agile and were looking for practical tools and more leadership techniques to move their organizations to the next level.

Continue reading "Agile Project Leadership and More on Accreditation" »

Agile Project Leadership Training Course

On February 4-5th I will be co-instructing with Sanjiv Augustine our new “Agile Project Leadership” training course in Manchester, UK. Sanjiv is the author of the excellent “Managing Agile Projects” book and a fellow APLN board member.

This is a fast-paced, practically focused course that covers agile project management, leadership, and avoiding common agile project pitfalls.

You can find further details, including a course outline, here.

Top 10 Estimation Best Practices

I have written a few posts now on estimation, so I thought it was time for a summary. Also, I am planning to create a series of one-page best practice summaries / cheat-sheets for agile, and this seemed like a good candidate.

(I am a real fan of one-pagers; there is special value in getting everything visible and presented on one page that brings unique scope comprehension and clarity. Toyota and other lean organizations make heavy use of A3 reports, one-page summaries of an issue, its root causes, countermeasures, and verification steps; they are powerful tools.

Incidentally, when I first started work I had a wise and cantankerous project manager who was full of oxymoronic proverbs. One I remember was “If you cannot summarize it on only one page, you need to go off and learn more about it!” An astute paradox.)

Anyway, here’s the summary; if you want more explanation on any of the points, refer to my previous posts (Upfront Estimates, Estimation Techniques, Estimate Ranges) on agile estimation.

Continue reading "Top 10 Estimation Best Practices" »

Software Estimates - Managing Expectations via Ranges

Customer: How much is that parrot in the window?
Pet shop owner: Somewhere between $200 and $250
Customer: Erm, OK, I’ll give you $200 for it then!

To some people, providing estimates as a range of values seems a strange and unsatisfying way of conducting business. They just want to know how much something will cost, not how much it may or may not cost. This is reasonable if the object or service is ready for use, but not if it has yet to be created. The more uncertainty involved in the process, the more likely there will be some variability. Now consider this conversation:

Customer: How much will it cost for a taxi ride from the library to the airport?
Taxi driver: Probably between $20 and $25, depending on traffic
Customer: OK, that sounds fair, let’s go

This seems more reasonable; we understand the variability of traffic. Unfortunately, not all stakeholders understand the variability caused by the evolving requirements and changing technology often associated with software projects. However, the uncertainties of software development are real, and we have an obligation to report estimates as ranges to help manage expectations.

Continue reading "Software Estimates - Managing Expectations via Ranges" »

Agile Estimating – Estimation Approaches

In the last post we covered the importance of engaging the right people in the estimation process and the need to use more than one estimation approach. Today we will look at some examples of team-based estimation approaches and practical ways to combine estimate result sets.

Estimation approaches (agile or traditional) can be divided into Heuristic (expert judgment based) and Parametric (calculation based) approaches.

Heuristic approaches:

• Comparison to similar systems
• Expert Judgment
• Activity Based (top down)
• Task Based (bottom up)

Parametric approaches:

• Function Points
• Use Case Points
• Object Points

Both sets of approaches have merit, but they also have limitations and are open to misuse and over-reliance. For instance, Activity Based (top-down) estimating is the most commonly employed estimation approach, but has been found to be the least accurate. Capers Jones in his book “Estimating Software Costs” instead recommends Task Based (bottom-up) estimating approaches, which tend to yield better results by encouraging a more thorough investigation into the likely tasks.
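One simple way to combine result sets from several approaches (an illustration of the general idea, not a prescribed method; the figures and approach names are invented) is to report the spread rather than a single number:

```python
# Combine estimates from multiple approaches by reporting range and midpoint,
# rather than picking one number and discarding the rest.
estimates = {                 # person-days from different approaches
    "expert judgment": 120,
    "task based": 150,
    "function points": 140,
}

low, high = min(estimates.values()), max(estimates.values())
midpoint = sum(estimates.values()) / len(estimates)
print(f"Estimate range: {low}-{high} person-days (midpoint {midpoint:.0f})")
```

Reporting the range preserves the disagreement between approaches, which is itself useful information about uncertainty.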

Involving many stakeholders
We should ask the people who will be doing the work how long they think it will take. Not only are they closest to the technical details, and therefore theoretically in a better position to create an accurate estimate, but there are psychological benefits too.

If someone just hands you an estimate for your work and tells you it should take two weeks, you can either comply and try to get it done in two weeks, or rebel and either finish it early or explain why it will take much longer, to illustrate the poor nature of the estimate. Even complying and doing the work in two weeks can be a problem; given two weeks, work will expand to fill the time. If a solution is found early, time will likely be spent finding a better solution or refining the existing one. We do not get enough early finishes to cancel out the late ones; as Don Reinertsen observes in “Managing the Design Factory”, in engineering we get few early finishes.


Continue reading "Agile Estimating – Estimation Approaches" »

New Agile Project Leadership Training Course

In September I will be co-instructing with Sanjiv Augustine the new course “Agile Project Leadership”. Sanjiv is a fellow APLN board member and author of the excellent “Managing Agile Projects” book. I’m really excited because a) we have an excellent course that will stretch attendees while engaging them, and b) co-teaching with Sanjiv will be a blast since he is such a knowledgeable and personable expert.

Our first course offering will be in Manchester, UK on September 10-11th. You can find further details, including a course outline, at Agile University here.

Large Project Risks

(Chop those large projects down to size)

When I started out as a PM and had a few successful projects under my belt, I wanted to manage larger and larger projects. Now I’m looking for ways to make them smaller and limit the functionality initially tackled. This is not just laziness or a lack of ambition on my part; instead it is a realization that delivering business value comes from successful projects, and that large projects are inherently risky and more likely to fail.

Jim Johnson of the Standish Group presented some interesting metrics at the PMI Global Congress in Toronto a couple of years ago that confirmed my suspicion. From a study of over 23,000 projects, it was found that the success rate dropped as project duration increased.


From the graph we can see that most (over 50%) of the 6-month projects surveyed were deemed successful, dropping to 23% of 12-month projects, and less than 10% of 24-month projects. Now, we need to understand what “Success” means here. The criteria were quite strict: within 15% of cost and schedule, and to customer-defined functionality and quality standards.

I expect a large proportion of these “failed” projects merely missed the 15% cost or time criteria, and the picture would not be so bad with a wider margin. Predicting costs over long periods is notoriously difficult, as even labour rates and inflation predictions elude the best market economists. However, the trend is still true: larger projects carry a much higher probability of failure, for a number of reasons we will see...

Continue reading "Large Project Risks" »

A Burn-Rate Based Estimation Tool

In my last post I suggested spending less time worrying about project metrics and more time on stakeholder concerns, so I thought a tool to simplify project tracking might be useful.

Having completed project estimating with the team, it is often necessary to convert effort estimates into cost estimates based on likely utilizations and then track actual hours billed against these projections. Today’s download is a simple burn-rate estimation tool that produces S-curve graphs and lets you track actual spend against burn-rate projections and planned budgets.

You can download the spreadsheet here:

Download estimation_and_budget_tracking.xls

Continue reading "A Burn-Rate Based Estimation Tool" »

Schedule Questions: Pair Programming and the PNR Curve

Converting effort estimates into project durations and team sizes is an important part of project planning. How this is done varies from project manager to project manager; to some it is an art, to others a science, and to many a case of living with everyday constraints. Today I will focus on the science and its implications for pair programming.

If your initial project estimates indicate 72 person months of effort how do you best resource it?

1 person for 72 months?
72 people for 1 month?
6 people for 12 months?

Intuitively we know that 1 person for 72 months might work (provided they had all the right skills), but typically the business wants the benefits of a project as soon as possible. 72 people for 1 month is extremely unlikely to work unless the project is simple and massively parallel, like cleaning oil off rocks on a beach. Usually, adding resources beyond an optimal level yields diminishing returns. As the old saying goes, you cannot make a baby in one month with nine women (although it might be fun to try). And as Frederick Brooks stated in The Mythical Man-Month, “Adding manpower to a late software project makes it later”. So, given an effort estimate, how do we determine the optimal team size and schedule to deliver quickly and keep costs down?

While adding team members to a project increases costs in a linear fashion, the project timeline is not reduced in a correspondingly linear way. Research by Norden found that for projects that require communication and learning (like software projects), the effort-to-time curve follows a Rayleigh distribution. Putnam confirmed that this curve applied to software projects in his article “A General Empirical Solution to the Macro Software Sizing and Estimating Problem”, IEEE Transactions on Software Engineering, July 1978, and the curve became known as the Putnam-Norden-Rayleigh (PNR) staffing curve, as shown below.
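One practical consequence of this research is Putnam’s software equation, which implies that effort grows roughly as the fourth power of schedule compression. A hypothetical sketch of that trade-off (the 1/time⁴ exponent is Putnam’s model; real-world calibrations vary by organization and project type):

```python
def effort_for_schedule(baseline_effort_pm, baseline_months, target_months):
    """Effort (person-months) implied by compressing or relaxing the
    schedule, using Putnam's approximation that effort scales with 1/time^4."""
    return baseline_effort_pm * (baseline_months / target_months) ** 4

# Our 72 person-month project, planned at 6 people for 12 months:
base = effort_for_schedule(72, 12, 12)     # 72.0 person-months
crashed = effort_for_schedule(72, 12, 9)   # compress to 9 months
# crashed is roughly 228 person-months: under this model, cutting the
# schedule by 25% roughly triples the total effort (and cost).
```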


Continue reading "Schedule Questions: Pair Programming and the PNR Curve" »

Using Earned Value on Agile Projects

Q: Does Earned Value work for software projects?
A: Absolutely. Earned Value Analysis (EVA) is a statistically valid reporting approach that can be applied to any endeavour. It compares actual progress and spend against projected progress and spend.

Q: Can you use Earned Value on Agile projects?
A: You can, but I would not recommend it. There are fundamental problems using EVA on agile projects relating to baseline plan quality. Also there are better alternatives available for agile projects.

Earned Value analysis and reporting measure conformance and performance against a baselined plan. So, given that on agile projects we know our initial plans are likely to change, why track progress against a weak plan?
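For readers unfamiliar with the mechanics, the core EVA indicators are simple arithmetic on three numbers. A minimal sketch, with the inputs named after the standard definitions:

```python
def earned_value_metrics(pv, ev, ac):
    """Core Earned Value indicators.
    pv: planned value  (budgeted cost of work scheduled to date)
    ev: earned value   (budgeted cost of work actually performed)
    ac: actual cost of the work performed
    """
    return {
        "SV": ev - pv,   # schedule variance (negative = behind schedule)
        "CV": ev - ac,   # cost variance (negative = over budget)
        "SPI": ev / pv,  # schedule performance index
        "CPI": ev / ac,  # cost performance index
    }

# Example: $100k of work planned to date, $80k worth completed,
# $90k actually spent.
m = earned_value_metrics(pv=100_000, ev=80_000, ac=90_000)
# SPI = 0.8 (behind schedule); CPI is about 0.89 (over budget)
```

The catch on an agile project is the pv input: it presumes a trustworthy baseline, which is exactly what an adaptive plan does not promise.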

Continue reading "Using Earned Value on Agile Projects" »

Next Calgary APLN Meeting Sold Out

The next Calgary Agile Project Leadership Network (APLN) meeting on Thursday October 12th is full. However, we are currently putting names on a wait list in case of cancellations, so if you would still like to attend this event, send an email with your contact details to register@calgaryapln.org and we will see if we can get you in.

For those of you already signed up, I’m sure the presentation will be very interesting. Rob Morris from CDL Systems will be talking about “Estimating and Planning Agile projects” and I had a chance to review Rob’s material earlier this week. It looks really good and draws from Rob’s deep experience along with materials from Mike Cohn and Steve McConnell.

Event: Calgary APLN: “Agile Estimation and Planning”
Presenter: Rob Morris, Principal Software Engineer, CDL Systems
Date: Thursday October 12, 2006
Time: 12:00 PM – 1:00 PM (registration will commence at 11:30 AM)
Light beverages to be provided
Location: 5th Avenue Place – Conference Room
Suite 202, 420 – 2nd Street SW

Topic: “Agile Estimation and Planning”

Rob’s presentation will explore Agile estimation: how it can be used to determine what can be accomplished within an iteration, and how to estimate multi-iteration release plans. He will also touch on firm fixed-price estimation and compare these approaches with more traditional estimation techniques.

About the Speaker:

Rob is the principal software engineer at CDL Systems and has over 20 years’ experience developing software systems. His more recent work has involved overseeing the development of control station software used to fly unmanned aerial vehicles currently operating in Iraq and Afghanistan. Rob has a BSc in Electrical Engineering and Master’s degrees in Computer Science and Software Engineering. He has embraced Agile techniques and tries to shoehorn them into the more document-centric military projects at every opportunity. He is also a certified ScrumMaster.

Calgary APLN Meeting: "Estimating and Planning Agile Projects"

The next Calgary Agile Project Leadership Network (APLN) meeting will be on Thursday October 12th when Rob Morris from CDL Systems will be talking about “Estimating and Planning Agile projects”. Here are the details:

Event: Calgary APLN: “Agile Estimation and Planning”
Presenter: Rob Morris, Principal Software Engineer, CDL Systems
Date: Thursday October 12, 2006
Time: 12:00 PM – 1:00 PM (registration will commence at 11:30 AM)
Light beverages to be provided
Location: 5th Avenue Place – Conference Room
Suite 202, 420 – 2nd Street SW

Topic: “Agile Estimation and Planning”
Rob’s presentation will explore Agile estimation: how it can be used to determine what can be accomplished within an iteration, and how to estimate multi-iteration release plans. He will also touch on firm fixed-price estimation and compare these approaches with more traditional estimation techniques.

About the Speaker:
Rob is the principal software engineer at CDL Systems and has over 20 years’ experience developing software systems. His more recent work has involved overseeing the development of control station software used to fly unmanned aerial vehicles currently operating in Iraq and Afghanistan. Rob has a BSc in Electrical Engineering and Master’s degrees in Computer Science and Software Engineering. He has embraced Agile techniques and tries to shoehorn them into the more document-centric military projects at every opportunity. He is also a certified ScrumMaster.

To register for this event, and for more information about the Calgary APLN group, visit www.calgaryapln.org