August 31, 2008

Totally Test-driven Scrum Delivers Better Objectivity In Measuring Real Progress

One of the more questionable precepts behind the Scrum virus is the way it does estimating at release and iteration ("Scrum") levels.

At the release level, when we're sizing our product backlog, we use story points to estimate the relative complexity (or difficulty, or improbability) of user stories. This makes sense, because estimating in terms of effort or time tends to encourage the illusion of commitments being made about when stuff will be delivered. So it's best to avoid the temptation, especially at such a high and woolly level of estimation.

When we get to the iteration level, we suddenly ditch our "let's not mention hours or days" ideals and we're encouraged to break down our stories into engineering tasks and estimate those in - yep, you've guessed it - time required for completion.

I have two problems with this: one major and one minor. My major gripe is about the switch between two distinct models - deliverables and complexity vs. tasks and effort. Years of experience have convinced me that the first model is the more useful of the two.

There are two dangers with the tasks-and-effort model for planning iterations. One is that tasks are the "how": it's quite possible to be 90% of the way through the tasks for a user story and have delivered nothing of any value to the users. So planning and tracking iterations through tasks can lead to the notoriously misleading "90% done" syndrome that has bitten so very many projects on the beehive in the past (and continues to do so).

The other problem I have with the tasks-and-effort model is that there's an ever-present danger that when a developer estimates that some task will take "2 days", that estimate is magically transformed by managers into a firm commitment to have it completed exactly 2 days from now.

I much prefer to break user stories down into tests and complexity points - sort of like a mini product backlog (let's call it the story backlog for now).

At the start of each Scrum, we do just enough analysis and design for each story to identify - and, I should stress, not to design in detail - the key test scenarios we'll need to tackle to implement that story to a point the product owner agrees is useful.

On a board or a bit of free wall space - if you have any left (because I know how much you Scrum cats like to cover every square inch of every surface with your crazy crayon doodlings) - create a column for each user story, and in each column stick up a card for each named test scenario. Each story has an estimate of relative complexity, and the sum of all the story points makes up the total complexity of the iteration.

(Figure: an acceptance test dashboard - a column per user story, with a card for each named test scenario and its points.)
Correspondingly, every test has complexity points. Between them, the tests for a story - its story backlog - account for 100% of the story points estimated for that story: pass a test and you bank its proportional share of the story's points.

E.g., you have a story estimated at 23 story points, and it has three tests weighted at 23, 8 and 13 points respectively. If you have completed the first test, then you have earned 23/(23 + 8 + 13) * 23 - roughly 12 - of the story points for that iteration. Do you catch my drift?
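To make that arithmetic concrete, here's a minimal sketch in Python. The names (Story, Test, earned_points) are mine, invented purely for illustration - they're not from any particular Scrum tool:

```python
from dataclasses import dataclass, field


@dataclass
class Test:
    name: str
    points: int          # relative complexity of this test scenario
    passed: bool = False


@dataclass
class Story:
    name: str
    story_points: int    # estimate from the product backlog
    tests: list[Test] = field(default_factory=list)

    def earned_points(self) -> float:
        """Story points banked so far: each passing test claims its
        proportional share of the story's total points."""
        total = sum(t.points for t in self.tests)
        if total == 0:
            return 0.0
        passed = sum(t.points for t in self.tests if t.passed)
        return passed / total * self.story_points


# The worked example from the text: a 23-point story whose three
# tests are weighted 23, 8 and 13.
story = Story("Export report", 23, [
    Test("big scenario", 23),
    Test("edge case", 8),
    Test("error path", 13),
])
story.tests[0].passed = True
print(round(story.earned_points(), 1))   # 23/44 * 23, i.e. about 12.0
```

Notice there's no partial credit in there: a test either passes and earns its whole share, or it earns nothing.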

Progress is measured entirely in terms of story points collected for tests passed. There is no "A for Effort" in my way of doing things. It's either testably delivered or it's not.

The analogy I use - and, yes, it's a golfing analogy - is the difference between judging a hole complete because the players took the shots they planned to take, and judging it complete because the ball actually finished up in the hole.

Using an acceptance test dashboard like the one illustrated above can be a very powerful way to communicate progress to project stakeholders and ensure a much greater objectivity in tracking progress within an iteration.
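And if you want the same picture away from the wall, a crude text rendering of the dashboard is easy enough to knock up. This sketch reuses the hypothetical Story and Test classes from above:

```python
# Assumes the Story / Test classes (and `story`) from the earlier
# sketch are already in scope.

def render_dashboard(stories):
    """One block per story, one line per test card, plus an
    iteration total of story points earned vs. planned."""
    lines = []
    for s in stories:
        lines.append(f"{s.name} ({s.story_points} pts, "
                     f"{s.earned_points():.1f} earned)")
        for t in s.tests:
            mark = "x" if t.passed else " "
            lines.append(f"  [{mark}] {t.name} ({t.points})")
    planned = sum(s.story_points for s in stories)
    earned = sum(s.earned_points() for s in stories)
    lines.append(f"Iteration: {earned:.1f} / {planned} story points")
    return "\n".join(lines)


print(render_dashboard([story]))
```

The iteration total at the bottom is the only burn figure you need, and every point in it is backed by a passing test.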




Posted on August 31, 2008