Monday, April 12, 2021

Stop measuring Sprint Predictability

In strongly plan-driven organizations, we often see a fascination with Sprint Predictability. So - what is it, and why would I advise against it?

Let's first take a look at how we can measure Sprint Predictability:

We have four key points of interest in this measurement system: 

  1.  What did the team plan based on their known/presumed velocity?
  2.  What did the team actually deliver based on their committed plan?
  3.  What did the team miss, based on their committed plan?
  4.  What did the team overachieve, i.e. deliver beyond their committed plan?
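
To make the arithmetic behind these four numbers concrete, here is a minimal sketch in Python. The story-point accounting, the item names, and the values are made up purely for illustration; real tooling would pull these figures from the team's Sprint backlog.

    # Hypothetical Sprint data: story points committed at Sprint Planning
    # versus story points actually delivered by the Sprint Review.
    planned_items = {"A": 5, "B": 8, "C": 3}    # the committed plan
    delivered_items = {"A": 5, "C": 3, "D": 2}  # what was actually done ("D" was unplanned)

    planned = sum(planned_items.values())                                                # 1. planned
    delivered = sum(p for i, p in delivered_items.items() if i in planned_items)         # 2. delivered
    missed = planned - delivered                                                         # 3. missed
    overachieved = sum(p for i, p in delivered_items.items() if i not in planned_items)  # 4. overachieved

    print(planned, delivered, missed, overachieved)  # -> 16 8 8 2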

Charted, it could look like this:


We can thus tell whether a team can estimate their velocity realistically, and whether they are setting sufficiently SMART (Specific, Measurable, Ambitious, Realistic, Time-bound) goals for themselves.

In a multi-team setting, we could even compare these metrics across teams, to learn which teams have good control over their velocity and which don't.
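
As a sketch of what such a comparison might look like, here is a small Python example with hypothetical teams and numbers; the simple "delivered vs. planned" ratio used here is my own assumption, not an established index.

    # Hypothetical per-team figures for one Sprint (story points).
    teams = {
        "Team Red":   {"planned": 40, "delivered": 38},
        "Team Green": {"planned": 35, "delivered": 22},
        "Team Blue":  {"planned": 30, "delivered": 29},
    }

    for name, sprint in teams.items():
        ratio = sprint["delivered"] / sprint["planned"]
        print(f"{name}: {ratio:.0%} of the committed plan delivered")

Exactly this kind of ranking, however, is the local optimization the rest of this post warns against.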

If by now you're convinced that Sprint Predictability is a good idea - no! It's not! It's a horrible idea!

Here's why:

The Crystal Ball

Every prediction is based on what we know today. The biggest challenge is predicting things we don't know today.

Here are a few reasons why our forecast may be entirely wrong and why we may need to adapt. We may have ...

  • Vague objectives
  • Mis-estimated the work
  • Made some assumptions that turned out false
  • Encountered some unforeseen challenges
  • Discovered something else that has higher value

Of course, management in a plan-driven organization can and will argue, "That's exactly the point of planning: to remove such uncertainties and provide clarity." With that, we are back to square one: trying to create the perfect plan, which requires us to have a perfect crystal ball.

Predictability implicitly assumes that adaptation (ability to respond to change) is a luxury rather than a necessity. When we operate in an environment where adaptation really isn't required, we should not use an agile approach to begin with.


Let's now take a tabular look at the five reasons for getting forecasts wrong:

Cause: Vague objective
Challenge: The communicated objective and the real goal may be miles apart.
Alternative: It's better to pursue the actual goal than to meet the plan. Take small steps and constantly check whether these are steps in the right direction, changing course as new information arises.

Cause: Mis-estimation
Challenge: The work was perceived as simpler than it actually was, mandating tasks nobody expected and consuming extra time.
Alternative: Avoid aligning on the content of the work; instead, align around the outcomes and break these into bite-sized portions that have little risk attached.

Cause: Wrong assumptions
Challenge: Some things about our Product turned out differently than we had anticipated. We can do more pre-work, but that does nothing other than trade "delivery time" for "preparation time"; we still make un-validated assumptions.
Alternative: Validating assumptions is always a regular part of the work. Set up experiments that determine the next step rather than trying to draw a straight line to the goal from the beginning. Accept "open points" as you set out.

Cause: Unforeseen challenges
Challenge: An ambitious plan has risk, while an un-ambitious plan has risk buffers. Pondering all of the eventualities to "right-size" the risk buffer is a complete distraction from the actual Sprint Goal.
Alternative: Avoid planning both overly optimistically (e.g., assuming absolutely smooth sailing) and overly pessimistically (e.g., assuming WW3 breaks out); just accept that unusual events take us out of our comfort zone of being predictable, and learn over time which level of randomness is "normal."

Cause: Value changed
Challenge: Something happened that made new work more valuable than the work originally planned. While this shouldn't happen frequently within a Sprint, it can be part of discovery work.
Alternative: Ensure there is clarity within the team and organization that the primary goal is maximizing value and customer satisfaction, not meeting plans.

As we can see from the above table, "Sprint Predictability" is a local optimization that gives people a cozy feeling of being fully in control, when in reality, they're distracted from creating value for the organization. 


Re-Focus

As much as managers, and even some Scrum Masters, like to use metrics and numbers to see whether teams have high predictability on their Sprints, we need to re-focus our discussion towards:
  1. How well do we understand which goal we're trying to achieve? (Level of Transparency)
  2. Do we understand, and have the ability to generate, value? (Ability to Inspect)
  3. Since "the biggest risk is taking no risks", let's agree on how much risk our organization can bear (Ability to Adapt)
When we focus on these pillars of Scrum, we will go in an entirely different direction from "becoming more predictable" - we need to improve our ability to respond swiftly and effectively as new information arises!

And once we have high responsiveness, we can make a formidable argument about whether a "Sprint Predictability Index" has any value at all.

2 comments:

  1. While I agree with Wolfram that Sprints are batches, my curiosity here is about just what is being predicted. Perhaps I should say, about the relationship between what is produced over a series of sprints and the accuracy of the sprint prediction. Does analysis show that teams that can predict the most closely on a sprint-by-sprint basis are the most productive? (I suspect there actually might be an inverse correlation.) If there is a negative correlation, then it would mean that focusing on velocity is harmful to production (which, according to theory, is the result you would expect). I guess it could be considered a kind of local optimization (which human beings really LOVE to do).

  2. This comment has been removed by a blog administrator.
