Monday, January 30, 2017

User Story Writing - what does that mean?

I keep hearing the question "How to write good/better user stories?". A quick search on Google reveals over 25 million hits, with the first page linking to Scrum gurus such as Mike Cohn and Roman Pichler, giving examples of "good user stories" and guidelines for writing them.
Let's dig deeper. You want to write better user stories? What does the term "user story" even mean?
"I don't think it means ..." courtesy of

The Connextra template

As a <ROLE> I want <FEATURE> so that <REASON>
Some guys at Connextra figured out in 2001 that a good way of formulating user stories is with this template. It helped them solve their problem, and now many treat it as a (near) essential part of Scrum.
An entire industry has been created helping Product Owners "write better user stories" based on this template. There are many good tips around, including clarity of reason, specific acceptance criteria, separation of concerns and many others. All of them miss the point. The Connextra template is an "agile template for writing requirements". Using that template will not result in a "User Story".

So, what's a user story?

As heretical as this may sound: can you imagine a user telling a story to the developers?
Someone has a problem or a need and talks about it. We decide to create software to solve this problem.

What's the PO's role in that?

The PO has the main responsibility of deciding which item in the backlog is the most valuable and should therefore be delivered first. For this, it's a good idea to understand what the user's problem is, how big it is - and how much value the user gets from having it solved. This means you need to listen to the user and ask questions that help in the prioritization process.
It's OK to act as a mouthpiece for the user in cases where users don't have a voice.
In other cases, the PO has the responsibility of ensuring that developers get first-hand information and a thorough understanding of the problem.

What's the team's role in that?

In Japan, there is a philosophy that it's the student's responsibility to understand their teacher. In western circles, students blame the teacher when they can't understand them. Let's just say that a blame culture helps nobody - and that users often don't understand why they have the problem they are facing.
So, the team has the responsibility of figuring out what the user means. And there is no better way of figuring that out than by interacting and discussing with the very person who is concerned.
Rather than point fingers at the PO for requesting better stories, the team should learn to understand their users. 
Asking questions is a plausible way of learning. Creating common models is another. Blaming leads nowhere.


When I work as Product Owner, I'm not stuffing information into computer-aided ticket systems. And I'm not "writing user stories" at all. I create doodles. Every story card I create is a doodle. And when we get around to it, the first question my team asks is: "What do you mean by this one?"
That's where the discussion starts. It ends when we're all on the same page. 


If you want to write requirement specifications, please do so. Just don't call them "user stories".
If you want to work with user stories, try telling stories and asking questions.

Tuesday, January 24, 2017

TWEAK: Willingness

When trust is given, it must be followed with desire - with willingness. Willingness is "the other side" of motivation. Agile Development values intrinsic motivation over extrinsic motivation, so when willingness is impeded, you must find the root cause so that people can release their potential.
As a Scrum Master, you should consider, as taken straight from Dan Pink's "Drive":
  • Purpose: Is the product important to the:
    • Product Owner?
    • Customer?
    • Management?
    • Team?
    • Society?
    • World?
  • Autonomy: Who calls the shots on:
    • Team constellation?
    • New features?
    • Processes?
    • Technology?
    • Activities (e.g. team events)?
  • Mastery: Considering Shu-Ha-Ri. Where is the team regarding:
    • Being a team?
    • Their technology?
    • The Product?
    • The Market?
    • Agility?
Do not feel pressured to achieve full willingness from everyone, both management and the team, on day one. Moving a rather complacent individual to high willingness may take a long time, depending on the circumstances and on how long the complacency has had to set in.

Thursday, January 19, 2017

Draw Toast - learn to understand each other

Have you ever wondered why it is so hard to reach a common understanding? How often do people talk about the same thing without meaning the same thing? "Draw Toast" is a fun exercise to help us reflect and learn about how we ourselves think - and how differently those around us think. Let me share with you my experience from my first-ever Draw Toast session during the Open Space of ScrumTisch Köln.

The process of making toast - which model is "right"? Mine! ... oh wait!

Stage 1: Draw Toast

I instructed the participants to each grab a sheet of paper and a pen - then "Sketch the process of making toast, starting from the package until consumption". I wrote these instructions on a whiteboard. Below that, I sketched a piece of fluffy toast. They had five minutes.
Note to self: Five minutes was too much. People started to get bored. Three would have been plenty.

Toast? Close enough.

Stage 2: Discuss

In stage 2, I asked participants to pair up with someone else and exchange the sketches. Then, I instructed: "There is no right or wrong model, they only serve to have a conversation. Every sketch is slightly different, so take a look at the differences. Ask your partner to explain the intention behind their model. For example 'I see you have this step which I don't have: why is this important for you?' - or 'You omitted this step ... why?' Two key constraints: Do not judge. Avoid statements such as 'wrong' or 'bad'. And: Try to learn something." I gave them five minutes.
Note to self: Five minutes was a bit short. Seven might have been better.

The room went into bubbling discussion. We did some overtime until the room was silent again.

Stage 3: Reflect

In the third stage, I simply posed the question: "What did we learn ... about ourselves and about others?" and opened the discussion, jotting down bullet points.

Left: About yourself. Right: About others.

Some of the key points that came up:

About myself, I learned:

  • we make tons of implicit assumptions
  • we base our models on our own habits
  • it's really difficult NOT to judge
About others, I learned:
  • Other people's approach differs
  • "Crispy" or "warm" is a personal preference
  • Hey, this guy likes strawberry jam!

    Stage 4: Debrief

    Participants already saw that even in a process as simple as toasting, expectations and unspoken assumptions vary widely. The visible model helps us ask questions which bring us closer together. Until we had that model, we didn't even realize how different our understanding was. An open-minded conversation is absolutely essential to reach a common understanding.
    I dismissed the group with the suggestion to spend half an hour reflecting on how our own mental models affect how we perceive and interact with those around us.

    Personal reflection

    Conducting Draw Toast with a group of over 30 people within 30 minutes was a challenge and I was amazed it actually worked.

    I intentionally broke the process suggested by Tom Wujec because I was simultaneously experimenting with the premise of non-judgmental discussion I picked up from Marshall Goldsmith and Liberating Structures. The experiment ended well.

    The Liberating Structure "1-2-All" was extremely useful. It would have been better if I'd had enough time to take one step further into "1-2-4-All", having pairs choose one of the two models and discuss them with another pair. That would have taken another 8 minutes which would have been worthwhile.

    One point which came up during the reflection was this: The shape of the toast I presented was copied by nearly all participants. Only during reflection did one of them realize, "Hey, this is not what toast actually looks like!" With a seemingly innocent visualization, I had already instilled a thought pattern into the audience. I had manipulated their concept of "toast"!
    Afterwards, I had a discussion with another coach on this matter. He jested "If you had drawn a frying pan next to the toast, we'd probably all have integrated a frying pan into the process."
    He suggested that as a potentially interesting experiment for the future: "What happens when you give people nonsensical preconditions, then ask them to design a process around them?"
    Note to self: Got to try that in the future. Need a setting ...


    Draw Toast can be used both for team building and during Retrospectives. It's a great exercise to help people in a team understand both themselves and each other better. Using Liberating Structures, you can do this easily with an entire room full of people.

    My suggestion: Give it a try.

    Monday, January 16, 2017

    Bringing power into your Retrospectives

    Many people consider the purpose of a Retrospective to be letting the team reflect on what went well or badly in the last iteration, then decide on a few significant modifications to the process. Well - that's not wrong. Yet, it's a bit short-sighted. The most powerful retrospectives transcend that level. Here are a few pointers:

    Bored of yet another Good-Bad-Improve Retro? Good!

    Fit for purpose

    Let's start out with the question: What is the purpose of a Retrospective? The obvious answer is: "To improve the process." Let's dig below the surface by asking more specifically: What determines the effectiveness of the process?

    An action is either what we do - or how we do it. You cannot expect an effective process if it contains ineffective actions. Therefore, we want to replace ineffective actions with more effective ones, abolish unnecessary actions, and harness the power of our most effective actions.

    The standard purpose of retrospectives is to improve actions of the team.

    The more things you do at the same time, the less focused your work is. The more disruptions of the work we tolerate, the fewer business results we create. An unfocused process generates low value even when every action is maximized for effectiveness.

    Another purpose of retrospectives might be to increase focus of the team.

    While Retrospectives are points of reflection, they are often induced, triggered reflection. Having reflection points on the calendar is good - albeit very limited. To reduce delays in improvement, increase understanding and maximize the likelihood of successful changes, reflection needs to be an innate skill of the team.

    Retrospectives can help the team improve their reflection ability.

    Systems Thinking
    The system within which a team operates constrains the team in many ways that they may be unaware of. For example, a local optimization may make work easier for the team, yet destroy business opportunities and therefore reduce sustainability. Understanding the impact of the system on the team and the impact of the team on the system is essential to effectively improve.

    Retrospectives can foster Systems Thinking.

    Collaboration is not only individual people doing their work with proper alignment - it is people integrating their work with each other. Just like Pair Programming combines two brains on one task, collaboration enables the team to accomplish the same thing more easily, faster and better.

    Retrospectives may open doors for better collaboration.

    A team which is coached through a Retrospective is not only conducting activities. While the coach provides tasks to the team in order to guide their reflection process, the coach has the precious opportunity to observe how the team behaves and thinks in a controlled, protected environment.

    Retrospectives are a tremendous opportunity to observe.

    A wise coach not only leads a Retrospective to a result by guiding the team through the Retrospective process; the coach also actively controls the process and experiments with the social and creative dynamics of the team while doing so. These experiments generate insights into the psychological and social structure of the team, which can later be used to change behaviours.

    Retrospectives are the coach's experimentation sandbox.


    To maximize the power of your retrospectives, you need to transcend the level of simply defining improvement actions. Use Retrospectives on multiple levels at the same time to:

    • Improve the process
    • Focus the team
    • Instill a reflection mindset
    • Nurture collaboration
    • Observe the team
    • Conduct social experiments

    When planning your Retrospectives, move away from finding techniques which entertain the team while going through the mandatory frequent improvement routine. Transcend this level and look for ways to work with the team on many levels.

    The potential optimization goals of Retrospectives described in this article are not comprehensive. They are intended as a reflection opportunity to maximize the impact of your Retrospectives.

    Add power to your Retrospectives by clearly determining the goals you want to reach, then finding ways to achieve this. 

    What to do in a Review?

    The Review meeting is the Inspect+Adapt ceremony of the Product. It is intended to align customers and developers. Sometimes, teams are confused as to what to do in a Review to reach this goal.

    What you need

    A good Review creates transparency. Mutual understanding is improved on many levels:

    • Customers learn what the product looks like right now
    • The product owner learns what customers need next.
    • Developers learn what customers care for - and what they don't.
    • There is healthy discussion about why developers chose specific design approaches

    The best Reviews end with all attendees being aligned on what will happen next.
    If stakeholders are satisfied, the next steps are clear to the Product Owner and the developers.
    If stakeholders are dissatisfied, corrective actions are identified for prioritization and the next planning.

    Maximize feedback

    The Agile Manifesto states that "Working Software is the primary measure of progress". Developers should never feel pressured to justify that they were "busy". We don't need to prove or justify anything - we need feedback.
    Any element of the Review that is not aimed at obtaining feedback is waste. When preparing a Review, the most important questions should be "Which feedback will we get from this?" and "Why would we need that feedback?"

    Bad practices

    The SAFe 4.0 Scrum Master Orientation suggests that in a Review, in SAFe called "Team Demo", the team should do the following:
    Teams demonstrate every Story, spike, refactor, and NFR 
    Now, this is actually a terrible idea for Reviews. The topic has been discussed on LinkedIn.
    Let's discuss why this is a bad idea, point by point:

    Bad idea #1: Team doing it
    When you really want to know how the customer/stakeholder feels, they should be in the driver's seat. From an involvement perspective, the stakeholders should be as active as possible during the Review. This maximizes the amount of feedback and learning the team will receive.

    It's ok if the team explains what to look for; it's a bit troublesome if the team has to walk the stakeholders through.

    Bad idea #2: Demonstration
    Real learning and the interesting discussions happen when someone other than the developers (who know exactly how they built it) has to figure out how to use the new feature. Many quirks remain hidden until a first-time user tries out the product.

    Let the customer/stakeholder play with the application.

    Bad idea #3: Review Stories
    Stories are explanations of the customer's needs. The team should not demonstrate what work they have done to fulfill the customer's needs. Much more interesting is the real change to the product and the impact it has on the customer.

    Move from stories towards customer centered goals.

    Bad idea #4: Review Spikes
    Spikes are backlog items intended solely for the team's internal learning. While the team may use spikes to discover what the customer really wants or how to serve the customer need, they do not result in Working Software.

    Without categorically denying that spikes should be demonstrated, a spike is only interesting if the developers need a specific decision from the stakeholders based on that learning.

    Bad idea #5: Refactoring
    The purpose of refactoring is to "provide the means to integrate a new feature". Refactoring is a "semantics-preserving transformation" - from a user perspective, it makes no difference. It is hygiene work that just needs to be done, and it generates no learning opportunity for stakeholders.

    Keep refactoring work out of the Review.

    Bad idea #6: NFRs
    Non-functional requirements are, by their very nature, not functionality. Their impact is often hidden from a user perspective. While it is possible to demonstrate NFRs (best with real examples rather than static images or slides), this returns to the question: "What learning do you expect to come out of presenting that to the stakeholders?"

    NFRs belong in the Review only if they are expected to produce learning.


    Reviews are fairly simple.
    The team should be offering stakeholders an opportunity to inspect the team's attained goals and provide feedback.
    The format of a Review should maximize activity, interaction and discussion of all those involved. If everyone can learn something from the Review, it is valuable.
    The core question for every Review would be: "How can we maximize learning about the product?"

    Make your Review a valuable learning opportunity.

    Friday, January 13, 2017

    Happy New Year: A forward-looking Retrospective approach

    Most retrospective techniques focus on analyzing past events. I have created the "Happy New Year" technique to help teams transcend the past and orient towards the future. I will introduce it to you for your own perusal:

    A look at the clairvoyance Retro

    Step 1- Prepare


    In the first step, create a 2x2 matrix and entitle it with the period the team should address.
    Now, in January, it's a decent idea to take the new year. Taking the "next release" or the next fiscal quarter may be just as valid - it depends on how far ahead you want to look.

    Good and Bad

    Entitle the columns with "Good" (fortunate) and "Bad" (unfortunate). I've used a four-leaf clover and a "Friday the 13th" (as today is just that day) to depict these; feel free to use your own.
    The idea is that in the left column, the team should enter events that will help the team - and in the right column, events that become setbacks.

    Prediction and Wish

    The rows are entitled "Prediction" and "Wish". I used a crystal ball to depict the predictions and a magic genie lamp for the wishes.
    The idea is that in the upper row, the team should enter events that are fairly likely to occur - and in the lower row, events that the team hopes for, even though there is no evidence that they will happen.

    The four squares

    The upper-left square is "Predict-Good". This is for upcoming events that the team looks forward to, such as "We get Magic Leap on our desks".

    The upper-right square is "Predict-Bad". These are upcoming events the team would prefer to avoid, if possible - such as "A competitor takes over some of our customers".

    The lower-left square is "Wish-Good". This is what the team might wish for - such as "Our product wins the NetGeeks award".

    The final, lower-right square, "Wish-Bad", requires a bit of further explanation: Who would wish for bad things? Here, we are looking for events that will severely disrupt our team/organization with an opportunity for massive change. For example "Our cloud hoster goes bankrupt" would result in massive business loss, yet it would be a great opportunity to discover a better hosting strategy.

    Step 2 - Collect

    Let the team think

    Instruct: "For each of these four scenarios - name specific events that you can think of. Try to find at least 1, and no more than 2, items for each of the boxes."
    Give the team 5 minutes of silence so that each team member has enough time to write their own sticky notes.

    Post and Introduce

    After the time is up, let the team members post their sticky notes on the board, introducing with a few sentences what the item is. At this time, do not discuss further implications or potential solutions.


    Potentially, the team will come up with items which are not events, such as "We learn something new" - which is more of an ongoing process than something that you could stick on a calendar and validate with a yes/no hypothesis test. Let the author re-formulate these into an event (e.g., "We attend an HTML class").


    Any duplicate notes should be removed. 


    Let the team members quickly discuss whether the notes are in the right segment - are the predictions more like wishes, are the wishes actually coming true already?

    Step 3 - Prioritize

    You can use dot-voting or any other technique to let the team select a few topics for further exploration.

    Hint: In some cases, it's useful to de-scope elements from one (or more) box, such as "Predict-Good", if your intention is more to challenge the team.

    Step 4 - Discuss

    Depending on the box, the discussion questions look different.

    Predictions: "What will we do after this happens?" - and: "How can we ensure/avoid this?"
    Wishes: "Why would we want this to happen?" - and: "What will come out of that?"

    It's a good idea to write down the key points of the discussion for reference.
    Discuss one or two items - and jot down the key points.

    Step 5 - Refocus

    The discussion will most likely yield multiple strands. In order to elaborate specific actions, you will need to refocus by shutting down all but one or two of them. Again, you can use a technique like dot-voting to let the team autonomously decide with which item to proceed.
    Limit the things you want to work on

    Step 6 - Propose

    In the final step, the team should decide on a way forward, elaborating specific actions to steer into the desired direction.
    Topics with long-term relevance are often not resolved with a single change - a series of changes may be needed, which takes time. If this is the case, simply create a timeline and discuss what you want to do when.

    Create a timeline for when you want to reach your goal.
    Make sure people take responsibility.

    So, that's it. Have fun, good luck - I wish you great new insights.

    Wednesday, January 11, 2017

    Do you really want high utilization?

    Let's end the discussion about whether we should optimize for maximum utilization right here, right now - with a metaphor. Ponder your own answers for the questions.

    Your features are the cars. Your teams are the lanes.

    Lane 1 is optimized for maximum utilization (80%+).
    Lane 2 tries high utilization (50%).
    Lane 3 actively minimizes utilization (as close to 0% as possible).

    Question: If your goal is to get from A to B as fast as possible - on which lane would you travel?

    Question: What happens when a car suddenly needs to brake? (i.e. an impediment occurs)

    Question: What happens when a car needs to enter your lane? (i.e. new information becomes available)

    Transfer-Question: What is the fastest way to obtain business value in product development?

    Concluding Question: Since minimal time-to-market maximizes ROI - which utilization strategy should you pursue?
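The nonlinearity behind this metaphor can be made concrete with basic queueing theory. A minimal sketch, assuming an M/M/1 queue (random arrivals, a single server) - the service time and utilization levels are purely illustrative:

```python
# Expected queueing delay in an M/M/1 system: W_q = rho / (1 - rho) * S,
# where rho is utilization and S is the average service time.
# The delay explodes as utilization approaches 100%.
def average_wait(service_time: float, utilization: float) -> float:
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (1.0 - utilization) * service_time

# With an average of 1 day of work per item:
# 50% utilization -> 1 day of waiting
# 80% utilization -> 4 days of waiting
# 95% utilization -> 19 days of waiting
for rho in (0.5, 0.8, 0.95):
    print(f"{rho:.0%} utilization: {average_wait(1.0, rho):.0f} day(s) of waiting")
```

The exact queueing model is an assumption - real development queues behave differently in detail. The qualitative lesson holds regardless: the fuller the lane, the longer everything waits.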

    Clearing up some simple SAFe misgivings

    In this post, I will address a few misgivings that have been expressed towards SAFe.
    As each of them is independent, this post is more of a patchwork collection than a coherent stream.

    SAFe has a Hardening sprint

    Point taken, "Hardening Sprints" are a bad idea. They stem from Waterfall thinking.
    Hardening Sprints imply that the teams' Definition of Done doesn't really mean "Done" - and Hardening as a separate activity encourages shoving technical debt under the rug.

    The designers of SAFe acknowledge that. SAFe4 has removed the concept of a "Hardening Sprint" and replaced it with "Release Any Time", i.e., a clear shift towards Continuous Delivery.

    SAFe is "Agile flavor" for traditional managers

    Many people have the concept that "Agile" is something that teams do, while existing management structures, philosophies and practices remain untouched.

    Far from it! While SAFe acknowledges that existing traditional organizations have lines of management that can not simply be abolished overnight, there are massive differences:
    1. Setting: Teams do not work like in a traditional setting. The work of teams and managers no longer corresponds to traditional management.
    2. Responsibility: Teams are self-managed and autonomous. The responsibility of teams and managers shifts.
    3. Structure: Teams belong to the ART and work for the ART. They no longer belong to a line and work for projects. Gone are the days of capacity management, reporting and controlling.
    4. Leadership: SAFe emphasizes Servant leadership and proposes a leadership model that is incompatible with Command and Control.
    5. Mindset: SAFe acknowledges the Lean-Agile principles, which require everyone to rethink how they work.
    Traditional managers need to un-learn their old role and learn a new role. While SAFe provides enough leeway for a transition, those who do not embrace agility will become organizational impediments sooner or later.

    SAFe is evil

    Ha, that's a good one! Please, define "good" and "evil"?
    SAFe is a freely accessible framework - all knowledge of SAFe is provided free of charge on the official website. Of course, Scaled Agile is a for-profit company and is trying to earn money.
    In doing so, Scaled Agile applies marketing and sales techniques with the purpose of inviting you to give SAFe a try.

    As with any sale, the product may not fully match your expectations if you failed to inform yourself about it before unpacking it. And likewise, the product may be mishandled by those who fail to follow the instructions.

    It's your own fault when you choose to be uninformed, as sufficient information is clearly available - including unpaid(!) neutral case studies from those who tried SAFe.
    And it's also your own fault when you fail to get competent help during a SAFe transformation: There are enough competent experts out there.

    How does that make SAFe evil?

    SAFe isn't innovative

    Tough shot.

    Yes, to be honest, there are no innovative ideas in SAFe. On the contrary, SAFe observes what agile companies do in order to solve their own problems - then provides the most reliable concepts bundled into a framework.
    Those concepts have already been tried, proven and validated on many occasions before they are accepted into the SAFe canon.

    You can not buy "Innovation" and you can not have it forced on you. "Innovative" is something you need to be.
    SAFe is a framework. A framework can help you with that - if you choose to use it for that purpose.

    So, yes - SAFe isn't innovative. And it won't make you innovative either - unless you choose to be.

    SAFe only collects things that we already have at our disposal

    And that is also its greatest strength. You're not going to find weird things in SAFe that haven't been tried and proven.
    Dean Leffingwell quotes Bruce Lee, "Use only that which works - and take it from wherever you find it."

    The notion that "We already have that at our disposal" is a good way of being non-helpful.
    I have to think of that old dating joke: "Can I have your number?" - "It's in the telephone book." - "And your name?" - "Right next to it!"

    It's good that the information is already there, especially since it means there are enough neutral third parties who will validate the concepts. Yet, in the presence of near-infinite information, important things may be hidden in plain sight. SAFe makes them visible and tangible.

    Reversibility Design: Black Cars or uncutting the chassis?

    How can you optimize your test strategy? 
    That depends on what you want to achieve. Do you want a test strategy that leaves less room for errors in the product - or do you want a test strategy that permits highly flexible products where you can't really predict all potential problems up front? 
    Here are two design metaphors to help you on the way: 

    Black Cars

    You can have any colour, as long as it's black - H. Ford
    The "Black Car" is a powerful design metaphor for state reduction:
    All cars weren’t black because Henry Ford was a controlling tightwad. It was simply so that the paint shop either had paint or it didn’t. That made the whole thing easier to manage.
    The Model T only came in black because the production line required compromise so that efficiency and improved quality could be achieved.
    Spraying different colours would have required a break in the production line, meaning increased costs, more staff, more equipment, a more complicated process, and the risk of the wrong colour being applied. (taken from here)

    Black Cars minimize tests while maximizing the reliability and robustness of the product.

    Uncutting the metal

    "Uncutting the metal" is a really powerful design metaphor for impact control:
    In Henry Ford’s factory, once you cut a piece of metal, you couldn’t uncut it. If you make a decision reversible, then you don’t need to test it with the kind of rigor that you’re talking about.
    Uncutting minimizes tests while maximizing the flexibility of the product.

    Learn both how to build black cars - and: how you can uncut.
    It helps people design more effective tests and better applications.

    Both metaphors are taken from the agile testing guru Kent Beck. The original article can be found here.

    Tuesday, January 10, 2017

    Normalized Story Points - what's that?

    SAFe4 suggests that Story Points should be normalized across the Release Train. Additionally, it provides a method for estimating the first Sprint that could be considered inconsistent with the idea of Story Points. Let us take a closer look at the idea.

    What are Story Points?

    Story Points are, in short, an arbitrary measure quantifying the expected effort to get a backlog item "Done". They are expected to help the team plan their capacity for each iteration and to give the Product Owner a rough understanding of how much the team might be able to deliver within the next few months. This can be used, for example, to calculate estimated Release dates and/or scope.

    There is an additional purpose that is also suggested by Mike Cohn in his blog: When you know the amount of Story Points that can be completed per Iteration, you can assign cost estimates to backlog items, helping the PO make better business decisions.
    For example, a backlog item might turn into a negative business case once the cost is known, and can then either be reworked for better ROI or discarded entirely.
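The arithmetic behind this is simple. A minimal sketch with purely hypothetical figures:

```python
# Hypothetical figures: a team that costs 60,000 EUR per sprint
# and completes 30 story points per sprint on average (its velocity).
sprint_cost = 60_000
velocity = 30
cost_per_point = sprint_cost / velocity      # 2,000 EUR per point

# A backlog item estimated at 8 points then costs roughly:
item_points = 8
item_cost = item_points * cost_per_point     # 16,000 EUR

# If the item's expected business value is below 16,000 EUR, it is a
# negative business case: rework it for better ROI or discard it.
```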

    SAFe picks up this idea in the WSJF concept, i.e. prioritizing features that have a good ROI/effort ratio.
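WSJF divides the Cost of Delay by the job size and delivers the highest quotient first. A minimal sketch - the feature names and all numbers are invented for illustration:

```python
# WSJF = Cost of Delay / Job Size, where Cost of Delay is the sum of
# user-business value, time criticality, and risk reduction /
# opportunity enablement (all as relative estimates).
def wsjf(business_value: int, time_criticality: int,
         risk_reduction: int, job_size: int) -> float:
    cost_of_delay = business_value + time_criticality + risk_reduction
    return cost_of_delay / job_size

features = {
    "checkout rework": wsjf(8, 5, 3, 8),   # 16 / 8 = 2.0
    "search filters":  wsjf(5, 2, 1, 2),   #  8 / 2 = 4.0
    "admin reporting": wsjf(3, 1, 1, 5),   #  5 / 5 = 1.0
}

# Highest WSJF goes first: the small "search filters" job wins
# even though its absolute business value is the lowest.
ranked = sorted(features, key=features.get, reverse=True)
```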

    The most important thing about Story Point estimation is that every member within the team has an understanding of what a Story Point means to the team. It can mean something entirely different to other teams, hence caution should be exercised when Story Points are referenced outside the team.

    What are Normalized Story Points?

    SAFe's delivery unit is the Agile Release Train (ART), effectively a "Team of Teams".
    Just as a Story Point is intended to mean the same thing to one team, it should mean the same thing within a Team of Teams.
    Otherwise, the Product Manager would receive different estimates from different teams and would be completely unable to use these estimates for business purposes. This would render the estimation process useless and the estimates worthless.

    As such, SAFe suggests that just as the individual members of a Scrum team need a common understanding of their Story Points, the ART's individual teams require a common understanding of their Story Points to make meaningful estimates.

    Why can't each team have their own Story Points?

    In SAFe, all teams on the Release Train pull their work from a single, shared, common, central Program Backlog. This Program Backlog serves to consolidate all work within the ART, regardless of which team will actually pull the work.
    A key Agile concept is that the work should be independent of the person who does it, as specialization leads to local optimization.
    From a Lean perspective, it is better if a slower team starts the work immediately than to wait for a faster team.

    Especially when cross-team collaboration is an option, the slower team can already deliver a portion of the value before the faster team becomes available to join the collaboration. This reduces the overall time that the faster team is bound and hastens final completion.

    If Story Points differ among teams, every single backlog item might need to be estimated by every single team in order to see how long each team would take to complete it. This type of estimation is possible, yet leads to tremendous waste and overhead.

    If Story Points are normalized across teams, it is sufficient to get a single estimate from a single team, then look at the velocity of each team to get an understanding of which team would take how long.

    Another benefit of normalized Story Points is that when Team A needs support from Team B to meet a crucial deadline, Team B's Product Owner knows exactly how much to drop from the backlog in order to take on some stories from Team A without wasting effort on re-estimation.
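    The forecasting described above can be sketched in a few lines. The team names, velocities, and epic size below are purely hypothetical - the point is that a single normalized estimate suffices for every team:

    ```python
    import math

    def iterations_needed(points, velocity):
        """Full iterations a team needs to complete `points` of work,
        given its velocity in normalized Story Points per iteration."""
        return math.ceil(points / velocity)

    # Hypothetical per-team velocities, all in the same normalized unit:
    velocities = {"Trolls": 42, "Badgers": 30}
    epic = 120  # a single estimate, produced by a single team

    forecast = {team: iterations_needed(epic, v) for team, v in velocities.items()}
    # Trolls: ceil(120/42) = 3 iterations; Badgers: ceil(120/30) = 4 iterations
    ```

    Because the unit is shared, comparing the two forecasts is a division per team rather than a fresh estimation session per team.
    
    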

    How does SAFe normalize Story Points?

    In the first Program Increment, the ART is new. Both the individual teams and the ART consist of members who have not collaborated in this constellation before. Teams are in the "Storming" phase - as is the ART itself.
    This means Working Agreements are unclear. The DOD is just a vague ideal that hasn't been applied before and might have unexpected pitfalls. Depending on the product work, the environment is also new and unknown. Effectively, the teams don't know anything about how much work they can do. Every estimate is a haphazard guess.

    One approach might be to have a discussion first to identify a benchmark story, assign benchmark points and work from there. This discussion will lead to further discussions, all of which provide no customer value.

    To avoid this, SAFe suggests the following approach:

    Start with Relative Estimates

    In the first PI Planning, teams take the smallest item in their backlog and assign it a "1". Then, using Relative Estimation (based on Fibonacci numbers), they assign a "2" to the next bigger item, a "3" to an item slightly bigger than that one - and so on. Once they have a couple of reference points, they can say "about as much as this/that one".

    Of course - all of this is guesswork. But it's as good as any other method in the absence of empirical data. At least teams get to have a healthy discussion about "what", "how" and potential risks.

    How is Velocity calculated based on Normalized Story Points?

    Again, in the first Program Increment, we have absolutely no idea how many Story Points a team can deliver. Since we only have rough person-day estimates, SAFe suggests a very simplistic approach for the first PI Planning:

    We know how many team members we have, and we also know how many days they *expect* to be working during the Iteration. (Nobody knows when they will be sick, so that's a risk we just take.)

    A typical SAFe Iteration is 2 calendar weeks, i.e. 10 working days. We multiply that number by the number of team members.
    Base Capacity = 10*Team Members

    From that iteration capacity, we deduct every day of a team member's absence. 
    Adjusted Capacity = Base Capacity - (Holidays * Team Members ) - (Individual Absence)

    Finally, we deduct 20% - as planning for 100% utilization is planning for disaster. We round this down.
    Initial Velocity = Adjusted Capacity * 0.8

    Here is an example:
    Team Trolls has 6 developers. There is a single day of vacation and Tony needs to take care of something on Friday.

    Base Capacity = 10*6 = 60 SP
    Adjusted Capacity = 60 SP (Base) - 1*6 SP (Holidays) - 1 SP (Absence) = 53 SP
    Velocity = 53 SP * 80% = 42 SP

    So, Team Trolls would plan Iteration 1 with 42 Story Points. If the numbers don't add up, it's better to err on the lower side than to over-commit. They might choose to fill the Sprint with 39 Points, for example.
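    The three formulas above collapse into one small function; Team Trolls' numbers reproduce the 42-point result. The function name and defaults are illustrative, not part of SAFe:

    ```python
    import math

    def initial_velocity(members, iteration_days=10, holidays=0,
                         individual_absences=0, load_factor=0.8):
        """SAFe-style first-PI guess: one Story Point ~ one developer-day,
        minus absences, minus a 20% buffer against 100% utilization."""
        base = iteration_days * members
        adjusted = base - holidays * members - individual_absences
        return math.floor(adjusted * load_factor)

    # Team Trolls: 6 developers, 1 shared holiday, Tony out for 1 day
    print(initial_velocity(6, holidays=1, individual_absences=1))  # 42
    ```

    Rounding down (rather than to nearest) is deliberate: as the post notes, it's better to err on the lower side than to over-commit.
    
    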

    What happens to Velocity and Normalized Story Points over time?

    In Iteration 1, we merely guessed. Guessing is better than nothing. We learn, inspect and adapt. For example, Team Trolls has discovered that they can slice through Stories like butter and take on more points in the future - while Team Badgers has discovered they need to do significant support work for other teams (such as knowledge transfer), slowing them down. They would then take on fewer Story Points in subsequent Sprints.

    Here is a sample of how an ART's velocity may develop over time:

    Tracking velocity over time

    As we see in this example, teams Inspect+Adapt their own plan, feeding useful values back to Product Management to Inspect+Adapt the overall PI Plan and (if applicable) Release plans.

    No re-estimation of previously estimated backlog items is needed. As new work becomes available, "Done" Stories can be used as benchmarks for additional backlog items, keeping them in line with the current backlog.

    Caution with Normalized Story Points

    Story Points are not a business metric. Neither is Velocity. They are simplified planning metrics intended to minimize planning effort while providing sufficient confidence in the created plan.
    The metrics are subject to the same constraints as in single-team Scrum, i.e. the following anti-patterns need to be avoided:

    Do not:
    1. Assume estimates are ever "correct". They are - and remain - estimates.
    2. Measure progress based on "Story Points Delivered". Working Software is the Primary Measure of Progress.
    3. Compare teams based on their velocity. Velocity is not a performance metric.
    4. Optimize the ART structure based on velocity figures. An ART is a highly complex adaptive system.
    5. Try to maintain a constant/increasing velocity. Capacity planning is intended to minimize the risk of failure and subject to reality. Velocity is just an indicator to improve the reliability of planning.


    The normalization of Story Points solves a problem that does not exist in a non-scaled environment, i.e. the question "What happens to overall progress when another team takes on this backlog item?"
    This helps the ART shuffle backlog items among teams in order to maximize for overall product value, rather than team utilization.

    In the absence of better information, we use a crude rule-of-thumb to get the first Story Point figures on our backlog. Once we have completed stories, we can determine which of them are useful as reference points. The initial tie between a Story Point and a developer-day quickly gives way to a rather intangible virtual unit. This must happen to keep the understanding of a Story Point consistent across teams.

    In the absence of better information, we use a crude rule-of-thumb to get initial team velocity. When we have completed an iteration, we use the real results as reference points for the future.
    Within a few iterations, velocity's correlation to capacity-days shifts towards the intangible virtual unit of Story Points that are disconnected from time. This must happen to maintain Velocity as a functional, consistent planning tool in the ART.

    In an ART, it is even harder than in Single-team Scrum to resist the urge to evaluate teams based on Velocity. The RTE has the important responsibility to maintain the integrity of Story Points by stopping any attempts (usually by management) to abuse them.

    Monday, January 9, 2017

    Do we really need Scrum?

    According to a recent discussion on LinkedIn, Dr. Jeff Sutherland claims regarding SAFe, "If you remove all the waste from SAFe you wind up with Scrum." Now, it's obvious that the creator of each product promotes their product - and that Dr. Sutherland would promote the merits of Scrum as a "no-waste form of organization". 
    Let us ignore for this post the question of whether SAFe is really just "Scrum plus waste" and instead consider the question "Is Scrum really where we want to take our organization?"

    Conway's Law

    "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations" - M. Conway
    Any defined structure within an organization channels communication, so any predefined structure affects the design of the created product.

    Conway's Law should spark the question, "Is the form of system created by Scrum even desirable?"
    To answer this question, we first need to drill a little into why people actually use Scrum.

    The Waterfall Strawman

    Scrum is often compared to Waterfall software development and portrayed as the "better alternative". Well - that's true.

    Waterfall was proclaimed dysfunctional by Winston Royce in the very paper that first described it. People adopted it regardless.
    Now, let us compare Scrum (works) with Waterfall (doesn't work). Of course Scrum looks much better.

    Does that mean Scrum is the best way to do things? No. We haven't proven that. All we did was apply the "Strawman fallacy" and combine it with a "bifurcation fallacy".

    The real question is not whether we should use Waterfall or Scrum. With a bit of common sense, we would choose option #2.

    The real question is "What system do we need?".
    In this case, the answer for "Scrum" is much less obvious. Let us explore further.

    The problems

    Let us examine some common problems that we tackle with Scrum:

    1. We can't reliably plan for the Unknown, so we make short-term plans in scheduled, short (2-week) iterations.
    2. We don't know the perfect process, so we need Retrospectives at the end of every Sprint to improve.
    3. We don't know how the customer likes our product until they see it, so we need Reviews at the end of every Sprint to adapt our plan.
    4. We need to remain synchronized on our Sprint Goal, so we need Daily Scrum Meetings.
    5. We need someone to keep the Product Backlog healthy, so we need a Product Owner.
    6. We need someone to make Scrum work, so we need a Scrum Master.

    Well... that all looks sensible. Until you look closer.

    Sprint Planning

    Statistically speaking, regardless of team size and sprint length, your Sprint should contain at least 4 items in order to guarantee that you have something noteworthy at the end of every Sprint. This means that on average, each item for the Sprint should not be larger than 1/4 of the Sprint.
    (The proof is left as an exercise for the reader)

    This means that in every Sprint, there is at least one item that was planned at least three times further ahead than it takes to actually deliver it!

    Think about what this means.
    We badmouth Waterfall because, allegedly, Unknowns make it impossible to plan too far ahead - yet we keep a planning horizon of 300% or more of the work we're actually doing at the moment.
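    The 300% figure follows from simple arithmetic, assuming four equally sized items worked in sequence over a 10-day Sprint:

    ```python
    SPRINT_DAYS = 10
    ITEMS = 4

    item_days = SPRINT_DAYS / ITEMS      # 2.5 days of work per item
    # The last item sits in the Sprint plan while the other three are built:
    wait_days = (ITEMS - 1) * item_days  # 7.5 days between planning and start
    horizon = wait_days / item_days      # planned 3x its own duration ahead

    print(f"planning horizon: {horizon:.0%}")
    ```

    With more (smaller) items per Sprint, the ratio for the last item only gets worse.
    
    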

    Does a 300% planning horizon actually sound feasible? Does it sound optimal?
    Is it possible to live with a 100% planning horizon? Yes. We plan only the things we actually work on next. Of course, this means that within a 2-week period, we will plan many times, based on more accurate information and consuming less time in each meeting.

    When it comes to planning, there is a better way than Sprint Planning.


    Retrospectives

    When it comes to Continuous Integration, Martin Fowler stated, "Continuous Integration means to integrate continuously". And by that he meant: not monthly, not weekly, not daily - but continuously! Experienced Scrum practitioners would agree that integration at Sprint End is a terrible idea and a surefire recipe for disaster.

    Now, after this slight detour, let me ask this question:
    If "Continuous Integration" does not mean "Integrate at Sprint End", why would Continuous Improvement mean "Improve at Sprint End"?

    The best time to consider whether we should improve the way we are doing things is not after we have done them - it's while we are doing them!

    When it comes to Continuous Improvement, there is a better way than Sprint-End Retrospectives.


    Sprint Review

    When is the best point to get customer feedback on a backlog item? About a week after it's Done - or earlier? This question sounds weird, yet it is what we are doing with Sprint Reviews.

    The Agile Manifesto states that we should constantly collaborate with business, and that includes customers - throughout development.
    The entire idea of the MVP is not limited to a one-time, up-front delivery of a small portion of value to validate the next step; it can be applied even within a single backlog item.
    A single TDD cycle already adds a little more value and can be presented to the customer, without them even knowing the difference. Techniques like A/B testing permit us to get customer feedback on even minuscule changes in real time, without having to wait a week after getting work to "Done".

    When it comes to customer feedback, there is a better way than Sprint-End Reviews.


    Daily Scrum

    We need the Daily Scrum because team members are working on different things and need to ensure that the Sprint Goal will still be reached. Effectively, this states that the Daily Scrum is Scrum's way of keeping the team aligned.

    That has two implications:
    What if:
    a) The team isn't working on different things?
    b) There is no "Sprint Goal"?

    When everyone on the team is working on the same thing, why would we sync on that? We are already synced.

    Let us limit the team's Work in Progress to 1 and consider the current WIP as our common goal, and Dailies become obsolete.

    When it comes to keeping alignment, there is a better way than Daily Scrum.

    Product Owner

    The Product Owner ensures product success, which includes critical business decisions.
    For all responsibilities of the Product Owner, the Scrum Guide states, "The PO may do these things or have the development team do them." - effectively, even Scrum concedes that the PO is not needed.
    That alone should trigger the question whether we need a person who has successfully delegated 100% of their work.

    One of their most important responsibilities is ensuring a well-maintained Product Backlog.

    The Backlog contains the list of all the currently known undone work. It's the team's "undone queue". I will reduce this to a simple question: "Why do we need a queue?" A queue is always an undesirable state, as it means that something important is waiting.

    JIT production systems reduce wait times. Scrum's product backlog effectively does the opposite. It is a vain promise that something may (or may not, depending on new information) be done in the future. It's good to create transparency on what the customer still needs - it's better to deliver it as fast as the customer can order.

    The PBL creates many problems that the PO has to solve. Would we really need a PO if we had JIT development?

    The PBL itself is based on the idea that we want to optimize team utilization in each Sprint. The entire concept of Utilization is another can of worms that is explored in another post.

    The Product Owner is a solution for problems that would not even exist without a Product Owner.

    Scrum Master

    This point is quite simple: If we aren't going to apply Scrum, why would we need a Scrum Master?
    Maybe an agile coach is still needed, but agile coaching works differently from how a Scrum Master works.

    The Scrum Master is a solution for a problem that exists only in Scrum.


    Based on Conway's Law, your communication structures - and therefore your product - will reflect Scrum. None of the structures created by Scrum are optimal.
    Consequently: with Scrum, you will not build the optimal product.

    Scrum allows you to successfully build products at a low cost, with low risk and in a short period of time. And - it works.

    Is it optimal? No.

    Should you use Scrum? That depends on where you want to go.

    The communication system of your organization should depend on your goal. 
    Only then will you build the optimal product.
    Don't feel pressured to reformulate your goals in a fashion that fits into a predetermined communication structure.