Tuesday, September 8, 2020

Agile Risk Management

"How do we manage risks in an agile setting?" Agile risk management differs widely from classic project risk management, because we have a different sphere of concerns. Whereas classical projects are mostly concerned with risks related to delivering within TQB (Time, Quality, Budget), an agile environment forces us to consider a much broader sphere of risks.

There are some general notes on agile risk management that may be unfamiliar or in contrast to the expectations of classic project organizations:

Risk overview

Teams (and in SAFe, ARTs) should have insight into their most relevant unresolved risks at any time. The assumption is that "if there is no risk overview, there are no relevant risks." Scrum Masters ensure that both sides of this statement are true.

At a minimum, risks are identified as such, to make them visible. Some organizations prefer to add further information, such as severity, occurrence and detection (the FMEA approach) plus countermeasures - which is only worthwhile if you have no means of addressing the risks swiftly.

Risks are treated like regular work items, and move into the backlog as "potential work to do". The teams decide whether new risks are added to the Sprint Backlog or to the Product Backlog - or to the Program Backlog, in a SAFe context.
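For teams that do record FMEA-style scores, the bookkeeping is light enough to sketch in a few lines. The sketch below is illustrative only - the 1-10 scales, field names and example risks are my assumptions, not part of Scrum or SAFe:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """A risk treated like a regular work item, with optional FMEA scores."""
    title: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (unlikely) .. 10 (near-certain)
    detection: int   # 1 (caught immediately) .. 10 (found only in production)

    @property
    def rpn(self) -> int:
        # Classic FMEA Risk Priority Number: higher means "look at this first".
        return self.severity * self.occurrence * self.detection

backlog = [
    Risk("Exploding license fees", severity=6, occurrence=4, detection=2),
    Risk("Flaky deployment pipeline", severity=7, occurrence=8, detection=5),
]

# A risk overview ordered by priority, most pressing first.
for risk in sorted(backlog, key=lambda r: r.rpn, reverse=True):
    print(f"{risk.rpn:4d}  {risk.title}")
```

Whether such scoring is worth the effort remains the team's call - as noted above, it mostly pays off when risks can't simply be resolved on the spot.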

Live updates

Agile risk management is a constant exercise of evaluating available information and anticipating probable events that should be avoided, then inspecting and adapting the determined course of action. Risk management in an agile setting happens during every event (and during daily work, as well) -

  • Lean-Agile Budgets identify financial risks
  • Refinement identifies product risks
  • Planning and Dailies identify process, delivery and organizational risks
  • Reviews and Demos identify delivery and product risks
  • Retrospectives and I+A workshops identify all kinds of risks
  • PMPO Sync identifies product and delivery risks
  • Scrum-of-Scrums (SOS) identifies organizational and process risks

As such, the risk overview is a more volatile and shifting artifact than even the teams' plan, and potentially more ephemeral than the product backlog itself.

Avoid Single Points of Failure

Organizations are most resilient when there are no single points of failure, hence risk management becomes a collaborative exercise. It's better to work with focal areas than to rely on a clearly delineated role-responsibility mapping. Everyone is expected to contribute to naming and resolving relevant risks, from the most junior developer to the most senior manager.

Scrum Masters facilitate team risk resolution and create transparency in the surrounding organization on risks outside the team's control. Ideally, the team would be able to deal with its own risks even without requiring the Scrum Master to take action.

Risk resolution

Just as every day is an opportunity to identify risks, we should deal with them before they materialize - ideally right when they are exposed. It's the team's decision how to prioritize risks against other work.

Risks outside the teams' sphere of control should be addressed via the proper channels. In a SAFe setting, the first channel for a team is usually from PO to PM or from SM to RTE, who would involve management if required.

Focus Areas

Teams succeed by collaborating and helping each other out, so let's not fall into the "sorry, not my desk" antipattern. Still, people do different things, and therefore pay more attention to different aspects. In this context, let us examine the focal areas of the common agile roles.

Product People risk focus

First and foremost, product people must take care that we build the right thing, and have the resources (both time and money) to do so. Hence, they must be aware of and deal with:

Financial risks

Financial risks are cash-flow related. We must secure an initial investment that allows us to develop something, and in order to continue, we need ongoing funding. Within an enterprise, that's typically budget. On the free market, that's revenue, usually generated through sales or subscriptions. Product people therefore need the means both to understand the current financial situation and to forecast the future, and thereby extrapolate where the risks lie.

Common financial risks include exploding license fees, stakeholders withdrawing their support, customers leaving or price wars on the market, but also pretty mundane stuff like equipment breaking down or the need for a bigger office as the team grows.

To manage financial risks, the product owner must understand their cash flow.
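As a back-of-the-envelope illustration of what "understanding the cash flow" means, even a trivial runway forecast beats having none. All numbers below are made up for illustration:

```python
def runway_months(cash_on_hand: float, monthly_burn: float, monthly_revenue: float) -> float:
    """Months until the money runs out at current burn and revenue rates."""
    net_burn = monthly_burn - monthly_revenue
    if net_burn <= 0:
        return float("inf")  # cash-flow positive: no runway limit
    return cash_on_hand / net_burn

# A team costing $60k/month, earning $25k/month in subscriptions,
# with $280k in the bank, has 8 months to course-correct.
print(runway_months(280_000, 60_000, 25_000))  # → 8.0
```

A product person who can't produce at least this level of forecast has no basis for extrapolating where the financial risks lie.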

Since financial risks are entirely out of scope for classic Kanban, XP and Scrum, there tend to be no standard team-level mechanisms for dealing with them.

Lean-Agile Budgets are one of many SAFe mechanisms to keep the predictable financial risks away from the team.

Product risks

Product risks relate to the success of the entire endeavour. We need to build the right thing right, at the right time, and ensure we adapt to changing circumstances as rapidly as possible. Hence, "release fast, release often" is essential to minimizing product risk.

Common product risks range from building the wrong product, through building it in a way people don't like, all the way to the product becoming obsolete or unmaintainable. Product risks can thus originate in the past, with consequences reaching far into the future. This requires constant attention both to the inner dealings of the team and to the outside environment.

To manage product risks, it's essential to look beyond the backlog, into the product itself and the product's market. Metrics can serve both as lagging and leading indicators to discover and track their manifestation.

Refinement workshops, Reviews (System Demos) and Planning events should reveal product risks, both within the team and at scale.

Team risk focus

Autonomous teams have control over both their process and their delivery. Hence, the risks associated with these must be borne by them:

Delivery risks

Delivery risks range from not delivering anything at all to delivering the wrong thing, or something that doesn't work - hence they include the huge topic of quality-related risks. Delivery risks carry a price tag, the "cost of failure", so they consist of more than pure impact: there is also a huge element of choice. We take calculated delivery risks when the benefit outweighs the cost.

Common delivery risks include defects, incidents and problems (in ITIL terms), not being in control of the product's technical quality, not testing correctly or enough, and releasing something immature - but also failing to gather fast and reliable feedback that could expose, and thereby prevent, other risks.

Delivery risks must be managed, but often become visible in real time. They are hard to pre-plan.
If we see any delivery risk in the future, we should devise a strategy to start minimizing it today. Retrospectives address how we dealt with past delivery risks. Team dailies should reveal current delivery risks, and teams should actively collaborate to deal with them.
If they can't be dealt with immediately, they should be made transparent on the Team Board.

Process risks

Usually, a process risk manifests as an impediment to doing the right thing swiftly. In larger organizations with strict regulations and massive dependencies, process risks are often outside the teams' sphere of control - which, in the worst case, reduces the idea of self-organization and team-level agility to absurdity.

Common process risks include handovers, bottlenecks, delays, but also technical aptitude.

Teams are expected to manage process risks within their own sphere of control. Where they lack this control, the Scrum Master must often intervene to drive risk resolution. 

Team Dailies often reveal immediate process risks.
Retrospectives are often the best point in time to deal with long-term risks.

In SAFe, we use the Scrum-of-Scrums and the I+A workshop to address cross-cutting process risks. Additionally, we can resort to Communities of Practice to deal with practice-related risks.

Scrum Master risk focus

One of the Scrum Master's core responsibilities is revealing the things nobody else sees - and that includes risks of all forms and types. Sometimes, the Scrum Master actively has to examine risks from the other roles' focus areas to identify the need for change. Additionally, there's a group of risks that will often require action on behalf of the Scrum Master:

Organizational risks

Organizational risks, in this context, are risks induced by the way the team and its environment are organized. Such risks occur within the team, at the interaction points between the teams and the surrounding enterprise, and imposed from outside the teams' immediate horizon. Most of them occur at friction points - that is, where incompatible parts of an organization collide.

Typical organizational risks include asynchrony, miscommunication, bottlenecks, communication gaps and the unavailability of individuals, as well as mismatched goals or priority conflicts. There is usually a positive correlation between organization size and organizational risk.

Two core activities where organizational risks are identified are Planning and Retrospectives. In SAFe, that includes PI-Planning and I+A workshops, where the SM should both provide input and track the relevant action items.

Thursday, September 3, 2020

The Magic of Agile

One problem with "Agile" is that it often gets used as an excuse to avoid addressing the real problems in a straightforward manner. People resort to "magical thinking" - hoping that "Agile" somehow works some magic that will make the problem go away. Here's what's happening:


In anything we do, we tend to look for predictability: what are we supposed to do, and what will happen as a consequence of what we do?

The coin toss

Let's start with something really simple: tossing a coin. 

You toss it into the air, let it spin, and call "heads" or "tails", and 50% of the time, you're right, 50% you're wrong. Straightforward enough.

Would you believe me if I told you that "if you throw the coin properly, it will turn into a helicopter and fly away"? Probably not. You won't believe me, because you know exactly what a coin and a helicopter are, and how to toss a coin. There are no Unknowns here, and as such, you wouldn't accept my claim.

But now, let me change the topic:

The Project

Your company has started a project: to toss coins and turn them into helicopters. "Insane", you say. "No," your consultant responds: "we are in the Complex domain - a problem that we don't understand requires an approach you are not yet aware of. Since you can't rely on a familiar approach, you need to use an Agile approach."

Whereas formerly, we would have said "This won't work", now we have "Agile", and the plausibility of success immediately increases!

We have done nothing, absolutely nothing, that would lead to a helicopter as a result, and likewise have done absolutely nothing that would actually help you create a helicopter. All we did was introduce "magic" into the process by reframing the Unknown: "we have no idea how that's supposed to work!" becomes: "Agile will create the result!"


Since we know very well that we're not getting a helicopter from a coin toss, we are quite comfortable stating, "That doesn't work." Now, if someone showed us how a coin turns into a helicopter mid-air, we would say, "I see the helicopter, but I don't know how that works" (our certainty about the outcome increased, but not about the approach) - and those of us with a growth mindset would likely say, "I want to learn how to do that!"

If your manager has given you 2 weeks to do it, your nerves would tingle. 

And now your manager tells you that unless you do it, you will be fired? Most likely, you'd be quite willing to try any advice on turning coins into helicopters.

What has happened here?

The further you leave your comfort zone, the more your willingness to accept magic as part of your mental model increases. When facing an existential threat, you'll more than happily try things you'd call nonsense under normal conditions: the plausibility of the proposal hasn't increased at all - only your perception thereof!

And this quite frequently happens with "Agile Transformations":

Magical Agility

Of course, turning "coins into helicopters" is a joke. Nobody would take that seriously. But what if our challenge was to turn:

  • Dissatisfied Customers into Happy Customers
  • Poor Time-To-Market into Fast Delivery
  • Cost Pressure into Huge Savings
  • Tons of Defects into High Quality
  • Unhappy Developers into a Highly Motivated Workforce

Someone says, "when you're Agile, you can achieve all these" - and now you're all, "Okay, bring on the Pigs in Pokes, let's get this going!" - especially when your job is at stake!

And that's where you get into the realm of Agile Magic. And that's when you need to stop and think.

Let's not get into the nitty-gritty that even if you're agile, you'll still have these problems - you'll just be able to deal with them better - and let's focus on the big hitter:

When your organization has no experience with achieving these, then "Agile" isn't going to change a thing about that unless you start doing your homework!

De-mystifying "Agile"

There is no unknown component that can't be explained. Everything is transparent. We know very well how to relate cause and effect. And where we don't know, we can explore until we do.

Problem -> Approach -> Action -> Outcome. Preferably highly repeatable and reproducible. No silver bullets, no miracle pills.

Where we can't copy+paste a solution from one place to another, succeeding with agility becomes harder, not simpler: we require experimentation and a willingness to fail - and a pretty good understanding on whether failure will lead to growth or be fatal.

There's no Agile ceremony, ritual or incantation that will grant you a magical Great Leap Forward.
You must get to work and clean up the mess you're in. 

Learn what causes you to get the outcomes you currently get, and what you need to do in order to get the outcomes you want. As you learn, you'll catch a bloody nose quite frequently, so bring plenty of band-aids (figuratively speaking).

Mundane agility

Find your own way forward:
Experiment. Fail. Learn. Repeat.
That's all there is.
The only shortcut I can offer: You are probably not the first person who has encountered a problem. Thus, you can often skip or at least reduce the "Fail" part by learning from others before you move ahead.

There are many bodies of knowledge that allow you to accelerate your journey: Engineering Practice. Process Management. Product Management. Quality Management. Supply Chain Management. Team Building. Just to name a few. Make use of whatever knowledge you can get a hold of.

Inspect and adapt until you're clear why you are where you are, and where and how you want to go next. When you don't know, you'll need to figure it out. 
The more you practice this, the more comfortable you become doing it.
The desire to seek magical solutions evaporates.

There is no magic in "Agile". 

Wednesday, August 26, 2020

Stories that aren't

Let's take a look at the "User Story Template" (also known, by origin, as the Connextra template) - "As a ... I want ... so that ..." - straightforward enough. It's common in the "Agile" space, and many inexperienced Scrum Masters and coaches learn that teams should formulate their work like this.

The result? Something like this:

"As a developer, I want a Customer_ref_ID so that I can refer to the customer by ID."

Formally, it's correct. It's a "user story" based on the template.

Now, what's wrong with it? About everything.

Let's leave aside for a minute the fact that this story is as much of an antipattern for "INVEST" as it could be, and focus instead on the use of the template:

1. The "user" isn't a "user". 

If we start calling developers "users", then the next thing we know, testers, analysts and project managers are also "users". It becomes a hollow, meaningless term.

A "user" is someone who actually uses the end product. 

Like ... a Candy Crush user is, for example: the "mother of a small kid who only has 5 minutes before the kids will cry again."

(Of course, developers can be users as well. But that would require them to actually use the product, in which case they wouldn't be a developer, but ... for example, a "mother of two who sits in front of computer screens all day ..." - if that's your demographic!)

2. The want is a means, not an end

"Wants" should be something that this specific user wants to have and would be willing to use our product for, like "simple and easy fun". No user ever wants a Customer_ref_ID. Maybe they need it to identify themselves, but ... can you name anyone who wants to use a product because it has a Customer_ref_ID? 

Could you imagine running a marketing campaign with the slogan, "We have a Customer_ref_ID"? If not, then you're probably not addressing anything someone wants.

Take it as a litmus test for formulating "wants":

If it would feel weird to see it on a billboard on the way to work - it's probably not a proper "want".


3. The reason is self-serving and circular.

In more abstract terms, the reason is "so that I have it" - it doesn't explain which problem it solves. It adds no information. It doesn't help us verify whether we're adding value to the product, and it doesn't help us verify whether we actually need it. It's a fake reason.
Let's assume, for argument's sake, that both the user and the want were valid: we still don't know why you want to refer to the customer by ID, and how that is better than what you're currently doing.

A good reason statement shouldn't repeat the "want", but explain why the "want" is relevant, which problem we're solving, how the world is better after the need is met.

Good reasons invite developers to understand why the user has an unmet need.

Tuesday, August 25, 2020

The Investment into Quality

If you're setting out to "become Agile", or "more Agile", I would like to say something as simply as I can:
Unless you're willing to invest heavily into quality, forget about "Agile".

Now, what I mean by "investing in quality" is not "throwing huge amounts of money at testing", because the investment you will make is actually free, and it doesn't involve hiring additional staff, either: if you're doing it right, you will spend a lot less money to get better results. What you need to invest is attitude, thinking, brainpower, capacity.

Now, let me elaborate:

The commitment to quality

I think that no sane person would say, "we like to produce garbage products." I have never met a developer who would state on their CV, "I was producing crappy software." Likewise, I have never met a manager who would introduce their line of service as, "We deliver crappy software."

If nobody would say that - then why is it even something to talk about?
Most commitments to quality are just lip service.
You must act upon your commitment.

When I say, "commitment to quality", I mean that everyone, and that is, everyone, must continuously ask, "How do my decisions and actions contribute towards quality, both in a positive and a negative sense?" Any decision that leads to poor outcomes should be reverted, and any action that leads to poor outcomes should be stopped.
Nobody needs to justify themselves for not doing the wrong thing, or not doing something in the wrong way. It should go without saying that "if you see that it's going to end badly, don't do it."

Managers must commit to creating an environment where team members have the freedom to do the right thing. Reciprocally, developers must commit to resolving any problem within their own sphere of control and to naming any organizational barrier to high quality, even if that barrier was set up by the CEO in person. This must be impartial, free from personal preference or fear (and these three things are already massive barriers to overcome!)

Everyone - and that means absolutely everyone involved, from managers through developers to side stakeholders - must commit to doing their best to enable high quality.

Quality thinking

You must turn a commitment into action, and for that, you must understand how to achieve quality. Quality isn't a local thing happening somewhere between a finished product and its users. Quality begins as early as ideation, and it never ends as long as the product exists. Quality is everything. It concerns every one of us, and each of our actions.

Let's start simple, with the choice to add a certain feature to our product: Will it make the product better, or worse? For whom? How? In what way? How does the very decision of adding it affect both the process and the outcome?
The new feature could be a boon for some and a turn-off for others. It could make the product inconsistent with its purpose. It could make it clunky. It could put stress on development. It could have effects that are difficult to discover until it's too late. Could, could, could ... now, I don't want you to over-think, but I do want you to ask the questions that need to be asked, instead of simply shoving another piece of work into the pipeline. Oftentimes, the product would be better for removing a feature than for adding another!

So, who has to think about the quality?
Designers, in design. Developers, as they develop. Testers, as they test (of course). Operators, as they operate. And that's the simple part: "I must be mindful of quality in my work."
Developers have to collaborate with designers. Testers have to do that, too. And Ops. Logically, testers also need to collaborate with developers and Ops. And everyone with the customer. And with management: "We must be mindful of quality in our interactions."

Every one of us must constantly ask:
"What can I do, both in my own work, and to the work of others, to achieve higher quality outcomes?"

Quality practice

Move beyond words. Learn how to create quality: Design thinking. Behaviour-driven development. Test-Driven design. Clean Code. Stop the line. Data-driven decisions. Measure everything. Quality and Process management. Go-See. Standardization. Simplification. Candid feedback. 
Just to name a few.

If you want to be agile, quality isn't just for testers: it concerns everyone, even the most junior developer and the most senior manager - even those outside IT. While, obviously, I'm not going to ask a VP of Sales to write Clean Code, I daresay that if anyone within the organization makes a choice that results in someone else breaking with an essential quality practice, this is going to hurt the company's bottom line. Hence, everyone must have a sufficient grasp of quality practices to maximize the overall outcomes of the company.
Everyone must understand enough about quality practices to do what it takes to get best results.

Capacity for Quality

I'm not going to sugar-coat this: If you don't have processes, infrastructure and code that are designed for quality, you're not going anywhere unless you put some effort into that.

You must allocate a certain amount of capacity for improving and preserving quality.

If your company has been spending $100k on shoddy stuff and could live with it, what's so wrong with spending the same $100k on something of higher quality? Nothing! Never accept shoddy outcomes: "Do more" has lower priority than "do well." Capacity invested into low quality is lost. Capacity invested into high quality is gained.

"But," - I hear you murmur - "We will be slower, and the business / customer can't wait!" - not really. 
Stop thinking about what you deliver in a day or two. Set yourself a horizon of a month, half a year and then a year. Ponder how much effort you invest during that time into fixing bugs, into rework, into unused stuff and into rote work.
Set yourself a target to generate the same business results without doing any of that wasted work. Then ask yourself what you would need in order to get rid of this pointless work. If you manage to cut 10% of your current workload, you have gained 10% capacity for improvement. This capacity is not "free for more poor-quality work": it's free for improvement, so that you can do higher-quality work that makes people happier.
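The arithmetic above is simple enough to make concrete. In this sketch, the percentages and team size are placeholders you would replace with your own measurements:

```python
def freed_capacity(total_hours: float, waste_share: float, reduction: float) -> float:
    """Hours per period gained by eliminating part of the wasted work.

    waste_share: fraction of capacity spent on bugfixing, rework and unused features.
    reduction:   fraction of that waste you manage to eliminate.
    """
    return total_hours * waste_share * reduction

# A five-person team (~800 hours/month) spending 40% on waste:
# eliminating a quarter of that waste frees 80 hours/month for improvement.
print(freed_capacity(800, 0.40, 0.25))  # → 80.0
```

Those freed hours are the "10% capacity for improvement" from the text - capacity to be reinvested into quality, not refilled with more low-quality work.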

True Story.

Is all of the above a figment of imagination, wishful thinking, idealism?
No. I'm talking about measurable, tangible business outcomes. 
Helping my clients engineer quality, we have significantly reduced defects, non-value-adding activity and rework, cutting new-feature development lead times by over 50% and expenses by as much as 30%, while reducing the cost of failures on the business end by seven-digit figures.
The investment we needed was about 50% of people's time initially - to change their thinking, their practice and their environment - and 30% of their time subsequently.

If it was your money: Would you be willing to let people spend more of their time on quality, if the outcome was that you have to spend less money to get more things faster, while earning more money from happier customers, working with happier staff?
Yes, this question is purely rhetorical: "Time spent" is irrelevant if the outcome is "better business"!

Do it.

Sunday, August 23, 2020

The case for trainings

When coaching, especially in large organizations, I very often encounter teams and individuals who have basically been "pushed" into Scrum, SAFe, Kanban or LeSS without ever having attended a training course. While on the one hand I advocate that "the real deal is what happens in the workplace", I want to make a case for why training is absolutely mandatory for new agile organizations, and even for those who are just going through a major change. In doing so, I would like to debunk the common "arguments" organizations give against professional training.

Why not just learn on the job?

It's entirely feasible to successfully adopt a different way of working while doing it - that's how the people who devised "Agile" did it, too. But it's extremely slow and error-prone. We reinvent the wheel, and we may not spot our obvious problems until we hit a brick wall.

A classroom setting allows the trainer to create an environment specifically focused on a certain learning objective, and drive home the point. Encountering a similar scenario "on the job" without any a priori experience will make things appear more complex, which means that it will take significantly more brain power to comprehend.

Of course, an experienced coach will also guide their coachees through this situation and the learning process, and they will do it in a practical setting, connected to the real workplace. The time factor could easily be twenty-fold, though, because a lot more guidance, explanation and time for reflection is required when the basis just isn't there yet.

What happens in agile training

After this brief introduction, I would like to explore quickly what you get out of a good agile training:

Conveying knowledge

Teaching the roles, artifacts and events of (Large-Scale) Scrum or SAFe doesn't take long.
The formal rules of these frameworks are extremely simple and can be taught in a few hours. Hence, the issue isn't standing in front of a class and rattling off the material - the issue is getting people to answer their own question: "what should I do?"

Aligning on Roles

It's important that people understand their own role and the roles of the people they work with. For untrained people, that's one of the major sources of conflict, because unspoken assumptions and conflicts persist until there has been an open conversation. A classroom environment allows people to bring three things together smoothly:
  • Objective standards
  • Their own personal understanding of their own role
  • Others' understanding of their own role, especially interactions

Understanding events

There are numerous events in each framework. As a coach, most of the dysfunctions I observe in practice are related to people not having a proper understanding of the intent behind these events. A training environment allows the trainer to let people consider, for each of these events:
  • Scope
  • Participants
  • Agenda
  • Intended outcome
With people who already practice a framework, trainers will often let participants explain how they conduct these events, then provide corrections and advice in order to help people get more value out of what they've been doing so far.

Constructing artifacts

Regardless of whether you're already using an agile approach or are transitioning from a classic project organization, dealing with agile artifacts is an extremely difficult journey for people who have no understanding of the purpose and contribution of these artifacts. A classroom setting gives people time to learn or reflect upon:
  • Intent of each artifact
  • Setting up the artifact
  • Maximizing effectiveness
  • Minimizing handling efforts
  • Dealing with typical challenges
People are often amazed when they compare how they're currently working with the ease and seeming effortlessness with which a trainer uses artifacts in a classroom setting to achieve their purpose.

Experience Anchoring

As already mentioned above, the "knowledge" part is the simple part of an agile training. A proper agile training will spend a major portion of its time on creating an "anchor" - letting people try out the approach in a safe environment, so they can draw their own conclusions in comparing this with their own workplace environment.

Living the framework

Small simulations will allow people to try out their framework, and gain some experience with both the ups and downs of the new ways of working. Depending on what participants want to get out of it, they may either use the simulated environment to explore their own role and receive instant feedback, or they can try out a different role and build some empathy.

Communicating and learning

In a training environment, people are expected to use each exercise to share with the group what they learned, what they would like others to be aware of, and what they would like to keep for the future. This is an extremely good habit for adaptive ways of working - one that oftentimes doesn't happen on the job, because people feel it's out of place or don't have the time.

Subtle learning

When people don't understand an instruction in a training environment, they will typically just ask the trainer. In the working world, they would often lack the courage to speak up and just try to get through somehow - the problem only becomes obvious in retrospect. A training allows people to naturally adopt the benefits of the Scrum Values, such as Openness and Courage, and a good trainer will drive home the point that they're already living them!

Team Formation

Probably the most underestimated aspect of an agile training - and also the reason why I advocate bringing in a trainer rather than sending people out to role-specific trainings - is team formation. There's a huge difference in performance between a group of individuals working on the same project (product) and a team (or: team of teams).

Over the course of the training, a trainer can form groups of trainees into teams who will have had ample time to discover and work out the important aspects of their future collaboration: who can expect what from whom, how they will organize themselves - and what is important when. After such a training, people will be pumped and ready to rumble without delay.

Just the benefit of having had people spend a few days to sort out all of their misunderstandings and find a common way forward can pay for a good training with ease: The formal knowledge gained is just the icing on the cake.

Flawed arguments against training

I would like to dissect the most common arguments given against bringing in a trainer and having people attend a formal training, because I believe that there are very, very few solid reasons against kicking off a new agile unit with a good training:

Common reasons

I hear all of these arguments regularly, and none of them hold water when you think about them a little more closely:

The cost argument

"People know the price of everything and the value of nothing." - Oscar Wilde

The training costs too much

You will pay for the training one way or another: If you "saved" on the training, you'd have to achieve the same result with coaching, parallel to work, and therefore over a prolonged period of time. The time lost due to friction until people are settled as a team and have found their way to collaborate and contribute will typically exceed a month ... and now think: how much will you spend on coaching during that period?

Effectively, you've got to do the same thing, with the only difference being whether you do it in a classroom or in parallel. The cost will be the same.

We don't have budget

A common problem, specifically in large corporations' IT departments. It stems from the cost accounting structure: you may have budget to "do work" (CapEx), but no budget to "set up doing the work more efficiently" (OpEx). It's a dilemma. The solution rests in TCO - total cost of ownership.
Going in the right direction after a training will be faster and cheaper than figuring things out first. Even if you save just 1 Sprint on whatever you intended to do by setting up a training, the gains are X salaries for 1 Sprint - money that is already part of your overall budget.
Start thinking in terms of throughput accounting: if you spend $100k to build something, or you spend $80k to build something, the second option is cheaper. Except that "under the hood", option 2 used training, whereas option 1 burnt the budget on "doing": Does it even matter?
You have the money. You just don't see how you should be allocating it.
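The throughput-accounting comparison can be sketched in a few lines. All figures (and the function name) are illustrative, not taken from any real budget:

```python
# Throughput-accounting sketch: compare total spend, not accounting labels.
# All figures are hypothetical.

def total_cost(doing_cost, training_cost=0):
    """Total cost of ownership: everything you spend to get the outcome."""
    return doing_cost + training_cost

# Option 1: no training, the whole budget goes into "doing".
option_1 = total_cost(doing_cost=100_000)

# Option 2: an up-front training makes the "doing" part cheaper.
option_2 = total_cost(doing_cost=70_000, training_cost=10_000)

print(option_1, option_2)  # 100000 80000 -- option 2 is simply cheaper
```

Whether the $10k is labeled CapEx or OpEx never shows up in the result; only the total matters.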

People are too busy

Of course they are. Because of your current ways of working. Which is what you want to improve.
If things don't work, why would you keep doing the wrong thing rather than making a full stop, resetting - and starting off at full speed in the right direction?

Other Priorities

This tells me that your organization has a priority issue: If "other things" are more important than coming together to introduce a more effective way of working - then your organization doesn't understand or appreciate effectiveness. We can use the training to set the baseline for fixing this issue, but we won't fix it by continuing to do what we always did!

We can't withdraw X people for 2 full days!

Imagine someone gave you an axe that allowed you to cut trees twice as fast, but it would take Monday and Tuesday to get used to. By the end of the week, you'd already have cut more trees. It's the same with your team(s). The only reason you can't train them is that you feel the short-term cost outweighs the long-term benefit. I call this "The tyranny of the Urgent." If you don't break free from it now - why would it get better once your team has invested even more time working suboptimally?
As the proverb goes, "The best time to plant a tree is 20 years ago. The second best time is today."
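The arithmetic behind the axe metaphor is easy to verify. Assuming a 5-day week, 2 days of training and doubled output afterwards (all hypothetical numbers):

```python
# "Sharper axe" break-even: 2 days of training that doubles the cutting rate.
week_days = 5
baseline_rate = 1   # trees per day without training
trained_rate = 2    # trees per day after training
training_days = 2

without_training = week_days * baseline_rate                 # 5 trees
with_training = (week_days - training_days) * trained_rate   # 6 trees

print(without_training, with_training)  # 5 6 -- ahead before the week is out
```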

We don't need it!

What do you say to a salesperson when you don't want to take the offer? "No." That's entirely acceptable. At the same time, would you say "no" if someone were literally offering you free money with no strings attached?
Not spending money on something that earns you more than it costs is economic nonsense.

We'll figure it out by ourselves

That's entirely legitimate. I had the same attitude. And I did figure it out. It took me years. If you have those years ... great! Unfortunately, I don't think your competitors will wait for you to learn things that are already well known while they start dominating the market.

People already know Agile

As mentioned above, a good trainer will not focus exclusively on objective knowledge. Instead, the training will form a team (of teams) of people who agree on a common, more effective way of working: If you already had this, why would you need a new way of working?

The flawed underlying assumption

Behind all arguments given against training is a single, unspoken common assumption: "We believe that training doesn't make much of a difference."

I would like to challenge this assumption directly: "If you believe your current ways of working are already (almost) as effective as what the trainer would teach you, why do you even want to adopt new ways of working?"
If you believe, however, that the new ways of working will make a significant difference, then I would suggest figuring out the value proposition of a training. Drop the fallacious arguments, and talk purely about your needs and expectations.
If you need help building a business case for investing into training, we can do that:

The value of training

Before you think about "training or no training", ponder these questions:
  • What's the biggest challenge we have, that we want to solve within our current way of working?
  • How would the new way of working deal with this challenge?
  • How much does this challenge currently cost us - every day, every month, every year?
  • How much would setting the groundwork with a brief training course help us shave off the current challenge?
The answers to these should help you formulate the business case for a training, and help the trainer manage expectations as well. If, after this exercise, you end up with a negative business case for the training, then I would - most likely - not suggest adopting a new way of working at all. The friction of switching ways of working would probably outweigh the expected benefits.
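A minimal sketch of such a business case, with all figures purely hypothetical:

```python
# Business-case sketch for the four questions above (hypothetical figures).
monthly_cost_of_challenge = 50_000  # what the challenge costs us today
expected_reduction = 0.20           # how much a training could shave off
training_cost = 15_000              # one-off investment

monthly_gain = monthly_cost_of_challenge * expected_reduction
payback_months = training_cost / monthly_gain

print(f"Gain per month: {monthly_gain:.0f}")           # 10000
print(f"Payback period: {payback_months:.1f} months")  # 1.5 months
# A payback period longer than the initiative's horizon would be a negative
# business case -- and an argument against the change itself.
```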

Once you've established that the business case is positive, look for the following aspects in an agile training:
  • Will the training address the challenge(s) we want to improve upon?
  • Will the trainer be able to relate our situation with the training content?
  • Will trainees get specific impulses for things they could start doing differently tomorrow?
  • Will the training equip people to apply their learnings outside the classroom?
  • Will the training connect people on the work they will be doing? 
If the answer to any of them is "no", either talk again to the trainer or find a different trainer.
Once you are convinced that the answer to all questions is, "yes", then it should be a no-brainer to do whatever it takes to free everyone for a few days to get some proper training. 

Training vs. coaching

What I say in this paragraph is bad business for me - because I'm a coach: I have sufficient experience to claim that the needed duration for coaching - and thus the coaching expenses - will be significantly lower after people have had training.

The reason why I say it regardless of the financial impact: the probability of change success goes up dramatically. This, in turn, means that the likelihood that you will discover the value in coaching will go up. And thus, the probability that you will speak favorably of my coaching will also go up.

An experienced coach would be fully capable of compensating for the absence of classroom training and formal education in the coaching work. From an expectation management perspective, I would still advocate starting with training before coaching, because it's not an either-or situation.
Good training enables better coaching. "No training" means extra money for the coach, and more trouble. The best case scenario is great training followed by great coaching. It produces a win-win-win scenario: you learn more, you save money, and the coach will be more effective faster.

Factors to disregard

A lot could be said about bad decisions when choosing a training. I just want to highlight a few factors you should simply disregard when deciding on a training, because I see many people look for these factors, only to later complain they didn't get their money's worth.


Certification

I have some beef with certs, because - especially for the introductory classes we're talking about - they do not certify competence. They certify attendance, and you don't need (virtual) pieces of paper. You need people to do work. If your training helps you do work better, that's great. Certification is a gimmick. It adds no value to a course.


Internal multipliers

Many organizations try to save money by sending a few people to a 2-day course, who are then expected to act as "internal multipliers". One of the key benefits of training is to form teams who can immediately and effectively start working in a new way. You lose this benefit by dispersing people and introducing propagation layers. Those people who will work together should receive training together. And don't limit it to "managers" or "leads" - train everyone!


Standardized curriculum

A canned and bottled curriculum is a double-edged sword. You need something that people can relate to, something that's actionable in your organization. As such, the trainer must at least be able to pick your people up where they are instead of merely playing off a script. If the script is flexible enough, it's not a disadvantage. Otherwise, the standard creates a problem.


The fun factor

Training can have fun moments, but I'm highly suspicious of trainings that people unanimously call "fun": If you want fun, send people to Disneyland. Ignore the fun factor.
When a trainer actually breaks with your current paradigms, it will throw you into mental disarray. Anger, sadness and frustration can accompany a great training that really shook your beliefs about work. In some cases, it takes weeks or months until people realize what they learned. During that time, you might see people give both thumbs down on the training, which only shifts once the learning sinks in. Don't let high fun scores attract you to a training, and don't let lower fun scores deter you from it.


Conclusion

A good training will pay for itself - even when you're too busy, even when you think you already "know Agile". And even when you have no money. If the training isn't worth it, then whatever change initiative you're running probably isn't worth it, either. Hence, training can be a great litmus test to see whether you should proceed.

You will save time, you will gain productivity, and you will also have a better bottom line when you achieve the same (or better) results after a training.

Monday, August 17, 2020

Continuous Problem Solving

"What do you even do as an Agile Coach?" - well, that's easy: I help you on your journey towards better, more effective ways of working. And how do I do that?

Well, I will start using this simple 4-step process:

The problem solving process

Step 1: The Biggest Problem

When I come in, you will have many problems. One, or just a few, will be the biggest. Let's forget the others for now. Why? Because it's better to get one problem solved than to have no problem solved, and by its very nature, solving the biggest problem will make the biggest difference.

How do we identify the biggest problem in the presence of a myriad of issues?

It's not quite as simple as "brainstorming and dot-voting": sometimes, we need both loads of data and the perspectives of many people who may not be in the room. And sometimes, nobody sees or addresses the elephant in the room. When facilitation isn't enough, I may gather and/or analyze data, interview different stakeholders or simply connect some bits and pieces to form an image that gets a conversation going. And if that still isn't enough, I'll propose a shortlist of problems that you can pick from.

Step 2: Root Cause

If there were a simple solution, you would probably already have applied it. So there's a deeper cause to your problem, and we need to address it to make relevant progress. At times, we must move your process to an entirely different level, because we can't solve the root cause - we must avoid it!

How do we find the root cause?

Simple tools include 5-Whys or, again, brainstorming and dot-voting. These are often insufficient, because once again, if we knew the cause, we would probably already have addressed it.

I'm not a big fan of "Five Why" analysis for organizational issues, because the technique usually suggests a point-based root cause, whereas the root cause may be hidden in a web of causes, and even then, it could be a network effect leading to the problem we observe. And sometimes, identifying the cause is easier for an outsider who isn't stuck in a presumed "inevitability". If that's the case, I will give you my opinion. (Although I could be wrong. Everyone can always be wrong.)

And sometimes, I frankly don't know. If, for example, the root cause is part of your internal accounting processes, I can at best tell you it's there, but what exactly it is - I'm not an expert on that. We'll need to call the experts in.

Step 3: Action Plan

How do we deal with the root cause, how do we get better? You may have ideas, and I also have ideas. You may lack the experience and/or expertise, and I may have it. Let's bring all of that to the table, and turn that into an action plan. 

I could propose an action plan, although you need to accept it. If you have counter-suggestions or alternatives that you consider better, go for it. I'm indifferent to whether you go with my proposal or your own: what matters is that you get some traction and start moving the big bricks.

What's most important about the action plan: it's your action plan. You own it, and you execute upon it. I will support you with whatever I can contribute that you need: facilitation, tracking, communication, workshops and sessions. Depending on how much support you need, I may also compile the outcome of all of this for inspection.

Again, like in step 2, there are problems where I can propose an approach based on my experience, and some where I'll have to pass. For example, if your biggest issue is a proprietary compiler for a proprietary programming language, I can only suggest you get an expert from the vendor to help you on the issue.

Step 4: Reflection

So you did something, or we did something. If it was a good plan, something should be visibly better now, otherwise - what did we miss, what should we do about it?

Is our problem still as big as before, or has it become smaller? How much? Did we create other problems?

I'll support you with methods, structure and facilitation in this process. And, like mentioned before, with compilations of results and outcomes. As needed, I will add my insights and opinions.

But ... how about "Agile"?

"How does that help us introduce Scrum, Kanban, LeSS or SAFe?", you may ask. It may not. Or it may. For certain, it will make you more agile, i.e. improve your ability to "change direction at a high speed".

Agile frameworks are entirely in the solution space, i.e. step 3. 
If Scrum helps you solve your biggest problem, and you need someone to teach you how to Scrum, that's what I'll do.
If User Story Mapping solves your biggest problem, that's what we'll do.
If Pair Programming solves your biggest problem and you don't know how to do it, I'll grab the keyboard with you.
If your biggest problem is the lack of an overarching structure and you decide to go with SAFe, I'll set up SAFe with you. Or LeSS, if you consider that the better alternative.

What I won't do, however, is just dump "X" onto you when it wouldn't deal with your biggest problem. The reason is that people will not see the value of "X", and there's even a high probability that "X" will be blocked by whatever your biggest problem is.

Sunday, August 16, 2020

The abuse of Cynefin

Scrum has set the pace for "Agile" for a long time. And Scrum is all about empiricism. And this "empiricism" has become a problem in recent years: Self-proclaimed, unqualified "coaches and trainers" proclaim that current organizational processes don't help, and hordes of incompetent "Agilists" swarm the market, only to wreak havoc on unsuspecting organizations.

Based on a misrepresentation of Cynefin, people abuse Scrum and ditch available knowledge wholesale, because ... "In the Complex Domain, you don't know until you've tried."

Now, is Scrum or the Cynefin framework really the cause of this problem? Not really - they are merely the door-opener for the snake oil sellers. And given Cynefin's and Scrum's easy appearance, people don't spot the trap until they've fallen for it. These frameworks are so popular and so easy to abuse, and it's really difficult for someone who sees them for the first time to discern what's correct and what is a misapplication.

Cynefin Framework by Dave Snowden

Now, let me describe the chain of "Cynefin Reasoning" that leads us down the wrong path:
"Knowledge Work (e.g. Development) isn't simple. It happens in the Complex domain. In the Complex domain, we have unorder where the relationship between cause and effect can only be perceived in retrospect and the results are unpredictable. Complex systems are dispositional and not causal. Hence, we cannot rely on good or best practices. Instead, we need to create safe to fail experiments and not attempt to create fail safe design."

It's a flawed understanding of Cynefin, combined with a false dichotomy that becomes a toxic soup. Here's why:
  1. Just because something isn't simple doesn't mean it's automatically "complex". There's the entire domain of Complicated problems that's blissfully ignored, and even "Complex" is a category of varying degrees of complexity.
  2. The idea that existing knowledge is entirely inapplicable doesn't describe the "Complex" domain - that would be the "Chaotic" domain: We can predict the outcome of a software development process pretty well. What we can't predict quite so accurately are customer reception and market conditions, but even there, we have quite elaborate mechanisms.
  3. A shallow "dip into chaos" doesn't mean we should engulf, immerse or drown ourselves in chaos. Whenever we can prevent chaos, we're well advised to do so.
  4. A scientific approach would rely on so-called "experimental conditions" where we fix all but the one variable under examination. If we let go of all control variables, we really won't be able to predict anything any more ("Chaos"). The latter is pretty pointless, and it can be avoided in knowledge work.
  5. Just because something is "complex" doesn't mean we have no reliable process and are fully reliant on empiricism. To retain control, we need to minimize the impact of uncontrolled factors. For example, we would never have a smartphone if we didn't know exactly how to build computers, exactly how to build mobile networks, exactly how to mass-produce microtechnology, and those things would be a nightmare to use if we didn't know exactly how to build software that doesn't crash. It's complex, but it relies on a lot of things we precisely understand, and operating in this domain without high degrees of knowledge would be economic suicide.
And that's where we get into the mess today often called "Agile":
People don't understand what they're doing, because it's not a requirement to get into an "Agile" role. Fake "Trainers" without development experience make it look like that's not required - because, "hey, we got Empiricism". We encounter so-called "Agile Coaches" who don't know the building blocks of a functional company, we have to put up with 2-day "Scrum Masters" who don't know how to build a team, and see misplaced "Product Owners" who don't know what makes a product successful. And that's just scraping the surface.

Does this sound familiar?
"We don't need market research, probabilistic forecasting or demand control: Let's just write a User Story and put it into the backlog!" - "There's an entire science around Demand Management" - "Who needs that? Let's just incrementally Inspect and Adapt!"
Product Management, Quality Management, Process Management, Delivery Management, Operations Management, People Management, Finance Management, Sales Management ... we know a lot about these things! In the "Agile" community, however, there's a growing number of people who meet these with resentment, ideology and opinion: "Managers are worthless anyways, so let's shoot all of that to the moon and apply empiricism!"

The knowledge is discarded wholesale under the pretext of Cynefin - complexity and empiricism become the arguments against existing knowledge. "Inspect and Adapt" trumps science. Hence, "Agile" gets a free pass for obliterating functioning organizations: people with zero understanding of a domain "coach" long-term experts who have academically studied a subject, read the books, and gotten their share of field experience - and some even have the gall to claim that "coaching in the absence of knowledge works better, because it allows the coach to be unbiased!"

Thus, we set the playing field for quacks who will dismiss all scientific achievements and progress we have made in software engineering, bringing on the mumbo-jumbo, kumbaya, post-its and canned bikablo doodles: the professionals who did put in the hard work become indiscernible from quacks, organizations and individuals decline, and order descends into chaos.

"Agile" has become a habitat for the same kind of science denialism as Anti-Vaxxers or Flat-Earthers: Except it's more subtle to spot, because the bodies of knowledge these people despise are the invisible fields in knowledge work. And Cynefin has been their door-opener.

Let me conclude with a question: "How will you approach complexity?"

Saturday, August 15, 2020

ROCE Clusters - categorizing benefits

There seems to be a huge misunderstanding in the Agile Community about what is actually a "benefit". We quickly get into a false dichotomy where people argue that only ledger impact or only innovation should be considered relevant. Let's break with that. There are many types of benefits:

Different ROCE clusters

Before we begin: In no form or fashion would I claim that this is a comprehensive list of clusters, nor that there are no other ways to generate benefits within a cluster than those described: This article is an attempt to defuse the myopic perspective that there is only one way to look at benefits. 

Usually, an initiative aims at a number of ROCE clusters simultaneously. As a Product Manager, you must understand the clusters you're affecting, both directly and indirectly. In such a case, you need to define the target order: what's your primary focus, what consequential effects do you expect?

For each cluster in question, you need some metrics to measure whether you're making progress on your initiative. Financial metrics tend to be the easiest, with capability building metrics being the hardest to figure out.

ROCE Clusters

Every initiative you're pursuing should belong to at least one ROCE cluster. For example, "Reduce server outages for our Online Game from 8 hours a week to 10 minutes a month, to avoid losses caused by offering voucher compensations worth $150,000 a month."
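The outage example can be backed by a quick calculation. The $150,000/month loss is taken from the example; the initiative cost is a hypothetical figure:

```python
# Quantifying the "reduce server outages" initiative.
monthly_loss = 150_000     # voucher compensations, from the example above
initiative_cost = 400_000  # hypothetical one-off engineering effort

annual_loss_avoided = 12 * monthly_loss
payback_months = initiative_cost / monthly_loss

print(annual_loss_avoided)       # 1800000
print(round(payback_months, 1))  # 2.7
```

Numbers like these are what make a cluster assignment ("Loss Avoidance") aggregatable at Portfolio level.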

ROCE clustering is very useful in that it can be aggregated quickly to a Portfolio level and gives you a pretty good overview of where you are spending your money: how much do you invest into patching holes, how expensive is your new strategy, and: are you innovating enough? 
If your portfolio contains only a few clusters, you may have a massive strategic problem.

This article describes seven different clusters and some metrics you may pursue within these clusters. You may find different or additional metrics, or you may find some metrics applicable to a different cluster, and you may find that a cluster not listed here is missing for your own organization. 
If you wish to adopt ROCE clustering in your organization, you may need to identify those clusters relevant to your own organization.

Ledger Benefits

Although the most desirable benefits from an accounting perspective, these tend to be the most elusive in practice, as they are often a consequence of other benefits rather than a direct outcome.

The two most common ledger benefits we are looking for are:

  • Profits
  • Savings

Loss Avoidance

To discriminate this from "savings", let's first define "loss": A loss is an avoidable part of your business that only exists because something is messed up.

Common sources of losses are:

  • Fines
  • Write-Downs
  • Cancellations
  • Compensations

Capability Building

The problem with Capabilities is that you can't really translate them into money: they translate into future benefits. This makes capability building very risky, because you don't know if it pays off until it does. And that may be years into the future.

Here are some kinds of capabilities you may invest into:

  • Strategic Enablement
  • Support Infrastructure
  • Network Effects


Innovation

Why should we innovate? The modern business-oriented approach is to look at a problem you observe, then solve it. New solutions to known problems are the most common form of innovation, and most companies innovate mostly in a form of local optimization, i.e. solving their own problems. New products often appear on the market when they solve someone else's problem. But there are also other forms of innovation - those which only serve as a foundation for building solutions to problems. This gives us the most common forms of innovation:

  • Solving Process Problems
  • Solving People Problems
  • Solving Market Problems
  • Base Innovation

Demand Shaping

It may sound odd, but sometimes, we invest money not to build some product, but to shape demand for an existing product. Demand shaping allows us to optimize the value generation of our products and services - which includes reducing overproduction and overcharging on the one hand, and stress and overburden on the other. It comes in three forms:
  • Increasing Demand
  • Stabilizing Demand
  • Reducing Demand
I would like to go specifically into point 3 - reducing demand. That's an especially good idea on something called "failure demand", i.e. when a service unit (e.g. Complaints, Repairs) faces high demand due to something else not working. It's also an important preventive measure when over-demand would lead to negative business outcomes, such as loss of trust.

Risk Management

Risk Management could include anything that reduces impact and/or probability of anything that affects either us or our customers. Probably the most famous risk management product is insurance, which does nothing other than moving financial risks from our clients onto ourselves. It doesn't reduce probability, only impact.
The most common types of risk we're dealing with are:
  • Business Risk
  • Operational Risk
  • Market Risk
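A common way to reason about any of these risk types is expected loss: probability times impact. This sketch (all numbers hypothetical) shows, from the insured party's perspective, how insurance caps impact without touching probability:

```python
# Expected loss = probability x impact (hypothetical numbers).
probability = 0.01  # chance of the damaging event per year
impact = 500_000    # financial damage if it happens
premium = 8_000     # yearly insurance premium

expected_loss_uninsured = probability * impact  # 5000.0 per year on average

# Insurance doesn't change the probability of the event -- it caps our
# financial exposure at the premium; the insurer absorbs the impact.
worst_case_insured = premium

print(expected_loss_uninsured, worst_case_insured)
```

That the premium exceeds the expected loss is exactly the insurer's margin for carrying the impact.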


Compliance

The last cluster this article covers is compliance with rules and regulations - in this cluster, we can't win, we can only minimize losses. So, how much should we spend on compliance? As little as possible, and whenever we do spend, we should try to combine it with at least one other cluster.
Compliance isn't as much about solving problems, as it is about meeting minimum requirements from:
  • Laws & Legal
  • Market Regulations
  • Standards
  • Safety & Security


Summary

ROCE Clusters aren't benefits themselves; they help you categorize your benefits in a way that makes it obvious where your portfolio budget is going and where you're looking for improvement.

You will still need an appropriate benefit hypothesis and actionable metrics to turn this into a relevant business practice. But that's another topic for another article.

Tuesday, August 11, 2020

Philosophy in Organizational Change

Yes, I'm opening a can of worms here. "We are here to work, not to philosophize!" - is that really so?
Before exploring this question, let me start by clearing up the common misconception people have about "philosophy": this isn't idle chatter. Philosophy is all about "knowledge" - how we arrive at the conclusions we do, and what leads us to do the things we do. For example: We talk a lot about vision, strategy and values. But: How do we interpret our vision? What makes a good strategy? And how do we turn our values into behaviour? We need philosophy to move beyond the bla-bla!

I'm not exploring the entire domain of philosophy, only a few things that are really important when dealing with organizational change. And I'm just going to touch on "What's in it for you? Why should you care?" rather than explaining any details. All I'm trying to achieve is to spark your interest in the topic. You'll have to do the reading by yourself.


Ethics

Every organization has its own set of implicit and explicit values and norms. Which ones do we have, why do we have them, and do they serve us the way they should? How do they inform our choices?
What does it take to arrive at a different set of values and/or norms?

For example: If our organization values "being busy" - how could we move to "producing value" instead?


Logic

There are tons of illogical things going on around us every day, and nobody seems to care.

The majority could be wrong. And might doesn't make right. Just because something is right, doesn't mean it's true and vice versa. When something doesn't follow from the premise: can you spot it, and where would that lead you?

Sound reasoning leads us to make better decisions. How good is our organization at reasoning?

For example: Just because there's an unspoken consensus in your organization that you "don't deploy on a Friday", doesn't mean it's an indisputable fact: Why can't we deploy every day?


Epistemology

Did you ever "know" something was the right way to do things, only to find someone did the opposite - and got better results? Most of what we know is little more than contextual snapshots of momentary experiences, turned by our brains into immutable "facts". And while there are definitely some things where scrutiny just leads us on a wild goose chase, there are other so-called "facts" that we must discard in order to grow. But how can you discern them?

For example: We know for certain that there's a legal reason to archive every contract in written form. But ... how does a paperless, totally virtual company do business in compliance with the law?


Ontology

"What is ..." - the way we define things informs how we think about them. What is "value", what is "performance", what is "success"? Do you have the necessary means to ask the right questions that allow you to align people around a common understanding, weed out misunderstandings, and provide a more focused definition that allows people to overcome their own mental barriers?

For example: As long as everyone has a different understanding of "Value", it becomes pretty hard to maximize value creation. How would you - properly - define it?

Philosophy of Language

Our language affects our thinking, which affects our reality. The words we use have the power to unite, divide, shape or reform. The subconscious effect of our language is that simple things can become possible or impossible, easy or hard. Our language can create insurmountable barriers or build inviting bridges. How much of what we consider "real" in our organization is only so because we choose to use words the way we do?

For example: The most nefarious practice we see in many organizations is calling people "resources". It's not merely de-humanizing - it has practical impact: "Resources" are assumed to be interchangeable with others of equal type. That leads us to move people across projects in an attempt to optimize "utilization" - which significantly reduces performance, and thereby drives cost and risk through the roof!

Philosophy of Law

Ethics informs our norms and values. And what happens when we transgress those? Which one are we willing to sacrifice when two norms are in conflict? Based on what reasoning can we trade off one value for the other? 

For example: Who can decide whether it's okay to put a risky feature live that could either increase value or make customers unhappy? Can I, as an individual, place the value of learning above the risk? What consequences will my choices and actions have? 

Philosophy of Politics

Power rests where it does. We can choose to re-distribute it, but would that even be a good idea?
What's the relationship between a company and its employees, or between "manager" and "worker"? What do security and individual freedom mean? How do we trade off between these, and what consequences does that have?

For example: By giving people a corset of processes, we offer a sense of security by working in accordance with known, explicit regulations - while stealing their freedom to do the right thing for the benefit of the organization. 

Game Theory

Every day, in every one of our actions, we weigh alternatives. The course of action we choose is ultimately the one that benefits us most. How do we set up systems where individual, group and organizational goals align with each other - and how do we ensure that everyone benefits most from actually pursuing these goals? Every system can be gamed - so how do we ensure that even when it's gamed, we still end up where we want to be?

For example: By measuring "Velocity", we might end up rewarding a lot of useless work, or the inflation of actual effort. None of that helps us. Then - what should we measure instead?


I hope that this teaser is enough to get you interested in learning more about the domain of philosophy, to spend time equipping yourself with the understanding and methods to approach each of these topics, and to begin leading the important discussions that will improve clarity, transparency and mutual understanding within your organization.

Thursday, July 23, 2020

The effective "Daily"

If you feel that your "Dailies" are an ineffective energy-drainer, it may have to do with the way you run them. Here's what I do.

First things first, I don't like Scrum's "Three Questions" much, because in my opinion, they still invite over-emphasizing the work of the individual. So here's the focus I would like to set:

Focus on the goal

It's quite important, and this is where many teams already take a wrong turn, to have a proper, clear, SMART goal for the week (or Sprint). And that goal should be elaborated along the lines of "What is different for our organization at the end of the interval?", rather than "Do X amount of work".

The secret lies in measuring our progress against such a meaningful goal, because it allows us to focus on our relative position towards where we want to be. That becomes the measure of our "Inspect and Adapt".

Keep the goal in view

Even in a remote setting, I have adopted the habit of writing down the goal and making it the center of attention in the Daily. When people join the Daily, I make sure that everyone takes a look at our goal. This primes the mindset.

Let's orient ourselves!

Once we have a goal, all of our discussions should center around that goal:
  1. Is the goal still valid? 
  2. Are we getting closer to our goal?
  3. Is anything stopping us from reaching our goal?
  4. What's our next step towards the goal?

Avoid: Antipatterns

I have attended too many Dailies which focused either on the work items or on the contribution of individuals. These meetings usually go like this:

Avoid: Individual reporting

Person A says, "Yesterday, I wrote 5 mails and attended 3 meetings. Today, I will meet with ..."
Person B picks up, does the same.
Rinse & repeat until meeting over. 
When the meeting is over, we know everyone is doing something.
Unfortunately, we have no idea whether we're moving in circles, standing still, or even going in the wrong direction.
People are distracted from the goal, because they have to think about what they will say so that everyone knows they're busy.

Avoid: Status tracking

The Scrum Master takes a look at work item A and asks, "Who is working on this?"
Someone says, "Me, still working on it" or "It's done" - or the Scrum Master asks, "Who can do it?"
Scrum Master looks at work item B and does the same as with A.
Rinse & repeat until meeting over. 
When the meeting is over, we know everything "in progress" is being worked on. Maybe we also make sure everyone is working on something.
Unfortunately, again, we only know that we're working, not whether we're going to reach the goal.

Bring Value

Many times, it's not clear who gets what value out of a Daily event.
The most important thing to realize: This event isn't "for the Scrum Master", or "for the company". It is for the team, and as such, for the attendees.

Law of two feet

"If any participant feels they are neither receiving nor contributing value, they should leave the meeting." 
This could have the consequence that everyone is walking away from the Daily: that should trigger a conversation on whether you're bringing up the right and important stuff!

Personal ownership

The flip side of the coin on value is: As attendee, you must be the one who is responsible for maximizing the value of your own contribution so that people won't use the Law of Two Feet on you.
Stop talking about things that don't help the team inspect and adapt their progress towards the goal, and limit your contribution to the minimal required input and questions which are relevant to remain and get back on track.

Saturday, July 11, 2020

Stop asking Why!

 The quest for reason and understanding, for change and improvement, always starts by figuring out the "Why". And now I'm suggesting to "stop asking Why?" - Why? [pun intended!]

The problems with "Why" questions

Let me start with an illustration. 

Jenny and Ahmad struggle with a major issue in an untested segment of Legacy code. Ray, our coach, joins the conversation by asking, "Why are there no tests available?" - "Because", Jenny snaps, "the guy who coded this didn't write any." How was Ray helping? His question heated the mood further - and it didn't generate any new insight.
So was it even worth asking? No. It was the wrong question.

And like this, many times, when we ask "Why", we're not doing that which we intend to achieve: generate insight into reasons and root causes. 

A second problem with "Why" questions is that all parties engaged in the conversation must be interested in exploring. When people are under duress, they are interested in solutions, not long-winded discussions. Hence, they may disengage from the conversation and claim you're "wasting time".

Why that's a problem

There are numerous other problems with "Why" questions that you may have encountered yourself, so I'll list them here as types of "Problematic Why" questions:

Nosy: "Why did you just put that document there?" - When you dig into matters that others feel are none of your business, you will get deflection, and no closer to the root.
Suggestive: "Why don't you put the document in the Archive folder?" - You're implying the solution, and the answer will usually be "Okay" - you're not exploring!
Inquisitive: "Why did you put the document into the Archive folder?" - It puts people on trial, and the response is often justification rather than inspection.
Accusatory: "Why didn't you put the document in the Archive folder?" - This immediately poisons the conversation, provoking a "fight or flight" response. Any sentence starting with "Why didn't you..." is easy to interpret as a personal attack.
Condescending: "Why can't you just put that document into the Archive folder?" - When your question hints at perceived superiority, you're not going anywhere with exploration - it becomes personal!
Commanding: "Why isn't the document in the Archive folder yet?" - Just like a parent asking, "Why are you not in bed yet?", this isn't an invitation to a conversation - the only socially acceptable response is: "I'm on it".
Rhetorical: "Why don't we go grab a coffee?" - The expected answer is "Yes".
Distracting: "Why do you want to store your document?" - Although this question could be interesting, it takes the conversation on a tangent. I can un-proudly claim to have torpedoed an entire workshop with such a misaimed "Why" question.

While there may indeed be legitimate reasons to use these types of "Why" questions, please remember: If you want to explore and generate insight, these aren't the questions you may want to ask.

Why that doesn't help

"Why" questions become stronger and stronger as means of making people uncomfortable and less open to actual exploration as they contain, in descending order:
  1. "You"
  2. modals ("do", "can", "should", "must" etc.) 
  3. negations ("don't", "can't" ...)
  4. Past tense ("did")
  5. Judgmental terms ("even", "bad")
  6. Temporal adverbs ("yet", "still", "already")
And here is a full double bingo: "Why haven't you even pondered yet that your questions could be the problem?" - How happy does that make you to start a conversation with me on the topic?
With the above list in mind, when you begin analyzing the conversations around you, you may indeed start to feel that "Why" questions are often more reason for people to avoid exploring further than to generate valuable insights.

Why Blanks are also bad

Someone just made a statement, and all you're asking is, "Why?" - one word. What could go wrong? How could that be a problem? It can be.
Imagine you're in the middle of a conversation. Jenny says, "We didn't write enough tests." The insight is there. Now you just intercede with a probing "Why?" - and although you never said it, you have just accused Jenny of not writing enough tests, against better knowledge: her mind will auto-complete your one word question into, "Why didn't you write enough tests?"

What to ask instead?

Try re-framing "Why" questions so as to stay out of the solution space and to make people interested in actually having an exploratory conversation. The easiest way to do this is very often to avoid the term "Why" altogether.

When we take the table above, all of the "Why" questions could be replaced with an open conversation during the Retrospective, such as: "I sometimes have a hard time finding our documentation. What could we do about it?"

Almost all "Why" questions can be replaced with a "What" or "How" question that serves the same purpose, without being loaded in any direction. 

For example, the question "Why do we have this project?" sounds like, "I think this project is pointless!" whereas, "What is the intended outcome of this project?" assumes "There is a good reason for this project, and I may not understand it."
Likewise, the question "Why didn't we find those defects during testing?" sounds like, "Our testing sucks!", whereas, "How do those defects get into production?" assumes that "I don't know where the root cause is, and we have to locate it."


Take a look at when you use "Why" questions. Ponder when you didn't get the clarification that you intended. A truly open "Why" question can be re-framed as a "What", "Where" or "How" question that achieves the same purpose.

Experiment with alternative ways of framing questions that avoid pressing hot buttons, such as implied blame or command. 
In doing so, stick only to the facts which have been established already and do not add any extra assumptions or suggestions.

Be slow on "Why": Avoid the "Why" question until you have pondered at least one alternative that doesn't rely on a "Why".

Tuesday, June 30, 2020

Strengthen your Daily Events

It doesn't matter whether you use Scrum or Kanban, on a team or program level - Dailies are (or: should be) always part of the package.

In general, it's a good idea to have a fixed slot on the calendar where everyone quickly comes together to keep each other synced. Still, the number of Dailies can get overwhelming. And tedious. And boring. So what? Here's a suggestion for Dailies that doesn't rely on Scrum's standard "Three Questions":

Brief Information

Dailies are not the time for discussion.  They're for brief information exchange.
Be as concise as possible, provide only relevant information.
If there is something to discuss, focus on what it is, and keep the content discussion for later. Meet after with the people who find value in the conversation itself, so that those who aren't involved are free to do something that matters to them.

Don't mention Business as Usual

Nobody cares that you were "busy" or "working on", because everyone is!
And as long as you're following the agreed plan, that's not news, either.

Should you mention that you have finished one work item, and started another?
If you're using visual indicators of progress and your board is up to date, everyone can see what you're working on. And as long as that's doing just fine - that should suffice.

Cover the four areas

Instead of focusing on activity, try refocusing on things that were not agreed beforehand:


Did anything "outside-in" happen that makes further pursuit of the current plan suboptimal?
Did you have any learnings that make a different way forward better?
Do you need to change the work, or the goals?


Did something unusual occur, for instance: does something take unusually long, are you running out of work, do you need unplanned support? Are there any execution signals that imply there could be an issue somewhere?
Whatever comes up that may need further investigation or wasn't part of your initial assumptions should be mentioned, because it will distract from your original plan.


Does something block your pursuit of your current goal, be it technical, organizational or procedural?
Which work item is blocked, and what is the impact of the blockage?
I like to prepare red stickies and just plaster them across the blocked item(s), so that everyone is aware that this item doesn't make progress.


The opposite of problems - what is now unblocked, and can proceed as normal again?
Don't get into any form of detail how exactly the problem was addressed, unless multiple items were blocked and you need to be clear how far the unblocking reaches.

Be prepared!

Many Dailies are entirely "ad hoc": people just show up and mention whatever is on their mind.
Instead, try to be prepared for the Daily: do you have any BICEPS to share, and what's the best way to get the message across?

But ... I have nothing!

Yes, that's great. It means that you don't need to communicate anything in the Daily, because everything is on track.

And what if we all have nothing?

Then - cancel the meeting and continue with whatever you were doing. You have more important things to do than interrupt your work to communicate trivialities.

And the social aspect?

If you want to use the Daily as a water cooler event, to decompress or whatever - you can do that. With the people who are interested. That should be part of the regular work, and not of a Daily, which is a cyclical Inspect+Adapt event to help you maximize your odds of succeeding.

Should we even have a Daily then?

That depends. In another article, I discussed that closely collaborating teams may not need a Daily. For all other teams, it's actually good if you don't need Dailies, yet still keep the fixed time slot just in case. The mechanism could change from routine daily to "on-demand" daily.

You could measure how often you need to have Dailies, which becomes a metric of how well you can predict your next steps, then use that to have a discussion of whether that's appropriate to your team situation or not.

Sunday, June 14, 2020

Planning with Capacity Buffers

I often get asked questions along the lines of, "How do we deal with work that's not related to the Sprint Goal?" The typical agile advice is that all work is part of the Product Backlog and treated as such, and that the work planned for the Sprint is part of the Sprint Goal.
In general, I would not recommend this as a default approach. I often advise the use of Planning Buffers.

Where does the time go?

Teams working in established organizations on legacy systems often find that work which doesn't advance the product makes up a significant portion of their time. Consequently, when they show up in a Sprint Review, the results tend to go in one of two directions:
Either the team will have focused on new development, angering existing users who wonder why nobody tackled known problems - or the team will have focused on improving legacy quality, angering the sponsor who wonders why the team is making so little progress. Well, there's a middle ground: angering everyone equally.

In any case, this is not a winning proposition, and it's also bad for decision making.

Create transparency

A core tenet of knowledge work is transparency. That which isn't made explicit, is invisible.
This isn't much of an issue when we're talking about 2-5% of the team member's capacity. Nobody notices, because that's just standard deviation.
It becomes a major issue when it affects major portions of the work, from like a quarter upwards of a team's capacity. 
Eventually, someone will start asking questions about team performance, and the team, despite doing their best, will end up on the defensive. That is avoidable by being transparent early on.

Avoid: Backlog clutter

Many teams resort to putting placeholders into their backlog, like "Bugfix", "Retest", "Maintenance" and assigning a more or less haphazard number of Story Points to these items.
As the Sprint progresses, they will then either replace these placeholders with real items which represent the actual work being done - or worse: they'll just put everything under that item's umbrella.
Neither of these is a good idea, because arguably, one can ask how the team would trust in a plan containing items they know nothing about. And once the team can't trust it ... why would anyone else?

Avoid: Estimation madness

Another common yet dangerous practice is to estimate these placeholder items, then re-estimate them at the end of the Sprint and use that as a baseline for the next Sprint.
Not only is such a practice a waste of time - it creates an extremely dangerous illusion of control. Just imagine that you've been estimating your bugfixing effort for the last 5 Sprints after each Sprint, and each estimate looks, in the books, as if it was 100% accurate. 
And then, all of a sudden, you encounter a major oomph: you're not living up to your Sprint Forecast, and management asks what's going on. Now try to explain why your current Sprint was completely mis-planned.

So then, if you're neither supposed to add clutter tickets, nor to estimate the Unknowable - then what's the alternative?

Introduce Capacity Buffers

Once you've been working on a product for a while, you know which kinds of activities make up your day. I will just take these as an example: New feature development, Maintenance & Support, fixing bugs reported from UAT - and working on other projects.

I'm not saying that I advocate these are good ways to plan your day, just saying if this is your reality - accept it!

We can then allocate a rough budget of time (and therefore, of our development expenses) to each activity.

An example buffer allocation

Thus, we can use these buffers as a baseline for planning:

Buffer Planning

Product Owners can easily allocate capacity limits based on each buffer. 
For example, 10% working on other projects, 25% UAT bugfixing and 25% maintenance work, which leaves 40% for development of new features. 
This activity is extremely simple, and it's a business decision which requires absolutely no knowledge at all about how much work is really required or what that work is.
In our example, this would leave the team to plan their Sprint forecast on new feature development with 40% of their existing capacity.
As a side remark: every single buffer will drain your team's capacity severely, and each additional buffer makes it worse. A team operating on 3 or more buffers is almost incapacitated already.
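The capacity arithmetic above can be sketched in a few lines. This is a hypothetical helper (the function name and the buffer names are just the article's running example), not a prescribed tool:

```python
def feature_capacity(buffers):
    """Return the share of team capacity left for new feature work,
    given named buffer allocations as fractions of total capacity."""
    total = sum(buffers.values())
    if total >= 1.0:
        raise ValueError("Buffers consume all capacity - nothing left to build.")
    return 1.0 - total

# The allocation from the example above:
buffers = {"other projects": 0.10, "UAT bugfixing": 0.25, "maintenance": 0.25}
remaining = feature_capacity(buffers)  # 40% left for new feature development
```

Note how three buffers already leave this team with well under half its capacity - which is exactly the side remark above.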

These things are called "buffer" for a reason: we prefer to not use them, but we plan on having to use them. 

Sprint & PI Planning with Buffers

During the planning session, we entirely ignore the buffers, and all buffered work, because there is nothing we can do about it. We don't estimate our buffers, and we don't put anything into the Sprint Backlog in its place. We only consider the buffer as a "black box" that drains team capacity. So, if under perfect circumstances, we would be able to do 5 Backlog items in a week, our 60% allocated buffer would indicate that we can only manage 2 items.

Since we do, however, know that we have buffers, we can plan further top-priority backlog items that contribute to our team's goal - but we would plan them in a way that their planned completion works out even when we need to consume our entire buffer.

So, for example: if our Team Goal would be met after 5 backlog items, we could announce a completion date in 3 Sprints, since our buffers indicate that we're most likely not going to make it in 1 Sprint.
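The forecast logic above can be sketched as follows - a minimal illustration assuming whole backlog items and full buffer consumption (`sprint_forecast` is a hypothetical name, not part of any framework):

```python
import math

def sprint_forecast(items_at_full_capacity, total_buffer_share, goal_items):
    """Plan conservatively: assume the entire buffer gets consumed,
    then forecast how many Sprints the goal takes at what's left."""
    per_sprint = math.floor(items_at_full_capacity * (1 - total_buffer_share))
    if per_sprint == 0:
        raise ValueError("No capacity left - no delivery forecast is possible.")
    return math.ceil(goal_items / per_sprint)

# 5 items per Sprint at full capacity, 60% buffered -> 2 items per Sprint,
# so a 5-item Team Goal is announced for 3 Sprints.
sprints_needed = sprint_forecast(5, 0.60, 5)
```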

Enabling management decisions

At the same time, management learns that this team isn't going at full speed, and that intervention may be required to increase the team's velocity. It also creates transparency about how much "bad money" we have to spend, without placing blame on anyone. It's just work that needs to be done, due to the processes and systems we have in place.

If management would like "more bang for the buck", they have some levers to pull: invest into a new technology system that's easier to maintain, drive sustainability, or get rid of parallel work. None of these are team decisions, and all of them require people outside the team to make a call.

Buffer Management

The prime directive of activity buffers is to eliminate them.
First things first, these kinds of buffer allocations make a problem transparent - they're not a solution! The first step to eliminating them is shrinking them. Unfortunately, this typically requires additional, predictable work by the team, which should then find its way into the Product Backlog to be appropriately prioritized.

Buffers and the Constraint

If you're a proponent of the Theory of Constraints, you will realize that the capacity buffers proposed in this article have little relationship to the Constraint. Technically, we only need to think about capacity buffers in terms of the Constraint. This means that if, for example, testing is our Constraint, Application Maintenance doesn't even require a buffer - because the efforts thereof will not affect testing!
This, however, requires you to understand and actively manage your Constraint, so it's an advanced exercise - not recommended for beginners.

Consuming buffers

As soon as any activity related to the buffer becomes known, we add it to the Sprint Backlog. We do not estimate it. We just work it off, and keep track of how much time we're spending on it. Until we break the buffer limit, there is no problem. We're fine. 
We don't "re-allocate" buffers to other work. For example, we don't shift maintenance into bugfixing or feature delivery into maintenance. Instead, we leave buffer un-consumed and always do the highest priority work, aiming to not consume a buffer at all.

Buffer breach

If a single buffer is breached, we need to have a discussion about whether our team's goal is still realistic. While this usually only becomes necessary after multiple buffers have been breached, there are also cases where buffers are already tight and the first breach is a sufficiently clear warning sign.

Buffer breaches need to be discussed with the entire team, that is, including the Product Owner. If the team's goal is shot, that should be communicated early.
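The bookkeeping described in the last two sections - log time against each buffer, don't re-allocate, and treat a breach as the trigger for the team conversation - could look like this minimal sketch (the `BufferTracker` class is hypothetical, not a prescribed tool):

```python
class BufferTracker:
    """Track time spent against named capacity buffers during a Sprint."""

    def __init__(self, limits_in_hours):
        self.limits = limits_in_hours
        self.spent = {name: 0.0 for name in limits_in_hours}

    def log(self, name, hours):
        """Record work against one buffer; report whether it is now breached."""
        self.spent[name] += hours
        return self.breached(name)

    def breached(self, name):
        return self.spent[name] > self.limits[name]

tracker = BufferTracker({"maintenance": 20, "bugfixing": 20})
tracker.log("maintenance", 8)    # False - within the limit, no problem, we're fine
tracker.log("maintenance", 15)   # True - 23h spent against a 20h buffer:
                                 # time to discuss the team's goal with the PO
```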

Buffer sizing

As a first measure, we should try to find buffer sizes that are adequate, both from a business and technical perspective. Our buffers should not be so big that we have no capacity left for development, and they shouldn't be so small that we can't live up to our own commitment to quality.
Our first choice of buffers will be guesswork, and we can quickly adjust the sizing based on historic data. A simple question in the Retrospective, "Were buffers too small or big?" would suffice.

Buffer causes

As mentioned above, buffers make a problem visible - they aren't a solution! And buffers themselves are a problem, because they steal the team's performance!
Both teams and management should align on the total impact of a buffer and discuss whether these buffers are acceptable, sensible or desirable. These discussions could go any direction.

DevOps teams operating highly experimental technology have good reasons to plan large maintenance buffers. 
Large buffers allocated to "other work" indicate an institutional problem, and need to be dealt with on a management level.
Rework buffers, and bugfixing is a kind of rework, indicate technical debt. I have seen teams spend upwards of 70% of their capacity on rework - and that indicates a technology which is probably better to decommission than to pursue.

Buffer elimination

The primary objective of buffer management is to eliminate the buffers. Since buffers tend to be imposed upon the team by their environment, it's imperative to provide transparent feedback to that environment about the root cause and impact of these buffers.
Some buffers can be eliminated with something as simple as a decision, whereas others will take significant investments of time and money to eliminate. For such buffers, it tends to be a good idea to set reduction goals.
For example, reducing "bugfixing" in our case above from 25% to 10% by improving the system's quality would increase the team's delivery capacity from 40% to 55% - we nearly double the team's performance by cutting down on the need for bugfixing - which creates an easy-to-understand, measurable business case!
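The business case above can be verified with a tiny sketch (hypothetical helper name; the figures are taken from the running example):

```python
def buffer_reduction_case(buffers, name, new_share):
    """Compare feature-delivery capacity before and after shrinking one buffer."""
    before = 1.0 - sum(buffers.values())
    reduced = dict(buffers, **{name: new_share})
    after = 1.0 - sum(reduced.values())
    return before, after

# Shrinking "UAT bugfixing" from 25% to 10% lifts delivery capacity
# from 40% to 55% of the team's time.
before, after = buffer_reduction_case(
    {"other projects": 0.10, "UAT bugfixing": 0.25, "maintenance": 0.25},
    "UAT bugfixing", 0.10)
```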

Now, let me talk some numbers to conclude this article.

The case against buffers

Imagine you have a team whose salary and other expenses are $20000 per Sprint.
A 10% buffer (the minimum at which I'd advise using them) would mean not only that you're spending $2000 on buffers, but also that you're only getting $18000 worth of new product for every $20k spent!

Now, let's take a look at the case of a typical team progressing from a Legacy Project to Agile Development:

Twice the work ...

Your team has 50% buffers. That means, you're spending $10k per Sprint on things that don't increase your company's value - plus it means your team is delivering value at half the rate they could! 

Developers working without buffers would be spending $20k to build (at least) $20k in equity, while your team would be spending $20k to build $10k in equity. That means you would have to work twice as hard to deliver a positive business case!
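The dollar arithmetic here is simple enough to sketch (a hypothetical helper; the $20k Sprint budget is the example above):

```python
def buffer_cost(spend_per_sprint, buffer_share):
    """Split a Sprint's spend into value-creating work and buffer overhead."""
    overhead = spend_per_sprint * buffer_share
    equity = spend_per_sprint - overhead
    return equity, overhead

# 10% buffer: $18k of new product for every $20k spent.
buffer_cost(20_000, 0.10)   # (18000.0, 2000.0)
# 50% buffer: half of every Sprint's budget doesn't build equity at all.
buffer_cost(20_000, 0.50)   # (10000.0, 10000.0)
```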

Every percent of buffer you can eliminate reduces the stress on development teams, while increasing shareholder equity proportionally!

And now let's make that extreme. 

Fatal buffers

Once your buffer is in the area of 75% or higher, you're killing yourself!
Such a team is only able to deliver a quarter of the value they would need to deliver in order to build equity!
In such a scenario, tasking one team with 100% buffer work, and setting up another team to de-commission the entire technical garbage you're dealing with is probably better for the business than writing a single additional line of code in the current system.

Please note again: the problem isn't the capacity buffer. The problem is your process and technology! 

High Performance: No Buffers

High Performance teams do not tolerate any capacity buffers to drain their productivity, and they eliminate all routine activity that stops them from pursuing their higher-ordered goal of maximizing business value creation. As such, the optimal Capacity buffer size is Zero.

Use buffers on your journey to high performance to start the right discussion about "Why" you're seeing the need for buffers - and then be ruthless in bulldozing your way to get rid of them.