Tuesday, June 30, 2020

Strengthen your Daily Events

It doesn't matter whether you use Scrum or Kanban, on a team or program level - Dailies are (or: should be) always part of the package.

In general, it's a good idea to have a fixed slot on the calendar where everyone quickly comes together to keep each other synced. Still, the number of Dailies can get overwhelming. And tedious. And boring. So what? Here's a suggestion for Dailies that doesn't rely on Scrum's standard "Three Questions":

Brief Information

Dailies are not the time for discussion. They're for brief information exchange.
Be as concise as possible, provide only relevant information.
If there is something to discuss, focus on what it is, and keep the content discussion for later. Meet afterwards with the people who find value in the conversation itself, so that those who aren't involved are free to do something that matters to them.

Don't mention Business as Usual

Nobody cares that you were "busy" or "working on something", because everyone is!
And as long as you're following the agreed plan, that's not news, either.

Should you mention that you have finished one work item, and started another?
If you're using visual indicators of progress and your board is up to date, everyone can see what you're working on. And as long as that's doing just fine - that should suffice.

Cover the four areas

Instead of focusing on activity, try refocusing on things that were not agreed beforehand:


Did anything "outside-in" happen that makes further pursuit of the current plan suboptimal?
Did you have any learnings that make a different way forward better?
Do you need to change the work, or the goals?


Did something unusual occur, for instance: does something take unusually long, are you running out of work, do you need unplanned support? Are there any execution signals that imply there could be an issue somewhere?
Whatever comes up that may need further investigation, or that wasn't part of your initial assumptions, should be mentioned, because it may pull you away from your original plan.


Does something block your pursuit of your current goal, be it technical, organizational or procedural?
Which work item is blocked, and what is the impact of the blockage?
I like to prepare red stickies and just plaster them across the blocked item(s), so that everyone is aware that this item doesn't make progress.


The opposite of problems - what is now unblocked, and can proceed as normal again?
Don't get into any form of detail how exactly the problem was addressed, unless multiple items were blocked and you need to be clear how far the unblocking reaches.

Be prepared!

Many Dailies are entirely "ad hoc": people just show up and mention whatever is on their mind.
Instead, try to be prepared for the Daily: do you have any BICEPS to share, and what's the best way to get the message across?

But ... I have nothing!

Yes, that's great. It means that you don't need to communicate anything in the Daily, because everything is on track.

And what if we all have nothing?

Then - cancel the meeting and continue with whatever you were doing. You have more important things to do than interrupt your work to communicate trivialities.

And the social aspect?

If you want to use the Daily as a water cooler event, to decompress or whatever - you can do that. With the people who are interested. That should be part of the regular work, and not of a Daily, which is a cyclical Inspect+Adapt event to help you maximize your odds of succeeding.

Should we even have a Daily then?

That depends. In another article, I discussed that closely collaborating teams may not need a Daily. For all other teams, it's actually good if you don't need Dailies, yet still keep the fixed time slot just in case. The mechanism could change from routine daily to "on-demand" daily.

You could measure how often you actually need a Daily. That frequency becomes a metric of how well you can predict your next steps, which you can then use to discuss whether it's appropriate to your team's situation or not.

Sunday, June 14, 2020

Planning with Capacity Buffers

I often get asked questions along the lines of, "How do we deal with work that's not related to the Sprint Goal?" The typical agile advice is that all work is part of the Product Backlog and treated as such, and that the work planned for the Sprint is part of the Sprint Goal.
In general, I would not recommend this as a default approach. I often advise the use of Planning Buffers instead.

Where does the time go?

Teams working in established organizations on legacy systems often find that the amount of work which doesn't advance the product makes up a significant portion of their time. Consequently, when they show up in a Sprint Review, the results tend to go in one of two directions:
Either the team will have focused on new development, angering existing users who wonder why nobody tackled known problems - or the team will have focused on improving legacy quality, angering sponsors who wonder why the team is making so little progress. Well, there's a middle ground: angering everyone equally.

In any case, this is not a winning proposition, and it's also bad for decision making.

Create transparency

A core tenet of knowledge work is transparency. That which isn't made explicit is invisible.
This isn't much of an issue when we're talking about 2-5% of the team's capacity. Nobody notices, because that's within normal variation.
It becomes a major issue when it affects major portions of the work, from roughly a quarter of a team's capacity upwards.
Eventually, someone will start asking questions about team performance, and the team, despite doing their best, will end up on the defensive. That is avoidable by being transparent early on.

Avoid: Backlog clutter

Many teams resort to putting placeholders into their backlog, like "Bugfix", "Retest", "Maintenance" and assigning a more or less haphazard number of Story Points to these items.
As the Sprint progresses, they will then either replace these placeholders with real items which represent the actual work being done - or worse: they'll just put everything under that item's umbrella.
Neither of these is a good idea, because one has to ask how the team can trust a plan containing items they know nothing about. And once the team can't trust it ... why would anyone else?

Avoid: Estimation madness

Another common, yet dangerous, practice is to estimate these placeholder items, then re-estimate them at the end of the Sprint and use that as a baseline for the next Sprint.
Not only is such a practice a waste of time - it creates an extremely dangerous illusion of control. Just imagine that you've been estimating your bugfixing effort for the last 5 Sprints after each Sprint, and each estimate looks, in the books, as if it was 100% accurate.
And then, all of a sudden, reality hits: you're not meeting your Sprint Forecast, and management asks what's going on. Now try to explain why your current Sprint was completely mis-planned.

So then, if you're neither supposed to add clutter tickets, nor to estimate the Unknowable - then what's the alternative?

Introduce Capacity Buffers

Once you've been working on a product for a while, you know which kinds of activities make up your day. I will just take these as an example: New feature development, Maintenance & Support, fixing bugs reported from UAT - and working on other projects.

I'm not saying that I advocate these are good ways to plan your day, just saying if this is your reality - accept it!

We can then allocate a rough budget of time (and therefore, of our development expenses) to each activity.

An example buffer allocation

Thus, we can use these buffers as a baseline for planning:

Buffer Planning

Product Owners can easily allocate capacity limits based on each buffer. 
For example, 10% working on other projects, 25% UAT bugfixing and 25% maintenance work, which leaves 40% for development of new features. 
This activity is extremely simple, and it's a business decision which requires absolutely no knowledge at all about how much work is really required or what that work is.
In our example, this would leave the team to plan their Sprint forecast on new feature development with 40% of their existing capacity.
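The allocation arithmetic itself is trivial. A minimal sketch using the example percentages above (the buffer names are just illustrative labels):

```python
# Sketch: deriving the remaining feature capacity from buffer allocations.
# Buffer names and percentages are the example values from the text.
buffers = {
    "other projects": 0.10,
    "UAT bugfixing": 0.25,
    "maintenance": 0.25,
}

feature_capacity = 1.0 - sum(buffers.values())
print(f"Capacity left for new features: {feature_capacity:.0%}")  # 40%
```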
As a side remark: every single buffer will drain your team's capacity severely, and each additional buffer makes it worse. A team operating on 3 or more buffers is almost incapacitated already.

These things are called "buffer" for a reason: we prefer to not use them, but we plan on having to use them. 

Sprint & PI Planning with Buffers

During the planning session, we entirely ignore the buffers, and all buffered work, because there is nothing we can do about it. We don't estimate our buffers, and we don't put anything into the Sprint Backlog in its place. We only consider the buffer as a "black box" that drains team capacity. So, if under perfect circumstances, we would be able to do 5 Backlog items in a week, our 60% allocated buffer would indicate that we can only manage 2 items.

Since we do, however, know that we have buffer, we can line up further top-priority backlog items that contribute to our team's goal - but we plan them in a way that their planned completion still works out even when we need to consume our entire buffer.

So, for example: if our Team Goal would be met after 5 backlog items, we could announce a completion date in 3 Sprints, since our buffers indicate that we're most likely not going to make it in 1 Sprint.
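The forecast above, as a back-of-the-envelope calculation with the same example numbers (5 ideal items per Sprint, 60% total buffer):

```python
import math

# Sketch: forecasting with buffers. Numbers are the example from the text.
ideal_items_per_sprint = 5
total_buffer_share = 0.60

# The buffer is a black box draining capacity: 40% of 5 items -> 2 items.
buffered_items = math.floor(ideal_items_per_sprint * (1 - total_buffer_share))

goal_items = 5  # backlog items needed to meet the Team Goal
sprints_to_announce = math.ceil(goal_items / buffered_items)
print(sprints_to_announce)  # 3
```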

Enabling management decisions

At the same time, management learns that this team isn't going at full speed, and that intervention may be required to increase the team's velocity. It also creates transparency about how much "bad money" we have to spend, without placing blame on anyone. It's just work that needs to be done, due to the processes and systems we have in place.

If management would like "more bang for the buck", they have some levers to pull: invest in a new technology system that's easier to maintain, drive sustainability, or get rid of parallel work. None of these are team decisions, and all of them require people outside the team to make a call.

Buffer Management

The prime directive of activity buffers is to eliminate them.
First things first: these kinds of buffer allocations make a problem transparent - they're not a solution! Hence the prime directive, and the first step towards it is shrinking the buffers. Unfortunately, this typically requires additional, predictable work by the team, which should then find its way into the Product Backlog to be appropriately prioritized.

Buffers and the Constraint

If you're a proponent of the Theory of Constraints, you will realize that the capacity buffers proposed in this article have little relationship to the Constraint. Technically, we only need to think about capacity buffers in terms of the Constraint. This means that if, for example, testing is our Constraint, Application Maintenance doesn't even require a buffer - because those efforts will not affect testing!
This, however, requires you to understand and actively manage your Constraint, so it's an advanced exercise - not recommended for beginners.

Consuming buffers

As soon as any activity related to the buffer becomes known, we add it to the Sprint Backlog. We do not estimate it. We just work it off, and keep track of how much time we're spending on it. Until we break the buffer limit, there is no problem. We're fine. 
We don't "re-allocate" buffers to other work. For example, we don't shift maintenance into bugfixing or feature delivery into maintenance. Instead, we leave buffer un-consumed and always do the highest priority work, aiming to not consume a buffer at all.
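Tracking consumption doesn't require tooling; here's a sketch of the bookkeeping, where all names, capacities and hours are made up for illustration:

```python
# Sketch: booking unplanned work against a buffer and flagging a breach.
# Team capacity, buffer names and limits are illustrative values.
sprint_capacity_hours = 400
buffer_limits = {"maintenance": 0.25, "UAT bugfixing": 0.25}
consumed_hours = {name: 0 for name in buffer_limits}

def log_buffer_work(buffer, hours):
    """Record hours spent on buffered work; warn when the limit is breached."""
    consumed_hours[buffer] += hours
    limit = buffer_limits[buffer] * sprint_capacity_hours
    if consumed_hours[buffer] > limit:
        print(f"Buffer breach: {buffer} at {consumed_hours[buffer]}h (limit {limit:.0f}h)")

log_buffer_work("maintenance", 60)  # fine: 60h of a 100h limit
log_buffer_work("maintenance", 50)  # breach: 110h > 100h -> time to talk
```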

Buffer breach

If a single buffer is breached, we need to have a discussion about whether our team's goal is still realistic. While this discussion usually only becomes necessary once multiple buffers are breached, there are also cases where buffers are already tight and the first breach is a sufficiently clear warning sign.

Buffer breaches need to be discussed with the entire team, that is, including the Product Owner. If the team's goal is shot, that should be communicated early.

Buffer sizing

As a first measure, we should try to find buffer sizes that are adequate, both from a business and technical perspective. Our buffers should not be so big that we have no capacity left for development, and they shouldn't be so small that we can't live up to our own commitment to quality.
Our first choice of buffers will be guesswork, and we can quickly adjust the sizing based on historic data. A simple question in the Retrospective, "Were buffers too small or big?" would suffice.

Buffer causes

As mentioned above, buffers make a problem visible - they aren't a solution! And buffers themselves are a problem, because they steal the team's performance!
Both teams and management should align on the total impact of a buffer and discuss whether these buffers are acceptable, sensible or desirable. These discussions could go any direction.

  • DevOps teams operating highly experimental technology have good reasons to plan large maintenance buffers.
  • Large buffers allocated to "other work" indicate an institutional problem, and need to be dealt with on a management level.
  • Rework buffers - and bugfixing is a kind of rework - indicate technical debt. I have seen teams spend upwards of 70% of their capacity on rework, and that indicates a technology which is probably better to decommission than to pursue.

Buffer elimination

The primary objective of buffer management is to eliminate the buffers. Since buffers tend to be imposed upon the team by their environment, it's imperative to provide transparent feedback to the environment about the root cause and impact of these buffers.
Some buffers can be eliminated with something as simple as a decision, whereas others will take significant investments of time and money to eliminate. For such buffers, it tends to be a good idea to set reduction goals.
For example, reducing "bugfixing" in our case above from 25% to 10% by improving the system's quality would increase the team's delivery capacity from 40% to 55% - over a third more output, just by cutting down on the need for bugfixing - which creates an easy-to-understand, measurable business case!
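The numbers behind this business case, as a quick calculation (same example percentages as above):

```python
# Sketch: the business case for shrinking a buffer, using the example
# percentages from the text (buffer names are illustrative).
buffers = {"other projects": 0.10, "UAT bugfixing": 0.25, "maintenance": 0.25}

before = 1.0 - sum(buffers.values())  # 40% capacity for new features
buffers["UAT bugfixing"] = 0.10       # quality investment shrinks the buffer
after = 1.0 - sum(buffers.values())   # 55% capacity for new features

gain = after / before - 1             # relative increase in delivery capacity
print(f"{before:.0%} -> {after:.0%} (+{gain:.1%})")
```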

Now, let me talk some numbers to conclude this article.

The case against buffers

Imagine you have a team whose salary and other expenses are $20000 per Sprint.
A 10% buffer (the minimum at which I'd advise using them) would mean not only that you're spending $2000 on buffers, but also that you're only getting $18000 worth of new product for every $20k spent!

Now, let's take a look at the case of a typical team progressing from a Legacy Project to Agile Development:

Twice the work ...

Your team has 50% buffers. That means you're spending $10k per Sprint on things that don't increase your company's value - plus, your team is delivering value at half the rate they could!

Developers working without buffers would be spending $20k to build (at least) $20k in equity, while your team would be spending $20k to build $10k in equity. That means you would have to work twice as hard to deliver a positive business case!
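The same arithmetic as a tiny function - the dollar figures are the example values from this section:

```python
# Sketch: new product equity built per Sprint for a given buffer share,
# using the example $20k-per-Sprint team from the text.
def equity_built(sprint_cost, buffer_share):
    """Dollars of new product value built per Sprint, all else being equal."""
    return round(sprint_cost * (1 - buffer_share))

print(equity_built(20_000, 0.1))  # 18000 - the 10% buffer case
print(equity_built(20_000, 0.5))  # 10000 - the legacy-team case
```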

Every percent of buffer you can eliminate reduces the stress on development teams, while increasing shareholder equity proportionally!

And now let's make that extreme. 

Fatal buffers

Once your buffer is in the area of 75% or higher, you're killing yourself!
Such a team is only able to deliver a quarter of the value they would need to deliver in order to build equity!
In such a scenario, tasking one team with 100% buffer work, and setting up another team to decommission the entire technical garbage you're dealing with, is probably better for the business than writing a single additional line of code in the current system.

Please note again: the problem isn't the capacity buffer. The problem is your process and technology! 

High Performance: No Buffers

High Performance teams do not tolerate any capacity buffers draining their productivity, and they eliminate all routine activity that stops them from pursuing their higher-order goal of maximizing business value creation. As such, the optimal capacity buffer size is zero.

Use buffers on your journey to high performance to start the right discussion about "why" you're seeing the need for buffers - and then be ruthless in bulldozing your way to get rid of them.

Friday, May 29, 2020

Why there's no traditional test strategy in agile organizations

The test strategy is a fundamental pillar of quality assurance. It would seem plausible that quality is independent of the development approach, hence that the same strategy can be used irrespective of whether we are agile or not.

Nothing could be further from the truth. Indeed, a traditional approach to quality assurance is entirely incompatible with an agile approach - to the point where it becomes a problem in terms of performance and quality! Hence, if you have a traditional test management background and enter an agile environment, there are a few things you need to understand, lest your best intentions be met with massive resistance and pushback.

The goal of testing

Why do we test? This fundamentally changes between a traditional and an agile setting:
From accepting the past to sustaining the future
Traditional, stage-gated testing is centered around the idea that at some point, the work previously completed by developers enters a QA process, during which the conformity of the product to specified requirements is assured. This is past-oriented. It assumes that the product / release / version is "finished".

An agile tester works with a team delivering in small increments - in the case of Continuous Deployment, that could be hundreds of increments per day. The product is never "finished". There will always be more work in the future. The agile team is supposed to work in a way that whatever was the last delivery, we can call it a day, and tomorrow, we won't find a shambles caused by whatever we messed up.

The testing mission

Let's start with the shift in testing objective, because different goals justify a different approach:
From "finding defects" to "preventing defects"
Traditional testing assumes that software has defects, and that developers make mistakes. In the worst circumstances, it assumes that unless checked upon, developers will not do what they are told.
Traditional QA serves a threefold purpose:

  • Verify that developers have done what they claim to have done
  • Catch the mistakes in the work of developers
  • Discover the defects in the software

In an agile setting, this is reframed to a much more positive outlook on both the people and their work. Our agile testing serves three purposes:

  • Consistently deliver high, preferably zero-defect, quality
  • Provide evidence that the product does the right thing
  • Prevent defects as early as possible

As a consequence, some core concepts of how we go about "Agile Testing" change:

Test activity

The major contribution of testing changes:
From mitigating project risk to enabling teams
Whereas a traditional Test Strategy contributes to the project objectives, mitigating the risk of project failure through poor quality, agile testers enable their teams to continuously and consistently work in an environment where quality related risks are a non-issue.
You have to think of it like this: A nuclear power plant has to invest effort into preventing nuclear fallout. A watermill doesn't do that, because it wouldn't even make sense.

Likewise, an agile test strategy won't concern itself with defects. Hence, a lot of things you may have learned that are "must-have" as a test manager are just plain irrelevant:

Test Preparation

We move on to the first major area of a test strategy: preparation. Traditionally, we create a test case catalog by writing test cases to cover the content of the specification document. Then, during the test phase, we execute the test cases to verify conformity to requirements. If a test case finds no defect, we label it as passed. Once a certain threshold of test cases has passed, we can give a "Go" from testing.

There's one fundamental problem here when working with agile teams: there is no specification document! Then what? To make a long story short, we still have a specification, and we still have test cases: the tests are the specification.
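What "the tests are the specification" can look like in practice: a minimal pytest-style sketch, in which the feature (a discount rule) and all names are invented purely for illustration:

```python
# Sketch: an executable specification. The discount rule is an invented
# example; the tests below both specify and verify the behavior.
def discounted_price(price, customer_is_premium):
    """Premium customers get 10% off."""
    return round(price * 0.9, 2) if customer_is_premium else price

def test_premium_customers_get_ten_percent_off():
    assert discounted_price(100.00, customer_is_premium=True) == 90.00

def test_regular_customers_pay_full_price():
    assert discounted_price(100.00, customer_is_premium=False) == 100.00
```

When such tests run on every build in the pipeline, the "specification" can never silently drift away from the product.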

A few things don't exist in an agile organization, though:

Test Case Catalog

The Test Case Catalog is built on the idea that there is something like a major delivery that needs to be tested successfully to meet the project objectives. That idea is incompatible with agile ways of working.
On a high level, we discriminate two types of tests: those that ensure quality, and those that help us understand what quality is.
All tests of the first category become part of our test suite - they are run on every build, they get run as soon as they get created, and they get created as soon as the feature starts being developed.

There is no test case catalog that has been created "upfront, for later execution".

Risk-Based Testing

Typically, a test case catalog contains a myriad of test cases that the team will not have time to conduct. Hence, Risk-Based testing helps to match capacity with effort, while reducing the overall quality risk. In an agile organization, things look different.
We don't develop things that don't need to pass tests. And we don't create tests that don't need to pass. Testing is part of development, passing tests is part of the process, and the tests are as much part of the product as the productive code itself.

Test Data Creation

Most traditional testers have at some point encountered the difficulty of acquiring the necessary data to conduct certain test cases - sometimes it's not entirely clear what data is needed, what it should look like, and (in the case of mass data) where to obtain it. When techniques like BDD with Specification by Example are in use, our test data is part of the product design.

Test scenario setup

In traditional software testing, it would often take hours, sometimes days, to set up the intricate test scenario required to check only one thing in the software. And then pray that we didn't make a mistake - or all that effort was lost! If that's still the case, then our architecture has an issue: tests in our pipeline should bring everything necessary to run as quickly as possible - in seconds rather than hours! And if a scenario takes days to prepare, it'll be a maintenance nightmare, so we'd rather not have any of those to begin with.

Test scenarios move from a per-release basis to a per-code-change basis, which means that it doesn't even make sense to plan scenario setup strategically: it moves entirely to the work execution level.

Defect Management

Traditional test managers feel appalled when an agile team tells them that they neither have, nor want, defect management. Given the typical defect rates observed in traditional Waterfall organizations, it's unthinkable to not systematize and institutionalize defect management.

Let me return to the nuclear plant example. Of course, it needs to have both processes and means to deal with toxic waste: You'd get second thoughts if there was no hazardous waste management. But what if there were barrels labelled as "Nuclear waste" in your local sushi diner? You'd bolt for the door - because such a thing simply doesn't belong there!
It's the same for defects. They don't belong in an agile organization. That's why we don't need defect management.

And with defect management, we lose the need for many other things that would be part of a good traditional test strategy:

Defect management process

In an agile team, dealing with non-conformance is easy: When a test turns red, the developer stops what they're doing, fixes the problem, and continues.
Under ideal circumstances, this takes seconds - if it takes minutes, it may already be an issue where they involve other people on the team. That's it.

Defect prioritization

Don't we all have fun with the arguments that ensue around whether a defect is Priority 1, 2 or 3? Meetings to align and agree on a priority model are pointless if there's a "stop the line" process where any defect immediately interrupts all other work until it is resolved.

Defect status model

Given that a known issue is either someone's top priority being worked on, or it's already fixed, we don't need much of a status model. That reduces organizational complexity by a massive amount.

Defect Tracking

There is nothing to track, by default. If there are defects to track, we have problems we shouldn't be having.

Defect management tool

The agile organization would prefer to resolve the root cause that would mandate the need for such a tool. We should not institute a tool based on the idea that quality problems are inevitable.

Defect status meetings

No defects, no defect meeting.

Defect reports

What would you expect from a report with no data?

Defect KPIs

Who hasn't seen the ping-pong that ensues when a defect is shoved between developer and tester a dozen times, with the tester claiming "it's a defect" and the developer arguing it isn't? When you measure testers against rejected defects while measuring developers against the number of defects, you generate this conflict. Without defect-related KPIs, there's no such conflict.

Test Management

It's an unfair assertion to say that there's no test management, because agile tests are well-managed.

Test Plans

What we don't want is assigning and scheduling test cases or test types to individual testers, irrespective of whether a feature will actually be delivered. Instead, every backlog item has all necessary tests related to it. It's clearly defined who runs them (the CI/CD pipeline), where (on the CI/CD stage) and when (on every build). Part of the refactoring process is to move the tests away from the backlog item into the test suite - a default element of the "test plan" thus becomes the full regression of everything that was formerly built. Hence, a traditional test plan becomes redundant.

Test Tracking

Once you've got your test case catalog, you need to track your test cases. Not so in an agile setting, where the CI/CD pipeline runs and monitors every single test case. "If the test exists and is part of the suite, it will be run every time we change the code, and the developer will receive a notification if anything is wrong." - what would you want to track?

Test Documentation

This isn't fair, because test documentation does exist: in the log files of the CI/CD pipeline, for every single change to the codebase. It's just that we don't give a hoot about the documentation of individual test cases, because the entire document would read "Step - executed, passed. Test - executed, passed" - and wherever that's not true, we get information on what wasn't okay, when and where.

Test Reporting

We don't do stuff like reporting the percentage of test cases passed, failed and "not run". There are only two valid conditions for our entire software: "all tests passed", or "tests not passed". And there's not really a need to report testing at all, because if a single test hasn't passed, we can't deploy. So we really only need to report development progress.

Test Status Meetings

In a Waterfall organization, we need to track test status, typically by having routine meetings where testers report their progress (actual vs. planned), the number of defects they found and how many of them were already closed, plus an estimate of how likely they consider completing their work by the end of the test period.
This meeting wouldn't make any sense in an agile organization, because there would be nothing to talk about.

Test Management Suite

Agile organizations rely heavily on automation. There's probably a tool for everything that is relevant and can be automated. Still, you're not going to find a Test Management or Application Lifecycle Management Suite - because it has nothing to do.
If your test cases are written in the central repository and managed by the pipeline, your test protocols are managed by your artifact repository, and you don't have any defects to track ... what exactly would you expect such a tool to do?

Roles and Responsibility

We need to agree on which role has which responsibility in the testing process - who writes test cases, who reviews and approves them, who runs them, who communicates defects, who tracks them, and so on. None of this is required in an agile setting: the team writes, reviews and runs the test cases, and deals with any problems encountered. The role is called "Agile team member", and the responsibility is "contribute to the team's success". What that means can be more or less flexible. Just like the different members of a family have different strengths and weaknesses, we don't want the games of "Not my responsibility" or "Why didn't you ..." - because none of these discussions help us reach our team goals. The only discussion we are looking for is "How can I contribute to ..." - and that may change upon need. We wouldn't want a static document to contradict what people feel they can achieve.

Test Levels

We have a Test Pyramid, and technically, that doesn't change in an agile environment. But it means a different thing than in a traditional organization. 

In a traditional organization, we would decide up front on certain test levels, which tests to run on which level, and when to do these test levels.

In agile development, the test levels are fluid. We decide on a test, and we execute it. We then refactor it to conduct it on the most effective level - and that should, first and foremost, be the unit level. Pulling every test to the lowest possible level is essential to retaining a sustainable test suite, and that means there can be no hard cut of what to do where.

Test types

We have the "Test Quadrants", which give a simple and clear overview of what should be tested and whether it's automatable - unlike a Test Strategy document, which would define once and for all which of these test types we use, what we do to cover them and where to run them. In an agile setting, these quadrants are more of a constant discussion trigger: "do more of this, maybe a little less of that is enough, how can we do better here ..."

Test Automation

Automation is often an afterthought in classical test strategies - we identify a number of critical test cases that need to become part of the Regression suite, and as far as effort permits, we automate them. This wouldn't work in an agile setting. With the "automate everything" DevOps mentality, we wouldn't define what to automate - we'd automate everything that needs to be done frequently, and that would include pretty much all functional tests. It would also include configuration, data generation and scraping data from the system. We wouldn't include it in a test strategy, though, because how helpful is a statement of "we do what makes sense" - as if anything to the contrary would make sense.

Release Management

Ideally, we would be on a Continuous Deployment principle - and where that's not feasible, it should be Continuous Delivery. We also want to have a "Release on Demand" principle, that is: when something is developed, it should be possible to deploy this content without delay. Whether it would be released to users immediately or after a period should be a business, and not a technical, question. In most settings, the content would already be live and using "Dark Release" mechanisms, become available without changes to the code base.

Test Phases

A major concern in traditional testing is the coordination of the different test phases required to meet the launch date. Some activities, like test preparation, need to be completed before the delivery of the software package, and all activities need to be completed a few days before launch to meet the project schedule.
When the team is delivering software many times an hour, and wants to deploy to Production at least once a day, you're going to be out of luck creating a phase-gated schedule - acceptance, integration and system tests happen in parallel, continuously. They don't block each other, and they take minutes rather than weeks.

Scheduling non-functional tests

Yes, there are some tests, like Pen-Tests or load tests, that we wouldn't run on every build.
Whereas a traditional test strategy would put these on a calendar with clear test begin/end periods, we'd schedule intervals or triggers for these types of test - for example, "nightly, weekly, quarterly" or "upon every major build or change to the environment".
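One way to express such trigger-based scheduling, sketched in Python - the suite names and intervals below are invented examples, not a prescription:

```python
# Sketch: map each slow or expensive test suite to a trigger or interval
# instead of a calendar slot with a fixed begin/end date.
SCHEDULE = {
    "smoke":     {"trigger": "every build"},
    "load_test": {"trigger": "nightly"},
    "pen_test":  {"trigger": "quarterly"},
    "full_e2e":  {"trigger": "environment change"},
}

def suites_for(trigger: str) -> list[str]:
    """Which suites fire when a given trigger occurs?"""
    return [name for name, cfg in SCHEDULE.items()
            if cfg["trigger"] == trigger]

print(suites_for("nightly"))  # ['load_test']
```

In practice this mapping would live in the CI system's configuration; the shape of the idea is the same.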

Installation schedule

A major issue in traditional testing is the schedule for installation - what will be installed, when, and by whom. We'd prefer to "push" builds through the automated pipeline, and expect every member of the team to "pull" any version of the software onto an environment of their choice at any time.
If you think this results in chaos, try reframing this into, "How would the software and the work need to look for this to not result in chaos?" - it's possible!

Go/No-Go Decisions

In an agile setting, you're not working to achieve a "Go" approval from management by providing evidence that quality is sufficient. If there are any quality issues, we would not continue anyway.

Test Coverage

A key metric in traditional testing is test coverage - both the percentage of requirements covered with tests, and the percentage of tests successfully completed. Neither of these statements makes sense in a setting where a requirement is defined by tests, and where the work isn't "done" until the test is successfully completed: test coverage, from the perspective of a traditional definition, must always be 100% by default. Why then measure it?

Restricted approval

Whereas the traditional tester is usually tasked with doing whatever is required to ensure that a new software release can get the "Go" approval, the approval is usually made "with restrictions". That means: "we know the delivery isn't really up to the mark, but we have a deadline, and can't afford to postpone it." Everyone knows the quality isn't there; it's just a question of how many corners we can cut, and by how much. Agile testers have a different goal: understanding that every corner cut today must be smoothed out in the future, they ensure that no corners are cut in the first place!

Approval without restrictions

When there are no cut corners, there is no "restricted Go", and when all approval decisions are always an unconditional and unanimous "Go" - there is no need for Go/No-Go decisions.

Test Environments

Probably one of the hardest battles fought when transitioning from a traditional testing approach to an agile testing approach is the topic of environments: more environments add complexity, constraints on environments reduce agility. The fewer environments we have, the better for us.

Environment configuration

If we move towards a DevOps approach, we should also have "infrastructure as code". Whereas a traditional test team would usually have one specialist to take care of environment configuration, we'd expect the config to be equal, or at least, equivalent, to the Production environment - with no manual configuration activity. Our test strategy should be able to rely on our CI/CD pipeline to be able to bootstrap a test environment in minutes.
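A tiny sketch of what "no manual configuration activity" buys us: when environment config is code, "test is equivalent to prod" becomes a check the pipeline can run automatically. The keys, values and allowed exceptions below are invented for illustration:

```python
# Sketch: environment config as data, with an automated equivalence check
# between test and production instead of a specialist's manual setup.
PROD = {"replicas": 4, "db_version": "12.3", "tls": True}
TEST = {"replicas": 1, "db_version": "12.3", "tls": True}

# Keys where test may deliberately differ from prod (e.g. scale):
ALLOWED_DRIFT = {"replicas"}

def config_drift(prod: dict, test: dict) -> set:
    """Return config keys that differ outside the allowed exceptions."""
    return {k for k in prod if test.get(k) != prod[k]} - ALLOWED_DRIFT

print(config_drift(PROD, TEST))  # set() - equivalent, pipeline proceeds
```

If the drift set is non-empty, the pipeline fails before a single test runs - the environment itself is under test.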

After all this, I hear you cry: ... but ...

How about Regulatory Compliance?

"We need evidence for, e.g. SOX Compliance, to warrant that the software meets specification, that all tests were executed, by whom, when, and with which outcome." Yes. That could be. And that's no conflict to agile testing.
I would even claim that agile testing is a hundred times better suited to meet regulatory requirements than traditional testing. And here's why:

  1. The exact statements which were used to do a test are code. 
  2. That means, they have no space for interpretation, and can be repeated and reproduced infinitely.
  3. It also means they are subject to version control. We can guarantee that the test result is exactly correlated to the test definition at that timestamp. Our test cases are tamper-proof.
  4. There is no room for human error in execution or journaling. The results are what they are and mean what they mean. 
  5. All test runs are logged exactly as defined. There is no way for any evidence to go missing. By storing test results in an artifact repository, we have timestamped, tamper-proof evidence. And tons of it.
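Point 5 can be made concrete with a small sketch: sealing each test result with a hash makes any later modification detectable. The record fields below are illustrative, not a real evidence format:

```python
# Sketch: timestamped, tamper-evident test evidence via content hashing.
import hashlib
import json

def seal(result: dict) -> dict:
    """Attach a SHA-256 hash over the canonical JSON of the result."""
    payload = json.dumps(result, sort_keys=True).encode()
    return {**result, "seal": hashlib.sha256(payload).hexdigest()}

def verify(record: dict) -> bool:
    """Recompute the hash; any tampering with the record breaks it."""
    body = {k: v for k, v in record.items() if k != "seal"}
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == record["seal"]

evidence = seal({"test": "login_works", "outcome": "pass",
                 "commit": "abc123", "timestamp": "2020-06-30T10:00:00Z"})
```

Changing so much as the outcome field afterwards makes `verify` fail, which is exactly the property an auditor cares about.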

Proper agile testing has nothing to be afraid of when facing an audit. It's better prepared than traditional test management could ever be, and under most circumstances, an auditor will require much less evidence to ascertain compliance than an agile team requires just to do their job.

All that said, this raises the question ...

Do we need an Agile Test Strategy?

If you've been paying close attention to all I've written above, you may wonder if there's a need for a test strategy in an agile environment at all.
After all of the above points, the answer will surprise you: Yes, indeed we need an agile test strategy.
We will explore this test strategy in more detail at another time. At this time, let me just reduce this to headlines of what it means and will cover:

The Agile Test Strategy ...

  • belongs to the teams developing software.
  • is a living documentation that explains what is currently happening.
  • focuses on increasing customer satisfaction and product value.
  • is closely related to the Definition of Done, the objective standard of when a team has completed their work on any given item.
  • contains the team's quality-related Working Agreements that provide an explanation of how the team is collaborating to meet quality objectives.
  • addresses organizational and technological measures required to attain and sustain high quality.
  • minimizes both current and future total effort required to deliver sustainable high quality.
  • optimizes the Test Pyramid
  • utilizes the Test Quadrants
  • leverages manual testing to optimize learning

The key question an agile test strategy would always revolve around is, "What are we doing to meet our commitment to quality with even less effort?"


With this admittedly long article, I hope I could shed some light on the differences between agile and traditional testing strategy. You can add massive value to an agile organization if they don't have a test strategy yet, and you can always find something within an agile test strategy that can be optimized. What you should not do, however, is consider the absence of a traditional test strategy a "flaw" in an agile organization.

You need to be familiar with the differences in the two approaches, so that you can avoid making unhelpful suggestions and focus your efforts on the things which move the teams forward.

The major assumption of this article is that an agile organization is actually committed to and using the engineering practices required to attain continuous high quality. Where this isn't true, the Agile Test Strategy would include the way to achieve this condition - it wouldn't focus on instituting practices or mechanisms contrary to this goal.

Friday, May 22, 2020

Entrepreneurial Value for in-house development

"We can't measure the monetary value of a Feature". A common complaint, and oftentimes mere ignorance. It's economically disastrous for an organization that spends money on software development!

Here are a few simple, effective ways of making the value of development transparent even when developing complex in-house systems:

Core goals

Before we start, we need to understand that there are fundamentally two different reasons for developing software. While occasionally, these goals can be combined, they are usually distinct, and one of them takes priority.

Revenue increase

Some components allow the company to generate more revenue - acquire new customers, sell additional goods, etc. For those components, measuring feature value is very straightforward: The value is the difference in revenue that can be attributed to the implementation of a feature.

In these cases, the value of a feature can be found on the "Assets" section of the balance sheet - the difference between old and new assets.

Expense reduction

For established processes, most of the work is aimed at increasing efficiency through performance optimization - saving time or resources (e.g., packaging, fuel) to reduce expenses.

In these cases, the value of a feature can be found on the "Liabilities" section of the balance sheet - the difference between old and new liability.

Equity generation?

Arguably, there may be features that turn out to neither generate revenue nor cut down on expenses.

Many people argue that "you got what you got", and proceed to treat such features as assets - claiming that they are becoming shareholder equity that doesn't show up as cash flow.
I, myself, would argue that this is not the case.

From a technical perspective, such features need to be candidates for removal - in software, every feature has code, and every line of code increases the product's complexity, and complexity correlates to the amount of work required to deliver future value -- hence, such features, though seemingly innocent, are technical liabilities!

Determine feature value

With the above established, we will no longer discriminate whether a feature's value is determined by an increase in revenue or reduced expenses. Instead of providing specific techniques or methods, this section focuses on thinking patterns that help in determining the value of a feature.
These patterns can be selected or combined based on circumstance.

Value Stream thinking

Understand the operational value stream you are contributing to, and the place you hold in that value stream. Your contribution to that value stream is the leverage you have for development.
For example, when your value stream produces a revenue of $10000 a day, and after the implementation of the feature, it's still $10000 a day ... how much did the feature add? Maybe it reduced operating expenses. If, however, they are still the same, the feature did zilch.

Theory of Constraints is a great method of figuring out whether it's even theoretically possible for your feature to add value: if you're not working on the value stream's current or next constraint, chances are slim that you will add value to the organization!

Make or Buy thinking

There's an economics proverb, "If it's known, and you can earn money with it, someone is already doing it." In many cases, there's already a vendor solution that does the same thing you would be developing, and that vendor has a price tag attached to their solution.

While you still can't know if that's going to be the actual value of your feature, the NPV is capped at whatever this vendor is asking. So, for example, if you could buy a cloud solution that solves your problem for $199 a month - ignoring discount rates and cash flow, and calculating NPV over a 5-year period - the feature is worth no more than roughly $12k. So unless you can deliver it cheaper, you may not want to build it.
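The cap from the example above, worked out in a few lines (discounting ignored, as in the text):

```python
# The vendor's price caps the feature's value: $199/month over 5 years.
monthly_price = 199
months = 5 * 12

value_cap = monthly_price * months  # upper bound on what building is worth

print(value_cap)  # 11940, i.e. roughly $12k
```

If your team's burn rate means the build would cost more than that, the "buy" branch of the decision wins by default.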

Service thinking

If you remember the beginning of the Internet age, AOL was a service born when companies realized that they had capacities available that others would pay for. Internet giants like Google and Amazon have since successfully replicated the model of selling a solution to their own problem to others as well. You already paid for it - so every cent you make with a sale is net profit! What is stopping you from capitalizing on the idea? If you're doing it right, there's even a chance you can make more money selling the feature to others than the value it has within your own organization! More than one software company was born out of scratching one's own itch.

Even if you're producing something "in-house", always ask the question, "Who else has this problem, and how much are they willing to pay to get it solved?" - if the answer boils down to, "Nobody" or "Nothing", then chances are that you're solving the wrong problem.

Wednesday, May 20, 2020

Key concepts every Product Owner must understand

Let's take a look at some key concepts every Product Owner should understand:

Manage options

As Product Owner, you receive ideas for things that your product could do. Each of them is an option of something that could be done.

There's usually a flood of ideas - some better, others worse. You need to sort through them, typically by ordering them in a backlog.

You choose the ideas: Good ideas, bad ideas - big ideas, small ideas. Your call.
As far as possible, avoid promises to anyone - they reduce your freedom of choice.

Each of those ideas requires you to make a certain investment in order to turn it into a Product.
That investment is, first and foremost, money. You need to secure this investment, lest your idea dies before it is realized. That's funding.

Spend to Gain

As Product Owner, you have a development team doing the work of making ideas happen. It's not important for you how they do this. That's what they are experts for. What matters for you: what they work on, and in which order.

Your team has a finite capacity - how much work they can do per time. If you feed more ideas to the team than their capacity, they won't get all of them done. So choose wisely how you want to use this capacity.

Development work is fuelled by money - salary, infrastructure, you name it. The rate at which your team spends money is your burn rate. The correlation between burn rate and capacity is asymmetric: a slight reduction in burn rate often significantly reduces capacity, while an increase in burn rate often has no predictable influence on capacity.

With a stable team, your burn rate is constant. Once you know your capacity and burn rate, you can figure out an approximate investment required for an idea. Your estimates will usually be wrong, and a better crystal ball will only reduce your capacity.
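In numbers, the investment estimate is as simple as it sounds - burn rate times expected duration. The figures here are purely illustrative:

```python
# Sketch: with a stable team, the investment for an idea is just
# burn rate x expected duration. All numbers are made up.
burn_rate = 50_000        # money spent per month (salaries, infra, ...)
estimated_months = 3      # rough size of the idea

investment = burn_rate * estimated_months

print(investment)  # 150000 - what the idea must at least return
```
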

Outcome Focus

The Return on Investment is how much money you made from realizing the idea. Oftentimes, it's elusive (you can't really know at all), and it's usually pure guesswork until you're done. So keep the investment low until you actually see this Return on Investment and have turned an assumption into hard cash.

Look for small product increments that allow you to generate a constant flow of value.
Don't get blinded by big numbers: It's better to get $100 every month, starting tomorrow - than to get $5000 once, two years from now. There are many reasons for that - the biggest one being that you can't know if you'll get that money until you did.
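The comparison in numbers - within the same five-year window, the steady trickle has already overtaken the big one-off, quite apart from the risk that the deferred payout never materializes:

```python
# Sketch: $100 every month starting now vs. $5000 once, two years out.
def cumulative_small(months: int) -> int:
    return 100 * months                  # the steady trickle

def cumulative_big(months: int) -> int:
    return 5000 if months >= 24 else 0   # one payout, two years from now

print(cumulative_small(60), cumulative_big(60))  # 6000 5000
```

And that comparison still ignores that the small amounts arrive early, can be reinvested, and de-risk the value hypothesis month by month.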


You must compare your Return on Investment (ROI) with your investment. Let's assume it's your own business: you'd understand immediately that if the ROI is lower than your investment, and if your cash flow is worse than your burn rate, it's just a matter of time until you have to close shop.

And that's exactly what a Product Owner needs to do:

  1. to ensure that the product generates a positive Return on Investment, by eliminating bad ideas and prioritizing good ones.
  2. to ensure that there's a positive cash flow, by feeding a constant stream of ideas that can be realized with a controlled investment before making a "pivot or persevere" (build something else, or more of the same) decision.
  3. to prioritize ideas against one another, by figuring out which ones promise the best Return on Investment.

Your job

First and foremost, you have to understand money to be a Product Owner.
The development work is the experts' job. Even on prioritizing and formulating ideas, they could be the experts. 
As long as the ideas the development team realizes result in an overall positive cash flow, the team can continue and grow - the business is healthy. And that's your job. Everything else, you can delegate.

A fictional scenario

To employed Product Owners, I like to give this thought experiment:

Imagine that you were on a bazaar.

  1. You can buy the product for the investment required.
  2. You get the product delivered after the development time has elapsed.
  3. You can sell the product for the ROI it generates in your organization. 
  4. The difference is your income.
Would you build the same product?

Monday, May 11, 2020

The role of testers

As a CTO, ten years ago, I said, "I pay my testers to find defects". This attitude has since changed.

I now think that when testers are finding defects, then someone, somewhere is doing something wrong. They shouldn't.

Instead, they should:

  • establish a "test everything mindset" in the team
  • ask the critical questions early on which allow the team to build a high quality product.
  • work with the organization and users to guide efforts invested into establishing high quality.
  • collaborate within the team to establish both technical and operative means for preventing low quality.
  • exercise user empathy to help the team develop a product that doesn't just meet acceptance criteria, but even "feels right".
  • take critical looks at the product to see which aspects weren't covered with Acceptance Criteria and identify improvements.

Plus a lot of other things.

And note how the idea of "finding defects" isn't even in there.

Saturday, April 25, 2020

The defect funnel - systematically working towards high quality

Take a look at this diagram: Which of these images best describes your quality strategy?

The four stages - from left to right - are:

  1. Automated testing - issues detected by automated test execution.
  2. Manual testing - issues detected by manual testing efforts.
  3. System Monitoring - issues detected by monitoring capability.
  4. User Reports - issues encountered by users on the running system.

The bars indicate:

  1. Red: Too many issues to deal with.
  2. Yellow: A bearable, but large amount that needs to be prioritized rather than fully dealt with.
  3. Big green: A larger amount of issues that gets handled completely.
  4. Small green: A negligible amount of issues that are dealt with as soon as they pop up.
  5. No bar: "we don't do this, or it doesn't do much."

The defect funnel

Although the images don't really resemble much of a funnel, this "defect funnel" is similar to a sales funnel. In the ideal world, you'd find the highest amount and the most critical defects early, and as a delivery progresses through the process, both amount and criticality decrease. Let's take a look at the ideal world (which never happens in reality) -

Automated testing should cover all the issues that we know can happen - and when we have a good understanding and high control of our system, that should be the bulk of all issues. If we rigorously apply Test Driven Design, we should always have automated tests run red when we create new features, so having red tests is the ideal scenario.
Manual testing - in theory - should not find "known" problems. Instead, it should focus on gray and unexplored areas: manual testing should only find problems where we don't know ahead of time what happens. That's normal in complex systems. Still, this should be significantly less than what we already know.
Monitoring is typically built to maintain technical stability - and in more progressive organizations also to generate business insights. If we find unexpected things in monitoring, it basically means that we don't know how our product works. And the amount of known problems we have should be low, because everything else is just a sign of shoddy craftsmanship.
User reports are quirks we learn from our users. Since we're the designers, creators and maintainers of our product, no user should know more about it than we do. Still, it can occasionally happen that either we choose to expose our user to a trial, or that a scenario is too far out of the norm to predict before it happened. The better our control of our system is, the lower the amount of stuff we don't see before our users.
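The "funnel" property described above can be stated as a tiny check: issue counts should decrease from detection stage to detection stage, left to right. The sample counts are invented:

```python
# Sketch: a healthy defect funnel finds most issues in the earliest,
# cheapest stage; counts should be non-increasing left to right.
STAGES = ["automated", "manual", "monitoring", "user_reports"]

def is_funnel(counts: dict) -> bool:
    values = [counts[s] for s in STAGES]
    return all(a >= b for a, b in zip(values, values[1:]))

healthy = {"automated": 40, "manual": 12, "monitoring": 5, "user_reports": 1}
broken  = {"automated": 2,  "manual": 5,  "monitoring": 1, "user_reports": 30}

print(is_funnel(healthy), is_funnel(broken))  # True False
```

A team could run this over its own issue statistics per sprint - a shrinking funnel shape is a far more honest quality signal than any single defect count.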

In the real world, the funnel usually doesn't even remotely resemble a funnel at all. This should be a clear-cut sign that your process may neither be working as intended nor as designed.

No systematic quality approach

If you don't have a coherent approach to quality at all, this is most likely how things look: if you encounter a problem, it's either by chance during testing, or because users complain.
You can't really discriminate whether the issue was caused by the latest deployment, or has been around for a while and simply never shown up before.
If there's any test automation, it's most likely just regression tests, focusing on the critical scenarios that existed for a long time. Since these tend to be stable, test automation hardly finds any issues.
System monitoring will only detect the most glaring issues - like "server down" or "tablespace full".

In such a situation, developers are fighting a losing battle: Not only do they not really know what caused the problem, or how many problems there actually are - every deployment invites problems. You never know how much effort anything takes, because of the constant interrupts to solve production issues. Reliability is low, quality is low, predictability is low - the only things that tend to be high are effort and frustration.

Hence, most larger organizations adopt systematic quality gated processes:

Waterfall: Quality Gates

Adding systematic quality control processes, with formal test cases, systematic test execution and rigorous bug tracking allows IT to discover most of the critical issues before a deployment hits the user. If this is your only quality measure, though, you're not reducing defect rates at all.
Delays in deliveries to the test environment cut down test time, so test cases get prioritized to meet time constraints.

New components are tested manually ("no time for automation") and everyone sighs with relief when the package leaves development - there's neither time, money nor mental capacity to mind Operations.
The time available to fix found issues is never enough, so defects merely get prioritized - the most critical ones fixed, and the rest are simply released along with the new features: the long-term quality of the system degrades.

In such an environment, testers continually stumble upon legacy problems and simply learn to no longer report known issues. Quality is a mess, and every new user stumbles upon the same things.
The fortunate thing for developers is that they're no longer the only ones who get blamed and interrupted - they have the QA team to shift blame to as well.

Introduction of Agile Testing

The most notable thing about agile testing is that developers and testers are now in the same boat. With a Definition of Done that declares no feature "Done" before its tests have been executed, developers no longer benefit from pushing effort onto the next desk, and test automation - especially of new components - becomes mandatory to keep cycle times low.

What's scary is that the increased focus on quality and the introduction of agile testing techniques seem to reduce quality - the amount of issues suddenly discovered becomes immense! The truth is that the discovered issues were always there and are inherent both to the product and the process. They were just invisible.

Many teams stop at this point, because they don't get enough time to fix all known problems, and stakeholders lose patience with the seeming drop in performance. Everyone knows testing is the bottleneck, and instead of pushing forward and resolving the issue once and for all, they become content with "just enough" testing.
Hence, they never reach the wonderful point where the amount of issues discovered by users starts to decline to a bearable level. But that's where the true victory of higher degrees of test automation, user-centric testing and closer collaboration with development manifests itself.

Shift-Left Testing

It's not enough to "do Agile Testing" - we have to change the quality approach. By having every team member - and users - agree on quality and acceptance criteria prior to deployment, by moving to test driven design, by formulating quality in terms of true/false-verifiable scenarios prior to implementation - and finally, by automating these scenarios prior to development - we break the pattern of finding issues after the fact, that is, when the code is already wrong.

When we first move to Shift-Left Test, we will typically encounter a lot of situations where we discover that the system never did what it was supposed to do, and the newly designed test scenarios fail due to legacy issues. At this point, effort may have another explosion, because a lot of discussions will be required to make the system consistent. The reduction in speed and the increase in problems is a sign that you're moving in the right direction.

In the context of shift-left testing, teams often add extra capabilities to the system which mainly serve testing purposes, but which are also great hookpoints for extending system monitoring to catch certain business scenarios, such as processing or procedural failures.
All of the problems thus caught earlier will not hit the user any more, and this becomes the first point where users start to notice what's going on - and begin to increase confidence in the team's efforts.

Moving to DevOps

Once you've got the quality of the creation of new features under control, it's time to enhance your sphere of control and ensure users also have a good experience of your system. You can't do that without Ops on board, and you need to start solving the issues Ops encounter with a higher priority.

Investing into monitoring for new components becomes an integral part of your quality strategy, for two reasons: First, you will need ways to test your value hypotheses against real world data, and second, since you're designing for quality, you need to ensure this design doesn't break.

You'll still be hitting legacy issues left and right - because you still never had the time to clean them up. But you start to become more aware of them as they arise, and by systematically adding monitoring hookpoints to known issues, you learn to quantify them, so that you can systematically work them off.

The "Accelerate" Stage

In their book, "Accelerate", Gene Kim, Nicole Forgsen and Jez Humble, describe four key metrics of high performing organizations:
  1. Lead time
  2. Deployment frequency
  3. Mean time to recover
  4. Change Fail Percentage
Being world-class on these metrics is only possible with stringent quality control in every aspect of your process, and it's only possible if your system has high quality to begin with.
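A sketch of how the four metrics could be computed from a simple deployment log - the record format below is an assumption for illustration, not something the book prescribes:

```python
# Sketch: the four "Accelerate" metrics from a (made-up) deployment log.
deploys = [
    {"lead_time_h": 4, "failed": False, "recovery_h": 0},
    {"lead_time_h": 6, "failed": True,  "recovery_h": 1},
    {"lead_time_h": 2, "failed": False, "recovery_h": 0},
    {"lead_time_h": 8, "failed": True,  "recovery_h": 3},
]
days_observed = 2

lead_time = sum(d["lead_time_h"] for d in deploys) / len(deploys)
deploy_frequency = len(deploys) / days_observed            # per day
failures = [d for d in deploys if d["failed"]]
mttr = sum(d["recovery_h"] for d in failures) / len(failures)
change_fail_pct = 100 * len(failures) / len(deploys)

print(lead_time, deploy_frequency, mttr, change_fail_pct)
# 5.0 2.0 2.0 50.0
```

The instrumentation to collect such a log is itself part of the quality strategy - you can't improve metrics you don't measure.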

What may come as a surprise: we're not even aiming to eliminate all known issues in design: That would be too expensive, and too slow. Instead, we're making informed optimization decisions: Does it cost more to automate a test, or to establish a monitoring ruleset that will ensure we're not running into problems? Do we try to get it right the first time, or are we willing to let our users determine whether our choice was good?
An Accelerated organization, oddly enough, will typically feature a lower degree of test automation and less manual testing than a shift-lefted organization, because they no longer gain as much value from these activities. For example, shooting a record of data through the system landscape and validating the critical monitoring hookpoints tends to be significantly less effort than designing, automating, executing and maintaining a complex test scenario. Plus, it speeds up the process.

Friday, April 24, 2020

CONWIP Kanban - implementing Covid regulations with ease

The Covid social distancing regulations force stores to adopt new strategies for ensuring distance and hygiene are maintained while people go shopping.

Today, I discovered an application of CONWIP boards in daily life - and people may not even recognize that they're doing it, because there's no board.

Let's look at a supermarket, and visualize it as a board:

Stores have instituted a fairly robust process that ensures - given a normal, self-balancing distribution - social distance can be maintained without much supervision.
They have merely reduced the number of shopping carts to become the constraint on store capacity, and have set up a few extremely simple rules:

  • No shopping without a shopping cart
  • Don't get too close to other people in the shop
  • Keep within the distance markers at the cashier

There are a few implicit rules that go without saying:

  • If there's no shopping cart, you have to wait until one becomes available, or you leave.
  • Bring back your shopping cart after packing up.

The system self-balances and exercises full WIP control:

  • If there are too many people in the store, there will be no carts left, hence no more people coming in.
  • If a queue is forming anywhere, no carts will be released, hence no more people coming in.
  • Once a queue is dissolved, carts will be released, allowing new people to enter the store.
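The self-balancing behavior above can be sketched as a minimal simulation - shoppers enter only while a cart (replenishment token) is free. The limit and names are illustrative:

```python
# Sketch: the cart rule as a CONWIP system - admission is controlled by
# a fixed pool of tokens (carts), nothing else.
CONWIP_LIMIT = 3      # carts available = max shoppers in the store
in_store = []
waiting = []

def try_enter(shopper: str) -> bool:
    if len(in_store) < CONWIP_LIMIT:
        in_store.append(shopper)         # take a cart, walk in
        return True
    waiting.append(shopper)              # no cart - queue outside
    return False

def leave(shopper: str) -> None:
    in_store.remove(shopper)             # bring the cart back ...
    if waiting:
        in_store.append(waiting.pop(0))  # ... and the next shopper enters

for s in ["Ann", "Bob", "Cid", "Dee"]:
    try_enter(s)
leave("Ann")
print(in_store)  # ['Bob', 'Cid', 'Dee'] - Dee entered when a cart freed up
```

Note that nobody coordinates anything: the token pool alone keeps work in process constant, which is the entire trick.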

As a TameFlow practitioner, I could immediately spot what's going on here: the store has adopted a type of CONWIP Kanban:

  • the shoppers are our Kanbans (WIP),
  • the carts are our Replenishment tokens,
  • the amount of Replenishment tokens is our CONWIP limit,
  • the Constraint is defined by the store's size, and modeled by demand control through the CONWIP limit,
  • the Replenishment buffer is the cart pickup,
  • the space between carts at the cashiers functions like a TameFlow Constraint buffer.
  • That even ensures we're warned ahead of time when a cashier is operating at or near its capacity limit, so we can open another one.

This is a practical use case of how simple and easy it is to set up a balanced, stable Kanban system that uses TameFlow principles in the real world. You need extremely few rules, and it's almost effortless to implement.

You gain high control over the system and free real-time risk management on top - and you need neither a significant amount of time nor money to implement this type of change!

Tuesday, April 21, 2020

The dictatorship of Relativism

Unfortunately, the relativism which permeates modern society is also invading the "Agile" sphere - and thus, organizations. This is especially detrimental because these organizations build software systems which have a significant impact on potentially millions of other people. 

It's all about Perception

Perception is extremely important in how we interact with the world around us. It's the combination of our sensory organs, our experiences and our neural processing that determines how we perceive a situation. Therefore, people with different backgrounds will have widely different perceptions of the same subject.
Yet, to build up anything sustainable, we need to be as accurate and precise in our perception as possible.

There are still facts

Without trying to harp too much on the Covid pandemic - a virus doesn't care what we would like the situation to be, or whether we believe that it's a significant threat. There's nothing we can discuss or negotiate with the virus, and we can't tell it anything, either.
We can't bargain with it, we have to face reality and work our way from there.
The same goes for business figures. And IT. You can't argue with the bank account that it would be nice if it were just a bit more positive. You can't tell a crashing stock market that developers feel bad about it. Your server doesn't care which of its 0's you would prefer to be 1's. What's there - is there. You have to submit and deal with it.

How sustainable is the willful ignorance and denial of the facts that reality confronts us with?

Thoughts and opinions

Is it okay to have a clear opinion on a matter? Yes.
I would even go so far as to state that many people who claim to "have no opinion" are either deceiving themselves or (trying to) deceive others. In some cases, I would go as far as attributing malice. This becomes most obvious in cases where people who profess to have no opinion become militant against someone who voices theirs. If you're really un-opinionated either way, why is that specific opinion so much of a problem?
The scientific approach would be to examine a claim based on the evidence, and if it holds, to support it - and if it doesn't hold, to dismiss it. There is no, and I repeat, absolutely zero reason to attack the person who proclaims an opinion simply for having it. And still, that is what we see. Logic dictates that we must discredit the idea, not the speaker!

Is an open, transparent workplace consistent with censorship and thought crime?

Predictive capability

A general rule of science is that "models with predictive capabilities are better than those without." The quest to increase the predictive capabilities of our models has brought us running water, heat and electricity for our homes, it has given us cars, computers, the Internet - and sufficient food onto our plates.
While arguably, no reputable scientist would say that any scientific model is perfect or beyond scrutiny, we first need to find a case where scientifically validated models and methods do not yield the predicted outcome before we should discard them - especially where they have proven time and again to produce significant benefits.

What are these "better ways of working", compared to the effectiveness of verifiable methods which have been proven to achieve significant improvements?


Evolution has deeply ingrained pattern recognition in us. Our brains are wired to seek patterns everywhere and match them to the most probable ones. When we look at the sky, we see flowers, sheep - and many other things. That is our mind playing tricks on us. But it doesn't discredit patterns as a whole.
For example: Five people ran in front of a train. They all died. See a pattern there? Do you really need to run in front of a train to figure out what will happen?

Should we dismiss the idea of patterns, and in the same breath apply patterns that have nothing more than anecdotal evidence as support?


Especially in times of change, we need orientation. And in almost every case, even a sub-optimal anchor is more beneficial than a complete loss of support. Few leaders think of themselves as beyond scrutiny, and oddly, it's those who do that tend to attract the largest following in times of turmoil.
Is it better to lead or to not lead? When people need direction and are unable to find theirs, it's usually the most ethical choice to set a direction first, and then offer the opportunity for change.

Would we prefer everyone struggle by themselves, denying them the benefits of rallying around an idea they can all agree to?

End the Relativism

Not everything is relative.

There are facts. We can misinterpret, misunderstand or misrepresent them - but they are still there. Instead of soft-cooking the facts, we need to get better at interpreting, understanding and representing them.

Everyone has an opinion. Facts are not equal to opinions, nor are people who hold a clear opinion automatically wrong. We have logic and science to tell us which is which. By celebrating the freedom to hold even a wrong opinion, we learn to be respectful towards one another. Reasoning teaches us to sort the wheat from the chaff.

We need predictability. We can't predict everything, but we can predict more than nothing. The more predictability we have, the more likely we will still be alive tomorrow. Instead of mushing up everything with the terms "complex" and "unknown", we need to simplify as far as possible (but no further) and learn as much as we can.

We rely on patterns. We're really biased in the patterns we observe and how we interpret them. At the same time, there are repeatable and consistent (scientific) patterns and (esoteric) phantasms. The distinction between the two is what brought us to where we are today, for a good reason.

Society is based on leadership. Strong leaders can be a great boon to others. Beneficial leadership can propel hundreds of thousands of people to a better future. If we truly want to help people, we help those who have the potential to lead to use it for the better.

Stop the trash coaching

If you are an "Agile Coach" who:

  • institutes a culture of relative interpretations until it becomes impossible to discern what's right or wrong - you're destroying people's ability to make critical, timely decisions.
  • hushes up people who boldly go forward with their opinion - you're instituting a totalitarian system where creativity and courage are impossible.
  • constantly harps on everything being unknown - you're removing the very basis of what makes a company successful: understanding.
  • rejects well-established patterns and methods because allegedly, those things don't exist "in the Complex" - you're not reducing complexity, you're pulling in chaos!
  • denies the value of proper leadership - you're opening the door towards anarchy and decay, not towards teamwork or growth!
None of these things helps your client grow. They destroy people's ability to do the right thing.

Do the right thing

Forget the labels. It doesn't matter whether we're called coach, consultant, advisor or whatever.
The client has a problem to solve, and they need help. Guidance. Support. Whatever. You're there to make a positive difference.

When the client needs to:

  • Figure out what's going on - establish what we know and what we don't know. Don't pull that which is known into chaos. 
  • Get the facts straight - institute metrics, create transparency. Collect data. Gather evidence. Minimize bias instead of dwelling on it.
  • Have reliable methods or techniques - start with the ones that have proven to be most reliable, then inspect and adapt from there. We don't need to re-invent the Wheel, and we most certainly don't need placebos or magical thinking.
  • Get out of a mess quickly - lead and teach others how to do that. Don't leave people stranded or disoriented when every minute counts. There's time for talk, and time for action.
  • Move forward - show the way. It doesn't matter whether you "help them find theirs" or you just bluntly tell them what your opinion is. Break the "analysis-paralysis". Companies have business to do. It's better to revise a wrong decision than to remain indecisive or lost.
By doing these, you will be a tremendous help to the people around you, and a good investment for your clients.


While it's good to check the adequacy of our mental models: people to whom everything is relative, or who promote the idea that everything needs to be discussed and strong decision-making is off-limits, do not belong in business. Especially not in IT.

When you identify problematic stances and behaviours in your "Agile Coaches", get rid of them. Quickly. They will do more harm than good.

And if you now conclude that I don't have a "proper Agile Coaching mindset", that's totally up to you. I don't see "Agile" as an esoteric space where anything goes and nothing is true - I see it as a company's ability to do the best possible thing, swiftly and reliably. And that requires knowing what "the best possible thing" is. Where that conflicts with the label "Proper Agile" - so be it.

Sunday, March 15, 2020

Remote Agile Coaching

Probably the biggest challenge Agile Coaches and Scrum Masters face during the Corona Crisis: "How do I effectively coach - remotely, especially when people are distributed?" If you've never done this before, you might be stumped.
Fortunately, this has been my business model for a few years already, so I might have some pointers that could help you get going.

Your most precious asset - the coaching journal.

First things first: Remote Agile Coaching is much more difficult than on-site coaching, and can throw you into an existential crisis. I have spoken to many Scrum Masters who felt this was "just not for them", and I won't claim that it's the same thing. It's oftentimes less effective, more frustrating and less gratifying than working with a team face to face. Yet, when it's the only option - you've got to make the most of it!

Remote Development is still fairly easy compared to Remote Coaching: a Developer has a clearly defined code objective, and their work happens entirely within the IDE, the CD pipeline and on servers - whereas the coach relies on human interactions and thinking processes. These are often entirely invisible in distributed teams.

There are a number of key elements to your success in Remote Coaching. To keep this article at least somewhat concise, I will focus on only two aspects:
  • Coaching Agenda
  • Execution Signals

Disclaimer: This article is not an introduction to Agile Coaching. It's mostly concerned with the key factors of successful Remote Coaching. Therefore, important aspects of Agile Coaching may not be covered.

Coaching Agenda

It's quite easy for colocated coaches and/or Scrum Masters to work successfully in a sense-and-respond mode, i.e. simply be there for your coachees, observe their actions and use spontaneous communication to trigger reflection and change.
The same is not true for Remote Coaches, who are limited both in senses and responses - the value proposition is more along the lines of triggering "the big hitting changes". And since you can't push change, you need to figure out what people need. This can't be done ad hoc, so you need an agenda.

To begin with the obvious: it's not sufficient to facilitate the Agile events (Planning, Daily, Review, Retrospective, I+A, PIP) - you'll be entirely ineffective if that is the only thing you do! You need many other things, and you need them without infringing on the work of your team(s). And that's the challenge: you must enhance the team's ability without consuming their capacity.

Providing methods

As an agile coach, part of your competency is providing effective methods. And since you don't have full real-time interaction, you need both a plan and the means to roll methods out.
So, here are some things you need:

  • Start with a delta assessment of what's missing and create an overview of the methods the client will need.
  • Arrange the necessary items in a method introduction backlog. Put it into a digital backlog tool and let your client prioritize it (not just once, but all the time!)
  • Ensure you have the right technological means to make the methods available. If you need, for example, an Online Retro tool, you'll have to at least come up with a first option, because a client who doesn't know the method you're talking about is not yet in a position to choose a tool for it!
  • Some tools do not support the method you're using, so you either need to adapt your method to the tool or find a better tool. Still, avoid introducing a zoo of "Agile tools" - you'll never find your information afterwards! (Therefore, it pays to know what's still in the backlog so that you're not reverting last week's choice every week!)
  • Keep privacy and security constraints in mind. You can't use just any Cloud Platform and put confidential information there!
  • Remember organizational standards: While team autonomy is a great thing, it's terrible for the company if 12 different teams use 15 different delivery platforms: You may need to align certain tools with other parts of the organization.

Speed of Change

Probably the biggest mistake I have made in the past: sitting remotely, you may not understand how time-consuming other activities in the organization are. This can lead to disrespecting your coachees' ability to process change. What feels terribly slow to you as a coach may be all they can stomach, and what feels like the right speed to you may overburden them, because their daily business is so time-consuming. As a coach, it's important to set aside your own feelings in this regard.

So, here are some specific tips:

  • Figure out what the "right" rate of change for your client is. 
  • If that means you'll only have one change-related coaching session per month, don't push for more.
  • Instead, if your time on that topic is so limited, spend more time preparing to maximize the value of the session.
  • Create a timeline of what changes will happen when, and align with the client.
  • It's significantly better to do one high-impact change than ten low-impact changes, because that puts less stress on the client's organization.


Not all coaching methods that you'd use face to face are effective in a remote setting, and you have a less effective feedback loop. The time between applying a method and discovering the outcome increases, as does the risk of misinterpretation.
Some methods tend to be much more effective in remote coaching than others, though, and here are my favorites that you should definitely at least try:

  • Journaling. Keep a journal, and reflect on it at frequent intervals.
  • Canvases. To structure communication, experiment with canvases both during sessions and as prep work.
  • Screenwriting. It can have a massive impact on your coachee's reflection if you do nothing other than write on the screen the (key) words they're speaking. The upgrade is screen sketchnoting, but that's ... advanced.


In remote coaching, you have to work a lot with "homework" - both prepping and follow-up. This is equally true for you and your coachee. Make sure that right from the beginning, you have coaching agreements that ensure the coachees will be diligent about their homework, so that you get maximum value out of each session.
One such coaching agreement could include, for example, that it's no problem to cancel or postpone a session if homework wasn't done.

Typical homework assignments for the coachee can include:

  • Trying out something discussed during coaching
  • Reflecting on certain outcomes
  • Gathering certain information to make a more informed decision next time
  • Filling a canvas (see above)

This is where the meta-methods come into play again: as a coach, you'll need to remember what homework was agreed during the coaching session. You should have another agreement that it's the coachee's responsibility, not yours, to track and follow up on this homework. Still, you need to keep track of whether the coachee does this, so you can remind them if an important point slipped their mind.

Execution Signals

A remote coach is limited in both observation and interaction, which makes the job much harder. You lose out on a lot of the subtle nuances of (especially non-verbal) communication going on.

Since people are busy with other things, and you don't want to interrupt your coachees at inconvenient times or with distracting questions, you need to collect your information:

Pull coaching

A big issue I had in the past is that nobody pulled coaching, despite a Working Agreement that people would contact me if they felt the need. This rendered me ineffective and worthless to the client! It happens, for example, when people are unaware of what the coach can do for them, or feel they're inconveniencing the coach.
  • Make sure people send you frequent signals, or ask why this is not happening.
  • People have to understand where they can pull in coaching - discuss the intent and purpose of coaching.
  • When people are overburdened, coaching is the first thing they drop. Have that conversation when it happens!


How do you know that something is going well or requires attention?

The general idea is that you can trust the team that things are going well until they inform you otherwise! This should be clarified in a Working Agreement!
  • You have to expect that your coachees may have blind spots and don't know when something isn't going well. So, you need access to further sources of information.
  • The ticket system, for example, is often a great source of information for the usual problem hotspots: overload, overcommitment, overburden, delays, impediments - whatever. 
  • Ensure you're not monitoring the team or their work, but their ongoing success!
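One way to make such an execution signal concrete: scan ticket data for in-progress work that has been open unusually long. This is a hedged sketch under illustrative assumptions - the ticket structure, status values, and the 5-day threshold are invented for the example, not any specific tool's API:

```python
# Sketch of pulling an execution signal from ticket data: flag work that
# has been "in progress" longer than an agreed age limit. The ticket
# fields, status names, and threshold are illustrative assumptions.

from datetime import date, timedelta

AGE_LIMIT = timedelta(days=5)
today = date(2020, 3, 13)

tickets = [
    {"id": "T-1", "status": "in_progress", "started": date(2020, 3, 2)},
    {"id": "T-2", "status": "in_progress", "started": date(2020, 3, 12)},
    {"id": "T-3", "status": "done",        "started": date(2020, 2, 20)},
]

def stale_tickets(tickets: list[dict], today: date,
                  age_limit: timedelta = AGE_LIMIT) -> list[str]:
    """Execution signal: in-progress items exceeding the age limit."""
    return [t["id"] for t in tickets
            if t["status"] == "in_progress" and today - t["started"] > age_limit]

print(stale_tickets(tickets, today))  # ['T-1']
```

Note that this monitors the flow of work, not the people: a stale ticket is a prompt for a conversation about delays or impediments, not a surveillance report.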


How do you deliver outside-ins? In Remote Coaching, there is a lot of asynchronous communication that you need to bring together. For example, in your work with the coachees' stakeholders (e.g., management, users) you learn things that the coachees would need to be aware of. 
This requires both a communication strategy and frequent synchronization points, so that you're neither interrupting people nor sending out messages with unintended consequences.
(A specific learning: simply posting something in the team's chat channel that you could have mentioned in the team's physical office without problems can start a wildfire, so you need to choose words wisely lest your "coaching" becomes more of a distraction than a help.)


If this is your first time Remote Coaching, you may be overwhelmed by this article, and yes - it does take time, thought and preparation to get going.
Sitting back and reflecting on your colocated coaching experience is a great way to get started.

If you have any questions, reach out to me. My contact information is in the Imprint.