Sunday, March 15, 2020

Remote Agile Coaching

Probably the biggest challenge Agile Coaches and Scrum Masters face during the Corona Crisis: "How do I effectively coach - remotely, especially when people are distributed?" If you've never done this before, you might be stumped.
Fortunately, this has been my business model for a few years already, so I might have some pointers that could help you get going.

Your most precious asset - the coaching journal.

First things first: Remote Agile Coaching is much more difficult than on-site coaching, and can throw you into an existential crisis. I have spoken to many Scrum Masters who felt this was "just not for them", and I won't claim that it's the same thing. It's oftentimes less effective, more frustrating and less gratifying than working with a team face to face. Yet, when it's the only option - you've got to make the most of it!

Remote Development is still fairly easy compared to Remote Coaching: a Developer has a clearly defined code objective, and the work they do happens entirely within their IDE, the CD pipeline and on servers - whereas the coach relies on human interactions and thinking processes. These are often entirely invisible in distributed teams.


There are a number of key elements to your success in Remote Coaching. To keep this article at least somewhat concise, I will focus on only two aspects:
  • Coaching Agenda
  • Execution Signals

Disclaimer: This article is not an introduction to Agile Coaching. It's mostly concerned with the key factors of successful Remote Coaching. Therefore, important aspects of Agile Coaching may not be covered.

Coaching Agenda

It's quite easy for colocated coaches and/or Scrum Masters to work successfully in a sense-and-respond mode, i.e. simply be there for your coachees, observe their actions and use spontaneous communication to trigger reflection and change.
The same is not true for Remote Coaches, who are limited both in senses and responses - the value proposition is more along the lines of triggering the big, high-impact changes. And since you can't push change, you need to figure out what people need. This can't be done ad hoc, so you need an agenda.

To begin with the obvious: it's not sufficient to facilitate the Agile Events (Planning, Daily, Review, Retrospective, I+A, PIP) - you'll be entirely ineffective if this is the only thing you do! You need many other things, and you need them without infringing on the work of your team(s). And that's the challenge: You must enhance the team's ability without infringing on their capacity.

Providing methods

As an agile coach, part of your competency is providing effective methods. And since you don't have full real-time interaction, you need both a plan and the means to roll methods out.
So, here are some things you need:

  • Start with a delta assessment of what's missing and create an overview of the methods the client will need.
  • Arrange the necessary items in a method introduction backlog. Put it into a digital backlog tool and let your client prioritize it (not just once, but all the time!)
  • Ensure you have the right technological means to make the methods available. If you need, for example, an Online Retro tool, you'll have to at least come up with a first option - because when the client doesn't know the method you're talking about, they are not yet in a position to choose a tool!
  • Some tools do not support the method you're using, so you either need to adapt your method to the tool or find a better tool. Still, avoid introducing a zoo of "Agile tools" - you'll never find your information afterwards! (Therefore, it pays to know what's still in the backlog so that you're not reverting last week's choice every week!)
  • Keep privacy and security constraints in mind. You can't use just any Cloud Platform and put confidential information there!
  • Remember organizational standards: While team autonomy is a great thing, it's terrible for the company if 12 different teams use 15 different delivery platforms: You may need to align certain tools with other parts of the organization.


Speed of Change

Probably the biggest mistake I have made in the past: since you're sitting remotely, you may not understand how time-consuming other activities in the organization are. This can lead to not respecting your coachees' ability to process change. What feels terribly slow to you may be all they can stomach, and what seems like the right speed to you may overburden them, because their daily business is so time-consuming. As a coach, it's important that you set aside your own feelings in this regard.

So, here are some specific tips:

  • Figure out what the "right" rate of change for your client is. 
  • If that means you'll only have one change-related coaching session per month, don't push for more.
  • Instead, if your time on that topic is so limited, spend more time preparing to maximize the value of the session.
  • Create a timeline of what changes will happen when, and align with the client.
  • It's significantly better to do one high-impact change than ten low-impact changes, because that puts less stress on the client's organization.

Meta-methods

Not all coaching methods that you'd use face to face are effective in a remote setting, and you have a less effective feedback loop. The time between applying a method and discovering the outcome increases, as does the risk of misinterpretation.
Some methods tend to be much more effective in remote coaching than others, though, and here are my favorites that you should definitely at least try:

  • Journaling. Keep a journal, and reflect on it at frequent intervals.
  • Canvases. To structure communication, experiment with canvases both during sessions and as prep work.
  • Screenwriting. It can have a massive impact on your coachee's reflection if you do nothing other than write on the screen the (key) words they're speaking. The upgrade is screen sketchnoting, but that's ... advanced.

Homework

In remote coaching, you have to work a lot with "homework" - both prepping and follow-up. This is equally true for both you and your coachee. Make sure that right from the beginning, you have coaching agreements that ensure the coachees will be diligent about their homework, so that you get maximum value out of each session.
One such coaching agreement could include, for example, that it's no problem to cancel or postpone a session if homework wasn't done.

Typical homework assignments for the coachee can include:

  • Trying out something discussed during coaching
  • Reflecting on certain outcomes
  • Gathering certain information to make a more informed decision next time
  • Filling a canvas (see above)

This is where the meta-methods come into play again: as a coach, you'll need to remember what homework was agreed during the coaching session. You should have another agreement that it's not yours, but the coachee's responsibility to track and follow up on this homework. Still, you need to keep track of whether the coachee does this, so you can offer a reminder if an important point slipped their mind.



Execution Signals

A remote coach is limited in both observation and interaction. You're losing out on a lot of the subtle nuances of (especially non-verbal) communication going on.

Since people are busy with other things, and you don't want to interrupt your coachees at inconvenient times or with distracting questions, you need to collect your information deliberately:

Pull coaching

A big issue I had in the past is that nobody pulled coaching, despite a Working Agreement that people would contact me if they felt the need. This rendered me ineffective and worthless to the client! It happens, for example, when people are unaware of what the coach can do for them, or feel they're inconveniencing the coach.
  • Make sure people send you frequent signals, or ask why this is not happening.
  • People have to understand where they can pull in coaching - discuss the intent and purpose of coaching.
  • When people are overburdened, coaching is the first thing they drop. Have that conversation when it happens!

Exceptions

How do you know that something is going well or requires attention?

The general idea is that you can trust the team that things are going well until they inform you otherwise! This should be clarified in a Working Agreement!
  • You have to expect that your coachees may have blind spots and don't know when something isn't going well. So, you need access to further sources of information.
  • The ticket system, for example, is often a great source of information for the usual problem hotspots: overload, overcommitment, overburden, delays, impediments - whatever (see the sketch below).
  • Ensure you're not monitoring the team or their work, but their ongoing success!
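
To make that last point concrete, here is a minimal sketch of what "monitoring ongoing success" could look like: a small script that flags in-progress tickets that haven't moved in a while - a conversation starter, not a surveillance tool. The endpoint, field names and threshold are assumptions; adapt them to your ticket system.

```python
import datetime
import requests  # assumption: the 'requests' package is available

# Hypothetical ticket-system endpoint and threshold - adapt to your tool.
TICKET_API = "https://tickets.example.com/api/issues?status=in_progress"
MAX_AGE_DAYS = 5

def stale_tickets():
    """Return keys of in-progress tickets that haven't moved in a while."""
    tickets = requests.get(TICKET_API, timeout=10).json()
    now = datetime.datetime.utcnow()
    stale = []
    for ticket in tickets:
        # Assumed fields: 'updated_at' (naive UTC ISO timestamp) and 'key'.
        updated = datetime.datetime.fromisoformat(ticket["updated_at"])
        if (now - updated).days >= MAX_AGE_DAYS:
            stale.append(ticket["key"])
    return stale

if __name__ == "__main__":
    for key in stale_tickets():
        print(f"Worth a gentle question: {key}")
```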

Outside-Ins

How do you deliver outside-ins? In Remote Coaching, there is a lot of asynchronous communication that you need to bring together. For example, in your work with the coachees' stakeholders (e.g., management, users) you learn things that the coachees would need to be aware of. 
This requires both a communication strategy and frequent synchronization points, so that you're not interrupting, or blurting out a message with unintended consequences.
(A specific learning: simply posting something in the team's chat channel that you could have mentioned in the team's physical office without problems can start a wildfire - so you need to choose words wisely, lest your "coaching" becomes more of a distraction than a help.)



Conclusions

If this is your first time Remote Coaching, you may be overwhelmed by this article, and yes - it does take time, thought and preparation to get going.
Sitting back and reflecting on your colocated coaching experience is a great way to get started.

If you have any questions, reach out to me. My contact information is in the Imprint.

Friday, March 13, 2020

Remote work is coming - some pointers for developers

Many organizations find themselves struggling with the Corona Crisis, because they have never prepared for Remote Working. Without going into the reasons, I want to give you some pointers from my own experience of what you can do to make the best out of it.

The domain is huge, so in this post, I'll focus only on Developers in order to keep it a bit concise.

A remote working space - it doesn't need to be much different from an office.

Working Remotely as a Developer

This has three domains:
  • How you yourself work
  • How you collaborate with your teams
  • Participation in agile events
We'll take a look at each, separately.

Working remotely

Working remotely by myself, I have come to appreciate the value of a desk with a three-monitor layout: centrally in front of me, the laptop - on my left, the Web Meeting where I can see people's faces - and on my right, a browser where I can switch between the latest build, the CI tool and the ticket system.

It's incredibly hard to maintain break discipline, and so I often spend many hours glued to my seat, despite all promises to myself to take frequent breaks. Therefore, an ergonomic chair is essential.

Although I do what is arguably called "home office", I refrain from taking the laptop anywhere other than my desk. I maintain an isolated (albeit small) room as an office, which is noise-protected. Still, I only un-mute the microphone when it's necessary to speak.

I maintain "regular office hours" (i.e. typically between 7:30am and 6:00pm) because that's when others need to contact me, and I need to be there - for them.

Since I have the luxury that I have a wall right in front of me, I also keep a Personal Kanban so that I'm always aware of where I myself am heading at the moment. I am not sure how important this would be to developers whose only goals are the team goals - as a remote coach, it helps me tremendously.

(While I have heard rumours that other people don't seem to do this, I keep the same level of hygiene as if going out, including proper attire. Everything else would just be gross - to myself!)

Team Collaboration

This section is more specific to developers, although similar aspects apply to other roles and responsibilities. You don't need to write code to rely on some basic teamwork practices.

Continuous Integration gets an entirely different meaning when working distributed and remotely - especially if your code base is bigger than your team! I prefer to commit after every single line of code change, whenever tests run green. Yes, that can be dozens of commits in a single day, and that's actually how it should be. If you haven't done so yet, now is the time to learn how to rigorously and reliably apply CI and Clean Code practices. It saves tons of arguments and headaches!

A physical board is out of the question when nobody is in the room - you'll need a virtual task board, a.k.a. a ticket management system. Without entering too deeply into the solution space: updating your tickets in real time is essential to keeping the team synchronized. The perfect time to update a ticket is the CI window between "Push" and the moment the CI returns the commit status. Commit hooks can be valuable to automate this process as far as possible (a sketch follows below).
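
As a hedged illustration - the ticket API, its payload and the commit-message convention (e.g. "ABC-123: fix rounding") are all assumptions - a git post-commit hook along these lines could push the status update for you:

```python
#!/usr/bin/env python3
"""Sketch of a post-commit hook (.git/hooks/post-commit): if the last commit
message references a ticket, move that ticket forward via a hypothetical
REST API. Adapt URL, payload and convention to your ticket system."""
import re
import subprocess

import requests  # assumption: the 'requests' package is available

TICKET_API = "https://tickets.example.com/api/issues/{key}/transitions"

# Subject line of the most recent commit, e.g. "ABC-123: fix rounding".
message = subprocess.check_output(
    ["git", "log", "-1", "--pretty=%s"], text=True
).strip()

match = re.match(r"([A-Z]+-\d+)", message)
if match:
    key = match.group(1)
    # 'In Review' is an illustrative status name.
    requests.post(TICKET_API.format(key=key),
                  json={"status": "In Review"}, timeout=10)
```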

I hate disrupting development because of meetings, but we need to synchronize - frequently. The easiest way to cut down on the need for synchronization meetings is frequent pair or mob programming sessions. To do that in a remote setting, you should find an IDE plugin that supports remote pairing - and if your IDE doesn't support that, now may be the right time to move to a different IDE.
The next way to reduce disruptive meetings is to use your team's virtual office room and ping people, asking them for a convenient time to meet. Ideally, you're able to refrain from using separate chat apps, because that creates a different stream of communication that adds distraction and drains productivity: you've got at least four things to pay attention to already!

Sometimes, you'll want to be alone. Let your team know that you need privacy and turn off the camera when that need arises. There are many reasons for this - make Working Agreements that ensure everyone understands what your team's mode of collaboration is. Such agreements could cover, e.g., the need to recede, specific time periods where everyone has to be online, timeframes for pair programming, etc.

Participating in agile events

Some events are inevitable, though: Dailies, Planning, Reviews and Retros for Scrum - as well as PIP, Demos and I+A in a SAFe context. 
Since this post would be too long with my full tips on each of those meetings, I'll cut it short here by giving some generic guidance:
  • A "One person speaks" rule alone doesn't help, because you often don't know who speaks next until that person does. Be forbearing when someone accidentally interrupts you. Maybe you need working agreements that make sure everyone gets heard.
  • Prepare for meetings offline. Reserve some preparation time on your calendar for this.
  • Avoid status reporting during Dailies or Reviews. If you do this, you've got a collaboration problem that should be addressed as soon as possible!
  • Arrange for appropriate tooling support for events like I+A and Retrospectives. Relying on your team office platform probably won't be enough. There are tons of tools out there. Invest time to research online collaboration tools. Experiment until you find one that suits you.



Monday, March 9, 2020

The testing bottleneck

Test appearing as a bottleneck is a recurrent theme across many organizations. In this article, we will explore why test often becomes the constraint - and ways out of the situation. Adherents of the "Theory of Constraints" will recognize this article as steps 2 and 3 of the "Five Focusing Steps". All of the proposals improve test performance - yet none of them rely on investing a single additional cent!

The test bottleneck

Test execution is a necessary activity between development and delivery - there's no way to avoid this, and no amount of "Agile" or "Shift-Left" is going to change that. Hence, the question is not so much "Where to test?", as it is, "How to approach testing?"

Test Execution - typically a bottleneck by design!
Development, Build, Test, Delivery and Deployment - an inevitable sequence. How and why does test execution become the bottleneck, then?

Looking beyond the testers and at the software package as a whole: any specific product change work item is in a "wait state" whenever nobody is actively working to process it. Hence, most of the activities listed below block the flow of work without adding value.

Note of caution - the entire article is written with the assumption that test is the constraint - the solutions can't be applied if the constraint is known to be elsewhere!

Stop doing the wrong things

This section is a list of traditional tester activities that may quickly consume all available test capacity - and the consequence is that there's often no time left to do the work that would actually matter. 
So here are things that testers shouldn't even be doing. 


Test Setup

Traditional software test may be part of a "push process" where developers provide code and the backlog item then immediately goes onto the tester's desk: creating a running build, installing the version on a test environment, getting it to run - all the tester's problem. 

This paints a straightforward ideal picture: there should be zero tester activity and zero delay between developers providing a new version and the start of test execution.

The solution space here is simple and obvious: all the above mentioned activities should follow an automated standard process. Unless we have a 100% repeatable and reproducible process for these activities across development, test and operations, we do not have a proper guarantee that this process will yield the same outcome anyway.

There are plenty of tools out there that can be used to automate this process, and if your organization hasn't done this yet - automating build and installation is the simplest quick win for your test execution.
A little bit more challenging, but still almost effortless is the automation of smoke tests - doing automatically what needs to be done anyway each time a new product version is installed.

How to do it?

When test is a bottleneck anyway, you don't benefit from adding more burden onto testers. So instead of pushing another backlog item into Testing, use the developers' time: let them automate whatever they do to create a build on their localhost, and whatever the installation manual says. If that's not enough, let developers observe what testers and ops do, and automate that as well: move towards Continuous Delivery!
The development time invested into setup automation typically pays for itself within weeks - and starts saving money ever after! Plus, everyone in the organization will wonder how you could ever have lived without it.
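
To make the smoke-test idea tangible, here is a minimal pytest sketch that could run automatically after every installation. The URLs and expected page content are invented - substitute your product's real endpoints.

```python
import requests  # assumption: the 'requests' package is available

# Hypothetical base URL of the freshly installed test environment.
BASE_URL = "https://test-env.example.com"

def test_application_is_up():
    """The most basic smoke check: the application answers at all."""
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200

def test_login_page_renders():
    """One step further: a key page actually renders."""
    response = requests.get(f"{BASE_URL}/login", timeout=10)
    assert response.status_code == 200
    assert "Login" in response.text  # invented page content
```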


Test Case Creation

Functional and Acceptance Testing rely on test cases - depending on the complexity of the change, on an entire test case catalog. It's not unusual that properly defining test cases takes as much time as development, or more.

Depending on the order of delivery, test cases may not be prepared by the time the first delivery of the product arrives. This creates two problems: first, the delivery must wait until the test case is prepared - and second, testers have to re-prioritize their work, leaving whatever else they were doing in a wait state.

A typical problem caused by the asynchronous creation of test cases and development is that testers may not have written the test cases to match exactly that which developers have already delivered (especially if increments are really small), making the test case fail upon execution, resulting in unnecessary "false positives" and communication overhead. 

The reflex solution

Many organizations defer testing until all test cases have been created - and the entire test object is completed. Depending on the size of the backlog items in question, the consequence is "big batch" and asynchronous processing: there is no longer a direct connection between development work and testing. We end up with a postponed, prolonged "Test Phase", which oftentimes also results in a "bugfixing phase" - which is disruptive to everyone, and unpredictable in both duration and outcome. Most organizations that choose this route inevitably compromise both on quality and sustainability.

An improved solution

Approaches like ATDD and BDD, combined with Design Workshops, allow for an early and aligned specification of acceptance criteria, test approach and test scope. Since these collaborative approaches ensure that the right questions are asked before development, people can align both on tests and development outcomes early on in the process. This ensures both that there is less discrepancy in understanding between developers and testers (which means fewer defects) - and that no time passes between receiving a delivery and beginning to test. Likewise, since tests are defined before development starts, a delivery will no longer cause interrupts and blockers on other work due to missing test cases.
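
Here is a minimal sketch of what that can look like in practice, using pytest and an invented acceptance criterion ("orders above 100 EUR ship for free; below that, shipping costs 4.95 EUR"). In ATDD, the tests are written first, straight from the agreed criterion, and fail until the feature exists:

```python
# Invented acceptance criterion, agreed before development:
# "Orders above 100 EUR ship for free; otherwise shipping costs 4.95 EUR."
# The tests below are the executable specification; the implementation
# is written afterwards, until they pass.

def shipping_cost(order_total: float) -> float:
    """Minimal implementation, created only after the tests were agreed."""
    return 0.0 if order_total > 100.00 else 4.95

def test_orders_above_threshold_ship_free():
    assert shipping_cost(order_total=100.01) == 0.0

def test_orders_at_or_below_threshold_pay_shipping():
    assert shipping_cost(order_total=100.00) == 4.95
```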


Bug Tracking

Another reason for putting product work into a "wait state" that is all too common in large organizations is bug tracking - the longer the list of known bugs, the more test effort is diverted to managing the defect backlog and doing re-tests for fixes. This time eats into test execution time, and also delays the delivery of value. And the delay compounds: the more defects need fixing before a delivery can be cleared for release, the more time a backlog item spends waiting.

The reflex solution

When bugs are an issue, the solution is often to introduce a dedicated test manager, who does nothing other than prioritize, track, monitor and report defects. This fixes neither the defects nor the problem of missing capacity. Instead, we entrench this dissipation of capacity by institutionalizing it in a formal role.

An improved solution

As ridiculous as it sounds - the easiest way to reduce bug tracking efforts is not to create bugs. Where that is not (yet) an option, the next best choice is to produce fewer defects, and to introduce reliable mechanisms for ensuring that defects are actually resolved.
Smaller changes, i.e. smaller increments, will contain fewer opportunities for defects, and the optimal size of a delivery should have the potential to contain a maximum of one defect - the one change that was made. This comes back to Continuous Integration / Continuous Delivery.

Another part of the problem is that in traditional test management, any reported defect is a "promise" - that means more work later: both developers and testers will have more work with this defect at some point in the future. Ideally, though, developers don't only provide a fix, they also provide evidence that the fix was effective and the defect doesn't return. That's where automated regression testing comes in. Developers should automate the test that yielded the defect, use it to verify the presence of the defect in the described scenario, then use that same automated test to verify the correct behaviour. This, too, removes the capacity drain on testers.
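
A hedged sketch of that flow, using an invented defect report ("2024-02-29 is rejected as an invalid date"): the developer first pins the defect down with a failing test, fixes the code, and the very same test then stays in the suite as a permanent regression guard.

```python
import datetime

# Invented defect: the old validation ignored leap years.
# Step 1: write a test that fails while the defect is present.
# Step 2: fix the code - the same test now passes and remains in the
#         suite as evidence that the defect cannot silently return.

def is_valid_date(text: str) -> bool:
    """The fixed implementation, delegating to the standard library."""
    try:
        datetime.date.fromisoformat(text)
        return True
    except ValueError:
        return False

def test_leap_day_is_accepted():
    assert is_valid_date("2024-02-29")

def test_invalid_leap_day_is_rejected():
    assert not is_valid_date("2023-02-29")
```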


Test Management

Another drain on test capacity is the question of which test cases to run when, tracking how many of them did run - and how many of them were successful. As long as there are defects, bug tracking (see above) comes on top, and with that, Go/No Go Recommendations, which require both preparation and meetings. And of course, with that, a hefty load of compromise, politics and technical debt.

The solution

Consistent use of BDD/ATDD means that all functional tests will be automated as part of the development process, and evidence of their correctness will be provided as part of the build process.
When all functional tests are automatically executed the minute a code change is made - covering both regression and the changes themselves - and there is no way to proceed in delivery as long as even a single defect occurs, this eliminates a multitude of test management jobs:
  • There is no need to track defects, because developers can't proceed with defects.
  • Go/No Go Recommendations are always "Go", because the test suite provides evidence that there are no known functional risks.
  • Reports and evidence are collected by the system, eliminating manual effort for testers.

Test Automation

Testers often have to make an either/or decision between automating tests and executing tests. In a traditional testing mindset, the decision will most often favor doing the tests manually, and "automation will be done later". Note that "later", in this context, translates to, "as soon as there is time", which, in a bottleneck situation, is just a euphemism for "never".
The result is a vicious circle: Lack of automation means tests consume more capacity, which means there is less time for automation, which means we need to do more manual testing, which means there will be less automation. Additionally, there will be slower feedback for developers, which means there will be more defects - the consequences are already described in the other sections!

The reflex solution

Knowing the problem and understanding the vicious circle, most organizations simply decide to invest into test automation. Since in many cases, testers are not developers, they resort to the use of specialized tools for creating this automation that do not rely on developer knowledge. 
In almost every case, the automated test suites created in this way will eventually give rise to some critical problems that make the entire approach unsustainable:
  • There's a disconnect between the code and the test cases which can yield both false positives (reporting a defect when there is none) and false negatives (not finding a defect).
    • False positives create significant effort for defect analysis, which again drains testing capacity.
    • False negatives reduce confidence in the test automation and lead to further effort.
  • Automated test suites require continuous maintenance. If the test code is not Clean Code, the maintenance effort will eventually become prohibitive. Most organizations eventually come to the point where they need to trash their Test Automation created exclusively by testers.
  • Automated test suites created with tester tools often test at the wrong level, making the tests slow - in many cases, so slow that executing these tests after every code change is not an option.

A better solution

Instead of having testers, who are already constraining the performance of the development process, spend time on creating automated tests of questionable test code quality, use testers to define which scenarios can and should be automated, and use a testing framework close to the source code for creating tests that maximize execution performance. Apply rigorous Clean Code Practices, including Refactoring, to move every piece of test execution to the best possible level in the Test Pyramid. This significantly speeds up test execution. It likewise reduces the amount of effort required to maintain and update the test automation suite.
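
As a small, invented illustration of "the best possible level in the Test Pyramid": a discount rule that a tester tool would typically exercise through the UI in seconds or minutes can be verified in milliseconds when it is tested as plain code.

```python
# Invented production rule: 10% bulk discount from 100 units upwards.
def bulk_discount(quantity: int) -> float:
    return 0.10 if quantity >= 100 else 0.0

# Unit-level tests: they run in milliseconds on every commit - no browser,
# no test environment, no brittle UI selectors.
def test_no_discount_below_threshold():
    assert bulk_discount(99) == 0.0

def test_discount_from_threshold_upwards():
    assert bulk_discount(100) == 0.10
```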


Functional and Acceptance Tests

We have learned from ISTQB what kinds of tests are required in software testing: from happy path via branch coverage and edge cases all the way to negative tests. Why do testers execute these tests? Because there are (probably) defects. And the new delivery can't be released until we know what the defects are, where they are, and how bad they are.

The reflex solution

When testers find many defects in functional testing, the obvious solution is to have the testers do more testing. This "more testing", in practice, means either postponing the delivery until defects are fixed (a theoretically valid, yet rare, solution because it is so undesirable) - or adding more testers. Neither addresses the root cause, i.e. why there are defects. Eventually, we get into the vicious circle of bug tracking and big batch delivery.

A better solution

None of the above-mentioned tests needs to be executed by testers. Why are there defects? We come back to the disconnect between development and test, i.e. having built the wrong product to begin with.
Again, the solution is to ensure that quality criteria are clearly available to developers, consistently understood by everyone - and verified before software even enters testing. This sets testers free to do the tests that cannot sensibly be automated: for example, one-off tests, UX testing or exploratory tests.


Work-Arounds

Testers often spend hours setting up an intricate scenario in the system that would allow them to press that one button which determines success or failure of the test - and therefore, make or break of an entire Release. They may spend time reverse-engineering the database, copy-pasting data into web service requests, manipulating files on the system and many other things, just to be able to run their tests. None of these activities should ever need to be done manually - and mostly, they shouldn't be the responsibility of testers. Every minute testers invest into these activities is a minute wasted with regard to the things they really should be doing.

The reflex solution

Many organizations set up special data and configurations on their test environments which must not, under any circumstances, be used for any purpose other than the tests they are intended for. In some cases, painstaking effort is invested into creating both surrounding governance and maintenance scripts that exist only to maintain the integrity of the tests.
This approach diverts massive test capacity from the work that matters. Every minute spent on this "solution" is a high-risk investment into an unsustainable test approach that still drains test capacity.

A better solution

The organization should have a serious discussion about what the best way to provision test environments is. The ideal situation is a "No-Touch Bootstrap", which provides a pristine test environment that is optimized to conduct all automated and manual tests with minimal effort and delay. Required data and configuration should be injected via the product's own capabilities, i.e. "design for testability", as part of the development process.
Creating optimally testable software is an exercise that involves testers communicating testing needs, designers and architects conceptualizing a way to achieve testability, and developers creating code that minimizes the effort of doing the right tests in the right way.
Even when a legacy system doesn't offer proper testing capabilities, developers are the right people to provide scripts and other software solutions that allow testers to focus on that which matters in testing. 
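
One hedged sketch of "injecting data via the product's own capabilities": a pytest fixture that creates a pristine test customer through the (hypothetical) public API before each test, instead of depending on sacred, hand-maintained records in a shared environment. Endpoints and payloads are assumptions.

```python
import pytest
import requests  # assumption: the 'requests' package is available

BASE_URL = "https://test-env.example.com/api"  # hypothetical environment

@pytest.fixture
def fresh_customer():
    """Create a pristine customer via the product's own API, then clean up."""
    response = requests.post(f"{BASE_URL}/customers",
                             json={"name": "Test Customer"}, timeout=10)
    customer = response.json()  # assumed to contain an 'id' field
    yield customer
    requests.delete(f"{BASE_URL}/customers/{customer['id']}", timeout=10)

def test_new_customer_has_empty_order_history(fresh_customer):
    response = requests.get(
        f"{BASE_URL}/customers/{fresh_customer['id']}/orders", timeout=10)
    assert response.json() == []
```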


Doing the right things

If this article leaves you wondering what testers should be doing instead, and whether they'll still be needed at all, the short answer is "Yes".
The long answer is just barely scraped in many other posts - for example: engaging in product discovery, designing better test approaches, optimizing the test pyramid, improving existing tests, constantly improving the understanding of "how to create better tests", pushing for Zero Defect quality and shifting the test paradigm.

If we consider this short list as the value testers bring to an agile team, the key question becomes: "How much time do testers have left to do those things after we've subtracted all of the time they spend doing the things they shouldn't be doing?" All of these would have a scaled and sustainable value for the team, the product and the organization.
And still, in most organizations, the ratio is abysmally low - because people just push more work onto testers instead of finding ways to enable them to bring the value they could!

So here is my challenge: Do a pie chart and let your testers draw a slice for how much time they spend on each activity described in this article and use the outcomes as a reflection opportunity.

Summary

The intuitive "solutions" to capacity and performance problems in testing are neither helpful, nor sustainable. A paradigm shift is required, and part of that paradigm shift is to allow the available testers to work with maximum efficiency.

Some key activities that can maximize test efficiency include, without limitation, growing the ability and capacity of every team member - and:

  • "Stop the Line" when the "Waiting for Test" buffer spills over, and don't start more work until the pipeline clears - to reduce the amount of coordination effort required for testing.
  • Examine every activity done by testers and ask, first, "What would be required to make this activity no longer needed?", and if it's inevitable: "Can someone else do this, or at least parts of it?"
  • Reduce (or remove) the possibility for defects by aligning early, in order to eliminate all tester activity related to handling defects.
  • Engineer the software itself to ease testing.
  • Automate functional and acceptance tests as early as possible, ideally before any software is delivered (ATDD approach) and no later than the first delivery.
  • Automate time-consuming repetitive activity (especially functional regression tests).
  • Move test automation work to developers in the simplest way that is most consistent with the product's code.
  • Coach people in test execution, so as to share the workload.
  • Separate tester activity into "sustainable" and "unsustainable", and relentlessly push for higher sustainability.
Depending on how much of the work described in the main section of this article your testers are doing, and how much delay is incurred in testing, you will quickly see substantial benefits in outcomes by doing the things above - and you don't need to invest a single additional cent!

Sunday, March 1, 2020

11 tips for utilizing flow efficiency to boost performance!

Flow efficiency - the art of achieving (sometimes significantly) more without working more. It's the Holy Grail both of Lean and Kanban. How do you achieve it?

Here are 11 actionable suggestions to get started on your journey to maximum flow efficiency.


1 - Map your process 

Do you know what happens between the time when a work item begins, and when it ends? By definition, a "process" is "a series of actions or steps taken to achieve a particular end."
In Kanban, the idea of process mapping should lead to a Kanban board - where each action becomes its own column on the board.

During this exercise, it's specifically important that you "focus on the baton, not the runner".
Your process is not defined by how your teams are organized or who has which skillset - it is defined by the work items being processed. Hence, your process mapping exercise should not focus on having people describe their work; the process should be described by looking at the flow of work.

An example process map for development work


2 - Make wait time explicit

At every handover in the process, the work item will most likely spend some time waiting before being processed. There are two types of wait time in the process: inherent to the activity (for example, an analyst waiting for an appointment with a user), and inherent to the process (for example, a refined backlog item waiting to be picked up by a developer). Initially, we are only interested in the wait time inherent to the process. 

Visualize them on your process map:

A process map with "WAIT" markers

3 - Visualize processing time

The next step is to use whatever data (or experience values) you have in order to label the process with corresponding times for each step - weighted averages are perfect, averages are good - even guesstimates are enough.


A process map with processing time annotations

4 - Act on Wait Time

The definition of flow efficiency is the ratio of touch time to total throughput time (touch time plus wait time), so in order to improve our flow efficiency, we want to see what we can do about wait time.
The elimination of wait time will improve both our flow efficiency - and our throughput time, without changing anything in "how" anyone works on any item.
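
In code, the definition is a one-liner (the numbers are invented for illustration):

```python
def flow_efficiency(touch_time: float, wait_time: float) -> float:
    """Share of total throughput time in which the item is actually worked on."""
    return touch_time / (touch_time + wait_time)

# Invented example: 8 days of actual work within a 40-day throughput time.
print(flow_efficiency(touch_time=8, wait_time=32))  # 0.2 -> 20% flow efficiency
```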

There will usually be one step in our process sticking out, where wait time is higher than at all other steps. This is where we can have the biggest impact with the least amount of change:

Our example process has a throughput time of 40 days - a single change could speed it up by 20%!

5 - Focus on Throughput

The throughput rate is the rate at which inventory (i.e. WIP) gets processed per unit of time. Based on Little's Law (WIP = throughput rate × flow time), you have two big levers to improve your flow: reducing inventory - and reducing processing time!

What many organizations forget, though, is that you have another lever on throughput: increasing the share of started WIP that actually passes through your system!
The easiest way to do this is to examine the process for blockages: anything that keeps started work items from moving smoothly and uninterrupted through the process.

It's very common for organizations to suffer from various other causes of blockages - including, without limitation: waiting for people or events, lack of materials (e.g. environments), higher-priority work passing by, interference of other processes etc.

Identifying the common places where work in our process gets blocked.

6 - Understand constraints

A common theme in every organization is that overburdening the process constraint causes blockage. When we push more work into a step than can be completed, some of that work will naturally be blocked due to lack of capacity.
Unfortunately, in complex knowledge work, it's often really difficult to know why the constraint is constrained. And even if we have that knowledge, it may not even help us - because that's an action upon touch time, i.e. trying to change how people work in the hopes that more work flows through the system.

It's also extremely important to remember that "constraint" and "bottleneck" are not necessarily the same thing:

Our process has two "bottlenecks" - yet only one "constraint" that defines overall performance!
Looking at the throughput data for each step will reveal which step is the real constraint, and which is an irrelevant bottleneck.

Our example process is constrained by deployment: if we improve development, nothing will get better!
It's extremely important to understand this difference, because if we improve upon a bottleneck that is not the constraint, we will just shift the wait time downstream!

7 - Reduce workload

Organizations that specialize by department tend to focus exclusively on department performance, and even teams with specialists tend to focus on role performance. Both of these are entirely irrelevant, as the only important performance metric is the overall system's performance: process throughput!

Reducing the workload sounds absolutely counter-intuitive to specialists, yet it's the most important step in improving flow efficiency: it reduces in-process inventory, simply by starting less work.
Stopping excess work leads to "the baton, not the runner" moving significantly faster - no magic involved!

All the local optimization performance can be eliminated without affecting process performance!
While this makes every person with cost center responsibility cringe, we have achieved a seemingly miraculous change: we are working less and still get significantly better throughput rates - for free!

Let's do some number crunching to make the case with an example:
Previously, we had an average throughput time of 40 days - and an average of 40 items in progress.
By Little's Law, that means our throughput rate was 1 item per day. While we are still producing 1 item per day, this little tweak reduces the in-process inventory, which means the average age of items stuck in process goes down. By eliminating 20 items stuck in process, we halve the flow time - to 20 days - which means work now flows through our process twice as fast!
And this is where things get magical: We have moved from acting upon processing time to acting upon excess inventory: Throughput optimization has become our lever for performance.
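
A quick sanity check of that arithmetic with Little's Law (WIP = throughput rate × flow time):

```python
def flow_time(wip: float, throughput_per_day: float) -> float:
    """Little's Law, solved for flow time: W = L / lambda."""
    return wip / throughput_per_day

print(flow_time(wip=40, throughput_per_day=1))  # 40 days before the change
print(flow_time(wip=20, throughput_per_day=1))  # 20 days after - twice as fast
```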


Nobody works harder, many people have less stress - results stay the same, yet process performance has doubled!

8 - Redirect excess capacity

The reduction of "blind effort" leads to a lot of excess capacity in the organization - capacity that is now free for anything except starting new work!

A question I like to ask in coaching: "What overburdens the Constraint?" - let's trust for the moment that work at the Constraint is already done as well as humanly possible, i.e. "everyone does the best they can".

In many organizations, specialization and a "my job is done" attitude has led to people involuntarily (or unwittingly) pushing work elsewhere, until it eventually becomes a burden on the Constraint, where then blame is placed for poor performance.
We need to reverse this mechanic and instead ask, "What work is being done at the Constraint that can be done by someone else - even if it's significantly less efficient if done by others?"

This does not mean that, for example, we will tell analysts or developers to perform Exploratory Tests. Instead, analysts could prepare test scenarios and developers could create small semi-automation tools that allow testing experts to proceed faster, which means the Constraint has less work to do in order to achieve the same outcome.

Relieving the Constraint of work increases throughput while (almost) everyone is working less and nobody is working more.

9 - Set the right Constraint

In most organizations, constraints exist simply because they happened to be where they are. A common "solution" is to hire more people to work on the Constraint, until the organization grows to a point where coordination of all the work in progress becomes the constraint - oftentimes leading to shadow inventory that exists beyond the coordinators' horizon.

A much smarter tactic is to deliberately place the constraint where it makes sense:
The best way to place the Constraint is to ensure that the most precious asset of the organization (i.e. the rarest skill, the most talented people, the most expensive equipment) defines the capacity limits, then act accordingly.
This means that there is all of a sudden a "right constraint" - and it shouldn't move.

Add capacity to all steps that should not be the constraint to ensure the Constraint doesn't starve!

10 - Plan for idle time

While Lean optimization would call out idle time as waste, we need to reverse that thinking.
A machine can easily be planned for maximum utilization. We can easily calculate upfront how many items a factory can produce per day, per month - and even a year ahead of time. We can then set up our plant in a way that we have just enough capacity to produce just enough output just in time.

Humans don't tick that way. Knowledge work is unpredictable. People can't spend 100% of their time focused on churning out output - something unexpected always comes up: the high probability of low-probability events makes accurate prediction impossible. And the last thing you'd want is 20 people not being able to work, just because one person isn't available - hence the need to ensure the Constraint doesn't generate flow blockage!

The best way to make decent forecasts is to ensure that the deliberate Constraint doesn't get into an Overburden state even when an unexpected event happens: 

As counter-intuitive as it seems: the Constraint needs to be the opposite of a bottleneck - it must have excess capacity!

11 - Feed the Constraint

The step that feeds the Constraint should provide a buffer that is both small enough not to accumulate blockage - and big enough to ensure the Constraint doesn't starve, i.e. become idle.

Provide a constant, sufficiently large inflow of work for the Constraint to ensure it never runs fully idle - as that would reduce throughput.
A deliberate misappropriation of dimensions - the step that feeds the Constraint must be able to do so!



Bonus - Change your mindset!

A common misunderstanding in Kanban is to have "Doing", "Done" columns for each step of the process. This presumes that "my work is done, someone else's problem now".

When we stop focusing on the runners and start looking at the baton, i.e. when we observe the flow of work instead of the efficiency of each activity, we can't maintain this kind of thinking.

A work item that is "work in progress" is, by the very definition of the term, not done: either it's "in processing" or "waiting for further processing".
Therefore, it's quite important to banish the term "Done" from all buffers in the process, because it supports the wrong mindset!

We need to understand that there is no "Analysis Done, Development Done, Testing Done" - only "waiting for Development, waiting for Testing, waiting for Deployment". And all of these "waiting for" columns kill our flow efficiency. Unlike traditional Kanban, where it's good to have low column WIP and getting items into the "Activity Done" column as fast as possible is an aspirational goal, flow efficiency re-defines the goal: we don't want any buffer columns at all!
The one in front of the Constraint is inevitable, but all other buffer columns are actually problems asking to be solved!




All of these tips are taken straight from the book: Tame your Work Flow by Steve Tendon and Daniel Doiron.

If you're interested in knowing more, please contact me for a certified TameFlow "Mastering Flow" class where we will explore these and many other actionable tweaks that can boost your organizational performance massively!

Friday, February 28, 2020

SORT Canvas - Focusing change discussions

Often, everyone in the room has a different perspective on what could and should be done, and discussions go in circles. How do you facilitate the discussion to move forward?

I have created the SORT Canvas to discuss high-impact change initiatives. Its main area of applicability is complex, systemic long-term change, i.e. organizational change initiatives. 
It can also be used for Team Retrospectives, although that's not its main focus.

This simple canvas allows you to steer discussions about change initiatives towards agreement and first results. Plan 2 hours for a full canvas session.


An example use case for the canvas might be SAFe's "Inspect and Adapt Workshop", when the train needs to discuss more than small changes.

Discussion Ground Rules


  1. Have everyone in the room: Make sure that all parties contributing to the situation and the potential solution are in the room. This doesn't mean all people - representatives suffice.
  2. Choose a topic: Pick a single topic, and focus on that topic. Agree to de-scope topics that are only borderline relevant or unrelated.
  3. Use abstractions: Complex environments have a lot of nitty-gritty details. When discussions get into details that aren't relevant for the group as a whole, abstract.
  4. Align perspectives: Every person has a different perspective when the discussion starts. Instead of having people elaborate their own perspective, focus the discussion on "What do others need to know?" and "Where is my perspective different from that of others?" in terms of the four topics.
  5. Move sequentially: Explain the four segments of the discussion (Situation, Objectives, Roadmap, Tasks) - and decouple the steps. Do not pre-empt by jumping between the four sections. Have points in the discussion where you explicitly ask, "Can we move on to the next section?"
  6. Write it down: As early as possible, write down a point. Move as quickly as possible to answer the question, "Can we all agree on this?" - if not, make sure disagreement is heard.
  7. Stop circular discussion: When we return to a point that's already written down in the current section, stop.
  8. Clarify the goal: The goal is to come up with a common understanding of what problem we currently have, what we are going to do about it - and how that will help us reach a better state.


S - Situation

The first step is to come to a common understanding of which situation we are currently facing. Especially in larger organizations, people have vastly different perspectives - more so if they come from different departments and experience a clash of interests.

Describing the situation usually starts with a short description of either the main problem we face or the main change we want to make.

For example:

  • an initial problem description could be: "IT doesn't deliver high quality" versus "Business is too vague on Acceptance criteria". The conflict is obvious - the solution isn't.
  • an initial change description could be: "We want to move from specialist teams to cross-functional teams."

Neither of these is workable, and there are typically a lot of reasons why the situation currently is the way it is. Note down the key problems, such as unwanted side effects or unfortunate long-term outcomes, along the way. The description of these will help us focus the discussion in later steps.

Techniques like Five Why, CLD or Fishbones could be used to guide this discussion, although that may already be method overkill. Most discussions are sufficiently focused.

Try to get the most critical 5-10 points written down before moving on.

After describing the situation, we can choose to limit our further discussion to a subset of the situation or problem statements before moving to the next section.


O - Objectives

Captain Obvious would state, "We don't want to have those problems any more" - although that is too generic. Let's be specific on what the future would look like: which problems are gone, and how is it instead?

For example:

  • An objective describing the future state could be: "IT and business agree before implementation on what defines high quality" or "A single team has both all the competency and authority to deliver Working Software to Production"

As you may have noticed, my examples contain the word "and", so they are indeed more than one item each and should be separated out. For example, the first item might turn into:

  • "Business agrees that the solution is of high quality when the Acceptance Criteria are met",
  • "Teams agree to deliver solutions that exactly match the Acceptance Criteria",
  • "Teams will pro-actively communicate when AC's aren't clear",
  • "Both parties will collaborate to finalize the AC's before the Sprint",
  • "When teams feel AC's previously agreed upon no longer make sense, they will seek dialogue."

Case in point: specifying an objective without "and" or "or" ensures that different perspectives are properly discriminated, conflations are resolved - and ambiguity is clarified.

It's entirely valid to have multiple objectives: it means that the change might be more complex, or that only a few of them would be pursued in the remaining discussion.

In some cases, a single change action will create progress on multiple objectives - in that case, it's good to be broad on the objectives. Otherwise, it might be a good time to pick a subset of objectives before moving to the next section.


R - Roadmap

When we know where we stand and where we want to go, it's a good time to start asking: "How do we get from here to there?"

During this stage of the discussion, we should try to find the most effective way to proceed from the current situation to where we want to go.

A simple change means a single step - which also means the roadmap is quick and easy. If all participants agree that no intermediate action is required, that's great - and we can move straight to step 4.
More likely though, you'll have to iterate and make a series of incremental changes to achieve sub-goals. 
The purpose here, again, is not to make a comprehensive plan for perfection - it's to outline the major milestones on the road to success.

Returning to our example, the roadmap could look like this:
  • "Align Definition of Done with Business"
  • "Business people actively participate in Planning"
  • "Everyone knows how to ask key questions in Planning"
  • "Acceptance Criteria are robust"
While having SMART items on the roadmap is nice, it's much more important that participants agree on the roadmap items, their value and effectiveness. We also need to ask how fast the change should proceed. If we already know that it's going to be years until the objectives are achieved, it's not even important what the items further down the plan are - what's important is that we agree that the first two or three points are actionable.
We will inspect and adapt anyways.

T - Tasks

Once we know the key milestones on our change journey, it's important to agree on the specific next steps.
To reiterate: the purpose is not to come up with a comprehensive change plan. The purpose of this activity is to get actionable outcomes of the meeting, clear next steps that we'll work on.

When we define tasks, we first need to agree on the most critical milestones on our roadmap - what we want to achieve first, as well as that which has the highest probability to "make or break" the change. 

Here are actions to consider:
  • Moving forward quickly, i.e. "quick wins".
  • Actions that minimize change risk, i.e. "pivoting".
  • Building the groundwork for future change, i.e. "go slow to go fast".
For each action, make sure that everyone agrees they are clearly linked to the objectives and roadmap.

Once you have agreed on maybe a dozen potential actions, decide which are the first three you will tackle. Be specific on what you want to do, who will do it - and when. Ensure that people commit to taking ownership - never assign a responsible person!

The general rule applies that you can't assign tasks to people who are not in the room. What you can do is formulate tasks like "Get buy-in from <person> for <action>" and have an attendee take ownership. Once you have identified that a specific person is a necessary part of the change process, they need to be part of further sessions.

Make sure there's follow-up after the meeting ends.
An approach that might come to mind for following up on the tasks is the "POPCORN Flow".


Wrapping up

With all participants, agree that the meeting has described a relevant problem, a desirable objective, a feasible roadmap and important actions.
Agree to a future appointment to follow up for an Inspect and Adapt session where the SORT will be repeated - revisiting all four segments.




Wednesday, February 26, 2020

ART Design - and: Solving the wrong problems

It's amazing how good organizations are at solving the wrong problem, i.e. "doing the wrong thing righter". Here is a great example of how not to design an ART:



This was the outcome of a PI-Planning. Their planning Retrospective led people to conclude that "electronic tools are much better for developing a Program Board than physical boards, because the dependencies are way too difficult to correlate with all those strings floating around, falling off, getting intertwined and so on."

This Train operates based on the assumptions that they have the right ART, the right teams - and are working on the right things. Never did it cross anyone's mind that any of these assumptions might be wrong.

The wrong ART

The first question we need to ask ourselves: do we have the right Agile Release Train?
If there is an extensive dependence on people outside the Agile Release Train, or there's a massive capacity bottleneck within the ART while people outside the ART have capacity to do the blocked work, then we might want to slice the ART differently.

The wrong teams

It's okay for teams to occasionally depend upon one another. It fosters learning and information exchange. Some Product Managers even go as far as to purposely define features in a way that "swarming" across teams allows the ART to generate value faster. When teams choose to split work and collaborate in real time to maximize value generation, that's a plus.

What is not okay, and a clear indicator that we have the wrong teams: When no team can get any work done without relying on actions of other teams.
Even Component Teams should be able to release value straight into Production. When teams can only do piecemeal work, they are specialist teams that inhibit the flow of value.

I have seen more than once how teams, guided by experienced SPCs and insightful RTE's, have spontaneously used the PI-Planning to "disband and regroup" - reforming as teams capable of delivering end-to-end value: It's possible!

The wrong work

The above is a clear example of what leads to doing the wrong work. With so many dependencies, "dependency management" is already a full-time job. It shouldn't be. It should be effortless.
The prime directive of dealing with dependencies is: "Minimize."

When I see two or three dependencies on a PI Planning board, I'm happy - it means we have decent team constellations and the right skills in the teams.
When I see more than ten dependencies, I will express concerns about the team constellation.
When I see more dependencies than features on the board, I will ask the ART to work on resolving their dependencies rather than figuring out better ways to manage them.





Tuesday, February 25, 2020

The problem of Proxy Users

User Stories - what's a user story, what's even a user? And: Why is that important?

The definition of a "user"

User - a person who uses or operates something.
Sources: Oxford, Google
A "person" can be either an individual or a legal entity.
The specific mention of the term "person" is quite important. We will get back to that later.

There are also other definitions out there that have a different emphasis:
User - a person or thing that uses.
Sources: Dictionary.com, Collins Dictionary
This definition opens a different way of thinking - that "things" (physical and/or virtual objects) can be users as well.

Types of "users"

There are various types of users.

To explain them, let's take a look at this scenario:

Scenario:
A person has a bank account with no money in it.
The problem? Checks bounce on this empty account.

Here are some core types of potential users and their problems in this scenario:

  • End user (Customer) - someone whose problem is being addressed. Example - bank client: bouncing checks mean there is unpaid debt.
  • Service Provider - anyone who provides a service using the product. Example - bank: may get calls from both their customer and the people their customer gave checks to, asking what's wrong.
  • Intermediary - a representative of the Service Provider towards the person whose problem is being addressed. Example - bank clerk: the person at the counter, answering on behalf of the bank what's happening.
  • Proxy - a representative of the person whose problem is being addressed. Example - bank account: a virtual entity that acts on behalf of the customer, both towards the bank and towards the customer's transaction partners.
  • Delegate - someone acting on behalf of the person whose problem is being addressed. Example - accountant: responsible for balancing the customer's finances.
  • Administrator - responsible that the product or service can be used correctly. Example - Bank IT: checks need to be rejected rather than processed.


Why is the distinction between those "users" important?

Know your Users!

Users define success

For example, if you develop in-house business support software, your user is often a single company, which means you'll have an entirely different way of measuring success than if your user is a Customer on the free market (of whom there could potentially be a couple billion). And the value of your product may depend on two categories of users: the Service Provider, whose balance sheet ultimately determines whether you are building the right product - and the Intermediaries, whose use of the product determines whether you are building the product right.

Users want their problem solved

Especially in large corporations, it's easy to lose sight of what the product actually is - and who the users actually are. By solving the wrong user's problems, you run the risk of local optimization - potentially damaging the interests of your real users in the process!

For example, look at this rather typical "User Story":
As a Security Specialist, I want people to choose difficult passwords and short password renewal cycles, so that accounts stay secure.
Then think of yourself: how would you like it if the engineering team of your favorite app spent a month creating password rules so complex that you need a PhD in cryptography just to determine what counts as a valid password - and then forced you to choose a new, different password every time you log in?


Solving the right problem

Intermediaries, proxies and administrators place requests on systems, and while their requests are undoubtedly important, none of them helps you solve the right problem!

Compare, for example, the following two user stories:
As a bank clerk, I want to be able to see the customer's balance with a single click.
As a bank, I want to know why I'm losing customers!
And now, let's add another user story:
As a bank account, I want to let my customer know when I have no money!

Do you see the difference between those three stories?

Users who have no financial stake

The first story may have sprung from a conversation with a real intermediary user, and it's fairly easy to know whether you are solving this problem. However, it's almost entirely disconnected from the real user's (the customer's or the bank's) problem.
Unless it happens, by sheer chance, that seeing the customer's balance also addresses the second problem, there's a good chance that spending time on the concerns of intermediaries derails the team from relevant business objectives.

Users that don't have a voice

The second story describes an existence-threatening problem. Yet the "bank", as the legal entity that employs the engineering team, has no face or voice. Its true voice may be hidden behind layers of management and bureaucracy. It may be too far away from the engineers to feel relevant and too abstract to be worth touching.

Proxies have no real needs

The third story is all too common in large enterprises: components, proxies - even APIs - become "users", on the assumption that if we service them, we are solving a relevant problem!

There are no proxy problems

Here's an uncomfortable truth: the proxy has no problem.
It's never the proxy's problem - it's ultimately a customer or provider problem.

Only when real users (end users and/or providers) interact on a path that crosses the proxy is there a real problem. And there are two ways of solving it: either help the proxy do their job better - or eliminate the proxy's part of the process altogether.

Hidden assumptions

There are always hidden assumptions in proxy stories:
  1. The voice of the proxy echoes the voice of the unheard real user
  2. The proxy is on the critical path of transactions 
  3. The proxy is the constraint to the performance of the system
  4. A dependency on the proxy is inevitable
  5. Solving the proxy's problem is the best solution to the real problem

By accepting the "story of the proxy" - which most agile teams do with little pushback or thought - we run a massive risk of sub-optimizing a solution and limiting the solution space. While this can be valid in many cases to provide focus, it raises the question: when, if not during refinement or planning, does the team spend time thinking about which solution best solves the most pressing problem?

Working the Constraint

The reason why I harp so strongly on the distinction between "proxies" and "intermediaries" versus "real users" is this: while they may sit on the road to a higher-performing system, it is by no means automatic that doing their bidding improves the performance of the system.

Theory of Constraints teaches us that:
  1. Optimization outside the Constraint is - at best - ineffective.
  2. Time spent optimizing outside the Constraint is Waste.
  3. To work on the Constraint, you need to know how to optimize it.
  4. When the Constraint is operating at its capacity - elevate it!
Here's what happens when you listen to the voices of Intermediaries or Proxies as if they were the real users:
  1. You start to forget who the Real User is.
  2. You remain oblivious of how the changes you are making affect overall performance.
  3. You don't know if you're working the Constraint or somewhere else.
  4. You may institute a new Constraint on the Critical Path, worsening the system.
  5. You will miss solutions that elevate the constraint, especially if the constraint is of higher order.


Summary

It's not forbidden to have "proxies". It's just dangerous. 
To avoid the most common pitfalls, start asking:
  • Do we understand the real customer behind the problem we are solving?
  • Is the solution on the critical path to success?
  • Are we working the Constraint?
If you can't answer these questions with a clear and resounding "Yes", you're definitely in dangerous territory - and if you can answer any of them with a clear "No", you may want to reconsider implementing this story to begin with!

The problem isn't so much who your user is - it's that "fake users" make you lose sight of what value is and how to create it!

Friday, January 31, 2020

Double Queues for faster Delivery

Is your organization constantly overburdened?
Do you have an endless list of tasks, and nothing seems to get finished? Are you unable to predict how long it will take for that freshly arriving work item to get done?
Here's a simple tip: Set up a "Waiting Queue" before you put anything into progress.

The Wait Queue


The idea is as simple as it is powerful:
By extending the WIP-constraint to the preparation queue, you have a fully controlled system where you can reliably measure lead time. Queuing discipline guarantees that as soon as something enters the system, we can use historic data to predict our expected delivery time.

This, in turn, allows us to set a proper SLA on our process in a very simple fashion: the WIP in the system, multiplied by the average service time per item, tells us when the average work item will be done - that's Little's Law at work (average lead time = WIP ÷ throughput).
This allows us to give a pretty good due date estimate on any item that crosses the system boundary.
Plus, it removes friction within the system.
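
To make the arithmetic concrete, here is a minimal sketch in Python - the function name and the numbers are my own illustration, not part of any tool:

    from datetime import datetime, timedelta

    def forecast_due_date(items_ahead, throughput_per_day, start):
        # Little's Law: average lead time = items in the system / throughput.
        # "items_ahead" counts everything in front of the new arrival,
        # i.e. the wait queue plus the work currently in progress.
        lead_time_days = (items_ahead + 1) / throughput_per_day
        return start + timedelta(days=lead_time_days)

    # Example: 14 items ahead of the new arrival, and we historically
    # finish 2 items per day -> done in roughly 7.5 days.
    print(forecast_due_date(14, 2.0, datetime(2020, 1, 31)))
    # -> 2020-02-07 12:00:00

The only discipline required is FIFO: as long as nothing jumps the queue, historic throughput is all you need for the forecast.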

Yes, Scrum does something like that

If you're familiar with Scrum, you'll say: "But that's exactly the Product Backlog!" - almost!
Scrum attempts to implement this "Waiting Queue" with the separation of the Sprint Backlog from the Product Backlog. While that is a pretty good mechanism to limit the WIP within the system, it means we're stuck with an SLA time of "1 Sprint" - not very useful when it comes to Production issues or for optimization!
By optimizing your Waiting Queue mechanics properly, you can reduce your replenishment interval to significantly below a day - which breaks the idea of "Sprint Planning" entirely: you become much more flexible, at no cost!

The Kanban Mechanics

Here's a causal loop model of what is happening:


Causal Loops

There are two causal loops in this model:

Clearing the Pipes

The first loop is a negative (balancing) feedback loop - moving items out of the system into the "Waiting Queue" in front of the system will accelerate the system! As odd as this may sound: keeping items out of the system as long as possible reduces their wait time!

As an illustration, think of an overcrowded restaurant: by reducing the number of guests in the place and having them wait outside, the waiter can reach tables faster and there's less stress on the cook - which means you'll get your food faster than if you were standing between the tables, blocking the waiter's path!
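
The same effect in numbers - a toy model (my illustration, with made-up figures) comparing "everything in progress at once" against a wait queue with a WIP limit of one:

    # Ten work items, one day of effort each, one team.
    items = [1.0] * 10

    # Variant A: all ten "in progress" at once, worked round-robin.
    # Nothing finishes before the very end -> 10 days lead time each.
    lead_a = [sum(items)] * len(items)

    # Variant B: WIP limit of 1 - the other nine wait outside the system.
    # Item k finishes after the k items before it: 1, 2, ..., 10 days.
    lead_b, clock = [], 0.0
    for effort in items:
        clock += effort
        lead_b.append(clock)

    print(sum(lead_a) / len(lead_a))  # 10.0 days average lead time
    print(sum(lead_b) / len(lead_b))  # 5.5 days average lead time

Total effort is identical in both variants - but with the wait queue, the average item is done in half the time, and the first result arrives after one day instead of ten.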


Flushing Work

The second loop is a positive (reinforcing) feedback loop - reducing queues within the system reduces wait time within the system (which increases flow efficiency), which in turn increases our ability to get stuff done - which further reduces queues within the system.

How to Implement

This trick costs nothing, except having to adjust our own mental model about how we see the flow of work. You can implement it today without any actual cost in terms of reorganization, retraining, restructuring, reskilling - or whatever.
By then limiting the work you permit within your system (department, team, product organization - whatever) to only what you can achieve in a reasonable period of time, you gain control over your throughput rate and thus get much better predictability in forecasts of any type.



Footnote:
The above is just one of many powerful examples of how #TameFlow deals with our pre-conceived mental models in order to enable us to create better systems - at no cost, with no risk.

Tuesday, January 28, 2020

The six terminal diseases of the Agile Community

The "Manifesto for Agile Software Development" was written by highly talented individuals seeking "better ways of developing software and helping others do it." Today, "Agile" has become a playground for quacks of all sorts. While I am by no means saying that all agilists are like this, Agile's openness to "an infinite number of practices" has allowed really dangerous diseases to creep in. They deprive the movement of impact, dilute its meaning and will ultimately render it entirely useless.


The six terminal diseases of "Agile"

In the past decade, I've seen six dangerous diseases creep into the working environment, proliferating and carried in through "Agile". Each of these diseases is a danger to mental health, productivity and organizational survival:

Disease #1 - Infantilization of Work

"Hey, let's have some fun! Bring out the Nerf Guns! Let's give each other some Kudos cards for throwing out the trash - and don't forget to draw a cute little smiley face on the board when you've managed to complete a task. And if y'all do great this week, we'll watch a movie in the office on Friday evening!" Nope. Professionals worth their salt do not go to work to do these things, and they don't want such distractions at work. They want to achieve significant outcomes, and they want to get better at what they do. Work should be about doing good work, and workers should be treated like adults, not like infants.
An agile working environment should let people focus on what they came to do and allow them to deliver great results. While it's entirely fine to let people decide for themselves how they can perform best, bringing kindergarten to work and expecting people to join the merry crowd is a problem, not a solution!


Once we have mastered disease #1, we can introduce ...

Disease #2 - Idiocracy

Everything is easy. Everything can be learned by everyone in a couple of days. Education, scholarship and expertise are worth nothing. Attend a training, read a blog article or do some pairing - and you're an expert. There's a growing disdain for higher education, because if that PhD meant anything, it would only be that the person has a "Fixed Mindset" and isn't a good cultural fit: flexible knowledge workers can do the same job just as well, they'll just need a Sprint or two to get up to speed!


And since we're dealing with idiots now, we can set the stage for the epic battle of ...

Disease #3 - Empiricism vs. Science

I've written about this many times - there's still something like science, and it beats empiricism hands down. We don't need to re-invent the wheel. We know how certain things, like thermodynamics, electricity and data processing, work. We don't need to iterate our way there to figure out how those things work in our specific context.

Empiricism is the idiocratic answer to ignorance, and it's increasingly replacing scientific knowledge. Coaches not only fail to point their teams to existing bodies of knowledge - they actively question scientifically valid practices with "Would you like to try something else? It might work even better." The numbers don't mean anything - "In a VUCA world, we don't know until we've tried" - so who needs science or scientifically proven methods? Science is just a conspiracy of people who are unwilling to adapt.


Which brings us into the glorious realm of ...

Disease #4 - Pseudoscience

There is a whole range of practices and ideas rejected by the scientific community because they have either failed to meet their burden of proof or failed the test of scrutiny. Regardless, agile coaches and trainers "discover", modify - or even entirely re-invent - these ideas and proclaim them as "agile practices" that are "at least worth trying". They add them to their coaching style or train others to use them. And so these practices creep into Agile workplaces, get promoted as if they were scientifically valid, and further dilute the credibility and impact of methods that actually are.
NLP, MBTI and the law of attraction are just some of these practices growing an audience among agilists.


And what wouldn't be the next step if not ...

Disease #5 - Esoterics

Once we've got the office Feng Shui right, a Citrine crystal should be on your desk all the time to stimulate creativity and help your memory. Remember to do some Transcendental Meditation and invoke your Chakras. It will really boost your performance! If you have access to all these wonderful Agile Practices, your Agile Coach has truly done all they can!

(If you think I'm joking - you can find official, certified trainings that combine such practices with Agile Methods!)


Even though it's hard, we can still top this with ...

Disease #6 - Religion

I'll avoid the obvious self-entrapment of starting yet another discussion about whether certain Agile approaches or the Agile Movement itself have already become a religion, and take it where it really hurts.
Some agile coaches use "Agile" approaches to promote their own religion - one blog article nominates their own deity as "The God of Agile" (which could be considered a rather harmless case) - and some individuals even bring Mysticism, Spiritism, Animism or Shamanism into their trainings or coaching practice!

Religion is a personal thing. It's highly contentious. It doesn't help us in doing a better job, being more productive or solving meaningful problems. It simply has no place in the working environment.



The Cure

Each of these six diseases is dangerous, and in combination, their harmful effect grows exponentially. At best, consider yourself inoculated now and actively resist letting anyone introduce them into your workplace. At worst, your workplace has already contracted one or more of them.

Address them. Actively.

If you're a regular member (manager, developer etc.) of an organization that suffers from such diseases: figure out where they come from and confront those who brought them in. Actively stop further contamination and start cleansing the infection from your organization.

If you're a Scrum Master or Coach and you think introducing these practices is the right thing to do: if this article doesn't make you rethink your course of action, then for the best of your team, please pack your bags and get out! And no, this isn't personal - I'm not judging you as a person, just your practice.