Friday, May 17, 2019

Need Gap Reduction

Let's cut straight through the bullshit. You want to become agile and ask yourself, "How do we know if we're making progress?"
This one simple metric tells you if you're moving in the right direction.

If you're not reducing the Need Gap, you're doing it wrong!

The Need Gap

A "Need Gap" exists from the time when I, as a consumer, realize that I need something until the need is satisfied.

The difficulty of many classical projects is that the need may be satisfied in other ways outside the project's scope or it may completely disappear even before the project's outcomes are delivered.

Why not ...

There are other common metrics. Couldn't we also choose them?
Okay, I'm cheating a bit here. The "Need Gap" is two-dimensional, so it's actually two metrics - which means I'm not ditching common agile metrics entirely.


Time-To-Market is simply an incomplete metric. It doesn't account for the (potentially massive) time lapse between making the solution available and the solution achieving its purpose.


Value is related to wealth obtained by providing the solution, although the amount of wealth benefit that can be monetized depends on many factors. To avoid having to go into a full economics discourse on game theory, we can simplify this by saying that value is again an incomplete metric.


In a business context, "learning" by itself is an unfocused metric that needs context. Things we might want to learn should ultimately contribute to the bottom line.
For example, learning:
  • Expressed, unexpressed and upcoming needs
  • Better ways of meeting needs
  • Monetizing customer wealth gain
Ultimately, learning is an enabler to addressing the Need Gap.

The Need Gap

A need gap indicates both a delay in satisfaction and an impact on wealth.

Time to Satisfaction

The time to satisfaction is the lead time plus delivery time plus customer reception time, including any potential feedback cycles until the customer is happy with the outcome.
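The arithmetic above can be sketched in a few lines of Python. This is a hypothetical illustration; the dates and the `time_to_satisfaction` helper are made up for this example:

```python
from datetime import datetime

def time_to_satisfaction(need_realized, satisfied):
    """Time to Satisfaction: from the moment the need is realized
    until the customer is happy with the outcome (after all feedback
    cycles), covering lead time, delivery time and reception time."""
    return satisfied - need_realized

# Hypothetical example: need voiced on March 1st, final feedback cycle
# closed on May 17th - the gap spans the whole period, not just delivery.
gap = time_to_satisfaction(datetime(2019, 3, 1), datetime(2019, 5, 17))
print(gap.days)  # 77
```

The point of the sketch: measuring only the delivery portion would understate the gap the customer actually experiences.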

Impact on Wealth

Customer wealth is a function both of money and other factors. 
Cost reduces wealth - as does the unmet need itself, whereas the customer's own ROI increases their wealth.
Economic customers are looking to increase their wealth, so the cost spent on meeting the need should be lower than the loss of leaving the need unmet.
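As a minimal sketch of this economic argument - the `worth_meeting` helper and the numbers are invented for illustration:

```python
def worth_meeting(cost, unmet_need_loss):
    """An economic customer spends on meeting a need only when the cost
    is lower than the loss of leaving the need unmet."""
    return cost < unmet_need_loss

# Paying 80 to avoid a loss of 100 increases wealth; paying 120 doesn't.
print(worth_meeting(80, 100))   # True
print(worth_meeting(120, 100))  # False
```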

Rising Need Gap

The need gap grows in multiple ways:
  • When the time between feeling a need and having it met increases
  • When the customer's wealth is reduced
As the need gap grows, the customer's self-interest in discovering alternate solutions also grows. 

Reducing the Need Gap

Ideally, there is no "Need Gap" for a customer - by the time a customer realizes a need, they have their need met:

  • When the Need Gap is low, there is little customer benefit to making changes in the way of working.
  • When the Need Gap is high, the customer benefits from making changes that reduce this need gap.
  • When the Need Gap doesn't get reduced, then the customer has no benefit from any change made in the process.
A reduction of the need gap implies the goal-focused:
  • increase of customer value,
  • reduction of processing time,
  • keeping of development cost below the delivered benefit,
  • improvement of value management and discovery,
  • closure of feedback loops,
  • optimization of monetization.

Measuring the Need Gap

The Need Gap is quite easy to measure - look at the time from when people first speak of a need until they stop talking about it, and at how (un)happy customers are during that time.
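One way to operationalize this measurement, as a hedged sketch - the event log and the 1-to-5 satisfaction scale are hypothetical, not a prescribed format:

```python
from datetime import date

# Hypothetical event log: each time the need is mentioned, with a
# customer satisfaction score (1 = very unhappy .. 5 = happy).
mentions = [
    (date(2019, 3, 1), 2),
    (date(2019, 4, 2), 1),
    (date(2019, 5, 17), 4),
]

def need_gap(mentions):
    """Two dimensions: how long people talked about the need,
    and how (un)happy they were while it was open."""
    opened = min(d for d, _ in mentions)
    closed = max(d for d, _ in mentions)
    avg_satisfaction = sum(s for _, s in mentions) / len(mentions)
    return (closed - opened).days, avg_satisfaction

days_open, satisfaction = need_gap(mentions)
print(days_open, round(satisfaction, 2))  # 77 2.33
```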

Wednesday, May 15, 2019

There's no such thing as a DevOps Department

"Can we set up a DevOps Department, and if so: How do we proceed in doing this?"
As the title reads, my take is: No.

Recently, I was in a conversation with a middle manager, let's call him Terry, whose organization had just recently started to move towards agility. He had a serious concern and asked for help. The following conversation ensued (with some creative license to protect the privacy of the individual):

The conversation

Terry: "I'm in charge of our upcoming DevOps department and have a question for you, as an experienced agile transformation expert: How big should the DevOps department be?"

Me: "I really don't know how to answer this, let me ask you some questions first."
Me: "What do you even mean by, 'DevOps Department'? What are they supposed to do?"

Terry: "You know, DevOps. Setting up dev tools, creating the CD pipeline, providing environments, deployment, monitoring. Regular DevOps tasks."

Me: "Why would you form a separate department for that and not let that be part of the Development teams?"

Terry: "Because if teams take care of their own tools and infrastructure, they will be using lots of different tools."

Me: "And that would be a problem, as in: how?"

Terry: "There would be lots of redundant instances of stuff like Jira and Sonar and we might be paying license fees multiple times."

Me: "And that would be a problem, as in: how?"

Terry: "Well, we need a DevOps department because otherwise, there will be lots of local optimization, extra expenses and confusion."

Me: "Do you actually observe any of these things on a significant scale?"

Terry: "Not yet, but there will be. But back to my question: How big should the department be, what's the best practice?"

Me: "Have you even researched the topic a little bit on the Internet yet?"

Terry: "I was looking for what percentage of a project's budget should be assigned to the DevOps unit."
Terry: "And every resource I looked at pretty much said the same thing: 'Don't do it.' But that's not what I am looking for."

Me: "Sorry Terry, there is so much we need to clarify here that the only advice I can give you for the moment is to reflect WHY every internet resource you could find suggested to not do it."

Why not?

Maybe you are a Terry, looking for how to set up a DevOps department, so let me get into the points where I think Terry took a wrong turn.

  • Terry was still thinking in functional separation, classic Tayloristic silo structures, and instead of researching what agile organizations actually look like, he was working with the assumption that agile organizations are structured the same way.
  • Terry fell into another trap: not even understanding what DevOps is, thinking it was only about tools and servers. Indeed, DevOps is nothing more than removing the separation between Dev and Ops - a DevOps team is both Dev and Ops, not a separate department that does neither properly.
  • Terry presumed that "The Perfect DevOps toolbox" exists, whereas every system is different, every team is different, every customer is different. While certain tools are fairly wide-spread, it's up to the team to decide what they can work with best.
  • Terry presumed that centralization is by default an optimization. While indeed both centralization and decentralization have trade-offs, centralization has a massive opportunity cost that needs to be warranted by a use case first!
  • Terry wanted to solve fictional problems. "Could, might, will" without observable evidence is probably the biggest source of wasted company resources. Oddly enough, DevOps is all about data-driven decision making and total, automated process control. A DevOps mindset would never permit major investments into undefined opportunities.
  • Terry was thinking in terms of classical project-based resource allocation and funding. Not even this premise makes sense in an agile organization, and it's a major impediment towards customer satisfaction and sustainable quality. DevOps is primarily a quality initiative, and when your basic organizational structure doesn't support quality, DevOps is just a buzzword.

Reasons for DevOps

Some more key problems that DevOps tries to address:
  • Organizational hoops required to get the tools developers need to do their job: Creating a different organizational hoop is just putting lipstick on a pig.
  • Developers don't know what's happening on Production: Any form of delegation doesn't solve the problem.
  • Handovers slowing down the process: It doesn't matter what the name of the handover party is, it's still a handover.
  • Manual activity and faulty documentation: Defects in the delivery chain are best prevented by making the people with an idea responsible for success. 
  • Lack of evidence: An organization with low traceability from cause to action will not improve this by further decoupling cause and action.

How-To: DevOps

  • If you have Dev and Ops, drop the labels and call all of them Developers. Oh - and don't replace it with the DevOps label, either!
  • Equip developers with the necessary knowledge to bear the responsibility of Operations activity.
  • Enable developers to use the required tools to bring every part of the environment fully under automated control.
  • Bring developers closer to the users, either by direct dialog or through data.
  • Move towards data-driven decision making on all levels: Task-wise, structurally, organizationally and strategically.
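To illustrate the last point, here is a small, hypothetical sketch of data-driven decision making at the task level: computing deployment frequency and change failure rate from a made-up deployment log. Both the log format and the `deployment_metrics` helper are assumptions for this example, not a prescribed tool:

```python
# Hypothetical deployment log entries: (day of week, succeeded)
deployments = [
    (1, True), (2, True), (2, False), (3, True),
    (5, True), (6, True), (7, False), (7, True),
]

def deployment_metrics(deployments, period_days):
    """Two common data points for DevOps decisions:
    deployment frequency and change failure rate."""
    total = len(deployments)
    failures = sum(1 for _, ok in deployments if not ok)
    return total / period_days, failures / total

freq, failure_rate = deployment_metrics(deployments, period_days=7)
print(round(freq, 2), failure_rate)  # 1.14 0.25
```

With numbers like these on hand, "could, might, will" arguments can be replaced by observable evidence.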

And that is my take on the subject of setting up a DevOps department.

Monday, May 13, 2019

The Management-Coach clash

When an agile coach or Scrum Master is brought into a traditional line organization, there is a high risk of misunderstanding and clashing, oftentimes leading to frustration. Why is that so? Let us explore the traditional expectations towards different roles.

Upton Sinclair said it all with his famous quote:
It is difficult to get a man to understand something when his salary depends upon his not understanding it.
As I have had these discussions multiple times in the past, here is a short diagram to illustrate what causes the clash; we will explore it below:

Asserting authority

Managers get paid to be in control of their domain. When things aren't under control, they are expected to bring them under control. A good manager is expected to be able to state at any time that everything happens according to their provision, and nothing happens that wasn't agreed beforehand.

A consultant's authority is granted and limited by their hiring manager, and the consultant is expected to exercise that authority to drive a specific agenda in service of their manager.

A coach doesn't assert authority over others, and won't drive a predetermined agenda. Instead, a coach helps people realize their own potential and freely choose the course of action they consider best - although clearly within the scope.

And that's where coaches and managers clash:
Managers might think that coaches will exercise authority over the team, and thus be sorely disappointed when they see that the coach isn't in control of the team's work. The coach, in turn, will be sorely disappointed when management reveals that they think the coach is doing a crummy job for not taking control.

Solution focus

Managers are expected to have a solution, and do whatever it takes to get one. The proverbial statement, "Bring me solutions, not problems" is descriptive here. When higher ranking people within the organization ask a manager about something within their domain, the only appropriate answer is either "This is our solution" or "We are working on a solution."

For consultants, the only appropriate answer is "This is the solution", or maybe "I need x days and then I will give you the solution".

Coaching is an entirely different book: "The solution is within you. I will help you find it." - maybe the coach does have a clear understanding of the solution, maybe not. This isn't even relevant. Important is that the coach guides others in finding their solution.

And that's where coaches and managers clash:
Managers might think that coaches are problem solvers. With this expectation, either the coach should take proactive steps to install a solution before the problem occurs or immediately resolve any issue once it occurs.
Looking at the flip side of the coin, the coach may argue that they can't work with managers who always look to the coach for the solution and aren't willing to spend time and reflect on the situation.

Getting people to think

In traditional management, managers are considered intellectual "thinkers". Their job is to think so that others can do. (Let's ignore for argument's sake that most managers' calendars are so cluttered with appointments that they have zero time left to think.) Statements like "This is above my paygrade" epitomize a corporate culture where thinking is considered the responsibility of management.

A consultant has a higher self-interest in getting some people to think - at a minimum, the hiring manager should put enough consideration into the consultant's proposal to understand how their solution will affect the organization, preferably in a beneficial way.

Coaches take another stance: Ideally, they encourage others to think by asking probing, critical questions and being skeptical about other people's solutions - in order to help people get beyond preconceived notions and think in wider horizons. The more thinking a coach does for the organization, the less sustainable the solution will be once the coach leaves.

And that's where coaches and managers clash:
Managers might expect the coach to use their free time to think of better ways of doing things and then come back with a clear list of improvement suggestions that just needs to be ticked off.

A coach might not do that, which means neither that the coach is lazy nor that the coach isn't smart enough: The coach will have a number of items in mind that need to be worked on, and will help the organization get to these points one by one, when they are ready. Maybe the only thing management sees is that the coach asks a few key questions so that people will think about what's really going on and which systemic changes will improve the situation.

Defusing the conflict

Misalignment of expectations is the main cause of frustration between coaches and management. Early and frequent communication helps. Coaches need to understand how managers tick, what they get paid for and what they expect. Likewise, managers should be cautious about hiring coaches before they understand how a coach works.

For a manager, it's okay to get a consultant rather than a coach, and indeed that may be better than hiring a professional coach and expecting them to work as a consultant. It's also okay to tell the coach to tone down on the coaching and dig more into consulting.

For a coach, it's okay to start a serious discussion whether coaching or consulting is more congruent with management expectations, and propose switching into a consulting mode as fit. It's also okay to tip your hat and take your leave when consulting isn't what you expect to be doing.

It's not okay to get angry and blame one another just because we don't understand each other.

Tuesday, April 30, 2019

Coachability Index

Not all of my coaching engagements were successful, and in retrospect, it was my own mistake not to check for coachability before engaging - and not to contemplate the absence of coachability factors. This article will make it easier for you, both as coach and as coachee, to understand how character contributes to coaching success.

As agile coaches, we deal with people. People who have a unique personality. This uniqueness is what makes us special, what gives us the place in society we hold. In some cases, our personality is helping us - in others, it is our constraint. Every personality deserves respect, although some personalities benefit more from coaching than others.

Let us look at six character traits that increase or decrease the likelihood of benefitting from coaching. Notice that this article is not about success as a coach - it is about the coachee's success!


Openness

How open is the coachee to talk about themselves and their situation? Do they reveal when they're stuck or experience setbacks? Coaching is limited to what the coachee is willing to reveal and work on - any area closed to coaching remains out of scope for coaching improvement. The ethical coach will build a sphere of trust for the coachee to enter into - the step of opening themselves remains up to the coachee.


Positive Indicator | Negative Indicator
Reveals personal perspective and perception | Constantly insists, "This is not relevant"
Explores context and background | Over-focussing on single points
Puts themselves into the center | Doesn't talk about themselves

Impact on Coaching

  • When there's strong openness, the bond between coachee and coach allows the coach to explore further
  • When there's little openness, most coaching discussions will avoid the real issue. 
  • When there's closedness, even the coach may be led off track by the coachee.

My advice

People have reasons for being closed. These may be beyond the coach's ability to influence. A coach cannot work with a person who won't open up; that's why it is important to be able to withdraw from coaching when this situation can't be remedied.


Receptiveness

Coaching is a gift to help others grow. A gift not received is waste. Part of coaching is that the coachee needs to be willing to receive criticism, feedback and even pushback. The coach is just one of many people to offer feedback and may occasionally criticize the coachee on their progress. If the coachee ignores such input, waters it down, or simply justifies their current stance, coaching efforts will evaporate.


Positive Indicator | Negative Indicator
Actively solicits feedback | Thinks they are in a position to give, but not receive, feedback
Mentions feedback or criticism they have received | Blames people who criticize them
Inquires the reasons for criticism | Questions the motives of any criticism
Accepts any feedback as an opportunity | Brushes off feedback

Impact on Coaching

  • When there's strong receptiveness, there's a big window of opportunity for meaningful change.
  • When there's weak receptiveness, most coaching discussions will avoid the real issue. 
  • When the coachee is rejective, coaching might be considered a threat rather than an opportunity.

My advice

Many people have un-learned this kind of listening, especially people in strong blame-oriented hierarchies. Find ways to go "beyond blame" and beyond deflection - or you might as well end the coaching.


Awareness

We don't live in a vacuum. Our mere existence affects those around us - be it in action, inaction, word or silence. We can't "not communicate", and every communication has an impact. How aware is your coachee of their impact on their environment? Do they put effort into understanding the consequences of their words and deeds?


Positive Indicator | Negative Indicator
Cautious of their effect on others | Thinks it's always "the others"
Understands that even staying silent has an effect | "Not criticizing is already enough praise"
Expresses approval of positive events | Finds reasons to criticize even positive events
Realizes how their behaviour affects others | Willfully oblivious to existing feedback loops

Impact on Coaching

  • When there's high awareness, the coachee can benefit tremendously from active feedback.
  • When there's low awareness, some consequences of change may escape the coachee.
  • When there's obliviousness, the coachee has no mirror to look into.

My advice

Almost everyone is born with a good amount of awareness, some even with too much. Many have been conditioned to reduce their own awareness. In many cases, the position or communication structure of an organization offers an advantage to people who reduce their awareness. Building sufficient awareness is often a precursor to specific coaching on special topics, such as agile mindset and behaviour.
Too much self-awareness can lead to "analysis paralysis". In that case, it's important to encourage experimentation with feedback and understanding trial and error as an opportunity.


Change of Perspective

There's more than one way to see things. Everyone has their perspective - but can we change it? Limiting ourselves to one single perspective may blind us to reality. A change of perspective allows us to consider different viewpoints, ideas and even conflicting information - and integrate the useful bits with our own for new opportunities.


Positive Indicator | Negative Indicator
Asks, "How do you see this?" | Excessive use of terms like "No" or "But"
Invests time to learn about different perspectives | Dismisses others' perspectives as flawed
Receives different, even disagreeing, sources of information | Reduces information intake to single channels

Impact on Coaching

  • When the coachee is well able to integrate new perspectives, there will be a broad field for coaching
  • When the coachee prefers to maintain a limited perspective, growth potential is equally limited.
  • When the coachee actively rejects any perspective other than their own, there will be no significant change.

My advice

People who take a limited perspective choose to restrict their world view. As Einstein said, "Problems can not be solved from the same level of consciousness as they were created." - when you can't reach a different level of consciousness, that's the end of meaningful change already.


Seriousness

"Let's cut beyond the talk and get down to business: How am I contributing to the problem? What should I change? With whom should I talk? How do I know I'm making progress?"
Cut beyond the smalltalk. Actions speak louder than words. As coach, you don't need the coachee to assure you of anything - the coachee is both the driver and beneficiary of change.


Positive Indicator | Negative Indicator
Gets down to the core of the issue | Beating around the bush
Assumes personal responsibility | Makes excuses
Feels personally accountable | Deflects from issues
Has an action plan | Constantly winging it
Decides, Acts, Receives | Indecisive
Tracks own progress | Inactive

Impact on Coaching

  • When there's seriousness, change can be fast and usually effective.
  • When the coachee is overly relaxed, results may be considered optional.
  • When coaching is considered a laughing stock, this will become a self-fulfilling prophecy.

My advice

When people aren't serious about making a change, they might have reservations. Do they not see how they benefit from more agility or don't they see how their contribution makes a difference?
Not all people want coaching. Respect that. Make the issue transparent and offer quick closure.


Growth Commitment

The coachee has to be committed to grow as a person in whatever dimension they want to be coached in. As coaching is guidance on another's journey, pushing or dragging the coachee will yield little good.


Positive Indicator | Negative Indicator
Has a goal to improve towards | Isn't interested in defining personal goals
Discontent with status quo | Feels that everything is okay
Sees setbacks as learning opportunity | Blames external factors for setbacks
Strives to overcome shortcomings | Doesn't learn from failure
Trying and struggling | Reassurance without action

Impact on Coaching

  • When there's strong growth commitment, you can lead meaningful discussions about "Why" and "How" that may lead to significant change.
  • When there's weak commitment, the translation of coaching into results will be low. 
  • When there's explicit uncommitment, there's a high risk that the coach will just be the next person on a long list of people the coachee isn't happy with.

My advice

When you see a lack of commitment, start the conversation right there. Unless the situation can be improved, coaching will be a waste of everyone's time.

All in all

The "perfect coachee" would be strong in all six dimensions. I have worked with such people. In each such case, it is a pleasure and an honour to serve them as a coach.

Normal coachees aren't perfect in the above sense. Nobody would expect them to be. If someone is weak on a handful of factors, this can be a great starting point for a conversation. In some cases, coachability can be boosted just by making the issue transparent and offering solid reasons for change.
In other cases, the coaching itself will be restricted or impeded by the missing factors.

And some people just aren't up for coaching. I have made the mistake of trying to "convert" them in the past. That caused frustration on both sides. As a coach, it's good to be clear why we're there and how our coachees can benefit from coaching. If that isn't received, offer the coachee the option to opt out - leaving the door open to resume later.

And finally - don't coach people against their will. That always ends up with bruises.

All of this applies equally when coaching individuals, teams or entire organizations - as teams and organizations are composed of multiple individuals.

Friday, April 26, 2019

Agile Architecture

What should an architect in an agile environment do?

Irrespective of whether we have a single person as "assigned architect" or architecture as a shared responsibility of (one or more) teams - there's a massive difference between "Emergent Architecture" and "Architecture by Chance".
The one thing worse than having the wrong goal is not having a goal at all - and that's true for architecture as well as for the product from a customer's perspective, so let's discuss a bit of what the scope of agile architecture is.

Note: This article is my current, personal view on the topic and I sincerely invite feedback and criticism if I'm off.

Architectural Intent

Architecture may or may not have an intent. Such an intent is usually to create an overall system that minimizes cost or effort for one, several or all involved parties.
This intent can be formulated either in the short, mid or long term. If it isn't formulated at all, the probability quickly diminishes that the optimization goals of the organization are indeed met.

Optimization goals

Typical optimization goals for architecture could be, for example:
  • Minimize Cost of Change
  • Minimize Time to Market
  • Minimize working hours
  • Organizational autonomy
You're free to pick your own, based on your specific circumstance.

Organizational Constraints

Sometimes, we also operate under constraints, which may need to be considered as well. 
Examples might be:
  • Regulatory compliance (e.g. privacy laws)
  • Technological limitations (e.g. company policy)
  • Security constraints (e.g. contractual demands)
These restrict the solution space and should be handled with caution - too many suffocate innovation, while missing any essential ones can kill the product! (as seen in the recent Sixt vs. Accenture case)

Not Strategic Architecture

Although the following activities are indeed part of architecture, these topics are not strategically relevant from an architecture perspective: The more fluently they can shift throughout a product's lifespan, the better other optimization goals can be met.
  1. Formulating, planning and forecasting system capabilities
  2. Defining interfaces, processes and data models
  3. Component delivery strategy and component responsibility
  4. Deciding on specific technologies to solve certain problems
While these things are relevant, agile teams can decide them on a per-item basis, and they can evolve and change dynamically as long as the knowledge and skills to do so are available in the team.

Architectural Direction

Some core questions may be worthwhile answering upfront. These are directional rather than content-wise and can be answered with "Yes" or "No":
  1. Do we want to have Loose Coupling?
  2. Do we want to enable Micro-Releases?
  3. Will we go XaaS?
  4. Do we need scalability?
Note that I'm not talking about big upfront architecture here - even for my company homepage, I answered these questions by taking a couple of minutes to orient myself on what I wanted to achieve before writing the first line of code.

Architectural vision

A vision that nobody can remember is worthless, as is a vision that doesn't help in orienting activities. Using the above questions, a vision statement could be formulated like:
"To develop a system where changes can be done (and undone) at any time within seconds, we create loosely coupled components that can be independently released as autonomous services which can be flexibly scaled."
Then comes the difficult part: getting there.
And that's more difficult on a "brown field" than it is on a green field.
In order to get there, we need to have some mechanisms in place, which may or may not be helpful depending on where we want to be.

Core paradigms

We may need to actively invest effort into enabling and establishing some core paradigms for an equally robust and flexible architecture.

These paradigms could be, for example:

  • Continuous Integration/Delivery/Deployment
  • Automate Everything
  • Measure Everything
  • Test-Driven Design
  • Impermanent Components
  • You Build it, you run it (“true DevOps”)
This means that every architectural decision made by the teams can be benchmarked by whether (or not) an action, decision or choice brings us closer to these paradigms or leads us away from them.

As such, we may add to our architectural vision further statements like the following:
Every component uses a Continuous Delivery process, and where possible, even Continuous Deployment. 
Every activity within the pipeline and on the production environment is automated. 
We have real time data on every aspect of the system and therefore full transparency on its behaviour. 
Code that hasn't been designed based on executable tests doesn't go live. 
Replacing components is a low-effort task. 
Developers have both responsibility and authority regarding production.

All of these paradigms are still outside the specific solution space and do not answer the "How".

Architectural roadmap

After we have agreed on where we want to go, we need to actually go and do it. Since our customers don't pay us to do architecture, but to deliver something of value, these actions should be seamlessly integrated with our development efforts, ideally experimenting with practices and introducing tools while we're delivering tangible value.

If we're in a Green Field, setting up a CD pipeline and delivering an automatically tested "Hello World" into a measured, containerized production environment may bloat up the first user story by a day or two, and then speed up everything that comes afterwards.

On a Brown Field, that's an entirely different story - we can't just do this so easily. What if we have 16 million lines of unsustainable code riddled with defects? CD would be insanity. There's no data from Production, nobody knows the current state of the deployment environment and extracting a single feature from the existing monolith could take months.

We then get into an architectural action roadmap that might look like this:

  1. Short term goals
    1. Automate build process & set up CI pipeline.
    2. Agree with stakeholders to accept current "status quo" as baseline quality for CI/CD.
    3. Every new line of code gets a unit test.
    4. Build metric hooks and set up a measurement system.
    5. Build service API for core monolith.
  2. Mid term goals
    1. Introduce Coding Conventions and add measurement to CI pipeline
    2. Stop the current branching process and move towards genuine CD process
    3. Automated deployment to Test Stage
    4. Automate e2e tests for critical user journeys
    5. Decouple frontend and backend.
  3. Long term goals
    1. Build new capabilities as autonomous services.
    2. Containerize services.
    3. Extract service components from existing monolith.
    4. Move towards polyglot development.
    5. Move individual services to Cloud.
    6. Build Security Services (IAM, Secrets-as-Service, etc.)

These items can then be added into the Product Backlog and prioritized as needed. The authority and autonomy on how to do this specifically still rests within the development teams.

Any architectural activity that doesn't trace back to a goal may be questionable. Any architectural goal that doesn't trace back to a paradigm invites scrutiny - and if anything doesn't match intent, another discussion may be in order.
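This traceability check can even be automated. The following sketch is purely illustrative - the backlog items, goal names and paradigm names are invented for this example, not prescribed:

```python
# Hypothetical traceability check: every architectural backlog item
# should trace to a goal, and every goal to a paradigm.
paradigms = {"Automate Everything", "Continuous Delivery", "Measure Everything"}

# Goal -> the paradigm it serves.
goals = {
    "Automated build & CI pipeline": "Automate Everything",
    "Metric hooks & measurement system": "Measure Everything",
}

# Backlog item -> the goal it traces to (None = no trace).
backlog = [
    ("Automate build process", "Automated build & CI pipeline"),
    ("Build metric hooks", "Metric hooks & measurement system"),
    ("Buy a service mesh", None),  # no goal attached - invites scrutiny
]

def questionable(backlog, goals, paradigms):
    """Items without a goal, or whose goal lacks a paradigm, need a discussion."""
    return [item for item, goal in backlog
            if goal is None or goals.get(goal) not in paradigms]

print(questionable(backlog, goals, paradigms))  # ['Buy a service mesh']
```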

Architectural responsibility

None of the above items mandate having a specific "architect" role, although in larger systems, it may indeed be worthwhile to have at least one person take responsibility to see that there is indeed clarity on the architectural:
  1. Vision
  2. Intent
  3. Direction
  4. Goals
  5. Paradigms
  6. Ongoing Activity

Wednesday, April 17, 2019

Minimal SAFe

"It's bloated, it's big, it generates massive overhead. And it cements status quo"
I claim - this isn't SAFe as intended, it's just how it is often understood. 

In this article, I want to share my experiences on what a minimal SAFe implementation actually is.

To set context: You don't use SAFe for single teams. You use SAFe only when you have multiple teams. The mechanisms of SAFe are intended to create overarching transparency, alignment, ability to act and quality. If you have no issues with any of these, then SAFe just isn't for you. Simple as that.

A number of assumptions in this article are stated as if they were facts. Everything can be put to scrutiny, but doing so would just bloat the article.

Minimal Organization

Unlike what the SAFe big picture suggests, SAFe is actually very lean on process. Take away the SAFe events, and there is nothing left that wouldn't need to be done anyway.
SAFe doesn't mandate anything to be done for the sake of SAFe. Everything SAFe has to say on a process level is merely guidance for unsolved problems that you may be having. If you have no specific problems, feel free to use whatever process suits you.

Team organization

SAFe has very little to say on how teams should work. SAFe is fine with teams using XP, Kanban and Scrum - and also with Waterfall or any custom approach the teams consider effective.

Problems only happen in reverse: teams used to semiannual releases will struggle to meet SAFe's requirement of producing and demonstrating a fully integrated System every other week.
Development teams with a good working mode and solid engineering practices will find a good SAFe implementation to be nearly invisible, except for PI Planning and the System Demo, where they interact with others contributing towards the same goal.

Program organization

At the minimal extreme, SAFe has very little program level activity: Common goals, shared Backlog, addressing and resolving systemic issues. That's it.
Program level activity might be done in less than an hour a week, and the Program organization might just be three people: Product Management, Release Train Engineer and System Architect.
Not much overhead for 50-150 developers, if you ask me.
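
That "not much overhead" claim is easy to sanity-check as a headcount ratio. A quick sketch, assuming the three-person Program organization mentioned above:

```python
# Sanity-check of the program-level overhead claim as simple ratios.
# Assumption: the Program organization is 3 people (Product Management,
# Release Train Engineer, System Architect) for 50-150 developers.
program_people = 3

for developers in (50, 150):
    ratio = program_people / developers
    print(f"{developers} developers: {ratio:.0%} program-level headcount")
```

At 50 developers that's 6% of headcount, at 150 it shrinks to 2% - before even accounting for the Program roles often being part-time.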

The reasons why Program Level activity often takes so much time tend to be:

  1. Organizational impediments galore
  2. Teams don't collaborate properly
  3. Mindset and behaviour in Program Level isn't Lean-Agile

All three reasons should constantly trigger the question, "How can we do things differently so that we (as a Program Team) need to do less work?"

Minimal Roles

SAFe has three "Specialist" Roles that aren't shared with other frameworks. Does that necessarily add head count and complexity?

Product Management

If you have one product and multiple teams, you need to be able to deliver a single, Integrated, Working Product - so the work of each team should contribute to a single Whole.

Someone needs to define what the Product is. SAFe calls this "Product Management". The requirements of Product Management are:
  1. A single Product Vision
  2. A prioritized Backlog of unharnessed Value
  3. Aligning priorities with the organizational strategy
  4. A common plan aligning the individual team goals
  5. Taking responsibility when this plan is in danger
The function can be assumed by a sole Product Owner, by one of multiple Product Owners, or even by all of them.
If we have 4-6 teams and a single Product Owner who also assumes Product Management responsibilities, SAFe might hint that this PO may have too little time for each team but it's still SAFe.

If you want to have a single PO who also assumes PM responsibility, that's still SAFe. No overhead needed.

System Architect

The role of the System Architect is probably the most disputed in SAFe. Let's forget the label for a minute and just say, "It's generally a good idea if you have someone whom you know you can ask if there are technical issues that are bigger than the team".
Does that someone need to decide what the architecture looks like, do they need to actively propose a solution? Let's be flexible.
Maybe that someone just feels responsible for rallying architecture-focused developers from the team on a common architectural goal - a community spokesperson who has the backing from other developers to represent them in order to prevent time-consuming, low-value large discussion groups.

Rather than asking, "Should we have a System Architect?", I would like to ask: "What happens if we have multiple teams and nobody who takes responsibility to bring developers together, nobody who can speak with the Product Owner about Technology in a language they understand?"

If that somebody is a senior developer who is part of a team, that's still SAFe. No overhead needed.

Release Train Engineer

Self-Organization is a great thing, until developers realize that multi-team collaboration often leads to overarching conflicts and local optimization.
Someone should make sure that global optimization happens, teams talk to each other, global issues become visible and systemic issues get addressed at system level.

This "someone" may well be a Scrum Master or a line manager - the important part is that the RTE is able to take a neutral perspective, doesn't dive into the work when big-picture reflection is needed, and has the standing in and backing of the organization to pull the big levers for change.

If someone is already doing this, you have found your Release Train Engineer. No overhead needed.

Minimal SAFe Events

Probably the second biggest complaint about SAFe, after role mania, is meeting madness. So let's address why these meetings exist and what the minimal overhead is.

PI Planning

The 2-day PI Planning with all of its content is one of the main "fear factors" for agile teams who have gotten used to lean, agile planning. Here's the deal: nobody said the PI Planning needs to be 2 days. And nobody says it must have a 10-point, 16-hour agenda. Those are just suggestions.

What is essential is that teams consider not only their own next iteration, but also:
  • Their mid- and long-term goal
  • Their recent progress towards their goal
  • Their next short- and mid-term goals
  • Their mutual contribution towards each other
So, let's again ask in reverse: "What happens if teams working on a common product don't do these things?"

The PI Planning is not optional. What is, indeed, open to discussion are the questions of "How" and "How long". 

Is the PI Planning an additional event on top of other events teams have? Indeed. This is the one meeting that many teams fail to have, and it's also the reason why so many organizations fail to make the jump from multiple disjoint, erratic Scrum islands towards a "Team of Teams" which is aligned on a common goal. 

The PI Planning's default is 2 days per 3 months, which is just about 3.5% of development time. If reduced to one day, that number shrinks below 2%. A bearable amount of meeting time to see the big picture, synchronize, align, sort out some misgivings and whatever else may be better done in person than via e-mail.
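
These percentages are simple arithmetic. A quick sketch, assuming roughly 60 working days per quarter (the exact number varies by country and holiday calendar):

```python
# Back-of-the-envelope check of the PI Planning overhead figures.
# Assumption: a 3-month PI has roughly 60 working days.
working_days_per_pi = 60

two_day_pip = 2 / working_days_per_pi  # 2-day event per PI
one_day_pip = 1 / working_days_per_pi  # 1-day event per PI

print(f"2-day PI Planning: {two_day_pip:.1%} of development time")
print(f"1-day PI Planning: {one_day_pip:.1%} of development time")
```

With 60 working days, a 2-day event costs about 3.3% and a 1-day event about 1.7% - consistent with the figures above.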

System Demo

How SAFe's System Demo is conducted is very flexible. It can be highly interactive, it can be an upfront show by Product Management or it can be each team having a "science fair" showing their work. What's not optional, however, is to produce an integrated system increment. The Demo merely reveals whether the produced increment is indeed both usable and valuable.

"The facts are friendly, every bit of evidence one can acquire, in an area, leads one much closer to what is truth" - Carl Rogers
The System Demo generates these friendly facts, and there's nothing worse than either not seeing the outcomes within their big picture or holding fast to the fixed belief that somehow, things magically will work out.

A System Demo can take anywhere from half an hour to a few hours (with preference given to the shorter timespan) and the more often Demos happen and the more people are involved, the higher the level of transparency. Demos are never a waste - if they are used as a learning and feedback opportunity.

Systemic Inspect+Adapt

This event is where the Release Train Engineer, management and Scrum Masters try to find solutions to systemic problems encountered in the PI. 
To not go into too much detail: if you're not getting at least a tenfold Return on Investment between time spent and problems solved, you're doing it wrong.

The Systemic I+A is another element oftentimes missing in Scrum implementations - a group of people having a dedicated slot in the calendar where impediments outside an individual team's sphere of control are addressed and resolved.

While the meeting itself may be invisible to developers, they are cordially invited to contribute known impediments - and to provide feedback on the effectiveness of solutions.

Scrum of Scrums

In frequent intervals, teams should synchronize around the questions, "Are we on track?", "Are we impeded?" and "Do we need help from someone?" - this meeting can be conducted by anyone, although the Scrum Masters should possess this information, so what's better than letting them align?

A SoS for a dozen teams can be over within 5 seconds if everyone says "We're doing great", and it can be done in a minute if the outcome is "We need to get together" or "We're challenged, but working on it with <others>". At least the communication should happen so that others know if they can chime in.

Once you have a communication channel where people can deal with issues bigger than one team, SAFe's minimum requirement is met.

SoS, like a Daily Scrum, can be over in a minute and should be over within 15 minutes. It shouldn't affect developers' schedule at all.

PO Sync

As soon as you have multiple Product Owners, they need to align frequently. Synchronization on a product level should happen at least once per iteration. That sync can happen before, as part of, or after the Review (which in SAFe terminology is called the Demo).

If your Product Ownership is aligned around a common goal, you meet the need of the PO Sync.

This becomes redundant in a Single-Person construct as people tend to be aligned with themselves (and if not, you have other issues that SAFe can't address, anyways).

The PO Sync takes as much time as it takes to keep the product aligned. This can be anywhere from a few minutes to a couple of hours. It should happen unbeknownst to anyone who is not part of Product Ownership or Product Management.

Minimal Plan

"But Michael, SAFe is still Big Upfront Design with quarterly releases", may be the biggest complaint. I claim: "Not if you're doing it right!"
What SAFe mandates in PI Planning is sharing a midterm vision, taking a good look at upcoming features - defining and aligning common goals. There's a massive leeway in what this means. 

Argue as much as you want, developers want to know whether they'll be developing websites or washing machines 2 months down the line. And business also needs to know whether they're investing into software or tomatoes.
We're not talking whether we will color that button red or green eight weeks down the line. That would be a complete misunderstanding of SAFe's PI Planning.

Business Context

What's the Big Goal, the headline for the next three months? Where do we want to be?
"Go Live", "Hit 1 Million Customers", "Serverless", "Seamless Integration" - that's the Program goal for the next quarter, the "Business Context". 
Additional fodder might be market reception or user feedback, but that's just the details.

If you need font size 6 to describe the goal on a single Slide, you're definitely over-engineering.

Features

Features are not containers of stories. They are hypotheses of business value, which we either obtain or not. If we don't know how we want to generate value - how do we know that we do? SAFe isn't about nerdgasm, it's about sustaining and growing a business!

How many features should we have? Well - a minimum of one. Otherwise, that would mean, "We have no idea how to create value within the next three months!". If we have 50 features, we're also doing something wrong, because that translates to, "We don't know what's important."
A set of 5-10 features may already be more than plenty for a couple months. 

How upfront do features need to be? Given that a value hypothesis can become invalid or be replaced with a different hypothesis at any time, there's definitely an argument that overplanning can lead to waste. 
Usually, feature waste is caused by poorly defined or overly specific features.
If your feature is, "ACME Corp. platform integration - transaction volume $3m per month", then there is sufficient leeway in the details to not have to replan when something doesn't work as intended or the value is obtained in a different way than we initially assumed.

Stories

Why do we plan stories in advance? To get a common understanding of what we're up to, which problem(s) we're solving and whether that's even feasible within the time we have.

Many people don't realize: SAFe doesn't prescribe defining all user stories in PI-Planning to fill 6 iterations. Instead, SAFe operates with a fuzzy planning horizon: We plan 80% of our capacity for Iteration 1, 60% for Iteration 2, 40% for Iteration 3, 20% for Iteration 4 as "Commitment", that is, with work which has high predictability, necessity and value. In terms of fully planned Iterations, that's just 2 full Iterations - the same planning horizon Scrum would encourage, except taking into account that uncertainty increases over time and it may be a while until we can replan face to face.
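
The fuzzy planning horizon is plain arithmetic - the committed percentages add up to exactly two full iterations. A minimal sketch, using the percentages stated above:

```python
# The fuzzy planning horizon as arithmetic: committed capacity per
# iteration is 80%, 60%, 40%, 20% - everything beyond that is Stretch.
committed = [0.8, 0.6, 0.4, 0.2]

full_iterations_committed = sum(committed)

print(f"Committed work: {full_iterations_committed:.1f} full iterations")
```

So a 4-iteration commitment ramp equals just two fully planned iterations' worth of work, with uncertainty explicitly baked into the later iterations.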

The remaining time in a PI is "Stretch", that is, we create an understanding based on today's information, what is the most plausible thing to do. Depending on how flexible demand in your organization is, Stretch can contain anything from "We'll keep it open for stuff we don't know about today" over "We would do these things if nothing more important shows up" all the way to "We have lower priority things that still need to be done eventually, so let's plan them now - if it doesn't get done, we'll replan later."

Stories are specific user problems - not individual tasks. SAFe specifically discourages planning tasks upfront, as this may distract people from focusing on the value at hand.

Objectives

The big mistake many teams make is to use Business Objectives as mirrors of features or containers of pre-planned stories. Objectives are a strategic tool to help align both within the team and around the team. It's not even necessary to have a new objective for each iteration, but it's essential to have a goal. You start with a mid-term goal and break it down into achievable steps, ideally 1-2 iterations - but 3 can also be fine. And you shouldn't focus only on team-specific objectives; your objectives should be collaborative, otherwise you're doing it wrong.

Why do we need objectives? To see whether we got where we wanted to be, and to help others align with us without needing to drill into the minute details of each story.

Minimal SAFe Plan

Summing up the entire section on Planning - SAFe suggests that you should have:
  • A business goal to move towards in the next couple months.
  • A clear understanding of how you want to produce value.
  • A decent understanding of the upcoming user problems you want to solve that fill 2 Iterations.
  • Aligned objectives that enable frequent success validation.
  • Enough flexibility to allow that at least half of your plan can go wrong or change.
If you have all these elements, you have a SAFe plan.

Being Minimally SAFe

You are Minimally SAFe when:

All those points are reasonable - and SAFe provides means and mechanisms to attain them.

How much effort do you need to invest into those? That depends on where you're coming from. 
If you couldn't achieve these things in the past - it might be a long journey.
If you've set the right levers already - very little.

The cost of Minimal SAFe

If you're already agile on a team level, SAFe might just require 2-3% developer capacity and roughly 5% from Product Owners and Scrum Masters on top of what each single team would invest.

Now, compare this to the cost that would be incurred if any of the above four items are not met.

Saturday, April 13, 2019

Design Choices - is it really BDUF vs. Agile?

Do we really end up with Chaos or randomness if we don't design upfront?
The question alone should be sufficient to warrant that the answer isn't so straightforward. Indeed, it would be a false dichotomy - so let's explore a few additional options, to create choice.

Let's tackle the model from left to right:

Upfront Design

The archetypal nemesis of agilists, "Upfront Design" (UD), is the idea that both the product and its creation process can be fixed before the product is created. This approach is only useful when the product is well known and there is something called a "Best Practice" for the creation process.
In terms of the Cynefin framework, UD is good for simple products. A cup, a pen - anything where it's easy to determine a cost effective, high quality production approach. This is very useful when the creation process is executed billions of times with little to no variation for an indefinite time.

Normally, Upfront Design isn't a one time design effort. There are extensive prototyping runs to get the product right and afterwards, to optimize the production process before the real production process starts, as later adjustments are expensive and may yield enormous scrap and rework. All of this upfront effort is expected to pay off once the first couple million units have been sold.
The problems of this approach in software development are closely related to ongoing change: There is no "final state" for a software product except "decommissioned".
Your users might just discover better options for meeting their need in certain areas of the product - would you want to force them to make do with a suboptimal solution, just because you've finished your design?

Likewise, a new framework could be released that might cut development effort in half, providing better usability and higher value at better quality: Wouldn't you want to use it? UD would imply that you'd scrap your past efforts in the light of new information and return to the drawing board.

Industrial Design

In many aspects, Industrial Design (ID) is similar to Upfront Design - both product and process design are defined before the creation actually starts. There is one major difference, though: the design of the product itself is decoupled from the design of the product creation process. This creates leeway in product creation based on the needs of the producer.

ID accounts for specifying the key attributes of the product upfront and leaving the creation process open until a later stage. The approach acknowledges that the point in time when the "What and Why" is determined may be too early to decide on "How". In implementation, ID accounts for "many feasible practices" from which appropriate "best practices" are chosen by the process designer for optimizing meta attributes such as production time, cost or quality.

Glancing at the Cynefin framework, ID is great for complicated, compound products assembled from multiple components in extensive process chains. Take, for example, a cell phone, a car, or even something as simple as a chair - all commodities that get mass-produced, although with a limited lifetime and in limited quantity.

The challenges of ID in software development are similar to those of UD: there is no "final state" for software - and by the time it gets into users' hands, requirements might already have changed.

While for most industrially created products, the economic rule "Supply creates its own demand" applies and marketing and sales efforts can be used to make the available product attractive to an audience - Enterprise Software Design is about creating a product for one specific audience that doesn't like to be told what they're supposed to need!

Most corporate software development still happens in a similar way, although in many cases, the creation process has been defined and standardized irrespective of the product, with the assumption that all software is the same from a development perspective: This might be akin to assuming that a gummy bear follows the same creation process as a car or a chair!
Software development doesn't produce the same software a million times - we create it once and use it. And the next product is different. If indeed your company pays developers to develop the same product time and again, I have a bridge to sell you ...

Emergent Design

Emergent Design (ED) is buzzword terrain, as different people have a different understanding of what it means. While advocates of UD/ID might call ED "No Design", we likewise see people who indeed have no design call their lack of a structured approach ED. This muddies the waters and makes it complicated to understand what ED is really about.

From a Cynefin framework perspective, ED is a way of dealing with the chaotic domain in product development - when neither the best solution nor the implementation method is known or constant! ED is what typically happens during prototyping, when various options are on the table and, by creating something tangible, appropriate design patterns start to emerge.

ED requires high discipline, as systematic application of different design approaches helps reduce both time and effort in establishing useful designs. Yet the designers need to fully understand that, at any time, following a pattern may lead to a dead end and the direction needs to be changed. ED therefore requires high skill in design, quick forward thinking, as well as massive flexibility in backtracking and changing direction.

ED accounts for constantly changing, evolving and emerging customer needs, a near infinite solution space and the inability of any person to know the next step before it has been taken.

Modern software development infrastructure is a showcase for ED: It takes minutes rather than days to change a design pattern, add, modify or remove behaviours and get the product into users' hands. The cost of change on a product level with good engineering practice is so low that it's hardly worth thinking upfront whether a choice is right or wrong - a design choice can be reverted cheaper than its theoretical validation would be.

The downside of ED is that neglect in both process and product design can slow down, damage and potentially outright destroy a product. It is essential to have a good understanding of design patterns and be firm in software craftsmanship practices to take advantage of Emergent Design.

Integrated Design

With Integrated Design (IntD), the focus shifts away from when we design to how we design. While UD and ID assume that there are two separate types of design, namely product and process - ED assumes that product design patterns need to emerge through the application of process design patterns.
IntD takes an entirely different route, focusing on far more than specification - it doesn't just take into account meeting requirements and having a defined creation process.

Peeking at the Cynefin framework, Integrated Design is the most complex approach to design, as it combines several domains into a single purpose: an overall excellent product.

IntD interweaves disciplines - given Cloud Infrastructure, development techniques such as Canary Builds are much easier to realize, which in turn allows the delivery of experimental features to validate a hypothesis with minimal development effort, which in turn requires production monitoring to be an integral part of the design process as well. The boundaries between product design, development, infrastructure and operation blur and disappear.

Integrated Design requires a high level of control over the environment, be it a requirement modification, a software change, a process change, user behaviour change or a technological change.
To achieve this, practitioners of IntD must master product design techniques, software development practices, the underlying technology stack including administrative processes, monitoring and data science analytics. I call the people who have this skillset, "DevOps".

"Any sufficiently advanced technology is indistinguishable from magic" - Arthur C. Clarke
While master DevOps do all of these things with ease, people unfamiliar with any of the above techniques may not recognize that a rigorous approach is being applied and ask for a more structured, distinguishable approach with clear dividing lines between product design, process design and system operation.
Rather than accepting the near-magical results produced by such a team, they might go on a witch hunt and burn the heretics at the stake, destroying in the process the product built in this fashion.

And then, finally, we still have the default option, oftentimes used as the strawman in advocating each of the other design approaches:

No Design

While seemingly the easiest thing to do, it's myopic and fatal to have No Design. No reputable engineer would ever resort to applying No Design, even if they know their design might be defective, expensive or outright wrong.

The reasons why No Design isn't an option go a long way, including - but not limited to:

  • You don't know whether you achieved quality
  • You're continuously wasting time and money
  • You can't explain why you did what you did

The problem with No Design is that any design approach quickly deteriorates into No Design:

  • A UD software product tends to look like it had No Design after a couple of years, when people constantly add new features that weren't part of the initial design.
  • When ID products change too much, their initial design becomes unrecognizable. The initial design principles look out of place and lacking purpose if they aren't reworked over time.
  • Applying ED without rigour and just hoping that things work out is, as mentioned previously, indeed, No Design.
  • While IntD is the furthest away from No Design, letting (sorry for the term) buffoons tamper with an IntD system will quickly render the intricate mesh of controls ineffective, to the point where an appearance of No Design indeed becomes No Design.


I have given you options for different types of design. The statement, "If we don't do it like this, it's No Design" is a false dichotomy.
This article is indeed biased in suggesting that "Integrated Design" is the best option for software development. This isn't a universal truth. An organization must have both the skillset and the discipline to apply ED/IntD techniques, otherwise they will not operate sustainably.

  • No Design is the default that requires no thinking - but it creates a mess.
  • Full-blown Big Design Upfront is a strawman. It rarely happens. Almost all software has versions and releases.
  • Industrial Design is the common approach. It nails down the important parts and gives leeway in the minute details. 
  • Emergent Software Design requires experience and practice to provide expected benefits. When applied haphazardly, it might destroy the product.
  • Integrated Software Design is a very advanced approach that requires high levels of expertise and discipline.

Monday, April 8, 2019

What happens when people are part of multiple SAFe ARTs

SAFe does suggest that people should be full-time members of their ART. If you think that this suggestion is optional, here is the hard reality:

At the end of the first Program Increment, people felt a bit disillusioned with the volume of outcome generated by the ART.  An ART is a rather large amount of people, so the hope was that there would be a rather large amount of "Done Product". This wasn't the case.
Some developers were complaining that they couldn't get anything done because they were "spending too much time in meetings".

Hence, I investigated.

ART delivery survey

We asked ART members in a quick survey to categorize their working day into four categories:
  • Time unavailable for the ART (i.e. doing "legacy structured work" or "working on other ARTs")
  • Time spent in SAFe/Scrum meetings, such as Planning, Daily Scrum, Reviews, Retrospectives
  • Time spent on daily chores, such as Maintenance & Support etc.
  • Time spent actually working to deliver Features
This wasn't about effort monitoring or getting precise numbers; we just wanted an idea of what we were facing.

Here is what I got:

This anonymized data represents the situation of an ART launched by an organization which simply didn't have the choice to 100% commit all ART members before getting going.

ART delivery capability

There were some interesting findings:

  1. Capacity: The "real capacity" of the ART was significantly smaller than the "presumed capacity" due to part-time ART membership. Statistically, one in five participants of the PI Planning was an uninvolved spectator.
  2. Meetings: The amount of time spent in SAFe meetings was quite proportionate at 15%. This, however, became an issue when a 10%-committed member of the ART spent all of their available time in meetings.
  3. Chores: In a DevOps production environment with legacy processes, there's a significant capacity drain caused by daily business. Nobody had this on the radar until we surveyed it.
  4. Features: The total delivery capability was not much higher than a few dedicated development teams - everyone else was busy, but not with getting new features out of the door! The results delivered by the teams were the logical conclusion of the setup.
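
The capacity effect behind these findings can be sketched as a rough model. All availability figures and overhead shares below are invented for illustration - they are not the actual survey data:

```python
# Hypothetical model of the capacity drain described above: part-time ART
# membership plus meeting/chore overhead leaves little for feature work.
# Overhead shares are assumed fractions of a person's *total* working time,
# paid before any feature delivery - a deliberate simplification.

def feature_capacity(availability, meeting_share=0.15, chore_share=0.25):
    """Fraction of a person's total time left for feature delivery.

    A 10%-available member (availability=0.1) ends up at zero: overhead
    alone exceeds their available time, matching the 'Meetings' finding."""
    return max(0.0, availability - meeting_share - chore_share)

# Invented availability per ART member: two full-time, two half-time,
# one quarter-time, one at 10%.
members = [1.0, 1.0, 0.5, 0.5, 0.25, 0.1]

total = sum(feature_capacity(a) for a in members)
print(f"Feature capacity: {total:.2f} FTE out of {len(members)} heads")
```

In this invented setup, six heads yield only about 1.4 full-time equivalents of feature delivery - a room full of developers performing like a fraction of a dedicated team.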

ART member overburden

When we took a closer look at the feature delivery time of individuals, here is what we found:

Yes, this was a shock! A few unfortunate individuals had to consistently do overtime just to keep up with the ART - and that wasn't even about getting feature related work done.

So to say, some people got steamrolled below the (Agile Release) Train. I guess you could sympathize with these people when they say they'd rather not work in the ART - yet they enjoyed the working mode. They just felt the workload was crushing!

Also, the average ART member was barely spending a quarter of their time on feature delivery, so even a room full of developers wasn't getting much more done than a single powerful delivery team.

On a positive note, dedicated full-time ART members were able to spend nearly 80% of their time on productive, quality feature delivery, so we had proof that the root cause of the problem was neither SAFe nor the ART, but people's part-time availability.

Size matters

As a countermeasure, individuals had to choose either to be 100% on the ART or to move 100% into another ART. Even though some people had their technical expertise at home in multiple ARTs, the mixed construct just didn't work out in their favour.

It's better both for the organization and the individuals to move towards 100% ART membership and be in one stable, dedicated team. 

This has implications on the size of the ART.
An ART with 100 members, of whom a great portion is less than 100% committed, will be bigger than a dedicated ART would need to be. This additional size requires additional communication and coordination. All of this increases effort, risk and cost while reducing productivity.

Not to mention finding a sufficiently big PI Planning room and overnight accommodation becomes increasingly challenging with increasing ART size.


From an economic point of view, I hence coined the phrase: "Better 1 dedicated than 3 uncommitted ART members".

And I feel that the human aspect conveys the same message: better to focus on one thing you're good at than to constantly jump between construction sites.

The message?
It's important for leadership to commit to a Priority 1 for their people, because the cost of indecision may be the entire productivity those people have to offer!

Friday, April 5, 2019

Trust me, you don't want to hire a coach!

Before, I said you shouldn't hire a consultant (did I?) - and now I say that you don't want to hire a coach, either (will I?) - and there's a good reason for this as well.

Exhibit 1: A typical "Agile Coach" job offer

Just your every day job ad for "Looking for an Agile Coach"

I googled for "Agile Coach" and this was one of the first hits that came up. I omitted a few things and reduced sentence length to make it slightly more legible. Besides that, this kind of ad is typical and representative of the job market.
Note for employers: If you're looking for an agile coach and copy+paste the image, you will get anything except a capable agile coach.
Note for job seekers: If you don't see why the statements in the ad are problematic, you may well apply - just don't call yourself "agile coach".
Okay, let's get to the core of the matter.

This is not a coach

A typical "agile coach" job offer posted by a non-agile organization requires skills like:
  • Project Management
  • Requirement Management
  • Process Management
  • Performance Management
  • Meeting Management
  • Problem Management
In this case, I daresay: "Why not call it Agile Manager if it's all about management?"

Some of the big misunderstandings are:

Agile Management

Agile teams are mostly self-organized. Agile managers keep out of the team's "Who", "What" and "When", so the classic "Who does what until when" management question is answered by the agile team members themselves! Likewise, "How" and "With What" are determined by the team as well.

Regarding Project, Process or Requirements Management activities - as far as they are still necessary, they are located within the team: teams take care of their own work. In Scrum terminology, they "have all the necessary skills to produce a Done Increment of work". And that includes managing their own work.

Hiring an agile team member with project management expertise is one thing, hiring an agile coach to manage the team is a completely different matter - you don't need an agile coach to manage your agile team!

Coaching or Management?

When strong interaction with management is required to cause a mindset shift and to set the right levers for change, hiring a coach with a decent understanding of - and prior experience in - management may indeed be necessary. Expecting an agile coach to apply these management techniques to a team, however, implies that:

1. The Coach will have to apply non-agile behaviour.
2. The Coach will have to leave the coaching hat at the entrance.
3. The Coach will have to accept status quo rather than drive change.

If you're looking for such a person, that's legitimate. Just remove the labels "Agile" and "Coach" from the job offer - an agile coach who did as expected would no longer be effective as an agile coach.

Subject Matter Expertise

In recent months, I have observed a strong negative correlation between depth of subject matter expertise and effectiveness of coaching. Why? Because when push comes to shove, the subject matter expert will dive in, actively contribute to problem resolution - and completely lose sight of the big picture, becoming incapable of reflecting on and discovering the systemic issues which led to a coach getting involved in subject matters in the first place.

As a specific example, I have seen a number of organizations that assigned their most knowledgeable expert as coach. The consequence was that this type of coach was busy every single day communicating subject matters (effectively disempowering the Product Owner) and finding solutions on a product level (effectively crippling team learning and self-organization). At the same time, they had no time left to coach others, which made their role a misnomer.

I seriously question the intent behind looking for a coach with a high depth of subject matter expertise - if a high level of expertise is required to sort out matters, hire an expert instead!

Certification Mania

There are many types of certifications. In this context, I want to talk specifically about three categories:

1. Degree Mill certificates

Given the money required to obtain them, a plush doll might hold the title. What exactly do you expect to get when you look for a coach who has one of these?

2. Unrelated certificates

There are a ton of certificates which prove absolutely nothing about either "Agile" or "Coaching". What do you expect to get when you hire a pilot with a cooking certification?

3. Overcertification

I personally get the jiggles when I see a company explicitly looking for a CST or SPCT to coach a single team, especially when offering a junior developer's paycheck. If this is you, I would seriously ask you to do a reality check.

While certain certificates fall into none of these categories (let's call them, "relevant certification"), the value of a certificate is not in having the certification - it is in the learning journey which led one to obtain the certificate. Some people underwent an even deeper learning journey without obtaining a certificate, and it is nothing more than a remnant of fixed mindset culture to ask people to show certificates which add absolutely no value to their coaching qualification.

In short, almost all certification requirements prove that you don't even know what an agile coach is!

Do you really want a coach?

Many companies think they are looking for an agile coach, but they aren't - because they don't know what an agile coach is or does, and they don't even want that!
Neither will agile coaching be welcome there, nor will the organization get an advantage from agile coaching.

Spend a little bit of time to get educated on coaching before creating a job posting to fill such a key role in an agile transition.

Here are some great book tips to get you started on coaching:
  • Marshall Goldsmith: "What got you here won't get you there"
  • Watts & Morgan: "The Coach's Handbook"
  • Kimsey-House (x2), Sandahl & Whitworth: "Co-Active Coaching"

If you can't be bothered to at least skim these books, don't hire a coach - it just won't fit!

Sunday, March 31, 2019

When not to scale Scrum

Typically, after a company has decided that Scrum may be the way forward in their IT organization, the next question I hear is: "But how do we scale it?"

As mentioned in a previous post, there are reasons why you shouldn't even try to scale. Aside from that, you may simply not be ready to scale.

Before you attempt to scale your Scrum, maybe you want to go through this non-exhaustive checklist. It only exists to give you an idea of what to improve on before plunging into scaling.
Maybe you want to answer these on a scale of "Definitely not (1), Unlikely (2), Somehow (3), Usually (4), Definitely yes (5)" - then decide areas for improvement based on feasibility and value.

So, here is your checklist:

1 - Delivery

  • We are able to deploy the last merge to Master to a productive system within less than 15 minutes.
  • We can release partial work at any time, because we use Feature Toggles to disable unfinished features.
  • There is no more work to do when we call a story "Done".
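The feature-toggle item deserves a concrete illustration. Below is a minimal sketch; the flag names and the `is_enabled` helper are made up for the example, not taken from any particular toggle library.

```python
# A minimal feature-toggle sketch: unfinished work ships dark, finished work is visible.
# The flag registry and its entries are illustrative only.

FEATURE_FLAGS = {
    "new_checkout": False,  # unfinished: merged to master, but hidden from users
    "dark_mode": True,      # finished: visible to everyone
}

def is_enabled(flag: str) -> bool:
    """Unknown flags default to 'off', so a forgotten flag never exposes work early."""
    return FEATURE_FLAGS.get(flag, False)

def render_checkout() -> str:
    # Both code paths are always deployed; the toggle decides what users see.
    if is_enabled("new_checkout"):
        return "new checkout flow"
    return "old checkout flow"
```

Because the unfinished path is merged but switched off, every merge to master stays releasable - which is exactly what the checklist item above demands.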

2 - Teams

  • We have stable teams that have a routine of working together.
  • Our teams autonomously get stories "Done". They do not depend on anything from anyone outside the team. This includes activities (such as testing or deployment) as well as sign off.
  • Our teams have what they need to succeed. They do not require permission to use applicable tools to get their job done.

3 - Scrum Masters

  • Scrum discipline within the team works without the Scrum Masters having to say or do anything.
  • Scrum Masters act as mentors and supporters, not as managers.
  • Scrum Masters are seasoned experts in Scrum. They have seen Scrum in multiple contexts and are aware of the difference between avoiding pain and healing the pain. They avoid going the easy, wrong way.

4 - Product Owners

  • The "someone who calls the shots" is the Product Owner. Not other stakeholders.
  • The PO is not a business analyst detailing features until they become implementable - this can safely be left in the domain of the team.
  • Product decisions are based on Net Present Value, Opportunity Cost, Customer Retention, Market Share etc. - as opposed to politics.

5 - Outside Organization

  • Company priorities and objectives are aligned across all areas, including IT as well as marketing, sales and finance. Conflicts of interest are resolved at top management level.
  • External stakeholders fully understand their responsibility in enabling successful delivery.
  • Managers enable the team and do not require non-value-adding activity, such as reports.

6 - Continuous Improvement

  • We know our top 3 most pressing impediments.
  • The most pressing impediment is actively being resolved.
  • Minor changes which help increase productivity are readily implemented without further mention.

7 - Technical Debt

  • It is widely accepted that technical debt must be controlled.
  • We measure and are aware of our technical debt.
  • We don't release any code with significant technical debt.
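If you run the 1-5 self-assessment suggested above, a few lines of code are enough to turn the answers into a prioritized improvement list. The area names match the checklist; the scores and the threshold are hypothetical example data.

```python
# Turn checklist answers (1 = Definitely not ... 5 = Definitely yes) into
# a worst-first list of areas to improve. The scores below are made-up examples.

checklist_scores = {
    "Delivery":               [2, 3, 4],
    "Teams":                  [4, 4, 5],
    "Scrum Masters":          [3, 2, 3],
    "Product Owners":         [2, 2, 1],
    "Outside Organization":   [1, 2, 2],
    "Continuous Improvement": [3, 4, 3],
    "Technical Debt":         [2, 1, 2],
}

def improvement_areas(scores: dict, threshold: float = 3.0) -> list:
    """Return areas whose average answer falls below the threshold, worst first."""
    averages = {area: sum(vals) / len(vals) for area, vals in scores.items()}
    weak = [(avg, area) for area, avg in averages.items() if avg < threshold]
    return [area for _, area in sorted(weak)]

print(improvement_areas(checklist_scores))
```

Which weak area to tackle first remains a judgment call on feasibility and value, as suggested above - the script only surfaces the weakest spots.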


The model of Shu-Ha-Ri applies to scaling more than anything else.

An organization in the SHU stage of Scrum multiplies their problems, not their solutions, at scale. When you can't frequently and reliably deliver value with one team, you won't be doing that with five or more teams, either!
Successful scaling requires coaching by RI stage Scrum experts and an organization that is at least on the HA level.

As long as you don't have a working implementation of Scrum within individual teams, don't even think about scaling!

Tuesday, March 26, 2019

Your Sprint Reviews suck, and that's why!

Do you have the impression that your Sprint Reviews suck, that they are a waste of time? 
Are you missing real users? Are attendees squirming for the meeting to end as soon as possible - or have they found something more valuable to do with their time, for example counting the holes in the ceiling?
Or is your Review already an event which nobody except team members attends, because everyone else has decided to vote with their feet?

I've sat through many a Review, and found some common patterns that have a high tendency to lead to terrible Sprint Reviews.
In this article, I list some of the most egregious sins and how to avoid them. 

Sin #1 - Activity Reporting

Imagine you're going to a Genius Store, looking for the newest features of the latest iPhone - and the salesperson starts by saying, "We set up a new chip factory, but it was a lot of trouble to buy the land, to obtain a building permit - and you know all those new building regulations, we had to hire a whole department of lawyers just to go through the paperwork, and then there was that issue with ..." - for heaven's sake, you'd run out of that store screaming, never to return!

And yet, that's what teams consider a "normal" thing to do during Sprint Review. Why? Let me formulate the intention of the Sprint Review as a User Story, using the Connextra Template:

As a user of your product, I want to know about the latest changes so that I can decide whether I like them.
That's all a user wants to know. If this is not the #1 top priority of your Review, you have nothing to say and are wasting everyone's time.

The fix? Think about what a Review for that fancy new (insert product) would have to look like for you to want to attend. Not the behind-the-scenes tour - the product review session that you would pay to get a ticket for! Like BlizzCon, for example! And then move your Reviews in that direction.

Quick fix: Forget about YOU. Talk about THEM (the customer!) and their needs in terms of the Product!

Sin #2 - Ticket Management

When the team pulls up their ticket management tool first thing upon starting the Review, I already know it's going to be a waste of everyone's time. Regardless of whether that tool is Jira, Excel, Trello or whatever - I know that this team can't think from a user's perspective.

Seriously, would you come to this blog if you had to first click through a summary of the work I did before you could find out about the content I produced? Probably not. And that's how users feel when you confront them with the work you did.

Rather than describing what work you did, you should start with something called a "value proposal". Let the users know very briefly why they're here, what they're going to invest time into and which benefit they're going to get out of it. If this hasn't been achieved within the first minute, you might as well stop and cancel right there.

Quick fix: Forbid the use of ticket information during the Review.

Sin #3 - Powerpoint

Yet another great indicator of a Review that is going to suck is that the center of attention is a slide deck, not the product itself. When I see slide decks and no usable software, I immediately make a mental note: "The team doesn't know what is useful." While in some rare cases Powerpoint has a beneficial effect, the usual text-filled slides and bullet-point lists describing details are a great method of getting people to divert their attention elsewhere.

The center and focus of attention of the Review is the Product and the User. Everything else is secondary or irrelevant. If you can't show it in the Product, you better have a damn good explanation why it's relevant for the user!

Quick fix: Forbid the use of Presentation software during the Review.

Sin #4 - Disconnect

When users can't see why they benefit from the items the team has completed during the Sprint, there is a deep-rooted problem somewhere in how the team works. This can't be fixed in the Review, and the Review is neither the place for the team to earn sympathy points from the users, nor is it the place to brag about how much work you achieved.

The Review is a synchronization point between team and users, where users learn what's new in the Product and where the team learns how users like the new features. If this synchronization is broken, the Review is pointless.
When the Review becomes a show where the team demonstrates themselves, their work or their outcomes without actively listening to the Voice of the Customer, you might as well scrap the Review.

Quick fix: Make user statements in the Review lead to real change in process and product.

Sin #5 - No Action

At a minimum, I, as a user, attending a Review, expect to see Real Action with the product. I may not expect something fancy, but I expect to see how the Product works. Preferably, I expect to fiddle with it myself. When I don't see the Product in Action, I haven't seen anything. And I have wasted my time.
I can form my opinion best when I get hold of the tablet/mouse/keyboard connected to the Real Application, but when this isn't possible, I at least want to see what the product looks like. In descending order of catching my interest as a user are:
  • #1 - I get to use the product
  • #2 - I see you using the product
  • #3 - I see a video of someone using the product
  • #4 - I see pictures of the product
  • #999999998  - I hear you talking about what the product can allegedly do.
  • #999999999  - I don't even get that.

Quick fix: Hand the product to the user and see what happens.

Sin #6 - Filling the timebox

Another great way to waste people's time is feeling pressured to fill the Review's timebox. If you only have a 5-minute topic, the Review can be over in five minutes. Keep it short and simple. Get the information out that needs to go to the users, and get the feedback in that you need.
Your users' time is precious, as is yours. Don't extend the Review meeting. If you feel that you have too little to show, that's a great topic for a Retrospective, but no reason to bloat up your Review.

Quick fix: Focus on essentials, keep the Review as short as possible.

Setting up a good Review

So then, what is a good Review?
First, you start long before the Review. You start with the Sprint Planning. What's the team's compelling Sprint Goal? Why is it important? Which user problem are you setting out to solve? What are the individual pieces of the puzzle contributing to solving the user problem (e.g., "user stories")?
How is your solution valuable for the user? What is the users' newfound benefit?
If you haven't thought about this right from the beginning, you won't fix it in the Review.

Tell a compelling Story:

  1. What is the user problem you tackled?
  2. What is the pain the user has had?
  3. What is the solution you provided?
  4. How does the user benefit from your solution?

And then - let the user do the talking!