Thursday, November 17, 2016

The biggest question when Scaling Agile

Pretty much every large company states, "We need an agile scaling framework."
I do agree that when 50+ developers need to collaborate, a scaling framework provides massive benefits. Yet one question is left unanswered. One unspoken, unchallenged assumption looms like a specter over every scaling approach. Before asking this question, I will list the reasons why it needs to be answered.

Are you asking the right question?

Creating complex products

A complex system is, by definition, hard to grasp as a whole. A common assumption is that Divide and Conquer (D+C) is a good way to approach complex problems: split one big problem into many smaller problems, distribute these, and bring the solutions back together. Sounds promising.
A scaling framework can then be used to maximize the effectiveness of the D+C approach.

Question 1: How do you define the problem?

When you're starting the development of a large product, you typically have a need - and want some product to meet that need. Due to the very nature of product development, the solution doesn't exist yet. Have you thought about these questions:
Is your problem even properly defined yet? Is it sufficiently clear where the real effort will lie? Do you know enough about the problem domain to divide and conquer it?

Question 2: What is really the problem?

When you're setting up a product development organization, you simply assume that the need is clearly defined, and that the rest is "just work".
What happens if the question asked during planning wasn't the one you should have asked? What happens when the plan turns sour and you're giving the wrong answer to the right question? Does it help when more people work on the wrong things?

Question 3: Where is the problem really?

In scaling agile development, you assume that the main problem is that "there is too much work to do" for a limited number of people. As I wrote elsewhere, the problem with lots of work is that it causes lots of work. By that, I mean non-value-added work like coordination, task switching, meetings, etc.
Have you considered the proportion of value-added work focused on the product versus the amount of non-value-added work focused on your own process? Are you aware of how much time your developers spend coordinating because of your organizational structure? Is your organization product-focused or process-focused? What happens to coordination overhead when you add complexity to your processes? What happens to development time?

Question 4: Is the problem really that big?

You simply assume that you need a lot of people to create the product you need, then you wonder what is the best way to organize these people. Have you considered that Google was basically developed by two people in the first few years? And Facebook by one?
Is your product really so great that it blows Google and Facebook out of the water? Why aren't you making trillions yet? Are your understanding and approach optimal?

Question 5: Does much really help much?

The book "The Mythical Man-Month", written a good 40 years ago by Frederick P. Brooks, long since discredited the idea that "throwing additional people at a product speeds up delivery proportionally". To this day, corporations treat "scaling development" as if it were a solution to the very problem Brooks described. As mentioned before, great products were built by a handful of people - and more people just add work without value.
Could fewer people doing less work deliver a better solution? Would it really be slower if fewer people participated?


After asking so many questions around the key issue, here is the real question:

Do you really need "scaling up"?


  • Do developers have the 100% best possible knowledge to find a solution? 
  • Are they using the 100% best possible technology to solve the problem? 
  • Is everyone 100% focused on doing the most valuable thing? 
  • Is every work item done 100% value-added?
  • Is the work done 100% effective?

Remember, when adding additional people, the percentage does not go up. It typically goes down.
If you multiply the real percentages you have, the result is often abysmally low. Maybe 10 people work at 20% potential - so 3 people working at 80% potential might make more progress than those ten!
If you have 100 people working at 5% potential, then a single team might be more effective than all of them!
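A quick sketch of this arithmetic. The head counts and potential percentages are the illustrative figures from the text above, not measurements, and the linear model is of course a deliberate simplification:

```python
# Crude capacity model: effective capacity = head count x realized potential.
# All numbers are illustrative assumptions, not measurements.

def effective_capacity(people: int, potential: float) -> float:
    """Each person contributes `potential` (0..1) of one full-time unit of value."""
    return people * potential

big_team = effective_capacity(10, 0.20)     # 10 people at 20% potential
small_team = effective_capacity(3, 0.80)    # 3 people at 80% potential
huge_group = effective_capacity(100, 0.05)  # 100 people at 5% potential

print(round(big_team, 2), round(small_team, 2), round(huge_group, 2))
# The 3-person team (2.4) out-delivers the 10-person team (2.0),
# and 100 people at 5% deliver about as much as one well-focused team.
```

Under this toy model, adding people only pays off if the realized potential does not drop too much - which, as argued above, it typically does.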

Have you exhausted all possible means to do the same thing with fewer people?

Only if the answer to all the questions in this section is "Yes" should the answer to "The Big Question" be "Yes". Until then, you will be scaling up your organizational problems - not your problem solving.

Wednesday, November 16, 2016

Clearing our own mental model

In the wise words of Marshall Goldsmith, "For everything you start, you need to stop something." When we embark on a new journey, we need to throw out some old ballast. One of the biggest burdens we carry around is our own mental models, which shape our perception of reality and therefore our thoughts, behaviours and actions. When was the last time you did some housecleaning on your own mental model?


How our mental model affects us

Everyone builds a mental model based on at least three assumptions:

  1. Reality exists
  2. We form a model of reality through interaction and observation
  3. Models with predictive capability are better than those without
From that starting point, we build everything we consider "real". Probably the most noteworthy assumption is #2: it implies that during every single second of consciousness, we are shaping our model of reality.
Our mental model of reality has assumed its current shape from the second we were born until today. Each aspect and angle of this model is based on observations, interaction and deduction.
Our choice of action is then determined by the outcome we predict based on our own model.


The problem with our mental model

"All models are wrong, some are more useful than others" - another quote I picked up somewhere.
We do not know reality. We can't. We can only form a "pretty likely model of reality" - and that model exists only in our mind! The shape of our mental model is determined by the interactions we had - and the observations we made. Since we are neither omniscient nor omnipotent, we didn't have some important interactions, haven't made some important observations - or have misinterpreted some of those we did make.
This means our mental model of reality usually suffers from three key drawbacks:

  1. Incompleteness
  2. Inconsistency
  3. Incongruence

Incompleteness means that there are events beyond our comprehension.
For example: I don't understand why there are black swans in Australia. I have never bothered to learn how this came to be, so I couldn't explain why swans can be white or black, but not green.

Inconsistency means that if we scrupulously examined everything we know, we would realize that multiple things we individually assume to be "true" can't all be "true" together.
For example: I consider Tim to be a nice person, and I am aware that Tim is not nice to Alice - so what is it then? Is Tim nice - or not?

Incongruence means that different people's models of reality may either fail to overlap (I know what you don't) or mismatch (I think this is true, you think it's false).
For example: UKIP supporters think it's good to leave the EU, while EU proponents think that's a terrible idea. Each party drew their conclusion based on a number of assumptions and facts that may be unknown to, weighted differently by, or dismissed by the other party.


Mental model housekeeping

To do some proper housekeeping, we need to be aware of the following:
1. Our mental models are just that - models.
2. We benefit from having a more accurate model.
3. Incongruent models can be aligned through open interaction with other people.

Now, let us discuss some methods for doing this housekeeping:

Aligning concepts

We carry many inconsistent concepts, just like the one above.
Once we become aware of where these inconsistencies lie, we can uncover the reason why we hold these concepts.
Next, we formulate a hypothesis for the conflict, then design an experiment to disprove at least one of the concepts.

It could be that we failed to disprove any of them - in which case, we probably haven't dug deeply enough and need a better hypothesis.
It could be that we managed to disprove all of them - in which case, we may need to forget everything leading us to either conclusion.

If we disproved all but one of them, the best way forward is to discard the ideas that no longer hold true. Especially in this case, it could be that even what we believe now is still wrong: We just don't know until we have more information.

How do I align concepts - in practice?
It's quite simple. When I discover that I have conflicting ideas, I mentally rephrase "Tim hates me." and "Tim is a friendly person" into "I assume Tim hates me". Then, I ask myself, "Why would Tim hate me?" - then I may go to Tim, and be quite upfront: "I feel we don't get along very well.". Tim might meet that with an unexpectedly friendly: "What can I do so you feel more comfortable?" - my first assumption is already invalidated. My model is more consistent now.

Pruning loose ends

We are bound by so many concepts that arise seemingly without reason. 
For example, Tim said something bad to me yesterday - and now I have the concept "Tim doesn't like me". My concept is not founded on a sufficient amount of evidence.
This concept now binds my interactions with Tim, even though it is merely a loose end in my mental model. The more loose ends I carry around, the less freedom I have in my interactions with my environment.

Through introspection, we can drill into the "why" of picking up such a loose end and tying it to our model. Often we find that, in doing so, we complicated our model by adding numerous assumptions without any foundational evidence.
We need to become aware of what our "loose ends" are - and consciously discard such concepts.
This helps us form a more consistent model of reality.

This approach is based on Occam's Razor, the suggestion that "the model relying on the fewest assumptions is often the best".


How do I prune loose ends - in practice?
Tim might actually have said to me "Dude, you messed that one up." I can now integrate that sentence into my model right away, filling the missing gaps with unspoken assumptions, one of which may be "Tim doesn't like me". I can also choose to simply say "Yup", and regardless of whether I agree with Tim or not, I simply don't attribute these words to my understanding of Tim's relationship with me.

In retrospect, I may need to be aware that "Tim hates me" and question myself, "How much evidence does support this concept?" - unless the evidence is already overwhelming, the easiest thing may be to simply go to Tim and say, "Want to have a chat?", seeing if that chat generates evidence to the contrary. 

Probably the hardest way of pruning loose ends is to drop the concept as it pops up. Since our concepts are hardwired in our brain, pruning like this becomes a difficult exercise in psychological intervention: becoming aware of the dubious concept, then redirecting our thoughts into a different direction whenever the concept manifests. This method does not resolve the underlying inconsistency and is therefore unhelpful.

Resolving dissonance

My concepts often don't match your concepts, because neither my experience nor my reasoning process is the same as yours.
The "easy way" to resolve dissonance is war - just get rid of the person who doesn't agree with you. Unfortunately, that doesn't mean your model of reality got any better.
When we strive to obtain the best possible model, we need to attune our model based on others' ideas and reasoning.

First, we need to expose ourselves to others' thoughts.
Then, we need to discover where our thoughts mismatch those of others.
Next, we try to uncover which assumptions lead to the mismatch.
Together, we can then form a hypothesis of which assumptions are more likely.
Then, we can begin aligning concepts together, coming up with a shared model that is more congruent.

Resolving dissonance requires two additional key assumptions:
1. It could be that my model is wrong.
2. I can find out enough about other models to integrate a portion into my own model.


How do I resolve dissonance - in practice?
Nothing is easier - and nothing harder - than this. Just talk. Unbiased.
Have an open conversation without predetermined outcome.

Punching holes

We typically assume that what we know and observe is true. Then, we build new assumptions on that. Very rarely do we spend time trying to disprove what we know.
The Scientific Method is based on the idea that we can't prove anything to be true, but we can prove something to be untrue. We accept something as "probably a good explanation" by exclusion, i.e. when every experiment to prove the opposite has failed. So, our goal should be to come up with an experiment to prove ourselves wrong.

We can improve our mental model by using this approach to try and punch holes into our model.
If we succeed - our model is bad and we can discard the assumptions we just invalidated.
If we don't succeed - it still doesn't mean our model is "right", it only means that it's the best we have for the time being.


How do I punch holes - in practice?
When my model assumes "Tim is unfriendly", the most effective way to punch holes is creating situations where I am exposed to Tim in settings which minimize the likelihood for him to be unfriendly.



Summary

Frequently clearing our mental model is very helpful in improving our understanding of the world around us - and our interactions with others.

The exercise of cleaning always requires the following:
1. Being consciously aware of our assumptions.
2. Doing something about it.
3. Never being content with our current understanding.

Simply starting is the best way.

Monday, November 14, 2016

Five Pitfalls when scaling Agile

Large corporations especially are looking for quick and easy ways to transition from classic development to agile development practices. The faster the transition is intended to be, the more likely dangerous pitfalls are overlooked. As agile development is intended as a sustainable practice rather than a way to get a project done, management carries a tremendous share of the responsibility for its success.



Here are five pitfalls that you will need to deal with when you desire to scale agility:

1. Fundamental agility

Transitioning the processes towards agile is very easy. Basically, you're deregulating and training - then people can "do Scrum" or another agile approach. In the big picture, you have accomplished maybe 1% - and 99% is still to do. There is still a long journey.
At the core of agile development is the Inspect+Adapt process. This requires a mindset of scrupulously examining whatever is happening - and making changes when something is going in the wrong direction. To get proper Inspect+Adapt into your organization, you need to do two things:
First: detoxify the current way of working. Remove any element in your company culture that makes engineers unwilling or scared to own their process. This requires breaking tons of command+control processes and fundamentally changing how engineers are managed. Change only happens when you let it happen!
Second: Create a healthy way of working. You need to implement a management system that encourages engineers to own their decisions, changes, mistakes and successes. This requires setting up both structures and processes that honestly treasure individual contributions, even when they are not going in the direction you like. People will only contribute when you let them!

Unless you have first established fundamental agility, your agility will be brickwalled. 
Teams without fundamental agility will neither benefit nor contribute to scaled agility.

2. Craftsmanship

Instituting the ceremonies of Scrum can be done in a single day, including training and making the decisions. They give you the basics to "work agile": short-term planning and frequent updates to the plan, feedback on the product and incremental improvements to the process are essential. Like this, developers can arrive at the optimal way of working over time. You just may need to budget a lot of time if engineering practices are not in place yet.
Your engineers need to be familiar with concepts like Version Control, Continuous Integration, Test Automation, Emergent Design, TDD, BDD, Pairing, Code Conventions - and many others from the XP book. They need to be able to consciously decide which of these practices are helpful in their current context and select the applicable items accordingly.

If your developers are still unaware of these practices, they will be going through the motions of "agile" without practicing agility.

Teams working without proper Craftsmanship can actually decrease the implementation speed of scaled agility.

3. Team Spirit

Often considered esoteric, team spirit is paramount to scaled system improvement. Many managers still think that individual objectives (and potentially even Stack Ranking) help develop high-performing staff. As scaled agility is mostly about implementing a complex adaptive system with many contributors, the need for contribution to the Whole far outweighs the contribution of the individual to their own good. Team Spirit, in this context, means that individuals are willing to subordinate their personal interests to accomplish the overall company mission.
Team Spirit is not so much about "doing fun stuff with others" (rafting, go-kart, paintball etc.) as about finding satisfaction in doing things that help everyone and advance the company mission. For this, every engineer needs two things:
First, they need to know how they can contribute. As this depends totally on the individual's current situation, it needs to arise from self-organization and intrinsic motivation.
Second, they need to have assurance that virtuosity has its own reward. Any organizational impediment that creates a personal disadvantage in achieving overall goals needs to be removed.

A classic example of "team spirit" is soccer or football: History has proven that the game is not won by the best players. It is won by the best team. Engineering is the same: Competing aces do not win the game in the market. You win by joining forces, synergizing ideas and going in the same direction.

Groups of engineers without Team Spirit will actually spend disproportionate amounts of energy on "being busy". A scaled group will only invest an insignificant portion on achieving the shared mission!

4. Transparency

Many organizational transformations fail because they lack sufficient insight into the overall system to enable global optimization.
Classic impediments to transparency occur on all levels: From developers being unaware of the impact of another developer's work on theirs - all the way up to managers being unclear what the Absolute Priority 1 of the organization currently is. 
The lack of transparency results in a myriad of other problems, ranging from engineers unknowingly sabotaging each other all the way to entire teams doing the wrong work. None of these is a desirable condition, and the extent to which these things are happening denotes the criticality and priority with which transparency should be increased. Sometimes, an organization spends nearly 100% of its capacity on problems caused by lack of transparency. In those situations, "struggling" is probably a more accurate depiction of the work than "scaling".
There are many ways to increase transparency to enable proper scaling.
Transparency is inversely proportional to coordination overhead and impediments. Because of that, transparency at scale is directly proportional to ROI.

5. Focus

Probably the biggest issue in organizations is a focus on utilization: bringing work to people, assuming that when everyone is "busy", we create a lot of value. An agile organization is pretty much the opposite: we bring people to the work that needs doing, understanding that value is not correlated with activity. We focus on delivery, on getting things done. A classic problem of many organizations is that in their attempts to maximize utilization, many different items are "work in progress", requiring task switching and coordination. Even in trivial settings, the lack of focus quickly diminishes ROI and significantly increases throughput time: we spend too much time figuring out why nothing gets done and too little time actually getting things done.
The main idea behind focus is: It costs less to do the wrong thing first, then the right thing - than to do two things at once. 
Focus sounds trivial, yet it is incredibly hard to implement: The entire organization must have a clearly ordered list of objectives where every objective is uniquely prioritized. There may only be one priority 1. Next, focus requires strictly limiting Work in Progress.
Focus is mostly a mindset change for managers: You need to accept that idle time actually costs less than overload. You must accept that you can't have everything at the same time. 
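The cost of splitting focus can be illustrated with a toy model: three features of ten days each, either finished one at a time or worked round-robin. The task sizes are an assumption for illustration only, and the model charitably assumes zero switching cost:

```python
# Toy model: three features, 10 days of work each (illustrative assumption).
# "Focused": finish one feature before starting the next.
# "Round-robin": one day per unfinished feature in rotation.
# Even with zero switching cost, splitting focus delays every delivery.

def completion_times_focused(tasks):
    """Work tasks to completion one at a time; return the finish day of each."""
    day, finishes = 0, []
    for size in tasks:
        day += size
        finishes.append(day)
    return finishes

def completion_times_round_robin(tasks):
    """Spend one day per unfinished task in rotation; return finish days."""
    remaining = list(tasks)
    finishes = [None] * len(tasks)
    day = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                day += 1
                remaining[i] -= 1
                if remaining[i] == 0:
                    finishes[i] = day
    return finishes

tasks = [10, 10, 10]
print(completion_times_focused(tasks))      # [10, 20, 30]: first feature ships on day 10
print(completion_times_round_robin(tasks))  # [28, 29, 30]: nothing ships before day 28
```

Both approaches finish everything on day 30, but the focused approach delivers value much earlier (average completion day 20 vs. roughly 29) - and any real switching cost only widens the gap.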

Unfocused teams will be fully utilized, yet completely ineffective. The less clear the focus, the longer it takes to produce value.


Summary

"Scaling agile" that does not pay close attention to the aforementioned pitfalls will most likely result in a cargo cult. Outwardly, your transition may be successful - while inwardly, you will be missing the benefits of the adoption. 

To ensure that your agile transition is successful, a common approach is temporarily bringing in experts who have been in the trenches and know these pitfalls.
To avoid these pitfalls in a scaled environment, I suggest the following:
  • Get Management support for the change. You will turn your organization upside-down in closing the pitfalls. That requires unconditional management support.
  • Set up an Agile Transition Team (ATT) consisting of managers and developers alike. The ATT commits to a clear change backlog.
  • Bring in executive coaches for key people in the transitions. This includes line managers, Product Owners and the new Scrum Masters alike. The external coach must have experience at scaled agility.
  • Use technical coaches to enable teams adopt suitable engineering practices much faster: This pays off in delivering a much better product!
  • Hire a consultant to lead the transition. This person must know what they are doing and must be absolutely no-nonsense. They need to be empowered to make even unpopular changes.
  • Train everyone in agility. Use a safe classroom setting to demonstrate the impact of the change.


Wednesday, November 2, 2016

I hate SAFe ...

"... because it focuses too much on management, and too little on teams." - this is what I hear many times from agile coaches who have very strong opinions regarding SAFe.

Let me just ask you a few questions about your own experience, before getting into SAFe:

  • Whom do you find more difficult to convince of agility: developers - or managers?
  • Where do more organizational impediments arise: development - or management?
  • Who will have more effect by changing something they can control: developers - or managers?
  • Who is less likely to change their ways of working during an agile transformation: developers - or managers?
  • Who needs an answer to the question "What will I do in an agile organization?" more urgently: developers - or managers?
  • Who do we spend more time with during an agile transformation: developers - or managers?

Where agilists failed

Agility has a sad history of neglecting, ignoring and even badmouthing managers. Statements ranging from "Management is optional" to "You don't need managers in an agile organization" all the way down to "Impediments tend to have a job title ending with 'manager' or starting with 'head of ...'" alienate management.

People fail to see that every existing company has a management structure - and that managers are all highly skilled individuals who got into their position for what they can bring to the company. Structure has rendered many of these people ineffective or counterproductive. You have bright heads with brilliant ideas doing nothing except filling PowerPoint slides and spreadsheet reports and attending a gazillion meetings. I don't think many who hold a managerial role consider that "This is what I went to university for!"

The SAFe answer


SAFe is a framework that does not go out on a limb with a "Let's fire all the managers!" declaration of war. 
Instead, SAFe does what agilists have neglected over decades: Finding an appropriate answer to the question "What is the role of a manager in an agile organization?"
The answers provided in SAFe are not surprising at all. They are based on what the reality of agile transformations in the past has already taught us.

Managers are still valuable in an agile organization, with two important constraints that need to be clearly understood:
  1. Engineers aren't "resources". You can't manage people like resources. We respect people.
  2. Engineers don't work on an assembly line. Tayloristic management doesn't transfer into knowledge work. We assume variability and preserve options.
 As a consequence of these two, the role of the agile manager undergoes a radical transformation. A traditional manager must un-learn many behaviours and adopt new behaviours to adequately serve an agile organization. 

How can a manager know which behaviours are detrimental and which are desperately needed?
Well - you don't learn that in a Scrum Master class.
Managers desperately need clear answers to the questions I asked initially. The Scaled Agile Framework takes a shot at the question "What is the role of a manager in a large, agile organization?" by first digging into what an agile organization looks like - and from there explaining which changes management roles need to go through, and why these changes are essential.


Summary


I don't hate SAFe because it has so much to say on management. I love it for what it has to say regarding management. It provides a perspective for some of the most skilled and crucial knowledge workers a company has. 
A manager in a SAFe organization will finally be what they always longed to be: valuable. And SAFe gives managers a solid set of "baby steps" to outgrow their past.


Friday, October 7, 2016

Effective agile leadership patterns

Many organizations struggle with the notion that "leadership" equals position on the Org Chart. That is, the C-Levels lead everyone, division heads lead their division, team leaders lead their team etc. This concept may have been valid at a time when people were not sufficiently educated to discover the best way forward autonomously. But this is questionable in a world where managers don't even understand the work done by their teams. We have discussed previously that leadership is situational based on the situation of the team. Let us take a look at three different leadership patterns.

The Forerunner

The most straightforward way of leading agile is "to boldly go where no man has gone before".
A forerunner experiments, takes new ways, and learns by Inspect+Adapt, taking others along on their journey.

A forerunner is effective when others continue the journey they started.
It's very difficult to be a forerunner when the things you're doing are not the same things others are doing.

The Enabler

Another way of leading agile is by enabling others "to boldly go where no man has gone before".
An enabler considers the roadblocks which prevent others from experimenting, taking new ways and learning by Inspect+Adapt.
To enable, "walking gemba" is essential, i.e. seeing first-hand the problems that keep others from moving forward. Scrum typically sees the Scrum Master as the team's enabler, although the PO can go to great lengths in clearing roadblocks as well.

An enabler is effective when others can move beyond the path they cleared.
The further a person moves away from the team, the less effective they are at enabling. Likewise, decreasing proximity increases misunderstandings and "doing the wrong thing".

The Thought Leader

A thought leader does not lead by doing, but by providing impulses.
It is completely up to others to start experimenting, taking new ways or learning based on the impulses provided by a thought leader.

A thought leader is effective when others are inspired to try out the things the thought leader suggested.
Thought leaders can be completely disconnected from the team's situation, because they are just offering inspiration. They might even be completely disconnected from the team's organizational context, making their inspiration useful nonetheless.

What this means

None of the proposed three leadership patterns relies on a position on an org chart. Except for the Enabler, who can be effective by dissolving impediments caused by the org chart itself, agile leadership works best without positions or titles on an org chart.
Titles may even be counterproductive. Leading agile is something that you either do or don't. It is entirely possible that a regular developer in a team is a more effective leader than all the managers in the company together.

The most plausible way for a line manager to be an effective agile leader is as an enabler.
Forerunning is only possible by forsaking the position on the Org Chart and joining an agile team.
Thought leadership requires extremely deep understanding and many years of expertise, and is difficult for managers to attain.

Conclusion

The harsh truth about "agile leadership" is that organizations have a problem with middle management.
When lots of middle managers find value by acting as enablers, we probably have so many organizational impediments that the organization is pretty much doomed. However, when they can't function as enablers, they can't be considered "leaders" any more than any other person on the development teams.

Sprint 0? Good? Bad?

I recently joined a discussion where people advocated against "Sprint 0".
The arguments delivered are along the lines of "Sprint 0 is only necessary when you consider Scrum a process", "It involves people that don't need to be involved", "It overloads the terms", "Waste", "BUFD", etc.


So, let me create a bit of transparency, regarding what I consider as "Sprint 0":

The following side conditions are in place:

  • It's not necessarily the same time box as the delivery Sprints, but we're working time boxed
  • There is a Backlog for things we need
  • The Deliverables are: 
    • A clear Product Vision
    • A team that can deliver, including a capable PO and SM
    • A technical environment which permits the team to deliver
    • Management supports Scrum
    • Stakeholders are aware of how Scrum changes the game
    • Assured Funding
    • Dedicated availability of developers on the team
  • The team (i.e. the transition team) is delivering using an Inspect+Adapt approach.
There is no doubt that such a phase is necessary, so that the Scrum team has a chance to succeed.

To me, discussing how to name that phase is just a game of po-tay-to, po-tah-to.

Maybe we can end the religious discussion around Sprint 0 by just calling it the "Potato"?

Tuesday, September 27, 2016

Iterative development?

There's an image that's going around on the Internet after a talk from Spotify as an explanation for iterative development, and it's being picked up by trainers, coaches and consultants from around the world - but it's wrong.

Here it is:

"Iterative development" analogy abused from another blog

Do not use this analogy!
Here's why.

The problem
I will ask these questions to help you draw your own conclusions.

What is the bottom row claiming to be a display of "how to do it right" actually implying?
  • Is the purpose of a skateboard and a car the same?
  • Are the people buying a skateboard, motorcycle and a car the same people?
  • How likely will a person who just bought a skateboard upgrade to a car?
  • Can you recycle your marketing campaigns?
  • Will a great skateboard designer know how to build good cars?
  • How many learnings from the last success/failure will help you do better in the next iteration?
  • What % of the last iteration's product is "waste" in the new iteration, how much can be reused?
Pretty much every increment would require restarting the entire business from scratch: New customer segment, new experts, new design, new product. There is no constancy of purpose at all.
How likely will you succeed with that if you're doing that in a 2-week rhythm?

Fixing it
If you really want to do iterative development, here's how you should approach the increments:

  • First, get some tires (not deliverable, but already a lot of work)
  • Then an axle to connect them (can already carry something)
  • Then an engine (automated propulsion)
  • Then a chassis (safer to ride)
  • Then windows (water/dirt proof)
  • Then, paint the whole thing

Most likely, you'll continuously find improvement potential in one of the above areas before ever finishing, but if you can't get these things done, you're not going to sell cars.

But how many companies have ever successfully produced streetworthy cars based on their skateboard expertise?

So, the upper row actually makes more sense than the lower row (even though it's also wrong).

Trash this analogy. Get it out of your head. It's misleading.

Visualization tip: The "i" people

Visualization can help a lot in communication. While technical people can easily draw a UML or ER diagram, they quickly feel challenged when having to draw people interacting in different ways. But drawing people isn't hard. Here, I'll show you a way to draw people based on the letter "i". These "i-people" can do all kinds of things. 
There is absolutely no magic in there and it's a good way to lose the shyness of drawing people.


1 - draw a handwritten "i" base.


2 - draw the head. It's just the dot on the "i", but you use a circle:


3 - Use the arms to make the person do something. To stand still, just draw a straight line from left to right on the tip of the base:



Now, that really wasn't hard.
Next, you just use different line shapes for the arm, and your i-people can do quite a lot of things:


For this exercise, I've simply copy+pasted the "i" and added different arms.
It's really not hard.

Just try it!


Friday, September 9, 2016

Agile Leadership - Buzzword Bingo

One of the newest buzzwords appearing in the agile community is the "Agile Leader". Didn't we spend decades preaching situational leadership and how every team member can be a leader?
Well, it seems like Larman's law #2 is taking effect and soon, every middle manager in line-oriented companies "doing agile" will be a "certified agile leader".
In the process, the term "leadership" and many others will be overloaded to be so devoid of their intended meaning that you'll wonder what they are actually supposed to mean.
Be prepared that your division manager agile leader will have a whole repertoire of terms to drive your agility.

So I just had some fun, created a Buzzword Bingo game for "Agile Leadership":


Enjoy!

Thursday, September 8, 2016

Are you an organizational catalyst?

The newest buzzword in the agile community is "Catalyst". The Scrum Alliance considers being a "Catalyst" an essential part of being a coach and an agile leader. It's actually taken from the book "Leadership Agility" by Bill Joiner.
It's always funny to hear coaches talk about the need to ask questions and reflect on one's own behaviour, then do exactly the opposite by unquestioningly adopting buzzwords. They have picked up something that seems to sell well and are now following it with herd mentality. Since "agile coaches" are promoting something they don't even understand themselves, let me take some time to explain what a "catalyst" actually is!


The myth: "A catalyst speeds up a transition into a new state".
That's actually true, but you need to understand how a catalyst does that.
Catalysis is a highly complex chemical process with lots of constraints that one should be aware of when using the term.

No Philosopher's stone

A universal catalyst ("Philosopher's stone") that can make any kind of change happen does not exist.
Every catalyst is a "one trick pony" that can do only one thing in a very restricted context. In a different context, a catalyst will be useless or even an inhibitor. Do you really want to be good for one thing, and only that one thing?

Energy parity

The definition of a catalyst states, "In the presence of a catalyst, less free energy is required to reach the transition state, but the total free energy from reactants to products does not change"

This may be understood as "less energy is required". No: It means less energy is required to reach the transition state, but the total amount of energy to reach the goal does not change!
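Stated as formulas (standard chemistry notation, not taken from the original definition quoted above): the catalyst lowers the activation barrier, but leaves the overall free-energy difference untouched.

```latex
% Activation energy drops with a catalyst present ...
E_a^{\mathrm{cat}} \;<\; E_a^{\mathrm{uncat}}
% ... but the overall free-energy difference is identical either way:
\qquad
\Delta G \;=\; G_{\mathrm{products}} - G_{\mathrm{reactants}}
\quad \text{(the same with or without the catalyst)}
```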

Let's translate what this means: The catalyst destabilizes the system in an otherwise stable state. Then, it channels the energy difference throughout the change process. All consumed energy is lost, and all the remaining energy must be invested before the catalyst is out of the system. The catalyst actually binds energy until the process ends.
The catalyst takes a massive active role in the change process. Typically, not much would happen without the catalyst. Every single aspect is being modified by the catalyst on more than one occasion.
The catalyst is an extrinsic change funnel, which in an agile context is equivalent to stating that the change is imposed on the target. A catalyst completely destroys autonomy.

Irreducible complexity

Catalytic reactions are highly complex. Scientifically, a catalytic reaction looks like this:

  1. X + C → XC
  2. Y + XC → XYC
  3. XYC → CZ
  4. CZ → C + Z

The same process, without a catalyst, would look like this:
  1. Y + X → Z
The catalyst is an essential component in every stage of a catalytic process, and there is no direct relationship between the start and end of the process - even though the catalyst is not strictly necessary for the underlying reaction to take place. Not only is the catalytic process significantly more complex than necessary, it may be impossible to figure out the natural relationship between X, Y and Z if one has never observed the reaction without the presence of the catalyst.
The catalyst becomes "irreducible complexity" and hides the simplest way to reach a goal.
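To make the complexity difference concrete, here is a toy sketch of my own (purely illustrative, not from any chemistry library): the one-step direct reaction next to the four-step catalytic cycle listed above. Note how the catalyst C is bound in every intermediate state.

```python
def direct_reaction(x, y):
    """X + Y -> Z in a single step: no intermediates, no third party."""
    return {"product": "Z", "steps": 1, "intermediates": []}

def catalytic_reaction(x, y, catalyst):
    """The four-step cycle: X+C -> XC, Y+XC -> XYC, XYC -> CZ, CZ -> C+Z."""
    intermediates = ["XC", "XYC", "CZ"]  # the catalyst is part of each one
    return {
        "product": "Z",
        "steps": 4,
        "intermediates": intermediates,
        "catalyst_after": catalyst,  # "remains unchanged after the reaction"
    }
```

Same product either way - but the catalytic path introduces three catalyst-bound intermediates. Stop it halfway, and you have neither Z nor your original X and Y.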

Bottleneck

A catalyst "participates in the slowest step of a reaction, and rates are limited by amount of catalyst and its activity."

This simply translates into "The catalyst is the bottleneck of change."


Potentially unsustainable

A catalyst "does not change the energy difference between starting materials and products."

This also means that the energy difference between the starting point of the change and the endpoint thereof is independent of the catalyst. Catalysts can induce highly unsustainable change that would not have happened without them. Catalysts might even be the cause for creating an unstable system.

Value neutral

Catalysts also "do not change the extent of a reaction".

This is basically stating that the same result could have been achieved without the catalyst by investing more energy. Effectively, this means that the presence of the catalyst really only reduces the energy investment, but it does not add any extra value.


Change exempt

Let's close this discussion with the final nail in the coffin: A catalyst, by the very definition of the word, "remains unchanged after the reaction."

What this means: While the catalyst did put a lot of effort into the change, the catalyst ultimately was not changed. For the catalyst, the change was just a temporary thing that is completely brushed off, left without a trace.


Should you be a catalyst?

Let's sum this up: An organizational catalyst is someone who:

  • Is a one trick pony potentially causing damage with their involvement
  • Does something that could happen in different ways without them
  • Starts processes that can't be terminated until all energy has been spent (i.e., removes agility)
  • Interferes dramatically with others' autonomy
  • Adds significant complexity which can no longer be taken out of the system
  • Becomes a massive bottleneck and Single Point of Failure
  • Hides the real change going on from those who are involved
  • Does not learn anything from what they are doing

Decide for yourself.

Thursday, August 25, 2016

SAFe: Setting up the Value Stream level

After various discussions about the alleged massive management overhead introduced by SAFe 4.0, let me clarify what's really brought in with the additional level called "Value Stream". The Value Stream Level combines multiple Agile Release Trains. As a matter of fact, you don't even want to go there unless you have significantly more than a hundred developers working on the same product. This level is only necessary in massively scaled product development, something you want to avoid in the first place. 
But when you can't - you need to find a way to deal with the problems introduced by an organization the size of multiple enterprises collaborating in (near) real time. And SAFe has a proposal for how to get you started on that one, too.


Defining the value stream

What's a value stream? Simply put, it's all stuff happening "from customer (demand) to customer (satisfaction)". In some enterprises, that's obvious - while in others, it may be hard to grasp.

An example value stream
Let us take an example, "What is the value stream of a smartphone?" - That depends. When you are talking about a telco carrier, you as a customer sign a contract, get a SIM card and a device, register it - and start calling. You then get monthly invoices and that's it. From customer side.

But what is going on in the background:
To get a contract, you select a package typically consisting of tariffs, prices, products, options and bundles that will be assigned to your customer account. All of this is handled in so-called "business support systems" (BSS). As a customer, you don't care much how they do that, but BSS platforms are often provided by specialized organizations due to their complexity. It may be fair to label BSS platforms a "product" in their own right, required not by you, the customer - but by the telco carrier in order to serve their customers. Depending on the carrier, in this line alone you might find 500+ people working.

Next, of course, you want to make a call. But for that, your device must be activated in the telco network. That requires some interaction between the BSS and the network stations. For simplicity's sake, let's just say that the physical network is yet another sub-product required to provide service for you, but ordered by the carrier.
There's also a product line called "Operations Support Systems" (OSS) taking care of that. There are major corporations doing only the network base station business, and others doing only OSS. The things going on here are highly technical and interest nobody except operators - but without them, you couldn't make a simple phone call.

This means our example value stream actually consists of three product lines, only two of which are exposed to you as a customer. In each of these product lines, some magic happens so that you get to make your call.

So, here's what the value stream would look like:

A value stream perspective for a mobile network operator
As noted already, BSS, OSS and Network may be completely independently organized "technical value streams", for example when they are outsourced. SAFe would not advocate starting by insourcing all activities, especially where it does not make sense from a revenue perspective.

Continuing with our example, let's just assume we are dealing with a so-called "Mobile Virtual Network Operator" (MVNO) who does not have their own network. In this case, the "Network" and even the OSS would be purchased services, provided as a closed black box. Our own development would be using the output of these value streams, but would not be directly interacting with them in the process, so our SAFe organization would embed, but not directly touch them.

But we still have a problem: There are the BSS teams providing value to end customers by setting up new product lines, and also those who provide value to our own business with things like accounting, tax records and audit reporting (plus our black box technical value streams providing OSS and Network services for end customer value) - but they're too many people to organize in a single Agile Release Train (ART). Now what?

Splitting up the value stream into multiple ARTs

An Agile Release Train can accommodate anywhere from 50 to 150 developers. Once we get beyond that, things like the Dunbar number and regular organizational complexity get in our way. So we need to keep the ART at a sensible size, while still being able to deliver useful products to our customers.
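The 50-150 sizing rule can be turned into a quick back-of-envelope calculation. The function below is my own illustrative sketch (the name and default bounds are assumptions, not part of SAFe): it gives the fewest and most ARTs an organization of a given size could sensibly be split into.

```python
import math

def art_count_range(developers, min_size=50, max_size=150):
    """Bounds on the number of ARTs, keeping each within 50-150 developers."""
    fewest = math.ceil(developers / max_size)  # pack each ART as full as possible
    most = developers // min_size              # keep each ART at the minimum size
    return fewest, most
```

For a 400-developer value stream, this yields anywhere between 3 and 8 ARTs; how many you actually want depends on the splitting strategy discussed below.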

Here are some splitting strategies. Please note that while the terms "Bad", "Better", "Best" are definitely judgmental, there may still be pressing reasons to follow a specific approach.
A "bad" choice is still better than paralysis.

Bad: Component split

Probably the most obvious form of splitting is a technical component split, allowing developers to focus on a specific subset of technical systems. While that is possible, it's a great way of maximizing dependencies and coordination overhead while minimizing productivity and customer value. We don't want to go there.

Better: Feature category split

In our example, we might consider splitting the value stream around categories such as tariffs, campaigns and infrastructure. These kinds of feature areas would be a good starting point to form a feature team organization that can deliver end-to-end customer value. Of course, there will still be dependencies - but far fewer than in a component setup.

Best: Customer segment split

Probably the most common form of splitting may be "private customers", "business customers", "VIP" and "internal customers", having feature teams serve each customer segment independently. With this approach, strategic management can easily decide to boost or reduce the growth of a customer segment based on how many people work in the respective segment. Of course, there's also interaction between the segments, but with a robust product, these should never be game breakers.


Setting up multiple ARTs

So, after identifying how we want to split up our value stream, keeping in mind that each split should be between 50 and 150 developers in size, we'll end up with multiple independent Agile Release Trains, together forming a Value Stream.

After reaching clarity about which developer is assigned to which ART (just for clarity's sake: every developer works on one ART, and every agile team is part of one ART), there are multiple ARTs to launch and coordinate.

Here is the proposed SAFe structure for setting up multiple ARTs within a single value stream:

The Value Stream Level - a team of ARTs

This one should create a deja vu, as it looks exactly the same way an Agile Release Train is set up - and this similarity is intentional.

In another article, we will describe in more detail how the roles and responsibilities change in comparison to a single ART when this form of split occurs.

Summary

Coordination at Value Stream Level becomes an issue when more than 150 developers collaborate on the same product - and even then, the complexity of what you do depends highly on how your organization is set up. On Value Stream Level, you may have multiple ARTs sliced in different setups, you may have black boxes of consumed services, etc.

Going into this level of complexity is only necessary for the largest product groups. SAFe provides a way for them to get started in a structured way even when there are too many people to coordinate within a single ART.

Do not set up the added complexity of Value Stream level coordination unless it is inevitable.

Disclaimer: Opinions expressed in this article are the author's own and do not necessarily coincide with those of Scaled Agile Inc.








Wednesday, August 24, 2016

SAFe: The structure of an Agile Release Train

I have heard many different views of what an Agile Release Train (ART) actually is, ranging from a predetermined release schedule all the way down to nothing other than a renamed line organization. None of these are appropriate. Let us clarify its basic intention. As Dean Leffingwell puts it, an ART is no more or less than a "team of teams". But what does that look like?


One Team Scrum

Basically everyone is familiar with the constellation of a Scrum team, but for brevity's sake, let me include a small summary: Every Scrum team has a Product Owner, a Scrum Master - and the developers. This same constellation is more or less applicable for other agile teams - even if they don't actually use Scrum.

A Scrum team


Multi Team Scrum

But since a Scrum team is limited to 3-9 developers, what does that look like when your organization has, say, 50, or 80, or 150 developers? Do you put the PO outside the team? Yes and no. The Scrum Master? Maybe, maybe not. How do developers interact?
In fact, Scrum does not answer any of these questions, as the scope of Scrum is a single team. Consequently, larger organizations adopting agility struggle to find their answers. Their Scrum organization sooner or later looks like this:

A Multi-Team Scrum adoption

This model actually works, but it leaves some questions unanswered, such as: "How do we make sure we're all working on the most valuable stuff?" - "How do we make sure we're not impeding each other?", or, for business: "Whom do I talk to with my request?" - "Who could take this feature?" - "What's Priority 1?" - "Is there a way to get this out of the door earlier?" - "When will the next major release be ready?"

The need for coordination

While this may still be resolved for 3-4 teams, this scenario might become a nightmare for business when there are 10 or more teams: Transparency, the key to any decent agile development, is lost in the mud. The more focus development teams have, the more likely they will not work on the highest priority.

The first obvious level of coordination is: The Product Owners need to be aware of what other POs are working on, what the overall Backlog looks like, and where their own priorities are within the bigger system.

Typically, in large organizations, impediments are endemic to the overall organization. As such, even independent Scrum teams will all be struggling with the same or similar problems caused by the bigger system. Likewise, each team itself will be powerless to change the entire system.

As such, the second obvious level of coordination is: The Scrum Masters should be aware of what's going on in the other teams around them, and how their team affects other teams.

Another problem arising in this scenario is that teams may suggest or implement local optimizations which may be fine for their own team, but detrimental for the other teams! For example, think of one team deciding to switch to an exotic programming language like Whitespace, because they're all Whitespace geeks: How can other teams still work on that code?

As such, the third level of coordination is: The Developers should be aware of what's going on in the other teams around them, and how their team affects other teams and the product.

The SAFe approach

What SAFe® does here, is basically nothing more and nothing less than consider a "Scrum team" like a developer in a larger organization and create the same structure on a higher level:

An ART - a team of agile teams

Looks an awful lot like a Scrum team - and that's intentional.

New Roles

Before we get into "Whoa, three new roles - more overhead!", let us clarify a few things: First, huge organizations do require more overhead. Don't go huge unless inevitable. Second, while SAFe® suggests these roles, it does not mandate that they be full-time roles. It's entirely possible that these are merely additional responsibilities of existing people. However, experience indicates that in huge organizations, these things tend to become full-time jobs.


The Product Manager (PM)
The PM relieves each individual PO of aligning the overall big picture with the different stakeholders.
The big difference between a PM and a PO is basically that while the PO works with teams and individual customers, the PM works with POs and the strategic organization. Their main responsibility is making sure that there is only one overall "Product Backlog" from which all the teams pull work - so that at any given time, there is only one Priority 1 in the entire organization.

The Release Train Engineer (RTE)
You could say that the RTE is a "super Scrum Master", but that's not quite the point. While their responsibility is definitely similar, they don't work as much with a team as they work with the organization and management: For the teams, we already have Scrum Masters. 
The RTE, on the other hand, paves the way to corporate agility. The main concern of the RTE will be the legacy structure around the teams, to create a positive learning and innovation environment to nurture the agile teams.

The System Architect (SA)
The System Architect is the only really new role on the ART. To clear up the common misconception about agile architecture right from the start: their responsibility is not to draw funny diagrams of cloud castles and force the teams to implement them. Rather, their role is to guide and coach architecture, so that we don't end up with uncontrollable wild growth. Likewise, when individual team members have questions about the architecture, the SA would be the first person to come to. 

Changes to existing roles

The Product Owner (PO)
A Product Owner may be in charge of more than one team. Practice indicates that 1-4 teams tend to work out; beyond that, the risk of losing focus increases. 
At scale, POs tend to become specialized in areas of the product (such as Product Catalog or Customer Services) and need to synchronize the overall big picture with each other. Most of all, they need to synchronize with the PM, who feeds back into corporate strategy.

The Scrum Master (SM)
A Scrum Master may also be working with more than one team. Practice indicates that 2 is already the limit.
Facing the team, the main difference for the SM is that they need to encourage the team to interact with people from other teams, rather than being a bubble unto themselves. 
Facing the organization, the SM has to have a much deeper understanding of "Spheres of control", and communicate the impact of outbound impediments. They may need to hand over large blockers to the RTE, and may likewise receive input from the RTE when their team needs to budge in order to move a larger block out of the way.


Summary

I hope that this article explains how SAFe®'s structure of the ART is not "relabelling the same old thing", but simply putting Scrum on a bigger level.
To repeat again, "don't go into scaling unless inevitable". But when you need to, the ART model minimizes the deviation from good Scrum practices.




Friday, August 19, 2016

Be careful of so-called "agile coaches"!

An agile coach is supposed to help agility "stick" within an organization. But that is not always the case. Unfortunately, the label "agile coach" is not a protected trademark. Anyone can wear that title. As such, there is a huge risk that the so-called "coach" will do more harm than good. Caveat emptor!
Here are a few stories of "agile coaching" I have experienced, so that you can actually avoid it. As a disclaimer: I do not consider all agile coaches to be quacks. There are a few whom I highly respect. But there's a lot of quackery giving them a bad name - and not many talk about it.

Purposefully unhelpful 

Probably the most idiotic phrase in the arsenal of an "agile coach" is "You need to find this out by yourself". Of course, that is supposed to inspire self-learning. But honestly: not everyone wants to learn everything by themselves. Here's my story:
I just came into a new enterprise as a consultant. I asked the team coach "What's the wifi password?" - "You need to find that out by yourself"
This guy was serious that I should rather learn the WiFi password by myself than have someone "tell" me. Dude. I can paint a picture of a stick-man, label it "agile coach" and it'll be more useful than such a coach. Why do people even hire coaches who can't even discriminate when self-learning makes sense?

One trick pony

They say that for a coach, moderation, conflict management, coaching and mediation are key skills. This has the unfortunate side effect that we see "agile coaches" popping up who are domain experts in these exact subjects - and nothing else! Meaning: They are sociology or psychology majors who have never written a single line of code and are now trying to teach developers how to work better. Here is my story:
I was working with a team that faced numerous difficulties. One of these was the lack of a coach. So, they hired one who was really good at talking and creating a positive mood: Actually, too good. Unfortunately, this person had only ever attended a 2-day Certified Scrum Master course and NEVER worked with a software development team.
They had zero knowledge of things like technical debt, Continuous Integration, software testing or other engineering practices - not even PO stuff like backlog management, value prioritization or right-sizing the work!
The team was going in full circles, continuously struggling to figure out stuff "everyone knows" and caught management attention eventually because of high defect rates and unusually low throughput. It was blamed on the developers. The team got disbanded and forcefully rearranged. The "coach" never realized anything was wrong - because hey, the team was always happy and learning!

Feigned expertise

How can you coach something you're actually clueless about? It seems that for some "agile coaches", agile experience is truly optional. They think that having a couple of certifications qualifies them, and they give themselves a label of expertise they do not actually possess. Here is my story:
I occasionally meet with an "agile enterprise coach" (CSP) to discuss the various problems they face. Based on their CV, they've got a decade of "agile experience". At first I was befuddled when they started asking me trivial questions about things like backlog prioritization or why people limit WIP. I realized that this person had never really worked in an agile way: They had no idea what the real purpose of Continuous Integration was, they had never even attended, much less moderated, a Retrospective - and they had never actually seen what a workable Product Backlog looks like!
Oddly enough, this person is seriously working in enterprise agile transformations, introducing Scrum to teams, even coaching/educating internal Scrum Masters and managers. Looking behind the scenes revealed that things could have been done within weeks that their clients are still struggling with after years.
Seems like the old conman saying "There's a sucker born every minute" still holds true.

Hiding incompetence

A coach can always conveniently hide behind "stimulating self-learning". I'd call it more fair to say "Some things I know. These I will help you with. Other things, we'll learn together." Especially in the latter category, I personally call it unethical to climb on a pedestal and profess to guide others' learning journey. But here's my story. I heard it over a cup of coffee with an upper manager:
A large product group tried to adopt Scrum for the development of an important product a good decade ago. Long story short, 500-person Scrum is not the same as one team. So, they had "challenges". And since they couldn't figure out any way of getting past a specific one, they spent major bucks and flew in a highly reputable "Scrum coach" to make progress. For two hours, the coach answered every question with a counter-question or reframed it. But the client felt there was no substance. Finally, the manager's collar popped and he burst out: "Now tell me, ONLY with a Yes or No: Do you know how to solve this problem?" Pushed into the corner, the answer was "No". At which point, the manager exploded: "Then this meeting was 100% waste." Not only did the coach never try to approximate a solution or give helpful pointers, they simply left the client stuck with an unresolved problem. Even years later, to that manager - and their peers - "Scrum coaching" is associated with that specific name and has a very sour aftertaste. 
It should be fairly easy to state what your competencies are and what aren't. It's fair game to state that you don't know everything. But when others rely on your help, it's unfair to leave them hanging.
Note how "problem solving" is not mentioned by coaches as a coaching skill.


Getting away from the stuff that I would actually call fraudulent, where the client's ignorance of one's own incompetence is used to make a quick buck, let us now turn to the softer area of mindset.

Unable to see the big picture

Good coaches should be unbiased, because bias prevents us from seeing the big picture. Reflection and self-awareness help us overcome bias to serve others better. Or so the theory goes. Some of the most biased people I have met in my life bear the title of "agile coach". Their bias is so incredible that they try to convince me of silver-bullet solutions that simply won't work in context. Here's my story:
I was once working with a company that had a HUGE quality issue: Their legacy product was a technical garbage heap: Developers literally had nightmares about the code base. Some threatened to quit were they forced to dig into that mess any deeper. Customers were rioting, Customer support was desperate. Customer problems (such as: lost orders, missing payments, wrong products shipped) never got fixed. I like to name things the way they are. When a customer spends money for A and then gets B, that's a DEFECT. A failure that the customer does not want. Period. So, I was fighting tooth and nail with management to limit WIP and value-prioritize defects so that we could actually drive down defect rates. The results were splendid: Customer Service actually started giving names to developers that were no longer synonymous with "monkey". Anyhow. Along comes this veteran "agile coach" who suggested: "You shouldn't call them defects. Wherever I go, the first thing I do is remove that label. This will cause an important mindset change!"
I spent over an hour mostly listening to why it's important for the team that the PO treats all the work equally. They didn't even account for the fact that "defect", in that case, was not merely a label but a metric to draw attention to the horrible technical mess, so that we could have sufficient power to weigh the need for a long-term technology change against the need for short-term business evolution (i.e. new features).
I did get their point, but I saw they didn't get mine. And they didn't care to.

Misunderstanding assumptions

As I stated elsewhere, people can and do make assumptions all the time. We navigate in what we perceive as "reality" by making and deriving assumptions. And some of them are inconsistent with each other or with the evidence presented. As such, we should always be ready to abandon our assumptions in favour of better ones. "Five Why" analysis can help us explore our assumptions. But some people just don't get it. Here's my story:
A team gathered for their retro. Within a few minutes, they simply decided "We need to write more unit tests". So, the team dug out their Five-Why tool: "Why?" - "Because we have too many defects" - "Why?" - "Because we don't have enough unit tests." - "Why?" - "Because we didn't think they were that important." - "Why?" - "Because we didn't know." - "Why?" ... - "Dude, shut up your food hole!"
This team had already learned their lesson, but the coach made it look like there was more to it - down to the point where it really just got nauseating. "Five Why" is one of many ways to uncover false or misleading assumptions, but there's a point where it's fairly safe to simply let go. A coach should not dig out every last assumption. They should be aware of which assumptions are reasonable and which are not.
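The stopping rule is the whole point: stop asking "why" once the answers no longer add actionable information, not mechanically after five rounds. A minimal sketch of that idea (all names here are hypothetical, not part of any formal Five-Why tool):

```python
# Sketch: a Five-Why walk with a sensible termination check.
# We stop as soon as an answer is missing or no longer actionable,
# instead of forcing all five iterations.

def five_whys(problem, answer_for, is_actionable, max_depth=5):
    """Walk the causal chain; stop early when answers stop being useful."""
    chain = [problem]
    for _ in range(max_depth):
        answer = answer_for(chain[-1])
        if answer is None or not is_actionable(answer):
            break  # digging deeper only produces noise
        chain.append(answer)
    return chain

# The retro from the story: the team reached an actionable cause quickly.
answers = {
    "too many defects": "not enough unit tests",
    "not enough unit tests": "we did not consider them important",
    "we did not consider them important": "we did not know",
}
chain = five_whys(
    "too many defects",
    answer_for=answers.get,
    is_actionable=lambda a: a != "we did not know",
)
# chain stops before the non-actionable "we did not know" answer
```

The point of the sketch: a sane `is_actionable` check ends the exercise exactly where the team in the story wanted it to end.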

Wrong focus

Agile coaches might focus on the wrong things when they miss the big picture. Especially when their understanding is limited, they will quickly optimize in the wrong direction. Here's my story:
I was working with an organization where a certain middle manager always tried to impose their specific ideas (such as a separate test team, or using HPQC rather than Spock) on teams. As I was doing my best to rally management support for the teams' ideas, I got into the line of fire from that manager. Basically, he was undermining the technical quality measures built by the teams with an email to ALL the POs and coaches. So, I replied to ALL, because I wanted ALL to take a stand. What happened? This "agile coach" suggested introducing business email etiquette rules, because they felt bothered by a Reply-All on a matter they considered personal between me and the manager. So, we had etiquette rules enforced. Great! Problem solved! ... About half a year after I left, the manager won - now they have a Test Department reporting test results in HP-ALM. But hey, at least they have formal email etiquette rules!
It's actually quite funny how often agile coaches propose a solution without engaging in direct dialog with the concerned parties - and without trying to understand the problem they are solving. No mediation session was ever held to uncover why the conflict actually existed. The real problem never got solved. Neither any of the many coaches nor any PO bothered to understand the fundamental problem.



Conclusion

Am I perfect and pointing fingers elsewhere? No. I undoubtedly have some communication issues, and maybe many of the situations I encountered would have turned out differently if I had known how to communicate better. But I learn.
However, I would also expect "agile coaches" to bring honour to their profession.

When the solution isn't known, approximate. But be straight about it. Never claim to help others with things you don't understand: That's fraud.

Especially from a coach, I would expect the following: Be fast to learn, but slow to judge. Engage in dialog. Never decide before verifying your own assumptions. Be ready to discard your preformed assumptions. Don't draw biased conclusions. Let people know when you don't know.

It's called PDCA for a reason: Never act before checking. And, from a coach, I'd expect that to be a double check.

Don't play games: a coach is not a mad scientist!


Final disclaimer: I do not consider all agile coaches to be quacks. There are a few whom I highly respect.




Wednesday, August 17, 2016

Agile learning for starters

I have previously discussed the "cost of learning" and its impact on the learning strategy. After establishing that we should always keep this cost of learning below the Point of No Return, let us consider the differences in learning. The dogmatic statement "A coach should not prescribe a solution, but foster self-learning" presumes that self-learning is universally the best approach. But is it?

Let us consider which companies/teams typically call for help, based on this simple model:

Do you know why you don't know what you don't know?

There's a hidden relationship to the Cynefin Framework here: Software development is a socio-technological problem, and the issues of communication, understanding and skill are just a few factors affecting the team's performance. We work in the complex domain, where any model has an inherent error.
Usually, when a company requests external help, they tend to be basically aware that they don't really know what their problem is and that they assume someone else can help them make progress. In terms of our model, uncertainty is high and people admit that their specific knowledge and understanding of the problem domain is shallow. That's good. It's a basis for learning.

Initiating problem solving

We have a wicked problem here: How do we know we're doing the right thing - and how do we know we're getting better at it?
A consultant has no choice but to first gain clarity on whether the problem is comparable to one where a solution is known, and so would first try to drive down uncertainty - by asking questions and experimenting with the process.

If the problem is in a domain where deep expertise is available, the problem solving process is reduced to tapping into available expertise.

If the attempt to reduce the problem to a domain where a solution is known fails, this indicates that we're working in the domain of the Unknown.
This one splits down again: Either we know that all known solutions fail, in which case we need to innovate - or all attempts to reduce uncertainty have failed, which indicates our problem is ill-defined and we need to clarify it until we have a workable problem.

Innovative problem solving

If there is need to innovate, we're pretty much clear that we'll be using empirical data, feedback loops, inspect+adapt and experimentation to iteratively anneal the situation. The best thing a consultant can do in this situation is to provide support based on their own experience to discern which experiments make sense and what the available data implies.

There are tons of techniques for innovative problem solving, starting with Kaizen Events, Design Thinking, TRIZ ... potentially even a full-blown Design For Six Sigma (not advised). Determining the suitable problem solving technique may also be at the discretion of the consultant.

Introducing known solutions

When expertise is available, the consultant must factor in the impact and urgency of getting the problem solved.
Impact is high when there is a risk of crossing the Point of No Return, i.e. destroying the company / team, or have individuals lose life, health or their job.
Urgency is high when you only get one shot.

  • If both impact and urgency are high, a dogmatic solution will save time at the expense that the inherent understanding remains low. Autonomous learning is purposefully replaced with prescriptive approach for a greater good.
  • If impact is high, yet urgency is low, the consultant may choose to underline the solution process with moments of learning to deepen understanding. This will reduce long-term dependency and the risk for misunderstood assumptions around the solution.
  • If the impact is low, yet there is a sense of urgency, the consultant might actually provoke "learning from failure" to create deep understanding for the next time.
  • If both impact and urgency are low, the consultant should not invest further time. Providing a pointer on how the team could learn solving the problem can be sufficient. If they learn - good. Otherwise - no harm done.
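The four quadrants above can be condensed into a small decision function. This is just a sketch of the model as stated; the return labels are my shorthand, not a formal taxonomy:

```python
# Sketch: the impact/urgency decision table for introducing known solutions.
def consulting_approach(high_impact: bool, high_urgency: bool) -> str:
    if high_impact and high_urgency:
        # Prescriptive approach: save time, accept shallow understanding.
        return "prescribe the solution"
    if high_impact:
        # Low urgency buys room for moments of learning along the way.
        return "solve together, with moments of learning"
    if high_urgency:
        # Cheap failure now buys deep understanding for next time.
        return "let them learn from failure"
    # Neither: a pointer is enough; no harm done either way.
    return "point at resources and step back"
```

The value of writing it down this way is that the tradeoff becomes explicit: every branch trades learning depth against time, and only one branch justifies skipping learning entirely.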

Summary

In this article, I described only the consultant's approach when the team is lacking knowledge and ability that is available to the consultant. A different approach is required when the team's knowledge already exceeds that of the consultant.


A good consultant weighs the costs of learning against the benefits of learning and chooses the optimal approach, carefully considering the tradeoff between short-term results and long-term results.
Innovative problem solving should generally not be used for known solutions, since that approach is inherently inefficient. Although it facilitates learning, it also maximizes the cost of learning.

People who dogmatically insist on facilitating innovative problem solving to maximize learning might force the team to reinvent the wheel when a firetruck is sorely needed. That's not helpful. It's snake oil.
There are times for learning and times for just doing. Know the difference.

Wednesday, August 10, 2016

Coach, Trainer or Consultant - a false dichotomy

There are a lot of opinions going around on the Internet concerning "coaching vs. consulting". Especially coaches who like to distinguish their position will pose this question and suggest that consulting is somehow inferior to coaching. Let's leave the emotional aspect out of this and reduce it to assumptions. In this article, I will include "training" as well, because of the significant overlap.


The model

For all three services (coaching, consulting, training), there are two sides involved: service provider and service taker (client). Both make assumptions about themselves and about the corresponding other party. For simplification purposes, we will not list all assumptions, but focus on the essentials that are related to learning.

Underlying assumptions for each role

Interpretation of the model

The first thing we should be clear about is that these are all just assumptions.
Since they are assumptions, it's good to clarify that both client and service provider have the same understanding of these assumptions beforehand, since they define expectations.
These assumptions are not axioms, since each variable can be verified objectively by asking questions and observation. Neither client nor service provider should turn any of these assumptions into dogma and insist they be true regardless of reality. You must accept that any of them may turn out invalid at any time.

Provider responsibility

For each of the three roles, the service provider is expected to have a clearer understanding of the big picture than the client. As such, regardless of whether you are coach, trainer or consultant - you need to be actively on the lookout for whether the above assumptions hold. One skill you need to bring to the table is the ability to realize when they don't, because that breaks the model of your role. When they are invalid, you must take steps beyond dogmatic insistence on the definition of your role in order to move in the direction of success.

Client responsibility

The main reason for getting help is that you don't really know what you may need to know. Your initial assumptions may be invalid because of what you did not know. Given better information, you need to adjust your course of action accordingly.


Application

Roles are really just transient. 

It's probably easiest for the trainer who provides a specific training service to stick to the agenda and simply leave. Worst case, the training did not help and the very limited few days of training are wasted. However, even trainers often add coaching techniques and modules to their trainings where they actively generate learning with their clients. In rare cases, that may turn into consulting sessions. When a trainer leaves, the client should have an appetite to try out the training knowledge and learn more.

The line is significantly more blurry for coaches and consultants.
The best thing a consultant can do is enable the client to solve the specific problem and related problems individually by producing learning within the organization. This may include domain-specific trainings in skills the consultant provides and coaching key players in doing the right thing. When a consultant walks out, the client should be able to say "From here, we can move forward by ourselves." - which is the best thing a coach would also hope to achieve.

For a coach, the main difference to a consultant is that there is no specifically defined problem initially, and that the coach is not expected to come up with a solution. However, a good coach should understand that there are situations where simply giving a directed pointer to an existing solution, in order to instill an appetite for learning and experimentation, is a good way forward. That's a training situation. Also, sometimes the coach needs to take carefully considered shortcuts in the learning process to prevent irrecoverable damage: That's consulting. Depending on where on the learning curve the team is, that can be quite a big part of the job.


Summary

Assuming that coaching and consulting are two distinct roles and that you must be either one or the other is a false dichotomy. In the same breath, it's an even worse misinterpretation to consider one of them "superior" to the other, because they simply rely on different assumptions. A good consultant will use significant coaching techniques in context, and a good coach will use significant consultative techniques in context as well. "Context" depends on observation and interpretation and is usually very mixed. Be ready to accept this mix. Your actions then also need to reflect this mix.

Being dogmatic on one specific role and insisting on the above assumptions as axioms is done only by people who are unable or unwilling to consider the systemic implications of their own actions. That's snake oil. Caveat emptor.