Thursday, November 7, 2019

The value of idle time

The Product Backlog is a mandatory part of Scrum. Together with the Sprint Backlog, they define both the planned and upcoming work of the team.
There's a common assumption that it's good to have a decently sized Product Backlog, and as many items in the Sprint Backlog as the team has capacity to deliver. Let's examine this assumption by looking at a specific event.



The "no backlog" event


It was Tuesday evening. I had just put a busy day behind me.
I was chilling, browsing the Web, when I received a message on LinkedIn. The following conversation ensued:




Mind Lukasz' last statement: "The most impressive customer service ever".
Why was this possible?

Had Lukasz contacted me half an hour earlier, this dialogue would never have happened. Why? Because I would have been busy, doing some other work. Lukasz would have had to wait. His request would have become part of my backlog.

Service Classes

There's a lot of work I am pushing ahead of me on a day-to-day basis.
But I classify my work into three categories:

  1. Client-related work - I try to cap the amount of client-related work, to maintain a sustainable pace.
    It's a pretty stuffed backlog where things fall off the corners every day.
  2. Spontaneous stuff - I do this stuff as fast as I see it, because I feel like doing it.
    The hidden constraint is that "as I see it" depends on me not being engaged in the other two types, because these take 100% of my attention.
  3. Learning and Improvement - That's what I do most of the time when not doing Project work.
    I consider web content creation an intrinsic part of my own learning journey.

These categories would be called "service classes" in Kanban.
I am quite strict in separating these three classes, and prioritize class 1 work over everything else.

Without knowing it, Lukasz hit my service class 2 - and during a time when I was indeed idle.
Since class 2 has no managed backlog, I got around to Lukasz' request right as it popped up, and hence, the epic dialogue ensued.

Service Classes in Scrum

If you think of the average Enterprise Scrum team, class 1 is planned during Sprint Planning, and class 2 activities are undesirable: all the work must be transparent in the Sprint Backlog, and the Sprint Backlog should not be modified without the team's consent - especially not if this might impact the Sprint Goal.

Many Scrum teams spend 100% of their workload on class 1, going at an unsustainable pace - even though the constantly descoped class 3 work is exactly what would future-proof their efforts.
Even if they plan for a certain amount of class 3 work, that is usually the first thing thrown overboard when there's pressure to deliver.

The importance of Spontaneity

Few Scrum teams take care of class 2 work, and Scrum theory would dictate that it be placed in the Product Backlog. This just so happens to be a reason why Scrum often feels like drudgery and developers grow uncomfortable with practices like Pair Programming.

"Spontaneous stuff" is a way to relax the mind, it helps sustain motivation and being totally uncommitted on outcomes allows creativity to flourish.



Load versus Idle Time

As mentioned, class 1 is bulk work. As workload increases, the share of class 1 activity quickly approaches 100%. If class 3 activity is still taken care of, increasing load quickly drives idle-time activity down to zero.

Since I already mentioned that idle time activity creates magic moments, both for team members and customers, high load with zero idle time destroys the "magic" of a team.
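The load/idle relationship can be made concrete with a toy queueing model. The sketch below uses the textbook M/M/1 formulas; the load figures are illustrative assumptions, not measurements from any real team:

```python
# Toy M/M/1 queueing model: as utilization (load) approaches 100%,
# idle time vanishes and the wait for any new request explodes.

def idle_fraction(utilization: float) -> float:
    """Fraction of time the server (team) is idle."""
    return 1.0 - utilization

def mean_wait(utilization: float, service_time: float = 1.0) -> float:
    """Mean time a new request waits in queue before work starts (M/M/1)."""
    return utilization / (1.0 - utilization) * service_time

for load in (0.50, 0.80, 0.95, 0.99):
    print(f"load {load:.0%}: idle {idle_fraction(load):.0%}, "
          f"mean wait {mean_wait(load):5.1f}x the work time")
```

At 50% load, a request waits about one work-time unit; at 99% load, it waits about a hundred. The "magic" of instant response only exists at the left end of that table.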

Wait Time Idleness

One source of Idle Time is Process Wait Time.
In a Lean culture, wait is seen as detrimental waste. This is both true and false. It is true when the organization doesn't create value during wait, while incurring costs. It is false when this wait is used to generate "magic moments".

Buffer Time Idleness

Both Scrum and Lean-Kanban approaches encourage eliminating idle time, as do the common "agile scaling" frameworks. Team members are constantly encouraged to pull the next item, or to help others get work in progress done faster.
This efficiency-minded paradigm only makes sense if the team controls the end-to-end performance of the process, otherwise they might just accumulate additional waste. Theory of Constraints comes to mind.

On the other hand, buffer removal in combination with a full backlog disenchants the team - there will be no more "magic moments": Everything is just plan, do, check, act.


Idle Time and Throughput

The flawed assumption that I want to address is that buffer elimination, cross-functionality and responsibility sharing improve throughput. They may well increase output, but this output will still be subject to the full lead time of every other activity.

Backlogs vs. Idle Time


Genuine idle time means that the input backlog currently has a size of zero, and parallel WIP is zero as well. There is no queue: neither work-in-progress nor work-to-do.
An idle system doesn't require queue management. When idling, the throughput time for the next request is exactly equal to its work time - the maximum throughput speed we could hope to achieve. This kind of throughput speed can look absolutely mind-boggling in comparison to normal activity cycle times.
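The arithmetic behind that mind-boggling difference is trivial. A minimal sketch, assuming work is handled strictly one item at a time and using made-up numbers:

```python
# Response time for a new request when work is handled one item at a time:
# it must wait for everything already queued, plus its own work time.

def response_time(backlog_size: int, wip: int, work_time: float) -> float:
    """Hours until a new request is done, given items already ahead of it."""
    return (backlog_size + wip + 1) * work_time

WORK_TIME = 2.0  # hours of actual work per request (assumed)

idle = response_time(backlog_size=0, wip=0, work_time=WORK_TIME)
busy = response_time(backlog_size=20, wip=1, work_time=WORK_TIME)
print(f"idle system:   {idle:.0f}h")  # just the work time
print(f"loaded system: {busy:.0f}h")  # 22x slower for the same request
```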

The impact on organizational design

A perfect organization takes advantage of the points of idle time that maximize throughput speed - rather than maximizing utilization by avoiding idle time.

The #tameflow approach suggests that you need to understand your end-to-end workflow, and prepare buffer idleness for critical activities that affect throughput time. This optimizes for flow of results rather than individual capacity utilization.


Summary

The conversation with Lukasz is an example of the benefits of having idle time in your work.
This kind of idle time allows for "magic moments" from a customer perspective.

Just imagine an organization where "magic moments" are the norm, and not the exception.
This requires you to actively shape demand: when demand is roughly equal to capacity, we can eliminate backlogs.
Demand queues destroy the magic.

Eliminate the queues. Make magic happen.


Saturday, November 2, 2019

Health Radars are institutional waste!

There's a recent trend that organizations transitioning to agile ways of working get bombarded with so-called "health checks" - long questionnaires with dozens of questions that need to be filled in by hundreds or maybe even thousands of people in short cycles. They deprive organizations of thousands of hours of productivity, for little return on this investment.

Radar tools are considered useful by consultants with little understanding of actual agility.
My take is that such tools are absolute overkill. Here is what you can do instead - to save time and effort, and get better outcomes.




The problems of health radar tools

Health radars are deceptive and overcomplicate a rather simple matter. They also focus on the wrong goal.
A radar is only helpful when things are happening beyond what you could otherwise see.
If an organization wants to be agile, the goal should be to improve line of sight, not to institutionalize processes which make you comfortable with poor visibility.

The need for a radar reveals a disconnect between coaches/managers and the organizational reality.

Early transition radars

When an organization doesn't understand much about agile culture and engineering practice, you don't need a health radar to realize that this isn't where you want to be: time-to-market sucks, quality sucks, customer satisfaction sucks, morale sucks. No tool required.

Initial health radar surveys usually suffer from multiple issues:

  • Culture: Many traditional enterprises are set up in a way that talking about problems isn't encouraged. The health radar results often look better than reality.
  • Dunning-Kruger effect: people overestimate their current understanding and ability, and consequently overrate it in the survey.
  • Anchoring bias: the presented information is considered far more reliable for decision making than it is.

I don't think it needs much further explanation why taking a health radar under these conditions can actually be a threat, rather than a help.

Repeat surveys

The next problem with health radars is that they are usually taken at cyclical intervals, ranging from monthly to quarterly. People quickly get bored of answering the same fifty questions every month (oddly enough, agile development would encourage automating or entirely eliminating recurrent activity!).

Frequently repeating the surveys thus suffers from:
  1. Disconnect between change and data: Especially in slow-moving environments, the amount of systemic change that warrants re-examination of the state tends to be low, so the amount of difference over time that can actually be attributed to actual change in the system is low. 
  2. Insignificant deltas: Most change actions are point-based optimizations. Re-collecting full sets of data will yield changes that are statistically insignificant in the big picture.
  3. Fatalism: When people see that there are dozens of important topics to be changed, and that progress is really slow, they might lose hope and be less inclined to make changes.
  4. Check-the-box errors: With increasing frequency of surveys, more and more people will just check some boxes to be done with it. The obtained data is statistically worthless and may even require additional effort to filter out; the reduced sample size, in turn, lowers the accuracy of the remaining data.
These are the key reasons why I believe that constantly bombarding an entire organization with health radars can actually be counterproductive.
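The sample-size point in item 4 can be quantified: the margin of error of a surveyed proportion shrinks only with the square root of the number of honest responses, so filtering out box-checkers hurts precision more than one might expect. A sketch using the standard 95%-confidence approximation, with invented response counts:

```python
import math

def margin_of_error(n: int, p: float = 0.5) -> float:
    """Approximate 95% margin of error for a surveyed proportion."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# e.g. 400 honest answers vs. 100 left after box-checkers are filtered out
for n in (400, 100):
    print(f"n={n}: margin of error +/-{margin_of_error(n):.1%}")
```

Going from 400 to 100 usable answers doubles the margin of error, so the small deltas between survey rounds drown in noise.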


A much simpler alternative

With these four rather simple questions, you can get a clear and strong understanding about how well a team or organization is actually doing:


Sometimes, those questions don't even need to be asked: the answers can be observed, and you can enter the conversation by addressing the point right away.

The four questions

To the observant coach, the four questions touch four different domains. If all four of these domains are fine, that raises the question: "What do you even want to change - and why?" - and taking a Health Radar survey under these conditions would not yield much insight, either.
Usually, however, the four questions are not okay, and you can enter into a conversation right away.

1 - Product Evolution

The first question is focused on how fast the product evolves.
If the answer is "Quarterly" or slower - you are not agile. Period.
Even "daily" may be too slow, depending on the domain. If you see inadequate evolution rates, that's what you have to improve. 
And don't get misled - it may not be the tool or process: it may be the organizational structure that slows down evolution!


2 - User attitude

The second question is focused on users.
If the answer is, "We don't even know who they are" - you are not agile. Period.
Some teams invite selected users to Reviews, although even this can be deceptive - an informal chat with a real user outside a meeting can be far more revealing.


3 - Developer attitude

The third question is focused on members of the development organization.
If the answer is anywhere along the lines of "I'm looking for job offers" - you are not agile. Period.
Sustainable development can only be achieved when developers care about what they do, are happy about what they do and willing to take the feedback they receive.


4 - Continuous Improvement

The fourth question is focused on how improvement takes place.
If the answer is along the lines of "We can't do anything about it" - you are not agile. Period.
People need to see both the big picture and how they affect it. The system wouldn't be what it is without the people in it. The bigger people's drive to make a positive impact, the more likely the most important problems will get solved.

The core of the matter is what people do when nobody tells them what to do. Until people have an intrinsic drive to do the right thing, you're not going anywhere.

The conversation

Depending on where you see the biggest problem, have a conversation about "Why": "Why are things the way they are?" - "Why are we content with our current situation?" - "Why aren't we doing better?" - "Why do we even want to be agile if we're not doing our best to make progress here?"

People can have an infinite amount of reasons, so this is the perfect time to get NEAR the team and their stakeholders.

Following up

The follow-up set of questions after a prolonged period can be a series of "What" questions: "What's different now?" - "What have we learned?" - "What now?"



Summary

Drop the long questionnaires. They waste time, capacity and money. 
Learn to observe, start to ask questions. Reduce distance in the organization.
You don't need many questions to figure out what the biggest problem is - and most of all, you don't need to "carpet bomb" the organization with survey forms.  Keep it simple.


Often, people know very well what the problems are and why they have them. They just never took the time to get things sorted. All you need to do is help them in understanding where they are and discovering ways forward. 

Monday, October 14, 2019

Let's talk about Demand

While "Agile Development" focuses exclusively on supply mechanics, an understanding of demand mechanics is essential to managing sustainable product development - agile or non-agile, hence I want to lay a foundation of the effect demand has on a development organization.


Synopsis
A basic understanding of the economic dynamics of Supply and Demand is essential to succeed with product development. As development capacity is a limited resource, free market dynamics do not apply to product development. Without understanding and managing demand, agile product development sets itself up for failure, yet few (if any) agilists understand "demand". This article provides a basic explanation of the impact demand has on your development organization - and how you can use this knowledge to increase your odds of succeeding.


Demand in economics

There are huge misunderstandings in the agile community as to what "demand" is - down to the point where people claim that demand management is nothing other than working the Product Backlog. Ever since ITIL, a lot of IT people seem to confuse "Demand Management" with "Requirements Management" - and conflate "demand" with "requirements", or, in agile lingo, "user stories".
Let's get back to basics for a bit.

The most fundamental concept of market economics is Supply and Demand. In a free, unrestricted market, this is the gist of Supply and Demand:

Where supply and demand meet, there's an equilibrium

"Demand" is simply how much of a given good or service people request - and "Supply" is how much of that will be available. There is also "aggregate supply" and "aggregate demand" for the whole range of good or services within an economy.

This model leads to some very intuitive statements:
  • As price decreases, demand increases. People demand more at a low price.
  • As price increases, supply increases. People are more willing to offer things at a high price.
  • At some point, there's a market equilibrium: supply meets demand.
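With linear toy curves, these three statements and the equilibrium can be computed directly. The coefficients below are arbitrary illustrations, not data:

```python
# Linear toy model: demand falls with price, supply rises with price.
# The equilibrium is the price where the two quantities are equal.

def demand(price: float) -> float:
    return 100 - 2 * price  # people request less as price rises

def supply(price: float) -> float:
    return 4 * price        # suppliers offer more as price rises

# Solve 100 - 2p = 4p  ->  p = 100/6
eq_price = 100 / 6
print(f"equilibrium price ~{eq_price:.2f}, quantity ~{demand(eq_price):.1f}")
assert abs(demand(eq_price) - supply(eq_price)) < 1e-9
```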

To make this article simpler, from now on I will take "good" to mean "any kind of service provided by IT development", "quantity" to mean "the amount of this good [created or requested]", and "price" to mean "the cost of that good". I will ignore whether that cost is opportunity cost, total cost of ownership or something else - that, too, might be a topic in and of itself. For now, just think of the most comprehensive definition of "price" you can imagine.

Macroeconomic concepts cannot be applied to a division of an organization without further examination of the existing conditions, but that examination would go significantly beyond a short blog article. To keep matters simple: the development organization (typically IT) offers certain services ("supplier") to the rest of the business and to customers ("consumers"). From there, we get into the nitty-gritty that may be worth another article.

In this context, "demand" is simply everything that consumers want, which the supplier could provide. There is no constraint that "demand" needs to be known or voiced. It can only be itemized once it's known and articulated. Therefore, demand starts long before a specific item appears in a Product Backlog.

Scarcity and Supply Limits

When a resource (in this case: time) in the production of a good is limited, this generates "scarcity": the quantity of the good is capped by the supply limit of the resource.

Beyond the supply limit, price is irrelevant. The demand will not be met.

On the right side of the supply limit, price and demand are irrelevant: there is no way to meet the demand - it's economically impossible. Just imagine that you wanted two suns in the sky - there is only one, so it's simply irrelevant how much you would pay for a second: there won't be another.

We see the same thing happening when a development team (or: the development organization) has a fairly fixed size. You can slice it however you want, a day still has only 24 hours and 10 people are still only 10 people, so there is some kind of limit in place.

Let us simplify our model once again and ignore the possibility of hiring additional people. As "The Mythical Man-Month" and "The Phoenix Project" describe quite appropriately, there is no guarantee that additional headcount will increase the supply limit. Even if we could increase headcount and developers were fungible, there would still be diminishing returns and a scarcity of available developers - another topic that we can wonderfully descope for another article.


Increasing demand

As our business grows more successful over time, the demand for goods from the development organization increases. Whereas in the initial phase of our business, developers were experimenting a lot and had a fair amount of time to figure out the best solution, increasing demand brings development ever closer to its supply limit.

While a low demand can be met at a low cost, there is no way to meet the high demand beyond the supply limit that many development organizations experience.

As we get closer to this supply limit, businesses get the impression that their development becomes slower, less flexible and disproportionately costly. Close to the supply limit, the marginal price increases dramatically, with ever diminishing returns on investment.

As demand increases, cost increases significantly faster than quantity

The average unit price of getting a new feature also goes up - not because development cost went up, but because of increasing competition over a limited good. Notice that this is demand competition, not supply competition. Demand competition means that people will use their influence and available means to get what they want, so the people with less influence or money will not be supplied.
Or, to translate the impact of demand competition into the world of product development: At some point, the development organization will no longer be able to serve everyone.

If we now translate this into the world of software again: as businesses become more successful, their demand for IT continuously increases. When a specific request arrives in the development organization, it's either at a point on the demand curve where supply can meet demand - or it isn't. If it is not, then it will not be in the future either, because demand is moving up ...
A very simple rule of thumb in a limited supply system is: Unless demand decreases at some point, if you can't get it now, you never will.
... unless...
... demand decreases!
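The rule of thumb can be demonstrated with a trivial simulation: whenever arrivals exceed completions, the backlog of unserved requests grows without bound, so a request that can't be served now only drifts further away. The rates below are invented:

```python
# When demand (arrivals) exceeds supply (completions), the backlog
# grows linearly and never recovers on its own.

ARRIVALS_PER_WEEK = 12  # requests coming in (assumed)
DONE_PER_WEEK = 10      # requests the team can finish (assumed)

backlog = 0
for week in range(1, 53):  # one year
    backlog += ARRIVALS_PER_WEEK
    backlog -= min(backlog, DONE_PER_WEEK)

print(f"backlog after a year: {backlog} requests")  # grows by 2 every week
```

With a mere 20% over-demand, the team ends the year more than ten weeks of work behind, and the gap keeps widening.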

Demand Inflation

Which it won't, because everyone intuitively figures out this rule of thumb. As a consequence, we see a follow-up phenomenon: when stakeholders start to realize that their requests aren't being met, they try to get the development organization to commit to delivering as many goods as early as possible, because later, it will be even harder to get this commitment!

This is the point where we enter the vicious circle of starting as many initiatives as possible, irrespective of whether there is enough capacity to complete these initiatives. The capacity problem is delegated to the product organization once there's commitment to deliver upon a certain request.

This vicious circle can't be broken on the supply side

The mechanics we see at work here are exactly the same mechanics you saw during the global banking crisis: as soon as a bank declares that it can't pay back everyone's deposits, everyone will try to be the first to secure as much of their own deposit as possible.
The announcement that "we can't serve everyone" is enough to start a spiral so powerful that it can destroy corporations and topple governments - what makes you think that your Product Owner will fare better?



Demand Management

A missing piece in the typical agile approach is that typical IT people understand little to nothing about supply and demand - and even less about demand management.
Demand management isn't the exercise of itemizing and keeping an inventory of "user stories" or other backlog items, and determining which items get implemented next (and which won't get implemented at all).
The very minute a stakeholder gets the idea that maybe the development organization can't do it all, you start a "bank run", inviting all stakeholders to flood you with as many requests as possible - worsening your problem rather than fixing it!

Demand Shaping

A terrible misunderstanding propagated by many agile coaches and trainers is the idea that it's enough to manage the supply side of product development: have a mechanism to determine the priority and content of the product backlog, then be vocal about saying "No" to requests for goods that have no chance of being implemented.
Simply managing the product backlog comes too late in the process and is a local optimization.

Let me give you an example of what "demand shaping" means:

The Diner Metaphor

You go into a Mexican diner to have lunch. The diner has already shaped your demand long before you looked at the menu.  You would be irrational to request breakfast for lunch, and you would be even more irrational to request a sushi platter there.
The restaurant probably won't have to tell anyone that both of these requests won't be served, because nobody expects to get served these dishes to begin with. Nobody would feel offended when the waiter tells a person with a sushi craving that there's an excellent sushi bar down the road.

If the diner is high class, they will not serve you a "product backlog", or a list of 150 different dishes that you can choose from. Their menu will contain maybe four to ten items, all of which the restaurant manager knows to be fast and profitable to produce at a high quality.

The next step of demand shaping happens when you approach the door and see the giant sign "Lunch Menu $6.99" with some really tasty-looking food imagery. At this point, most customers will already have lost their interest in the menu and place their order. The placement, the imagery and the price tag are enough to reduce the demand for other items by a good 80% - in Pareto Principle terms, the sign plus their brand is enough to give the diner what can be called "demand control".

At the same time, the manager will constantly track which dishes get ordered and if there are new trends on the market, without any direct intention to provide any of these. Even if the customers are aware of this, they have no way to influence the manager's decisions as to what the future menu will look like beyond what they choose to order. Next week's special offer and menu will be optimized to maximize sales, while minimizing effort, and if the manager makes the right predictions, customers will be happy even though they were never asked for their opinion.
This is called "demand monitoring" and "demand prediction".

The failure of IT Development

Whereas the Mexican Diner in our example made it clear that they're not a Sushi Bar, a typical company's IT department wants to have full and exclusive control of everything pertaining to Information Technology.
All of this operates under the assumption that the demand on a development organization can be met by the available supply.
Once demand exceeds supply, which is a natural consequence of having a successful product with unshaped demand, the organization must choose which demand it will serve and which it won't.

There are multiple strategies for dealing with over-demand:
  1. Start the "Demand Inflation" spiral and get into a neverending battle about priorities.
  2. Encourage a "Bank Run", which will destroy the credibility of your organization.
  3. Start "Demand Shaping" to reduce over-demand.
  4. Specialize. Stop claiming to do everything. Let others do what they can.
Smart organizations, irrespective of their development approach, will pick option 3 to reduce demand as far as possible, then pick option 4 once that is no longer sufficient.

Agile Frameworks like Scrum and SAFe will lead unsuspecting Product Owners/Managers into the trap of picking options 1 and 2, which may eventually lead to a failure of the entire organization.

The meta-failure of "Agile"

Managers who look towards "Agile" as a way of improving Time-to-Market and Quality easily get swayed by the claims that "Agile" will improve these metrics while also increasing employee engagement and customer satisfaction. Hence, they transition their organizations towards agile ways of working. Product Managers get re-educated to follow "Agile Product Management Practice", which focuses exclusively on short-term supply management and entirely ignores long-term demand management.

As a sweeping statement, agilists with an IT background are terrible at understanding market dynamics and the nature of demand. They only understand the supply side, as that's what they're typically working on. Hence, they can't help on the demand side. Letting a myopic supply-centric approach become your product strategy will be disastrous.

If your "Agile Transformation" has no demand management strategy, or if you think that "let's put everything into a backlog, then pull the most valuable items first" is a demand management strategy, you are going to shipwreck! The bank run is inevitable.


Conclusion

"Agile frameworks have little to nothing worthwhile to offer in terms of demand management. They focus exclusively on managing the supply."
Understanding and managing demand is an important part of running a sustainable development organization. Claims that demand control is not required in an "Agile" environment are myopic, made from ignorance - and can lead to catastrophic outcomes in the long term.

Organizational managers and product people alike need to understand demand management principles and practices to steer a product towards sustainable success.  An understanding of the "market" created and engaged by the development organization is essential to determine what the best next steps are.
In some cases, backtracking and demand reduction may be required before one can even begin working with a Product Backlog. Failure to understand this may result in the entire "Agile Transformation" becoming a no-win scenario for all people involved.


To Explore

This article relies on a lot of shortcuts and makes a number of strong assumptions.
A number of additional items still need to be explored, and I invite the reader to do some of this exploration on their own until I have time to provide additional material.
These items are:

  • The applicability of Keynesian Economics to [IT] product development
  • The definition of a "good" in the context of [IT] product development
  • The definition of "price" / "cost" in the context of [IT] product development
  • The Supplier / Consumer Relationship in the context of [IT] product development
  • The interaction between product development and non-developmental IT
  • Why "Development Manpower" is not a fungible good
  • Why a "Supply Limit" exists
  • The causes of demand increase in product development
  • The causes for starting a bank run on product development
  • The effects of a bank run on product development
  • Demand shaping in software development
  • The effect of demand shaping on development performance
  • The effect of having "cross-functional teams" on the complexity of the work
  • The trade-offs, benefits and disadvantages of specialization







Thursday, October 10, 2019

Why "Agile Development" will not solve your problem

Do we even need "Agile" to improve time-to-market, reduce cost and increase success rates? After reading this article, you may have doubts.

Do these statements sound familiar to you?
  • Software Development is too expensive
  • We are lagging years behind the demand.
  • Too many initiatives (projects etc.) fail.
The solution? Scrum. No, just kidding. Scrum doesn't have a solution there. Neither does LeSS or SAFe. Or XP, or any other "Agile Framework" that focuses on the development part of an organization. The solution rests in how we think about organizing our products long before developers actually get involved.
Too many "Agile Transformation" initiatives are focused on the output of software development and forget the big picture of the system in which they are operating. 

The Product Development Funnel

Take a look at this abstract development funnel. Every organization works somehow like this, while the slope of the funnel and the processes in each stage may vary widely. 


Let us examine the core concepts of this funnel.
To keep this article at least somewhat simple, let us equate "product" and "product portfolio" - because from a certain level of abstraction, it's irrelevant whether we have one product or an entire portfolio thereof: both capacity and capability are still finite.
Likewise, let us ignore the entire dispute around Lean/Agile vs traditional program and/or portfolio management, because irrespective of how you do it, you are still dealing with demand, choosing opportunities and prioritizing effort.

Customer Demand

Whether our customers are people out there on the market who purchase our product, or internal stakeholders of our product, there are always unmet needs. The amount of unmet needs that our product could meet is the amount of potential work that is still required, and that's potentially infinite - unlike our funding and capacity.

At the expense of taking a grossly oversimplified definition here, let us define "Marketing" as the effort invested into making others aware of potential needs and wants - and in return, becoming aware of what these are.

Oddly enough, even delivery is a marketing activity in some sense: as people use the product, they become more aware of additional needs. Hence, the old economic rule, "supply creates its own demand", ensures that the demand part of the funnel will never dry up in a well-run product.

Development portfolio

Every demand that's deemed sufficiently important will eventually end up in some kind of portfolio, the known list of undone upcoming work. This is where the first stage of filtering occurs. Filter mechanisms include product strategy, economic viability or even internal politics. The specific reasons for filtering are irrelevant in this context.

Every item that lands in the portfolio will subsequently put stress on the development organization. In an odd twist of fate, the worse an organization's ability to manage the portfolio, the lower the satisfaction with the developed product will eventually be, irrespective of how good or bad development was.
The reason is simple: once the capacity of the development organization is exceeded, accepting further demand will not yield better outcomes.

Running Initiatives

Once a portfolio item has been scoped for development, it will become a development initiative. Organizations do have this habit of finding it easier to start initiatives than to complete them. Every initiative started will eventually result in one of three states: Live (successfully delivered), Dead (descoped without delivery) - or Undead (lingering in a limbo state where people are waiting for results that may never come).

Let's briefly examine Live and Dead: An initiative that goes "Live" is the only type of initiative that generates something called "Business Value", or even a potentially positive Return on Investment. It's therefore highly desirable to get initiatives live.
A Dead initiative is one that was terminated before it generated business value. Every cent spent on dead initiatives was waste. Still, completing an initiative with a negative ROI is just throwing good money after bad, so killing such initiatives remains the better alternative.

The biggest problem, though, is neither Live nor Dead initiatives - it's the Undead: An organizational culture that lacks the willingness to kill initiatives that have exceeded their life expectancy will generate and accumulate undead initiatives.
The problem with these undead initiatives is that they drain your organization's energy. People spend time and effort working on and tracking the Undead, and this time isn't available to get other initiatives Live - so they turn otherwise healthy initiatives into Undead as well. Talk about a Zombie Apocalypse!

The health of a product development organization can easily be measured by how many Undead Initiatives it permits - every initiative ever started that was neither clearly terminated nor completed. The easiest, yet often most uncomfortable, way to improve this health is to take inventory and kill all the zombie initiatives.


Delivery Capability

Delivery capability is the one place where the common "Agile Transformation" focuses. The expectation is that somehow, magically, the adoption of daily standups, cadenced plannings, reviews and retrospectives will improve performance. It doesn't. (At least, not significantly - unless your developers are really stupid, which I claim they aren't.) On a more abstract level, the expectation is that this approach will somehow improve the ratio of met versus unmet demand. It doesn't, either.

It might thus look like a good idea to increase the capacity of the development process, i.e. add more developers or provide better tools and extra resources to make developers more efficient. It isn't.

What sounds good in theory often results in proportionately increased expectations: When development capacity is increased through additional funding, the organization will expect to be able to launch more initiatives - and thus, the problem persists. It might even get worse, but that's another story, to be told at another time.

Retrospectives and change initiatives focused on "how can we do better in delivering" might completely miss the point if the development organization is in a state of continuous overburden: the overburden will never go away as long as more demand enters the system than what can be delivered.

All these frequent events accomplish is that the constant pressure to deliver creates psychological stress until developers eventually burn out: development performance may even decline!


Let's talk about waste

In this section, let us move backwards through the funnel, as the waste accumulates throughout the funnel, like a clogged pipe. We have to unclog it where the blockage is caused, rather than where it occurs.

Wasted development effort

What should developers spend time on? Developing working solutions, obviously. Yet, developers often spend significant portions of their time doing things not related to this. Let's take a crude tally:
  • Status meetings for all those running initiatives - especially the zombies.
  • Task switching effort between those initiatives. 
  • Waiting for input on an initiative.
  • Catching up with changing requirements in an initiative.
The sum of all the effort invested into these items is the "low hanging fruit" in any optimization strategy. Reducing the amount of running initiatives reduces the amount of wasted development effort, thus increasing available capacity to get meaningful work done.

Kanban addresses this by limiting Work-in-Progress (WIP), although this only helps when we limit WIP at a higher level, that is: we must control the influx of initiatives, not the amount of development tasks, in order to gain any benefits.
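To make the effect of an initiative-level WIP limit tangible, here is a minimal sketch with purely hypothetical numbers: the same fixed capacity delivers the same total work either way, but limiting concurrent initiatives gets them live far earlier on average.

```python
# Purely hypothetical numbers: same capacity, same total work,
# with and without a limit on concurrently running initiatives.
CAPACITY = 4   # work units the organization finishes per week
SIZE = 12      # work units per initiative
N = 8          # number of initiatives

def finish_weeks(wip_limit):
    """Spread weekly capacity evenly across at most `wip_limit` open initiatives."""
    remaining = [SIZE] * N
    done = {}
    week = 0
    while len(done) < N:
        week += 1
        active = [i for i in range(N) if i not in done][:wip_limit]
        share = CAPACITY / len(active)
        for i in active:
            remaining[i] -= share
            if remaining[i] <= 0:
                done[i] = week
    return done

everything_at_once = finish_weeks(wip_limit=N)  # all 8 in parallel
limited = finish_weeks(wip_limit=2)             # at most 2 at a time
print(sum(everything_at_once.values()) / N)     # average go-live: week 24.0
print(sum(limited.values()) / N)                # average go-live: week 15.0
```

With full parallelism, every initiative crawls along and nothing goes live before week 24; with a WIP limit of two, the first initiatives are live in week 6 and can start generating business value while the rest are still in progress.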

Scrum addresses this by limiting the amount of items a team takes on during a Sprint - however, this is not a fix when more portfolio initiatives get started than completed: the consequence is an ever-growing product backlog. The problem is hidden, not solved.
In the ideal Scrum, the Scrum Team has full control over the entire product development funnel. Since this would require the organization to have moved from cost accounting to throughput accounting, from funding work to funding teams, and towards an integrated cross-functionality that puts business acumen into the Scrum Team, this is a type of Scrum most organizations don't have. Again, this implies that the improvement potential doesn't lie in delivery.

Coordination Waste

The second stage of organizational waste occurring in product development is the coordination of initiatives. That's where we get project managers and matrix organizations from, and it's the reason why developers' calendars are cluttered with appointments. Every initiative has someone in charge - for simplicity's sake, let's call them the "Initiative Owner". This Initiative Owner wants to be informed about what is going on and will make sure that their initiative gets due attention.

If there is only one initiative, then all of an organization's efforts can focus on completing this one initiative.
Add another initiative, and the organization has to coordinate the efforts between those initiatives - and solve blockages in either initiative without blocking any other work. Since every pair of initiatives is a potential coordination path, the effort required for these activities grows quadratically with each additional ongoing initiative.
This coordination overhead would not even exist if there were only one initiative. The time and money sunk into coordinating parallel initiatives is pure waste.
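The growth of this overhead can be illustrated with a small sketch (hypothetical, counting only pairwise coordination paths between initiatives):

```python
# Hypothetical illustration: with n parallel initiatives, every pair
# of initiatives is a potential coordination path - n * (n - 1) / 2 of them.
def coordination_paths(n):
    return n * (n - 1) // 2

for n in (1, 2, 4, 8, 16):
    print(n, coordination_paths(n))  # 1→0, 2→1, 4→6, 8→28, 16→120
```

Doubling the number of parallel initiatives roughly quadruples the coordination paths - and this still ignores the three-way meetings.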

At this point, we already need to ask the question why we see value in having multiple parallel initiatives - and why we believe that the value of starting another initiative outweighs the waste caused thereby. Likewise, we can ask the question whether the reduction of initiatives reduces the waste by a level that warrants deferring, descoping or even discarding one or more initiatives.

As coordination waste acts multiplicatively on outcome throughput, the lever of reducing coordination waste outweighs the lever of optimizing team-level productivity.

Budget Waste

Every organization determines one way or another what will be developed. Regardless of whether this happens via a Product Owner prioritizing an item in a product backlog or a SteerCo starting a traditional project, at some point a decision is made to start a new initiative. In one form or another, the organization makes an investment decision at this point: budget is allocated to the initiative. The biggest waste of budget is to start an initiative that doesn't get completed. Any initiative started without sufficient delivery capacity to complete both the currently running and the new initiatives will predictably induce waste into the organization.

The simplest form of reducing budget waste is by deferring the decision about starting an initiative until there is sufficient free capacity within the organization to complete this initiative without impacting any other initiative.

As budget waste yields coordination waste, and coordination waste yields capability waste, the lever of reducing budget waste is even stronger than that of reducing coordination overhead.

Marketing waste

Probably the most well-hidden form of waste is demand waste, or "Marketing waste". Any form of demand that isn't eventually met by the product generates waste: the discovery, itemization and exploration of this type of demand generate cost without value. Having more demand than one can meet is only good insofar as it creates options - yet one needs to be careful lest one lose focus amid all these options. There's even the economic dilemma of demand-pull inflation, where the mere prospect of additional demand increases cost, but that's another story, to be told another day.

The simplest way of decreasing marketing waste is by limiting demand to a level where supply is at least still sensibly correlated to demand, and the organizational processes of managing demand are low effort. 

Marketing waste propagates into the product organization at full force: Poorly defined or invalid value hypotheses block scarce downstream capacity, while an oversupply of demand leads to pressure on the system. 

Limiting the waste

As heretical as this sounds, the most effective way of improving a development organization has nothing to do with improving development - and that's where all the team level agile approaches go wrong.

The most effective strategy for increasing effectiveness is by pulling the longest levers first:
  1. Limit the amount of initiatives within the development organization. To achieve this:
  2. Limit the influx of approved initiatives. To achieve this:
  3. Limit the rate at which demand is translated into initiatives. To achieve this:
  4. Limit the influx of demand into your organization.
"Do not try to do everything. Do one thing well."
- Steve Jobs

Here is an overview of what an organization's development funnel would look like if all settings were optimal:


The optimization in this context focuses on reducing the Organizational Waste not related to delivering Useful Products, and on the decrease of Opportunity Cost by eliminating the waste associated with pursuing low-value demand.

When we take into account Little's law, we realize that this optimization approach achieves:
  • Reduced cost of meeting demands ("feature cost savings")
  • Reduced Time-To-Market for features
  • Increased Success rates of initiatives
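Little's law itself is simple arithmetic: average lead time equals average work-in-progress divided by average throughput. A toy sketch with hypothetical numbers:

```python
# Little's law, sketched with hypothetical numbers:
# average lead time = average work-in-progress / average throughput.
def avg_lead_time(wip, throughput_per_week):
    return wip / throughput_per_week

# Same delivery rate, different amounts of parallel work in the funnel:
print(avg_lead_time(wip=30, throughput_per_week=2))  # 15.0 weeks to market
print(avg_lead_time(wip=6,  throughput_per_week=2))  # 3.0 weeks to market
```

Since throughput is capped by capacity, the only lever that shortens time-to-market in this equation is reducing the work in progress.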
Depending on how significant the associated waste and opportunity cost within an organization currently is, the leverage may be massive, while the organizational change (staffing, training, skill distribution etc.) is insignificant. All that's required is - thinking differently!

And the funniest part - we haven't even touched development!

Conclusions

1. Think about what you really need to optimize before looking at "Agile Frameworks" as a solution. 

2. Change your thinking before optimizing the work.

3. Optimize what you work on before optimizing how you work.

Friday, October 4, 2019

A few thoughts on SAFe


I often get asked what my general stance on this framework is, hence I would like to share my personal opinion with you.
The short answer is: “We don’t live in Cockaigne. Making the world slightly better sure beats dreaming up how it would be perfect. Still, that’s no excuse to get off on the wrong foot.”
The long answer is this post.

Why use SAFe?

As an enthusiastic agilist, I sometimes get asked how I can support this framework. To me, that’s quite easy to answer – although the answer decomposes into a large number of facets.

Premade decisions

Some organizations have decided to implement SAFe before I get involved with them. I see it as my responsibility to help organizations find better ways of working, and regardless of where they stand, I can do this. SAFe can be a vehicle for simplifying portfolio processes, adopting more reliable development practices, removing pointless status meetings and many other things. Is that local optimization? Could be. But it makes people’s lives better.

Where SAFe helps

I know that many agilists will cringe at the idea of actively suggesting SAFe. I don’t see SAFe itself as the problem nearly as much as the wrong expectations associated with it. Depending on where you currently stand, SAFe can be a stepping stone to a simpler, more effective organization with faster, more flexible decision processes. Which portion of SAFe is needed for that? That totally depends on which problem you want to solve. Don’t use a 40t truck to bring a fork from the kitchen to the dinner table – but when you’re stuck with a gigaton of organizational process mud, Scrum just won’t cut through the problems faster than they accumulate.

Getting out of discussion loops

SAFe’s high market penetration and widespread acceptance cut short a lot of discussions about whether certain concepts are “esoteric” or “applicable in Enterprises”. Pointing to SAFe as a resource collection can break stalemates caused by people who haven’t yet spent time familiarizing themselves with the lean and agile mindset. Probably the most common misconception – and one that’s incredibly difficult to crack without a chance to demonstrate otherwise – is that teams need to be told and controlled in regards to “what to do when” in order to get results.

The SAFe Big Picture

I think that SAFe’s Big Picture is a stroke of genius, because it’s a handy diagram that can be used as a discussion starter to point out what is currently amiss, overcomplicated or ill implemented. Having a deep enough understanding to explain these concepts to the client is – in my opinion – essential to ensure that we don’t just implement something that meets the form without actually solving the problem at hand. Implementing SAFe as per the picture is an entirely different issue.

SAFe concepts

True to one of Dean Leffingwell’s favorite Bruce Lee quotes, “Take whatever works, and take it from wherever you find it”, there are a lot of great ideas and concepts included in SAFe. While steering clear of mindlessly applying changes to an organization in places where it doesn’t help, SAFe’s comprehensive practice catalog is a great way to focus discussions with people who have never heard about these concepts before.

Common practice

Arguably, there’s nothing new in SAFe. Everything is “common practice” that has been used time and again in many organizations. Applying SAFe principles and practices will make you neither an industry leader nor a thought leader, but it may get companies unstuck and modernized. This, I think, is the main value of SAFe.

What to expect from SAFe?

Expectation management is an important aspect of any change initiative. Agilists who expect a simple, flexible organization where teams have full technical and procedural autonomy will find themselves sorely disappointed by SAFe – because environments where that makes sense aren’t the target audience for SAFe.
SAFe addresses complex organizations where teams are bound by higher-order dependencies, either due to the complexity of the product itself or due to the sheer size of the value stream. A single team can easily build a web platform used by millions – and still, that same team would find itself overwhelmed if said platform were just part of a much larger ecosystem: There’s a reason why companies like Amazon have more than ten developers.
Staying in the Amazon example, that’s an example of a company with Information Technology in their DNA – they know how to create software, build digital value streams and decouple subsystems. Many enterprises have a DNA where IT is just a fulfilment agency of non-digital business models. It would be a category error to treat these enterprises exactly like a Digital Unicorn. SAFe won’t get you there, either – what it can do, though, is set the change process in motion.

SAFe and the meta endgame

Let’s talk about the digital endgame for a bit. Many organizations struggle with survival. Former market leaders fade to insignificance or disappear into bankruptcy because they don’t understand digital product development. Talk about Kodak, Sears, Blockbuster or recently Thomas Cook. Non-digital giants are in their own endgame already. Unless they change massively, they will disappear.
For such organizations, a good implementation of SAFe can bring them closer to finding their place in a constantly disrupted marketplace. Then, they must move on lest they get stuck in a new status quo.

SAFe and success

A SAFe organization would approach enterprise initiatives differently than traditional Project / Program / Portfolio management. Cutting beyond labels, an “Epic” or “Initiative” (whatever agilists prefer) is still a kind of project. An ill-defined Epic won’t fare better in the face of change than a traditional project. There’s a lot to be said about “How” and “Why” we would even want to use an Epic Portfolio. Organizations that fail to address how they go about scoping, budgeting or implementing software will find very limited benefit in SAFe.
To be more successful with SAFe, it has to go far beyond IT. Portfolio management happens at senior management level, and it must be aligned with non-IT business units. Finance must be on board - accounting must change. We must move away from defining scope and content upfront and become rigorous at examining incremental value and axing initiatives when a value hypothesis turns out to be invalid. This requires a level of courage that many organizations lack. Culture must change as well.

SAFe and developers

A common gripe that many developers have with SAFe is that it leads to higher pressure and doesn’t address the fundamental problems. They therefore claim that either nothing has improved or things got even worse. From a developer’s perspective, and having seen many poor SAFe implementations myself, I could even agree.
I need to put this into perspective, though: Some companies I worked with will state that SAFe has cut their time-to-market in half and significantly reduced the failure rate of critical initiatives. What the developers don’t see: these successes, totally outside their field of vision, have secured their paychecks for many months.
To make SAFe a positive experience for developers, the organization must work on many topics that may elude many managers: putting developers in control of their own work, providing clear, transparent objectives and meaningful work, improving the working environment and becoming an attractive employer across the board. This must be scoped into the transformation as well.

SAFe and massive change

Put bluntly, if you don’t require massive change, SAFe probably isn’t for you. And with this, I don’t mean a giant (maybe intercontinental) shuffling of the Org Chart. SAFe requires you to make drastic changes at every level, in every way. The organizational change is the easy, simple – and shockingly – insignificant part.
You have to rethink what value is, how you create it, how your organization supports it. You have to rethink which structures work, which don’t, what is local optimization and what is global. You have to rethink the importance explicit, implicit and tacit knowledge have for you. You have to rethink what “knowledge work” is and how you treat your workforce. You have to rethink how management works. How leadership works. How accounting works. How controlling works. How customer satisfaction in a digital environment works. How small things affect the Big Picture. And you have to put all of these pieces of the puzzle together to end up with a sustainable organization.

Key Challenges

SAFe has a number of challenges to overcome that aren’t automatically addressed by the implementation – they are inherent to how traditional organizations tick. I believe that these challenges can be overcome by the right people, given time and patience.

Break the Glass Ceiling

SAFe’s Big Picture is palatable to people who have zero agile experience, and it can be used to provide a satisfactory answer to every potential question one could ask. This gives decision-makers confidence that the approach will work. Unfortunately, it gives them so much confidence that they will gladly delegate the transformation to members of their organization or to engaged consultants. This delegation means that “the glass ceiling” is often maintained, i.e. senior management observes rather than changing themselves.
The solution? As a manager, get involved. Be part of the transformation: “be the change that you want to see in the world”.

Mind the Context

To quote H.L. Mencken, “For every complex problem there is an answer that is clear, simple, and wrong”. SAFe has many such answers – not because the answers themselves are inherently wrong, but because SAFe doesn’t know your local context. Whether an answer makes sense and the action is useful in context, or whether a different approach would be more appropriate, isn’t answered by SAFe. The more confident an organization is that SAFe has the right answers – i.e., the more blindly they trust in SAFe – the less Lean and Agile they will become.
You require highly educated systems thinkers to solve problems at enterprise level.

Learning Culture

SAFe has a highly complex learning portfolio. Many organizations feel overwhelmed by it and simply skip most of it. Only the managers who lead Release Trains go to Leadership training, SAFe for Teams costs too much, and SAFe DevOps is “for when we have extra money”.
This creates a catch-22 for SAFe: it’s so complex that you need to invest a lot of time and money into understanding it. And because organizations that want SAFe are usually in search of ways to save time and money, this learning doesn’t happen.

As the proverb goes, “you pay the price for learning one way or another”. Most managers opt for the invisible way of lost productivity, because they don’t understand how massive this cost actually is, and because of the way organizations are set up: developers struggling with productivity for months is easier to justify than an extra training and coaching budget.

You need to be serious about learning – not only SAFe, but also Lean Management and Agile development practices – to get any sensible result out of any kind of “Agile Transformation”.

Question your setup

When I ask managers which elements of SAFe they need, it takes them seconds to reply: “Everything”. Asked about specific aspects, like the System Team or Shared Services, the answer is “Of course”. Without going into all the details of the framework: these elements – as well as the concept of team-level Product Owners, the functional separation of RTE and Scrum Master, having multiple Business Owners, and many other things – can create counterproductive dynamics that reduce an organization’s flexibility, its ability to deliver value and its speed of decision making.
These mechanisms address challenges that enterprises may have, but they should be applied with utter caution and avoided if possible. As organizations move further in their agile journey, they will find that these mechanics eventually become impediments to organizational agility and the delivery of value.
The introduction of new concepts should be approached with caution – which it often isn’t. New mechanisms should be implemented only to address a significant, well-understood problem.

Engage middle managers

Managers, traditionally, keep themselves out of operative details lest they be called “micromanagers”. Under the umbrella of “team autonomy”, they disengage even further from the work of the teams. What sounds good is actually SAFe’s biggest problem, because managers don’t learn how the work really works. Even after a prolonged period of time, managers thus often face two key problems: First, they lack the understanding and insight to spot and resolve local optimization. Second, as team autonomy increases and interaction with managers is actively reduced, teams lose trust in management decisions: proximity creates trust!

At a minimum, managers need to start doing gemba walks and experiencing how people work. Even better, they should get into the trenches and engage deeper and more often with teams, without their “manager hat” on, so that they can get unfiltered, firsthand information on what the new way of working is actually like.

Start thinking agile

Just like there is no pill that turns a couch potato into an athlete, maintaining agility requires constant change and attention. With its undoubtedly huge suggestion catalog and massive training portfolio, SAFe may create the illusion that by following this road, one becomes agile. In my perspective, by following all the suggestions of SAFe, one does become a SAFe organization – but this organization will not be agile unless people ask tough questions and challenge the assumptions of the framework.

An organization must learn to scrutinize everything it does, regardless of where the idea came from, and rigorously cut down on complexity wherever and whenever possible. This is the only way to avoid accumulating the same kind of “technical debt” in their structure and processes that a piece of un-refactored code would have. Constant, small changes of the structure must be as integral to the work as doing this to the product.
Maybe that topic would be called “Organizational refactoring”, … something to discuss in the future.


Tuesday, October 1, 2019

Psychometrics: Science, pseudoscience and make-believe

Let's take a quick glance at psychometrics. Personality tests abound, and they've even invaded organizations' HR departments as a means of determining who "fits" and who doesn't. This, I claim, is something we shouldn't use in agile organizations - because these models are dangerous.

tl;dr:
Be careful what you get yourself into with psychometrics. Chances are you're falling for something that could cause a lot of damage. Educate yourself before getting started!
Appealing, yet scientifically dangerous: The "Four Color Personality Types"

A brief history of psychometrics

I will take a look at the models which survived history and are still around and in use today.

MBTI

Katharine Cook Briggs began her typological research around 1917; together with her daughter Isabel Briggs Myers, she later turned it into the model we now know as the "MBTI", the Myers-Briggs Type Indicator: four traits with two possible expressions each, resulting in 16 personality types.

DISC

In 1928, William Marston published the DISC model, originally motivated by the question of why people with the same training still show different behaviour. The model identifies four key characteristics - D, I, S and C. Oddly enough, while the original model had "Dominance, Inducement, Submission and Compliance", today people can't even seem to agree on what the acronym actually abbreviates.
Today, we see terms like Influence, Steadiness and Conscientiousness as alternate labels - which means that depending on which meaning you assign to a letter, your scores would have a totally different meaning!

The Big Five (OCEAN)

In the early 1980s, psychologists took a renewed interest in psychometrics, and Goldberg et al. proposed the "Big Five" factors: Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism.
OCEAN spawned a few models of its own:

Occupational Personality Questionnaire (OPQ)

Saville and Holdsworth launched this model in 1984, and it's still in use today. This model is specifically focused on selection, development, team building, succession planning and organizational change. It has seen updates and refinements since its inception.

NEO PI-R

Since 1978, Costa and McCrae have been developing the "(Revised) NEO Personality Inventory", which subclassifies each of the Big Five into six facets. One of the key criticisms of this model is that it only measures a subset of known personality traits and doesn't account for the social desirability of traits.

HEXACO

As the Big Five caught global attention, researchers realized that different cultures pay attention to different personality aspects, and the Big Five were revisited - specifically due to feedback from Asia. Factors like Honesty-Humility ("H") and Emotionality ("E") have a much higher impact on the social perception of an individual in some cultures than in others, and therefore on how a person sees themselves, as well.

HEXACO led to the interesting insight that there is no universal standard of measuring personality, as the measure depends on the social environment of the measured individual.
Likewise, HEXACO studies revealed that social acceptability determines the desirability of traits, and that even the formulation of questions can yield different results depending on social context.


Scientific perspective

Companies have a keen desire to use a scientific approach in determining the "best fit" for a team, in order to maximize the success rate of placing a candidate.
As ongoing research in the field of psychometrics reveals, there is no comprehensive personality model - and therefore, no comprehensive personality test.
A comprehensive personality model would need to cover both a large spectrum of personality traits and the individual's social background.

Model Correctness

For the time being, the only factors that have been found to be universally accepted across cultures are extraversion, agreeableness and conscientiousness. Everything else is up for debate. Looked at from the other side, this means that any model lacking these three dimensions cannot be adequate.

Even the validity of the universally accepted factors is disputed. For example, Dan Pink stated that "people are ambiverts, neither overly extravert nor introvert" - in other terms: our environment and current mood determine the expression of our "Extraversion" dimension much more than our internal wiring does.

It's also unclear at this time how many factors actually exist, so every model we have focuses on a limited subset, and therefore expresses a current bias.


Valid Modeling

Scientists create, refine and discard models all the time. The goal is to have the best possible model, that is, the simplest valid statement with the highest explanatory power. The more widely accepted a model is, the more fame will be credited to the first person disproving it - that is, the bigger the crowd of scientists interested in finding its flaws.

Counter-evidence

The first question when creating a model would be: Is our model valid? The scientific approach would be to look for evidence that the model is indeed not valid, and the model is assumed to be valid as long as no such evidence can be produced. Note that this neither means our model is good nor that it will remain valid when further information becomes available.

Models which have counter-evidence should not be used.

Explanatory Power

The second question to ponder is: How much does our model explain? There are two common mistakes regarding explanatory power of a model:
The first is the category error, that is - to use the model to explain things which it isn't intended to explain, such as using a model that was designed to explain individual behaviours in an attempt to explain social interactions.
The second mistake would be to use the model outside its precision. For example, a model that already fails to address the cultural differences between Asia and Europe would be inadequate to compare the behaviours between a person from Asia and a European.

Preference goes to the simplest model with the highest level of explanatory power required to address a subject.

Reliable Measurement

To be considered "reliable", a scientifically valid measurement would need to be:

  • Accurate, that is, it should generate outcomes that align with reality.
  • Repeatable, that is, a test under the same preconditions should generate the same outcome.
  • Reproducible, that is, testing the same target in different environments should generate the same outcome.

The lower any of these three attributes is, the less reliable a measurement would be. Reliability of a measurement system = Accuracy * Repeatability * Reproducibility, i.e. the predictive capability of data diminishes rapidly as these factors dwindle.

Measurement systems ("tests") with low reliability should be avoided or improved.
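The multiplicative relationship above can be sketched in a few lines of Python. This is purely my illustration (the function name and the example scores are mine, not from any psychometric standard): even three individually decent attribute scores compound into poor overall reliability.

```python
def reliability(accuracy: float, repeatability: float, reproducibility: float) -> float:
    """Overall reliability as the product of the three attributes, each in [0, 1]."""
    return accuracy * repeatability * reproducibility

# Three individually "decent" scores of 80% each compound into a
# measurement that is reliable only about half the time:
print(round(reliability(0.8, 0.8, 0.8), 3))  # 0.512
```

This is why a test can advertise a respectable score on each attribute in isolation and still be a coin flip overall: the weaknesses multiply, they don't average out.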



Pseudoscience

Models which lack supporting evidence, which have already been debunked, which have low explanatory power or which are based on unreliable metrics are generally considered "pseudoscience".
Statements based on such models would be considered doubtful in the scientific community.

The reason why older models, first and foremost, MBTI and DISC, despite their high (and often re-trending) popularity would be considered pseudoscience, is that they lack explanatory power and reliable measurement.

While some models claim high repeatability, many people have expressed doubts whether personality tests are sufficiently accurate.
Some assessments might even claim that "you have a family profile and a job profile", essentially surrendering reproducibility, and therefore, scientific validity.

As mentioned before, even the very refined HEXACO model suffers from a lack of explanatory power, and depending on how a test is configured for a specific environment, this specific configuration might have little supporting evidence or even generate counter-evidence.

Therefore, it is open to debate how useful psychometry can be for making statements about a person's workplace behaviour.




Make-Belief

The key criticism of most psychometry tests is that a personality report from these models is a kind of Barnum statement - people who read their report fall for the Forer effect: reports generated from random data may be perceived as equally accurate as reports produced by conscious choice. People look for the attributes that they feel describe them "fairly well" and overlook the passages that don't fit.

Tests based on MBTI and DISC profiling suffer especially strongly from this - either their statements are so vague that they could describe practically anybody, or people feel that whatever outcome is attributed to them is not universally applicable, or doesn't suit them at all.

The "explanation" for this vagueness tends to be that factors are fluid and exist in different levels of manifestation, which basically makes a binary classification meaningless.

The effect on people

In a statement on one website, the claim was "The outcome of the test can affect your life", which is indeed true, especially when the test is being used for job selection and you didn't get hired because you didn't show up as what the hiring person was looking for.

Using the models

The only point I give to the models is that test results can be a decent conversation starter with your team, friends or family - although I'll hold that point in abeyance, because a relevant subject matter or even the weather could serve just as well.


Harmful application

This is where I get into the realm of "coaching". Some coaches peddle these models as "strongly supported by science", which they aren't - and people who lack a scientific background will use these models as if they were.

Especially "The Four Colors", which are promoted worldwide in management seminars and which are now also finding their way (in one form or another) into Agile Coaching, pave the way for dangerous dynamics.

The worst application of the model I have seen are "helper cards" used by people to categorize the other people in the room during a conversation.

Promoting ignorance

There is no simple way to classify a person's behaviour within a sociotechnical system. Every model that claims to have an easy answer while utterly ignoring the environment is dangerous - because it focuses on the consequence while ignoring the trigger. Without educating people on the impact of environment on behaviour, psychometry becomes a distraction rather than a means of understanding!

Thinking inside the box

People are complex, very complex indeed. As a proverb from Cologne states, "Jede Jeck is anders", roughly translating to: "Every human being is different" - you just can't put people into boxes.
There's also a high probability that behaviours you observe or how you judge those behaviours are tainted by your personal bias. As long as you think of people in such boxes, you're very prone to miss important nuances.

Manipulation tactics

When I was taught DISC a decade ago, I learned that people with a strong "D" dimension respond positively to terms like "Fast" or "Effective", whereas they get put off by details. Same for other dimensions. As such, I have learned to use the DISC model as a means to use language to manipulate people to agree with me.
As helpful as such knowledge can be for making decisions, it can be equally deceptive - because it sets people up for manipulation and exploitation. Is this where you want to go in coaching?

Missing the Big Picture

Psychometric models focus on the individuals, ignoring their role in their environment. Strangely enough, my first question when sitting in a DISC training was, "There's this person who's strong in all four dimensions. What's that?" During the training, I just swallowed the answer; I didn't understand its consequences until years later: "This person is an adaptor. They display the strengths that the current situation requires."
Later, it hit me like a concrete block: People adapt to their environment. Their social role determines which strengths they will exhibit. And as their role changes, their visible profile changes as well.

As such, we can't measure a person at all, we just get a glimpse of where that person currently stands in society. Change that role, and their psychometry changes. And that role changes as circumstances change.

You can change a person's social environment to turn an inspiring leader into a tyrant.
You can change a person's belief system to turn a braggart into a humble person.
You can affect a person's incentives and turn a couch potato into a sportsman.

How much do you then think that a few dozen questions will tell you about what a person could be?

Building the wrong team

Some organizations try to build teams with a "suitable" mix of personalities and ignore that their psychometric data is a poor representation.
Psychometry can be flawed from three angles:
  1. The test itself wasn't an accurate representation of the person's beliefs and behaviours.
  2. The test outcomes were inaccurate to describe the person's beliefs and behaviours.
  3. The test ignored the current social dynamics leading to a person's behaviours.
People's behaviours and dynamics depend on context. Hence, planning based on psychometry makes unsupported assertions about the future state of the team.

How ridiculous would it be to ensure that each team is built with one Red, two Green, two Blue and a Yellow - only later to discover that a Green adapted to that role and is otherwise Red, and that the Yellow was only Yellow back when they were hired?

Making concessions

In some cases, inappropriately profiling other people based on observations can serve to "excuse" negative behaviours and unhealthy group dynamics. For example, bullying might be considered the consequence of "expressing strong dominance", and the behaviour itself or the systemic enablers might continue unquestioned.
Likewise, people with "strong agreeableness" might accept immoral behaviours, when they should be encouraged to take a stand and fight for change.



Summary

This article explains why psychometry is not scientifically valid, why psychometric data should be treated with caution, and why coaches should be extremely careful when meddling with psychometry in their work.

If you use or plan on using psychometry in coaching, be careful of the problems you are inviting.

Monday, September 23, 2019

Finding your place as an SPC

Time to take yet another jab at the SPC. The SPC role is massive, and as I mentioned before - people can't be good at everything the role suggests. That's simply because there are too many different things you might be doing.

DISCLAIMER: Proceed with caution. This model needs refinement.

So, I've created this simple model as a brain dump on what an SPC could spend their time with.
If you're an SPC and/or Agile Coach, you can try checking the boxes and see where that lands you.



How-To-Use

  • If you're an organization looking for an SPC, let them make their check marks.
    This gives you an impression of what you're hiring for.
  • If you're an SPC as part of a company's SPC community, this gives you a reflection opportunity to see if what you do is what you want to be doing.
  • If you meet as an SPC community, you may want to compare your results with your peers - it's a great discussion starter!



Notes

  • I would like to say, "There are no rights and no wrongs" - but that wouldn't be entirely true. A few combinations would be pretty insane. If you can't figure out which ones, ... I have bad news.
  • Not all combinations are consistent - I hope that's not what you're doing.
  • As the dividing line is, "where you spend more than 10% of your time" - there shouldn't be more than 10 checks.
    • I would assume that an average SPC shouldn't have more than 5 or 6.
    • Being a non-average SPC means you're probably spreading yourself too thin.
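The counting rule above can be expressed as a tiny sketch. The function and thresholds are hypothetical restatements of the notes (more than ten 10%-slices can't fit into 100% of your time; more than about six suggests overextension), not part of any official SPC assessment:

```python
# Hypothetical sketch of the checklist rule: an activity is checked only if an
# SPC spends more than 10% of their time on it.

def assess(checked_activities: list[str]) -> str:
    n = len(checked_activities)
    if n > 10:
        # Eleven or more 10%-slices exceed 100% of available time.
        return "inconsistent: more than ten 10%-shares cannot fit in 100%"
    if n > 6:
        return "warning: probably spreading yourself too thin"
    return "plausible focus"

print(assess(["training", "PI Planning facilitation", "team coaching"]))  # plausible focus
```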


Feedback welcome. I would like to improve this so it can be the most useful tool for others.

Sunday, September 22, 2019

SPC Competency - what companies can do

In a recent post, I gave my opinion about the condition of the SPC. I have received some pushback, in multiple directions:
  1.  I'm being too negative.
  2.  The problem isn't limited to SAFe or the SPC.
  3.  What's the solution?
Let me address these points quickly before digging in.
First, I made this post because I care. Not to place blame, but to spark a discussion. And I did.
Second, I agree. The growing problem of ersatz agile coaches would still be getting worse even if SAFe didn't exist. But I don't give a hoot about the ever-growing number of pointless certification schemes out there. I would prefer if we could improve the SPC community.
Third, I don't know. But I have ideas based on what I have seen.

So - let's make this article about point three.

Getting out of the dilemma
I base these ideas on personal observations (in italics) made from one client - they know who they are.
This article is a kind of action-item list for things you may want to do in your organization to maximize the likelihood of having "good" SPCs.


The certificate is nothing - what matters is the person who bears it!

Dismiss Ersatz SPCs fast

When you identify Ersatz SPCs in your organization, you should let them go quickly. Preferably, you don't give an external SPC a prolonged contract to begin with - start with monthly contracts that you can simply choose to not prolong.
When you hire SPCs as internal staff, use a probation period to see if they're worth their salt.
The challenge is that you need further mechanisms to determine where prolonging makes sense.

My contracts are often limited to supporting ART launches. Although I believe that this isn't how you build up a sustainable Lean-Agile enterprise, I am currently considering offering fixed-price services like ART ramp-up support with a success clause.

Check your metrics

By looking at the wrong metrics, you run a huge risk of ditching the right people and relying on the wrong people. Metrics such as numbers of ARTs launched (fewer is better), amounts of people trained (irrelevant) or teams converted to "Agile" create effort and trouble without value. Success metrics should be business relevant figures, and success should be measured in terms of how your organization creates value.

Transformation metrics like customer satisfaction, time-to-market and product quality are much harder to game and let you know whether your SPC is making show or progress.

Avoid wholesale packages

This one is key - because too often, organizations look to a single large consultancy to take care of the entire staffing question.

When you ask an agency to bring in droves of SPCs to do an Enterprise transformation, you're going to get a few good ones and a lot of bad apples. It's inevitable. Don't hire SPCs by-the-dozen. Create a process where you, as a client, are in control of every single SPC so that you can decide to terminate Ersatz SPCs without getting contractual problems.

I see Business Owners interview every SPC individually, and coming from one specific agency is no guarantee of placement. Steer clear of "bargain bins" here - if Agency A allows you to place a second candidate at a discount, that's probably not to your advantage!


Rely on Expert opinion

Do not rely solely on Business Owner or RTE opinion as to what makes a good SPC, as the art of snake oil selling includes deceiving the Ignorant. You don't want Snake Oil.
When a Business Owner has a bad feeling, they should immediately turn to an SPC for advice.
Create a stream of communication between Business Owners and SPCs who aren't invested in the ART.

I have been invited to observe PIPs and offer feedback to the Business Owners. This approach strengthens well-meaning SPCs and exposes Ersatz SPCs.


Give SPCs a role in strategy

SAFe isn't just about lumping development teams together in ARTs. The key to Enterprise Agility is bringing different parts and layers of the organization together in transparent alignment. Experienced SPCs can help there, if you let them. In particular, don't determine what the SAFe organization should look like and then just delegate it to SPCs. This attracts Ersatz SPCs who will act as willing execution agents for poor strategy.

Include external SPCs in your Transformation team, but avoid delegating the Transformation entirely to external SPCs. Offer the SPCs transparency into the Transformation - and figure out ways to let them contribute beyond ART level.


Avoid One Man Shows

It looks tempting to put a single SPC in charge of a single ART launch - which creates two major risks: First, if that single SPC is a washout, you'll only discover it after the PIP has failed. Second, if that single SPC gets stressed out, you stand empty-handed. Try distributing the load.

Involving multiple SPCs with each ART increases transformation quality and decreases risks. It's good to at least have other SPCs check in on a new ART occasionally, to see if the lead SPC requires support. Ideally, don't skimp on SPC capacity: 1-2 SPCs per 50 people isn't too much.

Separate concerns

SAFe is a huge topic. For starters, we have the domains of strategy, training, value streams and technical aptitude.
Consulting isn't coaching isn't counseling. Process change, teaching people new practices and structural reorganization are entirely different things. Managers have different needs than developers, who in turn have different needs than product people.
Few people can take care of all of that. Most excel in one area, some in two or three. Even if someone could do everything, they'd spread themselves too thin if they would do all of it at once.
Hence, allow people to play out their strengths. Let SPCs determine how they can contribute - and let them do exactly that.

Always involve multiple SPCs to launch an ART. Not everyone needs to be around full-time, as long as you ensure that people do the thing they're good at, not the things that are really not their strength.

Pair and Shadow

Once you have identified some competent SPCs, ensure that they are present at key SAFe events, such as the Value Stream Workshop and the PIP. An hour of observation and a short conversation between them and the other SPC will be very revealing.

I have very much enjoyed the conversations with a senior SPC who critically probed the dysfunctions he observed during my ART launch PIP.

Build an SPC Community

When skilled SPCs talk, discussions get interesting. By having a community of peers, you will quickly see:
  1. Who contributes and who doesn't.
  2. Whose contribution adds value and who blathers.
The SPCs you want in your organization are those who care to make a difference. Watch those who don't want to join the SPC community as a vehicle for greater impact - they might have something to hide!

I enjoy every session of my client's SPC community, because it's a great way to look at the big picture and avoid local optimization. Personally, what I like most is the ability to draw attention to enterprise-wide issues.

Create a Central SPC Work Backlog

Nobody is Superman. We all have strengths and weaknesses and nobody can do everything. We should focus on the things we're good at. There's a lot of SPC work, starting with awareness sessions, Value Stream Workshops, going over trainings, program backlog workshops, PIP support, Inspect+Adapt events and many more. The SPC community should have a backlog where all known work can be made visible, so that SPCs can pull the items where they're confident to add value.

I have been "One Man Army" Coach before. It's terrible. Being able to access an SPC activity backlog where one can get items covered that one isn't good at and pull items that make the organization successful is a quantum leap.

Identify Organizational Antipatterns

Current culture has an impact on an Agile Transformation. No single SPC sees the big picture; it becomes visible when we put our individual perspectives together. This helps us identify systemic change potential and levers going far beyond the scope of a single ART. Ersatz SPCs might reveal themselves by flagging antipatterns that aren't, or by declaring clear antipatterns fine.

When we had the idea in the SPC community to collect antipatterns, I had a hundred things on my mind. It was fascinating to learn where I was a victim of my own bias and where others shared my observations.

Meet. Face to Face.

In large corporations, it's easy to remain anonymous. This creates a hiding place for Ersatz SPCs. When you meet for a day in a setting like Open Space Technology or Liberating Structures Workshops, you can learn about who you're dealing with. Many minds breed new ideas.

The best you can do is create an Open Community Meetup in your company's location. This creates a time and space where you can meet your own peers. It can likewise become a mechanism to learn about one's own organizational bias and be a vehicle for recruiting interested outsiders who could make a valuable contribution.

Fuckup Nights

Talking about where "you dun goofd" isn't easy, but we all goof. I do. All the time. Maybe my last article was a goof - I'll let you determine that. Provide a forum where people can talk about their mistakes openly. We can help each other out, and we can learn from others' mistakes as well.



The Agile Academy

I've taken two barrel loads of shots against the Agile Academy - and it might appear as I am totally against this idea. I am not. Indeed, I mentioned before that I believe that a properly set up Agile Academy is an essential instrument in sustaining and nurturing Enterprise Agility. I am against using this approach too early, exclusively or as a false dichotomy.

Use Neutral Trainers

An Agile Academy is very dangerous when it relies on trainers with local bias, who in the worst case have built their own curriculum based on their personal understanding and then proliferate it through the organization. Bringing in a mix of reputable external trainers to conduct trainings on Lean/Agile basics irrespective of your local context is more costly, but it provides a balancing instrument in the organization.

The client who is running a quite successful "Agile Academy" has a number of training partners who are not part of their organizational culture. This ensures the trainers get neutral market feedback and do not adapt to cater to the client's current status quo.

Trainers must be Doers

A very common mistake in Corporate Agile Academies is that you raise fulltime Inhouse Trainers who lose touch with organizational reality. 

My heart jumps when I hear that people who are doing the work in the trenches are giving trainings and feeding back their learnings from the training into their daily work, and vice versa.

Every ART SPC must take a role in training

When the Agile Academy creates a disconnect between the trainers and the ART SPCs, that may be efficient from an organizational perspective, but it's terribly inefficient from a learning perspective. The Lead SPC of an ART must be in the training room, ideally as a trainer or at least as co-trainer. This strengthens the bond between them and the people they work with on a day-to-day basis. They can link training content to their daily work - and see from the training questions where further support after the training is required.

Ideally, the Academy relies on SPCs who also work in the trenches and has a standard process where the ART SPC is part of the training process.

My client didn't do this initially, and this created negative feedback swiftly as Agile Academy processes weren't synchronized to ART launch processes.

Choose candidates wisely

SPCs are multipliers either way. A good SPC spreads a good culture. A bad apple spreads a bad culture. Is it better to err on the side of caution? I don't know. But what do you do if you have an internal SPC with an anti-agile attitude? Do your processes accommodate for that? Maybe they will use their accreditation to wreak havoc elsewhere. That may not be your problem - but it affects the global SPC community.

Wait before you train inhouse SPCs

Being an SPC is a high responsibility. I believe that training people without agile experience as SPCs is setting them up to fail. Especially if you have no Agile history in your organization, you will be challenged to find people who have enough background to succeed as an SPC. Wait before moving the SPC role "in-house". Give people time to collect experience as Scrum Masters or Release Train Engineers before you move them into an SPC role.

My client went through an Agile Transformation years ago. There are Scrum Masters with vast experience under their belts, and they have the battle scars to prove it. Such people make great SPC candidates.

Avoid role poker or favors

Internal SPCs will carry the burden and responsibility of fostering and sustaining Lean-Agile Practice long after the Externals are gone. They need to be the people who care to do this. If you select Internal SPCs by means of favoritism or to appease political demands, that's a signal that you're preserving a status quo where power play beats performance.

I have participated in deep discussions with Business Owners over the implications, pros and cons of nominating SPC candidates for the Agile Academy. As part of my coaching, I also had 1:1 conversations with the candidate to see if this is how they personally felt they could add most benefit to the organization.

Be picky

There should be no guarantee that a candidate nominated as SPC becomes an SPC. Let some time pass between nomination and training. Peer interviews by SPCs and collaboration with SPCs who are outside the line responsibility of the nominating Business Owner are a good way to do this.

The first SPC candidate I mentored for my client was probably at least as capable of meeting this responsibility as I am. He shadowed me and asked questions. I coached him for a few months. This was part of his onboarding process before he went for training.

Provide ongoing support

An SPC training is nothing more than basic awareness. It doesn't make an SPC, and it doesn't equip an SPC to bear their responsibility within the organization.

SPC Candidates should ...
  1. join the company's SPC Community as early as possible.
  2. have a mentor/coach even before their SPC training.
  3. be able to rely on their mentor/coach after their SPC training.
I have observed others becoming SPCs in a slow process, where seasoned SPCs were always present to support them in growing into the role. Combined with the above items from the community, this keeps strange outgrowths under control.

Don't cut ties with external SPCs too quickly

I have mentioned the Dunning-Kruger Effect in the original post: how do you know what you don't know? And how do you know when you're ready to move on alone?

Fading out external support slowly is, in my opinion, imperative. Don't throw out your Externals on the day after PIP launch. Consider a prolonged period before the SPC moves on. I'm talking about months, potentially years before the last External finally moves on.
Why? Otherwise, you reward SPCs who create fire+forget show effects.
Make sure that an external SPC is responsible for a first successful PI with valuable outcomes, and give them the opportunity to make this happen.

I have seen ARTs reduce the involvement of external SPCs to I+A event attendance, PIP coaching, call-on-demand and many other means of slowly breaking dependency without cutting ties.

Plan the future for external SPCs

Some organizations rashly and harshly cut ties with External SPCs after the ramp-up phase. 
Even after the SAFe rollout is over, I strongly suggest to keep some brains close for collecting feedback, new impulses and ideas.

Part of the phase-out process needs to be creating a strategy for sustaining an influx of valuable external knowledge without falling for dependencies. And here, I'm not talking about maintaining external trainers for the Agile Academy - I'm talking about people who stand where the rubber meets the road, that is: in the trenches.


That's it.
I hope this is a more positive outlook on how I envision corporations contributing to a better SPC community in the future.