Friday, June 14, 2019

Don't get fooled by Fake Agile

Whichever consultancy sells this model, DO NOT let them anywhere near your IT!

So, I've come across this visual buzzword bingo in three major corporations already, and all of them believe that this is "how to do Agile". And all of them are looking for "Agile Coaches" to help them implement this garbage.

This is still the same old silo thinking that never worked.

There is so much wrong with this model that it would take an entire series of articles to explain.
Long story short: all the heavy buzzwords on this image merely relabel the same old dysfunctional Waterfall. In bullet points:

  • "Design Thinking" isn't Design Thinking,
  • "Agile" isn't Agile,
  • "DevOps" isn't DevOps, and
  • Product Owners and Scrum Masters aren't, either.

Here is a video resource addressing this model:




I have no idea which consultancy is running around and selling this "Agile Factory Model" to corporations, but here's a piece of advice:
  • If someone is trying to sell this thing to you as "Agile", do what HR always does: "Thank you very much for coming. We will call you, do not call us ..."
  • If someone within your organization is proposing to implement this model, send them to a Scrum - or preferably LeSS - training!
  • If someone is actually taking steps to "implement" this thing within your organization, put them on a more helpful project. Counting the holes in the ceiling, for example.

If this has just saved you a couple million in consulting fees, you're welcome.

Tuesday, June 11, 2019

10 Bad reasons to choose Kanban over Scrum

"We're using Kanban instead of Scrum" - a common sentence. Especially since I do SAFe consulting, I hear this one quite frequently. Most reasons provided for this are anywhere between highly and extremely doubtful, and in many cases reveal ignorance of both Kanban and Scrum alike.

So here are my top 10 bad reasons for using Kanban instead of Scrum:


#1 - Our Consultant told us so

There's so much wrong with this sentence that the problems won't fit into a single sentence.
First and foremost, if your consultant - after some short observations - knows your process better than you do, then you should seriously question what you're doing.
Also, a team should strive to be autonomous in decision making, and if your best reason for a decision is that someone else told you so, then you have absolutely no valid reason: you haven't understood what is best for your own good!

#2 - We had to choose

"Scrum or Kanban" is a false dichotomy. It's like saying, "Do you want a beverage or lunch?" - why not combine?
You can very well combine Scrum with Kanban, as they address different problem domains: whereas Scrum's primary concern is the nature of teamwork, Kanban focuses mainly on the flow of work. Scrum.org, in particular, has invested significant effort in recent years in putting together a "Professional Scrum with Kanban" curriculum which makes exactly this point.

#3 - 1 month is too short

A common argument I hear is that "we can't get anything Done in 4 weeks."
Sorry to say, if you're not getting anything Done in 4 weeks, chances are you're not doing Kanban either. Kanban is flow optimization, and if you sit back content with a cycle time of more than 30 days when other development professionals can easily do 1 week (or less), you're just being lazy.
While some organizations are indeed so impeded that everything moves slowly, working relentlessly to make the problems visible and finding creative solutions to make progress is essential to succeed in the long term.

#4 - Our work is too complex

"We do really complex stuff here, much more complex than anyone else!" - Every team which has ever mentioned this argument to me was applying a "special pleading" fallacy, usually oblivious of what other problem domains look like. 
I will admit that Logistics is complex. As are social media, telco, fintech and science. Scrum is specifically made for complex problem solving, so that's simply not a valid reason.

#5 - Our stories are too big

This argument translates to: "We set up a board, put a sticky on it that won't move for a month, and call this Kanban." Well, I hate to break the news: it isn't. Kanban is about managing the flow of work, and a Kanban board where nothing moves destroys transparency and makes optimization impossible.

I remember working with a corporation where team members adamantly insisted that stories estimated to take 4-6 months were already atomic and impossible to split any further. We took a few items from their backlog into a workshop, and a 6-month user story was decomposed into almost 70 workable, individual slices of value. The biggest could be delivered in a few days, some within hours. Unsurprisingly, the entire package of all relevant items added up to less than one month!

#6 - Requirements from all sides

If this is your reason for not using Scrum, you're defaulting! One of the main reasons for having a Product Owner in Scrum is to solve the problem that everyone thinks their specific problem is Priority 1. Where a requirement comes from doesn't matter - there's always one thing that is the best thing to do right now. Even in Kanban, understanding what the most important thing is proves extremely helpful in order to "stop starting, start finishing".

#7 - Maintenance is unplannable

Teams which operate on a live system oftentimes get exposed to unplanned work - bug fixes, support tickets, maintenance and whatnot. My first question here is: "Why is maintenance so high?" Could taking time off the busywork treadmill to fix systemic issues reduce the amount of maintenance? I have seen teams who purposely shifted towards Scrum, dedicating just a fraction of their capacity to addressing fundamental issues - and they benefitted greatly!

#8 - Requirements change too fast

To me, this begs the question: "Are you getting things finished before the requirements change?"

Usually teams answer "No", and then I seriously question why they're doing what they're doing. Scrum might help you sort out what's worth taking into account, trashing all the whimsical ideas that are forgotten faster than they were proposed.
Just as an added side note, Scrum's sprint plan isn't immutable: There is nothing in Scrum which prevents you from integrating new information and opportunities as they arise.

#9 - Too many meetings

Scrum's events all serve a purpose of inspecting and adapting - not losing focus or track in a changing environment. There is no reason for Scrum events to fill their entire timebox and there's no need to do them in a specific way.
As long as you know why you're doing what you're doing, get frequent user feedback, improve your process and outcomes, and keep the team synchronized, you're quite compatible with Scrum. If you are missing out on any of these items, your problem isn't the amount of meetings!

#10 - We don't do User Stories

I have no idea where the idea originates that Scrum teams must use User Stories, estimate in Story Points, create Burndown charts or whatnot. When your reason against Scrum is a practice such as these, the short answer is that Scrum is neutral to practice: Scrum merely mentions "backlog items". These can be stories, requirements, bugs, defects, ideas or whatever else floats your boat.
Some teams find value in the User Story approach, others don't. Some estimate in Story Points, others in ideal days, yet others simply count backlog items. All of these are valid from a Scrum perspective, although some may be more helpful than others.




Wrapping up

The discussion "Scrum or Kanban (or both)" is quite valuable for teams to determine how they want to organize themselves. Unfortunately, the discussion is often not factual and highly biased, and it is often led with huge gaps in understanding of both Kanban and Scrum.

There are a lot more bad reasons for preferring Kanban over Scrum than mentioned in this article, and there are also some good reasons for doing so. When your team mentions any of the items in this post, it's good to be suspicious and ask probing questions. Most likely you will uncover that Scrum is just a pretext and there is something else going on.

Expose the root cause and deal with it, because otherwise, critical problems might linger unaddressed.


Tuesday, June 4, 2019

Five sentences that will kill your Agile Transformation

There are some massive faux-pas statements with which senior management, potentially without even realizing, can kill an Agile Transformation within less than five seconds.

I normally avoid the term "Agile Transformation", because I think the idea itself is a dangerous suggestion that this is a time-limited process which can be completed to create a final stage, but let's look beyond that for a moment to focus on organizations who have explicitly started an Agile Transformation initiative.

While there are a near-infinite number of ways to mess up something as complicated as Organizational Change Management, here are five sentences that can bring an entire initiative to a screeching halt:

"Fund the change from your project budget"

There's no easier way to trigger Larman's Laws than to apply existing accounting structures to an Agile Transformation. What do you think the outcome of such a transformation program will be, except an organization that still has the same line/project matrix structure which created the entire mess you're currently in?
And which project manager in their right mind would spend their project budget on necessary changes that will come to fruition long after the project deadline has passed?

The proper statement would be: "We have to re-examine our entire accounting structure in the mid term, and we will create a specific change fund for this initiative."

"Limit scope to IT"

Ignoring both the impact and consequence of an artificial limitation of scope fundamentally contradicts the concept of systems thinking, which relies on acknowledging both the implicit and explicit relationships of the whole system.
I always use the clockwork metaphor: if you try to speed up a single cog, there are three likely outcomes: the surrounding cogs will absorb the change, the watch will tick wrong, or the cog will break. IT is one cog of the organization, and the results will be accordingly.
If you have this approach in mind, either abandon the idea of limiting scope or abandon the change altogether. It will save you a lot of pain later on.

The proper statement would be: "Start where you are, and when you hit an organizational boundary, pull in people as necessary."

"You can do this"

What sounds like a great motivational statement betrays yet another pillar of systems thinking: management was responsible for creating the current status quo, and unless management actively contributes to changing this status quo, it will remain unchanged.
As long as management doesn't do their part, statements like "We trust you on this" are adding insult to injury, setting teams up for failure, creating negativity and bad blood in the long term.
If you're a manager and not going all-in, you are going to be an impediment!

I'll say it with Deming:
"Support of top management is not sufficient. It is not enough that top management commit themselves for life to quality and productivity. They must know what it is that they are committed to — that is, what they must do. These obligations can not be delegated. Support is not enough: action is required." - W.E. Deming
The proper statement would be: "We can do this."


"We're already good"

I am baffled when I ask managers what they expect from their agile transformation, and they limit their answer to this sentence: "We're already good, we're just looking for ways to become more efficient."

While I'm not going to argue whether they're indeed good, the attitude behind this sentence will make it extremely difficult to realize that an agile transformation is not needed if indicators like Customer Satisfaction, Time-To-Market and Product Viability are good. If these things are not good, then "getting more efficient" is entirely the wrong direction!



"This is not important"

It's a jaw-dropping moment to sit in a room with over one hundred developers, hearing a Senior Manager introduce the new way of working to the audience with the words, "SAFe is not important". Okay. SAFe is a placeholder here; it could be any other framework as well.

Well, if it isn't important, why did you opt to go this route for your agile transformation? Did you bother to define your organizational goals and current deficits, map them against the different frameworks and see which approach is most suitable to meet your needs?
If you didn't do that, you simply didn't do your homework, and you shouldn't have chosen a specific framework before doing so!
And if you did do that, you might as well inspire people by giving them something to aspire to, rather than planting a dismissive notion in their minds!

The proper statement would be: "We chose this framework as a basis to reach our goals - and we will relentlessly Inspect, Adapt and Improve to get better every day!"



In Summary

A Whole Systems Approach is fundamental to any organizational change initiative, be it Agile or otherwise. When managers reveal that they either haven't considered the systemic impact, or simply don't care, they give everyone a good reason to doubt the success of the change from Day One.

Monday, May 27, 2019

It's still about agile teams

With all the ranting around SAFe recently, I want to address one of the most common, and likewise most severe, dysfunctions in many SAFe implementations: The lack of a solid foundation in agility.

To make my point, I will take a heavily biased stance, and I am fully aware of the bias I present.


Cracking the crust paved by traditional organizational structure requires patience - and it's a slow process.

Why SAFe sucks

We start with a hierarchical organization that wouldn't know Agile if it slapped the CEO right in the face. Then, we bring in a swarm of consultants, do some Leadership training. Everyone's excited. Management huddles together and conceptualizes the new hierarchy, called "Large Solutions" and "Agile Release Trains".

When this is done, we select the members of the new hierarchy, called "Train Engineers", "Architects" and "Product/Solution Managers" and train them for their new role. All of them have been selected based on their importance in former power structures, giving each of them at least as much power as they had before.

Once that is done, we forcefully rearrange formerly functional teams in order to meet the new structure, which relabels "departments" to "value streams", "project managers" to "Product Owners" and "Team Leads" to "Scrum Masters". Off for another round of trainings, also including team members for two days and then ...

We do a PI Planning where the Waterfall agenda is presented to the teams, and then everyone has to create a detailed upfront plan and commit to management's unrealistic expectations.

Done. Well, almost. Teams now report and get tracked against their 2-week micromanaged commitments until the next PI Planning, where the last plan is supposed to be finished.


That's the Real World!

It would be a lie to say I haven't seen this happen. The dysfunctions in this approach are so numerous that it would take hours to list them all.

Here are just the few biggest problems with the approach:
  1. SAFe is considered a management methodology, not a development approach.
  2. Management buys into something they didn't even understand.
  3. Developers were never asked for their opinion.
  4. SAFe was immediately adjusted to preserve the existing system.
  5. Mindset never changed.
  6. SAFe was introduced as a structural change, not as a way of changing how value is produced.
  7. No effort was made to even attempt to understand what "Agile" means.
Is this the fault of SAFe, neglect on management side, time pressure or some evil consultancy's attempt to screw over someone for a quick buck?
I don't know, and I will not argue, because the issues described are generic and root causes could be different in different places. 



The basis is still the Agile Team

What people forget: SAFe is quite clear that the basis of an Agile Release Train and the SAFe organization is still an Agile Team.
SAFe is the only agile framework which does acknowledge that Agile Teams can have their own, individual mode of working that may or may not coincide with Scrum, XP or Kanban - the only constraint is that the team needs to be agile!

Sadly, organizations do not invest in team-level agility before pushing in SAFe. For whatever reason, SAFe oftentimes becomes a dysfunctional relabelling exercise because no effort has been taken to agilize teams.

The illusion of fast results

While Agile teams can deliver high-quality Working Software in short cycles, the amount of effort required to bring a team to this level is immense. Many times, management just declares, "We are now Agile", without investing in team building, sustainable engineering practices and an agile mindset.

This doesn't work and it will be a flash in the pan. Results will remain elusive, as the organization will effectively accumulate further organizational debt without addressing the underlying issues.

Teams do not become Agile by declaration, and they also do not become Agile by pushing a framework like Scrum or Kanban onto them.

Investing into Agile Teams

Agility is a slow, challenging process with a long and steep learning curve. A two-day training is not much more than waving the "Go" flag on a race track, and some organizations are even cutting corners there - thinking that "it's already enough that we sent the Scrum Masters to a training": NO!

To be agile, a team needs:

  • Operational Excellence
  • Technical Excellence
  • Drive (ref. Dan Pink)
Based on my experience, the willingness of organizations to invest time and money into these items decreases exponentially in descending order of the list, and it often starts at a staggeringly low level.


Raising Agile Teams

Agile teams need little support: by definition, they are already self-organized, understand their domain and deliver amazing results. When a team isn't described by these three attributes, they aren't agile to begin with and do not meet the minimum requirements for a SAFe organization!

Teams that are not on this level need a lot more care and support.
Some of this is achieved by management action: changing processes, structures and constraints in order to enable the teams to get there.
Another big enabler is coaching: working with the teams, helping them discover the necessary means  to take control of their work and adopt the practices which help them deliver excellent results.

Coaching is a slow process. It can't be done in a day or two, and workshop sessions - while indeed helpful for specific aspects - aren't sufficient to establish a new culture.

Unfortunately, many organizations invest insufficient time and budget into the necessary groundwork for raising agile teams. Is it surprising, then, that teams - and ultimately these organizations - struggle?


Sustaining Agile Teams

Agile teams stay (and grow more) agile when their environment supports agility. When an agile team is shoved into a non-agile box, the constraints will force the team to reduce their own level of agility to meet external constraints. 
When SAFe is adopted carelessly, imposing non-agile constraints on otherwise agile teams, the organization's overall level of agility will be limited accordingly. 

In order to get the most benefits out of SAFe, the question should be more along the lines of "Which organizational constraints become superfluous or redundant due to having a SAFe organization", rather than, "Which constraints do we need to add on top of a SAFe organization?"

When organizations abuse SAFe in order to introduce impediments to team-level agility, poor outcomes are the logical consequence.


Nurturing Agile Teams

Agile teams will find a supportive environment highly beneficial in order to deliver even higher business value. Where individual agile teams oftentimes struggle at arbitrary organizational boundaries - from fixed department structures to other "autonomous" agile teams - organizational change, overarching alignment and the occasional global optimization at the expense of local autonomy are indeed a benefit to systemic agility.

SAFe can then be used to provide minimal guiding structure to teams operating in larger environments, referring to common patterns addressing challenging systemic impediments.

The very reason for having experienced SPCs on board would be both to stop the people who lack agile experience from over-engineering upfront, and to point out exactly those SAFe patterns which can be considered when a systemic impediment becomes significant.

Unfortunately, we see too many organizations pre-conceiving "the perfect agile organization" as fast as possible and as closely as possible to the SAFe Big Picture - and then implementing it by following a plan without responding to change (which never works).
Managers are quick to proclaim that certain solutions are needed when no team has raised an impediment, and thus unnecessary structure is implemented on a whim.


Agile Management

You can't manage agile teams without being agile yourself. Before managers have changed their own mindset, attitude and understanding, they will not succeed at creating an agile organization at scale.
Most managers fail at the very simple exercise of becoming agile themselves in order to serve agile teams.

Management failure occurs at three levels:
  1. Thinking that "Agile" is for development teams only
  2. Limiting "Agile" to IT
  3. Not investing time into changing themselves
Until management comes to see their own role in the current system, and understands that change relies on them letting go of their own anchoring in the old status quo, agility will remain a fluke.

Management philosophy and practice change is essential for SAFe to result in an Agile Organization.


My Advice

SAFe without a foundation of agile teams is madness.
SAFe driven by the organization without considering agile teams is pointless. 
SAFe imposed by management against the will of teams is craziness.

When an organization does these things, any SAFe initiative is pretty much doomed.

My advice is to take the following steps:
  1. Establish agile teams to begin with. If you can't do that, don't even think of SAFe.
  2. Invest as much into your agile teams as needed. If you aren't willing to do that, SAFe is waste.
  3. Create an environment where your agile teams can thrive. If you think you can't, SAFe isn't going to improve the situation.
  4. After you have done 1-3, seriously consider which challenges you want to address and where your teams struggle. Do not introduce SAFe as a solution to problems you don't have!






Friday, May 17, 2019

Need Gap Reduction

Let's cut straight through the bullshit. You want to become agile and ask yourself, "How do we know if we're making progress?"
This one simple metric tells you if you're moving in the right direction.



If you're not reducing the Need Gap, you're doing it wrong!

The Need Gap

A "Need Gap" is the span between the moment when I, as a consumer, realize that I need something and the moment this need is satisfied.

The difficulty of many classical projects is that the need may be satisfied in other ways outside the project's scope or it may completely disappear even before the project's outcomes are delivered.

Why not ...

There are other common metrics. Couldn't we also choose one of them?
Okay, I'm cheating a bit here. The "Need Gap" is two-dimensional, so it's actually two metrics - which means I'm not entirely ditching the common agile metrics either.

Time-To-Market

Time-To-Market is simply an incomplete metric. It doesn't account for the (potentially massive) time lapse between making the solution available and the solution achieving its purpose.

Value

Value is related to wealth obtained by providing the solution, although the amount of wealth benefit that can be monetized depends on many factors. To avoid having to go into a full economics discourse on game theory, we can simplify this by saying that value is again an incomplete metric.

Learning

In a business context, "learning" by itself is an unfocused metric that needs context. The things we might want to learn should ultimately lead to a bottom line.
For example, learning:
  • Expressed, unexpressed and upcoming needs
  • Better ways of meeting needs
  • Monetizing customer wealth gain
Ultimately, learning is an enabler to addressing the Need Gap.


The Need Gap

A need gap indicates both a delay in satisfaction and an impact on wealth.

Time to Satisfaction

The time to satisfaction is the lead time plus delivery time plus customer reception time, including any potential feedback cycles until the customer is happy with the outcome.

Impact on Wealth

Customer wealth is a function both of money and other factors. 
Cost reduces wealth - as does the unmet need itself, whereas the customer's own ROI increases their wealth.
Economic customers are looking to increase their wealth, so the cost spent on meeting the need should be lower than their loss of not having met the need.
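
To make these two dimensions a bit more tangible, here is a minimal, purely illustrative Python sketch. All names and numbers are mine and only show how the pieces from the two paragraphs above fit together - this is not an established model:

```python
from dataclasses import dataclass

@dataclass
class NeedGap:
    """Illustrative only: one possible way to express the two dimensions."""
    lead_time_days: float        # from first expressed need until work starts
    delivery_time_days: float    # building and shipping the solution
    reception_time_days: float   # feedback cycles until the customer is happy
    cost: float                  # what the customer spends to get the need met
    unmet_need_loss: float       # wealth lost while the need remains unmet
    customer_roi: float          # wealth the customer gains once the need is met

    @property
    def time_to_satisfaction(self) -> float:
        return self.lead_time_days + self.delivery_time_days + self.reception_time_days

    @property
    def wealth_impact(self) -> float:
        # Only positive if the gain exceeds the cost plus the loss of waiting.
        return self.customer_roi - self.cost - self.unmet_need_loss

gap = NeedGap(lead_time_days=20, delivery_time_days=10, reception_time_days=5,
              cost=50_000, unmet_need_loss=30_000, customer_roi=120_000)
print(gap.time_to_satisfaction)  # 35 (days)
print(gap.wealth_impact)         # 40000 (net gain for the customer)
```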



Rising Need Gap

The need gap grows in multiple ways:
  • When the time between feeling a need and meeting it grows, the need gap grows
  • When the customer's wealth is reduced
As the need gap grows, the customer's self-interest in discovering alternate solutions also grows. 

Reducing the Need Gap

Ideally, there is no "Need Gap" from a customer - by the time a customer realizes a need, they have their need met:

  • When the Need Gap is low, there is little customer benefit to making changes in the way of working.
  • When the Need Gap is high, the customer benefits from making changes that reduce this need gap.
  • When the Need Gap doesn't get reduced, then the customer has no benefit from any change made in the process.
A reduction of need gap implies the goal-focused
  • increase in customer value, 
  • reduction of processing time,
  • keeping development cost lower than the delivered benefit,
  • improvement of value management and discovery,
  • closure of feedback loops,
  • optimization of monetization

Measuring the Need Gap

The Need Gap is quite easy to measure - look at the time between when people first speak of a need until people stop talking about it and how (un-)happy customers are during that time.
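
In practice, something as simple as the following sketch is enough to start measuring. The data here is made up for illustration - the "mentions" could come from support tickets, user research notes or feature requests, whatever source you have:

```python
from datetime import date

# Made-up example data: when a need was first and last talked about,
# plus satisfaction scores gathered while the need was open.
needs = {
    "export to PDF": {
        "first_mentioned": date(2019, 1, 7),
        "last_mentioned": date(2019, 4, 2),   # people stopped asking once it shipped
        "satisfaction": [2, 3, 4, 5],         # e.g. survey scores over that period
    },
}

for need, data in needs.items():
    gap_days = (data["last_mentioned"] - data["first_mentioned"]).days
    avg = sum(data["satisfaction"]) / len(data["satisfaction"])
    print(f"{need}: need gap ~{gap_days} days, avg. satisfaction {avg:.1f}/5")
```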









Wednesday, May 15, 2019

There's no such thing as a DevOps Department

"Can we set up a DevOps Department, and if so: How do we proceed in doing this?"
As the title reads, my take is: No.

Recently, I was in a conversation with a middle manager - let's call him Terry - whose organization just recently started to move towards agility. He had a serious concern and asked for help. The following conversation ensued (with some creative license to protect the privacy of the individual):

The conversation


Terry: "I'm in charge of our upcoming DevOps department and have a question to you, as an experienced agile transformation expert: How big should the DevOps department be?"

Me: "I really don't know how to answer this, let me ask you some questions first."
Me: "What do you even mean by, 'DevOps Department'? What are they supposed to do?"

Terry: "You know, DevOps. Setting up dev tools, creating the CD pipeline, providing environments, deployment, monitoring. Regular DevOps tasks."

Me: "Why would you form a separate department for that and not let that be part of the Development teams?"

Terry: "Because if teams take care of their own tools and infrastructure, they will be using lots of different tools."

Me: "And that would be a problem, as in: how?"

Terry: "There would be lots of redundant instances of stuff like Jira and Sonar and we might be paying license fees multiple times."

Me: "And that would be a problem, as in: how?"

Terry: "Well, we need a DevOps department because otherwise, there will be lots of local optimization, extra expenses and confusion."

Me: "Do you actually observe any of these things on a significant scale?"

Terry: "Not yet, but there will be. But back to my question: How big should the department be, what's the best practice?"

Me: "Have you even researched the topic a little bit on the Internet yet?"

Terry: "I was looking up what percentage of a project's budget should be assigned to the DevOps unit."
Terry: "And every resource I looked at pretty much said the same thing: 'Don't do it.' But that's not what I am looking for."

Me: "Sorry Terry, there is so much we need to clarify here that the only advice I can give you for the moment is to reflect WHY every internet resource you could find suggested to not do it."




Why not?

Maybe you are a Terry and looking for how to set up a DevOps department, so let me get into the reasons where I think Terry took a wrong turn.


  • Terry was still thinking in functional separation, classic Tayloristic Silo structures, and instead of researching what agile organizations actually look like, he was working with the assumption that agile structures are the same thing.
  • Terry fell into another trap: not even understanding what DevOps is, thinking it was only about tools and servers. Indeed, DevOps is nothing more than removing the separation between Dev and Ops - a DevOps is both Dev and Ops, not a separate department that does neither properly.
  • Terry presumed that "The Perfect DevOps toolbox" exists, whereas every system is different, every team is different, every customer is different. While certain tools are fairly wide-spread, it's up to the team to decide what they can work with best.
  • Terry presumed that centralization is by default an optimization. While indeed both centralization and decentralization have trade-offs, centralization has a massive opportunity cost that needs to be warranted by a use case first!
  • Terry wanted to solve fictional problems. "Could, might, will" without observable evidence are probably the biggest reason for wasted company resources. Oddly enough, DevOps is all about data-driven decision making and total, automated process control. A DevOps mindset would never permit major investments into undefined opportunities.
  • Terry was thinking in terms of classical project-based resource allocation and funding. Not even this premise makes sense in an agile organization, and it's a major impediment towards customer satisfaction and sustainable quality. DevOps is primarily a quality initiative, and when your basic organizational structure doesn't support quality, DevOps is just a buzzword.

Reasons for DevOps


Some more key problems that DevOps tries to address:
  • Organizational hoops required to get the tools developers need to do their job: Creating a different organizational hoop is just putting lipstick on a pig.
  • Developers don't know what's happening on Production: Any form of delegation doesn't solve the problem.
  • Handovers slowing down the process: It doesn't matter what the name of the handover party is, it's still a handover.
  • Manual activity and faulty documentation: Defects in the delivery chain are best prevented by making the people with an idea responsible for success. 
  • Lack of evidence: An organization with low traceability from cause to action will not improve this by further decoupling cause and action.

How-To: DevOps


  • If you have Dev and Ops, drop the labels and call all of them Developers. Oh - and don't replace it with the DevOps label, either!
  • Equip developers with the necessary knowledge to bear the responsibility of Operations activity.
  • Enable developers to use the required tools to bring every part of the environment fully under automated control.
  • Bring developers closer to the users, either by direct dialog or through data.
  • Move towards data-driven decision making on all levels: task-wise, structurally, organizationally and strategically. (A small sketch of what this can look like in practice follows below.)
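
To make the last point concrete: data-driven decision making can start as small as a gate in the delivery pipeline that developers own themselves. Here is a minimal sketch - the metrics endpoint, the threshold and the JSON shape are all assumptions for illustration, not a prescription:

```python
import json
import sys
import urllib.request

# Hypothetical endpoint and threshold - substitute whatever your monitoring stack provides.
METRICS_URL = "https://metrics.example.internal/error-rate?service=checkout&window=15m"
MAX_ERROR_RATE = 0.01  # promote the release only if fewer than 1% of requests fail

def current_error_rate() -> float:
    with urllib.request.urlopen(METRICS_URL) as response:
        return json.load(response)["error_rate"]

if __name__ == "__main__":
    rate = current_error_rate()
    if rate > MAX_ERROR_RATE:
        print(f"Error rate {rate:.2%} is above threshold - halting promotion.")
        sys.exit(1)
    print(f"Error rate {rate:.2%} - promoting to the next stage.")
```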



And that is my take on the subject of setting up a DevOps department.




Monday, May 13, 2019

The Management-Coach clash

When an agile coach or Scrum Master is brought into a traditional line organization, there is a high risk of misunderstanding and clashing, oftentimes leading to frustration. Why is that so? Let us explore the traditional expectations towards different roles.


Upton Sinclair said it all with his famous quote:
It is difficult to get a man to understand something when his salary depends upon his not understanding it.
As I have had these discussions multiple times in the past, here is a short diagram to illustrate what causes the clash, which we will explore below:



Asserting authority

Managers get paid to be in control of their domain. When things aren't under control, they are expected to bring them under control. A good manager is expected to be able to state at any time that everything happens according to their provision, and nothing happens that wasn't agreed beforehand.

A consultant's authority is granted and limited by their hiring manager, and the consultant is expected to exercise that authority to drive their specific agenda in service of their manager.

A coach doesn't assert authority over others, and won't drive a predetermined agenda. Instead, a coach helps people realize their own potential and freely choose the course of action they consider best - although clearly within the scope.

And that's where coaches and managers clash:
Managers might think that coaches will exercise authority over the team and thus be sorely disappointed when they see that the coach isn't in control of the team's work, and the coach will be sorely disappointed when management reveals that they think the coach is doing a crummy job for not taking control.


Solution focus

Managers are expected to have a solution, and do whatever it takes to get one. The proverbial statement, "Bring me solutions, not problems" is descriptive here. When higher ranking people within the organization ask a manager about something within their domain, the only appropriate answer is either "This is our solution" or "We are working on a solution."

For consultants, the only appropriate answer is "This is the solution", or maybe "I need x days and then I will give you the solution".

Coaching is an entirely different book: "The solution is within you. I will help you find it." - maybe the coach does have a clear understanding of the solution, maybe not. This isn't even relevant. What is important is that the coach guides others in finding their solution.

And that's where coaches and managers clash:
Managers might think that coaches are problem solvers. With this expectation, either the coach should take proactive steps to install a solution before the problem occurs or immediately resolve any issue once it occurs.
Looking at the flip side of the coin, the coach may argue that they can't work with managers who always look to the coach for the solution and aren't willing to spend time and reflect on the situation.


Getting people to think

In traditional management, managers are considered intellectual, "thinkers". Their job is to think, so that others can do. (Let's ignore for argument's sake that most managers' calendar is so cluttered with appointments that they have zero time left to think.) Statements like "This is above my paygrade" are epitomes for a corporate culture where thinking is considered the responsibility of management.

A consultant has a higher self-interest in getting some people to think - at a minimum, the hiring manager should put enough consideration into the consultant's proposal to understand how their solution will affect the organization, preferably in a beneficial way.

Coaches take another stance: ideally, they encourage others to think by asking probing, critical questions and being skeptical about other people's solutions - in order to help people get beyond preconceived notions and think in wider horizons. The more thinking a coach does for the organization, the less sustainable the solution will be once the coach leaves.


And that's where coaches and managers clash:
Managers might expect the coach to use their free time to think of better ways of doing things and then come back with a clear list of improvement suggestions that just needs to be ticked off.

A coach might not do that, which neither means the coach is lazy nor that the coach isn't smart enough to do so: the coach will have a number of items in mind that need to be worked on, and will help the organization get to these points one by one, when it is ready. Maybe the only thing management sees is that the coach asks a few key questions so that people will think about what's really going on and which systemic changes will improve the situation.


Defusing the conflict

Misalignment of expectations is the main cause of frustration between coaches and management. Early and frequent communication helps. Coaches need to understand how managers tick, what they get paid for and what they expect. Likewise, managers should be cautious about hiring coaches before they understand how a coach works.

For a manager, it's okay to get a consultant rather than a coach, and indeed that may be better than hiring a professional coach and expecting them to work as a consultant. It's also okay to tell the coach to tone down on the coaching and dig more into consulting.

For a coach, it's okay to start a serious discussion about whether coaching or consulting is more congruent with management expectations, and to propose switching into a consulting mode where it fits. It's also okay to tip your hat and take your leave when consulting isn't what you expect to be doing.

It's not okay to get angry and blame one another just because we don't understand each other.


Tuesday, April 30, 2019

Coachability Index

Not all of my coaching engagements were successful, and in retrospect, it was my own mistake not to check for coachability before engaging - and not to address, or at least contemplate, the absence of coachability factors. This article will make it easier for you, whether as coach or coachee, to understand how character contributes to coaching success.


As agile coaches, we deal with people. People who have a unique personality. This uniqueness is what makes us special, what gives us the place in society we hold. In some cases, our personality is helping us - in others, it is our constraint. Every personality deserves respect, although some personalities benefit more from coaching than others.


Let us look at six character traits that increase or decrease the likelihood of benefitting from coaching. Notice that this article is not about success as a coach - it is about the coachee's success!


Openness

How open is the coachee to talking about themselves and their situation? Do they reveal when they're stuck or experiencing setbacks? Coaching is limited to what the coachee is willing to reveal and work on - any area closed off to coaching remains out of scope for improvement. The ethical coach will build a sphere of trust for the coachee to enter - the step of opening up remains with the coachee.

Indicators

Positive Indicator | Negative Indicator
Reveals personal perspective and perception | Constantly insists, "This is not relevant"
Explores context and background | Over-focussing on single points
Puts themselves into the center | Doesn't talk about themselves

Impact on Coaching

  • When there's strong openness, the bond between coachee and coach allows the coach to explore further
  • When there's little openness, most coaching discussions will avoid the real issue. 
  • When there's closedness, even the coach may be led off track by the coachee.

My advice

People have reasons for being closed, and these may be beyond the coach's ability to influence. A coach cannot work with a person who won't open up; that's why it is important to be able to withdraw from coaching when this situation can't be remedied.


Receptiveness

Coaching is a gift to help others grow. A gift not received is waste. Part of coaching is that the coachee needs to be willing to receive criticism, feedback and even pushback. The coach is just one of many people to offer feedback and may occasionally criticize the coachee on their progress. If the coachee ignores such input, soft-pedals it or even justifies their current stance, coaching efforts will evaporate.

Indicators

Positive Indicator | Negative Indicator
Actively solicits feedback | Thinks they are in a position to give, but not receive, feedback
Mentions feedback or criticism they have received | Blames people who criticize the coachee
Inquires the reasons for criticism | Questions the motives of any criticism
Accepts any feedback as an opportunity | Brushes off feedback


Impact on Coaching

  • When there's strong receptiveness, there's a big window of opportunity for meaningful change.
  • When there's weak receptiveness, most coaching discussions will avoid the real issue. 
  • When the coachee rejects input outright, coaching might be considered a threat rather than an opportunity.

My advice

Many people have un-learned this kind of listening, especially people in strong blame-oriented hierarchies. Find ways to go "beyond blame" and beyond deflection - or you might as well end the coaching.


Awareness

We don't live in a vacuum. Our mere existence affects those around us - be it in action, inaction, word or silence. We can't "not communicate", and every communication has an impact. How aware is your coachee of their impact on their environment? Do they put effort into understanding the consequences of their words and deeds?

Indicators

Positive Indicator | Negative Indicator
Cautious of their effect on others | Thinks it's always "the others"
Understands that even staying silent has an effect | "Not criticizing is already enough praise"
Expresses approval of positive events | Finds reasons to criticize even positive events
Realizes how their behaviour affects others | Willfully oblivious to existing feedback loops


Impact on Coaching

  • When there's high awareness, the coachee can benefit tremendously from active feedback.
  • When there's low awareness, some consequences of change may escape the coachee.
  • When there's obliviousness, the coachee has no mirror to look into.

My advice

Almost everyone is born with a good amount of awareness, some even with too much. Many have been conditioned to reduce their own awareness. In many cases, the position or communication structure of an organization offers an advantage to people who reduce their awareness. Building sufficient awareness is often a precursor to specific coaching on special topics, such as agile mindset and behaviour.
Too much self-awareness can lead to "analysis paralysis". In that case, it's important to encourage experimentation with feedback and understanding trial and error as an opportunity.


Perspective

There's more than one way to see things. Everyone has their perspective - but can we change it? Limiting ourselves to one single perspective may blind us to reality. A change of perspective allows us to consider different viewpoints, ideas and even conflicting information - and integrate the useful bits with our own for new opportunities.

Indicators

Positive Indicator | Negative Indicator
Asks, "How do you see this?" | Excessive use of terms like "No" or "But"
Invests time to learn about different perspectives | Dismisses others' perspectives as flawed
Empathy | Harshness
Receives different, even disagreeing, sources of information | Reduces information intake to single channels

Impact on Coaching

  • When the coachee is well able to integrate new perspectives, there will be a broad field for coaching
  • When the coachee prefers to maintain a limited perspective, growth potential is equally limited.
  • When the coachee actively rejects any perspective other than their own, there will be no significant change.

My advice

People who take a limited perspective choose to restrict their world view. As Einstein said, "Problems can not be solved from the same level of consciousness as they were created." - when you can't reach a different level of consciousness, that's the end of meaningful change already.


Graveness

"Let's cut beyond the talk and get down to business: How am I contributing to the problem? What should I change? With whom should I talk? How do I know I'm making progress?"
Cut beyond the smalltalk. Actions speak louder than words. As coach, you don't need the coachee to assure you of anything - the coachee is both the driver and beneficiary of change.

Indicators

Positive Indicator | Negative Indicator
Gets down to the core of the issue | Beating around the bush
Assumes personal responsibility | Makes excuses
Feels personally accountable | Deflects from issues
Has an action plan | Constantly winging it
Decides, Acts, Receives | Indecisive
Tracks own progress | Inactive

Impact on Coaching

  • When there's graveness, change can be fast and usually effective.
  • When the coachee is overly relaxed, results may be considered optional.
  • When coaching is considered a laughing stock, this will become a self-fulfilling prophecy.

My advice

When people aren't serious about making a change, they might have reservations. Do they not see how they benefit from more agility or don't they see how their contribution makes a difference?
Not all people want coaching. Respect that. Make the issue transparent and offer quick closure.

Commitment

The coachee has to be committed to grow as a person in whatever dimension they want to be coached in. As coaching is a guidance on another's journey, when we push or drag the coachee, little good will come out of it.

Indicators

Positive Indicator | Negative Indicator
Has a goal to improve towards | Isn't interested in defining personal goals
Discontent with status quo | Feels that everything is okay
Sees setbacks as learning opportunity | Blames external factors for setbacks
Strives to overcome shortcomings | Doesn't learn from failure
Trying and Struggling | Reassurance without action


Impact on Coaching

  • When there's strong growth commitment, you can lead meaningful discussions about "Why" and "How" that may lead to significant change.
  • When there's weak commitment, the translation of coaching into results will be low. 
  • When there's an explicit lack of commitment, there's a high risk that the coach will just be the next person on a long list of people the coachee isn't happy with.

My advice

When you see a lack of commitment, start the conversation right there. Unless the situation can be improved, coaching will be a waste of everyone's time.




All in all

The "perfect coachee" would be strong in all six dimensions. I have worked with such people, and in each case it's a pleasure and an honour to serve them as a coach.

Normal coachees aren't perfect in the above sense. Nobody would expect them to be. If someone is weak on a handful of factors, this could be a great topic for a conversation. In some cases, coachability can be boosted just by making the issue transparent and offering solid reasons for change.
In other cases, the coaching itself will be restricted or impeded by the missing factors.

And some people just aren't open to coaching. I have made the mistake of trying to "convert" them in the past. That caused frustration on both sides. As a coach, it's good to be clear about why we're there and how our coachees can benefit from coaching. If that isn't received, offer the coachee the option to opt out - leaving the door open to resume later.

And finally - don't coach people against their will. That always ends up with bruises.

All of this applies equally when coaching individuals, teams or entire organizations - as teams and organizations are composed of multiple individuals.

Friday, April 26, 2019

Agile Architecture

What should an Architect in an Agile environment do?

Irrespective whether we have a single person as "assigned architect" or have architecture as a shared responsibility of (one or more) teams - there's a massive difference between "Emergent Architecture" and "Architecture by Chance". 
The one thing worse than having the wrong goal is not having a goal at all - and that's true for architecture as well as for the product from a customer's perspective, so let's discuss a bit of what the scope of agile architecture is.

Note: This article is my current, personal view on the topic and I sincerely invite feedback and criticism if I'm off.






Architectural Intent

Architecture may or may not have an intent. Such an intent is usually to create an overall system that minimizes cost or effort for one, more or all involved parties.
This intent can be formulated either in the short, mid or long term. If it isn't formulated at all, the probability quickly diminishes that the optimization goals of the organization are indeed met.

Optimization goals

Typical optimization goals for architecture could be, for example:
  • Minimize Cost of Change
  • Minimize Time to Market
  • Minimize working hours
  • Organizational autonomy
You're free to pick your own, based on your specific circumstance.

Organizational Constraints

Sometimes, we also operate under constraints, which may need to be considered as well. 
Examples might be:
  • Regulatory compliance (e.g. privacy laws)
  • Technological limitations (e.g. company policy)
  • Security constraints (e.g. contractual demands)
These restrict the solution space and should be handled with caution - too many suffocate innovation, while missing any essential ones can kill the product! (as seen in the recent Hertz vs. Accenture case)

Not Strategic Architecture

Although the following activities are indeed part of architecture, these topics are not strategically relevant from an architecture perspective: the more fluidly they can shift throughout a product's lifespan, the better other optimization goals can be met.
  1. Formulating, planning and forecasting system capabilities
  2. Defining interfaces, processes and data models
  3. Component delivery strategy and component responsibility
  4. Deciding on specific technologies to solve certain problems
While these things are relevant, agile teams can decide them on a per-item basis, and they can indeed evolve and change dynamically when the knowledge and skills to do so are available in the team.


Architectural Direction

Some core questions may be worthwhile answering upfront. These are directional rather than content-wise and can be answered with "Yes" or "No":
  1. Do we want to have Loose Coupling?
  2. Do we want to enable Micro-Releases?
  3. Will we go xAAS?
  4. Do we need scalability?
Note that I'm not talking about big architecture upfront here - even for my company homepage, I answered these questions by taking a couple of minutes to orient myself on what I want to achieve before writing the first line of code.


Architectural vision

A vision that nobody can remember is worthless, as is a vision that doesn't help in orienting activities. Using the above questions, a vision statement could be formulated like:
"To develop a system where changes can be done (and undone) at any time within seconds, we create loosely coupled components that can be independently released as autonomous services which can be flexibly scaled."
Then comes the difficult part: getting there.
And that's more difficult on a "brown field" than it is on a green field.
In order to get there, we need to have some mechanisms in place, which may or may not be helpful depending on where we want to be.



Core paradigms

We may need to actively invest effort into enabling and establishing some core paradigms for an equally robust and flexible architecture.

These paradigms could be, for example:

  • Continuous Integration/Delivery/Deployment
  • Automate Everything
  • Measure Everything
  • Test-Driven Design
  • Impermanent Components
  • You Build it, you run it (“true DevOps”)
This means that every architectural decision made by the teams can be benchmarked by whether (or not) an action, decision or choice brings us closer to, or distracts us from, these paradigms.

As such, we may add to our architectural vision further statements like the following:
Every component uses a Continuous Delivery process, and where possible, even Continuous Deployment. 
Every activity within the pipeline and on the production environment is automated. 
We have real time data on every aspect of the system and therefore full transparency on its behaviour. 
Code that hasn't been designed based on executable tests doesn't go live. 
Replacing components is a low-effort task. 
Developers have both responsibility and authority regarding production.

All of these paradigms are still outside the specific solution space and do not answer the "How".



Architectural roadmap

After we have agreed on where we want to go, we need to actually go and do it. Since our customers don't pay us to do architecture, but to deliver something of value, these actions should be seamlessly integrated with our development efforts, ideally experimenting with practices and introducing tools while we're delivering tangible value.

If we're in a Green Field, setting up a CD pipeline and delivering an automatically tested "Hello World" into a measured, containerized production environment may bloat up the first user story by a day or two, and then speed up everything that comes afterwards.

On a Brown Field, that's an entirely different story - we can't just do this so easily. What if we have 16 million lines of unsustainable code riddled with defects? CD would be insanity. There's no data from Production, nobody knows the current state of the deployment environment and extracting a single feature from the existing monolith could take months.

We then get into an architectural action roadmap that might look like this:


  1. Short term goals
    1. Automate build process & set up CI pipeline.
    2. Agree with stakeholders to accept current "status quo" as baseline quality for CI/CD.
    3. Every new line of code gets a unit test.
    4. Build metric hooks and set up a measurement system (see the sketch after this list).
    5. Build service API for core monolith.
  2. Mid term goals
    1. Introduce Coding Conventions and add measurement to CI pipeline
    2. Stop the current branching process and move towards genuine CD process
    3. Automated deployment to Test Stage
    4. Automate e2e tests for critical user journeys
    5. Decouple frontend and backend.
  3. Long term goals
    1. Build new capabilities as autonomous services.
    2. Containerize services.
    3. Extract service components from existing monolith.
    4. Move towards polyglot development.
    5. Move individual services to Cloud.
    6. Build Security Services (IAM, Secrets-as-Service, etc.)
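
To illustrate just one of these items - the "metric hooks" from the short-term goals - here is a minimal Python sketch. Everything in it (the function names, the print-based metric backend) is a placeholder to show the idea, not a prescription:

```python
import time
from functools import wraps

def publish_metric(name: str, value: float) -> None:
    # Placeholder backend: replace with StatsD, Prometheus or whatever you set up.
    print(f"METRIC {name}={value:.3f}s")

def measured(func):
    """Decorator that reports how long each call takes - a simple 'metric hook'."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            publish_metric(f"{func.__module__}.{func.__name__}.duration",
                           time.perf_counter() - start)
    return wrapper

@measured
def place_order(order_id: str) -> None:
    ...  # existing code stays untouched; only the hook around it is new
```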

These items can then be added into the Product Backlog and prioritized as needed. The authority and autonomy on how to do this specifically still rests within the development teams.

Any architectural activity that doesn't trace back to a goal may be questionable. Any architectural goal that doesn't trace back to a paradigm invites scrutiny - and if anything doesn't match intent, another discussion may be in order.

Architectural responsibility

None of the above items mandate having a specific "architect" role, although in larger systems it may indeed be worthwhile to have at least one person take responsibility for ensuring that there is clarity on the architectural:
  1. Vision
  2. Intent
  3. Direction
  4. Goals
  5. Paradigms
  6. Ongoing Activity

Wednesday, April 17, 2019

Minimal SAFe

"It's bloated, it's big, it generates massive overhead. And it cements status quo"
I claim - this isn't SAFe as intended, it's just how it is often understood. 

In this article, I want to share my experiences on what a minimal SAFe implementation actually is.

To set context: you don't use SAFe for single teams. You only use SAFe when you have multiple teams. The mechanisms of SAFe are intended to create overarching transparency, alignment, ability to act and quality. If you have no issues with any of these, then SAFe just isn't for you. Simple as that.

In this article, a number of assumptions are stated as if they were facts. Everything could be put to scrutiny, but doing so would just bloat the article.

Minimal Organization

Contrary to what the SAFe big picture suggests, SAFe is actually very lean on process. Take away the SAFe events, and there is nothing left that wouldn't need to be done anyways.
SAFe doesn't mandate anything to be done for the sake of SAFe. Everything SAFe has to say on a process level is merely guidance for unsolved problems that you may be having. If you have no specific problems, feel free to use whatever process suits you.

Team organization

SAFe has very little to say on how teams should work. SAFe is fine with teams using XP, Kanban and Scrum - and also with Waterfall or any custom approach the teams consider effective.

Problems only happen in reverse: Teams used to semiannual releases will struggle to meet SAFe's requirement of producing and demonstrating a fully integrated System every other week.
Development teams with a good working mode and solid Engineering Practices will find a good SAFe implementation to be nearly invisible, except for PI Planning and the System Demo, where they interact with others contributing towards the same goal.

Program organization

At the minimal extreme, SAFe has very little program level activity: Common goals, shared Backlog, addressing and resolving systemic issues. That's it.
Program level activity might be done in less than an hour a week, and the Program organization might just be three people: Product Management, Release Train Engineer and System Architect.
Not much overhead for 50-150 developers, if you ask me.

The reasons why Program Level activity often takes so much time tend to be:

  1. Organizational impediments galore
  2. Teams don't collaborate properly
  3. Mindset and behaviour at Program Level isn't Lean-Agile

All three reasons should constantly trigger the question, "How can we do things differently so that we (as a Program Team) need to do less work?"

Minimal Roles

SAFe has three "Specialist" Roles that aren't shared with other frameworks. Does that necessarily add head count and complexity?

Product Management

If you have one product and multiple teams, you need to be able to deliver a single, Integrated, Working Product - so the work of each team should contribute to a single Whole.

Someone needs to define what the Product is. SAFe calls this "Product Management". The requirements of Product Management are:
  1. A single Product Vision
  2. A prioritized Backlog of unharnessed Value
  3. Aligning priorities with the organizational strategy
  4. A common plan aligning the individual team goals
  5. Taking responsibility when this plan is in danger
The function can be assumed by the sole Product Owner, by one of multiple Product Owners, or even by all of them together.
If we have 4-6 teams and a single Product Owner who also assumes Product Management responsibilities, SAFe might hint that this PO may have too little time for each team - but it's still SAFe.

If you want to have a single PO who also assumes PM responsibility, that's still SAFe. No overhead needed.

System Architect

The role of the System Architect is probably the most disputed in SAFe. Let's forget the label for a minute and just say, "It's generally a good idea if you have someone whom you know you can ask if there are technical issues that are bigger than the team".
Does that someone need to decide what the architecture looks like? Do they need to actively propose a solution? Let's be flexible.
Maybe that someone just feels responsible for rallying architecture-focused developers from the teams around a common architectural goal - a community spokesperson who has the backing of the other developers to represent them, preventing time-consuming, low-value large-group discussions.

Rather than asking, "Should we have a System Architect?", I would like to ask: "What happens if we have multiple teams and nobody who takes responsibility to bring developers together, nobody who can speak with the Product Owner about Technology in a language they understand?"

If that somebody is a senior developer who is part of a team, that's still SAFe. No overhead needed.


Release Train Engineer

Self-Organization is a great thing, until developers realize that multi-team collaboration often leads to overarching conflicts and local optimization.
Someone should make sure that global optimization happens, teams talk to each other, global issues become visible and systemic issues get addressed at system level.

This "someone" may well be a Scrum Master or a line manager - the important portion is that the RTE is able to take a neutral perspective, doesn't go into the work when big picture reflection is needed and has the standing in and backing of the organization to pull the big levers for change.

If someone is already doing this, you have found your Release Train Engineer. No overhead needed.

Minimal SAFe Events

Probably the second biggest complaint about SAFe, after role mania, is meeting madness. So let's address why these meetings exist and what the minimal overhead is.

PI Planning

The 2-day PI Planning with all of its content is one of the main "fear factors" for agile teams who have gotten used to lean, agile planning. Here's the deal: Nobody said the PIP needs to be 2 days. And nobody says it must have a 10-point, 16-hour agenda. Those are just suggestions.

What is essential is that teams consider not only their own next iteration, but also:
  • Their mid- and long-term goal
  • Their recent progress towards their goal
  • Their next short- and mid-term goals
  • Their mutual contribution towards each other
So, let's again ask in reverse: "What happens if teams working on a common product don't do these things?"

The PI Planning is not optional. What is, indeed, open to discussion are the questions of "How" and "How long". 

Is the PI Planning an additional event on top of other events teams have? Indeed. This is the one meeting that many teams fail to have, and it's also the reason why so many organizations fail to make the jump from multiple disjoint, erratic Scrum islands towards a "Team of Teams" which is aligned on a common goal. 

The PI Planning's default is 2 days per 3 months, which is just about 3.5% of development time. If reduced to one day, that number shrinks below 2%. A bearable amount of meeting time to see the big picture, synchronize, align, sort out some misgivings and handle whatever else may be better done in person than via e-mail.
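
A quick sanity check on those percentages, assuming a quarter of roughly 57-65 working days (depending on holidays and vacation):

    2 days / ~60 working days ≈ 3-3.5% of the quarter
    1 day  / ~60 working days ≈ 1.5-1.8% of the quarter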


System Demo

How SAFe's System Demo is conducted is very flexible. It can be highly interactive, it can be an upfront show by Product Management or it can be each team having a "science fair" showing their work. What's not optional, however, is to produce an integrated system increment. The Demo then reveals whether the produced increment is indeed both usable and valuable.

"The facts are friendly, every bit of evidence one can acquire, in an area, leads one much closer to what is truth" - Carl Rogers
The System Demo generates these friendly facts, and there's nothing worse than either not seeing the outcomes within their big picture or holding fast to the fixed belief that somehow, things magically will work out.

A System Demo can take anywhere from half an hour to a few hours (with preference given to the shorter timespan) and the more often Demos happen and the more people are involved, the higher the level of transparency. Demos are never a waste - if they are used as a learning and feedback opportunity.


Systemic Inspect+Adapt

This event is where the Release Train Engineer, management and Scrum Masters try to find solutions to systemic problems encountered in the PI. 
Without going into too much detail: if you're not getting at least a tenfold Return on Investment of time spent versus problems solved, you're doing it wrong.

The Systemic I+A is another element oftentimes missing in Scrum implementations - a group of people having a dedicated slot in the calendar where impediments outside an individual team's sphere of control are addressed and resolved.

While the meeting itself may be invisible to developers, they are cordially invited to contribute known impediments - and to provide feedback on the effectiveness of solutions.


Scrum of Scrums

At frequent intervals, teams should synchronize around the questions, "Are we on track?", "Are we impeded?" and "Do we need help from someone?" - this meeting can be conducted by anyone, although the Scrum Masters should possess this information, so what's better than letting them align?

A SoS for a dozen teams can be over within 5 seconds if everyone says "We're doing great", and it can be done in a minute if the outcome is "We need to get together" or "We're challenged, but working on it with <others>". At least the communication should happen so that others know if they can chime in.

Once you have a communication channel where people can deal with issues bigger than one team, SAFe's minimum requirement is met.

SoS, like a Daily Scrum, can be over in a minute and should be over within 15 minutes. It shouldn't affect developers' schedule at all.

PO Sync

As soon as you have multiple Product Owners, they need to align frequently. Synchronization on a product level should happen at least once per iteration. That sync can happen before, as part of, or after the Review (which in SAFe terminology is called the Demo).

If your Product is aligned around a shared, common goal, you meet the need of the PO Sync.

This becomes redundant in a Single-Person construct as people tend to be aligned with themselves (and if not, you have other issues that SAFe can't address, anyways).

The PO Sync takes as much time as it takes to keep the product aligned. This can be anywhere from a few minutes to a couple of hours. It can happen without anyone outside Product Ownership or Product Management even noticing.



Minimal Plan

"But Michael, SAFe is still Big Upfront Design with quarterly releases", may be the biggest complaint. I claim: "Not if you're doing it right!"
What SAFe mandates in PI Planning is sharing a midterm vision, taking a good look at upcoming features - defining and aligning common goals. There's a massive leeway in what this means. 

Argue as much as you want, developers want to know if they'll be developing websites or washing machines 2 months down the line. And business also needs to know if they're investing in software or tomatoes.
We're not talking whether we will color that button red or green eight weeks down the line. That would be a complete misunderstanding of SAFe's PI Planning.


Business Context

What's the Big Goal, the headline for the next three months? Where do we want to be?
"Go Live", "Hit 1 Million Customers", "Serverless", "Seamless Integration" - that's the Program goal for the next quarter, the "Business Context". 
Additional fodder might be market reception or user feedback, but those are just details.

If you need font size 6 to describe the goal on a single Slide, you're definitely over-engineering.

Features

Features are not containers of stories. They are hypotheses of business value, which we either obtain or not. If we don't know how we want to generate value, how will we know whether we did? SAFe isn't about nerdgasm, it's about sustaining and growing a business!

How many features should we have? Well - a minimum of one. Otherwise, that would mean "We have no idea how to create value within the next three months!". If we have 50 features, we're also doing something wrong, because that translates to, "We don't know what's important."
A set of 5-10 features may already be more than plenty for a couple months. 

How upfront do features need to be? Given that a value hypothesis can become invalid or be replaced with a different hypothesis at any time, there's definitely an argument that overplanning can lead to waste. 
Usually, feature waste is caused by poorly defined or overly specific features.
If your feature is, "ACME Corp. platform integration - transaction volume $3m per month", then there is sufficient leeway in the details to not have to replan when something doesn't work as intended or the value is obtained in a different way than we initially assumed.

Stories

Why do we plan stories in advance? To get a common understanding of what we're up to, which problem(s) we're solving and whether that's even feasible within the time we have.

Many people don't realize: SAFe doesn't prescribe defining all user stories in PI Planning to fill 6 iterations. Instead, SAFe operates with a fuzzy planning horizon: We plan 80% of our capacity for Iteration 1, 60% for Iteration 2, 40% for Iteration 3 and 20% for Iteration 4 as "Commitment", that is, with work which has high predictability, necessity and value. In terms of fully planned Iterations, that's just 2 full Iterations - the same planning horizon Scrum would encourage, except taking into account that uncertainty increases over time and it may be a while until we can replan face to face.
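
Spelled out: 80% + 60% + 40% + 20% = 200% of a single iteration's capacity - the committed work adds up to the equivalent of two fully planned iterations, spread across the PI.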

The remaining time in a PI is "Stretch", that is, we create an understanding, based on today's information, of what is the most plausible thing to do. Depending on how flexible demand in your organization is, Stretch can contain anything from "We'll keep it open for stuff we don't know about today" through "We would do these things if nothing more important shows up" all the way to "We have lower priority things that still need to be done eventually, so let's plan them now - if it doesn't get done, we'll replan later."

Stories are specific user problems - not individual tasks. SAFe specifically discourages planning tasks upfront, as this may distract people from focusing on the value at hand.

Objectives

The big mistake many teams make is using Business Objectives as mirrors of features or containers of pre-planned stories. Objectives are a strategic tool to help align both within the team and around the team. It's not even necessary to have a new objective for each iteration, but it's essential to have a goal. You start with a mid-term goal and break it down into achievable steps, ideally 1-2 iterations - but 3 can also be fine. And you shouldn't focus on team-specific objectives: your objectives should be collaborative, otherwise you're doing it wrong.

Why do we need objectives? To see whether we got where we wanted to be, and to help others align with us without needing to drill into the minute details of each story.



Minimal SAFe Plan

Summing up the entire section on Planning - SAFe suggests that you should have:
  • A business goal to move towards in the next couple months.
  • A clear understanding of how you want to produce value.
  • A decent understanding of the upcoming user problems you want to solve that fill 2 Iterations.
  • Aligned objectives that enable frequent success validation.
  • Enough flexibility to allow that at least half of your plan can go wrong or change.
If you have all these elements, you have a SAFe plan.



Being Minimally SAFe

You are Minimally SAFe when:
  • You have overarching transparency across all teams,
  • The teams are aligned on common goals,
  • You are able to act on issues bigger than a single team, and
  • You deliver an integrated product with sustainable quality.

All those points are reasonable - and SAFe provides means and mechanisms to attain them.

How much effort do you need to invest into those? That depends on where you're coming from. 
If you couldn't achieve these things in the past - it might be a long journey.
If you've set the right levers already - very little.

The cost of Minimal SAFe

If you're already agile on a team level, SAFe might just require 2-3% developer capacity and roughly 5% from Product Owners and Scrum Masters on top of what each single team would invest.
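
One possible back-of-the-envelope for that developer figure - the meeting lengths are my assumptions, not SAFe prescriptions:

    1-day PI Planning per quarter (~60 working days)        ≈ 1.7%
    30-60 min System Demo per 2-week iteration (~80 hours)  ≈ 0.6-1.3%
    Total                                                   ≈ 2-3%

Scrum of Scrums, PO Sync and Inspect+Adapt land mostly on Scrum Masters and Product Owners, which is where their additional ~5% comes from.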

Now, compare this to the cost that would be incurred if any of the above four items is not met.



Saturday, April 13, 2019

Design Choices - is it really BDUF vs. Agile?

Do we really end up with Chaos or randomness if we don't design upfront?
The question alone should be enough to suggest that the answer isn't so straightforward. Indeed, it would be a false dichotomy - so let's explore a few additional options, to create choice.


Let's tackle the model from left to right:

Upfront Design

The epitomal nemesis of agilists, "Upfront Design" (UD) is the idea that both the product and its creation process can be fixed before the product is created. This approach is only useful when the product is well known and there is something called a "Best Practice" for the creation process.
In terms of the Cynefin framework, UD is good for simple products. A cup, a pen - anything where it's easy to determine a cost effective, high quality production approach. This is very useful when the creation process is executed billions of times with little to no variation for an indefinite time.

Normally, Upfront Design isn't a one-time design effort. There are extensive prototyping runs to get the product right and, afterwards, to optimize the production process before real production starts, as later adjustments are expensive and may yield enormous scrap and rework. All of this upfront effort is expected to pay off once the first couple million units have been sold.
The problems of this approach in software development are closely related to ongoing change: There is no "final state" for a software product except "decommissioned".
Your users might just discover better options for meeting their needs in certain areas of the product - would you want to force them to make do with a suboptimal solution, just because you've finished your design?

Likewise, a new framework could be released that might cut development effort in half while providing better usability and higher value at better quality: Wouldn't you want to use it? UD would imply scrapping your past efforts in the light of new information and returning to the drawing board.

Industrial Design

In many aspects, Industrial Design (ID) is similar to Upfront Design - both product and process design are defined before the creation actually starts. There is one major difference, though: the design of the product itself is decoupled from the design of the product creation process. This creates leeway in product creation based on the needs of the producer.

ID accounts for specifying the key attributes of the product upfront and leaving the creation process open until a later stage. The approach acknowledges that the point in time when the "What and Why" is determined may be too early to decide on the "How". In implementation, ID allows for "many feasible practices", from which appropriate "best practices" are chosen by the process designer to optimize meta attributes such as production time, cost or quality.

Glancing at the Cynefin framework, ID is great for complicated, compound products assembled from multiple components in extensive process chains. Take, for example, a cell phone, a car or even something as simple as a chair - all commodities which get mass-produced, albeit with a limited lifetime and in limited quantities.

The challenges of ID in software development are similar to those of UD: there is no "final state" for software - and by the time it gets into users' hands, requirements might already have changed.

While for most industrially created products, the economic rule "Supply creates its own demand" applies and marketing and sales efforts can be used to make the available product attractive to an audience - Enterprise Software Design is about creating a product for one specific audience that doesn't like to be told what they're supposed to need!

Most corporate software development still happens in a similar way, although in many cases, the creation process has been defined and standardized irrespective of the product, with the assumption that all software is the same from a development perspective: This might be akin to assuming that a gummy bear follows the same creation process as a car or a chair!
Software development doesn't produce the same software a million times - we create it once and use it. And the next product is different. If indeed your company pays developers to develop the same product time and again, I have a bridge to sell you ...


Emergent Design

Emergent Design (ED) is buzzword terrain, as different people have a different understanding of what this means, and while advocates of UD/ID might call ED "No design", we likewise see people who indeed have no design call their lack of structured approach ED. This muddies the waters and makes it complicated to understand what ED is really about.

From a Cynefin framework perspective, ED is a way of dealing with the chaotic Domain in product development - when neither the best solution nor the implementation method is known or constant! ED is what typically happens during prototyping, when various options are on the table and, by creating something tangible, appropriate design patterns start to emerge.

ED requires high discipline, as systematic application of different design approaches helps reduce both the time and effort of establishing useful designs - yet the designers need to fully understand that, at any time, following a pattern may lead to a dead end and the direction needs to be changed. ED therefore requires high skill in design, quick forward thinking, as well as massive flexibility in backtracking and changing direction.

ED accounts for constantly changing, evolving and emerging customer needs, a near infinite solution space and the inability of any person to know the next step before it has been taken.

Modern software development infrastructure is a showcase for ED: It takes minutes rather than days to change a design pattern, add, modify or remove behaviours and get the product into users' hands. The cost of change on a product level with good engineering practice is so low that it's hardly worth thinking upfront whether a choice is right or wrong - a design choice can be reverted cheaper than its theoretical validation would be.

The downside of ED is that neglect in both process and product design can slow down, damage and potentially outright destroy a product. It is essential to have a good understanding of design patterns and be firm in software craftsmanship practices to take advantage of Emergent Design.


Integrated Design

With Integrated Design (IntD), the focus shifts away from when we design to how we design. While UD and ID assume that there are two separate types of design, namely product and process - ED assumes that product design patterns need to emerge through the application of process design patterns.
IntD takes an entirely different route, focusing on far more than specification - it doesn't just take into account meeting requirements and having a defined creation process.

Peeking at the Cynefin framework, Integrated Design is the most complex approach to design, as it combines several domains into a single purpose: an overall excellent product.

IntD interweaves disciplines - given Cloud Infrastructure, development techniques such as Canary Builds are much easier to realize, which in turn allows the delivery of experimental features to validate a hypothesis with minimal development effort, which in turn requires production monitoring to be an integral part of the design process as well. The boundaries between product design, development, infrastructure and operation blur and disappear.
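
As a small, purely illustrative sketch of how these concerns intertwine in code - the flag percentage, the checkout functions and the metric sink are all invented for this example - a canary-style feature flag routes a slice of users to an experimental variant and records the outcome, so the value hypothesis can be judged from production data:

    import hashlib

    # Illustrative stand-ins for the real implementations and the monitoring pipeline.
    def classic_checkout(cart):   return {"variant": "classic", "items": len(cart)}
    def one_click_checkout(cart): return {"variant": "one_click", "items": len(cart)}
    def record_metric(name, value): print(f"metric {name}={value}")

    CANARY_PERCENTAGE = 5  # assumption: expose the experiment to ~5% of users

    def in_canary(user_id: str) -> bool:
        # Deterministic bucketing: the same user always sees the same variant.
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return bucket < CANARY_PERCENTAGE

    def checkout(user_id: str, cart: list):
        variant = one_click_checkout if in_canary(user_id) else classic_checkout
        result = variant(cart)
        # Production monitoring feeds straight back into the design process:
        # the recorded outcomes decide whether the hypothesis holds.
        record_metric("checkout.variant", result["variant"])
        return result

    if __name__ == "__main__":
        print(checkout("user-1234", ["book", "pen"]))

In a real setup, the metric would feed your monitoring stack and the flag would live in configuration rather than code - but the blurring of product design, development and operations is already visible in these twenty lines.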

Integrated Design requires a high level of control over the environment, be it a requirement modification, a software change, a process change, user behaviour change or a technological change.
To achieve this, practitioners of IntD must master product design techniques, software development practices, the underlying technology stack including administrative processes, monitoring, and data science analytics. I call the people who have this skillset "DevOps".

"Any sufficiently advanced technology is indistinguishable from magic" - Arthur C. Clarke
While master DevOps do all of these things with ease, people unfamiliar with any of the above techniques may not recognize that a rigorous approach is being applied and ask for a more structured, distinguishable approach with clear dividing lines between product design, process design and system operation.
Rather than accepting the near-magical results produced by such a team, they might go on a witch hunt and burn the heretics at the stake, destroying in the process the product built in this fashion.


And then, finally, we still have the default option, oftentimes used as the strawman when advocating each of the other design approaches:


No Design

While seemingly the easiest thing to do, it's myopic and fatal to have No Design. No reputable engineer would ever resort to applying No Design, even if they know their design might be defective, expensive or outright wrong.

The list of reasons why No Design isn't an option goes a long way, including - but not limited to:

  • You don't know whether you achieved quality
  • You're continuously wasting time and money
  • You can't explain why you did what you did

The problem with No Design is that any design approach quickly deteriorates into No Design:

  • A UD software product tends to look like it had No Design after a couple of years, when people have constantly added new features that weren't part of the initial design.
  • When ID products change too much, their initial design becomes unrecognizable. The initial design principles look out of place and lacking purpose if they aren't reworked over time.
  • Applying ED without rigour and just hoping that things work out is, as mentioned previously, indeed, No Design.
  • While IntD is the furthest away from No Design, letting (sorry for the term) buffoons tamper with an IntD system will quickly render the intricate mesh of controls ineffective, to the point where an appearance of No Design indeed becomes No Design.


tl;dr

I have given you options for different types of design. The statement, "If we don't do it like this, it's No Design" is a false dichotomy.
This article is indeed biased in suggesting that "Integrated Design" is the best option for software development. This isn't a universal truth. An organization must have both the skillset and the discipline to apply ED/IntD techniques, otherwise they will not operate sustainably.


  • No Design is the default that requires no thinking - but it creates a mess.
  • Full-blown Big Design Upfront is a strawman. It rarely happens. Almost all software has versions and releases.
  • Industrial Design is the common approach. It nails down the important parts and gives leeway in the minute details. 
  • Emergent Software Design requires experience and practice to provide expected benefits. When applied haphazardly, it might destroy the product.
  • Integrated Software Design is a very advanced approach that requires high levels of expertise and discipline.