Showing posts with label Cargoism. Show all posts

Tuesday, April 2, 2024

"Agile" is not sacred

Some people believe that "Agile" or the ideas propagated by Agile practitioners should not be criticized. They view such criticism as a lack of understanding of Agile or disrespect for Agilists. I disagree. Let's delve deeper into this matter and explore how criticism intersects with science, reasoning, and growth to bring Agile principles to life.
No religious worshipping of Agile!

Agile is not a Religion

It's tempting to defend Agile against criticism. But this turns a pragmatic, empirical approach into religious zealotry: we shouldn't hold a Holy Writ or Prophets over observable truth and evidence. Agile is not dogmatic. It thrives on openness, interaction, and an objective dissemination of plausible ideas. Treating it as an unassailable dogma turns this dynamic way of doing the best thing possible into a cultic practice.

The Cult of Dogma

Trying to staunchly defend "Agile" against critique creates a paradox: Agile, by design, thrives on openness, interaction, and the pursuit of better solutions. Turning it into a dogma stifles progress and growth. Dogmatism, whether religious or ideological, resists questioning and dissent. Agile, however, would welcome both.

Agile is Dynamic

"Agile" isn’t etched in stone: it’s living, evolving. Its core values highlight the need for continuous reflection and adaptation. Agile practitioners must embrace critique as a catalyst for growth.

Science and Agile: Kindred Spirits

"Agile" was conceived out of frustration with heavyweight project management methodologies that led to more failures than successes. Its founders sought an alternative that valued interaction, collaboration, flexibility, and responsiveness. That's the opposite of religious dogma: Agile doesn't demand unwavering faith. Instead, it encourages empirical experimentation and adaptation.

A place for skepticism

Scientific progress hinges on skepticism, curiosity, and the willingness to challenge prevailing theories. Agile shares this spirit. When practitioners question assumptions, experiment, and learn, they embody the scientific mindset. Agile’s empirical approach encourages us to scrutinize practices, discard what doesn’t work, and refine what does. It’s a departure from dogma, where adherence trumps evidence.

Welcome Valid Critique

An idea or practice that can't withstand criticism is inherently flawed. Rigorous examination sharpens our tools. When criticism arises, we need to either debunk the critique with evidence or adapt our approach to it. Agile's resilience lies in its ability to evolve based on valid feedback. That resilience is incompatible with flat-out rejecting uncomfortable ideas.

Lab conditions

Imagine Agile as a laboratory: a space where hypotheses are tested, results analyzed, and theories refined. Just as scientists revise their models based on feedback and empirical evidence, so must Agile practitioners. A laboratory mindset encourages us to embrace critique, learn from failures, and iterate toward excellence.

No Sacred Cows

As long as we stick to our assumptions irrespective of the evidence, we will struggle to produce the best possible outcomes.

Letting go of ideas

Metaphorically speaking, Agilists need to wield a cleaver to butcher sacred cows — those unquestioned beliefs or practices that stand between us and excellence. This helps us evolve, learn from mistakes, and refine our approaches.

Being flexible

Agility relies on flexibility, adaptability, and learning. Rigidity stifles growth. By remaining open to change and questioning established norms, we create an environment where innovation thrives.

Scrutiny and Validity

Ideas need to be subject to constant scrutiny. Rigorous examination sharpens our understanding and ensures that only the most reliable concepts are propagated. Emotional reactions or thought-stopping hinder progress.

The Crucible of Scrutiny

Agile thrives when subjected to rigorous scrutiny. Just as metals are refined in the crucible, Agile ideas are honed through examination. When we question assumptions, we refine our understanding and discard what doesn't hold up. Scrutiny isn't a threat; it's a catalyst for growth.

Emotional Resilience

Emotions have their place, but they're often bad advisors when dealing with criticism. It's natural to respond to critique with an emotional reaction that clouds our judgment. Reason, logic and evidence are much more reliable guides when emotions flare.

Constructive Criticism

"Agile" benefits from constructive criticism. Rather than approaching dissent with negativity, we can understand it as a means to foster growth, refine our practices, and elevate our performance.

Avoid Buzzword Bingo

"Agile" arguments often deteriorate into buzzword bingo — where catchphrases replace substance. Avoid jargon and focus on substance: Show me the "better way" with clarity, backed by evidence. Buzzwords won't impress anyone looking for serious answers.

Be Positive

Criticism is not an attack that needs to be defended against. Instead of shutting down the question, let's open up a conversation. By challenging assumptions, we contribute to growth. But likewise: Simply lashing out at something stifles progress. This won't lead to improvement.

Be Reasonable

Ad hominem attacks and gaslighting have no place in a collaborative environment. Instead, let's engage in thoughtful reasoning. When you disagree, present your case logically. "Agile" thrives on respect for differing viewpoints, perceiving uncomfortable questions as invitations to learn.

Summary

Agile isn't a fixed monument; it's a dynamic garden. Water it with constructive criticism, prune away dead branches, and watch it flourish. Let's cultivate growth, learning, and respectful discourse.

Thursday, December 14, 2023

10 Signs You're Facing an Agile Fanatic

As the popularity of "Agile" rose in recent years, so did fanaticism: a rigid adherence to a fixed set of ideas that doesn't embrace the spirit behind them. In this post, we'll delve into ten signs that may indicate a person's journey into Agile has taken a detour into fanaticism. From dogmatism to denialism, we'll unravel the subtle but impactful shifts that can hinder the true essence of Agile and its intended goal: delivering value to customers.
So here we go -
10 Signs of Agile Fanaticism

Denialism

Denying the existence of bad Agile practices "because I've never seen that" hints at a lack of awareness and narrow-mindedness.

See also: "Argument from Ignorance."

Tunnel Vision

Rejecting ideas from anyone who isn't a recognized Agile Thought Leader indicates a narrow perspective and a filter that severely restricts opportunities to grow.

See also: "Ad Hominem Fallacy."

Guru Idolatry

A line of reasoning that depends on quotes from Agile gurus, using the name of the gurus as the primary evidence, may not have a point to begin with.

See also: "Argument from Authority."

Manifesto Memorization

Reciting the Agile Manifesto from memory without flexibility in its application reveals an unhelpful focus on form over function.

See also: "Formal Fallacy."

Dogma Defense

When the first reflex is to reject ideas conflicting with "Agile," without entertaining their possible validity, opportunities for improvement are lost.

See also: "Personal Incredulity."

Framework Fundamentalism

Insisting that a "proper" use of an Agile framework is necessary shows a misunderstanding of adaptivity, a core Agile concept.

See also: "False Dilemma."

"Not Real Agile"

Labeling deviations from a personal interpretation as not "real" Agile betrays unconstructive dogmatism that doesn't encourage finding better ways of working.

See also: "No true Scotsman."

Purity Over Delivery

Prioritizing the "correct" application of "Agile" practices and methods above the delivery of valuable products to customers necessitates reevaluating one's focus.

See also: "Slippery Slope."

Blaming

Immediately responding to challenges or failure by blaming others for incorrectly following Agile practices destroys trust and the opportunity to address the deeper issues.

See also: "Attribution Error."

Elitist Tribalism

Those who separate the world into the categories "Agilists," "Those who can be converted," and "The irredeemable rest," not treating the latter as equals - are dangerous company.

See also: "Othering."

Conclusion

Always remember that Agile is ultimately about succeeding and helping others succeed. Flexibility, collaboration, and continuous improvement lay the foundation for this success. Embrace the principles, resist the notion of dogmatically following ideas or practices. This paves the way for a more resilient, innovative, and ultimately successful Agile journey.

If you spot signs of fanaticism in yourself, reconsider your position. If you spot some signs in those around you - address them. And if someone gets a full Bingo on the list above - do yourself a favor and steer clear.

Saturday, August 5, 2023

10 signs that your Transformation has failed before it started

"Agile transformation" is a popular buzzword these days, and the promises of improved efficiency, better collaboration, and increased customer satisfaction are too hard for any enterprise to ignore. However, the transformation journey is not without its pitfalls. Let's take a tongue-in-cheek snipe at some of the common causes of transformation failure.
Are you walking off a cliff?

You Know That Your Agile Transformation Has Failed Before It Started, If you...

Brought in consultants to prescribe the details of what everyone must do, when and how.

An Agile transformation doesn't come with a one-size-fits-all approach. When consultants define roles and processes without considering the unique challenges and context, we'll get a "square peg, round hole" solution. Successful transformations rely on collaboratively progressing on the Agile journey, letting teams experiment and adapt based on their own understanding and experience with a continuous interplay of opportunity, ideas, execution and feedback.

Spend more time documenting the Future Mode than experimenting or talking to people.

Agile transformation is about establishing a habit of growth and learning based on iteration and continuous improvement. An overreliance on assumption-driven documentation without enough actual interactions and experiments achieves the opposite.

Already know the perfect solution, before having made a single change.

Agility is only required because we have to deal with uncertainty. An agile approach needs to acknowledge that perfect solutions rarely exist. Assuming that a "perfect" solution can be found without experimentation, learning or adaptivity will lead to missed opportunities for improvement and won't make the future organizational system any more flexible.

Can show the future on a slide deck, but not in a team.

Agile transformation is built on "individuals and interactions," not on a top-down declaration by some smart folks who know it all. A vision that exists only on a slide deck without any backing of teams who can tell "war stories from the trenches" doesn't instill much trust.

Have defined the correct process that everyone just needs to follow.

Rigidly following predefined processes is what got us into the mess that agility tries to address by fostering adaptability and flexibility. Imposing a "correct" process without degrees of freedom undermines autonomy and the opportunity to take advantage of domain specific benefits, leading to decreased motivation and ultimately, failure to realize any significant improvement potential.

Declare a mandatory universal "Agile Standard" for all teams.

Each team and organization has its own unique challenges, needs and potential. A one-size-fits-all Agile standard that disregards context stops teams from effectively practicing Continuous Improvement. Successful agile organizations treat the diversity of teams as an advantage.

Consider teams deciding their own ways of working to be a problem.

Empowering teams to self-organize and make decisions that impact their work is the means by which organizations reduce risk of failure and coordination overhead. Treating team autonomy as a liability annuls this advantage.

Apply so much rigor that Team Retrospectives don't let people change, experiment, or learn to do it better.

If the rigor and formality tells team members that their ideas aren't welcome, they'll quickly stop highlighting opportunities for improvement. When teams can't figure out how to improve in their context, "Agile" will merely become a new status quo without any sustainable benefits.

Believe that "people are doing it wrong," without giving them any leeway to do it better.

Agile transformations often involve a shift in thinking and culture, not just the mechanics of Agile practices. Blaming individuals without understanding the systemic barriers causes demotivation and resistance. A successful transformation acknowledges that there is no single "one right" approach, and focuses on enabling teams to find what works best - for them.

Your Coaches can recite the doctrine by heart, but don't understand the psychology of change.

Coaches play a crucial role in guiding teams on their transformation journey. Reciting Agile frameworks, values or principles without understanding the human aspect of change and the psychology of team dynamics alienates people and deprives them of the meaningful support and guidance they require. Successful coaches empathize with teams, create a safe space for learning, and tailor their approach to the needs of the individuals and teams they work with.

Closing remarks

Although this list might be slightly humorous, it highlights some serious pitfalls that can seriously derail transformations. Understanding these reasons for failure will help you become more successful on your Agile Journey. Being agile, fostering collaboration, and living agile values and principles is as essential for the teams doing the work as it is for the change towards Agility itself.

Sunday, January 29, 2023

Don't be Joe!

Working with Joe is a pain. If it were up to me, I wouldn't hire Joe. And I would advise every employer to also not hire Joe. Not because Joe would be a bad person. But Joe does and says bad stuff. Stuff that's bad for Joe's team. For his company. For the credibility of Scrum, even for "Agile" as a whole. And for Joe. Joe has good intentions, and "the road to hell is paved with good intentions." So - don't be like Joe.
Joe isn't a real person. Joe isn't a single person. Joe is what Joe does.
Joe is a collection of antipatterns that I quite often observe with inexperienced "Agile Coaches" or newly minted "Scrum Masters." We're all learning, and you might even have spotted Joe in what I did. I'm no better - just a little wiser. We need to have empathy with Joe, and Joe needs to reflect on their actions, as well as the consequences of their actions.

Maybe you got pointed to this page with a "don't be Joe" comment? Don't consider it an attack. Consider it a learning opportunity. We're all Joe on occasion. What matters is what we do about it. The less Joe we are, the more successful we will be.

Here comes Joe

I believe that Joe wants to do a good job. Unfortunately, Joe doesn't know what "doing a good job" in coaching means. And that spells trouble. 

Joe is certified

In their very first Sprint Planning session, Joe's team asked why they had to do Planning Poker. Joe answered, "I am a CERTIFIED Scrum Master, and that's the way you have to do it in Scrum." The team couldn't trust their ears: Joe's CSM certificate made him an authority on Scrum? They shook their heads, but obliged. The next morning, when Joe entered the office, he was shocked.

The entire back wall of the office was plastered with certificates. Joe thus learned that two developers held a CSP, two had obtained PSM-II, and everyone had gone through Scrum training years before Joe. Above that list of certificates, the team printed a big banner, "We know what we're doing, and if anyone has doubts - we have the certificates to prove it."

Joe was humiliated and could never establish any form of credibility with his team. Within just a few months, Joe left, and the team clearly told management that they would never accept a Scrum Master who borrows authority from a silly certificate.

What should Joe have done?

Scrum training is really just something that gets us started on our learning journey. Joe should respect that others around him have had their own learning journeys, and may well be much further along than he is.

Joe should be humble in order to build trust: "I am new here. I don't know who you are, how you work - or why you work this way. I would like to learn. How are you currently planning? How does that work for you?" These are simple, yet extremely effective ways to figure out where real problems are. We don't need to make changes "because Scrum says so." We'd like people to be successful, while reducing unnecessary pain.

If the team doesn't know about Planning Poker, and they have one of the problems it can address, Joe might start the conversation with, "I learned a technique in my training. Would you want to give it a try?"

And most of all - Scrum doesn't even mention Planning Poker. It's an optional, supportive practice. Joe should learn what makes or breaks Scrum before making bold claims.

Joe focuses

During a Retrospective, Joe went to great lengths to explain to the team that developers need to be "T-Shaped," or even better, "M-Shaped": they should have a broader horizon, not only doing software development, but also learning how their business works, how to test, how to engineer requirements - everyone should be able to do everything.

One developer asked what Joe understood about software development. He answered, "I am a Scrum Master. I don't need to understand. My focus is Scrum."

Another one inquired what Joe understood about the product. Again, "My focus is Scrum."

How about the company culture? Yet again, "My focus is Scrum."

Developers shook their heads, then proceeded, "Isn't that hypocritical? You expect us to understand things you yourself don't understand, and you allegedly have a reason, but not us? How come?"

Joe reasoned how the Scrum Master's role is ensuring Scrum is done properly, and how challenging that is. The team would have none of it, asking "Do you really believe that your stupid little 17-page guide is more complex than all of Software Development, our product, and the entire company around us?" - Joe countered, "The Scrum Master is a very special role, and if I got into those domains, I couldn't fulfill my own role properly."

What should Joe have done?

The true value of a Scrum Master is their ability to lead meaningful change. The question, "but what is meaningful?" - requires the Scrum Master to know enough about the context to ask the questions that need to be asked. At a minimum, Joe should be curious about what the team is currently doing before proposing any changes.

While Joe doesn't need to be a senior developer, Joe should show some openness and try to learn just enough about development to determine whether any of his suggestions even make sense to someone who does. It's all right if Joe doesn't understand anything about development: sitting down with a developer and having them explain how they work is a fast and easy way to figure out what the team is currently doing. Quite often, when people explain things, they already realize that something "we've always done" just doesn't make sense.

Joe also won't need to understand the details of every single one of the product's features. Without ever having managed a product before, however, Joe's advice to the Product Owner will be of limited value. Joe could start their own pet project - for example, running a Kickstarter campaign. The learnings will be invaluable for Joe.

Joe also should do a reality check: Understanding the organizational context is Joe's job. Indeed, that's the most important thing Joe has to do in order to help the team become effective.

Finally, Joe should invite rather than impose. Instead of telling his team to change their ways of working, Joe could create a link between a pain that people have, and how a specific change could make their life better.

Joe highlights complexity

Joe has learned from coaching that "in complex, adaptive systems, you can't know the outcome until after you've tried." Joe uses this line to discourage any form of plan-driven approach, as well as any prediction of outcomes.

When management asked for a delivery forecast, Joe suggested - "we don't know, it's done when it's done." When developers discussed an implementation, Joe suggested - "Why don't you just get started? You'll know later if it worked." Joe's team was unaware of good engineering practices. Joe took it easy - "In the Complex Domain, we don't even know until afterwards whether we needed them."

Eventually, Joe's team got into a real pickle: the low-quality product irritated stakeholders. Joe argued, "That's uncertainty. We use Scrum because of uncertainty."

Joe never mentioned that neither complexity nor uncertainty are binary: Things can be more or less complex, and more or less certain. Joe pushed his team into Chaos by removing whatever certainty they had, and making things complex that could have been simple.

What should Joe have done?

Joe doesn't need to highlight the existence of complexity - success isn't knowing about complexity, it's simplifying until things become manageable.

"Complex" isn't an absolute. Planning, forecasting and good engineering practice could all help to make things simpler. It's just that we shouldn't bet on them.

When Joe's team has to make a forecast, Joe needs to learn what the forecast will be used for, and what the simplest way of providing the necessary information is. Clarifying margins of error and the tradeoff between accuracy and value reduces the complexity of forecasting.
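Margins of error can be made concrete. As an illustration only, here is a minimal Monte Carlo sketch (in Python, with made-up throughput numbers; the function name and parameters are my own) of how a team could turn historical sprint throughput into a ranged forecast instead of a single date:

```python
import random

def forecast_completion(throughput_samples, backlog_size, runs=10000, seed=42):
    """Monte Carlo forecast: simulate how many sprints it takes to burn
    down `backlog_size` items, drawing each sprint's throughput from
    historical samples. Samples must be positive, or a run never ends."""
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        remaining, sprints = backlog_size, 0
        while remaining > 0:
            remaining -= rng.choice(throughput_samples)
            sprints += 1
        results.append(sprints)
    results.sort()
    # Report percentiles instead of a single date: the margin of error
    # is part of the answer, not something to hide.
    return {p: results[int(runs * p / 100)] for p in (50, 85, 95)}

# Hypothetical history: the team finished 3-7 items in recent sprints.
forecast = forecast_completion([4, 6, 5, 7, 3], backlog_size=60)
```

Reporting the 50th, 85th and 95th percentiles makes the accuracy-versus-value tradeoff explicit: stakeholders can pick the level of certainty they actually need for their decision.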

When Joe's team wants to discuss implementation, Joe could probe where the team is uncertain, and what the consequences of a wrong choice are. Planning ahead just enough to avoid poor choices that could be hard to reverse is smart. If Joe was smart, he would even encourage the team to ask, "What will be the long-term consequence of this choice?" - and coming up with possible alternatives before implementing anything. That would reduce the impact of technical debt, which in and of itself causes complexity in other areas.

Equipping people to do the work they need to do can remove a whole layer of complexity in the "How."

Joe shows flexibility

Whenever someone addressed a question to Joe, he displayed his zen-like wisdom by always answering "it depends."

Joe's team eventually got fed up with this answer, because - of course: it depends. They knew that already. But: on what? Joe was missing that part.

During a Sprint Review, a senior developer proudly announced that they had fully automated Joe - the team presented a website with a free form field, and when you hit Enter - a text would appear: "It depends." Joe's manager rolled her eyes - it became Joe's last working day: his default answer wasn't earning him any respect - neither within, nor outside the team.

What should Joe have done?

Yes, it's true that "it depends." The question is: on what does it depend? What are the important factors that we need to focus on? Where are the forks in the road? What makes A preferable to B - and why?

The answer, "it depends" avoids commitment to the inquiry. A conditional answer is only valuable when it reveals viable alternatives or relevant risks.

If Joe doesn't know an answer, the best option might be - "I don't know - let's find out together?"

If Joe knows only one answer, he might state, "The only thing I'm aware of is this. We could research alternatives."

If Joe has limited experience, he might answer something like, "I have experience with this. I know there are alternatives, but I have no experience with them."

Finally, a coach doesn't get paid to be a dictionary. Stating, "I don't know" is not a character flaw. Much rather, it would be a character flaw of Joe to make it look like he has an answer, when really - he doesn't.

Joe keeps the rules

Joe took very seriously his responsibility to make sure that his team had a proper Scrum environment where the rules were followed.

Joe's Scrum Team had visitors from other teams during their Daily. When one developer mentioned an issue with a component being worked on by another team, one of the visitors couldn't stay silent any longer and stepped forward. Immediately, Joe stepped in: "The Daily is only for the team. Nobody except for them speaks." The silenced visitor raised his hand and could barely open his mouth before Joe escorted him out of the room: "If you can't follow the rules, you're not welcome."

It later turned out that Joe's team was building incompatible features and their sprint's work could not be integrated without major rework: Joe had successfully stopped the only person who could have helped from speaking up.

What should Joe have done?

Communicating the right things right at the right time is extremely difficult - even if we dedicate our entire life to this topic, we still get it wrong more often than not. The journey starts with a realization that communication tends to be pretty messy almost everywhere.

As a Scrum Master, Joe has to work on helping people communicate more effectively. When they don't know how to do this, Joe has to find out. Blocking necessary communication is never the right solution - someone will always end up with bruises.

Joe's first responsibility isn't simply to make sure that the rules are followed: Where team members lack essential information, or they aren't providing necessary information to others, that's a high-priority impediment that dramatically reduces their effectiveness - and Joe is accountable for the team's effectiveness.

Joe should try to discover where, how and why information flow between the team and outside actors is broken, and work on fixing that. Scrum has very little to say about information flow across team boundaries. When people outside the team are part of the same system as Joe's team, Joe has to find ways to help everyone communicate more effectively, not just the team amongst themselves.

If in the short term, this means breaking the rules of Scrum, Joe should accept this breach, point it out, and ask, "How should we deal with this next time?" Otherwise, Joe risks being called a Scrum Fanatic, who insists on rigorously following the rules, "without reason and against all reason."

Joe protects the team

During a Sprint Review, one of the stakeholders mentions that they're not happy with the outcomes of the Sprint. Immediately, Joe intervenes to point out that the team acted based on available information, and "if you don't give us the correct information - you need to work on that."

On another day, there was a full server outage. It turned out that someone entered data that corrupted the database. Again, Joe was ready: "You have to train users properly. It will take us days to clean this up. You can use that time to train your users properly, so this doesn't repeat."

Yet another day, finance needed some figures to calculate investments. Joe shot down the request, "If we'd help you with that, we're not going to meet the deadline for the trade fair."

Joe was slippery like an eel. Whatever problem stakeholders faced, Joe always found a way to flip it around and make sure that the problem landed somewhere else. This indeed protected his team both from overload and blame. One day, though, a sales executive mentioned to a board member, "They're not helping at all. Each time I meet them, I wish I hadn't wasted my time." - that was also the team's last Sprint.

What should Joe have done?

"Protecting the team" is a responsibility that needs to be considered in context.

Sheltering the team is a short-term mechanism to stabilize the environment. By pushing the problem elsewhere without defusing the conflict, Joe sets the stage for drama that will make things worse for the team in the long term.

Joe needs to show courage by taking ownership of the conflict, and guide parties to resolve it.

Likewise, Joe needs to create an environment where everyone from his team feels confident to courageously say, "Yes, we didn't think of this. Let's work on this and find a solution that works for everyone."

Summary

Did you realize how I managed to sneak the five Scrum Values - Courage, Commitment, Focus, Openness and Respect - as well as the foundation of trust - into the article? Joe becomes a beacon of all that Scrum is, not so much by what he knows - but by being. Every day, Joe can take a few minutes off to reflect in silence, "Which of the things I did today reflect the Scrum Values, and which of my actions don't? What could I do tomorrow, so that I am a living example of what the Scrum Values mean?" Joe could also invite his team, his peers and his management to give him feedback on how they see the Scrum Values in what he does.

We make mistakes. We have setbacks. We act based on what we know today, and tomorrow we may cringe at what seemed like a good idea today. We progress not by trying to be perfect, but by dedicating ourselves to learning. If Joe keeps this in mind, then one day - Joe will be a Master of Scrum.

Meditate on Scrum

Thursday, January 26, 2023

Is "Agile" just smoke and mirrors?

The "Standish Group Chaos Report" is often quoted as the reason why companies should undergo an "Agile Transformation" and adopt Agile Ways of Working. It supposedly proves that "Agile" is essential to survival. But when you look at the data, you may get some doubts ...
Before we dig in, I must add a huge disclaimer to this article:
DISCLAIMER

It's incredibly hard to find accurate information about the Chaos Report on the Internet. For example, InfoQ quotes a 29% success rate for 2011, Wikipedia quotes 34%, and a paper I found directly at the Standish Group quotes 39%. A YouTube video puts the success rate for that period at 32%. Yet another source writes that "33% of projects are successful, but only 21% deliver a benefit."

Since I didn't want to spend a couple thousand bucks on original reports, I went with "most likely accurate source" in collecting data. If anyone has the reports, I'd be happy to correct my data where it is wrong.


That said, here goes the image based on the data I found:
Is "Agile" really the cause of success?

What does the data really say?

That it's not nearly as clear-cut as we might want it to be. It doesn't send an irrefutable message that "Agile changed the world; software development is now much more likely to succeed."

There's no proof for anything - only an absence thereof. We have levels of variation that indicate we still have too few data points to make any statement with certainty. The only statements we can make with certainty:

  • The data does not prove that "Agile" made the difference.
  • It also does not prove that potential improvements could be attributed to a framework like Scrum or SAFe.
  • It also does not prove that "Agile" benefits are sustainable.
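To make the "too few data points" argument concrete: how far apart could two reported success rates be before the difference means anything? A back-of-the-envelope check, using a standard normal-approximation confidence interval (the sample size of 100 projects here is purely hypothetical, chosen for illustration):

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Approximate 95% confidence interval for a success rate p_hat
    measured on a sample of n projects (normal approximation)."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - margin, p_hat + margin)

# With a hypothetical sample of 100 projects, a reported 32% success
# rate is statistically compatible with anything from roughly 23% to
# 41%: a band wide enough to cover most of the conflicting figures
# quoted in the disclaimer above.
low, high = proportion_ci(0.32, 100)
```

Larger samples shrink the band; if the underlying samples number in the thousands, the conflicting published figures would point to differences in methodology and definitions rather than sampling noise - which is exactly why the raw numbers prove so little either way.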

Factors commonly ignored

Looking back into the beginnings of the Standish Group Chaos Report - that was the 90's: Things other than Agile have changed as well. Here are just some of the major changes that happened since then:

Software was still "new."

Many people never operated with computers back then. They didn't know how to use them, much less how to formulate their needs in a way that made sense in the digital age. Nowadays, everyone knows what a computer is.

The Internet.

I can't speak for everyone, but in the 90's, I didn't have Internet access. My only reference was printed books, usually published years earlier. There was a massive lag between encountering a problem and finding an information source that could help solve it. Even as late as 2008, I was still developing for clients that restricted or disallowed using the Internet at work.

IT advanced

Back in the 90's, it was just much harder to process large volumes of data reliably. We struggled with challenges modern developers are hardly aware of - genuine nightmare material, such as a CPU mis-executing a correctly compiled piece of code. Many sources of inexplicable failure have since been eliminated.

IT infrastructure advanced

Some of my early projects were extremely challenging because "production-like environments" were so expensive that it was impossible to afford one for each project member. Indeed, there was often only a single test environment for everyone - and even that cost millions. Developers often didn't know what their code would do until it was live, simply because of environment constraints.

Transaction costs plummeted

Back in the 90's, we were often shipping literal CDs at project end - there was a final "golden build." Especially for consumer software, that "golden build" could account for over 95% of the project's cost. I'm sure Atari's E.T. could have had a very different outcome if they could've just shipped a few post-release updates at near-zero cost.

IT Project Management advanced

IT Project Managers learned how to use these advances to become more successful, and Project Management as a discipline adopted new and better ways of making projects succeed.

What does all of that mean, then?

Restating the obvious: there were many factors at play, each of them certainly contributing to the boost in success rates from about 15% in 1994 to the 30% we see today. But no single factor can be isolated to say, "This factor brought us to 30%, and if we just do more of it, it'll bring us to 50% or higher!" If anything, the data shows that no single factor has been identified with the potential to boost success rates beyond what we already had in the early days of the Agile Manifesto.

"Agile" most certainly hasn't proven to be the Silver Bullet that will singlehandedly fix the industry.

Closing remarks

We don't have indisputable evidence based on publicly available, reliable sources to argue for or against Agile based on the Standish Group's research. In statistical lingo, "we can't reject the null hypothesis": there's not enough evidence to support any strong claim, for or against.

There's a data problem that amazed me: I would have expected accurate figures from the Standish Group's research to be widespread. They're not. It's a rabbit hole, which is why I need that huge red disclaimer. I don't even know who's telling the truth and who's misunderstanding the presented information (that could include me - mind you!). Who's stating facts, who's simply pulling numbers out of thin air, and who's deliberately lying to drive an agenda?

Maybe if I had all of the data, indisputably, straight from the source, the picture would change?

But for now, I can't scientifically state that the Standish Group has provided irrefutable evidence that Agile made a difference.

So I'd say we need to drop the Standish Group's Chaos Report from the list of "Reasons for Agile." There may be others, but this one doesn't make the cut.

Saturday, January 21, 2023

The Blurb transformation

Recently, there's been a lot of buzz about Blurb being the future of work. According to Blurb thought leaders, adopting Blurb makes companies more successful: they operate faster, better and cheaper.


Company X wants to give it a try and starts a Blurb transformation. During initial training, Blurb practitioners explain that all the current problems exist only because of the current management paradigm - and how much better everything would be if it were all moved to Blurb.

Management steps back, lets the Blurb practitioners work, and observes.

Blurb practitioners begin to do exactly what management did, albeit in a very inefficient, ineffective and infantile manner. To management, it appears that Blurb practitioners understand neither what a manager does, why they do it, nor how to do it properly. But - benefit of the doubt: maybe this Blurb thing really is the future, and managers just need to wait and see how it turns out?

At some point, management begins to question the advantage of doing Blurb - couldn't the actual outcomes be achieved much more easily and quickly without it? Blurb practitioners say that management doesn't understand Blurb because they're stuck in a non-Blurb mindset: you can't be Blurb without Blurb.

Eventually, management calls and asks what benefits Blurb has actually brought. Blurb practitioners explain that they have made a lot of progress doing Blurb and helping others do it. They emphasize that in order to get the benefits, you must be Blurb, not merely do Blurb - and that the key benefit of Blurb is that you become Blurb.

Management begins to get dizzy. They decide to cut funding for Blurb.


If you were a manager of Company X - what would you have done?

Wednesday, November 18, 2020

16 misconceptions about Waterfall

Ok, Agilists. It's almost 2021, and people are still using Waterfall in corporate environments. With this article, I would like to dismantle the strawman "Waterfall" that's always proclaimed the archenemy of all that is good, and encourage you to think about how exactly your suggested "Agile" would do better than the examples I have taken from real-world, professional Waterfall projects.

Here are some things that many agilists may have never experienced in Waterfall projects. I did.


What you think Waterfall is, but isn't

There are numerous standard claims about what's wrong with Waterfall, which I would generously call "statements made from ignorance," although there could be more nefarious reasons why people make them. Point is: many of the common claims are not generally true.


Big Bang vs. Incremental

Waterfall doesn't mean that there will be nothing to show until the project's determined end date. When I mentioned that I had worked on a 5-year Waterfall project, people from the Agile community called that insane. It's not: we had a release every 3 months, which means the project had a total of 20(!) increments, each with its own scope and objectives. Yes - Waterfall can be used to build products incrementally! In corporations, that's actually normal.


Upfront Design vs. Iterative Design

With each delivery, project managers, analysts and business people sit together and discuss the roadmap: which requirements to add or remove, and which priorities to shift. I have once worked in a product that was created in pure Waterfall for almost 20 years, and nobody could have anticipated the use cases delivered in 2010 when the product's first version hit the market back in 1992. Even Waterfall projects can iterate. Especially for enterprise systems.


Death March vs. Adaptivity

If you think someone sits in a closet and produces the Master Plan, which must then be slavishly adhered to by the delivery teams, you're not thinking of a properly managed Waterfall project. Yes, of course there is a general plan, but a Waterfall plan gets adapted on the fly as new information arises. Timelines, staffing, scope, requirements, objectives - all are subject to change, potentially even on a weekly basis if your project manager is worth their salt.


Fixed Scope vs. Backlog

If you've ever done Project Management, you know pretty well that scope is very malleable in a project. When an organization determines that meeting a fixed timeline is paramount, fixed-time Waterfall projects can manage scope much like Sprints do. And while you do get problems if you don't manage the Critical Path properly, that's not a Waterfall problem - it's carelessness.


Fixed Time vs. Quality

Probably one of the main complaints about Waterfall is that a team delivering on a fixed schedule will push garbage downstream to meet the timeline. Again, that's not a Waterfall issue - it's a "fixed time" issue. If you flex the time, and fix the work package, there's nothing inherent to Waterfall that implies a willful sacrifice of quality.

(And, as a witty side note - if you believe that fixed time is the root cause for low quality: how exactly would Scrum's Sprint timebox solve that problem?)


Assumptions vs. Feedback Learning

Complex systems serving a multitude of stakeholders are incredibly hard to optimize, especially when those stakeholders have conflicting interests. The complexity in Waterfall requirements analysis lies less in getting a requirement right than in identifying and resolving conflicting or wrong demands. The time spent upfront to clarify the non-developmental interferences pays off in "doing the right thing." Good analysts won't make wild assumptions about things that could potentially happen years down the line. And when a release is launched, good Waterfall projects use real user feedback to validate and update the current assumptions.


Handovers vs. Collaboration

Yes, there's something like stage-gates in most Waterfall projects. I myself helped Waterfall organizations implement Quality Gates long before Scrum was a thing - but gates are not inherent to Waterfall; otherwise organizations wouldn't have needed help introducing them in the early 2000's. Also: don't misunderstand gates. They don't mean that an Unknown Stranger hands you a Work Package which you hand over to another Unknown Stranger at the next Gate. What typically happens: as soon as analysts have a workable design document, they share it with developers and testers, who take a look, make comments and then meet to discuss intent and changes. Good Waterfall organizations have collaboration between the different specialists whenever they need it.


Documentation vs. Value Creation

A huge misconception is that "Waterfall relies on heavy documentation" - it doesn't; that depends on how you operate. Heavy documents are oftentimes the result of misfired governance rather than of the Waterfall approach itself. It's entirely feasible to run Waterfall with lightweight documentation that clarifies purpose and intent rather than implementation details, if that's what your organization is comfortable with. Problems start when development is done by people who are separated from those who use, need, specify or test the product - especially when money and reputation are at stake.


Process vs. Relationships

As organizations grow large, you may no longer have access to the right people, so you rely on proxies playing a kind of Telephone Game. This has nothing to do with Waterfall. A good Waterfall Business Analyst will always try to reach out to actual users - preferably power users who really know what's going on - and build personal relationships. As mutual understanding grows, process and formality become less and less important, both towards requesters and within the development organization - even in a Waterfall environment.


Resource Efficiency vs. Stable Teams

There's a wild claim that Waterfall doesn't operate with stable teams. Yet many Waterfall organizations have teams that remain stable for years, in some cases even decades. Some of the better ones will even "bring work to the team" rather than assigning work to individuals or re-allocating people when something else is urgent. The "resource efficiency mindset" is a separate issue, unrelated to Waterfall.


Big Batch vs. Flow

Kanban and Waterfall can coexist quite well. Indeed, long before I first heard of Scrum, I used Kanban in a Waterfall setting where requirements flowed through three specialist functions, with an average cycle time of less than one week from demand intake to delivery. Waterfall with small batches is possible, and can perform exceptionally well.


Top-Down vs. Self-Organized

I've worked with corporations and medium-sized companies using Waterfall, and have met a lot of Project Managers and Team Leads who worked very much like a Product Owner: taking a request, discussing it with the team, letting the team figure out what to do, how, and when - and only then feeding the outcome of that discussion back into the Project Plan. Waterfall can have properly self-organized teams.


Push vs. Pull

Whereas in theory Waterfall is a pure "push"-based process, the reality in the field is different. With a decent Waterfall team lead, it basically goes like this: we see what work is coming in, we take what we can, and we escalate the rest as "not realistic in time" to get it (de-)prioritized or the timeline adjusted. De facto, many Waterfall teams work pull-based.


Overburden vs. Sustainable Pace

Yes, we've had busy weekends and all-nighters in Waterfall, but they were never a surprise - we could anticipate them weeks in advance, and a relaxation phase always followed. Many people working in a well-run, long-term Waterfall project call the approach quite sustainable. They feel significantly more comfortable than they would under the pressure to produce measurable outcomes on a fortnightly basis! Well-managed Waterfall is significantly more sustainable for a developer than ill-managed Scrum, so: caveat emptor!


Resources vs. Respect

Treating developers as interchangeable and disposable "resources" is an endemic disease in many large organisations, but it has nothing to do with Waterfall. It's a management mindset, very often combined with the cost accounting paradigm. The "human workplace" doesn't sit well with such a mindset - and still, the more humane Waterfall organizations treat people as people. It entirely depends on leadership.


Last Minute Boom vs. Transparency

Imagine, for a second, doing proper Behaviour Driven Development and Test Driven Development in a Waterfall setting. I did this in one major program, delivering Working Software that would have been ready for deployment every single week. If you do this, and properly respond to feedback, Waterfall doesn't need to produce any nasty surprises. The Last Minute Boom happens when your development methodology is inappropriate and your work packages are too big - not because of Waterfall.


All said - what then is, "Waterfall?"

"Waterfall" is nothing more and nothing less than an organized, sequential product development workflow where each activity depends on the output of the previous activity.

There are really good uses for Waterfall development, and cases where it succeeds brilliantly. It's incorrect to paint a black-and-white picture where "Waterfall is bad and Agile is good" - especially not when equating "Agile" with a certain framework.

Proper Waterfall

A proper Waterfall would operate under the following conditions:
  1. A clear, compelling and relateable purpose.
  2. A human workplace.
  3. A united team of teams.
  4. People who know the ropes.
  5. A "facts are friendly" attitude.
  6. Focus on Outcomes.
  7. Continuous learning and adaptation.
  8. Reasonable boundaries for work packages.
  9. Managing the system instead of the people.

All these given, a Waterfall project can have a pretty decent chance to generate useful, valuable results.

And when all the above points are given, I would like to see how and why your particular flavor of "Agile" would do better.


My claim


I challenge you to disprove my claim: "Fixing the deeper mindset and organizational issues while keeping the Waterfall is significantly more likely to yield a positive outcome than adopting an Agile Framework which inherits the underlying issues."





Tuesday, October 1, 2019

Psychometry: Science, pseudoscience and make-believe

Let's take a quick glance at psychometry. Personality tests abound, and they've even invaded organizations' HR departments as a means of determining who "fits" and who doesn't. This, I claim, is something we shouldn't use in agile organizations - as these models are dangerous.
tl;dr:
Be careful what you get yourself into with psychometry. Chances are you're falling for something that could cause a lot of damage. Educate yourself before getting started!
Appealing, yet scientifically dangerous: The "Four Color Personality Types"

A brief history of Psychometry

I will take a look at the models that survived history and are still in use today.

MBTI

Starting around 1917, Katharine Cook Briggs developed the typology that she and her daughter Isabel Briggs Myers later published as the "MBTI", the Myers-Briggs Type Indicator: four dichotomies with two possible expressions each, resulting in 16 personality types.

DISC

In 1928, William Marston was tasked by the US Military to figure out why people with the same training still behaved differently. The model identifies four key characteristics - D, I, S and C. Oddly enough, while the original model had "Dominance, Inducement, Submission and Compliance," today people can't even seem to agree on what the acronym abbreviates.
Today, we see terms like Influence, Steadfastness and Conscientiousness as alternate labels - which means that depending on which meaning you assign to a letter, your scores would have a totally different meaning!

The Big Five (OCEAN)

In 1984, psychologists took a renewed interest in psychometry, and Goldberg et al. proposed the "Big Five" factors: Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism.
OCEAN spawned a few models of its own:

Occupational Personality Questionnaire (OPQ)

Saville and Holdsworth launched this model in 1984, and it's still in use today. This model is specifically focused on selection, development, team building, succession planning and organizational change. It has seen updates and refinements since its inception.

NEO PI-R

Since 1978, Costa and McCrae have been developing the "(Revised) NEO Personality Inventory," which subclassifies each of the Big Five into six facets. One key criticism of this model is that it measures only a subset of known personality traits and doesn't account for the social desirability of traits.

HEXACO

As the Big Five caught global attention, researchers realized that different cultures pay attention to different personality aspects, and the Big Five were revisited - specifically due to feedback from Asia. Factors like Honesty-Humility ("H") and Emotionality ("E") have a much higher impact on the social perception of an individual in some cultures than in others - and therefore also on how a person sees themselves.

HEXACO led to the interesting insight that there is no universal standard of measuring personality, as the measure depends on the social environment of the measured individual.
Likewise, HEXACO studies revealed that social acceptability determined desirability of traits, and that even the formulation of questions could yield different results depending on social context.


Scientific perspective

Companies are keen to use a scientific approach to determine the "best fit" for a new team member, in order to maximize the odds of a successful placement.
As ongoing research in the field of psychometry reveals, there is no comprehensive personality model - and therefore no comprehensive personality test.
A comprehensive model would need to cover both a large spectrum of personality traits and the subject's social background.

Model Correctness

For the time being, the only factors found to be universally accepted across cultures are extraversion, agreeableness and conscientiousness. Everything else is up for debate. Seen from the other side, this means that any model lacking these three dimensions cannot be adequate.

Even the validity of the universally accepted factors is disputed. For example, Dan Pink stated that "people are ambiverts, neither overly extrovert nor introvert" - in other words, our environment and current mood determine the expression of our "Extraversion" dimension far more than our internal wiring does.

It's also unclear at this time how many factors actually exist, so every model we have focuses on a limited subset, and therefore expresses a current bias.


Valid Modeling

Scientists create, refine and discard models all the time. The goal is to have the best possible model - that is, the simplest valid statement with the highest explanatory power. The more widely accepted a model is, the more fame accrues to the first person to disprove it; that is, the bigger the crowd of scientists interested in finding flaws.

Counter-evidence

The first question when creating a model is: is our model valid? The scientific approach is to look for evidence that the model is not valid; the model is assumed valid as long as no such evidence can be produced. Note that this means neither that our model is good nor that it will remain valid when further information becomes available.

Models which have counter-evidence should not be used.

Explanatory Power

The second question to ponder is: how much does our model explain? There are two common mistakes regarding the explanatory power of a model:
The first is the category error - using the model to explain things it isn't intended to explain, such as using a model designed to explain individual behaviours to explain social interactions.
The second is using the model beyond its precision. For example, a model that already fails to address the cultural differences between Asia and Europe would be inadequate for comparing the behaviour of a person from Asia with that of a European.

Preference goes to the simplest model with the highest level of explanatory power required to address a subject.

Reliable Measurement

To be considered "reliable", a scientifically valid measurement would need to be:

  • Accurate, that is, it should generate outcomes that align with reality.
  • Repeatable, that is, a test under the same preconditions should generate the same outcome.
  • Reproducible, that is, testing the same target in different environments should generate the same outcome.

The lower any of these three attributes is, the less reliable a measurement would be. Reliability of a measurement system = Accuracy * Repeatability * Reproducibility, i.e. the predictive capability of data diminishes rapidly as these factors dwindle.
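As a rough numeric illustration of that multiplicative relationship (scoring each attribute on a 0-to-1 scale is my own convention; the article doesn't prescribe units), even individually "decent" attributes compound into a weak measurement:

```python
def reliability(accuracy: float, repeatability: float,
                reproducibility: float) -> float:
    """Overall reliability as the product of the three attributes,
    each scored on an illustrative 0-to-1 scale."""
    return accuracy * repeatability * reproducibility

# Three individually respectable 80% scores compound badly:
print(round(reliability(0.8, 0.8, 0.8), 3))  # 0.512 -- barely above a coin flip
```

This is why a test that is merely "pretty good" on each attribute can still have little predictive value overall.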

Measurement systems ("tests") with low reliability should be avoided or improved.



Pseudoscience

Models which lack supporting evidence, have already been debunked, have low explanatory power, or are based on unreliable metrics are generally considered "pseudoscience."
Statements based on such models are considered doubtful in the scientific community.

The reason why the older models - first and foremost MBTI and DISC - are considered pseudoscience despite their high (and often re-trending) popularity is that they lack explanatory power and reliable measurement.

While some models claim high repeatability, many people doubt whether personality tests are sufficiently accurate.
Some assessments even claim that "you have a family profile and a job profile," essentially surrendering reproducibility - and with it, scientific validity.

As mentioned before, even the very refined HEXACO model suffers from a lack of explanatory power, and depending on how a test is configured for a specific environment, this specific configuration might have little supporting evidence or even generate counter-evidence.

Therefore, it is debatable how useful psychometry can be for making statements about a person's workplace behaviour.




Make-Believe

The key criticism of most psychometry tests is that a personality report from these models is a kind of Barnum statement - people who read their report fall for the Forer effect: reports generated from random data may be perceived as just as accurate as reports generated by conscious choice. People look for the attributes they feel describe them "fairly well" and overlook the passages that don't fit.

Tests based on MBTI and DISC profiling suffer particularly strongly from this - either their statements are so vague that they could describe practically anybody, or people feel that whatever outcome is attributed to them is not universally applicable, or doesn't suit them at all.

The "explanation" for this vagueness tends to be that factors are fluent and exist in different levels of manifestation, which basically makes a binary classification meaningless.

The effect on people

One website claimed that "the outcome of the test can affect your life" - which is indeed true, especially when the test is used for job selection and you didn't get hired because you didn't show up as what the hiring person was looking for.

Using the models

The only point I'll grant the models is that test results can be a decent conversation starter with your team, friends or family - though I'll hold that point in abeyance, because any relevant subject matter, or even the weather, could serve just as well.


Harmful application

This is where I get into the realm of "coaching." Some coaches peddle certain models as "strongly supported by science" when they aren't - and people who lack a scientific background will use these models as if they were.

Especially "The Four Colors", which are pomoted worldwide in management seminars and which are now also finding their way (in one form or another) into Agile Coaching pave the way for dangerous dynamics.

The worst application of the model I have seen is "helper cards" used by people to categorize the other people in the room during a conversation.

Promoting ignorance

There is no simple way to classify a person's behaviour within a sociotechnical system. Every model claiming an easy answer that utterly ignores the environment is dangerous, because it focuses on the consequence while ignoring the trigger. Without educating people on the impact of environment on behaviour, psychometry becomes a distraction rather than a means of understanding!

Thinking inside the box

People are complex - very complex indeed. As a proverb from Cologne states, "Jede Jeck is anders," roughly: "Every human being is different." You just can't put people into boxes.
There's also a high probability that the behaviours you observe, or how you judge them, are tainted by your personal bias. As long as you think of people in such boxes, you're prone to miss important nuances.

Manipulation tactics

When I was taught DISC a decade ago, I learned that people with a strong "D" dimension respond positively to terms like "fast" or "effective," whereas details put them off - and similarly for the other dimensions. In effect, I learned to use the DISC model as a way to manipulate people into agreeing with me through language.
As helpful as such knowledge can be for making decisions, it can be equally deceptive - because it sets people up for manipulation and exploitation. Is this where you want to go in coaching?

Missing the Big Picture

Psychometric models focus on individuals, ignoring their role in their environment. Strangely enough, my first question in DISC training was: "There's this person who's strong in all four dimensions. What's that?" During the training, I just swallowed the answer; I didn't understand its consequences until years later: "This person is an adaptor. They display the strengths that the current situation requires."
Later, it hit me like a concrete block: People adapt to their environment. Their social role determines which strengths they will exhibit. And as their role changes, their visible profile changes as well.

As such, we can't really measure a person at all; we just get a glimpse of where that person currently stands in society. Change that role, and their psychometry changes - and the role changes as circumstances change.

You can change a person's social environment to turn an inspiring leader into a tyrant.
You can change a person's belief system to turn a braggart into a humble person.
You can affect a person's incentives and turn a couch potato into a sportsman.

How much do you then think that a few dozen questions will tell you about what a person could be?

Building the wrong team

Some organizations try to build teams with a "suitable" mix of personalities, ignoring that their psychometric data is a poor representation.
Psychometry can be flawed from three angles:
  1. The test itself wasn't an accurate representation of the person's beliefs and behaviours.
  2. The test outcomes didn't accurately describe the person's beliefs and behaviours.
  3. The test ignored the current social dynamics leading to the person's behaviours.
People's behaviours and dynamics depend on context. Hence, planning based on psychometry makes unsupported assertions about the future state of the team.

How ridiculous would it be to ensure that each team is built with one Red, two Green, two Blue and a Yellow - only later to discover that a Green adapted to that role and is otherwise Red, and that the Yellow was only Yellow back when they were hired?

Making concessions

In some cases, inappropriately profiling other people based on observations can be used to "excuse" negative behaviours and unhealthy group dynamics. For example, bullying might be considered the consequence of "expressing strong dominance," and the behaviour itself, or its systemic enablers, might continue unquestioned.
Likewise, people with "strong agreeableness" might accept immoral behaviours when they should be encouraged to take a stand and fight for change.



Summary

This article explains why many approaches to psychometry are scientifically invalid, why psychometric data should be treated with caution, and why coaches should be extremely careful when meddling with psychometry in their work.

If you use or plan on using psychometry in coaching, be careful of the problems you are inviting.

Thursday, September 19, 2019

Why the SPC will destroy SAFe

SAFe is a massively successful Enterprise Agile Framework, and regardless of what one thinks about SAFe, it's impossible to imagine today's enterprise world without it. Part of SAFe's huge impact was due to its - undoubtedly extremely smart and well-calculated - move of quickly training a large number of SPCs and letting them spread SAFe through organizations.
I believe that the SPC will ultimately become SAFe's downfall - let me explain.



Dwindling Requirements

When I became an SPC in 2016, the requirement was "five years of agile experience at various levels, in various roles, in various organizations." Although even then I saw some SPCs who didn't meet this requirement, I was excited to join a community of very seasoned agilists who were serious about moving beyond team-level agility.

Today, we see no such constraint. The main barrier seems to be who (or whose organization) is willing to pay the training fee - and growing hordes of ersatz agilists are competing for SPC roles.

While most early-adopter SPCs and SPCTs are massive bundles of competence, today's average SPC brings only a fraction of the competence of the early adopters.

Organizations hiring such SPCs, unfortunately, lack the discernment to tell a washout from true competence. This alone will ultimately undermine trust in SAFe.

Skipped examinations

The SPC exam is undoubtedly hard - much harder than, for example, the CSM exam, and still a tad harder than Scrum.org's PSM-II. Unfortunately, this doesn't help when it's already fairly common practice to get together in groups with one experienced person who helps everyone pass - even to the point where some consultants earn extra bucks by taking exams for others. As such, this form of quality assurance isn't worth anything, either.


Ersatz Consulting

The worst development I am currently witnessing are the SPC "premium agile coaches" who ask for four-digit daily rates even though they lack every single dimension required to succeed in the role. Not only do they have an extremely shallow understanding of agility - they have no experience of what makes or breaks an agile organization, and rely solely on the standard slide decks to get by.

One SPC literally told me, "It's too much work, and reinventing the wheel, to figure out what the customer needs. I just copy+paste SAFe by the book." - so much for even a basic understanding.
This level of expertise nets us ARTs with two teams, Large Solutions of 40 people and Value Streams that start with a fully budgeted project, or even a "Testing ART" - and it's no wonder this eventually blows up in people's faces.

I predict that as more organizations come into contact with such ersatz SPCs, fewer and fewer will take the gamble, until eventually the entire thing collapses. Give it a few years - if the current trend continues, it will come to pass.

Ersatz Agility

Agility relies on empiricism; a lot of it is situational awareness and context sensitivity. The idea that "SAFe by the book is industry best practice" runs contrary to the very idea of agility - and what would you expect as a result when your consultant dumps a pile of structures, patterns and practices on your organization? Agility isn't one of the things you should expect.


Ersatz Coaches

The SPC community is also facing a growing influx of people who bear the title "SAFe Coach (SPC)" yet lack any form of coaching background.
Also note that an SPC is a SAFe Program CONSULTANT, not a coach. Consultancy has its benefits - not everyone needs to be a coach! And the SPC training program doesn't even claim to make you one.

Many "SAFe Coaches" confuse "coaching" with telling other people what to do or selling them solutions. In the worst case I have seen, an SPC "Coach" complained that they lacked the mandate to impose their own (very poor) understanding of SAFe on the organization without resistance.

Such ersatz coaches are running rampant - in the words of a friend of mine, "causing a nuclear fallout in their wake" - and their numbers are rising, whereas the number of truly competent agilists working as SAFe Program Consultants isn't really increasing.

As such, the odds of SAFe-oriented coaching being worth its cost diminish rapidly.



Ersatz Trainers

SAFe allows people to get a sanctioned license to train Scrum Masters, Product Owners, teams and even managers without any relevant trainer qualification.

SAFe trainings come with plenty of learning objectives and even standardized materials. But these trainings aren't worth the time of attending if the trainer doesn't understand the learning objectives and has no personal experience to contribute.

I once got a message from a disillusioned member of a corporate LACE who complained that she had just sat through a mind-numbing training with an SPC trainer who - after explaining that SAFe's Product Management supposedly requires Big Upfront Planning - responded to the remark, "But that's Waterfall practice!" with the serious question, "What's wrong with Waterfall?", and who for the life of them couldn't figure out why Command and Control structures don't help in knowledge work.

What learnings would you expect from such a training? Significant agility probably won't be among them.
The more organizations get exposed to such training, the less likely they are to see SAFe delivering any benefits.


But wait, those are just the SPCs on the market - an ever-growing number of snake-oil sellers who are just out for a quick buck. So let's talk about the alternative.

The Inhouse-SPC

Large organizations going the SAFe route quickly shift to raising inhouse SPCs from the ranks of their own project managers and line managers. Many of them get sent to an SPC training without ever having experienced agility first-hand - and even if they have, it was only some kind of "Scream". They have been in middle-manager roles for a long time, oftentimes in the same corporation since they graduated. And now they take responsibility for helping their organization "become Agile". I will grant that they are motivated and keen to make a difference - they just lack the experience to compare situations and the depth to predict future outcomes.
Consequently, they often implement things which look like a good idea but will have adverse effects months or even years down the road: how could they know?

Training is not Practice

The inhouse SPC, raised by an inhouse "Agile Academy" under the tutelage of inhouse trainers who are themselves very limited in agile breadth and depth, may be a great cost-saving factor - but they lack the background to avoid even elementary pitfalls. Even if the organization has the foresight to combine the training with on-the-job experience, we're still talking about people who essentially "do organizational surgery after having read a book on the subject".

Agile Incest

The inhouse SPC and SPCT lead to a phenomenon I would call "Agile Incest", where people from the same company define what is "Agile" based on what the company itself is doing - the benchmark becomes current reality instead of true potential.

Proliferating Bad Practice

Combine the two items - extremely limited agile experience and no exposure to alternate viewpoints - and the inhouse SPC comes to believe that they have reached the culmination of Agile Excellence, which is in reality the Dunning-Kruger Peak of Ignorance.
They then proceed to spread their "excellent" ideas by moving on to better-paid roles in other organizations or giving talks at conferences, where the name of their organization sounds more impressive than the actual achievement.

Just to give one example of where this leads: I was interviewed by one such SPC, now "Head of Agile Practice" in a corporation, who called out my incompetence for claiming that a Product Owner actually gets to decide what gets built! They patiently took their time to explain to me in detail the mandatory half-year process across five layers of an "Agile Enterprise" in which others decide what gets built, by whom and when - before the Product Owner ever gets to see a requirement or talk to a stakeholder.



tl;dr:

A massive and growing army of SPCs with no background in agility is spreading dysfunctional practices. Organizations care more about SPC price and quantity than about results and quality. Combined, this will eventually lead to the downfall and marginalization of SAFe.

I will leave it up to the reader to determine whether the currently visible acceleration of this process is for better or for worse.

Saturday, August 31, 2019

Agile Academy - McKinsey, you get it all wrong!

In a recent whitepaper (LINK), McKinsey published an approach for "growing your own Agile Coaches", i.e. creating an "Agile Academy". While I may even agree with the general idea and some major points proposed in the paper, it fosters and relies on a few massive misunderstandings that could plunge your company into disaster.



Executive Summary

With their whitepaper, McKinsey have proven that they themselves are nowhere near what they claim you need to become an agile organization - so don't take their advice; they've disqualified themselves!

Since the article below is quite extensive, let me provide an abridged version:
McKinsey demonstrate a poor understanding of what agility is and of what agile coaches actually do, and they argue fallaciously to suggest a solution to a problem you don't even want to have: the need for masters-of-all-trades.
The suggested solution doesn't match the problem statement: they ignore the temporal dimension of the problem and move it into a different domain to make it look like it addresses the proposed challenge. The approach doesn't yield what they claim you need.

By following the McKinsey approach, you will end up with a lot of people with decent, but limited understanding who won't have what it takes to make a breakthrough change.


What McKinsey gets right

Some of the key points that they get right:

  • "Scaling Agile" isn't done by copy+paste. An Agile PMO isn't a solution, either - as agility and PMO are contrary approaches.
  • In a VUCA world, we must move away from consultants who "implement best practices". It's more sustainable to use coaching to help people learn and discover.
  • Without good coaches, your chances of becoming or sustaining an agile organization diminish rapidly.
  • Enterprise agility also requires taking care of processes anchored outside IT, such as budgets or performance reviews.
  • You can't ignore current reality - agility isn't a cloud castle.
  • You don't become a master by attending a course.
I also agree that a 2-day course makes one neither a "Scrum Master" nor, especially, an "Agile Coach" - and that the mere prospect of a higher paycheck inspires people without even the remotest qualification to appropriate these titles in their profiles.
I'll even concede that people with little to no understanding of agility will rewrite their CVs to make it look as though their Command-and-Control project manager role in a classic Waterfall project which gloriously went down the drain was actually an "Enterprise Transformation Agile Coaching" engagement - and that makes it incredibly difficult for classic HR mechanisms to filter out who is indeed a qualified coach.

Where things get fishy

There are many assumptions in the article that I'll simply call into question, as they haven't been backed up.
McKinsey clings to a number of beliefs that are questionable and will not help your agile transition.
  • For what and how many "Agility Coaches" do you really need? Why even introduce yet another role instead of helping Scrum Masters learn to expand their horizon, when self-organized teams need a supportive environment, not even more complexity?
  • How can there be "a clear approach to enterprise agility" when the entire problem is that in the VUCA world, there is no One-Size-Fits-All solution - which is the reason why we need agility to begin with? And if so, how is this approach better than approaches like SAFe or LeSS, which have been around much longer and have been used in a broad spectrum of organizations?
  • Why do they believe that "the role of the Scrum Master is limited when it comes to scaling agile" when all they quote as reason for their beliefs are antipatterns on how not to be a proper Scrum Master? Isn't that like saying that you believe ships are unfeasible because you threw a rock into the river, and it sank?
  • Why define "the vision and scope of the agile transformation, informed by assessment of the organization today"? Is this how you do strategy? Wouldn't it make a lot more sense to define goals by where the organization needs to be than by where you currently stand?
  • How do you define an "Agile Blueprint" if the entire point of agility is that adaptability to circumstance isn't universal, a point they themselves made in the introductory section?
  • Why do they talk about "the agile operating model" rather than about being flexible, responding to change - which isn't a model, but an attitude and capability?
  • How can you change your reality by operating within current reality? Wouldn't establishing a new reality require going beyond current reality? And isn't "reality", as described in the article, merely a perspective held by people without agile experience?
  • If we realize that the problem of Enterprise Agility is that complex system change isn't molecular, but rather molar in nature - how can we succeed by focusing on subsystems within an otherwise static environment? 

Where it gets really problematic

The article creates a false dilemma and then conveniently proclaims a solution - which isn't even one.
McKinsey sets up a false dilemma to sell their "solution".

Setting unrealistic expectations

A person who can do everything at mastery level won't be interested in a limited role.

The "Agility Coach" as defined in Exhibit 2 has a role complexity that shrinks the number of people who could meet that responsibility to zero.
Just ask yourself: how many people do you know in your organization who have management skills, facilitation skills, developer skills, agile framework skills, technical skills, business skills - and the time to maintain all of this at a mastery level? Each of those domains requires focus; otherwise, mastery wanes rapidly. Top it off with that person also being a creative, innovative thought leader, and you're looking for a jack-of-all-trades, master-of-all.

Typically, a developer who moves into management loses their technical edge within a few years. The line manager who works hard to coach and grow their people will not be spending time learning facilitation techniques. The manager who moves into strategic management will have little time to work on empathy and listening skills, simply because their responsibilities lie elsewhere. The business expert will not be a technology expert - and vice versa. And finally, being an innovative thought leader doesn't combine well with being tied to a single organizational context.

Now imagine that you put up mastery in all of these areas as a mandatory requirement - and answer: Why should such a person work for your organization rather than start their own business?

Basic training

Even a 20-week curriculum won't remotely give a person the breadth and depth to be an "Enterprise Agile Coach".
Don't get me wrong. I love the idea of an "Agile Academy" and I have worked with organizations who have one. And I have worked with people who went through Agile Academy curriculums. They're great people to work with, as they do indeed both understand the basic tenets of agility and their specific role within the organization.
But a 12-20-week course (3-6 months, measured generously) on top of the regular work isn't going to make the cut, either.
A Scrum Master training is just a starter, a teaser, a foretaste. It's nowhere near enough to claim mastery of agility. A 12-20-week part-time academy is definitely one step further, but still a far cry from the experience and qualification that professional agile coaches bring.

I have stated in another article that the Scrum Master journey, if taken seriously over the years, might take a person into the domains of technology; personal, team and systemic coaching; quality; process; general and strategic management; mathematics; psychology; sociology and philosophy. Just two days on each of these subjects, and we haven't covered any business domain yet. There is no way that even a 1-year part-time curriculum will give a person even a remotely viable understanding.

Moving the Goalposts

There is no single curriculum to do what McKinsey claims is required.

Participating in an "Agile Academy" program doesn't produce anything that even remotely resembles the jack-of-all-trades proposed by McKinsey's Exhibit 2. An academy program has to set a clear focus and can at best offer teasers on fringe areas. The outcome of participating in an Agile Academy is not a technological mastermind who can innovate, coach teams and management, develop a strategy roadmap and lead enterprises to success. It's simply people who are decently equipped to fill a specific function within an enterprise.
There would be different programs for people who would:
  • support one or multiple agile teams ("Scrum Masters"),
  • professionally coach individuals ("Coaches"), 
  • organize and facilitate larger events ("Facilitators"),
  • train organizational units who are yet unfamiliar with agile approaches ("Trainers"),
  • accompany agile organizational units with methodology advice ("Consultants"),
  • take care of the transition itself ("Organizational Change Managers"),
  • lead program and product development from a business perspective ("Product Owners"),
  • create and innovate ("Developers" / "Designers"),
  • fill day-to-day coordination roles ("Managers"),
  • have people responsibility ("Leaders").
And this isn't even touching the domain of technical mastery, where a decent training program becomes outdated before it's completed.

If we were to say - for argument's sake - that a person goes through all of the domains listed above, then each domain gets a day or two: does that qualify as "mastery"? How much trust would you place in a Change Manager who has taken a single course? Would you put anyone on your Board of Directors whose management experience is limited to one day of training and a week of practice?

While it's indeed possible for one individual to go through a series of such academy programs, and while many of us senior agile practitioners on the free market do have enhanced capability in several of these domains, let's be blunt: you can't have a master-of-all.
Finally, a person who ran through ten quarter-year programs to gain "mastery" has been in education for a minimum of two and a half years - and that's definitely not something you will have at the start of a transition.


Limited quality

Part-time focus isn't the same as full-time focus.
There's a reason why world thought leaders like Marshall Goldsmith don't excel in all of the domains proposed by McKinsey. Being the best in your field still requires full-time dedication and serious effort. Nobody becomes a thought leader by spending a couple of weeks in a training academy. The best people coaches aren't software developers, and the best software developers aren't people coaches. In terms of the "T-Shape" model: people who have broad insight into many domains lack depth in at least one of them - in the worst case, in all.

I like to use the circle illustration. A person has 100% of their time - and it doesn't matter whether we're talking about lifetime, work time or agile coaching time. Let's just talk about 100% of their time. What would you want them to invest this time into? You can choose anything.
You will realize that as you put items into the circle, each additional item decreases the proportional share available for all the remaining ones. An increase in focus on agility automatically decreases the focus on everything else - and putting four items in there ("self mastery", "coaching", "agile mastery", "commercial acumen") means each domain can get no more than a quarter of the person's time.
You will not become a "master" by spending 25% as much time on a domain as a full-time dedicated expert would - and definitely not a "thought leader"!
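The circle illustration is just arithmetic. Here is a minimal sketch in Python - my own illustration, not anything from the article or McKinsey's paper - of how every focus area added to the circle shrinks the share left for all the others:

```python
def time_shares(focus_areas):
    """Split 100% of a person's time evenly across their focus areas.

    Illustrative only: an even split is already the *best* case -
    real people don't allocate attention this cleanly.
    """
    if not focus_areas:
        return {}
    share = 100.0 / len(focus_areas)
    return {area: share for area in focus_areas}

# The four items from the illustration above:
areas = ["self mastery", "coaching", "agile mastery", "commercial acumen"]
for area, pct in time_shares(areas).items():
    # With four domains, each gets at most 25% of the person's time.
    print(f"{area}: {pct:.0f}%")
```

Add a fifth domain and every share drops to 20%; the full-time dedicated expert, meanwhile, still spends 100% on theirs.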



McKinsey started with the broad claim that they have a clear solution to the self-created dilemma of finding people with a long list of qualities - but the proposed solution is to set up a pool of people who, if the suggestion is followed, will have none of those qualities at the level McKinsey insists is required!

Ignoring the setup

Agility is based on empiricism. You have to start by inspecting and adapting, not by setting up an academy.
Starting your own internal "Agile Academy" isn't done overnight. Creating curriculums that match your needs, address your challenges and have sufficiently high-quality materials is a high-effort process that relies on having the expertise to begin with. Such an academy is essential to keep agility flourishing over the years, but if it is your first step, you're not going anywhere in the next couple of years.

As agility is based on empiricism, you need people who can bring both the theoretical and the practical experience of empirical ways of working into the organization. These are the people who typically get hired as "Agile Coaches". There is no way to eliminate this step, and you can't copy+paste another organization's curriculum to circumvent it.

You have to run agile development within your organization to learn what works - and what doesn't. After you've done this, you can roll it out in whatever way you choose - for example, by seeding multiple teams, pulling in additional units, doing a strategic rollout, setting up an academy, or whatever (I have opinions on these approaches, but that would be too much for this article). But the first step - bringing in seasoned practitioners who can shine some light on how agility practically works within your specific organization - is inevitable.


The big lie about agile practitioners


"outsourcing these key roles will often lead to an influx of agility coaches who are disconnected from a company’s culture and want to dogmatically apply agile the way they know it rather than the way it needs to be molded to a particular organization" - McKinsey
Basing a core statement of the paper on a lie about agile coaches doesn't further your cause.

Let me call it what it is: a lie. If McKinsey had any experience in the field (which I will take for granted - otherwise the entire whitepaper is moot), they would know that the last attribute that describes proper agile coaches is dogmatism.
I therefore disagree with the sweeping statement that agile coaches who don't come from your specific organization are often dogmatic and will insist on pushing their pet framework rather than understanding and working with your current reality.

The first big part of the lie: a person who pushes a specific agenda isn't a coach to begin with.
The second part: anyone with experience in agile enterprises knows that an enterprise transition is a long, strenuous process requiring deep understanding of systemic interactions, specific attention and, oftentimes, compromise.

There's even a community of so-called "Agnostic Agile" practitioners - which I would highly recommend looking into - who reject framework dogmatism, because dogmatism isn't agile.


Not addressing the real problem

If you can't tell which external people are qualified, why would you do a better job selecting internals?
McKinsey proclaims that companies often find themselves hiring external coaches who are, in short, utterly unqualified. I agree - this happens. Organizations that aren't agile often have trouble selecting the right people, and qualified agile coaches are not a dime a dozen.
They then proceed on the very flawed assumption that organizations incapable of identifying the right externals would do a significantly better job at identifying the right people internally.

What we see here is a classic bait-and-switch: "You have no way of figuring out who the right external coaches are" [reason: you don't know what the right mindset is, and your HR mechanisms often make it impossible to onboard the right people] "so just identify people internally and grow them". This doesn't answer how to identify that right mindset you couldn't identify when selecting external agile coaches. It also doesn't answer how to fix the broken HR processes which prevented getting qualified people on board. And to draw the full circle: if you can't even spot the right mindset when recruiting - how are you supposed to teach it?

If you can't find the right people externally (where at least a few truly talented people exist), what makes you think you'll do better internally (where definitely nobody with all the requirements from Exhibit 2 exists)?



Closing Remarks

Every company has employee churn; that's simple statistics. Some people leave, others come in.
Everyone in the company should be familiar with the company's specific ways of working, and agile principles and practices should be understood by everyone.
An Agile Academy that offers both starter curriculums and ongoing education, in which interested employees can choose to participate, is a feasible way of achieving this. Formulating your own curriculums to bring people with overarching responsibility "into the know" is an effective way of moving forward.
I do indeed support agile academies as an ongoing mechanism for sustaining enterprise agility.

However
An agile academy is not a strategically viable way to start an agile transition - and it is not a replacement for onboarding external expertise in the early phases.
Finding the right external agile coaches at the beginning is essential to get your initial seeds of agility to flourish, and it's important to ensure the curriculums aren't imprinting the Dunning-Kruger Peak of Ignorance, because that would kill enterprise agility before it could ever start. If you merely follow the ideas proposed in the whitepaper, this is exactly what will most likely happen.

Even when an "Agile Academy" is set up, there are a number of immense dangers:

  • Curriculums that focus too much on specific methods and practices are limited in usefulness and become outdated rapidly.
  • "Standardization" or "blueprinting" is anti-agile; any curriculum aiming in this direction is headed the wrong way.
  • Forcing or coercing employees to participate in the Academy invalidates the entire idea.
  • Senior managers and HR need to lead by example and go through the curriculums themselves, lest the Agile Academy become a mockery of itself.