
Tuesday, October 6, 2020

Dependencies aren't created equal

We often hear the proposal that dependencies are bad, and we should work to remove them.
Is that true? Well - it ... depends. Pun intended.

First and foremost, "dependency" can mean many things, and some of these are good, others bad, and some conditionally good or bad - and yet others, simultaneously good and bad.

Before I get into that, let me define what I mean by "good" and "bad":

Economic perspective

(You can skip this section if you can simply live with the terms "Net Benefit", "Returns" and "TCO".)

Looking from an economic perspective, anything we do or decide results in an outcome.
This outcome has a benefit - the economic Returns.
The way to reach this outcome, its maintenance and failures on the way all have a cost - the Total Cost of Ownership (TCO).

From that, we can determine the Net Benefit: Returns - TCO.

Whether we measure that in currency (dollars, euros) or in whatever unit we see fit (e.g. time, opportunities or satisfaction) is irrelevant. Every outcome we get has such a net benefit.

As such, we can always compare two alternatives based on their net benefit. The outcome with the "better" net benefit is the preferable option.

For example:

Either we can:

  1. do the yard work this Saturday, and have a clean yard.
    Net benefit = clean yard.

  2. hire a gardener at $500 to do the yard work, and go to see the Playoffs.
    Net benefit = clean yard + seen playoffs - $500. 

Now, whether you prefer option 1 or 2 depends on whether you value attending the playoffs more than the $500. The Net Benefit often has a subjective component that's hard to quantify in money. Regardless, a rational mind would choose whichever option has the highest subjective net benefit.

Why do I bring up this example?
Because option 1 has no dependencies, and option 2 has a hard dependency on the gardener and on the money. If you prefer option 2, you deliberately choose to have a dependency in order to increase your benefit.
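To make the arithmetic concrete, here's a minimal sketch of the comparison. The dollar values assigned to the subjective benefits are illustrative assumptions, not part of the original example:

```python
# A minimal sketch of the net-benefit comparison above. The dollar
# values assigned to the subjective benefits are assumptions made
# purely for illustration.

def net_benefit(returns: float, tco: float) -> float:
    """Net Benefit = Returns - TCO."""
    return returns - tco

CLEAN_YARD = 300.0  # assumed subjective value of a clean yard
PLAYOFFS = 400.0    # assumed subjective value of seeing the playoffs

option_1 = net_benefit(returns=CLEAN_YARD, tco=0.0)               # do it yourself
option_2 = net_benefit(returns=CLEAN_YARD + PLAYOFFS, tco=500.0)  # hire the gardener

# With these numbers, option 1 wins (300 > 200); value the playoffs at
# more than $500 and the dependency on the gardener becomes worth taking.
print(option_1, option_2)
```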


Good and bad dependencies

With the concept of net benefit in hand, let us compare two generic alternatives:
Option A, which has dependencies and Net Benefit A = "Returns A" - "TCO A".
Option B, which has no dependencies and Net Benefit B = "Returns B" - "TCO B".
To at least keep it somewhat simple, we'll assume "risk" (of whatever type) is part of the TCO.
This gives us a proper basis for discussion.

Type: Bad dependency
What: Net Benefit A < Net Benefit B - both TCO A > TCO B and Returns A < Returns B.
Example: Being unable to meet market demands because a component vendor can't keep up.

Type: Good dependency
What: Net Benefit A > Net Benefit B and TCO A < TCO B.
Example: A software company buying servers instead of building chipsets from raw silicon.

Type: Potentially good dependency
What: Net Benefit A > Net Benefit B, but Returns A < Returns B.
Example: Letting a business partner serve a market segment you're not an expert in.

Type: Potentially bad dependency
What: TCO A < TCO B, but Net Benefit A < Net Benefit B.
Example: A Preferred Supplier process that simplifies procurement but means you can't get everything you need.

Type: Mixed dependency
What: Net Benefit A > Net Benefit B and TCO A > TCO B.
Example: Outsourcing sales to an agency that takes a commission.

That clarified, we definitely want to avoid or eliminate "bad dependencies", but we may actively look for "good dependencies".

What happens in practice, though, is bias: we inflate ROI and discount TCO to make dependencies look rosier than they are. We do this by making overly optimistic assumptions about potential payoffs (ROI) and dismissing negative factors that don't fit our preconceptions. That is a trap we easily fall victim to, so we should make sure we draw a realistic picture of both ROI and TCO, preferably erring on the side of caution.

Now, let's take a look and interpret those dependencies:

Bad Dependencies

A bad dependency is definitely something you want to eliminate. You win by removing it, hands down.

Good Dependencies

Don't be fooled, good dependencies are both everywhere and at the same time rarer than you think!

Our specialized service society is full of them. They help you make the best value of your own scarce resources, first and foremost, time. We could hardly live if we wanted to have no such good dependencies. You depend on a farmer to produce food, you depend on cars or public transportation which you haven't built, and so on. The modern world wouldn't be able to exist without them.

To eliminate such good dependencies would throw us back into the Stone Age.

After this, let's take off the rosy glasses and face reality: willfully induced good dependencies can turn sour at any time. To use an illustrative example, let's say you're a non-tech company that decided to outsource its IT to an IT service provider. Then the market turns and your most profitable segment becomes online sales - all of a sudden, your ability to meet market demands depends on an external agent who now dictates the rate at which you earn money!

Potentially Good Dependencies

The world isn't simply black and white, and TANSTAAFL. Partnerships are a great example of a potentially, though not universally, good dependency. In our above example, the dependency is good if you partner with someone who relieves you of some burden to allow you to achieve more. The dependency is bad if you partner with someone who allows you to achieve more, but at a price you can't really afford.

(An extreme example here might be a model who becomes rich and famous through a casting show, but is forced into a contract that ultimately makes them sacrifice family relationships and health.)

Potentially Bad Dependencies

When you can get something simpler for less than something better, that's good if you are content with the outcome, and bad if you aren't. Since most of the time people want the best possible outcome, these types of dependency are usually bad.

Mixed Dependencies

These increase your business risk, and are a gambit. The bet is that by taking the dependency, you will get a better outcome. If the bet wins, this dependency is good. On the other hand, if the bet loses, this dependency is bad. Sometimes, it's both good and bad at the same time.

Taking our sales-outsourcing example: you earn less money from the core business that now runs through the agency, and you earn more money from business you otherwise couldn't have acquired. So it's a good dependency as long as the extra business exceeds the commissions, and a bad dependency otherwise.


How does all of that map to "Agile"?

Great question. All of these are business decisions. Oftentimes, it's business people who bring dependencies into a process in an attempt to optimize something. Take, for example, customer service introducing ZenDesk, or Marketing deciding to run Salesforce. Or a manager who decides to offshore the development for some of the systems integrated into a complex IT landscape.

In any of these scenarios, all of a sudden, you end up with dependencies outside your sphere of control. The question, "how do we best create the business outcome" becomes "how do we deal with the technical dependencies?"

If we leave the local optimization goggles of pure Software Development, there may be tangible business benefits which make the induction of these dependencies not just plausible, but a genuine, positive business case. 
For argument's sake, let's ignore the fact that most business cases look better than reality and deal with the fact that all of a sudden, there's a dependency.

While certain Agile framework proponents religiously advocate the removal of dependencies, the case isn't as clear-cut as it may seem.

Simply exposing a dependency doesn't mean we can, or even should, remove it.

We have to make a clear case that a discovered dependency is bad.
When we can provide transparent evidence of a bad dependency, removal should be the only logical conclusion.

If, in our investigations, we discover that from a systemic perspective a dependency is actually good, then trying to remove it would be a local optimization. Managing it becomes inevitable.

And that's where tools like SAFe's dependency map are more helpful than the Agile dogma of "dependencies are bad."



Monday, August 17, 2020

Continuous Problem Solving

"What do you even do as a Agile Coach?" - well, that's easy: I help you on your journey towards better, more effective ways of working. And how do I do that? 

Well, I will start using this simple 4-step process:


The problem solving process

Step 1: The Biggest Problem

When I come in, you will have many problems. One, or just a few, will be the biggest. Let's forget the others for now. Why? Because it's better to get one problem solved than to have no problem solved, and by its very nature, solving the biggest problem will make the biggest difference.

How do we identify the biggest problem in the presence of a myriad of issues?

It's not quite as simple as "brainstorming and dot-voting": sometimes, we need both loads of data and the perspectives of many people who may not be in the room. And sometimes, nobody sees or addresses the elephant in the room. When facilitation isn't enough, I may gather and/or analyze data, interview different stakeholders or simply connect some bits and pieces to form an image to get a conversation going. And if that still isn't enough, I'll propose a shortlist of problems that you can pick from.


Step 2: Root Cause

If you had a simple solution, you probably would already have fixed it. So there's a deeper cause to your problem, and we need to address it to make some relevant progress. At times, we must move your process to an entirely different level, because we can't solve the root cause - we must avoid it!

How do we find the root cause?

Simple tools include 5-Whys or, again, brainstorming and dot-voting. These are often insufficient, because once again, if we knew the cause, we would probably already have addressed it.

I'm not a big fan of "Five Why" analysis for organizational issues, because the technique usually suggests a point-based root cause, whereas the root cause may be hidden in a web of causes, and even then, it could be a network effect leading to the problem we observe. And sometimes, identifying the cause is easier for an outsider who isn't stuck in a presumed "inevitability". If that's the case, I will give you my opinion. (Although I could be wrong. Everyone can always be wrong.)

And sometimes, I frankly don't know. If, for example, the root cause is part of your internal accounting processes, I can at best tell you it's there - but what exactly it is, I'm not an expert on. We'll need to call the experts in.


Step 3: Action Plan

How do we deal with the root cause, how do we get better? You may have ideas, and I also have ideas. You may lack the experience and/or expertise, and I may have it. Let's bring all of that to the table, and turn that into an action plan. 

I could propose an action plan, although you need to accept it. If you have counter-suggestions or alternatives that you consider better, go for it. I'm indifferent to whether you go with my proposal or your own: what matters is that you get some traction and start moving the big bricks.

What's most important about the action plan: it's your action plan. You own it, and you execute upon it. I will support you with whatever I can contribute that you need: facilitation, tracking, communication, workshops and sessions. Depending on how much support you need, I may also compile the outcome of all of this for inspection.

Again, like in step 2, there are problems where I can propose an approach based on my experience, and some where I'll have to pass. For example, if your biggest issue is a proprietary compiler for a proprietary programming language, I can only suggest you get an expert from the vendor to help you on the issue.


Step 4: Reflection

So you did something, or we did something. If it was a good plan, something should be visibly better now, otherwise - what did we miss, what should we do about it?

Is our problem still as big as before, or has it become smaller? How much? Did we create other problems?

I'll support you with methods, structure and facilitation in this process. And, like mentioned before, with compilations of results and outcomes. As needed, I will add my insights and opinions.


But ... how about "Agile"?

"How does that help us introduce Scrum, Kanban, LeSS or SAFe", you may ask? It may not. Or it may. For certain, it will make you more agile, i.e. improve your ability to "change direction at a high speed.

Agile frameworks are entirely in the solution space, i.e. step 3. 
If Scrum helps you solve your biggest problem, and you need someone to teach you how to Scrum, that's what I'll do.
If User Story Mapping solves your biggest problem, that's what we'll do.
If Pair Programming solves your biggest problem and you don't know how to do it, I'll grab the keyboard with you.
If your biggest problem is the lack of an overarching structure and you decide to go with SAFe, I'll set up SAFe with you. Or LeSS, if you consider that the better alternative.

What I won't do, however, is just dump "X" onto you when that wouldn't deal with your biggest problem. The reason is that people would not see the value of "X", and there's even a high probability that "X" would be blocked by whatever your biggest problem is.



Saturday, July 11, 2020

Stop asking Why!

The quest for reason and understanding, for change and improvement, always starts by figuring out the "Why". And now I'm suggesting you "stop asking Why?" - Why? [pun intended!]





The problems with "Why" questions

Let me start with an illustration. 

Jenny and Ahmad struggle with a major issue in an untested segment of legacy code. Ray, our coach, joins the conversation by asking, "Why are there no tests available?" - "Because," Jenny snaps, "the guy who coded this didn't write any." Was Ray helping? No - his question heated the mood further, and it didn't generate any new insight.
So was it even worth asking? No. It was the wrong question.

And like this, many times, when we ask "Why", we're not doing that which we intend to achieve: generate insight into reasons and root causes. 

A second problem with "Why" questions is that all parties engaged in the conversation must be interested in exploring. When people are under duress, they are interested in solutions, not in long-winded discussions. Hence, they may disengage from the conversation and claim you're "wasting time".


Why that's a problem

There are numerous other problems with "Why" questions that you may have encountered yourself, so I'll list them here as types of "Problematic Why" questions:

Nosy - "Why did you just put that document there?"
Problem: When you dig into matters that others feel are none of your business, you will get deflection, not closer to the root.

Suggestive - "Why don't you put the document in the Archive folder?"
Problem: You're implying the solution, and the answer will usually be "Okay" - you're not exploring!

Inquisitive - "Why did you put the document into the Archive folder?"
Problem: It puts people on trial, and the response is often justification rather than inspection.

Accusatory - "Why didn't you put the document in the Archive folder?"
Problem: This immediately poisons the conversation, provoking a "fight or flight" response. Any sentence starting with "Why didn't you..." is easy to interpret as a personal attack.

Condescending - "Why can't you just put that document into the Archive folder?"
Problem: When your question hints at perceived superiority, you're not going anywhere with exploration - it becomes personal!

Commanding - "Why isn't the document in the Archive folder yet?"
Problem: Just like a parent asking, "Why are you not in bed yet?", this isn't an invitation to a conversation - the only socially acceptable response is "I'm on it".

Rhetorical - "Why don't we go grab a coffee?"
Problem: The expected answer is "Yes".

Distracting - "Why do you want to store your document?"
Problem: Although this question could be interesting, it takes the conversation on a tangent. I can un-proudly claim to have torpedoed an entire workshop with such a misaimed "Why" question.

While there may indeed be legitimate reasons to use these types of "Why" questions, please remember: if you want to explore and generate insight, these aren't the questions to ask.

Why that doesn't help

"Why" questions become stronger and stronger as means of making people uncomfortable and less open to actual exploration as they contain, in descending order:
  1. "You"
  2. modals ("do", "can", "should", "must" etc.) 
  3. negations ("don't", "can't" ...)
  4. Past tense ("did")
  5. Judgmental terms ("even", "bad")
  6. Temporal adverbs ("yet", "still", "already")
And here is a full double bingo: "Why haven't you even pondered yet that your questions could be the problem?" - How happy does that make you to start a conversation with me on the topic?
 
With the above list in mind, when you begin analyzing the conversations around you, you may indeed start to feel that "Why" questions are often more reason for people to avoid exploring further than to generate valuable insights.

Why Blanks are also bad

Someone just made a statement, and all you're asking is, "Why?" - one word. What could go wrong? How could that be a problem? It can be.
Imagine you're in the middle of a conversation. Jenny says, "We didn't write enough tests." The insight is there. Now you interject with a probing "Why?" - and although you never said it, you have just accused Jenny of not writing enough tests: her mind will auto-complete your one-word question into, "Why didn't you write enough tests?"


What to ask instead?

Try re-framing "Why" questions so as to keep out of the solution space and to make people interested in actually having an exploratory conversation. The easiest way to do this is very often to avoid the term "Why" altogether.

Taking the table above, all of the "Why" questions could be replaced with an open conversation during the Retrospective, such as: "I sometimes have a hard time finding our documentation. What could we do about it?"

Almost all "Why" questions can be replaced with a "What" or "How" question that serves the same purpose, without being loaded in any direction. 

For example, the question "Why do we have this project?" sounds like, "I think this project is pointless!" whereas, "What is the intended outcome of this project?" assumes "There is a good reason for this project, and I may not understand it."
Likewise, the question "Why didn't we find those defects during testing?" sounds like, "Our testing sucks!", whereas, "How do those defects get into production?" assumes that "I don't know where the root cause is, and we have to locate it."


Summary

Take a look at when you use "Why" questions. Ponder when you didn't get the clarification that you intended. A truly open "Why" question can be re-framed as a "What", "Where" or "How" question that achieves the same purpose.

Experiment with alternative ways of framing questions that avoid pressing hot buttons, such as implied blame or command. 
In doing so, stick only to the facts which have been established already and do not add any extra assumptions or suggestions.

Be slow on "Why": Avoid the "Why" question until you have pondered at least one alternative that doesn't rely on a "Why".

Friday, January 31, 2020

Double Queues for faster Delivery

Is your organization constantly overburdened?
Do you have an endless list of tasks, and nothing seems to get finished? Are you unable to predict how long it will take for that freshly arriving work item to get done?
Here's a simple tip: Set up a "Waiting Queue" before you put anything into progress.

The Wait Queue


The idea is as simple as it is powerful:
By extending the WIP-constraint to the preparation queue, you have a fully controlled system where you can reliably measure lead time. Queuing discipline guarantees that as soon as something enters the system, we can use historic data to predict our expected delivery time.

This, in turn, allows us to set a proper SLA on our process in a very simple fashion: the WIP in the system multiplied by the average service time is when the average work item will be done.
This allows us to give a pretty good due date estimate on any item that crosses the system boundary.
Plus, it removes friction within the system.
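As a back-of-the-envelope illustration, here's a minimal sketch of that estimate. The throughput and WIP numbers are made up, and the calculation is just Little's Law applied to a WIP-limited system:

```python
# A minimal sketch of the SLA estimate, assuming a WIP-limited FIFO
# system and historic throughput data. All numbers are illustrative.
# Little's Law: average lead time = WIP / throughput, which equals
# WIP multiplied by the average service time per item.

def expected_lead_time_days(wip: int, throughput_per_day: float) -> float:
    return wip / throughput_per_day

# Historic data: the team finishes about 2 items per day on average.
# With a WIP limit of 10 (work in progress plus the waiting queue),
# a freshly arriving item should be done in roughly 5 days.
print(expected_lead_time_days(wip=10, throughput_per_day=2.0))  # 5.0
```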

Yes, Scrum does something like that

If you're familiar with Scrum, you'll say: "But that's exactly the Product Backlog!" - almost!
Scrum attempts to implement this "Waiting Queue" with the separation of the Sprint Backlog from the Product Backlog. While that is a pretty good mechanism to limit the WIP within the system, it means we're stuck with an SLA time of "1 Sprint" - not very useful when it comes to Production issues or for optimization!
By optimizing your Waiting Queue mechanics properly, you can reduce your replenishment cycle to significantly less than a day - which breaks the idea of "Sprint Planning" entirely: you become much more flexible, at no cost!

The Kanban Mechanics

Here's a causal loop model of what is happening:


Causal Loops

There are two causal loops in this model:

Clearing the Pipes

The first loop is a balancing (negative feedback) loop - moving items out of the system into the "Waiting Queue" in front of the system will accelerate the system! As odd as this may sound: keeping items out of the system as long as possible reduces their wait time!

As an illustration, think of the overcrowded restaurant: by reducing the number of guests in the place and having them wait outside, the waiter can reach tables faster and there's less stress on the cook - which means you'll get your food faster than if you were standing between the tables, blocking the waiter's path!
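To see this loop in numbers, here's a minimal sketch with made-up work items. It compares strict one-at-a-time processing, with everything else parked in the waiting queue, against letting all items into the system at once and juggling them round-robin:

```python
# A minimal sketch of the "Clearing the Pipes" loop, with made-up
# durations. Equal-sized items keep the arithmetic obvious.

durations = [3, 3, 3, 3]  # days of work per item

# One at a time (WIP = 1): item k finishes after the first k durations.
fifo_done, t = [], 0
for d in durations:
    t += d
    fifo_done.append(t)

# Round-robin (WIP = 4): with equal sizes, every item drags on until
# the very end, so each one finishes only after all the work is done.
rr_done = [sum(durations)] * len(durations)

print(sum(fifo_done) / len(fifo_done))  # 7.5 days average completion
print(sum(rr_done) / len(rr_done))      # 12.0 days average completion
```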


Flushing Work

The second loop is a reinforcing (positive feedback) loop - reducing queues within the system reduces wait time within the system (which increases flow efficiency), which in turn increases our ability to get stuff done - which reduces queues within the system.

How to Implement

This trick costs nothing, except having to adjust our own mental model about how we see the flow of work. You can implement it today without any actual cost in terms of reorganization, retraining, restructuring, reskilling - or whatever.
By then limiting the work you permit within your system (department, team, product organization - whatever) to only what you can achieve in a reasonable period of time, you gain control over your throughput rate and thus get much better predictability in forecasts of any type.



Footnote:
The above is just one of many powerful examples of how changing our pre-conceived mental models enables us to create better systems - at no cost, with no risk.

Tuesday, January 28, 2020

The six terminal diseases of the Agile Community

The "Manifesto for Agile Software Development" was written highly talented individuals seeking for "better ways of developing software and helping others do it." Today, "Agile" has become a 
playground for quacks of all sorts. While I am by no way saying that all agilists are like this, Agile's openness to "an infinite number of practices" has allowed really dangerous diseases to creep in. They devoid the movement of impact, dilute its meaning and will ultimately cause it to become entirely useless.


The six terminal diseases of "Agile"

In the past decade, I've seen six dangerous diseases creep into the working environment, proliferating and carried in through "Agile". Each of these diseases is dangerous to mental health, productivity and organizational survival:

Disease #1 - Infantilization of Work

"Hey, let's have some fun! Bring out the Nerf Guns! Let's give each other some Kudos cards for throwing out the trash - and don't forget to draw a cute little smilie face on the board when you've managed to complete a Task. And if y'all do great this week, we'll watch a Movie in the Office on Friday evening!" Nope. Professionals worth their salt do not go to work to do these things, and they don't want such distractions at work. They want to achieve significant outcomes, and they want to get better at doing what they do. Work should be about doing good work, and workers should be treated like adults, not like infants.
An agile working environment should let people focus on doing what they came for doing, and allow them to bring great results. While it's entirely fine to let people decide by themselves how they can perform best, bringing kindergarten to work and expecting people to join the merry crowd is a problem, not a solution!


Once we have mastered disease #1, we can introduce ...

Disease #2 - Idiocracy

Everything is easy. Everything can be learned by everyone in a couple of days. Education, scholarship and expertise are worth nothing. Attend a training, read a blog article or do some Pairing - and you're an expert. There's a growing disdain for higher education, because if that PhD meant anything, it'd only be that the person has a "Fixed Mindset" and isn't a good cultural fit: flexible knowledge workers can do the same job just as well, they'll just need a Sprint or two to get up to speed!


And since we're dealing with idiots now, we can set the stage for the epic battle of ...

Disease #3 - Empiricism vs. Science

I've written about this many times - There's still something like science, and it beats unfounded "empiricism" hands down. We don't need to re-invent the Wheel. We know how certain things, like thermodynamics, electricity and data processing work. We don't need to iterate our way there to figure out how those things work in our specific context.

Empiricism is used as an idiocratic answer from ignorance, and it's increasingly posed as a counter to scientific knowledge. Coaches not only fail to point their teams to existing bodies of knowledge - they question scientifically valid practices with "Would you want to try something else, it might work even better?" The numbers don't mean anything - "In a VUCA world, we don't know until we tried" - so who needs science or scientifically proven methods? Science is just a conspiracy of people who are unwilling to adapt.


Which brings us into the glorious realm of ...

Disease #4 - Pseudoscience

There are a whole number of practices and ideas rejected by the scientific community, because they have either failed to meet their burden of proof or failed the test of scrutiny. Regardless, agile coaches and trainers "discover", modify - or even entirely re-invent - these ideas and proclaim them as "agile practices" that are "at least worth trying". They add them into their coaching style or train others to use them. And so, these practices creep into Agile workplaces, get promoted as if they were scientifically valid, and further dilute the credibility and impact of methods that are scientifically valid.
NLP, MBTI and the law of attraction are just some of these practices growing an audience among agilists.


And what wouldn't be the next step if not ...

Disease #5 - Esoterics

Once we've got the office Feng Shui right, a Citrine crystal should be on your desk all the time to stimulate creativity and help your memory. Remember to do some Transcendental Meditation and invoke your Chakras. It will really boost your performance! If you have access to all these wonderful Agile Practices, your Agile Coach has truly done all they can!

(If you think I'm joking - you can find official, certified trainings that combine such practices with Agile Methods!)


Even though it's hard, we can still top this with ...

Disease #6 - Religion

I'll avoid the obvious self-entrapment of starting yet another discussion whether certain Agile approaches or the Agile Movement itself have already become a religion, and take it where it really hurts.
Some agile coaches use "Agile" approaches to promote their own religion - a blog article nominates their own deity as "The God of Agile" (which could be considered a rather harmless case) - and some individuals are even bringing Mysticism, Spiritism, Animism or Shamanism into their trainings or coaching practice!

Religion is a personal thing. It's highly contentious. It doesn't help us in doing a better job, being more productive or solving meaningful problems. It simply has no place in the working environment.



The Cure

Each of these six diseases is dangerous, and in combination, their harmful effect grows exponentially. At best, consider yourself inoculated now and actively resist letting anyone introduce them into your workplace. At worst, your workplace has already contracted one or more of them.

Address them. Actively.

If you're a regular member (manager / developer etc.) of the organization that suffers from such diseases: figure out where it comes from and confront those who brought in the disease. Actively stop further contamination and start cleansing the infection from your organization.

If you're a Scrum Master or Coach and you think introducing these practices is the right thing to do: if this article doesn't make you rethink your course of action, for the best of your team: please pack your bags and get out! And no, this isn't personal - I'm not judging you as a person, just your practice.



Wednesday, November 6, 2019

Scrum is setting you up to fail!

The number of debates where agilists claim, "But Scrum addresses <this topic> already!" - then proceed to quote a sentence, or even a single term, from their framework's rules - is staggering. The phrase, "we need to be pragmatic, and Scrum is idealistic" heats up the debate.

My take: 
In some cases, frameworks like Scrum are helpful. By themselves, however, they aren't. They provide no helpful guidance and rely on the strong assumption that the solutions to an organization's core problems already exist within the team's sphere of control.
This assumption is borderline insane, because people wouldn't need a rule or framework for something they already know how to do.

Even in regard to my article about demand, I got the replies, "Scrum does address the issue. That's what you got a Product Owner for." and "SAFe uses the term 'Demand Management' at Portfolio level, therefore SAFe has a solution." - I say that this is about as helpful in practice as stating, "We have the cure for cancer already. That's what scientists are for: they even use the term cancer research."
Yes. And: What exactly is the solution to the problem beyond assigning responsibility or attaching a label somewhere?

Let's focus on Scrum, just to be talking about something specific.
In all fairness, many Scrum practitioners state, "Scrum doesn't solve your problems, it only highlights them" - which is my answer to everyone who would claim that "Scrum does address this already." Maybe you get a label. You don't get a solution. Scrum itself has no helpful answers, not even the hint of a direction.

Scrum's dangerous assumptions

Scrum makes a lot of strong assumptions. Most of the time, these assumptions are just not valid and will cause a Scrum adoption to shipwreck.
These are all examples of conditions that Scrum simply assumes to be in place:

No blocking organizational issues

Scrum can only work when the surrounding organization is at least basically compatible with Scrum. Scrum's assumption is that you are well aware of how to ensure that:
  • Organizational processes are fundamentally compatible with agile development
  • A meaningful portfolio strategy exists
  • Demand funneling "somehow works"
  • Individual incentive schemes don't get in the way of team or organizational goals
  • The organization improves where it matters
  • You have stable teams
And what if not?

Unproblematic contracts

Scrum teams must operate in an environment where developers and customers share common goals, and developers are contractually enabled to maximize organizational value. Scrum assumes that you have a contract situation where:
  • There is no functional split between different organizations (e.g. outsourced manual test - or worse, outsourced users)
  • Financial incentives encourage optimizing around value rather than activities
  • The team meets all legal requirements to deliver all required components
  • The development organization benefits from producing better / more / faster outcomes
And what if not?

People get along

Scrum assumes people can and will communicate with a goal to create value.
You have to know by yourself how to achieve the following states:
  • No communication gaps where significant information gets lost
  • Stakeholders care and show up to provide essential feedback
  • Managers understand and avoid what demotivates the team
  • People have a sufficient level of trust to raise issues and concerns
  • When all things fail, people focus on learning and improvement, avoiding blame.
And what if not?

Development issues

Since its inception, Scrum has gradually removed all aspects of technical guidance. As such, there's now the hard assumption that:
  • Teams have the necessary skills to produce a "Done" Increment
  • Teams know about quality engineering practices
  • The team's software isn't a steaming pile of ... legacy
  • Teams are able to produce a meaningful business forecast
  • Teams can cope with technology shifts
And what if not?


The danger of these assumptions

To assume that none of these problems exist is idealism. If you make these assumptions, you will shipwreck.
To assume you can safely operate Scrum when several of these problems exist will also make you shipwreck.
To assume that attending a Scrum training course equips you to take on this gorilla will make you shipwreck as well.

To assume that Scrum has a solution to any of these problems is false hope or snake oil, depending on perspective. Scrum assumes that they have already been solved - or at least, that you know well how to solve them. Scrum tackles none of them.


What if not

The Scrum Guide has no guidance on any of these topics, as all of these problems are assumed to be manageable and/or solved in a Scrum context.
Where these problems are significant, Scrum isn't the droid you're looking for.

Friday, August 9, 2019

The problem with Agile Transformation Programs

Many organizations want to "become Agile", then browse through the catalog of common frameworks, pick their favorite - and run a Transformation Program. While all of these are officially communicated as a massive success, I'd like to cast a bit of light on what is actually going on.



Transformation Input

There are some "givens" that affect a framework-based Agile Transformation program before it has even been conceptualized: expectations, reality and the determined future state as described by the framework.
These are the constraints to the success of the transformation, and depending on how well they overlap, this success can be bigger or smaller. Worst case, this intersect is empty from the beginning - in which case, the transformation program is doomed.

Management Expectation

Typical management expectations from an Agile Transformation include, without being limited to:
  • Faster time-to-market
  • Lower development costs
  • Higher Quality (fewer defects)
  • Improved customer satisfaction
  • Happier employees
The choice of framework often falls to that which promises most of these benefits.
"Good" transformation programs then set targets based on improvment seen on these metrics. 

Unfortunately, to scope a proper project and/or program, the real work is oftentimes measured in the number of departments/projects/employees using the chosen Agile Framework "successfully" (whatever that means).

Real Problems

Usually less visible to management is the entire quagmire of problems the organization has accumulated over the years. Benefits don't appear from thin air - they are generated by getting rid of problems.
The more problems are known and the greater the pain to solve them, the easier it will be to actually get benefits out of a transformation. 

Cultures averse to admitting or taking responsibility for problems will struggle to gain actual benefits from any "Transformation Program," regardless of whether it's agile or not.

Framework Scope

Frameworks themselves have a very clear scope, mostly concerned with structure - roles and process, events and some artifacts. We can easily determine how much of this structure has been implemented, and that's how success is often measured.

What's significantly more challenging: determining how compatible people's mindset and behaviour is with the intent of the framework, and how significantly the "new ways of working" get impacted by "old ways of thinking and doing".



Transformation Output

To keep this article simple, let's not argue about how well any of the inputs was understood or change was actually realized, and keep to reality - "we are where we are, and we go where we go."
This reality is:
  • some expectations will be met, others won't.
  • some aspects of the framework will be implemented perfectly, others won't.
  • some problems will be solved, others won't.
Another reality is that at the point in time when the program is conceptualized:
  • some expectations are based on known problems, others on unknown problems.
  • some expectations are based on understanding the framework correctly, others on understanding it incorrectly.
  • some program activities will be planned to do things that solve meaningful problems, others will focus on something that's not a root cause.
  • some framework components will lead to beneficial change, others won't.
... and we can't know which is which, until we have done some experimentation. 
For argument's sake, let's just assume that the program is sufficiently flexible to allow such experiments, and that everyone contributes to the best of their understanding and ability.

Programs are still time-bound, and it doesn't matter whether that timespan is 1 month or 5 years. Within this period, a finite amount of activity will happen, this activity will lead us wherever it leads, and "not everything" will be perfect. And this is what the future reality will look like:

Outright failure

Some aspects of transformation will lead to success, others will fail to provide any improvement - or even make things worse. Let's call things by their name: When you scope a transformation program and don't get something you planned to get, that's failure.

In this section, I want to highlight the failures your program will have.

Unmet expectations

There will be a number of management expectations that haven't been met (blue area outside intersects). Some may have been unrealistic from the outset, others "could have ... should have". Regardless, someone will be disappointed. Managers familiar with diplomacy and negotiation will stomach the ordeal, knowing they got something at least.

Just be careful: the higher your expectations and the less aligned they are with the framework's actual capability, your organizational reality and the flexibility of the transformation program, the bigger the disappointment will be.

Wasted Investment

Frameworks are frameworks, and when shown an overview of everything that the framework has to offer, managers often decide that "we need all of that". Truth be told, you don't, because a lot of it provides a solution to problems you don't have (yellow area outside intersects). But you can't know what you need until you are in a situation where you do need it.

By deciding upfront to go full-scale into implementing everything a framework has to offer, you're going to load a massive amount of waste into your transformation program - and this waste costs time, money and opportunity.

Unsolved problems

Many of the problems in your organization won't get addressed at all (red area outside intersects) - because they're unknown, too complicated to resolve or simply not relevant enough.

The intent of an agile framework isn't to solve your problems, but to provide you the means of solving them - you still need the heart, the will and the power to actually do this. 
Great transformations focus on tackling meaningful problems, thereby showing by action that resolution is possible and valuable - bad transformations avoid the mess of problem solving and focus on just covering the existing heap of problems under the blanket of a framework.

Unresolved pain points

Managers would prefer the perfect organization where everything is smooth and problems don't exist. But we don't live in Cockaigne (purple intersect between blue and red on the left), and "Agile" won't create one, either. Problems are still real - and frameworks don't address them directly, they just provide means for addressing them.

The list of pain points is (near) endless and seems to grow with increasing transparency - and we only have a finite amount of time. There will be un-addressed pain points. Even if the Agile Framework is perfectly implemented, many of the pain points will remain - most likely, more than imagined.

When a transformation program scopes more framework implementation than problem solving, don't be amazed if the outcome is more structural change than solved problems!



Partial Benefits

Transformation programs can and do provide benefits, in different categories:
What you see and feel, what you see but can't feel, what you feel but don't see - and what you neither see nor feel, although the latter is a difficult topic in and of itself.

Illusory benefits

Informed managers will expect the transformation program to implement certain framework elements that will indeed be implemented (full intersect between blue and yellow circle). This is great when these elements actually solve a problem the organization has - but there's no guarantee (greenish intersect between blue and yellow on the right). 
Oftentimes, we create "change for change's sake" without getting any better results, because we changed something that wasn't a problem.
In some cases, the new status quo is even worse than the former state, but it looks better ... because it's new, and compatible with the Agile Framework.

These benefits are not real, they have cost money and kept people from doing the right thing!

Let me warn you right here: Unethical coaches/consultants will focus their efforts on the illusory benefits, to build management rapport and milk the cash cow as long as possible. 
AVOID generating benefits in this category!

Hidden benefits

The framework may actually solve some problems that management isn't even aware of (orange intersect between red and yellow on the bottom), either because the benefits take a long time to become visible or because they do not affect anything of management relevance.

A typical example is the implementation of XP engineering practices - it may actually look like teams are getting slower when they write unit tests and create deployment automation, but the benefits will become visible in the future, as defect rates and stress levels decline. Example: Developers who have worked on Clean Code microservices with Continuous Deployment never want to go back to legacy processes or code, because it's so much easier and faster to work this way - but getting there could take years (and possibly be entirely unfeasible) on legacy systems.

Let me dive in with another note of caution: Ethical coaches/consultants will focus their efforts on solving real problems, many of which managers don't see. Managers must be curious to learn from their teams whether such hidden benefits are being generated, or whether the consultant is just trying to please management.



The success story

Every organization that has invested lots of effort and money on an agile transformation program will eventually produce a success story (brown intersect area of all circles in the center) based on what management expected, how the organization actually benefitted and how the Agile Framework has brought them there.

People are smart enough to not reveal publicly how many of their initial expectations weren't met, how much activity didn't lead the organization in a better direction and how big their problems still are. But depending on how well the Agile Transformation Program was defined and executed, this could easily be some 90% (or more) of the program.

Simply put, it's insane to start an Agile Transformation Program just because you read someone else's success story: the story doesn't tell you what went wrong, how much disappointment and frustration accrued, how much time and money was wasted - and oftentimes, you don't even see the pointers to what actually made it a success.

Real success happens where the three items intersect, and the size of this intersect is determined by:
  • How well do you focus on solving real problems?
  • How flexible are your expectations?
  • Are you using frameworks where they benefit, rather than making your organization framework-compliant?



Summary

An agile framework transformation program, conceptualized, planned and executed by people who do not exhibit an agile mindset and who do not practice agile organizational development - is going to produce a politically motivated, insignificant success story. 

Stop thinking frameworks, stop thinking programs.
Start thinking agile, and embark on the agile journey in an agile way.








Wednesday, August 7, 2019

Five Principles of Organizational Agility

Do we need frameworks to be agile or not? 
Maybe. If we have a problem that frameworks solve. And if they are an appropriate solution.
Here are my five principles for change, which are paramount to the question of frameworks.




If you adhere to these five principles, my question would be: "What do you expect from an organizational framework?"

1 - Frame your problem properly

Learn to understand your problem before implementing a solution.
A poorly framed problem's solution may be worse than the current state.

Go beyond both the symptom and the obvious, and frame the problem correctly. And when you learn that you framed the problem in the wrong way, refine your problem statement rather than "working harder" to make the solution work.

There are great tools, like Ishikawa Diagrams, 5-Why-Analysis and many others which can assist you in getting closer to what your real problem is, in a very short time.

2 - Limit change

Only reduce the bottleneck constraint that causes the problem.
Large-scale changes will have uncontrollable, large-scale impact.

Figure out what the bottleneck is - why things aren't moving more smoothly.
When you know where the bottleneck is, be uncompromising on making a change there. By accepting the bottleneck and making a change elsewhere, you destroy the change process overall.

Tools like Process Mapping, Flow Diagrams, Lead Time Analysis and many others will help you discover the single point where a change will be effective.

3 - Simplify change

Do the simplest possible thing to reduce constraints on the current bottleneck.
Change should be so simple that it's effortless - if our idea doesn't work, we just try the next one until we find something that does work, and that's a lot easier when we make small, simple changes rather than implementing grandiose master plans.

Simplicity is an art - "anyone can make things complicated, it takes a genius to come up with something simple". When geniuses are hard to come by, it helps to involve the people who actually face the problem, as they often have a pretty good idea why something doesn't work, and they may just need the permission to do the right thing instead.


4 - Subtract before you add

Eliminate structures, processes or tools that cause problems.
The "simplest possible thing" we can do is usually getting rid of a blockage, not adding something.
Problems don't get solved by adding something on top.

Additions to an existing problem are like band-aids on a cracked pavement: They won't really help, won't last and won't make much of a difference. The existing problem needs to be addressed and removed. Often, this is already enough.

I like to use tools like Marshall Goldsmith's "Wheel of Change" to model the change, and tend to remind people that "For each thing you add, you have to remove one thing", because otherwise we create more problems (oftentimes elsewhere) instead of solving them.

5 - Verify outcomes

Verify your outcomes before calling something a success.
Change is successful when the problem is gone.

Many organizations call change successful when the change is implemented - which often leads to "change for the sake of change", and people getting rewarded for doing things that don't help the company.

We can use methods like OKR to define which outcome we want to see, and if we didn't achieve this outcome, that should at least trigger the question of what the change has done instead.



Closing remarks
These five principles stand together and should be applied together.

A well-defined problem allows us to run a minimal change experiment in the right place, which allows us to verify fast and cheaply whether we're making an impact. And we need to get rid of at least one root cause of the problem to make such an impact.

Sunday, January 27, 2019

The Agile Fallacy

"That's not Agile!" - does it matter? My claim - no!

"Being Agile" isn't true/false - it's a spectrum from rock to photon. Everyone is somewhere.


When looking behind the "Agile vs. Not Agile" bifurcation fallacy, we can start asking more meaningful questions:

  • Does the organization meet Market and Business Needs properly?
  • Does the Product stay relevant over time?
  • Are the Delivered Outcomes useful?
  • Is the Total Lead Time acceptable?
  • Is the overall Return on Investment of development positive?
  • Does everyone Focus on the most important thing?
  • Does development create up-to-date, Sustainable Solutions?
  • Is Technology Debt under control?
  • Is Improvement Potential leveraged effectively?
  • Do Failures get addressed and corrected openly?
  • Is there Continuous Learning going on?
  • Does the company exhibit Resilience to market changes?
  • Does the organization have a Sustainable Workload?
  • Does the organization attract, grow and Retain Talent?


If all these questions are answered with "Yes", then I could ask "Why do you want to be agile?"
If the answer to some or all is "No", then I would ask, "Then what are you doing about it?"

Taking a closer look, even these questions aren't binary, but gradients which would usually range somewhere from "We need to do something immediately" to "Yeah, we might want to think about it."

All the above questions are merely indicators of whether an organization is sufficiently agile for its own good, so I would leave you, as a reader, with the initial question: If an organization is excelling in all the areas mentioned above, does it matter whether they're agile?

Thursday, July 5, 2018

"Googlewins Law" - The Google Argument

Maybe you've encountered "The Google Argument" before. I call it "Googlewin's Law". What is it, and how does it damage dialogue?



In homage to Godwin's Law, I call for "Googlewin's Law", and would phrase it like this:

"As a technical discussion grows longer, the probability of a comparison involving Google approaches 1"

I have observed an emerging trend that when meaningful arguments run low, someone "pulls a Google" (alternatively LinkedIn, Amazon, Facebook) in an attempt to land a winning strike. Most often, however, the invocation of Google is nothing more than a fallacy.
Here are the three most common uses of the Google Argument:

The positive Google Argument

When developers want to do something which could be called "nerdfest" and run out of meaningful arguments why doing this is a good idea, they invoke the positive Google argument:
"We could become the next Google with this". 
Typical invocations could be: "With this ranking algorithm, we could become the next Google!" - "With this sales platform, we could become the next Amazon!"

Here is why it's a fallacy:

Google never tried to become great; they tried to do something that happened to work, and because they did that exceedingly well in all domains - from technical implementation through marketing and sales all the way to customer service - they succeeded. Oh, and they happened to have great seed funding.
Google did not become great because of one good technology; they became great because they happened to do a whole lot of other things right as well, in a market where one stupid move can cost you everything.

So the next time someone pulls a positive Google on you, just ask: "What makes you so sure we don't become the next Blockbuster with that idea?"


The negative Google argument

The opposite of the positive Google argument, used as a killer argument against any form of innovation or change is the negative Google argument:

"We don't need this. We are not Google".
Typical invocations sound like: "Continuous Integration? We're not Google!" - "Microservices? We're not Google!" - "Virtual Machines? We're not Google!"

Here is why it's a fallacy:

Not everything Google does is only helpful for Google. Google uses a lot of techniques and technologies that help them achieve their mission and goals more easily and effectively.
Google has even created quite a number of useful tools, frameworks and techniques that are available open source (such as Angular), simply because they are useful.
If everything that made Google successful was anathema, you shouldn't even be using computers!



The appeal to Google

When lacking evidence or sound arguments, what's more convenient than invoking the name of a billion-dollar-company to make your case? Who could argue against an appeal to Google:

"Google also does this." - "Google invented this!"
Typical invocations would be: "Of course we need a distributed server farm. Just look at Google, they also do that!" - "Our product search page needs semantic interpretations. Google also does this!"

Here is why it's a fallacy:

First and foremost, unless you're in the business of selling advertisement space on one of the world's most frequented websites, chances are you're not going to make profit the way Google does.
Second, Google can afford a technology infrastructure that costs billions, because that's what generates revenue as well. There's an old Latin proverb, "quod licet Iovi, non licet bovi" (lit. "what is permitted to Jupiter is not permitted to an ox").
Third, Google has many billions of dollars to invest. It doesn't hurt Google to sink $100m into a promising, yet ultimately unsuccessful innovation. I mean, yes, it hurts, but it's not lethal. Can your business afford sinking $100m for zero returns? If so, you can appeal to Google; otherwise, I'd be cautious.



Summary


The next time someone invokes Google, Facebook, Amazon, LinkedIn or even companies like Zappos, Spotify or whatever - think of Googlewin's Law.

What worked for others has no guarantee of working for you - and even though you are not them, not everything they do is bad (such as, for example, breathing!).
Google is not a reason either way.

Feel free to ask, "Can you rephrase that statement with a comprehensible reason that has a connection to our business?"


Sunday, June 24, 2018

Test Pyramid Explained - Part 2: Measurement Systems

Understanding the "Why" of the Test Pyramid is important in making the right decisions. This article examines the underlying foundation of testing: making statements about quality.
Why do we need to consider the test pyramid when creating our test suite?



How can we know if the software works? Whether it does what it's supposed to do? Whether it does that right?
If not, whether it's broken? What doesn't work? Why it doesn't work? What caused it to malfunction?
These are all different questions - and so the approach to answering the questions also differs. Which approach should we then take? 

Let's take a look at our test pyramid.



In an attempt to answer the questions above, we need to explore:

Measurement Systems

According to Wikipedia, a measurement system includes a number of factors, including - but not limited to - these:

Miss one of the factors, and you might end up with an entirely messed up test process!

Before we can answer how these factors contribute to your testing process, we need to examine why they are relevant - and to answer the "Why" question, we need to answer the even more fundamental question:

Why test?

There are varying reasons for testing, all of which require different approaches:

  1. Ensuring you did things right.
  2. Ensuring you are doing things right.
  3. Ensuring you will do things right.
  4. Ensuring you understand things right.
  5. Ensuring you did the right things.
  6. ...

As you might guess, a test approach to ensure you did things right will look vastly different from a test approach to ensure that you will be doing the right things.
Some approaches are more reactive in nature, while others are more proactive. Some are more concerned with the process of creating software - others are more concerned with the created software.

When no tests have previously been in place (such as in a Legacy System), you're well advised to start at the easiest level: ensuring that you did things right, i.e. ensuring that the software works as intended.
This is our classic Waterfall testing approach, where testers get confronted with allegedly "finished" software which just needs to be quality-checked.

When you have the luxury of starting with a Green Field, you're well advised to take the more challenging, yet more rewarding route: ensuring that you will be doing the right thing right - before even starting off.
This approach requires "building quality in" right from the outset, using practices such as Behaviour Driven Development, Test Driven Development and Specification by Example.

The advantage of "testing early" is that misunderstandings are caught before they can even lead to faulty software; the advantage of "testing often" is that problems get solved before they proliferate or get worse.
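To make this concrete, here is a minimal sketch of test-first development in Python - the discount rule, the function name and the thresholds are hypothetical, invented purely for illustration:

    # A test-first sketch: the executable examples below act as the
    # specification, and they exist before the production code does.
    import pytest

    def apply_discount(order_total: float) -> float:
        # The simplest implementation that satisfies the examples below.
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        return order_total * 0.9 if order_total > 100 else order_total

    # Specification by Example: each test is one concrete, checkable example.
    def test_ten_percent_discount_above_100():
        assert apply_discount(200.0) == pytest.approx(180.0)

    def test_no_discount_at_or_below_100():
        assert apply_discount(100.0) == pytest.approx(100.0)

    def test_negative_totals_are_rejected():
        with pytest.raises(ValueError):
            apply_discount(-1.0)

Written before the implementation, such examples surface misunderstandings about the rule ("does an order of exactly 100 get the discount?") while they are still cheap to resolve.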

The desirable state

A perfect testing approach would minimize:

  • the risk of introducing fault into the system
  • the time required to detect potential fault in the system
  • the effort required to correct fault in the system

When taking a good look at our testing pyramid from the last article, we notice the following:

Test Type | Prevent risk | Execution time | Correction Effort
Process Chain | Hardly helps: often doesn't even get fixed before launch. | Might come too late in the process. | Lots of pre-analysis required; potentially already proliferated.
System | Very low: only prevents known launch failures. | Very slow, often gets skipped. | Slow.
Integration | Low: only keeps defects from proliferating in the system. | Slow, difficult to set up. | Interrupts the flow of work.
Feature & Contract | BDD: know risk ahead. | Would run all the time while working on a feature. | Should only affect 1 method.
Unit | TDD: know risk ahead. | Negligible: can always run. | Minimal: should only affect 1 line of code.


This matrix gives the impression that any test other than a Feature & Contract or Unit test doesn't even make sense from an economic perspective - yet precisely these types of test are most often neglected, while attention is paid to the upper parts of the Test Pyramid. Why does this happen?


Precision and Accuracy

Choose your poison

Let's suppose I turn on Google Maps and want to know how long my daily commute will take.
Imagine that I get to choose between two answers:
Answer #1: "Between 1 minute and 10 hours". Wow, that's helpful - not! It's an accurate answer with low precision.
Answer #2: "45 minutes, 21 seconds and 112 milliseconds". I like that. But ... when I hit the highway, there's traffic all over the place. I end up taking three hours. This answer was very precise - just also very inaccurate.

Do you prefer high accuracy and low precision - or high precision and low accuracy?
It seems like only a dunce would answer "high precision and low accuracy", because that's like having a non-winning lottery ticket.

Approximating meaning

When starting with nothing, it's a good idea to turn a huge fog of war into something more tangible, more solid - so we start with a test which brings us accuracy at the cost of precision. We approximate.
In the absence of a better strategy, a vague answer is better than no answer or a wrong answer. And that is how Process Chain tests are created.

Knowing nothing about the system, I can still easily answer a simple question, such as: "If I buy lettuce, bananas and napkins - will I have these exact three things shipped to my home?"
This is a typical process chain test, as it masks the complexity of the underlying process. The test requires little understanding of the system, yet allows the tester to make a definite yes/no statement about whether the system works as intended.
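As a sketch, such a test might look like the following - the shop API, its endpoints and the order flow are entirely hypothetical, and in a real process chain each step would cross several independent systems:

    # Hypothetical end-to-end (process chain) test: high accuracy
    # ("does the whole chain work?"), low precision (a failure points
    # at no specific component).
    import requests

    SHOP_URL = "https://shop.example.com/api"  # hypothetical endpoint

    def test_ordered_items_arrive_unchanged():
        items = ["lettuce", "bananas", "napkins"]
        order = requests.post(f"{SHOP_URL}/orders", json={"items": items}).json()
        # ... after picking, packing and shipping have run their course ...
        shipment = requests.get(f"{SHOP_URL}/shipments/{order['id']}").json()
        # One definite yes/no statement about the entire chain:
        assert sorted(shipment["items"]) == sorted(items)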

Unravelling complexity

When a tester's answer to a process chain test is "It doesn't work", the entire lack of precision in that quality statement is thrown directly at the developers, who then need to discover why it doesn't work. Testers then get trained to make the best possible statement of quality, such as "I got parsley instead of lettuce" and "The order confirmation showed lettuce" - but what the tester may never know is where the problem was introduced into the system. In a complex service landscape (potentially covering B2B suppliers, partners and service providers), the analysis process is often "Happy Hunting".

The false dichotomy

Choosing either accuracy or precision is a false dichotomy - why opt for one when you can have both? What is required is a measurement system of finer granularity.
Even in the above example, we hinted that the tester is definitely able to make a more accurate statement than "It didn't work" - and they can be more precise than that, as well. Good testers would always approximate the maximum possible accuracy and precision.
Their accuracy is only limited by logic hidden from their understanding - and their precision is only limited by the means through which they can interact with the process.
Giving testers deeper insight into the logic of a system allows them to increase their accuracy.
Giving them better means of interacting with the system allows them to increase their precision.

Under perfect conditions, a test will answer with perfect accuracy and perfect precision. And that's our Unit Test. The downside? To test for all potential issues, we need a LOT of them: any single missing unit test punches a hole into the precision of our statements.
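For contrast, a sketch of a test at the bottom of the pyramid - the tax function is hypothetical, but note how a failure would name exactly one function and one violated expectation:

    # A unit test is both accurate and precise: it fails only when this
    # specific logic is wrong, and it points at exactly one place to fix.
    def vat(net_price: float, rate: float = 0.19) -> float:
        return round(net_price * rate, 2)

    def test_vat_on_100_at_19_percent():
        assert vat(100.0) == 19.0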


Repeatability & Reproducibility

What's the most common joke among testers? "Works on my machine." While testers consider this a developer's measly excuse for not fixing a defect, developers consider the statement sufficient proof that the test was executed sloppily. The issue? Reproducibility.
It gets worse when the tester calls in the developer to show them the problem - and: magic - it works! The issue? Repeatability.

Reproducibility

In science, reproducibility is key - a hypothesis which can't rely on reproducible evidence is subject to severe doubts, and for good reason. To make a reliable statement of quality, therefore, we must ensure that test results are reproducible.
This means that given the same setup, we would expect to get the same outcome.
Let's look closely at the factors affecting the reproducibility of a test:
Preconditions, the environment, the code segment in question, the method of test execution - all affect reproducibility.
As most applications are stateful (i.e. the outcome depends on the current state of the system), reproducibility requires a perfect reconstruction of the test conditions. The bigger the scope affected by the test is - the more test conditions need to be met. In the worst case scenario, the entire world could affect the test case, and our only chance of reproducing the same outcome would be to snapshot and reset the world - which, of course, we can't do.

Our goal therefore should be to minimize the essential test conditions, as every additional condition reduces reproducibility.
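A common way to do that is to pass every source of variation into the code under test, instead of letting the code reach into its environment - a sketch in Python, with hypothetical names:

    from datetime import date, datetime

    # Hard to reproduce: the outcome depends on a hidden condition (the wall clock).
    def greeting_now() -> str:
        return "Happy New Year!" if datetime.now().month == 1 else "Hello!"

    # Reproducible: the only condition is the argument, which the test controls.
    def greeting(today: date) -> str:
        return "Happy New Year!" if today.month == 1 else "Hello!"

    def test_new_year_greeting():
        assert greeting(date(2018, 1, 1)) == "Happy New Year!"

    def test_regular_greeting():
        assert greeting(date(2018, 6, 24)) == "Hello!"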

Repeatability

Another key to hypothesis testing is being able to do the same thing over and over in order to get the same outcome. The Scientific Method requires repeatability for good reason: which conclusion do we draw when doing the same thing twice leads to different outcomes?
When we create an automated system which possibly fires the same code segment millions (or even billions) of times per day, then even a 1% fault ratio is unacceptable, so we can't rely on tests that may or may not be correct - we want the software itself to always respond in the same way, and we want our software tests to do the same.
The more often we run our tests, the more repeatability we need from them. When a test is executed once a week, a 1% repeatability problem means roughly one spurious failure every two years (52 runs per year x 1%) - we simply repeat the test and move on. It's an entirely different story when the test is executed a few hundred times per day: the same 1% would produce several false alarms daily, and we'd be doing nothing except figuring out why the tests have failed!


Flakiness

Every developer who uses a Continuous Integration (or: Deployment) pipeline has some horror stories to tell about flaky tests. Flakiness, in short, is the result of both reproducibility and repeatability issues.
Tests become flaky when either the process isn't 100% repeatable, or there are preconditions which haven't been accounted for when preparing the tests.
As test complexity increases, the number of factors potentially causing flakiness increases - as does the number of test steps potentially producing flaky results.
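A classic example of such an uncaught precondition - sketched here with a hypothetical cache, invented for illustration - is a test that races against real time:

    import time

    class Cache:
        # Minimal cache with time-based expiry.
        def __init__(self):
            self._store = {}

        def put(self, key, value, ttl_seconds):
            self._store[key] = (value, time.time() + ttl_seconds)

        def get(self, key):
            value, expires_at = self._store.get(key, (None, 0.0))
            return value if time.time() < expires_at else None

    def test_entry_still_readable_flaky():
        cache = Cache()
        cache.put("k", "v", ttl_seconds=0.05)
        # Flaky: if the test runner stalls here for more than 50 ms, the entry
        # expires and the assertion fails - same code, different outcome.
        assert cache.get("k") == "v"

The cure is the same as in the reproducibility section above: make the clock an explicit, injected test condition instead of an uncontrolled one.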

Let's re-examine our pyramid:

Test Type | Repeatability | Reproducibility | Causes of Flakiness
Process Chain | Difficult: any change can change the outcome. | Extremely low: a consistent state across many systems is almost impossible to maintain. | Unknown changes, unknown configuration effects, undefined interactions, unreliable systems, unreliable infrastructure.
System | Extremely low: desired feature changes can change the overall system. | Challenging: any system change can cause any test to fail. | Unknown configuration effects, undefined interactions, unreliable infrastructure.
Integration | Low: every release has new features, so tests need updates. | Low: every feature change will change test outcomes. | Unknown configuration effects, unreliable infrastructure.
Feature & Contract | High: feature tests are changed only when features change. | High: feature definitions are comprehensive. | Uncoordinated changes in API definitions.
Unit | High: the test outcome should only change when the code has changed. | Extremely high: a unit test always does the same one thing. | None.


We again observe that testing high up in the pyramid leads to high flakiness and poor test outcomes - whereas testing far down in the pyramid creates a higher level of quality control.

A flakiness level of 10% means that out of 10 runs, an average of 1 fails - so if we include a suite of 30 such flaky tests in a build pipeline, every build needs all 30 to pass at once, which happens in only about 4% of runs. We're hardly ever going to get a Green Master - and we just don't know whether there's a software problem or something else going on.
And 10% flakiness in Process Chain tests is not a bad value - I've seen numbers ranging as high as 50%, given things like network timeouts, uncommunicated downtimes, unreliable data in the test database and so on.
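The arithmetic behind both numbers, as a quick sketch:

    # Chance of a fully green run when each flaky test passes
    # independently with probability (1 - flakiness):
    def green_probability(num_flaky_tests: int, flakiness: float) -> float:
        return (1.0 - flakiness) ** num_flaky_tests

    print(green_probability(30, 0.10))  # ~0.042 - one Green Master in ~24 builds
    print(green_probability(30, 0.50))  # ~9e-10 - effectively never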


When we want to rely on our tests, we must guarantee 100% repeatability and reproducibility to prevent flakiness - and the only way to get there is to move tests as low in the pyramid as possible.


Conclusion

In this section, we have covered some of the critical factors contributing to a reliable testing system.
Long story short: we need a testing strategy that moves tests to the lowest levels in the pyramid, otherwise our tests will be a quality issue all by themselves!




Saturday, June 2, 2018

Things that never meant what we understood

We throw around a lot of terminology - yet we may not even know what we're saying. Here are three terms that you may have understood quite differently from what their original authors intended:


1. Technical debt

Technical debt has been used by many to denote willfully taken shortcuts on quality.
Many developers use the term to imply that code has been developed with poor craftsmanship - for instance, lack of tests or overly complicated structure.

Ward Cunningham, the inventor of the term, originally saw Technical debt as a means of learning from the Real World: software built upon today's understanding, incorporating everything we know at the moment, put into use. He took the stance that it's better to ship today, learn tomorrow and then return to the code with tomorrow's knowledge - than to wait until tomorrow before even creating any code!

In his eyes, code should always look like it was consistently built "just today", never even hinting that it had looked different years ago. Technical debt was intended to be nothing more than the existence of things we can't know yet.

Technical debt always implied high-quality, clean code - because that is the only way to incorporate tomorrow's learning in a sustainable way without slowing down.

2. Kaizen ("Continuous Improvement")

Kaizen is often understood as an approach of getting better at doing things.
While it's laudable to improve, many improvement initiatives are rather aimless. Scrum teams especially fall victim to such aimless changes when each Retrospective covers a different topic.

Taiichi Ohno, known as the father of the Toyota Production System which inspired Lean, Six Sigma - and Scrum - stated, "Where there is no standard, there can be no Kaizen".

Another thing that many of us Westerners seem to be unaware of: there's a difference between Kaizen and Kairyo - Kaizen being the inward-focused exercise of becoming the best we can be, which in turn enables us to improve the system - and Kairyo being the exercise of improving the system itself. This, of course, means that Kaizen can never be delegated!

Kaizen requires a long-term direction towards which people desire to improve themselves. Such a direction is often absent in an agile environment - short-term thinking prevails, and people are happy having done something which improved the process a little.

What this "something" is, and how important it is in comparison to the strategic direction may elude everyone. And there's a huge chance that when we consider what we actually want to achieve, our "improvements" might even be a step in the wrong direction.
Have you ever bothered talking about where you yourself are actually heading - and why?


3. Agile

"Agile" is extremely difficult to pinpoint.  It means something different to everyone.
Some think of it as a project management methodology, while others claim "There are no agile projects".
Some think of a specific set of principles and practices, while others state these are all optional.
Some confuse a framework with agile - some go even as far as thinking that "Agile" can be packaged.
Some are even selling a certain piece of software which allegedly is "Agile".

Yet everyone seems to forget that the bunch of 17 people meeting at Snowbird were out to define a new standard for how to better develop software - and couldn't agree on much more than 6 sentences.
Especially in the light of Kaizen above: what do "better ways" even mean when no direction and no standard have been defined?
A lot of confusion in the agile community is caused by people standing at different points, heading into different directions (or: not even having a direction) and aiming for different things - and then telling each other what "better" is supposed to mean.

The Agile Manifesto is nothing more than a handful of things that seemed to be consistent across the different perspectives: It answers neither What, How nor Why.
To actually make meaning from that, you need to find your own direction and start moving.




Sunday, February 25, 2018

The effect of new ideas on learning

"It's always good to learn something new" - or so the proverb goes. Let's examine whether this is true, and how to maximize the impact of our learning.

In this article, I will classify new ideas into two categories: growth beliefs and limiting beliefs.

How our beliefs affect our learning.

To keep this article short, I will make a claim that I'm not going to back up further: Whatever we learn is merely a new belief, a new concept we hold about reality.

Growth beliefs

A growth belief is an idea which accelerates our future learning. Adopting a growth belief has a sustained beneficial effect on our ability to understand the world. Growth beliefs pay off slowly, by slightly broadening our horizon and allowing us to integrate new ideas faster. Thinking in networks, a growth belief is a belief to which further beliefs can easily be attached. The following illustration visualizes the idea:

Upon the growth belief, further ideas are built and can be adopted more easily.
The growth belief acts as a sustainable basis for the growth of ideas.




Limiting beliefs

A limiting belief is an idea which inhibits our future learning. Adopting a limiting belief has a long-term negative effect on our ability to understand the world. Unfortunately, the danger of limiting beliefs is that in the short term, they might give a massive boost to our ability to understand certain things. Again, thinking in networks, a limiting belief has a strong connection to related ideas which are oftentimes adopted immediately - while at the same time, the limiting belief is inconsistent with other ideas which become harder to adopt:

The limiting belief is closely related to similar ideas
 - and inconsistent with other ideas, which we then struggle with or reject.

When conflicts between ideas arise, we usually reject the new idea based on the ideas we already hold. Subconsciously, we find reasons why the new idea is wrong - either by trying to prove how it is deficient, or by trying to prove how our current belief is "superior". We are no longer open to listen to the merit of the new idea, as long as the cost of adopting it feels higher than the cost of rejecting it.

Discernment

Unfortunately, there is no certain way to know upfront which beliefs are growth beliefs and which are limiting. As a general rule of thumb, though: the simpler the explanation an idea offers for a complex problem, the more likely it will turn out to be a limiting belief. Or, in more scientific terms: the explanatory power of an idea is directly related to our long-term understanding of reality. Beliefs which we hold despite little or no explanatory basis are most likely limiting and should be put under scrutiny.

When ideas with higher explanatory power conflict with ideas we already subscribe to, we should examine what really makes us believe the things we already do and potentially discard the ideas we already hold.


Conclusion

It's only good to learn new things if we can either understand where the things we learn inhibit us, or if we have a mindset of letting go of inhibitive ideas. The latter is extremely hard, as beliefs we have already adopted are often strongly interrelated and make up what we see as "real" - so the best thing is to avoid the trap of adopting limiting beliefs wherever we can spot them.

Another thing we need to understand: One person's growth belief might be another person's limiting belief, based on both where the person currently stands and what the person is looking for. When we discover ideas to be truly limiting, it may already be too late to discard them without rejecting a major portion of who we are.

So - be careful what "new things" you learn! It might cost you dearly, years in the future!