Friday, January 31, 2020

Double Queues for faster Delivery

Is your organization constantly overburdened?
Do you have an endless list of tasks, and nothing seems to get finished? Are you unable to predict how long it will take for that freshly arriving work item to get done?
Here's a simple tip: Set up a "Waiting Queue" before you put anything into progress.

The Wait Queue


The idea is as simple as it is powerful:
By extending the WIP-constraint to the preparation queue, you have a fully controlled system where you can reliably measure lead time. Queuing discipline guarantees that as soon as something enters the system, we can use historic data to predict our expected delivery time.

This, in turn, allows us to set a proper SLA on our process in a very simple fashion: the number of items in the system, multiplied by the average service time per item, tells you when the average work item will be done.
This allows us to give a pretty good due date estimate on any item that crosses the system boundary.
Plus, it removes friction within the system.
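
As a rough illustration of that arithmetic (my own sketch, not from the original post, with invented numbers and names), here's how such a due-date estimate could be derived from the controlled WIP and the historically observed average service time per item:

```python
from datetime import date, timedelta

def expected_delivery(items_ahead, avg_service_days_per_item, start=None):
    """Due-date estimate for a newly accepted item.

    items_ahead: everything already in the system plus the waiting queue ahead of it
    avg_service_days_per_item: historical average time the system needs per item
    """
    start = start or date.today()
    # Expected lead time = work ahead of us x average time per item
    return start + timedelta(days=round(items_ahead * avg_service_days_per_item))

# Example: 12 items ahead, the team historically finishes one item every 0.7 days
print(expected_delivery(items_ahead=12, avg_service_days_per_item=0.7))
```

With a stable queuing discipline (FIFO, fixed WIP), this single multiplication is all the "SLA calculation" there is.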

Yes, Scrum does something like that

If you're familiar with Scrum, you'll say: "But that's exactly the Product Backlog!" - almost!
Scrum attempts to implement this "Waiting Queue" with the separation of the Sprint Backlog from the Product Backlog. While that is a pretty good mechanism to limit the WIP within the system, it means we're stuck with an SLA time of "1 Sprint" - not very useful when it comes to Production issues or for optimization!
By optimizing your Waiting Queue mechanics properly, you can reduce your replenishment rate to significantly below a day - which breaks the idea of "Sprint Planning" entirely: you become much more flexible, at no cost!

The Kanban Mechanics

Here's a causal loop model of what is happening:


Causal Loops

There are two causal loops in this model:

Clearing the Pipes

The first loop is one of negative feedback - moving items out of the system into the "Waiting Queue" in front of the system will accelerate the system! As odd as this may sound: keeping items out of the system as long as possible reduces their wait time!

As an illustration, think of the overcrowded restaurant - by reducing the number of guests in the place and having them wait outside, the waiter can reach tables faster and there's less stress on the cook - which means you'll get your food faster than if you were standing between the tables, blocking the waiter's path!


Flushing Work

The second loop is one of positive feedback - reducing queues within the system reduces wait time within the system (which increases flow efficiency) - which in turn increases our ability to get stuff done - which further reduces queues within the system.
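
To make "flow efficiency" tangible, here's a tiny sketch (my own illustration, with hypothetical numbers): it's simply the share of an item's time in the system that was spent actively working on it rather than waiting in internal queues.

```python
def flow_efficiency(active_days, waiting_days):
    """Share of in-system time spent actively working (vs. queued internally)."""
    total = active_days + waiting_days
    return active_days / total if total else 0.0

# An item that was worked on for 2 days but sat in internal queues for 8 days:
print(f"{flow_efficiency(2, 8):.0%}")  # 20% - most of its in-system life was spent waiting
```

Shrinking the internal queues attacks the denominator - which is exactly what this positive feedback loop describes.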

How to Implement

This trick costs nothing, except having to adjust our own mental model about how we see the flow of work. You can implement it today without any actual cost in terms of reorganization, retraining, restructuring, reskilling - or whatever.
If you then limit the work you permit within your system (department, team, product organization - whatever) to only what you can achieve in a reasonable period of time, you gain control over your throughput rate and thus get much better predictability for forecasts of any type.
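
Here's a deliberately simplified simulation sketch of that claim (my own, with invented service times): a single worker processes items in FIFO order; without a limit, everything is "in progress" from day one, while with a WIP limit, items are only admitted when a slot frees up. Finish dates are identical in this toy model - what collapses is the time each item spends inside the system, which is what your SLA clock (and your stress level) sees.

```python
import random

def avg_time_in_system(wip_limit=None, n_items=200, seed=1):
    """Single worker, FIFO, all items requested at time zero.

    Returns the average time (in days) an item spends *inside* the system,
    i.e. from admission into WIP until completion.
    """
    random.seed(seed)
    service = [random.uniform(0.5, 1.5) for _ in range(n_items)]  # effort per item, in days
    clock, in_system = 0.0, []
    for i, effort in enumerate(service):
        if wip_limit is None or i < wip_limit:
            admitted = 0.0                          # pushed straight into "in progress"
        else:
            # Item i is pulled in when item i - wip_limit finishes (frees a WIP slot)
            admitted = sum(service[:i - wip_limit + 1])
        clock += effort                             # completion time of item i
        in_system.append(clock - admitted)
    return sum(in_system) / n_items

print("avg days in system, no WIP limit:", round(avg_time_in_system(), 1))
print("avg days in system, WIP limit 3 :", round(avg_time_in_system(3), 1))
```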



Footnote:
The above is just one of many powerful examples of how changing our pre-conceived mental models enables us to create better systems - at no cost, with no risk.

Tuesday, January 28, 2020

The six terminal diseases of the Agile Community

The "Manifesto for Agile Software Development" was written highly talented individuals seeking for "better ways of developing software and helping others do it." Today, "Agile" has become a 
playground for quacks of all sorts. While I am by no way saying that all agilists are like this, Agile's openness to "an infinite number of practices" has allowed really dangerous diseases to creep in. They devoid the movement of impact, dilute its meaning and will ultimately cause it to become entirely useless.


The six terminal diseases of "Agile"

In the past decade, I've seen six dangerous diseases creep into the working environment, proliferating and being carried in through "Agile". Each of these diseases is dangerous to mental health, productivity and organizational survival:

Disease #1 - Infantilization of Work

"Hey, let's have some fun! Bring out the Nerf Guns! Let's give each other some Kudos cards for throwing out the trash - and don't forget to draw a cute little smilie face on the board when you've managed to complete a Task. And if y'all do great this week, we'll watch a Movie in the Office on Friday evening!" Nope. Professionals worth their salt do not go to work to do these things, and they don't want such distractions at work. They want to achieve significant outcomes, and they want to get better at doing what they do. Work should be about doing good work, and workers should be treated like adults, not like infants.
An agile working environment should let people focus on doing what they came to do, and allow them to deliver great results. While it's entirely fine to let people decide for themselves how they can perform best, bringing kindergarten to work and expecting people to join the merry crowd is a problem, not a solution!


Once we have mastered disease #1, we can introduce ...

Disease #2 - Idiocracy

Everything is easy. Everything can be learned by everyone in a couple of days. Education, scholarship and expertise are worth nothing. Attend a training, read a blog article or do some Pairing - and you're an expert. There's a growing disdain for higher education, because if that PhD meant anything, it would only be that the person has a "Fixed Mindset" and isn't a good cultural fit: Flexible knowledge workers can do the same job just as well, they'll just need a Sprint or two to get up to speed!


And since we're dealing with idiots now, we can set the stage for the epic battle of ...

Disease #3 - Empiricism vs. Science

I've written about this many times - There's still something like science, and it beats unfounded "empiricism" hands down. We don't need to re-invent the Wheel. We know how certain things, like thermodynamics, electricity and data processing work. We don't need to iterate our way there to figure out how those things work in our specific context.

Empiricism is used as an idiocratic answer from ignorance, and it's increasingly posed as a counter to scientific knowledge. Coaches don't merely fail to point their teams to existing bodies of knowledge - they actively question scientifically valid practices with "Would you want to try something else? It might work even better." The numbers don't mean anything - "In a VUCA world, we don't know until we've tried." - so who needs science or scientifically proven methods? Science is just a conspiracy of people who are unwilling to adapt.


Which brings us into the glorious realm of ...

Disease #4 - Pseudoscience

There is a whole range of practices and ideas rejected by the scientific community because they have either failed to meet their burden of proof or failed the test of scrutiny. Regardless, agile coaches and trainers "discover", modify - or even entirely re-invent - these ideas and proclaim them as "agile practices" that are "at least worth trying". They add them into their coaching style or train others to use them. And so, these practices creep into Agile workplaces, get promoted as if they were scientifically valid, and further dilute the credibility and impact of methods that are scientifically valid.
NLP, MBTI and the law of attraction are just some of these practices growing an audience among agilists.


And what wouldn't be the next step if not ...

Disease #5 - Esoterics

Once we've got the office Feng Shui right, a Citrine crystal should be on your desk all the time to stimulate creativity and help your memory. Remember to do some Transcendental Meditation and invoke your Chakras. It will really boost your performance! If you have access to all these wonderful Agile Practices, your Agile Coach has truly done all they can!

(If you think I'm joking - you can find official, certified trainings that combine such practices with Agile Methods!)


Even though it's hard, we can still top this with ...

Disease #6 - Religion

I'll avoid the obvious self-entrapment of starting yet another discussion whether certain Agile approaches or the Agile Movement itself have already become a religion, and take it where it really hurts.
Some agile coaches use "Agile" approaches to promote their own religion - a blog article nominates their own deity as "The God of Agile" (which could be considered a rather harmless case) - and some individuals are even bringing Mysticism, Spiritism, Animism or Shamanism into their trainings or coaching practice!

Religion is a personal thing. It's highly contentious. It doesn't help us in doing a better job, being more productive or solving meaningful problems. It simply has no place in the working environment.



The Cure

Each of these six diseases is dangerous, and in combination, their harmful effect grows exponentially. At best, consider yourself inoculated now and actively resist letting anyone introduce them into your workplace. At worst, your workplace has already contracted one or more of them.

Address them. Actively.

If you're a regular member (manager / developer etc.) of the organization that suffers from such diseases: figure out where it comes from and confront those who brought in the disease. Actively stop further contamination and start cleansing the infection from your organization.

If you're a Scrum Master or Coach and you think introducing these practices is the right thing to do: if this article doesn't make you rethink your course of action, for the best of your team: please pack your bags and get out! And no, this isn't personal - I'm not judging you as a person, just your practice.



Saturday, January 25, 2020

You shape culture - one way or the other!

Culture is the buzzword. Everyone wants a good company culture - but: how do we get that?
In complex cyber-social systems, cause and effect are often hard to separate.

The biggest problem with Culture: it's "self-healing". When someone behaves in ways that are not in line with the existing culture, that culture will "fix" the "problem" by removing the unexpected behaviour, either through assimilation (conforming the person exhibiting that behaviour to existing culture) or ostracization (eliminating the person exhibiting the behaviour from the system).

Hence, changing culture requires constant, active effort until the culture no longer responds to the change like the human body would respond to a disease.

Shaping signals

Culture is shaped through the signals sent by leaders. We can take any of the following stances on any given, newly arising or potential cultural element - which could be a behaviour, idea or even a mix thereof:

"This is not a problem" 


If the element is negative, the signal is: "You may continue". Culture is shaped to accept the element.
If the element is positive, the signal is: "It doesn't matter". It will only prevail if it doesn't conflict with an existing cultural element.

"This is a problem" 

The signal is: "You should not continue". Culture is shaped to eliminate the element.
Do this a few times to a positive element, and you can be certain that it will never pop up again.

"We are looking for this" 


If the element is negative, the signal is "The end justifies the means". Culture will be shaped by those who benefit not only from this negative element, but also from other negative elements which generate similar outcomes.
If the element is positive, there's still going to be a struggle with incumbent negative cultural elements that conflict with the positive element: The message won't stick if clashing negative elements aren't actively discouraged and the positive element reinforced.

Mixed signals

If management is sending different signals on the same cultural element, this can quickly turn into an "anything goes" mindset. People no longer care either way - which is absolutely fatal when positive cultural elements start to get ignored and people learn to take personal advantage of exploiting negative cultural elements.
Sending mixed signals is an absolute no-go: consistency is key!

Culture as a consequence of signals

Hence, to form a positive culture, top down leadership must actively and continuously take a stance:
- Reinforce positive elements
- Reject negative elements

Everything else will eventually breed cultural toxicity.

Feedback Culture

Management needs to respond to feedback, both directly and across hierarchy levels.

No Feedback

The absence of feedback poses a huge risk that culture doesn't turn out as desired - at a minimum, it's already a sign that there's no sound level of transparency.

Conflicting Feedback

When there's conflicting feedback, there's a problem. And that needs to have a root cause, which needs to be explored. There must be a negative cultural element hidden somewhere causing clashes with the desired state.

Negative Feedback

When feedback is negative, then a stronger negative element is overriding the signals - and that element needs to become priority 1 focus.

Positive Feedback

If management receives positive feedback, that needs to be reinforced. The fly in the ointment: How do you know it's honest and unfiltered? Make sure that you hear what you need to hear, not what you want to hear!


Culture as a consequence of feedback

Leaders have the opportunity to deal with feedback in a number of ways. We need to be aware that our reception of feedback is as important in shaping a culture as the way we address the behaviour itself. The way we handle feedback either creates or breaks reinforcement loops.

Negative culture as a result of feedback handling


  • Entirely disregarding feedback encourages a "Free for All" culture where people do whatever suits them best and transparency gets lost.
  • Rejecting negative feedback will lead to confirmation bias, where leaders lose touch with reality.
  • Ignoring positive feedback may lead to existing progress being abandoned as people learn "it's not that important".

Positive culture as a result of feedback handling


  • Acting upon mixed or negative feedback reinforces the idea that "someone cares", which will lead to more efforts put into improving the situation to open a way for the cultural element.
  • Acting upon positive feedback reinforces the cultural element itself.


Down the line

Pervasive top-down leadership is the key to shaping culture - because top management are the only people with the positional power to stop the proliferation of negative cultural elements and to anchor in positive cultural elements.

Top management sets the direction. Their sphere of control over the culture reaches exactly as far as their active involvement in culture. When managers in between send signals that are mixed or that conflict with the message from above, culture in their immediate sphere of control will adapt to their local influence.

Hence, it's essential for top management to ensure consistency of signals both with their immediate staff, as well as across the organization. They need to sense and respond to the signals sent by their staff on every level.

You can't not lead

"As an manager, can't I just remain neutral? I want my teams to self-organize and don't want to impose myself on them!"
The problem with neutrality is: we can't not communicate. 
Not actively acknowledging positive cultural elements sends the signal that these are not important - hence, that's actively "not shaping a positive culture".
The same goes for not actively rejecting negative cultural elements - which is actively "shaping a negative culture".

"Evil triumphs when good men do nothing" (Image source: AZ Quotes)

Make your choice - and take a stance!




Sunday, January 12, 2020

Teams: Slices, components, features - and false dichotomies

There's massive confusion about what a "feature team" is, what a "component team" is - and what a good strategy to proceed looks like. Consequently, many organizations follow the advice of "Agile Gurus" without reflecting on their own reality. Let me shed a little light on the topic.

The flawed model

You've probably seen this kind of model - the idea that "Cross-functional feature teams are able to deliver vertical slices of value". So, basically, you'd design a team where at least one team member can tick the box in each of the quadrants:

Does "Vertical slicing" mean you can do all  things on the horizontal domain on the vertical domain?
If you believe that this means you will have independent teams who can "deliver business value autonomously" - sorry to say, you've been taken for a ride! The mental model is flawed.

Here's why:
The model makes a massive assumption: that a "product" has only these dimensions. That may be a developer's (or IT person's) point of view. But - is that really true?

The hidden third dimension

Yeah, the Business - who cares about the business? Is that even important?
Yes - we forgot the business! There's a hidden third dimension in the model that adds another level of complexity. Tell a lawyer that the Legal domain is "simple and easy to learn", and the response you elicit would probably be quite interesting - the business domain is at least as complex as the technical domain, more so when we're talking about international operations and get into multilateral agreements, cultural and language differences. Indeed, when we look at traditional sales organizations, we realize that just the single domain of sales often gets split along different lines.
Typical Sales splitting lines could be products, channels, markets or regions or customer segments. So, even each of the single business domains could turn out to have multiple sub-dimensions.

In large enterprises, the model is no longer two dimensional, "horizontal or vertical slices", it's multi-dimensional with a potentially incomprehensibly large amount of dimensions!
As such, we're not even making "slices" - the delivery of value would mean that we have to cut across n dimensions!

Defining your Product

Once we realize that we're crossing borders in more than two dimensions, we need to answer the question of "What is the product we're working on?"
Crossing what?
If we define "cross-functional team" as "a single team that can deliver end to end value", we need to be cross-functional in all dimensions!
Let me take, for example - a company that wants to add Widgets to their portfolio.
Widgets need to appear in search engines, on commercials, linked to the Online Shop so that people can buy them - Widget contracts need to be bullet proof both in procurement and fulfilment - Widgets need to get shipped to the buyers, who need to get charged correctly for their Widgets - have the amount collected from their account - and finally, customer service may need to settle disputes on Widget purchases. The simple "Widget" may thus require changes to a whole boatload of technical platforms, across a wide range of business processes - and there is no "end to end customer value" until all of these functions are implemented!

This, of course, begs the question:
Can a single team of developers manage all of this?
If the business processes, technology landscape and development processes are sufficiently simple, the answer may be "Yes".

And what if they aren't?
What if there are independent technical solutions for Online Shopping, retail, B2B sales, wholesale?
What if ERP doesn't happen in the CRM solution? And what if fulfilment is outsourced to a third party? All these conditions are normal in Large Enterprises.

Organizing Teams

Especially when transitioning towards an agile organization, it's important to accept current reality and learn to understand where we are, then move from there.
"You go to war with the army you have -
not with the one you'd wish to have at a later time."
- Donald Rumsfeld
Most traditional organizations are set up to optimize IT for utilization - that is, there are different departments, groups and teams to do fragmented work:


The Classic IT Model

Classic IT is typically specialized into "project groups" where an IT project manager oversees teams specialized in a subset of technical engineering domains, and - depending on project size - also specialized in a subset of technologies.
These teams are cross-functional in no dimension and can not deliver "full slices" of anything. They do piecemeal work and depend on other teams for everything.

From here, we get into the question: "Which direction do we want to go?"


Cross-Functional Development Teams

Scrum, among other models, assumes cross-functional teams - but what Scrum actually means is "cross functional in engineering", that is - analysis, development, test, deployment and operation are done within the same team.
Re-organizing especially smaller projects into one or two cross-functional development teams who have full control over their development process "from requested to Done" across the entire project's tech stack is a simple exercise. Breaking the team boundaries also opens the door to DevOps and thus offers some performance benefits.

Technical Component Teams

The clarification of "technical component" is important for later - because there are also business components.
An alternative approach in highly fragmented organizations is to bring together teams that can do end-to-end work on technical components. Breaking the barriers between analysis, test and development for certain technologies (e.g., a Database) can already be a quantum leap forward.
The downside is that such technical component teams need to constantly communicate and synchronize with other technical component teams to deliver anything that works.

The reason why technical component teams might actually make sense: If we have monolithic components used by multiple business processes and other technical components, then we may have nobody except these component specialists who can actually work with this component.

An example would be a centralized Enterprise Database which serves as single source of truth for Campaign Management, Sales, Customer Service, ERP and Revenue Assurance.
Such components are a pain to work with, but if that's what you have - you need to work with it.
Typically, these technical components become the bottleneck in most development efforts, so regardless of the size of the technical component team, there will always be a lot of coordination, stress and blaming going on.

Business Component Teams

While certainly preferable to technical components, Business Components are the same as technical components, on a different level of abstraction: Consider, for example, an out-of-the-box CRM platform that serves as a central customer database, and provides a frontend for typical business user processes.
While IT may claim that this CRM is a "vertical slice" and "provides end to end customer value", this only works when we have an extremely narrow definition of "end to end", "customer" and "value".

A CRM company can create an entire business model out of providing a standard product with a standard User Interface and standard functions like "create user, administer account, CRUD customer, CRUD product, CRUD order" - so the CRM company can "provide end to end customer value" to their customers, that is, companies buying their solution.

The picture looks different when an enterprise buys the CRM solution and integrates it into their business landscape: a new product is configured in the Product Management Tool, product information is provided to the CRM via API, and the CRM has to offer business insight information via API to ERP, DWH - and even business partner platforms!
In this case, the entire CRM solution is merely a component of a larger ecosystem - making the CRM team not a product team, but a component team - by nothing other than a change of perspective!


End to End Value Delivery Teams

Is it feasible to simply expect someone who was formerly a Java coding specialist for a CRM system to take coding responsibility for the Python product management tool and the ABAP ERP as well? There's no universal answer - yet if the answer is "No", then having business component teams with end-to-end responsibility for a single component's stack and processes is probably already the limit.

Given this scenario, an "end to end value delivery team" would need the ability and expertise to work across a wide range of business processes, technologies and development functions. Using the initial 3D model, such a team doesn't deliver "vertical slices" - it serves "multidimensional cubes"!

While this may be a very fascinating perfection vision, when asking the question, "How do we work today - and how do we want to work tomorrow?" - most organizations are not even remotely at a level where it's a feasible option to immediately regroup into small teams that combine all development, technology and content expertise of the entire company!


The False Dichotomy of Feature Teams

Some agilists have very strong opinions about whether we should do "Feature Teams or Component Teams" - promoting the advantages of "feature teams" and listing the disadvantages of "component teams". Yet, when taking all the previous factors into play, we quickly realize that "feature team vs. component team" is a false dichotomy.

Cross-Functionality vs. Specialization

Feature Teams are Cross-Functional and Component Teams are specialized - that's easy to proclaim. What's assumed, yet never pronounced: "Feature Teams are cross-functional in development process and technology, but specialized in the business content domain."
If we look at development from different perspectives, the statement boils down to "feature teams are specialists from a business perspective - component teams are specialists from a technical perspective." Therefore, "teams are specialists" poses a false dichotomy. All teams specialize in something.

Component work vs. end-to-end customer value

Component Teams deliver piecemeal that needs to be integrated, whereas Feature Teams can deliver end-to-end customer value. Again, we have a hidden assumption: the "customer". If we consider "the customer" to be a project, then a database operations team can well claim to deliver end-to-end customer value: from installation over configuration to access management and service requests, the team does everything.

Would we agree that AWS is a component of software systems? As an Enterprise application developer, yes. As a member of Amazon, working in the AWS development unit, this component is the product! Does Amazon deliver a product with features - or are they delivering piecemeal that needs to be integrated?
We end up at the same problem as before: by assuming a different perspective, one person's "component team" may be another person's "end to end customer value delivery team". Therefore, "teams deliver end to end customer value" is yet another false dichotomy. Depending on how the customer is defined, all teams (or: no teams) deliver end-to-end customer value.

Levels of abstraction

Component Teams work only on a small portion of the Value Stream, whereas Feature Teams can deliver a Feature across the Value Stream. There's another hidden assumption: that the "Value Stream" is an absolute, and has only one definition.

Returning to our AWS example - the AWS platform is merely the technical platform of an Enterprise Platform. Suppose that Enterprise Platform spans a business process, then that platform is just a component of an Enterprise Process. And if that process is part of a Value Stream, then that process is again just a component. And that value stream may be a component of a Value Chain, ... again: component. And if that value chain is used by another company: again: component.

The bigger we perceive the system to be, the more everything we previously considered "end to end" becomes a component on the next level of abstraction. As the complexity of an Enterprise grows, there may be a myriad of abstraction layers. Eventually, it becomes impossible for any single person to even understand how many technical changes need to be made in order to provide "end to end customer value" - or, vice versa, how much new "end to end customer value" can be generated by a single technical change.

Therefore: claiming that a team "delivers end to end customer value" rests on yet another false dichotomy: it assumes an organization without abstraction layers. Only when a single team has direct access both to the low-level tech stack of all components and to the end customers of the value chain - only then will any team ever deliver end to end customer value.



Aligning mental models

In the dispute of "feature teams or component teams", we need to clarify some terminology, concepts and expectations - otherwise, we get nowhere.

Specialization

We have explored in depth the different potential domains of specialization: development specialists, technology specialists - and content specialists. By now adding the question of abstraction layers, we need to also answer the question of layer specialization. The term "full stack developer" assumes a developer working on the tech stack of a single business platform - not a potentially infinite array of business platforms with a potentially infinite array of tech stacks. At some point, the "full stack developer" would become a "Master of (almost) none." - and whether they'd actually be a "Jack of all trades" becomes increasingly doubtful as the stack grows.
We need to agree on what we call "specialization" and in what context we expect "generalization" - otherwise we're potentially talking about "being everything for everyone".

"We can't do everything for everyone everywhere, but we can do something for someone somewhere."
Richard L. Evans

End to End Work

When we talk about end-to-end, we have different concepts, and things become difficult from there.
A really obvious example is the difference in definitions of "lead time". The common Lean understanding is that it's the time between initiation and completion of a process - yet the term gets defined very differently in practice. Most IT project managers would calculate "lead time" as the time between when a development project gets approved and when it gets closed, while the book "Accelerate" defines "lead time" as the time between "code committed" and "code deployed to Production".
Returning from the specific idea of "lead time" - one person may define "end to end" as "from customer to customer", another as "from request to delivery", another as "from development to deployment", and yet another as "from build to production".
We need to agree on the definition of "end to end", to make any discussion around "end to end teams" meaningful.
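
A small sketch (my own, with hypothetical timestamps) of how much the chosen definition matters: the very same work item yields wildly different "lead times" depending on which start and end events you pick.

```python
from datetime import date

# Hypothetical milestones for one and the same work item
events = {
    "customer_request": date(2019, 10, 1),
    "project_approved": date(2019, 11, 15),
    "code_committed":   date(2020, 1, 7),
    "deployed_to_prod": date(2020, 1, 9),
    "project_closed":   date(2020, 2, 28),
}

definitions = {
    "request to delivery":           ("customer_request", "deployed_to_prod"),
    "approval to project closure":   ("project_approved", "project_closed"),
    "commit to deploy (Accelerate)": ("code_committed", "deployed_to_prod"),
}

for name, (start, end) in definitions.items():
    print(f"{name:30s}: {(events[end] - events[start]).days} days")
```

Same item, three honest answers ranging from two days to well over three months - which is why the definition has to be agreed before the discussion starts.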

Components

We have already exhausted the subject of abstraction levels. While Software Vendors deliver certain Products, these products are just components of bigger enterprise architectures. Software Integrators do nothing other than customize and integrate one or more of these "products" into a software landscape. And even an entire array of vendor products, fully integrated into a business process, may be seen as but a component of a larger value stream.
It's irrelevant how many layers of abstraction we have in an organization - everything one layer below is a "component". By adding one abstraction layer, every "product" turns into a "component".
We need to agree on the abstraction level we're talking about, otherwise every discussion around "component vs. product" is entirely futile.

Systems

I have painstakingly avoided the term "system" wherever possible here. The reason: everyone has a different understanding of what "The System" is supposed to be!
When we're talking about "The System", do we mean a piece of software? Do we mean the larger architectural context within which this piece of software operates, its integration context? Or do we mean the development organization developing (and integrating) said software? What about abstraction layers? What would "the system" look like at another abstraction layer?
It's common that a developer means "the piece of code that's running on a server" when they use the term "the system" - whereas a business user might consider a complex, b2b mesh of services encapsulated by a single frontend to be "the system". A systems thinker would abhor both ideas, and would equally include processes, rules and people into "the system".
We need to agree on a common understanding of "the system", lest different understandings generate confusion and misunderstanding.

Complexity

When deciding whether a team is doing simple, complicated or complex work - we're quickly falling into the trap of category errors, because "complexity", like "end to end", depends on the domains we consider. Software development is pretty simple for a single piece of content and in a single technology. As we cross technology boundaries, a simple feature can quickly become a monstrosity of technical complexity - and as we cross content boundaries, possibly even organizational boundaries - even technically simple changes can become infinitely complex.
The problem: Until we have decided the dimensions in which we assume linearity and the dimensions in which we have variation, we do not understand how much complexity we're actually dealing with!
Whereas common sense dictates that "complexity" needs to consider all relevant dimensions, these dimensions can become infinite - making everything so complex that the very word becomes meaningless!
We need to agree to what we call "complexity" - and what we don't.


Bringing the right people together

After much philosophical ado, we can use all of the above to determine how to organize teams.

A team is able to work with minimal constraints if it is:
  1. Appropriately specialized, i.e. it doesn't rely on third parties for knowledge
  2. Doing end-to-end work, i.e. it has no handovers as part of its process
  3. Able to work on all relevant components, i.e. it neither gives nor receives "orders" for component work
  4. Autonomous within the overarching System, i.e. team performance depends on the team and not on outside factors
  5. Feasibly complex, i.e. given the complexity of all relevant domains and the cumulative skills of all team members, there's a realistic learning curve
Will the outcome of these five factors be a "feature team" or a "component team"?
Back to square one - that's a false dichotomy. And it's the wrong question.
In most organizations, it will be a huge struggle and steep learning journey to form such teams, and it's entirely moot to discuss how we label them. 

Before even considering whether we should re-organize, we should ponder whether any of the above five factors is currently the hindering constraint in organizational performance.

If, instead of all of these, the performance constraint rests outside the current teams - for example, in policies and procedures, in politics or processes, in budget or timelines - then it's entirely irrelevant how teams are organized, as even the "ideal feature team" would still suffer from the same constraints.

Conclusion

The entire discussion of "feature teams or component teams" is a red herring.

Reorganizing teams is only relevant if it elevates the current constraint - and the question is not whether it should be a "proper Feature Team" or whether "component teams have dependencies". 
The right question is: "Will the reorganization elevate the constraint on the current system?" - only if that is the case will the new team structure generate any better outcomes than the previous structure!

Hence: Work on your systemic constraints. Bring those people together who can elevate the constraint. Just let people work until the team structure is the constraint! And don't assume it is until you have supporting evidence!

Thursday, January 2, 2020

10 signs you should fire your developers

Good developers are worth their weight in gold, and quite literally so. On the downside, bad software will eventually kill your business. In this article, I will describe - from a management perspective - ten surefire signs that you're better off firing your developers than keeping them.

At the same time, if you're a developer and you see these signs in yourself or your coworkers, today may be a great day to hand in your two weeks' notice and find a proper developer job. You're doing yourself a favor by taking the first step!

The following are epitomes of an organizational culture which will eventually result in a steaming pile of garbage that is both worthless and expensive, rather than software which lets the company flourish. Systems generated in such a culture are parasites - they suck the very life essence out of everyone who has to deal with them, and the longer you let them fester, the bigger the problems will become.


Ten signs you should fire your developers


#1 - Working 9 to 5

Creative work can't be clocked. When developers always start at a fixed time and always drop the pen at the same fixed time, never taking work home, that's a huge red flag. Sometimes, the best ideas happen while jogging, under the shower or even while playing a game. Most work doesn't happen at work - solutions created exclusively on the clock just plain suck.

#2 - Copy Paste Solutions

Software development is creating new solutions and optimizing existing solutions. By copying and modifying the same thing all the time, complexity grows over time and simple changes eventually become extremely difficult. This way of working is unsustainable, and by the time this becomes obvious, it may be economically impossible to change course.

#3 - Need for supervision

Developers need to think by themselves. When developers only do what they have explicitly been assigned to do, only work when someone is checking on them and outcomes need to be monitored before they can be released, you have a serious problem.

#4 - Closed groups

The world is bigger than we think. When developers dig trenches between themselves and the business, can't even give you the name of a single user, much less have a professional relationship with any of them - how can they build the right product? A sense of "stranger danger", where developers perceive every new face as a threat, means that they have already lost touch with reality.

#5 - One Trick Ponies

"If the only tool you know is a hammer, every problem looks like a nail".
When developers see technical diversity as a threat, can only work within a single paradigm and start to frame business problems in terms of what they know rather than expanding their knowledge to suit the business domain, you have the wrong people. Combined with a "my way or the highway" attitude, you can be certain that the money you sink into development exceeds the value generated.

#6 - Learning passivity

Knowledge work is all about knowledge. A continuous hunger for new knowledge allows developers to excel in their field. When developers resist new ideas, need to be told when they need to learn something new and shy away from experimenting, they lose their edge - and so does your software!

#7 - Limits to responsibility

Great people care for what they do, and want their work to make an impact. Statements like "Works as Designed", "Works as Requested", "Quality is a tester's job" or a pervasive mindset that users are just too stupid to use the software properly imply that developers have locked themselves into an ivory tower and try to shelter themselves from reality.

# 8 - Frameworks frame thoughts

If there were a standard framework for your business, someone else would be doing it better and cheaper than you could. Be very suspicious when the answer to every question is, "<this framework> can do it", especially when the solution always looks like the framework says it has to. Once developers have a standard of standards for developing new solutions, you know they're solving the wrong problems.

#9 - Enamored with code

Nobody cares what happens "under the hood", as long as the engine runs. When developers esteem good-looking code higher than the business utility of said code, they waste time and money. Combined with an attitude that the code would look much better if it wasn't for constantly changing business demands, you know that eventually, the house of cards will collapse - and your business might tumble down with it.

#10 - Caught in the past

The marginal value of every technology is zero. Yesterday's great solution is often just barely useful today, and last decade's state of the art is today's liability. If your newest technology is half a decade old, and the "never change a running system" mindset pervades the organization, you're already riding a dead horse. If, on top of that, developers feel emotionally attached to "their baby" and aren't willing to let go, you'll be stuck between a rock and a hard place.


Toxic culture

Encouraging and reinforcing the behaviours above creates a devastating culture. If you encourage behaviours that fall into any of the above ten categories, you are creating a culture where high performance isn't even possible. You may likewise inadvertently be shaping culture by not stopping these behaviours, not calling them out or just tagging along with them. As such, it's always a people problem, that is - a problem created by management.
You can't turn a blind eye to these. You have to call them out, you have to stop them, and you have to actively work against them. There's a reason why people do these things - and rarely is the reason that people want to hurt the business or their own career. Explore the root cause, and eliminate it!

Organizations where development works as above are career dead ends. They take the purpose out of the work and make IT as a whole a massive liability. The faster developers leave such a place, the better it is for them. If they no longer have the will to take this step by themselves, do them a favor and help them move towards a better future.



Sunday, December 29, 2019

Telemetry Canvas - figuring out the right metrics

To create transparency for the key information about your company, your system, your product - you need to align your metrics. Here is a simple canvas that can help you sort your thoughts and start your journey to data driven decision making:


The Telemetry Canvas




The canvas is simple to understand. There are two main dimensions:

Events

In an IT platform, there are mainly two types of events: Those created, conducted, orchestrated, managed or performed by automation - and those performed by humans (platform users).
Anyone whose work is affected by events should have a say in defining the most relevant items for their work.

Technical events

Everything the system and/or platform does by itself, or in support of user activity, is a technical event. We can measure technical factors such as incoming or completed transactions, inventory levels et cetera. Of course, we can categorize these by transaction type, technical component or business scenario, depending on what we are looking for.

Quality checks, build failures or network alerts are also technical events that occur frequently, and require attention.


User actions

Whatever users do may also be relevant to the performance of our organizations. If our goal is to grow our userbase, new registrations are a great metric to look at. On a marketing campaign, the next logical extension would be lead conversions. Or trending products. We might also look at revenue generation - and whatever has an impact on things we care for.
Even abandoning our product is relevant to our business performance - it can be classified as "action by inaction" and still counts as a user action.


Business outcomes

Events by themselves are meaningless. They derive their meaning by their impact on our business.

Good for business

We are looking for certain events, such as the successful start or completion of a transaction or the generation of revenue. Many of these events fall into the category of "The more, the merrier". The best events are those that cause no work, yet generate profits.

Bad for business

Some events are always bad news, for example complaints, technical errors or system outages. Even if nobody likes to have these, they are part of working reality, and we need to pay attention to the effort we sink into them.
In many organizations, the invisibility of the "bad news" metrics on the radar causes the organization to accumulate technical debt that may eventually kill the product or even the entire company!
The best businesses aren't those that successfully ignore the bad news - they're those that know they have less bad news to handle than they can stomach!

Deriving metrics

Once we know which events we're looking at, we can determine how we measure them.
For example: when a transaction arrives in the system, we also want to know when it is completed. We measure not just our arrival rate and inventory - we need to know the throughput rate as well. This gives us visibility into whether we're accumulating or reducing backlog, i.e. whether we're sustainable or unsustainable!
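
As an illustration (my own sketch, with a made-up event log and event names), here's how a throughput view and a backlog trend could be derived from raw arrival and completion events:

```python
from collections import Counter

# Hypothetical event log: (day, event_type)
log = [
    (1, "transaction_received"), (1, "transaction_received"), (1, "transaction_completed"),
    (2, "transaction_received"), (2, "transaction_completed"), (2, "transaction_completed"),
    (3, "transaction_received"), (3, "transaction_received"),
]

arrivals    = Counter(day for day, ev in log if ev == "transaction_received")
completions = Counter(day for day, ev in log if ev == "transaction_completed")

backlog = 0
for day in sorted(set(arrivals) | set(completions)):
    backlog += arrivals[day] - completions[day]
    print(f"day {day}: arrived={arrivals[day]}, completed={completions[day]}, open backlog={backlog}")
# A backlog that only ever grows is the data telling you the pace is unsustainable.
```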


Optimization

Once we have defined our metrics, we can set optimization goals. Some events are good for our business, others are bad. The general optimization direction is either "lower is good" or "higher is good". In rare cases, we have range thresholds, where neither too high nor too low is desirable.

The easiest way is to start by capturing data on the current state of a metric, then answering the question: "Is this a problem? If so, how big is it?" - determining whether the current value is good, acceptable or unacceptable.
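
A minimal sketch of that classification (my own, with placeholder thresholds), covering "higher is good", "lower is good" and range metrics:

```python
def assess(value, good, acceptable, direction="higher"):
    """Classify a metric reading as good / acceptable / unacceptable.

    direction: "higher" (more is better), "lower" (less is better),
               or "range", where good/acceptable are (low, high) tuples.
    """
    if direction == "range":
        if good[0] <= value <= good[1]:
            return "good"
        return "acceptable" if acceptable[0] <= value <= acceptable[1] else "unacceptable"
    if direction == "lower":
        value, good, acceptable = -value, -good, -acceptable
    if value >= good:
        return "good"
    return "acceptable" if value >= acceptable else "unacceptable"

print(assess(1.8, good=2.0, acceptable=1.0))                              # conversion rate %, higher is better
print(assess(7, good=2, acceptable=10, direction="lower"))                # open incidents, lower is better
print(assess(72, good=(60, 80), acceptable=(50, 90), direction="range"))  # capacity utilization %, range
```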



Using the Telemetry Canvas

The canvas is a discussion facilitation tool, so don't use it on your own.

Step 1: Invite stakeholders

Bring the stakeholders in your product together - preferably not every single person, but representatives from each group. This is a non-exhaustive list of people you might want to involve:

  • Salespeople, who generate income from the product
  • Marketeers, who drive the product's growth
  • Finance, who validate the product's revenue
  • Developers, who build the solution
  • Operations, who have to deal with the live system
  • Customer Service, who have to deal with those who bought it
  • UX, who design the next step
  • Legal, who definitely don't like to have trouble with the product

The more of these functions rest within your team, the easier this exercise will be - although typically, most will be located somewhere else in the organization.

Step 2: Brainstorm events

Give everyone the opportunity to draft up events that are important to their work. There is no "right" or "wrong" at this stage, and there are no priorities, either.
It's important to remember that not all events occur within the platform, some occur around the platform, and that some events can also be caused by inaction.

Get people to write each event on sticky notes.

Step 3: Locate events on the matrix

People tend to have a pretty good understanding whether an event is good or bad, so where to place the event on the vertical should be easy. In some cases, it's unclear whether an event is good or bad - then default to "Bad", because every event means data processing and work, and work that's not good is probably a bad thing.

Likewise, define the horizontal category. In some complex systems, it's unclear whether it's a user action or a technical event. Try defaulting to "user action" - you haven't discussed where to get the data from, anyway.

Step 4: Define measurement systems

As events themselves are of no value, we need to define the measurements that we want to derive from events. These can also be combination metrics, such as "Lead Time" or "Inventory Growth". What matters is that everyone in the room can agree on what would be measured.

Write each of the measurements onto post-its and put them into the field corresponding to one of the event(s) they rely on.

Step 5: Prioritize

Not all metrics are equally important. Let each stakeholder name up to three metrics that matter to them - you still need to put work into setting up data collection, and it doesn't help to have five hundred things on your "to do" list.
This is not a point-based system - it's not about dot-voting - so you end up with a set of individual priorities.
Although it's good if multiple stakeholders value the same metrics, since that reduces complexity, it's not necessary that stakeholders agree on the value and importance of metrics.

Step 6: Validate

You should have a number of metrics in each quadrant now. If you're missing one of the quadrants, your measurement system is probably biased. Should that be the case, ask, "What are we missing?" Try reprioritizing metrics until you have at least two in each segment.
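
A small sketch (my own naming and sample data) of that validation step: group the prioritized metrics by canvas quadrant and flag any quadrant with fewer than two entries.

```python
from collections import defaultdict

# Hypothetical prioritized metrics: (metric name, event kind, business outcome)
metrics = [
    ("new registrations per week",  "user action",     "good for business"),
    ("lead conversion rate",        "user action",     "good for business"),
    ("abandoned checkouts",         "user action",     "bad for business"),
    ("completed transactions/day",  "technical event", "good for business"),
    ("build failures per week",     "technical event", "bad for business"),
]

quadrants = defaultdict(list)
for name, kind, outcome in metrics:
    quadrants[(kind, outcome)].append(name)

for kind in ("technical event", "user action"):
    for outcome in ("good for business", "bad for business"):
        entries = quadrants[(kind, outcome)]
        warning = "" if len(entries) >= 2 else "  <- fewer than two metrics: likely a blind spot"
        print(f"{kind} / {outcome}: {len(entries)}{warning}")
```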

Step 7: Agree and align

Get everyone to agree that they have their most important metrics on the canvas. Address potential concerns. If necessary, reiterate that this is neither intended to replace current measurement systems nor a final version - it's just the beginning of a journey towards alignment on data transparency.

Step 8: Invite for follow-ups

Once the metrics are agreed, let everyone know that there will be separate sessions to define the metrics in more detail, that is: how the data will be collected, how it will be interpreted and how it will be represented. This consumes more time and isn't interesting in full detail for everyone.

Step 9: Agree on Next Steps

The Canvas is ready, but it's just a canvas. Make sure you have an action plan of what will happen next. Here's what I suggest:
1. hold follow-up sessions to define the metrics,
2. do some implementation work to measure them,
3. present the metrics in a Review,
4. start using the available metrics in decision making,
5. inspect, adapt and improve.

Joint Metrics - aligning business and development

"How do we bring business and development closer together?" - the key is to create transparency into what others see.

Start the discussion about what you should make transparent, so that everyone can draw the same conclusions. Making both technological and business information visible to everyone in real time will help you cut down a lot of pointless discussions about the best course of action.


Everybody is right!

Every person has their perspective of what is the most important thing, and often, each perspective is valid. For example, a technical person will consider that technological stability and high quality code are important. Salespeople care for neither - they want to close as many good deals as fast as possible. People in service support feel stuck with tons of trouble tickets, social media marketers want campaigns to go viral. And the CEO just wants a smooth, expansive operation.

A developer can only use their time once. So - what should they focus on? How can a Product Owner know whether it's more important to boost sales or to fix defects?

Classic HIPPO Prioritization

Most organizations prioritize activities like this: Either we do the thing demanded by the person who shouts loudest, or the Highest Paid Person will give their opinion on what should be done ("HIPPO Priority").

Unfortunately, neither the people with the loudest voice, nor those with the highest paycheck, tend to have a full understanding on the implications of their demand. Follow the HIPPO, and everyone else will be unhappy. Disregard the HIPPO and risk being laid off.
Either way, the organization gets stuck under the tyranny of the Urgent - shifting attention between a series of disasters and escalations to fix.
There is no freedom to think of the Big Picture, maximize business value or consider what will happen a few years down the line.

A systemic view is needed

In a healthy organization, developers in their right mind wouldn't want sales to fail - and marketing wouldn't want the technology to fail. They are often simply unaware why their personal goals have such drastic consequences elsewhere!

The solution requires overarching transparency, laying all the cards on the table.
 In most modern organizations, there is some kind of data that people utilize, yet everyone gathers their data from a different source and interprets it in their local context. This is not to advocate a central Data Warehouse, Master Data Management or a specific data representation tool here - the problem can't be fixed technologically: A sysop would look at logfiles, developers at source code, sales at transaction records and marketing at campaign information.

None of the data is related on the surface. Yet, in a closed system, all of these are sides of the same coin (probably, an "infinity dice" would be a more applicable metaphor).

Breaking the local optimization

Every stakeholder can define metrics for their specific area of expertise. Sales is very adept at defining what is great, what's okay and what is intolerable when it comes to closing deals. Hence, it's very easy for them to define a metrical system that creates overall visibility into how healthy sales are. Developers can do the same for their systems, finance for revenue - and so on.

And then we lay all of this information on the table and make it visible to everyone. When everyone has a say in what we're looking at, we get an objective view of whether we're doing great, alright or meh in the big picture.

Bringing the puzzle together

Imagine you log into your company account - and the first thing you see is where your company is doing great - and where it just plain sucks. From the customer service rep, all the way to the CEO, from technology to business, everyone will have at their fingertip the information where your biggest strength and where your biggest weakness is.

It will become very intuitive and easy to make key decisions, and even when diplomatic compromise is needed, people will at least understand the impact of their choices.

Leaving the hamster wheel

Many organizations are challenged to break free from the hamster wheel of tasks and activities. Product evolution is often nothing more than putting band-aids on cracked pavement. Systems are fundamentally broken, because it's all about meeting short-term goals, and rarely about larger long-term beneficial change. The future gets sacrificed for today's needs.

Planning strategically

We need to figure out where we're constantly fire-fighting, where we're in calm waters, and which problems correlate. We can use the transparency of the data to introduce measurable strategic objectives, such as, "reduce technical debt from 50000 years to 100 years", or "increase conversion rate from 1.2% to 2%". It's totally valid to have multiple strategies with multiple objectives and multiple targets in place at the same time.

Building empathy

Another great advantage of building a common metric portfolio for everyone in the organization is that we start to get empathy for one another. Developers see when sales is struggling, marketing realizes when development is drowning. The discussion moves away from "What do I need next?" towards "Who has the biggest problem and how can we contribute to improvement?"






Friday, December 13, 2019

The nonsense called "Enterprise Agile Development"


The "Manifesto for Agile Software Development" is the basis for many "agile approaches". Enterprises worldwide have been sold to the idea that they need to become "Agile" in order to remain competitive. And anecdotal evidence of the success of "Agile" is abundant.

There's a dirty little secret that most organizations, coaches and consultants are either unaware of, don't understand - or they just don't realize the impact thereof: "Agile", as originally proposed, is a local optimization, intended to improve the work of software developers! This begs the question: "What if Software Development isn't even the problem?"

Synopsis

If we look at organizational processes from end to end, we realize that even if Product Development were a flawless, instantaneous activity, not much would be different in the big picture. Why do we spend so much time and energy to make irrelevant changes?
95% of changes made by management today make no improvement. -Peter Scholtes
"Agile Transformation" is often one of these ineffective changes.
To find a better solution, we need to frame the problem differently.

The scope of "Agile"

"Agile" is intended to bring the entire IT development organization together. It's irrelevant whether we're talking Scrum, Crystal, XP or whatever, this idea is fundamental to "Agile".
From the time a "requirement" (or: "user story" - or whatever) is scoped for delivery until it's delivered, people from IT collaborate, involving business stakeholders, to minimize cost, overhead, quality risks and delay in the process.

Why "Agile" looks so appealing!
It sounds very appealing to any IT manager worldwide to reduce defect rates, cost and cycle times by fifty percent or more each while making IT employees happier - especially since those benefits actually are achievable compared to siloed approaches!

But what if I told you that none of this matters at enterprise level?

The real picture

In a 100,000+ person enterprise, there are hierarchies, reporting lines, budgets and highly complex dynamics at play. Not everyone talks to everyone else.
In an enterprise context, it's fairly normal that from idea until approval, different rounds of experts, stakeholders and boards are involved - long before an item lands on IT's actual to-do list.

The idea goes through some kind of process in which people who are not involved in actual development clarify what's actually going on, decide whether it's worth doing, and make sure it will get done.

Scrum specifically has a "Product Owner", and Scrum proponents might argue that this is "refinement", for which the Product Owner is responsible - but in an enterprise, that's too much work for one person, so it gets delegated: we end up with handovers in the process, multiple queues and waiting lines.

This picture depicts what's actually going on in most larger organizations:

Careful - "Agile" doesn't even talk about the big picture!

The lead time of the yellow chevrons, in a Waterfall world, tends to be 3-6 months, and in an Agile universe, it's much shorter. 

Let's do a thought experiment: What if the yellow part of the process were conducted by a little fairy with a magic wand that could complete all these activities at zero cost, in zero time and without any defects? 

How agile, how fast, how cheap would this process now be?
In large enterprises, each of the gray chevrons takes weeks to months, is quite expensive and error-prone. Considering the end-to-end process chain, even if we disregarded everything happening in the "Agile" parts of the process, we'd still have a slow, inflexible and expensive process!
The overall effect of introducing "Agile Software Development" into enterprise processes is negligible.
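A quick back-of-the-envelope calculation illustrates the point. The durations below are assumptions for the sake of illustration, not measurements of any real process:

    # Thought experiment in numbers - all durations are assumed for illustration.
    upstream_weeks = [6, 4, 8]       # clarification rounds, expert reviews, approval boards (gray)
    development_weeks = 4            # the "Agile" software development part (yellow)
    downstream_weeks = [3, 5]        # user training, compliance audits, rollout (gray)

    today = sum(upstream_weeks) + development_weeks + sum(downstream_weeks)
    with_fairy = sum(upstream_weeks) + 0 + sum(downstream_weeks)    # magic wand: development takes zero time

    print(f"end-to-end lead time today:  {today} weeks")            # 30 weeks
    print(f"with zero-time development:  {with_fairy} weeks")       # 26 weeks
    print(f"overall improvement:         {100 * (1 - with_fairy / today):.0f}%")  # ~13%

Even the magic fairy - development at zero cost, in zero time, with zero defects - moves the end-to-end needle by little more than a tenth in this example.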

End-to-end Agility

Now, we get to the revolutionary idea of simplifying the entire process by means of moving to "Business Agility": bringing developers straight to the users!

A paradigm shift: entirely removing the men-in-the-middle!
Once an item is prioritized, we jump straight into development. There's just a small fly in the ointment: how do we prioritize it? In order to determine whether it's the most valuable thing, it still needs to be refined - so the same process still applies!

The problem, unfortunately, in the Enterprise world, is not even whether we have a handover in the process between a "Refinement Team" and a "Development Team". The real problem is overall lead time!

As long as it takes weeks to do the clarification rounds, the umpteen mandatory boards only meet once in a blue moon, every new feature requires comprehensive user training and there are manual compliance audits, there is no way for a change in development to significantly affect overall outcomes!
Unless you're part of the problem, you're not part of the solution.

The illusion of improvement

When overall processes retain their massive overhead, and communication structures remain untouched, Agile Development alone is irrelevant to the overall business outcomes. Even if we get it right - which is extremely hard in the governance straitjacket so nicely provided by many organizations - the enterprise as a whole will not feel much of an impact from the introduction of "Agile Development" into a cluttered, overcomplicated jungle of processes, rules and regulations.

If we are looking for significant improvements to the enterprise, Software Development is often not the right place to look - yet agilists have spent nearly a decade developing increasingly intricate ideas for optimizing exclusively this part!

Moving Enterprise Software Development from "Waterfall" to "Agile"  is like using white sand instead of yellow sand to mix concrete - we feel as if we made a difference, while an outside observer wouldn't recognize any change!

And that's why enterprises that have invested heavily into "Agile Software Development" often feel underwhelmed with the outcome and become entirely disillusioned.

A change in mindset

"Agile" brought a massive change in mindset, it gave developers a voice - and it's people from Technology who are now educating the business world how to change their ways of working, in order to take advantage of Agile processes.
Unfortunately, we see that many agilists are stuck in their little Technology world, not realizing that optimizations made exclusively from a Technology perspective alone are exactly as ineffective - and potentially equally harmful - as those made by Finance (namely, Cost Accounting - but that's another story, to be told another time).

Much more important than doing an "Agile Transformation" and converting legions of decently functioning teams with decently performing individuals to "Agile ways of working" would be to look at the real problem the organization is facing - and it's not a development problem. It's a flow problem in end-to-end value delivery.

I believe that not just agilists, but struggling organizations as a whole, will find significant value in moving beyond the new structures and processes suggested by Agile Frameworks and tackling the real issues their organization is facing - issues that remain hidden in plain sight when adopting Agile Frameworks.

In the past years, I have come to realize that we need to move beyond IT, beyond Technology, beyond Product Development, to see the need for a Unity of Purpose, optimization at the bottleneck and relentless improvement. Try not making any change to anything other than the bottleneck: Development can work however it sees fit, unless and until that's where the bottleneck is.


Closing remarks

Let's just not make changes that don't matter. Too much harm and grievance has been inflicted upon managers, developers and entire organizations in the name of this "better way of working". So many "Agile Transformations" caused great people to lose their job, their sanity or the respect they deserve, while giving the organization nothing to show in return. It's time to stop this madness.

Let's put Agile Frameworks where they perform best: into Product Development. Let's not try to stretch their purpose beyond their applicability, and let's not overzealously convert everyone to one specific way of working. Give people the freedom to work in whatever way they consider best for them. Offer Scrum, Kanban, SAFe or LeSS as options if that makes people happier - but don't force anyone into any of this, especially not when development isn't even the problem.

Thursday, November 7, 2019

The value of idle time

The Product Backlog is a mandatory part of Scrum. Together with the Sprint Backlog, it defines both the planned and upcoming work of the team.
There's a common assumption that it's good to have a decently sized Product Backlog, and as many items in the Sprint Backlog as the team has capacity to deliver. Let's examine this assumption by looking at a specific event.



The "no backlog" event


It was Tuesday evening. I had just put a busy day behind me.
I was chilling, browsing the Web, when I received a message on LinkedIn. The following conversation ensued:




Mind Lukasz' last statement: "The most impressive customer service ever".
Why was this possible?

Had Lukasz contacted me half an hour earlier, this dialogue would never have happened. Why? Because I would have been busy, doing some other work. Lukasz would have had to wait. His request would have become part of my backlog.

Service Classes

There's a lot of work I am pushing ahead of me on a day-to-day basis.
But I classify my work into three categories:

  1. Client related work - I try to cap the amount of client related work, to maintain a sustainable pace.
    It's a pretty stuffed backlog where things fall off the corners every day.
  2. Spontaneous stuff - I do this stuff as soon as I see it, because I feel like doing it.
    The hidden constraint is that "as I see it" depends on me not being engaged in the other two types, because these take 100% of my attention.
  3. Learning and Improvement - That's what I do most of the time when not doing Project work.
    I consider web content creation an intrinsic part of my own learning journey.

These categories would be called "service classes" in Kanban.
I am quite strict in separating these three classes, and prioritize class 1 work over everything else.
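For the Kanban-minded, here's a rough sketch of that policy in code. The names, the daily cap and the dispatch rules are hypothetical simplifications of what I just described, not a tool I actually use:

    # Rough sketch of the three service classes and their strict separation.
    # The names, the daily cap and the dispatch rules are hypothetical simplifications.
    from collections import deque

    DAILY_CLIENT_CAP_HOURS = 5        # class 1 is capped to keep a sustainable pace
    client_backlog = deque()          # class 1: the managed (and usually stuffed) backlog

    def next_activity(client_hours_today, spontaneous_request=None):
        # Class 1 (capped) is prioritized over everything else ...
        if client_backlog and client_hours_today < DAILY_CLIENT_CAP_HOURS:
            return ("class 1", client_backlog.popleft())
        # ... class 2 only gets a chance when nothing else holds my attention ...
        if spontaneous_request is not None:
            return ("class 2", spontaneous_request)
        # ... and class 3 fills whatever time remains.
        return ("class 3", "learning & improvement")

    print(next_activity(client_hours_today=5, spontaneous_request="LinkedIn message"))
    # -> ('class 2', 'LinkedIn message')

The interesting branch is the middle one: a spontaneous request is served immediately whenever the system happens to be otherwise idle.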

Without knowing, Lukasz hit my service class 2 - and during a time when I was indeed idle.
Since class 2 has no managed backlog, I got to Lukasz' request right as it popped up - and hence, the epic dialogue ensued.

Service Classes in Scrum

If you think of the average Enterprise Scrum team, class 1 is planned during Sprint Planning, and class 2 activities are undesirable: all the work must be transparent in the Sprint Backlog, and the SBL should not be modified without consent of the team, especially not if this might impact the Sprint Goal.

Many Scrum teams spend 100% of their capacity on class 1 work, going at an unsustainable pace, because the class 3 work that would future-proof their efforts is constantly descoped.
Even if they plan for a certain amount of class 3 work, that is usually the first thing thrown overboard when there's pressure to deliver.

The importance of Spontaneity

Few Scrum teams take care of class 2 work, and Scrum theory would dictate that it be placed in the Product Backlog. This just so happens to be the reason why Scrum often feels like drudgery and developers grow uncomfortable with practices like Pair Programming.

"Spontaneous stuff" is a way to relax the mind, it helps sustain motivation and being totally uncommitted on outcomes allows creativity to flourish.



Load versus Idle Time

As mentioned, class 1 is bulk work. As workload increases, the percentage of class 1 activity quickly approaches 100%. Since the remaining capacity also has to cover class 3 activity, increasing load quickly drives idle time towards zero.

As I already mentioned, idle time activity creates magic moments, both for team members and customers - so high load with zero idle time destroys the "magic" of a team.
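In numbers - a deliberately simple model with assumed figures, just to show how quickly the slack disappears:

    # Simple model with assumed numbers: how idle/slack time vanishes as load grows.
    capacity_hours = 40                              # one person-week
    for class1_demand in (20, 30, 38, 40, 45):
        class1_hours = min(class1_demand, capacity_hours)
        slack_hours = capacity_hours - class1_hours  # what remains for class 2 and class 3
        print(f"class 1 demand {class1_demand:>2}h -> "
              f"class 1 share {class1_hours / capacity_hours:4.0%}, slack {slack_hours:>2}h")

Past roughly 95% load, there is simply no room left for spontaneity or learning.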

Wait Time Idleness

One source of Idle Time is Process Wait Time.
In a Lean culture, wait is seen as detrimental waste. This is both true and false. It is true when the organization doesn't create value during wait, while incurring costs. It is false when this wait is used to generate "magic moments".

Buffer Time Idleness

Both Scrum and Lean-Kanban approaches encourage eliminating idle time, as do the common "agile scaling" frameworks. Team members are constantly encouraged to pull the next item or help others get work in progress done faster.
This efficiency-minded paradigm only makes sense if the team controls the end-to-end performance of the process, otherwise they might just accumulate additional waste. Theory of Constraints comes to mind.

On the other hand, buffer removal in combination with a full backlog disenchants the team - there will be no more "magic moments": Everything is just plan, do, check, act.


Idle Time and Throughput

The flawed assumption that I want to address is that buffer elimination, cross-functionality and responsibility sharing would improve throughput. Maybe these will increase output, but that output will still be subject to the same full lead time as any other activity.

Backlogs vs. Idle Time


Genuine idle time means that the input backlog currently has a size of zero, and parallel WIP is zero as well. There is no queue: neither work-in-progress nor work-to-do.
An idle system doesn't require queue management. When idling, the lead time for the next request is exactly its work time - the fastest response we could hope to achieve. This kind of responsiveness can look absolutely mind-boggling in comparison to normal cycle times.
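A crude single-server FIFO model makes the contrast visible - the service time and queue sizes below are assumed, and this is deliberately not a full queueing-theory treatment:

    # Crude single-server FIFO model - service time and queue sizes are assumed.
    def lead_time_for_next_request(items_waiting, avg_service_days):
        """Time until a request arriving right now is finished."""
        return (items_waiting + 1) * avg_service_days

    print(lead_time_for_next_request(items_waiting=0, avg_service_days=0.5))   # idle system: 0.5 days
    print(lead_time_for_next_request(items_waiting=20, avg_service_days=0.5))  # stuffed backlog: 10.5 days

The idle system responds twenty-one times faster - that is the difference an arriving customer actually experiences.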

The impact on organizational design

A perfect organization deliberately keeps idle capacity at the points that maximize responsiveness - it does not chase efficient utilization by avoiding idle time.

Summary

The conversation with Lukasz is an example of the benefits of having idle time in your work.
This kind of idle time allows for "magic moments" from a customer perspective.

Just imagine an organization where "magic moments" are the norm, and not the exception.
This requires you to actively shape demand: when demand stays at or below capacity, we can eliminate backlogs.
Demand queues destroy the magic.

Eliminate the queues. Make magic happen.


Wednesday, November 6, 2019

Scrum is setting you up to fail!

The number of debates where agilists claim, "But Scrum already addresses <this topic>!" - and then proceed to quote a sentence, or even a single term, from their framework's rules - is staggering. The phrase "we need to be pragmatic, and Scrum is idealistic" heats up the debate further.

My take: 
In some cases, frameworks like Scrum are helpful. By themselves, however, they aren't. They provide no helpful guidance and rely on the strong assumption that the solutions to an organization's core problems already exist within the team's sphere of control.
This assumption is borderline insane, because people wouldn't need a rule or framework for something they already know how to do.

Even in regard to my article about demand, I got replies like, "Scrum does address the issue. That's what you got a Product Owner for." and "SAFe uses the term 'Demand Management' at Portfolio level, therefore SAFe has a solution." - I say that this is about as helpful in practice as stating, "We have the cure for cancer already. That's what scientists are for: they even use the term cancer research."
Yes. And: What exactly is the solution to the problem beyond assigning responsibility or attaching a label somewhere?

Let's focus on Scrum, just to be talking about something specific.
In all fairness, many Scrum practitioners state, "Scrum doesn't solve your problems, it only highlights them" - which is my answer to everyone who claims that "Scrum does address this already." Maybe you get a label. You don't get a solution. Scrum itself has no helpful answers, not even the hint of a direction.

Scrum's dangerous assumptions

Scrum makes a lot of strong assumptions. Most of the time, these assumptions are just not valid and will cause a Scrum adoption to shipwreck.
The following are all examples of conditions that Scrum simply assumes to be in place:

No blocking organizational issues

Scrum can only work when the surrounding organization is at least basically compatible with Scrum. Scrum's assumption is that you are well aware of how to ensure that:
  • Organizational processes are fundamentally compatible with agile development
  • A meaningful portfolio strategy exists
  • Demand funneling "somehow works"
  • Individual incentive schemes don't get in the way of team or organizational goals
  • The organization improves where it matters
  • You have stable teams
And what if not?

Unproblematic contracts

Scrum teams must operate in an environment where developers and customers share common goals, and developers are contractually enabled to maximize organizational value. Scrum assumes that you have a contract situation where:
  • There is no functional split between different organizations (e.g. outsourced manual test - or worse, outsourced users)
  • Financial incentives encourage optimizing around value rather than activities
  • The team meets all legal requirements to deliver all required components
  • The development organization benefits from producing better / more / faster outcomes
And what if not?

People get along

Scrum assumes people can and will communicate with a goal to create value.
You have to know by yourself how to achieve the following states:
  • No communication gaps where significant information gets lost
  • Stakeholders care and show up to provide essential feedback
  • Managers understand and avoid what demotivates the team
  • People have a sufficient level of trust to raise issues and concerns
  • When all things fail, people focus on learning and improvement, avoiding blame.
And what if not?

Development issues

Since its inception, Scrum has removed all aspects of technical guidance. As such, there's now the hard assumption that:
  • Teams have the necessary skills to produce a "Done" Increment
  • Teams know about quality engineering practices
  • The team's software isn't a steaming pile of ... legacy
  • Teams are able to produce a meaningful business forecast
  • Teams can cope with technology shifts
And what if not?


The danger of these assumptions

To assume that none of these problems exist is idealism - make these assumptions, and you will shipwreck.
Assume that you can safely operate Scrum while multiple of these problems exist, and you're also going to shipwreck.
Assume that attending a Scrum training course equips you to take on this gorilla, and you will shipwreck as well.

To assume that Scrum has a solution to any of these problems is false hope or snake oil, depending on perspective. Scrum assumes that they have already been solved - or at least, that you know very well how to solve them. Scrum tackles none of them.


What if not

The Scrum Guide has no guidance on any of these topics, as all of these problems are assumed to be manageable and/or solved in a Scrum context.
Where these problems are significant, Scrum isn't the droid you're looking for.

Saturday, November 2, 2019

Health Radars are institutional waste!

There's a recent trend that organizations transitioning to agile ways of working get bombarded with so-called "health checks" - long questionnaires that need to be filled in by hundreds or maybe even thousands of people in short cycles. They deprive organizations of thousands of hours of productivity, for little return on the investment.

Radar tools are considered useful by consultants with little understanding of actual agility. 
My take is that such tools are absolute overkill. Here's what you can do instead - to save time and effort, and get better outcomes.




The problems of health radar tools

Health radars are deceptive and overcomplicate a rather simple matter. They also focus on the wrong goal.
A radar is only helpful when things are happening outside your line of sight.
If an organization wants to be agile, the goal should be to improve line of sight, not to institutionalize processes which make you comfortable with poor visibility.

The need for a radar reveals a disconnect between coaches/managers and the organizational reality.

Early transition radars

When an organization doesn't understand much about agile culture and engineering practice, you don't need a health radar to realize that this isn't where you want to be: time-to-market sucks, quality sucks, customer satisfaction sucks, morale sucks. No tool required.

Initial health radar surveys usually suffer from multiple issues:

  • Culture: Many traditional enterprises are set up in a way that talking about problems isn't encouraged. The health radar results often look better than reality.
  • Dunning-Kruger effect: people overestimate their current understanding and ability, and rate themselves accordingly.
  • Anchoring bias: the presented information is considered far more reliable for decision making than it is.

I don't think it needs much further explanation why taking a health radar under these conditions can actually be a threat, rather than a help.

Repeat surveys

The next problem with health radars is that they are usually taken in cyclic intervals, typically ranging from monthly to quarterly - quite aside from the fact that people start to get bored of answering the same fifty questions every month (oddly enough, agile development would encourage automating or entirely eliminating such recurrent activity!).

Frequently repeating the surveys thus suffers from:
  1. Disconnect between change and data: Especially in slow-moving environments, the amount of systemic change that warrants re-examination of the state tends to be low, so the amount of difference over time that can actually be attributed to actual change in the system is low. 
  2. Insignificant deltas: Most change actions are point-based optimizations. Re-collecting full sets of data will yield changes that are statistically insignificant in the big picture.
  3. Fatalism: When people see that there are dozens of important topics to be changed, and that progress is really slow, they might lose hope and be less inclined to make changes.
  4. Check-the-box errors: With increasing survey frequency, more and more people will just check some boxes to be done with it. The data obtained this way is statistically worthless and might even require additional effort to filter out. Likewise, the consequently reduced sample size reduces the accuracy of the remaining data (see the sketch below).
Those are the key reasons why I believe that constantly bombarding an entire organization with health radars can actually be counterproductive.
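To illustrate the last point on sample size: the standard 95% margin-of-error approximation for a survey proportion (worst case p = 0.5) degrades quickly as honest responses dwindle. The response counts below are assumptions for illustration:

    # Standard 95% margin of error for a survey proportion (worst case p = 0.5).
    # The response counts are assumed for illustration.
    from math import sqrt

    def margin_of_error(n, p=0.5, z=1.96):
        return z * sqrt(p * (1 - p) / n)

    for honest_responses in (1000, 400, 100, 30):
        print(f"n = {honest_responses:>4}: +/- {margin_of_error(honest_responses):.1%}")
    # n = 1000: +/- 3.1%   n = 400: +/- 4.9%   n = 100: +/- 9.8%   n = 30: +/- 17.9%

Once the box-checkers are filtered out and only a handful of honest answers remain per team, the error bars easily dwarf the month-to-month deltas you were hoping to measure.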


A much simpler alternative

With these four rather simple questions, you can get a clear and strong understanding about how well a team or organization is actually doing:


Sometimes, those questions don't even need to be asked. They can be observed, and you can enter the conversation by addressing the point right away.

The four questions

To the observant coach, the four questions touch four different domains. If all four of these domains are fine, this begs the question: "What do you even want to change - and why?" - and taking a Health Radar survey under these conditions would not yield much insight, either.
Usually, however, the four questions are not okay, and you can enter into a conversation right away.

1 - Product Evolution

The first question is focused on how fast the product evolves.
If the answer is "Quarterly" or slower - you are not agile. Period.
Even "daily" may be too slow, depending on the domain. If you see inadequate evolution rates, that's what you have to improve. 
And don't get misled - it may not be the tool or process: it may be the organizational structure that slows down evolution!


2 - User attitude

The second question is focused on users.
If the answer is, "We don't even know who they are" - you are not agile. Period.
Some teams invite selected users to Reviews, although even this can be deceptive - having an informal chat with a real user outside a meeting can be revealing indeed.


3 - Developer attitude

The third question is focused on members of the development organization.
If the answer is anywhere along the lines of "I'm looking for job offers" - you are not agile. Period.
Sustainable development can only be achieved when developers care about what they do, are happy about what they do and willing to take the feedback they receive.


4 - Continuous Improvement

The fourth question is focused on how improvement takes place.
If the answer is along the lines of "We can't do anything about it" - you are not agile. Period.
People need to see both the big picture and how they affect it. The system wouldn't be what it is without the people in it. The bigger people's drive to make a positive impact, the more likely the most important problems will get solved.

The core of the matter is what people do when nobody tells them what to do. Until people have an intrinsic drive to do the right thing, you're not going anywhere.

The conversation

Depending on where you see the biggest problem, have a conversation about "Why": "Why are things the way they are?" - "Why are we content with our current situation?" - "Why aren't we doing better?" - "Why do we even want to be agile if we're not doing our best to make progress here?"

People can have an endless number of reasons, so this is the perfect time to get NEAR the team and their stakeholders.

Following up

The followup set of questions after a prolonged period can be a series of "What" questions: "What's different now?" - "What have we learned?" - "What now?"



Summary

Drop the long questionnaires. They waste time, capacity and money. 
Learn to observe, start to ask questions. Reduce distance in the organization.
You don't need many questions to figure out what the biggest problem is - and most of all, you don't need to "carpet bomb" the organization with survey forms.  Keep it simple.


Often, people know very well what the problems are and why they have them. They just never took the time to get things sorted. All you need to do is help them understand where they are and discover ways forward.