
Friday, December 16, 2022

Optimize at the Constraint - only!

The Constraint of a system, in a nutshell, is "the most limiting factor." By definition, it determines the capacity of the entire system: if the Constraint is underutilized, the entire system is underutilized. However, if the Constraint is overburdened, no amount of additional input into the system will lead to more output. Many organizations struggle with this - and that has dire consequences!

Let's start by taking a quick glance at the Constraint:


In our example, the third step (C) is the Constraint - because it has the minimum capacity in our system. An important consideration: we're not talking about investment or staffing here - our concern is the ability to generate throughput.

As an example, if we have a single A costing $100k that generates 50 Throughput, but ten C's costing $1m that generate only 30 Throughput, then the Constraint is C, not A.
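To make the arithmetic concrete, here is a minimal Python sketch (the numbers for steps B and D are hypothetical; A and C mirror the example above) showing that a sequential system can never deliver more than its smallest-capacity step - and that adding capacity anywhere else changes nothing:

```python
# Minimal sketch: system throughput is capped by the smallest-capacity step
# (the Constraint), regardless of how much the other steps cost or can do.
# B and D are invented for illustration; A and C mirror the example above.

steps = {
    "A": {"cost": 100_000, "capacity": 50},
    "B": {"cost": 400_000, "capacity": 40},
    "C": {"cost": 1_000_000, "capacity": 30},  # ten C's - still the Constraint
    "D": {"cost": 200_000, "capacity": 45},
}

def system_throughput(steps):
    """A sequential flow delivers no more than its weakest step."""
    return min(s["capacity"] for s in steps.values())

constraint = min(steps, key=lambda name: steps[name]["capacity"])
print(constraint, system_throughput(steps))   # C 30

steps["A"]["capacity"] *= 2                   # add capacity away from the Constraint
print(system_throughput(steps))               # still 30 - no extra throughput
```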

This simple truth has tremendous consequences:

Don't work more than necessary


All of the capacity in our entire system in excess of the Constraint won't help us generate additional throughput. Let's examine what this means in practice:

  • Excess capacity behind the Constraint is "idle." It exists, but can't generate throughput. Adding more capacity at this point has no effect.
  • Excess idle capacity in front of the Constraint doesn't generate throughput.
  • Excess busy capacity in front of the Constraint adds overburden to the Constraint!

The third point is critical - because of the consequences: Let's say our Constraint is a specialist, and work is piling up at their doorstep. Work waiting at the Constraint generates no value for our company. The people who asked for that waiting work will start wondering when their requests get served, i.e. they become unhappy. Eventually, the Constraint will be tasked with managing their pile of undone work. At a minimum, some capacity gets diverted away from doing actual work into managing work. At worst, every additional piece of waiting work reduces their capacity further, until they are fully incapacitated and spend their entire time in status meetings, explaining why nothing gets done.

And that brings up an important question:

Assuming you are not the Constraint: should you optimize your own work?

The astonishing answer is: No. And here's why.

Where's your Constraint?


This image visualizes four possible scenarios:
  1. You're stream-aligned. The only people depending on you are the customers. 
  2. You're behind the Constraint. There's someone, or something, that determines how much work arrives at your desk, and you get less work than you could do.
  3. You're before the Constraint. The more you work, the bigger the "waiting for" the Constraint pile grows.
  4. You and others are working in parallel. What you do, they don't do; what they do, you don't need to do.

The image doesn't display the scenario that you are operating at the Constraint, because that's equivalent to being unconstrained: the more you do, the more throughput you get. So - let's examine the above four scenarios.

In scenario 1, you are operating as if you were the Constraint, until you get into a "Before" or "Behind" scenario by overburdening your customers. Here, improvement works in everyone's favor until customers start to scream.

In scenario 2 - you have excess capacity anyway. Customer throughput is limited by the Constraint, so the only thing you could spend your optimized capacity on would be gold-plating. At best, nobody notices. At worst, you'll get scolded for wasting company assets. In any case, your optimization efforts won't win you a medal.

In scenario 3 - your capacity outmatches the Constraint. If you want to optimize the Whole: do less. You can only make a difference by reducing the burden on the Constraint, that is: by taking work away from them. If you optimize in ways that allow you to do more work, you'll either get scolded if that makes you idle, or your extra work won't lead to extra customer value. In the latter case, you'll get scolded for not delivering more (even though you did - the customer just doesn't see it).

When optimization doesn't work

In scenario 4, you're acting similarly to scenario 1 - you're essentially the Constraint yourself and optimize accordingly.

That leaves scenarios 2 and 3. In both scenarios, you lose by winning.

Any optimization you do when you're not the Constraint will evaporate, be invisible, or make things worse for the Constraint - and thus for the system, and thus, by extension, for you.

When teams try their best to optimize their ways of working, and see that it either does nothing, or backfires - eventually, they get change fatigue: "Why should we change anything that doesn't help us?"

And that's a core problem with Scrum: The Scrum Guide suggests that teams should identify improvements in every single Retrospective, without considering whether the team is even the Constraint. If you aren't - you won't see anything coming out of your changes. To make Retrospectives meaningful, identifying and enacting change is insufficient. You have to make sure that the changes are actually beneficial to the organization as a whole.


What now?

Here are four simple checks you can do:

  1. If you are the Constraint, do whatever it takes. Do less, do more. Simpler. Faster. Better. It will be noticeable immediately, and you may even generate massive leverage. If you're five, and you have 100 people in your organization, every minute you save will have a twenty-fold impact. You'll be celebrated like heroes for even minor improvements.
  2. If you're pushing work "downstream" for other teams to pick up, and you see work piling up, do not try to discover ways to do more: Do less. Use the free capacity to pick up work that would otherwise happen downstream.
  3. If you're not receiving enough input from "upstream," don't try to do whatever you do better. Instead, pick up work that would otherwise happen upstream.
  4. If you see that "downstream" is challenged, and you receive flowback, i.e. defects, complaints, questions or anything that makes downstream wait for you, then you have to improve how you work, so that there's less work to do downstream.

Wednesday, December 14, 2022

Look after the BRO's

One reason why many agile teams are challenged is that their Coach or Scrum Master isn't looking after the BRO's.

Too many coaches and Scrum Masters focus on framework rules, roles, events, facilitation, process, practice, tickets ... But while they're doing that - who's paying attention to the BRO's?

A coach's value isn't in being a nanny for grown-ups. It's not in enforcing a framework or process. It's not in telling people what or how to work. All of these are ultimately irrelevant - or worse: impediments.

A coach is there to make a difference. And if the only difference a coach made after a year is that now people are correctly doing X, they made none. A coach needs to work with the BRO's.


Agile Coaches need to know their BRO's, keep them in sight, and work to improve them. A good Coach always pays attention to their BRO's and makes a difference for them.


Ok - so what are BRO's?

Business Relevant Outcomes.

Tuesday, December 13, 2022

What "Fail Fast, Move On" stands for

There's some confusion as to what "Fail Fast" means - it's caused some disturbance in the force. Let me give you my perspective. 

I'm going to use a business example for illustration. The message won't change if you apply the ideas to a product, a way of working - or even just a task.


The Fail Fast Move On Philosophy

Of course, we would like to maximize our probability of success. But - sometimes, we just can't know.

Let us use a restaurant as a showcase: I have never seen a restaurant owner who opened a deli in order to go broke. And yet, almost 90% of restaurants don't survive their first year. Worse yet: the average restaurant owner loses at least $50k on the ordeal. Which, by the way, is the reason why I haven't opened a restaurant: for me, the upfront investment plus the bound capacity (it's a pretty taxing job) isn't in a sound relationship to the possible benefits. But I digress ... though only slightly.

Fail Fast

During the "Fail Fast" stage, we start by figuring out what we're trying to do, what we know and what the unknowns and risks are. We exhibit a healthy skepticism of facts, for example, "Do we really know people in this part of town like Sushi?" and ask critical questions, such as, "What will happen if they don't?" We can then classify how much we're risking in case our assumptions turn sour. That gives us an analyzed, itemized list of risks which could make our endeavour fail.

Experiment

With our risk list, we propose a counter to our business idea, and we try to prove that the counter is false:

Returning to our sushi example, "We will offer a piece of Sushi to 100 pedestrians and ask them whether they'd visit a Sushi bar offering the same level of quality. [Counter:] Fewer than 10 positive answers mean that there's not enough interest in Sushi here."
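As a minimal illustration of how such a counter-hypothesis becomes a concrete decision rule, here is a tiny sketch - the threshold comes from the example above, while the survey answers are invented:

```python
# Minimal sketch of the sushi counter-experiment as a decision rule.
# Counter-hypothesis: fewer than 10 positive answers out of 100 means
# there's not enough interest. The survey results below are invented.

THRESHOLD = 10
survey_answers = [True] * 23 + [False] * 77   # 23 of 100 pedestrians said yes

positives = sum(survey_answers)
if positives < THRESHOLD:
    print("Counter-hypothesis holds: not enough interest - change the goal and backtrack.")
else:
    print(f"{positives}/100 positive - predictable failure ruled out; proceed to the next step.")
```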

Next, we're not even going to try to open a Sushi bar: all we want to do is disprove the pessimistic notion of an empty Sushi bar - our key risk of failure. (We could still be wrong - but now we know that there's a possibility of being right.)

We're not trying to succeed, we're trying to rule out predictable failure.
If our counter-experiment succeeds, we just saved all follow-up effort: Why should we rent a facility, purchase decoration and cutlery, hire a cook and waiters - if we already know that we won't have enough customers? When we learn that we can't deal with the known challenges, we need to change our goal and backtrack.

If we succeeded at failing the counter-experiment (yes, that's a double negative!) - we can proceed further.

Move On

In our restaurant example, we know that 90% of restaurants fail, and we'd like to be in the 10% that don't (just like everyone who opened one of the failing 90%.) But: when we can't, we definitely don't want to lose some $50k+ on the attempt. Hence, we move step by step, keeping the cost of each step low, and accept that everything we invested up to this step is money lost. We're not trying to recoup it - especially not by investing more in order to get some of it back.

"Moving on" means that everything we invested up to this point is "sunk cost" - investments we will never recover. Sunk cost hurts, and it makes us uncomfortable. Yet, there's something worse than sunk cost: "Throwing the good money after the bad."

In order to practice "Move On," we avoid binding unnecessary assets, so that we feel less pain when discarding them.

Moving on

Moving on requires us to let go of what we invested.

The 3 Steps of Fail Fast, Move On

Applying Fail Fast, Move On goes far beyond validating a business idea - it applies to any idea, and is thus all-encompassing for creative work. Fail Fast, Move On could be considered a 3-step process:

  1. Pull risk forward
  2. Rigorously inspect and adapt
  3. Minimize and write off sunk cost

That, of course, doesn't mean that we:

  • Generate unnecessary risk by negligence or unprofessionalism.
  • Act without a prediction to validate.
  • Waste our energy on doing things we know we could do smarter. 

To the contrary - "Fail Fast, Move On" is the notion of avoiding these three antipatterns by systematically eliminating failure points.

Proper "Fail Fast, Move On" saves you energy and maximizes your opportunity of success.


You can learn more about succeeding with "Fail Fast, Move On" in my Lean Startup classes.

Feel free to set up an appointment with me using the PAYL coaching mechanism (on the left) to discover more.

Sunday, December 4, 2022

Performance matters!

Performance is one of the most important factors for an agile organization, even though the topic is often viewed with suspicion. Yet, a proper understanding of - and close attention to - performance is critical to success. Here's why:

On Stage

Let's say you go to a concert. You were thrilled with anticipation, you booked the tickets half a year in advance, made an Instagram story about all your preparation - and on the day of the event, a sleepy, unenthusiastic singer shuffles on stage and sings without any emotion at all. What's your next post going to be? "This was a terrible performance! So disappointed!" Okay, now let's say we sanitize out the word "performance." That leaves your impression at "That was terrible. So disappointed." Well, that feedback won't help the band improve: What was bad? The location, the food, the music? If I were part of the band, I'd just disregard it, because nothing is ever perfect.


In the workplace

Of course, a stage performance isn't the same as workplace performance. Still, we work for people who have expectations of what we do, and to whom it matters whether we achieve something or not. That said, let me briefly define:

Performance: the ability to produce desirable outcomes.

Thus, inattention to performance sends a strong message: "your outcomes don't matter" - and, by extension, "you don't matter." There is no better way to demotivate people!

We should strive to build high- and hyper-performing teams, and create an environment where each individual is able to perform at their best. We need to constantly ask ourselves, "How can we perform better?" - and relentlessly fix any problem that stops us from performing better.

Aren't "performance measurements" detrimental?

Three remarks - 
  1. Goodhart's Law (a measure that becomes a target ceases to be a good measure)
  2. Attribution error (correlation doesn't equal causation)
  3. Weaponization (everything is harmful when used as a weapon)
That is: what gets you the bad outcomes is a misdirected measurement system, not performance itself. Don't throw out the baby with the bathwater! 

How does performance matter?

The result of keen attention to performance is pride of craftsmanship and sense of accomplishment: going home after a challenging day of work, we are satisfied with our achievements, and we feel that our time was worth it.

Low performance, in contrast, means that we go home and wonder what we just wasted our time with. It leaves us demotivated, disoriented, burnt out.

And regardless of whether we give it a name, or not: the feelings will be there. We would like to have more sense of accomplishment. Hardly anyone looks forward to feelings of uselessness, worthlessness or pointlessness.

"Performance" is merely the label we attach to these feelings.

If you've ever worked on a hyper-performant team, you will remember that experience for the rest of your life. The experience will continue to boost your self-esteem, your desire to grow, and make you both more professional and a better person. Likewise, if you've ever worked on a low-performance team, you will also most likely remember the experience for the rest of your life - but most likely as a chapter you'd prefer to never repeat.

Performance is paramount.



Monday, November 21, 2022

The TOP Structure - how it all started

As you may already be aware, my latest project is the "TOP Structure." Let me give you some insights into how it all started. Back then, it wasn't very refined, just some thoughts in my head.



The world of traditional IT

Traditional IT organizations typically separate themselves into a development and an operational area. To execute, they again separate into line and project - commonly resulting in matrix organizations. Requests for delivery of new stuff are typically thrown over the fence by "business," or - in more advanced companies - negotiated by Business Analysts. In any case, once a project is scoped, we task a Project Manager with ensuring delivery in TQB. That leaves Project Managers with a wide range of work: setting up a capable team, prioritizing and distributing work, coordinating schedules and tracking progress. Unfortunately, as the proverb goes, "if everything is important, nothing is." Many project managers drown in schedules and tracking, leaving them little time to take care of the people doing the work.

Scrum changed the game

The move to "stable teams" and continuous "product development" led to the need for a different way to structure teams, and Scrum provided it. In a Scrum context, the line no longer "provides resources" to "projects." And there's no more a final delivery thrown over the fence at the deadline - software in an agile setting is never "finished." Features become available in a continuous flow of value.

Still, line managers have a role - and oftentimes, business initiatives are still funded as pre-packaged projects scoped for "Agile Delivery."

Why is all of that relevant? Because of the resulting interactions.

Scrum teams learn to self-organize, and to manage their interactions with the surrounding organization. The person accountable for this is the Scrum Master. Often being neither technical nor understanding the product in depth, Scrum Masters focus on Organization. Their value proposition isn't in any code they write or anything sold to customers - it's enabling their team, focusing on "Who" and "How" (i.e. people and process).

The second key to a successful Scrum team is the Product Owner, focused on the Product. Neither organization nor implementation is their concern - only ensuring that the "What" and "Why" are clear and prioritized, so that stakeholders are happy and the team does the most valuable thing at the most suitable moment.

And finally, what would be a good Scrum team without competent developers? They take care both of development and outcomes, that is: they do everything technical. Developers own their Technology.


SAFe repeats the same

Let's do a simple relabeling exercise to demonstrate how SAFe copy+pasted Scrum in this regard, at a level of abstraction.

Competency   | Scrum Role    | SAFe Role
Scope        | Scrum Team    | Agile Release Train
Technology   | Developer     | Agile Team
Organization | Scrum Master  | Release Train Engineer
Product      | Product Owner | Product Manager

(Now - we could have a formidable debate about whether SAFe's copy+paste approach at scale is appropriate and smart, but that's not my point.) I'd like to highlight that if we gloss over implementation details, Scrum succeeds most likely when sufficient attention is paid to all of Technology, Organization and Product (TOP) - and SAFe repeats the same thing at a higher level of abstraction.

From this, the idea of the TOP Structure as a universal pattern was born: to succeed with a sustainable software organization, you must ensure that all three domains receive sufficient attention.

Thus, the idea behind TOP is first and foremost that we have to build a structure that pays appropriate attention to all three domains, staffs them with sufficient competency, and doesn't force us into either-or choices between them. That is what often got traditional projects into trouble - a PM rarely has time to fix organizational issues, as deadlines are constantly pressing.

TOP Dysfunctions

Let's turn this around, and take a quick peek at what happens when we pay too little - or no - attention to one of the core TOP competencies:

Dysfunction and consequence:

  • Exclusive Tech Focus: Technical excellence disconnected from people and their needs - technological wonders that nobody needs.
  • Exclusive Product Focus: Ephemeral, great ideas that eventually get killed by the inability to execute.
  • Over-focus on Organization: Having the right people working effectively means little when it's not the right thing.
  • Lack of Product Focus: An over-focus on methods and implementations could lead to missing the needs of the customer entirely.
  • Lack of Organizational Focus: A "bias for action" mindset that bulldozes over people and their needs in order to "move fast and break things." Gets things done in early phases, but leads to unproductive (and potentially extremely costly) chaos in the long term.
  • Lack of Technical Focus: Emphasizing value generation and order at the expense of technical sustainability. Most such products incur fatal technical debt at some point in the future.
  • (none): Technology, Organization and Product all get sufficient attention, nothing gets shoved under the rug, and energy is distributed wisely in each domain.


The TOP Question

And thus, the idea of the TOP Structure was born as a simple, yet effective mechanism for asking the question: "We have 100% of our energy. We can distribute it in any way that we want. Where should we put how much?" There's no fixed or perfect ratio such as "60% T, 10% O, 30% P" that would serve as a recipe for success. Instead, the first answer is often, "We currently spend too much energy on (X) and too little energy on (Y)."
TOP thus started as a simple tool for teams, teams-of-teams and entire organizations to determine whether sufficient energy was invested into the different areas in the past - and whether we should redistribute that energy in the future. For example: "We'd need a bit more time for process improvement, and a bit more for Refinement. That's only possible if we invest a bit less of our time for development - can we do that? How? Is there something in any of the competencies we're currently doing that we could discontinue?"


How it evolved

As you can already see from the three circles - they overlap. Initially, I put "management" into the center, with the intent of stating that management has the responsibility to ensure all three competencies get adequate staffing, funding and attention. Then I decided that this isn't what we'd like in self-managing teams. So I decided to put "Teams" into the center, indicating that a self-managing team should do all of that. It didn't feel right, either. And thus, the current version of the TOP Competency Spectrum was born:

The TOP Competencies

The model of the TOP Competencies Spectrum is mostly a reshape of the circles into one segmented circle, giving names to the different intersections of the circle:

As you move further away from pure technology towards organization, you enter the domain of Architecture, concerned with the question of "Do we have the right means of doing what we do, and is how we do it the best way of doing it?" Architecture, in the TOP Competencies model, isn't a separate thing from either Technology or Organization: it's the discipline that brings both together!

Likewise, Design fuses the question "Why do people need it?" with "How do people need it?" - crossing the chasm between user perspective and technical implementation. Thus, the TOP competency of Design requires connecting technical and product competency for best outcomes. Which is needed, to which extent, and at which time may vary - and, as the color indicates, it's a blended mix.

The other TOP Competencies are similar: at the outer part of the circle, activities are more "domain specific," and the closer we get to the center, the more the TOP Competencies blend and become indistinguishable.

Quality, at the core of the TOP Competencies, is much more than "product quality" - it's quality of everything: how we communicate, how we work, what we produce, and any other outcomes we get (including, of course, satisfaction with our jobs.) As quality is everything in a TOP Structure, it's also everyone's responsibility, and everyone constantly contributes towards it, be it consciously or unconsciously, positively or negatively. The TOP Structure should remind us of this, and inspire us to replace unconscious poor quality choices with conscious good quality choices.



This extended model of the TOP Structure focuses less on the question of "Where do we assign our energy?" - somehow, the entire circle makes up 100% of our energy, with fluctuating investments across the week - and more on People and Interactions: In our current organization, how do we interact with ... Architecture? Is it smooth, or do we have boundaries? Where is it: is it part of the team, or outside? Does information flow freely from and towards that domain, or is it thrown over the fence?
Is there a continuous exchange between developers and product people, or is it a stage-gated Waterfall? What does our relationship between design and quality look like: Do we design for quality, or do testers consume the result of a design process for test case creation?

The TOP Structure, in this regard, makes no imposition of what you must do - rather, it guides us in the questions we can ask to identify where we've got improvement potential and where we're potentially missing something important.

What's next


And that's how I formed the basis for the entire model of the TOP Structure you can now find on my official company page. I invite you to sign up to my newsletter and follow me, because I have big plans for the TOP Structure: I believe it should be a staple tool for any Coach and Consultant supporting organizations on their journey of Continuous Improvement - regardless of which framework, or no framework, or the approach they choose.

My own journey with TOP has only just begun to get exciting - you're still in time to become an early adopter!

Wednesday, November 9, 2022

There's always a bigger context!

Have you wondered why so many people, organizations - and even humanity as a whole - constantly find themselves in a mess that's hard to scramble out of?

The reason is quite simple: because we are quite short-term oriented, and we either don't see - or discount - the bigger context we're acting in!


Virtuous Cycles

When we want something that we don't have (or: not enough of), we change something in a way that we predict will get us what we're looking for. We then see whether that worked, and we continue doing more of it until we have enough. And we do the opposite when we don't want something that we do have.

Feedback loops help us to pause, stop or course correct while we're at it.

A trivial example of a virtuous cycle might be lunch: When we're hungry, we want food. We get our portion, like it, and eat some more. (If we don't like the food, we might get something else instead.) We continue eating until either our portion is gone, or our stomach signals "Full."


Vicious Cycles

A vicious cycle isn't the direct opposite of a virtuous cycle - it's when we do something, and get something we don't want. For example, if we'd really like that wild honey, and put our hand into the beehive: the longer we leave our hand in there, the more we'll get stung.


Short term and long term

In the short term, we complete one action and observe the immediate results. For example, we grab a candy bar, and it satisfies our craving. We can repeat this cycle again and again, and we get predictable and repeatable results (well, until our stomach tells us we had too much candy.)

In the long term, however, we get other results than in the short term: while one candy satisfies our craving, one hundred days of repeated snacking will lead to some weight gain, and two years' worth of snacking will result in a wobbly tummy.

Thus, the virtuous cycle of "craving satisfied" is embedded in a vicious cycle of "gaining weight."


Inseparability of cycles

In our simple example, it's impossible to separate the short-term virtuous cycle from the long-term vicious cycle: as the proverb goes, "you can't have your cake and eat it, too." The action that starts the virtuous cycle will also set the vicious cycle in motion. 

The desirable short-term outcomes of the virtuous cycle are immediately visible, so we're tempted to set it in motion. On the flip side, the long-term outcomes of the vicious cycle are invisible at the moment, so we're tempted to discount them in favor of the proven and tangible short-term benefits.

Shocking consequences

We find ourselves continuously repeating the virtuous cycle, with the firm belief that what we're doing is beneficial, until - one day, in our example, we get a Diabetes diagnosis: It's impossible to attribute the diabetes to any single piece of candy we consumed. Even worse: simply stopping the virtuous cycle of meeting our craving isn't going to change the situation we're in, and the process of reverting the vicious cycle will be difficult to impossible. There's no easy "undo" action related to anything we did in the past.

We were caught by the embedded larger context of our visible virtuous cycle: the invisible vicious cycle.


What does our little example imply for a software organization, then?

What you see isn't what you get!

Take a look at this diagram which illustrates the larger systemic context we may find ourselves in:


We always get positive feedback from our immediate action, so we learn that our action is good.

For example, let's say the developer who's always fastest (by skipping tests) learns that they get praise from customers and management, whereas the developers who are always slowest (by building quality in) learn that they'd get more appreciation by cutting corners on quality.

The short-term virtuous cycle is that developers learn how to deliver faster and meet tight deadlines.

Unfortunately, by the time we realize the effects of the vicious cycle, our product is probably almost dead: it might take months, possibly years, to trace out and fix all the bugs in the code, and that's not even calculating the effort (and frustration) of adding tests to an unmaintainable codebase.

And worse than that, we only have developers left who have - over the years - learned that building quality in is bad for their careers.

By the time we've come to realize that the vicious circle has taken over, there's no quick fix any more, and the cost of change, at this point in time, is overwhelming.


Scrambling out of the mess

When we realize that we got something that we don't want, we have a myriad of problems to address:

  1. We must discover a way to re-wire the outer vicious cycle by disrupting it, and replacing it with a virtuous cycle.

  2. We must become conscious that the presumed virtuous cycle did spin off a vicious cycle, and must stop triggering more of the vicious cycle.

  3. We must un-learn and stop the old virtuous cycle, despite the visible short-term benefits.
    This step is very hard, because we must actively reject the benefits we attributed to it.

  4. We must actively pursue the new virtuous cycle, despite it being slow, and the benefits being less visible than the benefits of the old virtuous cycle. This requires strong discipline, because it's easy to lapse into old habits, potentially eliminating months of progress with a single act of carelessness.

Unfortunately, since we saw short-term benefits in the past, we tend to look for a new way that undoes all of the damage caused by the vicious cycle in the blink of an eye: instead of actively doing the hard work of behavioural and belief change, we often hunt for a miracle pill. And thus start another vicious cycle.


Let me put it like it is:

If a physical building has collapsed because it was built using poor materials, you can't just swallow a pill to rebuild the entire thing. The only way forward is to clean up the rubble, get better materials, and construct a more stable building.


And that's how you get sustainable change:

  1. Become clear what got you into the bigger mess.
  2. Stop doing that, even if it gave you the results you were looking for.
  3. Clean up the shambles.
  4. Create something new that avoids the vicious cycle.




Wednesday, October 5, 2022

10 Things a Product Owner shouldn't waste time on

There's quite a bit of confusion about the Product Owner role - and a lot of Product Owners spend most of their time on low-value, or even detrimental activity, thus having little or no time to succeed in their role. 

Here are ten timekillers that a Product Owner shouldn't waste time on:


10 - Writing User Stories

Too many Product Owners are caught up in "writing user stories," at worst matching all kinds of templates, such as the Connextra "As a ... I want ... so that ..." and the Gherkin "Given ... When ... Then" templates. Unfortunately, the better the PO gets at doing this, the more understanding they amass in their own head before transferring information to the developers. At best, the developers are degraded to a "feature factory," and at worst, they no longer understand what or why because someone else did the thinking for them. A PO is a single point of failure and bottleneck in Scrum, hence they should try to offload as much of what could go wrong as possible.

9 - Defining Implementations

Especially Product Owners with technical aptitude quickly fall into the trap of spending a lot of time explicitly defining the "How" of solution implementation. Not only do they thus assume a Scrum Developer role, but they also disempower and disenfranchise their team. In a great Scrum team, the Product Owner should be able to rely on their developers for implementation - the PO can limit themselves to discovering the relevant problem statements.

8 - Writing Acceptance Criteria

Probably the biggest time sink for Product Owners is detailing out all Acceptance Criteria for all Backlog Items to be "ready" for the Sprint. Where Acceptance Criteria are needed, they should be defined collaboratively, using a Pull mechanism (i.e. developers formulating them, and then verifying with the Product Owner).

7 - Ticket Details

Depending on which ticket system you're using, a lot of details are required to make a ticket "valid." That could include relations to other tickets, due dates, target versions - none of these are required for Product Ownership. They're part of the development process, and belong to the Developers. (Side note: Sometimes, Scrum Masters also do these things - they shouldn't have to do it, either.)

Items 10-7 are all indicators that the Product Owner is misunderstood as an Analyst role - which is a dangerous path to tread. By doing this, the PO risks losing sight of the Big Picture, leading the entire Scrum Team onto the wrong tack, potentially into obsolescence.

6 - Obtaining precise Estimates

Estimation in and of itself is a huge topic, and some organizations are so obsessed with the precision of their estimates that they completely forget there's no such thing as a "precise estimate." As I like to say, "If we knew, they wouldn't be called Estimates, but Knows." Estimates should take as close to no time as possible, and if a Product Owner finds themselves spending significant amounts of time on getting better estimates, something is seriously out of tune. Try probabilistic forecasting.
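For readers curious what probabilistic forecasting can look like in practice, here is a minimal Monte Carlo sketch - the historical throughput samples and the backlog size are invented for illustration; the point is to report a range instead of chasing a precise number:

```python
import random

# Minimal Monte Carlo forecasting sketch. The weekly throughput history
# and backlog size below are invented for illustration only.
historical_weekly_throughput = [3, 5, 2, 4, 6, 3, 4]  # items finished per week
backlog_size = 30                                     # items left to deliver

def simulate_weeks(backlog, history):
    """Draw random weeks from history until the backlog is empty."""
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(history)
        weeks += 1
    return weeks

runs = sorted(simulate_weeks(backlog_size, historical_weekly_throughput)
              for _ in range(10_000))
print("50% confidence:", runs[len(runs) // 2], "weeks")
print("85% confidence:", runs[int(len(runs) * 0.85)], "weeks")
```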

5 - Planning for the team

Team Planning serves three purposes: getting a better mutual understanding, increasing clarity, and obtaining commitment on the team's Sprint Goal. Many Product Owners who used to work in project management functions fall into the trap of building plans for the team to execute. This defeats all purposes of the Sprint Planning event. The Product Owner's plan is the Backlog, which, combined with whatever sizing information they have, becomes the Product Roadmap. Content-level planning is a Developer responsibility.

4 - Accepting User Stories

A key dysfunction in many teams is that the Product Owner "accepts" User Stories and is the one person who will mark them as "Done." Worst case, this happens during the Sprint Review. Long story short: when the team says it's "Done," it should be done - otherwise, you have trust issues to discuss, and you might have had the wrong conversation about benefit and content during Planning. Acceptance is either part of the technical process, i.e. development, or something that relates to the user - that is, developers should negotiate with users. The Product Owner is not a User Proxy.

3 - Tracking Progress

Yet another "Project Manger gone Product Owner" Antipattern is tracking the team's progress. A core premise of Scrum is that developers commit to realistic goals that they want to achieve during a Sprint. The Product Owner should be able to rely that at any time, the most important items are being worked on, and the team is doing their best to deliver value as soon as possible. Anything else would be a trust issue that the Scrum Master should address. At a higher level, we have very detailed progress tracking in Sprint Reviews, where we see goal completion once per Sprint. If teams can reliably do that, this should suffice - otherwise, we have bad goals, and that is the thing the PO should fix.

2 - Generating Reports

Reporting is a traditional management exercise, but most reports are waste. There are three kinds of key reports:

  • In-Sprint Progress Reports - as mentioned above, these are pretty worthless in a good team.
  • Product Roadmap Reports - which should be a simple arrangement of known and completed mid-term goals, presented in the Sprint Review for discussion and adjustment.
  • Product Value Reports - which can be created by telemetry and should be an (ideally automated) feature of the Product itself.
Question both the utility of reports and time invested into reporting. Reports that provide valuable information with little to no effort are good. Others should be put under scrutiny.

1 - Bridge Communication

The final, biggest and yet most common antipattern, of the Product Owner is what I call "Bridge Communication" - taking information from A and bringing it to B. Product Owners should build decentralized networks, connecting developers and stakeholders, avoiding "Telephone Games" that come with information loss and delay. 

When the Product Owner has their benefit hypothesis straight, developers can take care of the rest. Developers can talk with stakeholders and obtain user information by themselves. A Product Owner shouldn't even be involved in all the details - if they tried to be, they'd constantly find their calendar crammed, and they'd become a blocker to the team's flow of value: the opposite of what they should be doing!


The Alternative

(About half of the points in this article describe the SAFe definition of a PO, but that's an entirely different topic in and of itself)

After having clarified what a PO should not do, let's talk briefly about what is a better investment of time:

A Product Owner's key responsibility is to maximize the value of the product at any given point in time. That is, at any time, the Product should have the best Return on Investment - for the amount of work done so far, the Product should be as valuable as possible. That requires the Product Owner to have a keen understanding of what and where the value is. For this, the PO must spend ample time on market research, stakeholder communication and expectation management.

From this, they obtain user stories - which are indeed just stories told by users about problems they'd like to have addressed by the Product. The Product Owner turns stories into benefit hypotheses - that is, the benefit they'd like to obtain, either for the company or the user base. They then cluster benefit hypotheses into coherent themes: Sprint and Product Goals. These goals then need to be communicated, aligned and verified with stakeholders. By doing this successfully, the Product Owner will maximize the chances that their Product succeeds - and the impact of their own work.

The Product Owner can free up time by minimizing the time spent on implementation. Successful Product Owners let their development team take care of all development-related work (including Analysis, Design and Testing) and trust the team's Definition of Done. That is, their only contact with Work in Process needs to be renegotiating priorities when something goes out of whack, a value hypothesis is falsified, or new information invalidates the team's plan.




Monday, September 12, 2022

Cutting Corners - think about it ...

 I was literally cutting corners this weekend when doing some remodeling work, and that made me think ...

Cutting corners:

  • is always a deliberate choice
  • makes things look better to observers
  • is what you don't want others to see
  • doesn't require expertise
  • provides a default solution when you see no alternative
  • might be the most reasonable choice
  • requires more work than not doing it
  • is expensive to undo

So - try having a conversation: where are you cutting corners, why are you doing it - and do you know how much it costs? Which alternatives do you have? What might another person do differently?

Wednesday, September 7, 2022

Dealing with limiting beliefs

We often find that Limiting Beliefs hold us back from achieving the goals we want to achieve, from doing what is right, from becoming who we want to be. So - if we know this, why aren't we changing our beliefs? Because, very often, our beliefs define who we are, and change is hard. But there is hope. What could we do?

Limiting Beliefs

Let's start by defining limiting beliefs: a limiting belief is one that confines us, or reduces our options in some way. We all hold limiting beliefs, and some of them we shouldn't even change. So - when exactly are limiting beliefs an issue? A simple and quick answer: when a specific belief we subscribe to makes something we should be doing hard or impossible.

Let's use an example to illustrate our case:

Say, Tom is a manager and he believes that "Developers can't test their own software." This belief is limiting, because it stops all beliefs, decisions and actions built on the idea that "developers do test their own software."

The problem with limiting beliefs

As long as Tom holds this belief, he can't support the ideas of, for example, TDD or Continuous Delivery, because these are in conflict with his belief. And beliefs aren't like clothes - we can't change them at whim. Here's what we're dealing with:

Belief networks

Limiting beliefs don't simply stop one change; they are often part of a complex web of other beliefs that reinforce the limiting belief, and which would be incomplete, incoherent or even inconsistent if that limiting belief was changed - so we can't just replace one belief without examining its context: "Why do you hold this belief?"

Supporting beliefs

In Tom's example, we might find other supporting beliefs - such as the Theory X idea, "Without being controlled, developers will try to sneak poor quality into Production, and then we have to deal with the mess."

Anchoring

Tom is probably a reasonable person, and his belief was most likely anchored by a past experience - there were major incidents when developers did cut corners, and these incidents forced Tom to adopt a policy of separating development and test, and that stemmed the tide.

Negative hypothetical

Let's ask Tom, "What would happen without a separation of development and test?" - and he'd most likely refer back to his anchor experience, "We would have major incidents and wouldn't get any more work done because of continuous firefighting." - and it's hard to argue his case, because it's consistent with his experience.

Conjunction Fallacy

Let's ask Tom an inconspicuous question to figure out what he thinks is more likely: "Which scenario do you think is more probable: that a developer creates a mess, or that a developer who tests their own code creates a mess?" Tom will probably answer that it's the latter. This, however, is fallacious, because developers testing their own code are a subset of developers, a special case: if that was Tom's answer, he would (probably unknowingly) subscribe to the idea that developer testing increases the probability of poor results!

Confirmation Bias

Now, let's assume that we manage to convince Tom to run an experiment and let developers take control of quality - we're all human, and we all make mistakes. Tom will feel that the first mistake developers make confirms his belief: "See - we can't. I told you so."

Selection Bias

Of course, not everything an autonomous developer will deliver is going to be 100% completely broken, but Tom will discount or dismiss this, because "what matters is the mess they created and that we didn't prevent that from happening." - Tom will most likely ignore all the defects and incidents that he currently has to deal with despite having a separate Test Department because these aren't affirming his current belief.


Changing limiting beliefs

Given all these issues, we might assume that changing beliefs is impossible.

And indeed, it's impossible to change another person's beliefs. As a coach, we can't and shouldn't even try to do this: it's intrusive, manipulative and most likely not even successful. Instead, what we can do is: support the individual holding a limiting belief in going beyond the limits of their current beliefs.

Here's a process pattern we could use to help Tom get beyond his limiting belief:

1 - Write down the limiting belief
When you spot a critical limiting belief in coaching, write it down. Agree with the coachee that this is indeed the limiting belief they're holding.

2 - Ascertain truth
Truth is a highly subjective thing; it depends on beliefs, experiences and perception. What we want here is not "the truth," but what the coachee themselves asserts to be true: "Do you believe this is certainly true?" - "What makes you so sure it's true?" - "Could there be cases where this isn't true?"

This isn't about starting an argument, it's about getting the person to reflect on why they're subscribing to this limiting belief.

3 - Clarify the emotional impact

Let's ask Tom, "What does holding this belief do to you?" - and he may answer: "I know what I need to do, that gives me confidence." - but likewise: "I am upset that we can't trust developers on their quality."

We hold onto beliefs both because and despite how they affect us. There's always good and bad, and we often overlook the downsides. Most likely, Tom has never considered that he's carrying around some emotional baggage due to his belief. Until Tom comes to realize that this belief is actually limiting him, and also negatively affecting him, he has no motivation to change it.

4 - Clarify consequences

 Next, we'd like to know from Tom where the limiting belief will put him in the long term: "When we look back, 10 years from now - where will you be if you keep this belief?"

We would like Tom to explore the paths he can't go down because of his limiting belief - for example, "We still won't have a fully automated Continuous Deployment - and I will be held responsible for this." Tom needs to see that his current belief is going to cause him significant discomfort in the future.

5 - Surface the Cost of Not Changing

We're creatures of habit, and not changing is the default. We first and foremost see the cost of change, because that's immediate and discomforting. And we ignore the cost of not changing, so our default would be that we have no reason to change anything.

Tom must see the costs of persevering in his current beliefs, so we ask: "What's the cost - to you - in 10 years, if you don't change this belief?" A mindful Tom might realize that he'll get passed over for career opportunities, or might even get replaced by someone who brings new impulses. The more vividly Tom can paint the upcoming pain, the more determined he will be in wanting to change.

And that's the key: As long as Tom himself has no reason to change his belief, he won't. But we can't tell him what his reasons should be. Tom has to see them by himself, and in a way that is consistent with his other beliefs.

6 - Paint a brighter future

Tom may now be depressed, because in his current belief system, he's doomed: there's no hope. So let's change Tom's reality. Let's ask him, "If you change this belief, what would you be and do?" Tom might be skeptical, but will tell us some of the ideas on his mind: "I'd give devs permission to test their own code." - "I wouldn't enforce strict controls on developers." - "I wouldn't be known as the only person in this company insisting on stage-gating."

We can then follow this up with, "How would you feel if this could be you?" - if we get positive responses like, "Less stressed, more appreciated" - we're moving in the right direction. If we get negative responses like, "Stupid, Unprofessional" - then there's another, deeper rooted limiting belief and we have to backtrack.

7 - Redefine the belief by its opposite

Let's ask Tom, "What's the opposite of this belief?" - and Tom would answer, "Developers can test their own code." Tom needs to write this down on a card, and keep it with him all the time.

8 - Reinforce the new belief

Every day, Tom should read this card and look for evidence that this opposite belief is true. For example, Tom can find out which people hold this opposite belief, and how it works for them. 

At a minimum, Tom should just take a minute and sit back in calm, take out the card and read it to himself - and then repeat this new belief to another person.

As coach, we can challenge Tom to repeat the new belief back to us frequently, and to provide small stories and anecdotes about what he has said and done based on this different belief.

9 - Reflection

After one month, reflect with Tom what difference thinking and acting based on this opposite belief has made, and how often he lapsed back into thinking and acting based on his limiting belief. Under ideal circumstances, Tom will have success stories based on his new belief - these are a great basis for reflecting whether this new belief can serve him better than his former, limiting belief.

Even if Tom sees no difference, he already has evidence that his original belief may not be true.

If Tom is still struggling, he may need more time to be convinced. 



Closing remarks

Even with a formal process for belief change, we're not guaranteed to rewire or reeducate others. We respect and enjoy freedom of thought and differences in belief, and the best we can do is highlight consequences, reinforce and provide feedback.

If we see that people choose to cling to old beliefs and habits despite all our attempts at supporting them, we have to ask at a meta level what the difficulties are, and whether our support is even desired. We're not in the business of messing with other people's heads - we're in the business of supporting them in being more successful at achieving what they want, and in coming to realize what that actually is.

Friday, September 2, 2022

Microhabits - small action, big impact

Let's talk about #microhabits - the small things that don't seem to make a difference at all in the short term, yet set you on a long-term trajectory.


What are microhabits?

Microhabits are actions that take nearly no time, seem to have a very limited scope, and hardly seem worth mentioning - yet they set you on a compounding trajectory. Many years after adopting a microhabit, people who stuck with it are worlds apart from others around them.


Here are some examples of software development microhabits:

  • Appropriately naming stuff
  • Fixing typos
  • Refactoring
  • Making sure the code is easily testable
  • Adding important unit tests
  • Generally keeping code readable and workable
  • Creating a working build at least a few times a day
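As a tiny, hypothetical illustration of what the first few habits look like in practice - a thirty-second rename plus one small unit test, done in passing:

```python
# Before: a name that hides intent - the kind of thing a microhabit fixes
# in passing, while you're already in the file.
def calc(x, y):
    return x + x * y

# After: appropriately named, and covered by one small unit test.
def price_with_tax(net_price, tax_rate):
    return net_price + net_price * tax_rate

def test_price_with_tax():
    assert price_with_tax(100, 0.2) == 120
```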

No excuses

I often hear, "This was an emergency," or "This was just a demo," or "There was time pressure." These are supposedly justifications for not doing the things above. In the working world, there will always be some stress, deadline or emergency lurking behind the next corner. Everything else is cockaigne.

Here's the thing, though: Microhabits become "second nature" and it's more effort to break a habit than to pursue it, so we can't argue that pressure is a reason to do something slower, more complex and less routine than what we'd normally do.

People with good coding microhabits will pursue their habit and keep their code high quality regardless of circumstance. Simply because it's a habit. An important realization about habits: they'll never form if you constantly interrupt them - so consistency is key.


Form the right microhabits today!

Which actions, when done consistently over many years, will result in a codebase you'd love to work with? Adopt these, and keep doing them consistently.

And which would result in a codebase you'd loathe? Stop these, and avoid them consistently!

If you want to do a facilitated Retrospective on the topic, you can use this simple template:




Tuesday, August 16, 2022

TOP Structure - the Technology Domain

 Too often, organizations reduce the technical aspect of software development to coding and delivering features.

This, however, betrays a company which hasn't understood digital work yet, and it raises the question of who, if anyone, is taking care of:

Technology is the pillar of software development that might be hidden in plain sight

Engineering

Are you engineering your software properly, or just churning out code? Are you looking only at the bit of work to be done, or how it fits into the bigger picture? Do you apply scientific principles for the discovery of smart, effective and efficient solutions? How do you ensure that your solutions aren't just makeshift, but will withstand the test of time?

Automation

What do you turn into code? Only the requirement, or also things that will help you do your work easier, with higher quality, and lower chance of failure? Do you invest into improving the automation of your quality assurance, build processes, your deployment pipeline, your configuration management - even your IDE? How many things that a machine could do is your company still doing by hand, and how much does that cost you over the year - including all of those "oops" moments?

Monitoring

Once you've delivered something - how do you know that it works, works correctly, is being used, is being used correctly, has no side effects, and is as valuable as you thought it would be? Do you make telemetry a standard feature of your applications, or do you have reasons for remaining ignorant about how your software performs in the real world?


All of the items above cost time and require skills. Are you planning sufficient capacity to do these things in your work, or are you accumulating technical debt at a meta level?

Think for a minute: How well does your team balance technological needs and opportunities with product and organizational requirements?

Friday, August 12, 2022

TOP Structure - the Product Domain

Many companies misunderstand Product Ownership - or worse: Product Management - to be nothing more than managing the backlog of incoming demand. While that work surely needs to be done, it's the last thing that defines a successful Product Owner - "there's nothing quite as useless as doing efficiently that which shouldn't be done at all," which is what often happens when teams implement requests that are neither valuable, useful nor good for their product.

To build successful products, we need to continuously ask and answer the following questions:

The Product Domain is the third core pillar in the TOP Structure


Direction

1. What's the vision of our product, how close are we, and should we keep it? How does our product make our users' lives better? Where are we in the Product Lifecycle, and what does that mean for our product strategy? Do we have what it takes to take the next step?


Position

2. What does our product stand for, and what not? Will adding certain features strengthen or dilute our product? Are we clear on who's our target audience, who's not - and why? Do we want to expand, strengthen or shift our user base?


Discovery

3. What's the problem we'd like to solve? How big is this problem? Who has it? Is it worth solving? Which solution alternatives exist, is our product really the best way of solving it?


A weak Product Pillar leads to a weak product, which limits opportunities to make the product valuable and profitable - which quickly leads to a massive waste of time and money in product development, whereas a strong Product Pillar maximizes the impact of product development efforts.


Check your own team - on a scale from 1 to 10, how easily and clearly can you answer the questions above?

Wednesday, August 10, 2022

TOP Structure - the Organizational Domain

 It sounds tautological that every organization needs organization - and yet, most companies are really bad at keeping themselves organized, and it hasn't gotten better with the advent of Remote Work.

Although it's technically correct that organization is non-value adding, it is essential to get organization right:



The Organizational Domain is the second core pillar 
in the TOP Structure


People

Do we have the right people in the right places, are they equipped and do they have the necessary support to succeed? People aren't just chess pieces we can freely move around on an org chart - they're individuals with needs and desires, and if we don't take care of our people, performance will decline.


Collaboration

Can our people collaborate efficiently and effectively? Are the right people in touch with each other? How much "telephone game" are we playing? Do we have policies that cause us to block one another? Do we optimize for utilization of individuals, or getting stuff done?


Learning

Do we get genuine learning from events, or are we continuously repeating the same mistakes? Do we have functioning feedback loops? Are we figuring out the levers for meaningful change, and do we turn all of this into action? And do we focus only on how we execute, or also on what we work on and how we think?


Why Organization often doesn't work

Project organizations and large "Programs" in particular commonly neglect investing in working with people, improving collaboration, or creating a learning environment.

Even "Agile" environments often delegate the responsibility for organization to the Scrum Master, although none of the items mentioned above can be done by a single person on a team - they're everybody's job: team members, support roles and management alike.


When the Organizational pillar isn't adequately represented, we quickly accumulate "organizational debt" - an unsustainable organization that becomes more and more complex, costly, slow, cumbersome and unable to deliver satisfactory outcomes.


Check your own team - on a scale from 1 to 10, how well are the above mentioned organizational aspects tended to?

TOP Structure - the domain of Architecture

In software, there's a critical intersection between technology - that is, how we turn ideas into working software - and our organization - that is, who is part of development and how they interact.





Architecture is at the crossover point of Technology and Organization


This domain is Architecture, and it exists one way or another - if we don't manage it wisely, the outcome is haphazard architecture, most likely resulting in an inefficient organization delivering complex, low-value solutions slowly and at high cost.

Am I trying to advocate for a separate architecture team? No. Take a moment and think about Conway's Law: the wrong organization leads to the wrong architecture, the wrong architecture leads to the wrong technology - and the consequence of that is a failing business.

Architecture is bi-directional. The right organization depends as much on technical choices as vice versa. We need a closed feedback loop between how we develop, and how we organize ourselves.
In many companies, the architectural feedback loop is utterly broken - which is why they do with 50 people what could be done with 10.

One of the key organizational failures that leads to the need for "Scaling Agile" is that architecture is either disconnected from workplace reality, or not even considered important. By architecting both our organizational system and our technology to minimize handovers, communication chains and process complexity, many of the questions which cause managers to ponder the need for "Scaling Frameworks" are answered - without adding more roles, events or cadences.

This form of architecture doesn't happen in ivory towers, and it doesn't require fancy tools - it happens every day, in every team, and it either moves the organization in a better direction, or in a worse one.


When was the last time you actively pondered how technical and organizational choices affect one another, and used that to make better choices in the other domain?

Monday, August 8, 2022

Make - or Buy?

Determining which systems, components or modules we should "Make" and which we should "Buy" (or, by extension, adopt from Open Source) is a challenge for every IT organization. Even when there's a clear vote from management or developers in favor of one option, that vote is often formed from a myopic perspective: managers prefer to "Buy" whatever they can, whereas hardcore developers prefer to "Make" everything. Neither is wise.
But how do we discern?

There are a few key factors at play here:


Availability: When there's an affordable, ready-made solution, "Buy" to avoid reinventing the wheel. Be sure that "ready" means ready and that "affordable" has no strings attached.
Uniqueness: You need to "Make" anything that's unique to your business model.
Adaptability: When there's only a small need for change and customization, "Buy" is preferable. Never underestimate "a small change."
Sustainability: "Buy" only when initial cost plus lifecycle cost is lower. Include migration and decommissioning costs.
Skill: If you need specialists that you don't have and won't have, "Buy" from someone who does.
Dependency: If your business would have to shut down when the solution becomes unavailable, "Buy" puts you at your vendor's whim.
Write-off: You can "Buy" to gain speed even when all indicators favor "Make" - if, and only if, you're willing to write off everything invested into the "Buy" solution.

Choose wisely - the answers are often not as obvious as they seem.
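If you want to make that weighing more explicit, a simple scoring helper can structure the conversation. The sketch below mirrors the factors from the list above (except Write-off, which is a conscious override rather than a score); all weights and example values are illustrative assumptions, not a recommendation:

```python
# Illustrative sketch: turn the Make-or-Buy factors into a structured conversation.
# Scores range from -2 (strongly favours "Make") to +2 (strongly favours "Buy");
# the example values below are assumptions for the sake of the example.

FACTORS = ["availability", "uniqueness", "adaptability",
           "sustainability", "skill", "dependency"]

def make_or_buy(scores: dict[str, int]) -> str:
    total = sum(scores.get(factor, 0) for factor in FACTORS)
    return "Buy" if total > 0 else "Make"

# Example: a ready-made solution exists, but the capability is core to the business
print(make_or_buy({
    "availability": 2,    # affordable, ready-made solution exists
    "uniqueness": -2,     # unique to our business model -> favours "Make"
    "adaptability": -1,   # we expect more than "a small change"
    "sustainability": 1,  # lifecycle cost of "Buy" looks lower
    "skill": 1,           # we lack (and won't hire) the specialists
    "dependency": -2,     # vendor lock-in would threaten the business
}))  # -> "Make"
```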

Friday, July 22, 2022

U-Curve Optimization doesn't apply to deployments!

Maybe you have seen this model as a suggestion for how to determine the optimum batch size for deployments in software development? It's being propagated, among other places, on the official SAFe website - unfortunately, it starts people off on the wrong foot and nudges them toward doing the wrong thing. Hence, I'd like to correct this model -


In essence, it states that "if you have high transaction costs for your deployments, you shouldn't deploy too often - wait for the point where the cost of delay is higher than the cost of a deployment." That makes sense, doesn't it?

The cause of Big Batches

Well - what's wrong with the model is the curve. Let's take a look at what it really looks like:


The difference

It's true that holding costs increase over time - but so do transaction costs, and they increase non-linearly. Anyone who has ever worked in IT will confirm that making a huge, massive change isn't faster, easier or cheaper than making a small change.

The effort of making a deployment is usually unrelated to the number of new features that are part of the deployment - it is determined by the amount of quality control, governance and operational activity required to put a package into production. Again, experience tells us that bigger batches don't reduce the effort for QC, documentation or operations. If anything, this effort is required less often, but bigger batches typically require more tests, more documentation and more operational activity each time - and the probability of incidents rises astronomically, which we can't exclude from the cost of change if we're halfway honest.

Metaphorically, the U-Curve graph argues: "If exercise is tiresome, exercise less often - then you won't get tired so often." The optimum amount of exercise, then, isn't walking to the door for every pizza order - if the trip to the door is too exhausting, you should rather order half a dozen pizzas at once, and then just eat cold pizza for a few days.

Turning back from metaphors to the world of software deployment: it's true that for some organizations, the cost of transaction exceeds the cost of holding - the value produced but not yet available to users is lower than the cost of making that value available. That means the company is losing money while IT sits on undeployed, "finished" software. The solution, of course, can't be to wait even longer before deploying and lose even more money - even if that's exactly what many IT departments do.

As the model shows, the optimum batch size isn't reached when the company is stuck between a rock and a hard place - finding the point where the amount of money lost by not deploying has grown so big that it's worth spending a ton of money on a deployment.


The mess

Let's look at some real world numbers from clients I have worked with. 

As I hinted, some companies have complex, cumbersome deployment processes that require dozens of person-weeks of work, easily costing $50,000+ for a single new version. Due to the sheer amount of time and money involved, this process happens as rarely as possible. Usually, these companies celebrate it as a success when they manage to go from quarterly releases to semiannual releases. But what happens to the value of the software in the meantime?

Assuming that the software produced is worth at least its cost of production (because if it weren't, why build it to begin with?) - if the monthly cost of development is $100k, then a quarterly release frequency means the holding cost is already at $300k, and it rises to over half a million for semiannual releases.

Given that calculation, the U-Curve model would suggest that the optimal deployment frequency is reached when the holding cost hits $50k - which would mean two deployments per month. That doesn't make sense, however: two deployments at $50k each would consume 100% of the monthly budget - for deploying practically nothing.
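Here is the same back-of-the-envelope arithmetic in a few lines of Python, using the illustrative figures from this example ($100k monthly development cost, $50k per deployment) - not universal constants:

```python
# Holding cost vs. transaction cost for the "expensive deployment" client.
MONTHLY_DEV_COST = 100_000   # value produced (and held back) per month
DEPLOYMENT_COST = 50_000     # transaction cost of one big-bang release

for months_between_releases in (0.5, 1, 3, 6):
    holding = MONTHLY_DEV_COST * months_between_releases
    deployments_per_month = 1 / months_between_releases
    budget_share = (DEPLOYMENT_COST * deployments_per_month) / MONTHLY_DEV_COST
    print(f"release every {months_between_releases:>3} months: "
          f"holding cost ${holding:>9,.0f}, "
          f"deployment spend = {budget_share:.0%} of the dev budget")

# Excerpt of the output:
# release every 0.5 months: holding cost $   50,000, deployment spend = 100% of the dev budget
# release every   3 months: holding cost $  300,000, deployment spend = 17% of the dev budget
```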

Thus, the downward spiral begins: fewer deployments, more value lost, declining business case, pressure to deliver more, more defects, higher cost of failure, more governance, higher cost of deployments, fewer deployments ... race to the bottom!


The solution

So, how do we break free from this death spiral?

Simple: when you're playing a losing game, change the rules.

The mental model that deployments are costly, and that we should therefore optimize batch size and deploy only once the holding cost outweighs the cost of a deployment, is flawed. We are in that situation because we have the wrong processes to begin with. We can't keep these processes. We need processes that radically reduce our deployment costs:


The cost of Continuous Deployment

Again, using real world data from a different client of mine: 

This development organization had a KPI on deployment costs, and they were constantly working on making deployments more reliable, easier and faster. 

Can you guess what their figures were? Given that I have anchored you at $50k before, you might think they optimized the process down to maybe $5,000 or $3,000.
No! If you think so, you're off by so many orders of magnitude that it's almost funny.

I attended one of their feedback events, where they reported that they had brought the average deployment cost down from $0.09 to $0.073. Yes - just over seven cents per deployment!

This company made over 1,000 deployments per day, so they were spending about $73 a day, or roughly $1,460 a month, on deployments. Accumulated over a whole quarter, that's still less than $5,000 for three months' worth of software development - and the transaction cost of each single deployment is ridiculously low.
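For comparison, the same arithmetic for the Continuous Deployment shop - assuming roughly 20 working days per month, as in the monthly figure above:

```python
# Accumulated deployment cost at ~1,000 deployments per day and $0.073 each.
COST_PER_DEPLOYMENT = 0.073
DEPLOYMENTS_PER_DAY = 1_000
WORKING_DAYS_PER_MONTH = 20   # assumption behind the monthly figure above

daily = COST_PER_DEPLOYMENT * DEPLOYMENTS_PER_DAY   # $73 per day
monthly = daily * WORKING_DAYS_PER_MONTH            # $1,460 per month
quarterly = monthly * 3                              # ~$4,400 per quarter - still under $5,000
print(f"${daily:,.0f} / day, ${monthly:,.0f} / month, ${quarterly:,.0f} / quarter")
```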

Tell me of anything in software whose holding cost is lower than 7 cents - and then tell me why we're building that thing at all. Literally: 7 cents buys mere seconds of developer time!

With a Continuous Deployment process like this, anything that's worth enough for a developer to reach for their keyboard is worth deploying without delay!

And that's the key message why the U-Curve optimization model is flawed:

Anything worth developing is worth deploying immediately.

When the cost of a single deployment is so high that nothing you develop is worth deploying immediately, you need to improve your CI/CD processes - not figure out how big your batches should be.

If your processes, architecture, infrastructure or practices don't allow for Continuous Deployment, the correct solution is to figure out which changes you need to make so that you can deploy continuously.