Wednesday, October 5, 2022

10 Things a Product Owner shouldn't waste time on

There's quite a bit of confusion about the Product Owner role - and a lot of Product Owners spend most of their time on low-value, or even detrimental, activities, leaving little or no time to succeed in their role.

Here are ten timekillers that a Product Owner shouldn't waste time on:

10 - Writing User Stories

Too many Product Owners are caught up in "writing user stories," at worst force-fitting all kinds of templates, such as the Connextra "As a ... I want ... so that ..." and the Gherkin "Given ... When ... Then" templates. Unfortunately, the better the PO gets at doing this, the more understanding they amass in their own head before transferring information to the developers. At best, the developers are degraded to a "feature factory," and at worst, they no longer understand the what or the why, because someone else did the thinking for them. A PO is a single point of failure and a bottleneck in Scrum, so they should offload as much as possible of what could go wrong.

9 - Defining Implementations

Product Owners with technical aptitude, especially, quickly fall into the trap of spending a lot of time explicitly defining the "How" of the solution's implementation. Not only do they thus assume a Scrum Developer role, they also disempower and disenfranchise their team. In a great Scrum team, the Product Owner should be able to rely on their developers for implementation - the PO can limit themselves to discovering the relevant problem statements.

8 - Writing Acceptance Criteria

Probably the biggest time sink for Product Owners is detailing out all Acceptance Criteria for all Backlog Items to be "ready" for the Sprint. Where Acceptance Criteria are needed, they should be defined collaboratively, using a Pull mechanism (i.e., developers formulating them, then verifying with the Product Owner).

7 - Ticket Details

Depending on which ticket system you're using, a lot of details are required to make a ticket "valid." That could include relations to other tickets, due dates, target versions - none of these are required for Product Ownership. They're part of the development process, and belong to the Developers. (Side note: Sometimes, Scrum Masters also do these things - they shouldn't have to do it, either.)

The items 10-7 are all indicators that the Product Owner is misunderstood as an Analyst role - which is a dangerous path to tread. By doing this, the PO risks losing sight of the Big Picture, leading the entire Scrum Team off the wrong tack, potentially to obsolescence.

6 - Obtaining precise Estimates

Estimation in and of itself is a huge topic, and some organizations are so obsessed with the precision of their estimates that they completely forget there's no such thing as a "precise estimate." As I like to say, "If we knew, they wouldn't be called Estimates, but Knows." Estimation should take close to no time at all, and if a Product Owner finds themselves spending significant amounts of time on getting better estimates, something is seriously out of tune. Try probabilistic forecasting instead.
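As a rough sketch of what probabilistic forecasting can look like: instead of asking for better estimates, sample the team's historical throughput and report a confidence range. The throughput numbers below are made up for illustration.

```python
import random

# Hypothetical historical throughput: items completed in each past Sprint.
history = [4, 7, 5, 6, 3, 8, 5]

def forecast_sprints(backlog_size, history, runs=10_000):
    """Monte Carlo forecast: how many Sprints until backlog_size items are done?"""
    results = []
    for _ in range(runs):
        remaining, sprints = backlog_size, 0
        while remaining > 0:
            remaining -= random.choice(history)  # sample a past Sprint's throughput
            sprints += 1
        results.append(sprints)
    results.sort()
    # Report the 50th and 85th percentile outcomes instead of one "precise" number.
    return results[runs // 2], results[int(runs * 0.85)]

p50, p85 = forecast_sprints(30, history)
print(f"50% confident: {p50} Sprints, 85% confident: {p85} Sprints")
```

The whole forecast takes seconds to run and minutes to set up - a far better investment than yet another estimation meeting.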

5 - Planning for the team

Team Planning serves three purposes: getting a better mutual understanding, increasing clarity, and obtaining commitment on the team's Sprint Goal. Many Product Owners who used to work in project management functions fall into the trap of building plans for the team to execute. This defeats every purpose of the Sprint Planning event. The Product Owner's plan is the Backlog, which, combined with whatever sizing information they have, becomes the Product Roadmap. Content-level planning is a Developer responsibility.

4 - Accepting User Stories

A key dysfunction in many teams is that the Product Owner "accepts" User Stories, and is the one person who will mark them as "Done." Worst case, this happens during Sprint Review. Long story short: when the team says it's "Done," it should be done - otherwise, you have trust issues to discuss, and you might have had the wrong conversation about benefit and content during Planning. Acceptance is either part of the technical process, i.e. development, or something that relates to the user - that is, developers should negotiate with users. The Product Owner is not a User Proxy.

3 - Tracking Progress

Yet another "Project Manager gone Product Owner" antipattern is tracking the team's progress. A core premise of Scrum is that developers commit to realistic goals that they want to achieve during a Sprint. The Product Owner should be able to trust that at any time, the most important items are being worked on, and the team is doing their best to deliver value as soon as possible. Anything else would be a trust issue that the Scrum Master should address. At a higher level, we already have very detailed progress tracking in Sprint Reviews, where we see goal completion once per Sprint. If teams can reliably do that, this should suffice - otherwise, we have bad goals, and that is what the PO should fix.

2 - Generating Reports

Reporting is a traditional management exercise, but most reports are waste. There are three kinds of key reports:

  • In-Sprint Progress Reports - as mentioned above, these are pretty worthless in a good team.
  • Product Roadmap Reports - these should be a simple arrangement of known and completed mid-term goals, presented in the Sprint Review for discussion and adjustment.
  • Product Value Reports - these can be created by telemetry and should be an (ideally automated) feature of the Product itself.

Question both the utility of reports and the time invested into reporting. Reports that provide valuable information with little to no effort are good. Others should be put under scrutiny.

1 - Bridge Communication

The final, biggest, and yet most common antipattern of the Product Owner is what I call "Bridge Communication" - taking information from A and bringing it to B. Product Owners should instead build decentralized networks, connecting developers and stakeholders directly, avoiding the "Telephone Games" that come with information loss and delay.

When the Product Owner has their benefit hypothesis straight, developers can take care of the rest. Developers can talk with stakeholders and obtain user information by themselves. A Product Owner shouldn't even be involved in all the details - if they tried to be, they'd constantly find their calendar crammed, and they'd become a blocker to the team's flow of value - the opposite of what they should be doing!

The Alternative

(About half of the points in this article describe the SAFe definition of a PO, but that's an entirely different topic in and of itself)

After having clarified what a PO should not do, let's talk briefly about what a better investment of time looks like:

A Product Owner's key responsibility is to maximize the value of the product at any given point in time. That is, at any time, the Product should have the best Return on Investment - for the amount of work done so far, the Product should be as valuable as possible. That requires the Product Owner to have a keen understanding of what and where the value is. For this, the PO must spend ample time on market research, stakeholder communication and expectation management.

From this, they obtain user stories - which are indeed just stories told by users about problems they'd like to have addressed by the Product. The Product Owner turns stories into benefit hypotheses - that is, the benefit they'd like to obtain, either for the company or the userbase. They then cluster benefit hypotheses into coherent themes: Sprint and Product Goals. These goals then need to be communicated, aligned and verified with stakeholders. By doing this successfully, the Product Owner maximizes the chances that their Product succeeds - and the impact of their own work.

The Product Owner can free up time by minimizing the time spent on implementation. Successful Product Owners let their development team take care of all development-related work (including Analysis, Design and Testing) and trust the team's Definition of Done. That is, their only contact with Work in Process should be renegotiating priorities when something goes out of whack, a value hypothesis is falsified, or new information invalidates the team's plan.

Monday, September 12, 2022

Cutting Corners - think about it ...

I was literally cutting corners this weekend while doing some remodeling work, and that got me thinking ...

Cutting corners:

  • is always a deliberate choice
  • makes things look better to observers
  • is what you don't want others to see
  • doesn't require expertise
  • provides a default solution when you see no alternative
  • might be the most reasonable choice
  • requires more work than not doing it
  • is expensive to undo

So - try having a conversation: where are you cutting corners, why are you doing it - and do you know how much it costs? Which alternatives do you have? What might another person do differently?

Wednesday, September 7, 2022

Dealing with limiting beliefs

We often encounter that Limiting Beliefs are holding us back from achieving the goals we want to achieve, from doing what is right, from becoming who we want to be. So - if we know this, why aren't we changing our beliefs? Because, very often, our beliefs define who we are, and change is hard. But there is hope. What could we do?

Limiting Beliefs

Let's start by defining limiting beliefs - a belief confining us, or reducing our options in some way. We all hold limiting beliefs, and there are some of them that we shouldn't even change. So - when exactly are limiting beliefs an issue? A simple and quick answer: when we should be doing something that's hard or impossible because of a specific belief we subscribe to.

Let's use an example to illustrate our case:

Say, Tom is a manager, and he believes: "Developers can't test their own software." This belief is limiting, because it blocks all beliefs, decisions and actions built on the idea that "developers do test their own software."

The problem with limiting beliefs

As long as Tom holds this belief, he can't support the ideas of, for example, TDD or Continuous Delivery, because these are in conflict with his belief. And beliefs aren't like clothes - we can't change them at whim. Here's what we're dealing with:

Belief networks

Limiting beliefs don't simply stop one change; they are often part of a complex web of other beliefs that reinforce the limiting belief, and which would be incomplete, incoherent or even inconsistent if that limiting belief was changed - so we can't just replace one belief without examining its context: "Why do you hold this belief?"

Supporting beliefs

In Tom's example, we might find other supporting beliefs - such as the Theory X idea, "Without being controlled, developers will try to sneak poor quality into Production, and then we have to deal with the mess."


Tom is probably a reasonable person, and his belief was most likely anchored by a past experience - there were major incidents when developers did cut corners, and these incidents forced Tom to adopt a policy of separating development and test, and that stemmed the tide.

Negative hypothetical

Let's ask Tom, "What would happen without a separation of development and test?" - and he'd most likely refer back to his anchor experience, "We would have major incidents and wouldn't get any more work done because of continuous firefighting." - and it's hard to argue his case, because it's consistent with his experience.

Conjunction Fallacy

Let's ask Tom an inconspicuous question to figure out what Tom thinks is more likely: "Which scenario do you think is more probable: that a developer creates a mess, or that a developer who tests their own code creates a mess?" - Tom will probably answer that it's the latter. This, however, is fallacious, because developers testing their own code are a subset of developers, a special case: if that was Tom's answer, he would (probably unknowingly) subscribe to the idea that developer tests increase the probability of poor results!

Confirmation Bias

Now, let's assume that we manage to convince Tom to make an experiment and let developers take control of quality - we're all human, and we all make mistakes. Tom will feel that the first mistake developers make confirms his belief: "See - we can't. I told you so."

Selection Bias

Of course, not everything an autonomous developer will deliver is going to be 100% completely broken, but Tom will discount or dismiss this, because "what matters is the mess they created and that we didn't prevent that from happening." - Tom will most likely ignore all the defects and incidents that he currently has to deal with despite having a separate Test Department because these aren't affirming his current belief.

Changing limiting beliefs

Given all these issues, we might assume that changing beliefs is impossible.

And indeed, it's impossible to change another person's beliefs. As a coach, we can't and shouldn't even try to do this: it's intrusive, manipulative and most likely not even successful. Instead, what we can do is: support the individual holding a limiting belief in going beyond the limits of their current beliefs.

Here's a process pattern we could use to help Tom get beyond his limiting belief:

1 - Write down the limiting belief
When you spot a critical limiting belief in coaching, write it down. Agree with the coachee that this is indeed the limiting belief they're holding.

2 - Ascertain truth
Truth is a highly subjective thing; it depends on beliefs, experiences and perception. What we want here is not "The truth," but what the coachee themselves asserts to be true: "Do you believe this is certainly true?" - "What makes you so sure it's true?" - "Could there be cases where this isn't true?"

This isn't about starting an argument, it's about getting the person to reflect on why they're subscribing to this limiting belief.

3 - Clarify the emotional impact

Let's ask Tom, "What does holding this belief do to you?" - and he may answer: "I know what I need to do, that gives me confidence." - but likewise: "I am upset that we can't trust developers on their quality."

We hold onto beliefs both because and despite how they affect us. There's always good and bad, and we often overlook the downsides. Most likely, Tom has never considered that he's carrying around some emotional baggage due to his belief. Until Tom comes to realize that this belief is actually limiting him, and also negatively affecting him, he has no motivation to change it.

4 - Clarify consequences

 Next, we'd like to know from Tom where the limiting belief will put him in the long term: "When we look back, 10 years from now - where will you be if you keep this belief?"

We would like Tom to explore the paths he can't go down because of his limiting belief - for example, "We still won't have a fully automated Continuous Deployment - and I will be held responsible for this." Tom needs to see that his current belief is going to cause him significant discomfort in the future.

5 - Surface the Cost of Not Changing

We're creatures of habit, and not changing is the default. We first and foremost see the cost of change, because that's immediate and discomforting. And we ignore the cost of not changing, so our default would be that we have no reason to change anything.

Tom must see the costs of persevering in his current beliefs, so we ask: "What's the cost - to you - in 10 years, if you don't change this belief?" - a mindful Tom might realize that he'll get passed up for career opportunities, or might even get replaced by someone who will bring new impulses. The more vivid Tom can paint the upcoming pain, the more determined he will be in wanting to change.

And that's the key: As long as Tom himself has no reason to change his belief, he won't. But we can't tell him what his reasons should be. Tom has to see them by himself, and in a way that is consistent with his other beliefs.

6 - Paint a brighter future

Tom may now be depressed, because in his current belief system, he's doomed: there's no hope. So let's change Tom's reality. Let's ask him, "If you change this belief, what would you be and do?" - Tom might be skeptical, but will tell us some of the ideas on his mind: "I'd give devs permission to test their own code." - "I wouldn't enforce strict controls on developers." - "I wouldn't be known as the only person in this company insisting on stage-gating."

We can then follow this up with, "How would you feel if this could be you?" - if we get positive responses like, "Less stressed, more appreciated" - we're moving in the right direction. If we get negative responses like, "Stupid, Unprofessional" - then there's another, deeper rooted limiting belief and we have to backtrack.

7 - Redefine the belief by its opposite

Let's ask Tom, "What's the opposite of this belief?" - and Tom would answer, "Developers can test their own code." Tom needs to write this down on a card, and keep it with him all the time.

8 - Reinforce the new belief

Every day, Tom should read this card and look for evidence that this opposite belief is true. For example, Tom can find out which people hold this opposite belief, and how it works for them. 

At a minimum, Tom should just take a minute and sit back in calm, take out the card and read it to himself - and then repeat this new belief to another person.

As coach, we can challenge Tom to repeat the new belief back to us frequently, and to provide small stories and anecdotes about what he has said and done based on this different belief.

9 - Reflection

After one month, reflect with Tom what difference thinking and acting based on this opposite belief has made, and how often he lapsed back into thinking and acting based on his limiting belief. Under ideal circumstances, Tom will have success stories based on his new belief - these are a great basis for reflecting whether this new belief can serve him better than his former, limiting belief.

Even if Tom sees no difference, he already has evidence that his original belief may not be true.

If Tom is still struggling, he may need more time to be convinced. 

Closing remarks

Even with a formal process for belief change, we're not guaranteed to rewire or reeducate others. We respect and enjoy freedom of thought and differences in belief, and the best we can do is highlight consequences, reinforce and provide feedback.

If we see that people choose to cling to old beliefs and habits despite all our attempts at supporting them, we have to ask at a meta level what the difficulties are, and whether our support is even desired. We're not in the business of messing with other people's heads - we're in the business of supporting them in being more successful at achieving what they want, and in coming to realize what that actually is.

Friday, September 2, 2022

Microhabits - small action, big impact

Let's talk about #microhabits - the small things that don't seem to make a difference at all in the short term, yet set you on a long-term trajectory.

What are microhabits?

Microhabits are actions that take nearly no time, seem to have a very limited scope, and hardly seem worth mentioning - yet they set you on a compounding trajectory. Many years after adopting a microhabit, the people who adopted it are worlds apart from those around them.

Here are some examples of software development microhabits:

  • Appropriately naming stuff
  • Fixing typos
  • Refactoring
  • Making sure the code is easily testable
  • Adding important unit tests
  • Generally keeping code readable and workable
  • Creating a working build at least a few times a day

No excuses

I often hear, "This was an emergency," or "This was just a demo," or "There was time pressure." These are supposed justifications for not doing the things above. In the working world, there will always be some stress, deadline or emergency lurking around the next corner. Everything else is Cockaigne.

Here's the thing, though: Microhabits become "second nature" and it's more effort to break a habit than to pursue it, so we can't argue that pressure is a reason to do something slower, more complex and less routine than what we'd normally do.

People with good coding microhabits will pursue their habit and keep their code high quality regardless of circumstance. Simply because it's a habit. An important realization about habits: they'll never form if you constantly interrupt them - so consistency is key.

Form the right microhabits today!

Which actions, when done consistently over many years, will result in a codebase you'd love to work with? Adopt these, and keep doing them consistently.

And which would result in a codebase you'd loathe? Stop these, and avoid them consistently!

If you want to do a facilitated Retrospective on the topic, you can use this simple template:

Tuesday, August 16, 2022

TOP Structure - the Technology Domain

 Too often, organizations reduce the technical aspect of software development to coding and delivering features.

This, however, betrays a company that hasn't understood digital work yet, and begs the question of who, if anyone, is taking care of:

Technology is the pillar of software development that might be hidden in plain sight


Are you engineering your software properly, or just churning out code? Are you looking only at the bit of work to be done, or how it fits into the bigger picture? Do you apply scientific principles for the discovery of smart, effective and efficient solutions? How do you ensure that your solutions aren't just makeshift, but will withstand the test of time?


What do you turn into code? Only the requirement, or also things that will help you do your work easier, with higher quality, and lower chance of failure? Do you invest into improving the automation of your quality assurance, build processes, your deployment pipeline, your configuration management - even your IDE? How many things that a machine could do is your company still doing by hand, and how much does that cost you over the year - including all of those "oops" moments?


Once you delivered something - how do you know that it works, it works correctly, is being used, is being used correctly, has no side effects, and is as valuable as you think it was? Do you make telemetry a standard of your applications, or do you have reasons for remaining ignorant about how your software performs in the real world?

All of the items above cost time and require skills. Are you planning sufficient capacity to do these things in your work, or are you accumulating technical debt at a meta level?

Think for a minute: How well does your team balance technological needs and opportunities with product and organizational requirements?

Friday, August 12, 2022

TOP Structure - the Product Domain

Many companies misunderstand Product Ownership - or worse: Product Management - to be nothing more than managing the backlog of incoming demand. While that work surely needs to be done, it's the last thing that defines a successful Product Owner - "there's nothing quite as useless as doing efficiently that which shouldn't be done at all," which is what often happens when teams implement requests that are neither valuable, useful nor good for their product.

To build successful products, we need to continuously ask and answer the following questions:

The Product Domain is the third core pillar in the TOP Structure


1. What's the vision of our product, how close are we, and should we keep it? How does our product make our users' lives better? Where are we in the Product Lifecycle, and what does that mean for our product strategy? Do we have what it takes to take the next step?


2. What does our product stand for, and what not? Will adding certain features strengthen or dilute our product? Are we clear on who's our target audience, who's not - and why? Do we want to expand, strengthen or shift our user base?


3. What's the problem we'd like to solve? How big is this problem? Who has it? Is it worth solving? Which solution alternatives exist, is our product really the best way of solving it?

A weak Product Pillar leads to a weak product, which limits opportunities to make the product valuable and profitable - which quickly leads to a massive waste of time and money in product development, whereas a strong Product Pillar maximizes the impact of product development efforts.

Check your own team - on a scale from 1 to 10, how easily and clearly can you answer the questions above?

Wednesday, August 10, 2022

TOP Structure - the Organizational Domain

 It sounds tautological that every organization needs organization - and yet, most companies are really bad at keeping themselves organized, and it hasn't gotten better with the advent of Remote Work.

Although it's technically correct that organization is non-value adding, it is essential to get organization right:

The Organizational Domain is the second core pillar 
in the TOP Structure


Do we have the right people in the right places, are they equipped and do they have the necessary support to succeed? People aren't just chess pieces we can freely move around on an org chart - they're individuals with needs and desires, and if we don't take care of our people, performance will decline.


Can our people collaborate efficiently and effectively? Are the right people in touch with each other? How much "telephone game" are we playing? Do we have policies that cause us to block one another? Do we optimize for utilization of individuals, or getting stuff done?


Do we get genuine learning from events, or are we continuously repeating the same mistakes? Do we have functioning feedback loops? Are we figuring out the levers for meaningful change, and do we turn all of this into action? And do we only focus on how we execute, or also what we work on, and how we think?

Why Organization often doesn't work

Especially project organizations and large "Programs" commonly neglect investing into working with people, improving collaboration or creating a learning environment.

Even "Agile" environments often delegate the responsibility for organization to the Scrum Master, although none of the items mentioned above can be done by a single person on a team - they're everybody's job: team members, support roles and management alike.

When the Organizational pillar isn't adequately represented, we quickly accumulate "organizational debt" - an unsustainable organization that becomes more and more complex, costly, slow, cumbersome and unable to deliver satisfactory outcomes.

Check your own team - on a scale from 1 to 10, how well are the above mentioned organizational aspects tended to?

TOP Structure - the domain of Architecture

In software, there's a critical intersect between technology, that is - how we turn ideas into working software - and our organization - that is, who is part of development and how they interact.

Architecture is at the crossover point of Technology and Organization

This domain is Architecture, and it exists one way or another - if we don't manage it wisely, the outcome is haphazard architecture, most likely resulting in an inefficient organization delivering complex, low-value solutions slowly and at high cost.

Am I trying to advocate for a separate architecture team? No. Take a moment and think about Conway's Law: if we have the wrong organization, the consequence is the wrong architecture; the consequence of that is the wrong technology - and the consequence of that is a failing business.

Architecture is bi-directional. The right organization depends as much on technical choices as vice versa. We need a closed feedback loop between how we develop and how we organize ourselves. In many companies, the architectural feedback loop is utterly broken, which is why they're doing with 50 people what could be done with 10.

One of the key organizational failures which lead to the need for "Scaling Agile" is that architecture is either disconnected from workplace reality, or not even considered to be important. By architecting both our organizational system and our technology to minimize handovers, communication chains and process complexity, many of the questions which cause managers to ponder the need for "Scaling Frameworks" are answered - without adding more roles, events or cadences.

This form of architecture doesn't happen in ivory towers, and it doesn't require fancy tools - it happens every day, in every team, and it takes the organization in a better direction, or a worse one.

When was the last time you actively pondered how technical and organizational choices affect one another, and used that to make better choices in the other domain?

Monday, August 8, 2022

Make - or Buy?

Determining which systems, components or modules we should "Make" and which we should "Buy" (or, by extension, adopt from Open Source) is a challenge for every IT organization. Even when there's a clear vote by management or developers in favor of one option, that vote is often formed from a myopic perspective: managers prefer to "Buy" whatever they can, whereas hardcore developers prefer to "Make" everything. Neither is wise.
But how do we discern?

There are a few key factors at play here:

  • Availability - When there's an affordable, ready-made solution, "Buy" to avoid reinventing the wheel. Be sure that "ready" means ready and "affordable" has no strings attached.
  • Uniqueness - You need to "Make" anything that's unique to your business model.
  • Adaptability - When there's only a small need for change and customization, "Buy" is preferable. Never underestimate "a small change."
  • Sustainability - "Buy" only when both initial cost and lifecycle cost are lower. Include migration and decommissioning costs.
  • Skill - If you need specialists that you don't and won't have, "Buy" from someone who does.
  • Dependency - If your business would have to shut down when the solution becomes unavailable, "Buy" puts you at your vendor's whim.
  • Write-off - You can "Buy" to gain speed even when all indicators favor "Make," if - and only if - you're willing to write off everything invested into the "Buy" solution.
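The checklist above is a judgment call, not a formula - but as a rough illustration, you can tally which way each factor points. Everything in this sketch, names and example values alike, is an illustrative assumption.

```python
# Illustrative sketch: tally which option ("make" or "buy") each factor favors.
def make_or_buy(factors):
    """Return the leaning suggested by a simple vote over the checklist factors."""
    votes = {"make": 0, "buy": 0}
    for option in factors.values():
        votes[option] += 1
    if votes["make"] == votes["buy"]:
        return "undecided - dig deeper"
    return "lean make" if votes["make"] > votes["buy"] else "lean buy"

# A hypothetical assessment for one component:
example = {
    "availability": "buy",    # an affordable, truly ready solution exists
    "uniqueness": "make",     # core to our business model
    "adaptability": "buy",    # only small customizations needed
    "sustainability": "buy",  # lifecycle cost is lower
    "skill": "buy",           # we lack (and won't hire) the specialists
    "dependency": "make",     # vendor lock-in would be existential
}
print(make_or_buy(example))  # prints "lean buy"
```

Note that a bare tally ignores how heavily each factor weighs - "dependency" alone can veto a "Buy" - which is exactly why the answers are often not as obvious as they seem.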

Choose wisely - the answers are often not as obvious as they seem.

Friday, July 22, 2022

U-Curve Optimization doesn't apply to deployments!

Maybe you have seen this model as a suggestion for how we should determine the optimum batch size for deployments in software development? It's being propagated, among other places, on the official SAFe website - unfortunately, it sets people off on the wrong foot and nudges them toward doing the wrong thing. Hence, I'd like to correct this model -

In essence, it states that "if you have high transaction costs for your deployments, you shouldn't deploy too often - wait for the point where the cost of delay is higher than the cost of a deployment." That makes sense, doesn't it?

The cause of Big Batches

Well - what's wrong with the model is the curve. Let's take a look at what it really looks like:

The difference

It's true that holding costs increase over time, but so do transaction costs. And they increase non-linearly. Anyone who has ever worked in IT will confirm that making a huge, massive change isn't faster, easier or cheaper than making a small change.

The amount of effort in making a deployment is usually unrelated to the number of new features in the deployment - the effort is determined by the amount of quality control, governance and operational activity required to put a package into production. Again, experience tells us that bigger batches don't cause less effort for QC, documentation or operations. If anything, this effort is required less often, but bigger batches typically require more tests, more documentation and more operational activity each time - and the probability of incidents rises astronomically, which we can't exclude from the cost of change if we're halfway honest.

Metaphorically, the U-Curve graph could be interpreted as, "If exercise is tiresome, exercise less often - then you won't get tired so often." Don't walk to the door for each pizza delivery; instead, order half a dozen pizzas at once if the trip to the door is too exhausting, and then just eat cold pizza for a few days.

Turning back from metaphors to the world of software deployment: It's true that for some organizations, the cost of transaction exceeds the cost of holding. This means that the value produced but unavailable to users is lower than the cost of making that value available. And that means that the company is losing money while IT sits on undeployed, "finished" software. The solution, of course, can't be to delay deployment even longer and lose even more money - even if that's what many IT departments do.

As shown in the model, the optimum batch size isn't achieved when the company is stuck between a rock and a hard place - finding the point where the amount of money lost by not deploying is so big that it's worth spending a ton of money on making a deployment.

The mess

Let's look at some real world numbers from clients I have worked with. 

As I hinted, some companies have complex, cumbersome deployment processes that require dozens of person-weeks of work, easily costing $50,000+ for a single new version. It's obvious that due to the sheer amount of time and money involved, this process happens as rarely as possible. Usually, these companies celebrate it as a success when they're able to go from quarterly releases to semiannual releases. But what happens to the value of the software in the meantime?

Just assuming that the software produced is worth the cost of production (because if it wasn't, why build it to begin with) - if the monthly cost of development is $100k, then a quarterly frequency means that the holding cost is already at $300k, and it goes up to over half a million for semiannual releases. 

Given that calculation, the U-Curve logic would suggest deploying whenever the holding cost reaches $50k - that is, twice per month. That doesn't make sense, however: at two $50k deployments per month, 100% of the $100k budget would flow into deployments - and nothing into development.
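The arithmetic can be written out as a quick sketch, using only the figures already given above ($100k monthly development cost, $50k per deployment):

```python
# Figures from the example above: $100k/month development cost,
# $50k transaction cost per deployment.
monthly_dev_cost = 100_000
deployment_cost = 50_000

def holding_cost(months_between_releases: float) -> float:
    # Value produced but not yet available to users since the last release.
    return monthly_dev_cost * months_between_releases

print(holding_cost(3))   # quarterly releases:  300000
print(holding_cost(6))   # semiannual releases: 600000

# The U-Curve logic says: deploy once holding cost reaches deployment cost,
# i.e. every half month - but two $50k deployments per month would consume
# the entire $100k budget.
months_between_releases = deployment_cost / monthly_dev_cost  # 0.5
print(2 * deployment_cost == monthly_dev_cost)  # True: all budget on deploying
```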

Thus, the downward spiral begins: fewer deployments, more value lost, declining business case, pressure to deliver more, more defects, higher cost of failure, more governance, higher cost of deployments, fewer deployments ... race to the bottom!

The solution

So, how do we break free from this death spiral?

Simple: when you're playing a losing game, change the rules.

The mental model that deployments are costly and we should optimize our batch size to only deploy when the cost of deployment outweighs the holding cost is flawed. We are in that situation because we have the wrong processes to begin with. We can't keep these processes. We need to find processes that significantly reduce our deployment costs:

The cost of Continuous Deployment

Again, using real world data from a different client of mine: 

This development organization had a KPI on deployment costs, and they were constantly working on making deployments more reliable, easier and faster. 

Can you guess what their figures were? Given that I have anchored you at $50k before, you might think they optimized the process down to maybe $5,000 or $3,000.
No! If you think so, you're off by so many orders of magnitude that it's almost funny.

I attended one of their feedback events, where they reported that they had brought down the average deployment cost from $0.09 to $0.073. Yes - less than a dime!

This company made over 1000 deployments per day, so they were spending about $73 a day, or roughly $1,460 a month, on deployments. Accumulated over three months, that's still less than $4,500 in deployment costs for a full quarter's worth of software development - and the transaction cost of each single deployment is ridiculously low.

Tell me of anything in software whose holding cost is lower than 7 cents - and if there is such a thing, why are we building it at all? Literally: 7 cents buys mere seconds of developer time!

With a Continuous Deployment process like this, anything that's worth enough for a developer to reach for their keyboard is worth deploying without delay!

And that's the key message why the U-Curve optimization model is flawed:

Anything worth developing is worth deploying immediately.

When the cost of a single deployment is so high that anything developed isn't worth deploying immediately, you need to improve your CI/CD processes, not figure out how big you should make that batch.

If your processes, architecture, infrastructure or practices don't permit Continuous Deployment, the correct solution is to figure out which changes you need to make so that you can continuously deploy.

Thursday, July 7, 2022

How Equivocation destroys Agile

"Scrum engages groups of people who collectively have all the skills and expertise to do the work and share or acquire such skills as needed" - What does that even mean?

This is just a door-opener for one of many equivocation fallacies which are destroying the entire "Agile" space. Let's dig into it.


We're talking about "equivocation" when a term is used in multiple inconsistent ways, which thereby invokes false images in the minds of the audience. A simple example of how quickly this can get out of hand: "White Supremacy is racism. Since the White House is also White, it's also a sign of institutional racism." - it's clear that the term "White" means two different things (in one case, a mindset - in the other, a description), but it's already hard to get the thought out of our heads.

Equivocations in the "Agile" space are much more subtle, thus harder to spot, but potentially no less damaging. So, let's get into some of the equivocations that are permeating "Agile." (and I purposefully don't dig into how "Agile" itself doesn't have a single definition people would agree on)


Customer

Based on Investopedia, "A customer is an individual or business that purchases another company's goods or services. Customers are important because they drive revenues; without them, businesses cannot continue to exist."

And yet, we have this concept of "internal customers" popping up left and right.

The first Agile Principle reads, "Our highest priority is to satisfy the customer through early and continuous delivery of valuable software." So, every Agile team needs a customer - and who's that customer for most agile teams? Well, they deliver Business Support Systems, so they say, "The business is our customer!" Plugged into the Investopedia definition, that's circular logic: the business is important because, without the business, the business can't continue to exist. Does that make sense?

It doesn't even end there: Modern software is complex. In larger companies, we often see platform teams building platforms that other teams work on. These say, "The other development teams are our customer!" - how much more profit would we make when we get more and more development teams asking for services from our platform team while nothing in our company's relationship with the open market changes?

To quote Wikipedia, "Leading authors in management and marketing, like Peter Drucker, Philip Kotler, W. Edwards Deming, etc., have not used the term "internal customer" in their works. They consider the "customer" as a very specific role in society which represents a crucial part in the relationship between the demand and the supply. Some of the most important characteristics of any customer are that: any customer is never in a subordination line with any supplier; any customer has equal positions with the supplier within negotiations, and any customer can accept or reject any offer for a service or a product."

The equivocation fallacy is a dual use of the word "customer," in one sense meaning "entity on whom a business depends as a source of revenue" - and in another sense meaning "entity consuming the outcomes of the work done by a group of people."

Why is that a problem? Demand.
When interacting with the open market, we would like to maximize demand - that is, we would like to have more customers constantly requesting more of our product, and more features from our product - to get them addicted for more, more and more. That enables growth, and makes our company sustainable.
On the other hand, internal demand is undesirable, because it diverts resources away from serving the real customer: internal entity A doing work for internal entity B means that B must spend time with A, which B doesn't have for the customer - and A must spend time to do something for B. That costs money. This money isn't profit any more, and the time B spends with A is profit not generated: internal demand costs twice. We therefore would like to minimize internal demand.
Equivocating "customers" to include both internal and external people opens the doors for a destructive game: maximizing internal demand. It makes internal service teams feel good about themselves and their work, while the inner proceedings bleed the company dry of critical resources.

By clearly differentiating between "customers" as "those who will determine whether they will pay for what we do" and "consumers" as "those who need what we do," we avoid this pitfall.

We won't go as deeply into other equivocation fallacies - just teasing what they are, and the damage that they're doing.


Expert

I chose that little quote from the Scrum Guide as an opener on purpose, because we hear very often that "developers are the experts, trust them."

The equivocation fallacy: Scrum requires developers to be experts in order to function properly. Many Scrum folks argue that being a developer means they're an expert.

This harms the developers when they don't get the support they would sorely need, and it harms the company when the team doesn't perform properly.


Owner

Actual Owners, according to Investopedia, are "a person or entity that receives the benefits of ownership." (emphasis added)
However, when you look at Scrum's Product Owner - how often are these individuals the real beneficiaries of the stuff they allegedly "own?" They have a bunch of accountabilities and responsibilities, but rarely own anything.

The equivocation? "Being a beneficiary of" and "Being accountable for."

The consequence?
While on the one hand, an owner can do whatever they please with their property, they tend to have a self-interest in maximizing their benefits. Without this self-interest, a merely accountable "Owner" has nothing to maximize - their best move is to minimize the effort they have to put into whatever they are supposed to "own."
Similar things apply to other forms of "Ownership" - process ownership, code ownership, system ownership or anything else: let people reap the benefits of whatever they're supposed to own, or it's going to backfire.


Value

Investopedia defines "Value" as "the monetary, material, or assessed worth of an asset, good, or service." In the "Agile" space, however, there's a notion that a Product Increment is the value produced by the team.
That's a confusion of cause and effect: if the Product Increment has value, then the team has produced value. But merely because the team has produced an increment doesn't mean the increment has value - it could even have negative value.


Value-Added Activity

Not specifically an issue of "Agile" - more a confusion in the entire IT industry. A "value-added activity" is an activity that increases the value of a product or service. Analysis, design, documentation, testing and support do not create any assets; they are therefore "non-value-added activity."
The equivocation in software development is to call all these activities value-adding as well, since they contribute to the creation of value.
Careful: just because an activity is non-value-adding doesn't mean it's not required. It just means it's not increasing the value.

Think of it like this: If you do more of activity X, does the company's total value go up? If not, then X is a non-value adding activity.

The issue?
We would like to maximize our share of value-added activity, and minimize our share of non-value-added activity. By equivocating, for example, testing, to be a value-added activity, we paint a picture that testing is something we should do more of, because hey, it creates value. No. It doesn't: It's an activity necessary to create and maintain value, but all other things equal, more testing doesn't mean we're delivering more value.

Value Stream

One of the latest fads, popularized by SAFe, is the "value stream." The term has been around since the early days of Lean (and the concept was already used by Henry Ford around a century ago.)
Yet, if you'd ask how SAFe's "Development Value Stream" is different from a Software Development Lifecycle Process, people might not know how to answer - "Isn't a value stream just a high level process?"

That's the equivocation fallacy of value streams: a value stream is defined as "the set of value-added activities between customer need and realization of value by the customer" - but once we modify the meaning of "customer," "value" and "value-add," it is nothing other than a process.

The problem?
Clearly, optimized value streams are make-or-break for the success of any business. Everything is subject to an organization's core value streams, and processes need to optimize around them. When we can no longer discriminate between the two and locally optimize a support process at the expense of our actual value stream, we endanger the whole company.

Closing remarks

Equivocations make improvement difficult, because the thing we're talking about may not be the thing we're talking about. Let's get clear first what we really mean when we're talking about something, remove the equivocation, find a proper label that means what it says - and improve on the things that are hidden in plain sight by using the wrong words.

Monday, June 13, 2022

Collaboration Patterns we know from science

Team structures - should be a straightforward enough topic, although in many organizations it isn't. Here are six phenomena you may remember from science class - and how they relate to your organizational structure.

To keep matters simple, this post refers to "entities" - which could either be individuals, teams or entire departments. While the nature of the entity changes, we are concerned with the relationship of the entities with each other. Since some terms have different definitions in different domains, let us refer to the point of origin.


Cohesion

(Origin: Chemistry)

Cohesion is the connection between entities of one substance. Organizational cohesion, thus, is the bonding strength between entities of the same category.


We have organizational cohesion when there is collaboration occurring within one team.

We have poor organizational cohesion when team members act as a group of individuals, picking their own work items.


Adhesion

(Origin: Chemistry)

Adhesion is the strength with which two different entities stick together. Organizational adhesion, accordingly, is the amount of effort it would take to separate two different entities.


We have high organizational adhesion when a process has a complex critical path.

Two business units that serve different customer segments independently have low adhesion.


Covalence

(Origin: Chemistry)

Covalence happens when two atoms share an electron to form a bond, which molds these two distinct entities into one "complete" entity. Within an organization, covalence occurs when two or more entities share resources.


Component ownership causes covalence - let's say team A owns the Customer entity, and team B owns the Contract entity. When B needs to access the Customer, they rely on whatever A provides - whereas when A references the Contract, they rely on whatever B provides.

In situations where the Contract relies on a new or modified attribute of the Customer - such as a consumer credit score - team B must coordinate with team A on how and when the change can be made, and team A might want to store a "previously rejected" attribute that must be provided by team B. Covalence thus means that the inner dealings of A and B become intertwined: an outward-facing change to covalent entities requires that the change works for all of them.


Bridges

(Origin: Chemistry)

A bridge connects two entities to turn them into one common, stable structure. Bridges require covalent bonding between two entities plus the presence of a third entity: the bridge is the entity that connects the other two by being the missing part in both.

Organizational bridges are entities equally bound to two or more other organizational entities to form one entity of higher complexity.


The analyst role is often an organizational bridge - analysts are close to the business from a development perspective and close to development from a business perspective: they are neither, but connect the two entities to turn demand into solutions.


Coupling

(Origin: Physics)

Capacitive coupling occurs when energy transfers between two separated conductors. Organizational coupling thus occurs when two structurally separated entities affect each other's outcomes. We speak of "tight coupling" when either entity could cause blocking interference to the other, and "loose coupling" if the impact is generally uncritical. If there is no interference, we consider the entities uncoupled.


We see tight organizational coupling when the Maintenance Team decides to shut down the Deployment process, thereby incapacitating the development pipeline.

Loose organizational coupling could be the relationship between Marketing and Sales - while they can technically work with or without the other, they do have a performance effect on each other.


Coherence

(Origin: Physics)

Coherence is the ability of a signal to withstand interference. We discriminate between spatial and temporal coherence: Spatial coherence is the ability of a signal to withstand interference over distance, whereas temporal coherence is the ability of the signal to withstand interference over time. In an organization, it's the ability of the information to cross entity boundaries without getting distorted by interfering signals (e.g., from other work items, other projects or line management.) Note that coherence is only relevant in the context of cohesion - incohesive entities who don't work towards a common goal require no coherent signal transmission.


Low spatial coherence would be a process with a lot of "telephone game," where information is modified in each step.

High spatial coherence would be provided by a synchronization event which ensures that all stakeholders have the same understanding on a subject.

Low temporal coherence shows up as deviations from a plan over time, usually caused by unanticipated events.

Thursday, June 9, 2022

Using metrics properly

Getting metrics right is pretty difficult - many try, and usually mess up. The problem?
Metrics require a context, and they also create a context. Without a proper definition of context, metrics are useless - or worse: guide you in the wrong direction.

A Metrics system

Let's say you have a hunch, or a need, that something could - or should - be improved. To make sure you're actually improving, create a metrics system covering the topic. This system should model the organizational system adequately - that is, both simply and sufficiently - and consist of:

  • Primary metrics (things we want to budge)
  • Secondary metrics (things we expect to be related to our primary metric)
  • Indirect metrics (things we expect NOT to budge)
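The three metric categories can be captured in a tiny data structure. This is a sketch; the class and field names are mine, not any standard:

```python
from dataclasses import dataclass

@dataclass
class MetricsSystem:
    primary: list    # things we want to budge
    secondary: list  # things expected to move along with the primary
    indirect: list   # things expected NOT to budge (constraints)

# Instantiated for the time-to-market case discussed in this post:
ttm = MetricsSystem(
    primary=["time-to-market"],
    secondary=["quality"],
    indirect=["overtime"],
)
```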

An example

We start with a problem statement: "Our TTM sucks." Hence, our metrics system starts with "time-to-market" as the primary metric. A common-sense assumption might be that improving TTM will make people do overtime, or that people become sloppy. Thus, we add the secondary metric "quality" - we would like to observe how a change to TTM affects quality - and we set the indirect metric "overtime" as a constraint that people shall not do extra hours.

Systematic improvement

In order to work with your metrics system adequately, there's a common five-step process which is at the core of Six Sigma:


Define

  • Define our problem statement: what problem do we currently face?
  • Define our primary metric.
  • Become clear on our Secondary and Indirect metrics.


Measure

  • Get data to determine where these metrics currently are.
  • Set an improvement target on our primary metric.
  • Predict the effects on secondary metrics.
  • Set boundaries on indirect metrics.


Analyze

  • Understand what's currently going on.
  • Understand why we currently see the unwanted state in the primary metric.
  • Determine what we'd like to do to budge the primary metric.


Improve

  • Make a change.
  • Observe changes to all the metrics.


Control

  • If our Primary metric budged significantly and all other metrics are where we'd expect them to be, our change was successful.
  • If that wasn't the case - we messed up. Backtrack.
  • Determine which metrics we'd like to retain in the future to make sure we're not lapsing back.
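The success check at the end can be sketched as a simple predicate. "Budged significantly" is simplified here to a fixed improvement threshold - an assumption for illustration only:

```python
def change_successful(primary_delta: float,
                      indirect_within_bounds: bool,
                      min_improvement: float = 0.1) -> bool:
    # Success requires BOTH a significant move of the primary metric
    # AND all indirect metrics staying inside their boundaries.
    return primary_delta >= min_improvement and indirect_within_bounds

print(change_successful(0.25, True))    # improvement, constraints held
print(change_successful(0.25, False))   # constraint broken: backtrack
print(change_successful(0.02, True))    # no significant improvement
```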

Metrics are thus always bound to a specific problem you would like to address.

Pitfalls to avoid

Getting metrics systems completely right is challenging, and many organizations struggle with it.

Incomplete metric systems

The most common problem is that we only define primary metrics, which paves the way for building Cobra Farms: we improve one thing at the expense of another, potentially creating an even bigger problem that we just didn't realize.

Red Herring metrics

Another issue is confusion between outcomes and indicators. This is also often associated with a Cobra Farm, but from another angle - we fail to address the actual problem and instead pursue the problem indicated by the metric.

For example, if management wants to reduce the amount of reported defects, the easiest change is to deactivate the defect reporting tool. That reduces the amount of defect reports, but doesn't improve quality.

This is also called "Goodhart's Law:" A metric that becomes a target stops being useful.

Vanity metrics

It's a human tendency to want to feel good about something, and metrics can serve that basic need. For example, we might track the number of hours worked per week. That metric constantly goes up, and it always hits the target. But it's not valuable: it tells us nothing about the quality or value of the work done.

Uncontrolled metrics ("waste")

We often collect data "just in case" - and don't connect any action trigger to it. Take, for example, deployment duration: it's a standard metric provided by CI/CD tools, but in many teams, nothing happens when the numbers rocket skyward. There are no boundaries, no controls, and no actions tied to the metric. If we don't act on the data available, the data might as well not exist.
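What an action trigger could look like, as a minimal sketch - the 10-minute control limit and the alert text are made-up examples:

```python
DEPLOY_DURATION_LIMIT_S = 600  # assumed control limit: 10 minutes

def check_deployment_duration(duration_s: float) -> str:
    # The point is not the number itself, but that crossing the
    # boundary triggers a defined action instead of being ignored.
    if duration_s > DEPLOY_DURATION_LIMIT_S:
        return "alert: investigate pipeline slowdown"
    return "ok"

print(check_deployment_duration(300))  # within bounds
print(check_deployment_duration(900))  # boundary crossed -> act
```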

Bad data

Sometimes, we have the right metric, but we're collecting the wrong data, or we collect it in the wrong way. That could range anywhere from having the wrong scale (e.g. measuring transaction duration in minutes, when we should measure in milliseconds - or vice versa) to having the wrong unit (e.g. measuring customer satisfaction in number of likes instead of NPS) to having the wrong measurement point (e.g. measuring lead time from "start work" instead of from "request incoming").
This data will then lead us to draw wrong conclusions - and any of our metrics could suffer from this.

Category errors

Metrics serve a purpose, and they are defined in a context. To use the same metrics in a different context leads to absurd conclusions. For example, if team A is doing maintenance work and team B is doing new product development: team A will find it much easier to predict scope and workload, but to say that B should learn from A would be a category error.

Outdated metrics

Since we're talking about metric systems rather than individual metrics: when the organizational system on which we measure has changed, our metrics may no longer make sense. Frequently revisiting our measurement system and either discarding or adjusting control metrics which no longer make sense is essential to keep our metric system relevant.

Tuesday, May 31, 2022

Why we need clearly defined goals

A common issue I observe in many organizations: there's no visible goal beyond "Get this work done," and people don't even see the point in "wasting time to set a goal." The problem: many organizations are just tactical engines, generating work and handling exceptions in that work - and most of it has no goal. Goal-setting is not an esoteric exercise, and here's why.

The Hidden Factory

One concept of Lean Management is the "Hidden Factory" - and this concept deserves some kind of explanation. A factory is an institution that applies work to input in order to generate some outputs. So far, so good. A "hidden factory," on the other hand, is a factory within a factory, doing work without generating viable outputs: either nothing - or waste.

To understand the problem of hidden factories, think like a customer.

You get the option to buy your product from one of two providers.

Company A has a straightforward process, turning raw input into consumable output, and Company B has a process that turns the same input into something, then into something else, then into something else, then into the same consumable output.

This extra work makes Company B's product more expensive to produce, without any discernible difference from the product sold by Company A.

Company A and Company B thus sell products which are identical in all aspects - except price. Company B has to charge a premium to cover the extra work they do. Which product would you purchase?

As customers, we do not care how much work is required to do something. All other things being equal, we opt for the cheapest, fastest way of meeting our needs.

And companies are no different here. But how does that relate to goal-setting?

Induced and intermediate work

Many companies are great at inducing work - and once that work has been induced, it becomes necessary, and the people doing it must do it, otherwise the outcome can no longer be achieved.

Let's pick a practical example.
We have chosen to use a Design Process that requires any changes to be made as a sequential series of steps:
  1. Request the element to be changed
  2. Describe the change to be made
  3. Prototype the change
  4. Validate the change
  5. Implement the change
  6. Verify the change
While you may argue that all of these steps are common sense and necessary, this specific process choice has locked us into a process that turns replacing an image into a full day's work - a change that a competent developer could make in a couple of minutes.

How is that relevant to goal-setting?

Confusing Task and Outcome

Referring to our fictional process, an individual may have the specific responsibility of describing changes. Their input is a change request, and their output is a change description. As a customer, I address this organization to say, "I want a new backdrop on my website." Our execution agent of step 2 will say, "I am overburdened. I have too many change requests on my desk. I need someone to help me describe the changes." If we ask them, "Why do you need to describe the changes?" - they might say, "So that they can be prototyped." If we press on and ask, "And why do they need to be prototyped?" - the answer could be, "So that we can validate the change." - which, of course, begs the question, "And then - I get what?" - "An implementable change."

You see where this is going: Everyone has reasons why they do the things they do, and from the way this organization is set up, their reasons are indeed valid. And still, nobody really understands why they do the things they do. 
We should assume that everyone whom we ask should answer, "So that you can get your new backdrop." In many companies, however, that is not the case.

And that's where goals come into play.


Every company has a few - usually very few - first-order goals, and a specific context that provides constraints within which these goals can be realized. Surviving and thriving on the market is a baseline almost all have, and most of the time, the primary goal is to achieve this by means of advancing the products which define the company. That would be an example of a first-order goal.

From that, we get into second-order goals, that is - into derived goals which help us achieve this primary goal. Build a better product. Build the product better. Sell more. Sell faster. Sell cheaper. You name it.

These, of course, would be realized via strategies - which in themselves have multiple subordinate goals. For example: Increase product quality, add features to the product, reduce discomfort with the current product, improve perception of the product in its current state - again, the possibilities are endless.

At some point in the rabbit hole, somewhere deep down in the loop of operational delivery, we may then see a business analyst stating, "I am overburdened. I have too many change requests on my desk. I need someone to help me describe the changes." - but why are they doing it? Are we describing change in order to increase product quality, in order to sell more, or to sell cheaper?
It's easy to realize that "adding people to do the work" may indeed make sense when our goal is to add features to improve our product. And yet, it seems entirely backwards when our goal is to "sell cheaper."

That's why we are setting goals. It allows everyone, on all layers of an enterprise, regardless of their specific responsibility, to quickly and easily determine, "Do the things which I am doing help us achieve our goals?"
If the answer to that simple and straightforward question is, "No" - then this begs two followup questions:
  • Why am I doing things which are not helping this company achieve its goals?
  • What do we need to change, so that I am contributing to our goals?

The impact of goals

Well-defined goals immediately expose the Hidden Factories and induced work, and they set the stage for reducing waste in our processes as well as leading employees to do more meaningful, more important work.

Poorly defined goals - such as "to do X amount of work" - encourage establishing and inflating Hidden Factories, and they set the stage for wasteful processes and unhappy employees who may be doing totally worthless work, without ever realizing.

Undefined goals - or the absence of goals - remove the yardstick by which we can measure whether we are contributing to anything of relevance or merely adding waste. Without a goal, work is meaningless and improvement impossible.

The importance of goals

Goal-setting is important for organizations both large and small in:
  1. Guiding decision-making
  2. Enabling Improvement
  3. Eliminating Waste
While a goal itself doesn't do any of these, a goal sets the stage for these. Once you know your goal, you can take it from there.

Friday, May 27, 2022

Why test coverage targets backfire

Most organizations adopt a "Percent-Test-Coverage" metric as part of their Definition of Done. Very often, it's set by managers and becomes a target which developers have to meet. This approach is often harmful - and here's why:

The good, bad and ugly of Coverage Targets

Setting coverage targets is often intended as both a means of improving the product, as well as improving the process - and getting developers to think about testing.

The good

It's good that developers begin to pay closer attention to the questions, "How do I test this?" as well as, "Where are tests missing?" Unfortunately, that's about it.

The bad

The first problem is that large legacy code bases starting with zero test coverage put developers in a pickle: if I work on a component with 0% test coverage, then any amount of tests I write will still keep that number close to zero. Hence, the necessary compromise becomes not setting a baseline number, but merely asking for the number to increase. The usually envisioned 80+% targets are visions for a distant future rather than something useful today.
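The pickle is plain arithmetic. With illustrative numbers (a 100,000-line legacy code base, 500 newly covered lines - both assumptions for the sake of the example):

```python
# Illustrative numbers, not from any real project:
legacy_lines = 100_000     # existing code with zero coverage
newly_covered_lines = 500  # a solid sprint's worth of new tests

coverage = newly_covered_lines / legacy_lines
print(f"{coverage:.1%}")   # still nowhere near an 80% target
```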

Looking into the practice - as long as the organization is set up to reward minimizing the amount of time invested into unit testing, the outcome will be that developers try to meet the test coverage targets with minimum effort. 

Also, when developers have no experience in writing good tests, their tests may not do what they're supposed to do.

The ugly

There are many ways to meet coverage targets that end up confirming Goodhart's Law:

Any metric that becomes a target stops being useful.

Often, developers will feel unhappy producing tests they create only to meet the target, considering the activity a wasteful addition to the process that provides no benefit.

At worst, developers will spend a lot of time creating tests that provide a false sense of confidence, which is even worse than knowing that you have no tests to rely on.

But how can this happen? Let's see ...

The first antipattern is, of course, to write tests that check nothing.

Assertion-free testing

Let's take a look at this example:

Production code

int divide(int x, int y) { return x/y; }

Test code

print divide(6,2);

When looking at our code coverage metrics, we will see 100% coverage. But the test will always pass - even when we break the production code. Only if we were to inspect the actual test output (manual effort that we probably won't make) would we see what the test really does: there is no automated failure detection, and the test isn't even written in a way that would let us detect a failure. Who would notice if we suddenly got a "2" instead of a "3"? We don't even know which result would have been correct!
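For contrast, here is a minimal sketch (in Python) of the same test with an actual assertion - now a regression in divide makes the test fail loudly instead of silently printing a wrong number:

```python
def divide(x: int, y: int) -> int:
    return x // y  # integer division, mirroring the pseudocode above

# An assertion encodes the expected result; merely printing it never fails.
assert divide(6, 2) == 3
```

The point is not the syntax but the encoded expectation: the test now knows that "3" is correct, so a "2" would be detected automatically.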

Testing in the wrong place

Look at this example:
class Number {
  int x;
  void setX(int val) { x = val; }
  int getX() { return x; }
  void compute() { x = x ? x/x^x : x-x*(x+x); }
}

n = new Number();
n.setX(5);
assert (n.x == 5);
assert (n.getX() == 5);

In this case, we have a code coverage of 75% - we're testing x, we're testing the setter, and we're testing the getter.

The "only" thing we're not testing is the compute function, which is actually the one place where we would expect problems, where other developers might have questions as to "What does that do, and why are we doing it like that?" - or where various inputs could lead to undesirable outcomes.
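As a sketch of what testing the interesting place could look like, here is a Python port of the class above, assuming C-style operator precedence (division binds tighter than XOR). Exercising compute() with several inputs is precisely what documents - and questions - its surprising behavior:

```python
class Number:
    def __init__(self):
        self.x = 0

    def set_x(self, val: int) -> None:
        self.x = val

    def compute(self) -> None:
        # Port of the C-style expression x ? x/x^x : x-x*(x+x):
        # division binds tighter than XOR, so the true branch is (x/x) ^ x.
        if self.x:
            self.x = (self.x // self.x) ^ self.x
        else:
            self.x = self.x - self.x * (self.x + self.x)

n = Number()
n.set_x(5)
n.compute()
assert n.x == 4   # (5/5) ^ 5 = 1 ^ 5 = 4 - surprising, and now visible

n.set_x(0)
n.compute()
assert n.x == 0   # zero branch: 0 - 0*(0+0) = 0
```

Whether 4 is the *intended* result is exactly the conversation such a test should trigger with whoever wrote compute().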

Testing wrongly

Take a peek at this simple piece of code:
int inverse(int x) { return 1/x; }
void printInverse(int x) { print "inverse(x)"; }

assert (inverse(1) == 1);
assert (inverse(5) == 0);
printInverse(5);
assert (print.toHaveBeenCalled());

The issues

There are numerous major problems here, all of which in combination make that test more dangerous than helpful:

Data Quality

We may get a null pointer exception if the method is called without initializing x first, but we neither catch nor test this case.

Unexpected results

If we feed 0 into the function, we get a division by zero. We're not testing for this, and it will lead to undesired outcomes.

Missing the intent

The inverse function returns 0 for every number other than 1 and -1. It probably doesn't do what it's expected to do. How do we know? Is it poorly named, or poorly implemented?

Testing the wrong things

The print function's output is most likely not what we expect, but our tests still pass.


If we rely on coverage metrics, we might assume that we have 100% test coverage, but in practice, we may have very unreliable software that doesn't even work as intended.

In short: this way of testing tests that the code does what it does, not that it does what it should.

Then what?

The situation can be remedied, but not with numerical quotas. Instead, developers need education on what to test, how to test, and how to test well.
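As one sketch of what that education aims at, here is the earlier inverse example rewritten in Python with tests that capture intent rather than current behavior (the float return type is an assumption about what "inverse" should mean):

```python
def inverse(x: float) -> float:
    return 1 / x  # true division - the presumed intent, not integer division

# Intent-revealing tests pin down expected values and edge cases.
assert inverse(1) == 1.0
assert inverse(5) == 0.2     # would have exposed the integer-division bug
assert inverse(-1) == -1.0

# The division-by-zero case is decided explicitly instead of ignored.
try:
    inverse(0)
    raise AssertionError("expected ZeroDivisionError")
except ZeroDivisionError:
    pass
```

Each assertion here states what the function *should* do, so a wrong implementation fails immediately - the opposite of tests that merely re-describe whatever the code happens to return.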

While this is a long topic, this article already shows some extremely common pitfalls that developers can - and need to - steer clear of. Coverage metrics leave management none the wiser: the real issue is hidden behind a smoke screen of existing tests.