
Friday, February 4, 2022

SAFe Values - no real value!

Transparency, Built-in Quality, Alignment, Program Execution.
These are the four "SAFe Core Values." But are they actually "values"?


What are SAFe Values?

According to the official SAFe website, the SAFe Core Values are:
The four Core Values of alignment, built-in quality, transparency, and program execution represent the fundamental beliefs that are key to SAFe’s effectiveness. These guiding principles help dictate behavior and action for everyone who participates in a SAFe portfolio.
I would like to focus your attention on the following terms:
  1. Fundamental beliefs: For "SAFe Values" to make any sense, you must buy into this belief system. If you don't share one or more of the underlying beliefs, the cookie crumbles and the "Value System" of SAFe becomes meaningless. Let's also mention that the idea of "fundamental beliefs" originates in religion - it doesn't sit well with science.
  2. Guiding principles: What is a guiding principle? Let's examine the meaning of a principle: "a fundamental truth or proposition that serves as the foundation for a system of belief or behaviour or for a chain of reasoning." Here, we're pretty close to circular reasoning: SAFe Core Values are fundamental - because they are fundamental. A foundation isn't negotiable, because everything built on it collapses without that foundation. But - are these "fundamental truths" axiomatically true - or are they based on perspective?
  3. Dictate behaviour and action - "Dictate" is a much stronger term than, for example, "orient" or even "inform." Based on the fundamentals, you have only one course of action, and it's not open to negotiation or discussion: it's mandatory.
From a linguistic perspective, the SAFe values are placed beyond scrutiny and amount to totalitarian rule: you must believe them, and you must do what they say.

Fundamental beliefs dictating behaviour sit well with neither science nor agility.

Without getting any further into this matter, let me first clarify that I have never, ever encountered a SAFe fanatic. I have seen my share in the Scrum community, but SAFe practitioners - by and large - tend to be very pragmatic. "Doing what makes sense" is high on their list, and hence, I would cut both the idea of "fundamental" beliefs and the idea of "dictating" behaviour some slack.

That's just half the matter, though.
The other half is how these terms tend to be understood in a larger corporation.
Much of an enterprise system is built around the idea that transparency is a one-way street, alignment means incorporating others' opinions, and quality is a matter of governance. "Program execution" - well, it requires setting up programmes, with a separation between coordination and execution.


Not-so-fundamental beliefs

A problem with SAFe's core values is that these underlying beliefs aren't defined from an agile point of view - when a company introduces SAFe, they are taken as "fundamental." Management understands these terms in a certain way, and discussing a "fundamental truth" seems like a waste of time, because - they're fundamental, and we already understand them. Hence, the crucial discussion, "What do these terms mean?", never happens. Instead, the terms become a projection of the status quo onto an agile organization.

If you start with these four key premises:
  1. Transparency: We require a chain of command giving clear orders, and we rely on detailed reports.
  2. Alignment: We must ask for the opinions of people not doing a specific piece of work, and only do things they agreed to.
  3. Built-in Quality: We need some kind of governance telling people what their job is, and how to do it properly.
  4. Program Execution: We must set up large programmes which tie up a lot of time, money and manpower.
... then congratulations: the core idea of "Agile" is already dead and buried.

Hence, we first need to discuss how we want to understand these terms before we can build an agile organization where these terms even make sense.


What's (a) Value

Unlike beliefs, values aren't "fundamental," "truths," or "principles."
Value, as defined in the Oxford dictionary, is:
Value - how much something is worth in money or other goods for which it can be exchanged.
Note that there are two critical concepts here: worth and exchange.
A value is about the "worth" of something, not a boolean attribute. 
When asking, "What's the value of this car?", you wouldn't expect "Yes" as an answer.

Value only exists when there is a metric and a conversion rate towards another entity, so that people can decide what kinds of trades they would make - and which they would not.

For example, let's assume we measure quality on a scale of 1-10, and you have a quality level of 5. That's merely a measurement: it's not a cross-referenced metric. If you want quality to have value, you need to relate it to something else, such as "1 point of quality is worth 3 units of time."
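
To make this concrete, here is a minimal sketch in Python - with an entirely made-up exchange rate - of the difference between a bare measurement and a value:

    # Minimal sketch: measurement vs. value. All numbers are hypothetical.
    quality_level = 5  # a bare measurement on a 1-10 scale; says nothing about worth

    # A value requires an exchange rate towards another entity.
    # Assumption: 1 point of quality is worth 3 units of time (say, days).
    TIME_PER_QUALITY_POINT = 3

    def quality_to_time(quality_points: int) -> int:
        """How much time a given amount of quality is worth in a trade."""
        return quality_points * TIME_PER_QUALITY_POINT

    # Only now can we reason about a trade: giving up 2 points of quality
    # should buy us at least 6 units of time - otherwise, we decline.
    print(quality_to_time(2))  # -> 6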

The statement "we must have all of built-in quality, alignment, transparency and program execution" does not define "core values." To have an actual value system, we must be able to make trades between quality, alignment, transparency and execution - and we must be able to determine which trade-offs are advantageous, and which aren't.

How much execution are you willing to give up for alignment? How much quality would you trade for transparency?
These aren't values unless you can answer such questions.

And for the record - if you'd answer, "None," then you have just discarded the value entirely.

You only have a "value" if you can trade for it.

Considering a SAFe value system

In order to turn "beliefs" into values, we need to be able to make trade-offs based on something. We need to state how much we are willing to give, in order to gain some of what we believe in.
The easiest way to make trade-offs is by investing more or less time or money.

Quality

makes a formidable value, since we can immediately relate it to money: we can determine the cost of poor quality (i.e., the failure to produce high quality) based on the projected cost of failure and/or rework. This cost is so staggeringly high that quality has an extremely high value - and the price of reducing it is extremely high, too. Although the numbers for trading quality away rarely add up, we can make the trade, and clearly quantify cost, benefit, exchange rate and margins.
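
As a rough illustration - every figure below is invented, not industry data - the cost-of-poor-quality argument could be quantified like this:

    # Hypothetical cost-of-poor-quality calculation; all figures are made up.
    failure_probability = 0.10   # chance a defect escapes into production
    cost_of_failure = 250_000    # projected cost of one production failure
    cost_of_rework = 40_000      # cost of fixing defects found late

    # Expected cost of shipping with poor quality:
    cost_of_poor_quality = failure_probability * cost_of_failure + cost_of_rework

    # Cost of building quality in up front (testing, reviews, automation):
    cost_of_built_in_quality = 20_000

    # The trade: skipping quality "saves" 20,000 now, but is expected
    # to cost 65,000 later. The numbers rarely add up.
    net_loss = cost_of_poor_quality - cost_of_built_in_quality
    print(net_loss)  # -> 45000.0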

Alignment

converts neatly into time: we could do something now and show the outcomes, or we could first align, which means we have no results until alignment is done. It also means we're not earning money in the meantime. Plus, we're spending effort. Hence, we can assign a financial cost to alignment.
When we can say how many days an improvement / reduction of 1 point on our alignment scale will gain or cost, and how much money we gain / lose during that time, then alignment might become a value.
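
Under purely hypothetical rates - none of these numbers come from real data - such a conversion might look like this:

    # Hypothetical conversion of alignment into time and money.
    daily_revenue_foregone = 10_000  # money not earned per day of delay
    days_per_alignment_point = 4     # days needed to gain 1 point of alignment
    daily_alignment_effort = 2_000   # daily cost of the people doing the aligning

    def cost_of_alignment(points: int) -> int:
        """Delay cost plus effort cost of raising alignment by `points`."""
        days = points * days_per_alignment_point
        return days * (daily_revenue_foregone + daily_alignment_effort)

    # Raising alignment by 2 points delays us 8 days and costs 96,000.
    print(cost_of_alignment(2))  # -> 96000

Only with such a rate on the table can anyone argue whether 2 points of alignment are actually worth 96,000.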

Transparency

is very hard to turn into a value, because it's extremely hard to define.
What even is transparency - and how would the gain or loss of one point on our transparency scale convert into time or money? We could measure the cost of information unavailability, which in itself is only visible via second-degree indirection: we must look at the delays incurred while waiting for information to become available, and at the cost of that time. We could also measure the cost of avoidable wrong decisions - choices people made in the absence of existing information. We would then also need to reverse the calculation, computing the cost and time of producing information that is never required for decision-making ("information waste").
Companies capturing this kind of data are so rare that the measurement of transparency remains elusive.
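
If an organization did capture that data, the calculation sketched above might look like this - every parameter here is an assumption:

    # Hypothetical net worth of one extra point on a transparency scale.
    def value_of_transparency(avoided_wait_days, daily_delay_cost,
                              avoided_wrong_decisions, avg_decision_cost,
                              info_waste_days, daily_effort_cost):
        """Gains from better information, minus the cost of information waste."""
        gains = (avoided_wait_days * daily_delay_cost            # shorter waits for information
                 + avoided_wrong_decisions * avg_decision_cost)  # fewer avoidable wrong calls
        losses = info_waste_days * daily_effort_cost  # producing information nobody needs
        return gains - losses

    # All arguments below are invented:
    print(value_of_transparency(3, 5_000, 0.5, 40_000, 2, 4_000))  # -> 27000.0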
 

Execution

simply doesn't make sense as a "value." How much time or money is one unit of "not doing anything" worth? How many units would we want? Is there even a potentially valid scenario where having a plan, and not acting upon it, has a positive value attached? If so - we should immediately ditch that plan!
And that's not even digging into the question whether there's value in a "program," that is, a large-scale initiative. Skunkworks showed, two generations ago, that programmes are often more of a problem than a solution.


SAFe Core Values - a wishlist for Santa!

When we're talking about "SAFe Core Values", we're not talking about values at all, since we're not even discussing potential trade-offs.

Quality is the only "value" of SAFe that managers would actually consider trading away - despite that trade being so ridiculously expensive that it isn't economical: the best quality has the lowest cost, and "less costs more and brings less." It's exactly what an economic thinker would want to maximize.

Alignment, as commonly understood, comes with a negative net benefit - "more costs more and brings less." So, it's not something of value. Why would we want it, besides clinging to a belief that it's somehow important?

Transparency is too elusive to be considered a value: Everyone wants it, but nobody can really say what, or how much, we'd want to trade for it.

Program Execution doesn't make sense as a value. It's boolean: If we want to do something, just do it. If we don't want to do it - why is it relevant?


Hence, SAFe's "core values" are nothing more than a wishlist for Santa - something almost every manager in the world will say they want, but nothing they'd be willing to pay a price for.

Anything for which you're unwilling to pay a price has no value. It therefore can't be considered a value.

SAFe has only one real value: built-in quality. And that's the first one most organizations kick out the window.

If you want SAFe "Values," you'll need to do a lot of groundwork and figure out what it is that you actually want: simply accepting them won't do.
