
Monday, September 12, 2022

Cutting Corners - think about it ...

 I was literally cutting corners this weekend when doing some remodeling work, and that made me think ...

Cutting corners:

  • is always a deliberate choice
  • makes things look better to observers
  • is what you don't want others to see
  • doesn't require expertise
  • provides a default solution when you see no alternative
  • might be the most reasonable choice
  • requires more work than not doing it
  • is expensive to undo

So - try having a conversation: where are you cutting corners, why are you doing it - and do you know how much it costs? Which alternatives do you have? What might another person do differently?

Sunday, January 24, 2021

The importance of testability

In the article on Black Box Testing, we took a look at the testing nightmare caused by a product that was not designed for testing. This led to a follow-up point: "testability", which is also a quality attribute in ISO 25010. Let us examine a little more closely what testability is actually worth in product development.

What is even a "test"?

In science, every hypothesis needs to be falsifiable, i.e. it must be logically and practically possible to find counter-examples. 

Why are counter-examples so important? Let's use a real-world scenario, and start with a joke.

A sociologist, a physicist and a mathematician ride on a train across Europe. As they cross the border into Switzerland, they see a black sheep. The sociologist exclaims, "Interesting. Swiss sheep are black!". The physicist corrects, "Almost. In Switzerland, there are black sheep." The mathematician shakes her head, "We only know that in Switzerland, there exists one sheep that is black on at least one side."

  • The first statement is very easy to disprove: they just need to encounter a sheep that isn't black.
  • The second statement is very hard to disprove: even if the mathematician were right and another angle revealed that the sheep isn't fully black, the statement wouldn't automatically be untrue, because there could be other sheep somewhere in Switzerland that are black.
  • Finally, the third statement holds true, because the reverse claim ("there is no sheep in Switzerland that is black on at least one side") has already been disproven by the observation.

This leads to a few follow up questions we need to ask about test design.


Test Precision

Imagine that our test setup to verify the above statements looked like this:

  1. Go to the travel agency.
  2. Book a flight to Zurich.
  3. Fly to Zurich.
  4. Take a taxi to the countryside.
  5. Get out at a meadow.
  6. Walk to the nearest sheep.
  7. Inspect the sheep's fur.
  8. If the sheep's fur color is black, then Swiss sheep are black.

Aside from the fact that, after running this test, you might be stranded in the Alps at midnight in winter wearing nothing but your pyjamas, a couple hundred Euros poorer, this test could go wrong in so many ways.

For example, your travel agency might be closed. Or, you could have insufficient funds to book a flight. You could have forgotten your passport and aren't allowed to exit the airport, you could go to a meadow that has cows and not sheep, and the sheep's fur inspection might yield fleas. Which of these has anything to do with whether Swiss sheep are black?

We see this very often in "black box tests" as well:

  • it's very unclear what we're actually trying to test
  • just because the test failed, that doesn't mean that the thing we wanted to know is untrue
  • there's a huge overhead cost associated with validating our hypothesis
  • we don't return to a "clean slate" after the test
  • success doesn't provide sufficient evidence to verify the test hypothesis

Occam's Razor

In the 14th century, the Franciscan friar William of Ockham gave us the principle known today as Occam's Razor: "entities should not be multiplied without necessity". Applied to our test above, that means every test step that has nothing to do with sheep or fur color should be removed from the design.

The probability of running this test successfully increases when we isolate the test object (the sheep's fur) as far as possible and eliminate all variability that isn't directly associated with the hypothesis itself.
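
To make this concrete, here is a minimal sketch in Python of what the trimmed test might look like. The Sheep record and the is_black() helper are invented for illustration - the point is that the test object is constructed directly, so no travel agency, flight or meadow is left in the setup to fail:

    from dataclasses import dataclass

    @dataclass
    class Sheep:
        origin: str
        fur_color: str

    def is_black(sheep: Sheep) -> bool:
        # The only observation the hypothesis actually needs: fur color.
        return sheep.fur_color == "black"

    def test_black_swiss_sheep_is_classified_as_black():
        # The test object is created in place - nothing unrelated can fail.
        assert is_black(Sheep(origin="Switzerland", fur_color="black"))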


Verifiability

We verify a hypothesis by trying to falsify its logical opposite: if we can disprove the opposite claim (in statistical testing, the "null hypothesis"), we accept our own claim (the "alternative hypothesis"). Unfortunately, this means we have to think in reverse.

In our example: to prove that all sheep are black, we would have to inspect every sheep. That's impractical. It's much easier to search for a single non-black sheep: finding one falsifies the claim, and only if a thorough search fails to produce even one does "all sheep are black" remain tenable.
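
To sketch this reverse thinking in code, a test can be phrased as a search for a counter-example rather than as an attempt at proof. The observed sample below is invented; in practice it would be whatever sheep we can actually inspect:

    def test_no_counter_example_in_observed_sample():
        # A single non-black entry falsifies "all Swiss sheep are black".
        observed_fur_colors = ["black", "black", "black"]  # illustrative sample
        counter_examples = [c for c in observed_fur_colors if c != "black"]
        assert not counter_examples, f"hypothesis falsified by: {counter_examples}"

A green run here only tells us that no counter-example was found in this particular sample - which is exactly as much as the mathematician in the joke would be willing to claim.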


Repeatability and Reproducibility

A proper test is a systematic, preferably repeatable and reproducible, way of verifying our hypothesis. That means it should be highly predictable in its outcome, and we should be able to run it as often as we want.

Again, going back to our example, suppose we design our test like this:

  1. Take a look at a Swiss sheep.
  2. If it is white, then sheep are not black.
  3. If sheep are not "not black", then sheep are black.

This is terrible test design, because of some obvious flaws in each step: 

  1. The setup is an uncontrolled random sample. Since sheep can be black or white, running this test on an unknown sample means we haven't ruled out anything if we happen to pick a black sheep.
  2. The alternative hypothesis is incomplete: should the sheep be brown, it is also not white.
  3. Assuming that step 2 didn't trigger, we would conclude that brown = black.

Since "take a look at a Swiss sheep" is part of the test design, each time we repeat this test we may get a different outcome - and we can't reproduce anything either, because if I run this test, my outcome may differ from yours.


Reproducibility

A reproducibility problem occurs when the same test description can produce different results for different testers. In our example, the vague "take a look at" could be fixed - taking the mathematician's advice - by re-phrasing step 1 as: "Look at a Swiss sheep from all angles." This would lead everyone to reach the same conclusion.

We might also have to define what we call "white" and "black", or whether we would classify "brown" as a kind of white.

We increase reproducibility by being precise about how we examine the object under test, and about which statements we want to make about it.
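
One way to pin those definitions down is to make the classification itself explicit and shared, so that two testers cannot disagree on what "black" means. The color list, and the decision to file "brown" under "not black", are illustrative assumptions:

    KNOWN_COLORS = {"black", "brown", "white"}

    def classify(fur_color: str) -> str:
        # Explicit, shared rules remove observer judgment from the test.
        if fur_color not in KNOWN_COLORS:
            raise ValueError(f"unknown fur color: {fur_color}")
        return "black" if fur_color == "black" else "not black"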


Repeatability

Depending on the purpose of our test, we do well to remove variation from the object under test. So, if our test objective is to prove or falsify that all sheep are black, we can set up a highly repeatable, highly reproducible test like this:

  1. Get a white Swiss sheep.
  2. Identify the color of the sheep.
  3. If it is not black, then the statement that Swiss sheep are black is false.

This experiment setup is going to produce the same outcome for everyone conducting the test anywhere across the globe, at any point in time.

While there is a risk that we fail in step 1 (if we can't get hold of a Swiss sheep), we could substitute a picture of a Swiss sheep for the live animal without affecting the validity of the test itself.
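
As a minimal sketch of such a fixed setup: the test object is pinned to a known value, so every run, by anyone, anywhere, reaches the same verdict (the fur color is hard-coded purely for illustration):

    def test_swiss_sheep_are_black_is_falsified():
        # Fixed, known test object: a white Swiss sheep. No random
        # sampling, so the outcome is identical on every run.
        fur_color = "white"   # step 1: get a white Swiss sheep
        assert fur_color != "black"   # steps 2-3: the statement is false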


What is testability?

A good test setup has:
  • a verifiable test hypothesis
  • a well-defined test object
  • a precise set of test instructions
  • absolutely minimized test complexity
  • high repeatability and reproducibility
When all these are given, we can say that we have good testability. The more compromises we need to make in any direction, the worse our testability gets. 

A product that has high testability allows us to formulate and verify any relevant test hypothesis with minimal effort.
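
In code, high testability usually comes down to how easily the object under test can be isolated. One common technique is dependency injection; the PriceService below is an invented example, not a prescription - the point is that a slow, flaky exchange-rate lookup can be swapped for a fixed stub:

    from typing import Callable

    class PriceService:
        # The rate lookup is injected, so a test never touches a network.
        def __init__(self, get_rate: Callable[[str], float]):
            self._get_rate = get_rate

        def price_in(self, currency: str, amount_eur: float) -> float:
            return amount_eur * self._get_rate(currency)

    def test_price_conversion():
        service = PriceService(get_rate=lambda currency: 2.0)  # fixed stub rate
        assert service.price_in("CHF", 10.0) == 20.0

The hypothesis is verifiable, the test object is well-defined, complexity is minimal, and the outcome is identical on every run - all five attributes from the list above.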

A product with poor testability makes it difficult to formulate or verify a test hypothesis. This difficulty might translate into an increase in any or all of the following:
  • complexity
  • effort
  • cost
  • duration
  • uncertainty
...and, along with it, a loss of validity in the results.


In conclusion

The more often you want to test a hypothesis, the more valuable high testability becomes.
With increasing change frequency, the need to re-verify a formerly true hypothesis also increases. 

Design your product from day 1 to be highly testable.
By the time you discover that a product's testability is unsustainably low, it's often extremely expensive to bring it up to the level you need.

Tuesday, November 17, 2020

Is all development work innovation? No.

In the Enterprise world, a huge portion of development work isn't all that innovative. A lot of it is merely putting existing knowledge into code. So what does that mean for our approach?

In my Six Sigma days, we used a method called "ICRA" to design high quality solutions.


Technically, this process was a funnel, reducing degrees of freedom as time progressed. We could argue at length about whether such a funnel is (always) appropriate in software development, or whether it's a better mental model to consider that all of these activities run in parallel to varying degrees - but that's a red herring. I would like to salvage the acronym to distinguish between four different types of development activity:

  • Innovate - Fundamental changes or the creation of new knowledge to determine which problem to solve in what way, potentially generating a range of feasible possibilities. Example: creating a new capability, such as "automated user profiling", to learn about target audiences.
  • Configure - Choosing a solution to a well-defined problem from a range of known options; may include cherry-picking and combining known solutions. Example: using a cookie-cutter template to design the new company website.
  • Realize - Both problem and solution are known; the rest is "just work", potentially lots of it. Example: integrating a 3rd-party payment API into an online shop.
  • Attenuate - Minor tweaks and adjustments to optimize a known solution or process; the key paradigm is "reduce and simplify". Example: adding a validation rule or removing redundant information.

Why this is important

Think about how you're developing: across the four activities, the probability of failure - and hence the predictable amount of scrap and rework - decreases, and the way you manage each activity should differ accordingly. A predictable, strict, regulated, failsafe procedure would be problematic during innovation and highly useful during attenuation: you don't want everything to explode when you add a single line of code to an otherwise stable system, whereas destabilizing the status quo to create a new, better future might be exactly the desired outcome of innovation.

I am not writing this to tell you "This is how you must work in this or that activity." Instead, I would invite you to ponder which patterns are helpful and efficient - and which are misleading or wasteful in context. 

By reflecting on which of the four activities you're engaged in, and on the most appropriate patterns for each of them, you may find significant change potential for both your team and your organization - and "discover better ways of working by doing it and helping others do it."


Friday, June 14, 2019

Don't get fooled by Fake Agile

Whichever consultancy sells this model, DO NOT let them anywhere near your IT!

So, I've come across this visual buzzword bingo in three major corporations already, and all of them believe that this is "how to do Agile". And all of them are looking for "Agile Coaches" to help them implement this garbage.

This is still the same old silo thinking that never worked.

There is so much wrong with this model that it would take an entire series of articles to explain.
Long story short: all the heavy buzzwords in this image merely relabel the same old dysfunctional Waterfall. In bullet points:

  • "Design Thinking" isn't Design Thinking,
  • "Agile" isn't Agile,
  • "DevOps" isn't DevOps, and
  • Product Owners and Scrum Masters aren't, either.

I have no idea which consultancy is running around selling this "Agile Factory Model" to corporations, but here's a piece of advice:
  • If someone is trying to sell this thing to you as "Agile", do what HR always does: "Thank you very much for coming. We will call you, do not call us ..."
  • If someone within your organization is proposing to implement this model, send them to a Scrum - or preferably LeSS - training!
  • If someone is actually taking steps to "implement" this thing within your organization, put them on a more helpful project. Counting the holes in the ceiling, for example.

If this has just saved you a couple million in consulting fees, you're welcome.

Saturday, April 13, 2019

Design Choices - is it really BDUF vs. Agile?

Do we really end up with Chaos or randomness if we don't design upfront?
The question alone should be enough to suggest that the answer isn't straightforward. Indeed, it is a false dichotomy - so let's explore a few additional options, to create choice.


Let's tackle the model from left to right:

Upfront Design

The archetypal nemesis of agilists, "Upfront Design" (UD) is the idea that both the product and its creation process can be fixed before the product is created. This approach is only useful when the product is well known and there is something called a "Best Practice" for the creation process.
In terms of the Cynefin framework, UD is good for simple products. A cup, a pen - anything where it's easy to determine a cost-effective, high-quality production approach. This is very useful when the creation process is executed billions of times with little to no variation for an indefinite time.

Normally, Upfront Design isn't a one-time design effort. There are extensive prototyping runs to get the product right and, afterwards, to optimize the production process before real production starts, as later adjustments are expensive and may yield enormous scrap and rework. All of this upfront effort is expected to pay off once the first couple million units have been sold.
The problems of this approach in software development are closely related to ongoing change: there is no "final state" for a software product except "decommissioned".
Your users might discover better options for meeting their needs in certain areas of the product - would you want to force them to make do with a suboptimal solution just because you've finished your design?

Likewise, a new framework could be released that might cut development effort in half, providing better usability and higher value at better quality: Wouldn't you want to use it? UD would imply that you'd scrap your past efforts in the light of new information and return to the drawing board.

Industrial Design

In many aspects, Industrial Design (ID) is similar to Upfront Design - both product and process design are defined before the creation actually starts. There is one major difference, though: the design of the product itself is decoupled from the design of the product creation process. This creates leeway in product creation based on the needs of the producer.

ID specifies the key attributes of the product upfront and leaves the creation process open until a later stage. The approach acknowledges that the point in time when the "What and Why" is determined may be too early to decide on the "How". In implementation, ID allows for "many feasible practices", from which the process designer chooses appropriate "best practices" to optimize meta-attributes such as production time, cost or quality.

Glancing at the Cynefin framework, ID is great for complicated, compound products assembled from multiple components in extensive process chains. Take, for example, a cell phone, a car, or even something as simple as a chair - all commodities that get mass-produced, albeit with a limited lifetime and in limited quantities.

The challenges of ID in software development are similar to those of UD: there is no "final state" for software - and by the time it gets into users' hands, requirements might already have changed.

For most industrially created products, the economic rule "supply creates its own demand" applies, and marketing and sales efforts can make the available product attractive to an audience. Enterprise Software Design, by contrast, is about creating a product for one specific audience - an audience that doesn't like being told what it's supposed to need!

Most corporate software development still happens in a similar way, although in many cases the creation process has been defined and standardized irrespective of the product, on the assumption that all software is the same from a development perspective. This is akin to assuming that a gummy bear follows the same creation process as a car or a chair!
Software development doesn't produce the same software a million times - we create it once and use it, and the next product is different. If your company really pays developers to develop the same product time and again, I have a bridge to sell you ...


Emergent Design

Emergent Design (ED) is buzzword terrain, as different people have a different understanding of what it means. While advocates of UD/ID might call ED "no design", we likewise see people who indeed have no design calling their lack of a structured approach ED. This muddies the waters and makes it hard to understand what ED is really about.

From a Cynefin framework perspective, ED is a way of dealing with the Chaotic domain in product development - when neither the best solution nor the implementation method is known or constant. ED is what typically happens during prototyping, when various options are on the table and, by creating something tangible, appropriate design patterns start to emerge.

ED requires high discipline, as the systematic application of different design approaches helps reduce both the time and the effort of establishing useful designs - yet the designers need to fully understand that, at any time, following a pattern may lead to a dead end and the direction needs to change. ED therefore requires high skill in design, quick forward thinking, and massive flexibility in backtracking and changing direction.

ED embraces constantly changing, evolving and emerging customer needs, a near-infinite solution space, and the inability of any person to know the next step before it has been taken.

Modern software development infrastructure is a showcase for ED: it takes minutes rather than days to change a design pattern, add, modify or remove behaviours, and get the product into users' hands. With good engineering practice, the cost of change on the product level is so low that it's hardly worth deliberating upfront whether a choice is right or wrong - a design choice can be reverted more cheaply than its theoretical validation would cost.

The downside of ED is that neglect in both process and product design can slow down, damage and potentially outright destroy a product. It is essential to have a good understanding of design patterns and be firm in software craftsmanship practices to take advantage of Emergent Design.


Integrated Design

With Integrated Design (IntD), the focus shifts away from when we design to how we design. UD and ID assume that there are two separate types of design, namely product and process; ED assumes that product design patterns emerge through the application of process design patterns.
IntD takes an entirely different route, focusing on far more than specification - it doesn't just take into account meeting requirements and having a defined creation process.

Peeking at the Cynefin framework, Integrated Design is the most complex approach to design, as it combines several domains into a single purpose: an overall excellent product.

IntD interweaves disciplines: given cloud infrastructure, development techniques such as Canary Builds are much easier to realize, which in turn allows the delivery of experimental features to validate a hypothesis with minimal development effort, which in turn requires production monitoring to be an integral part of the design process as well. The boundaries between product design, development, infrastructure and operation blur and disappear.
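
As a small illustration of this interweaving, a deterministic canary gate plus a metric hook can be enough to ship an experimental feature to a fraction of users. Everything here - the names, the 5% fraction, the print-based metric stub - is an invented sketch, not any specific tool's API:

    import hashlib

    def in_canary(user_id: int, fraction: float = 0.05) -> bool:
        # Deterministic bucketing: the same user always gets the same
        # variant across runs and machines, keeping the data interpretable.
        digest = hashlib.sha256(f"checkout-experiment:{user_id}".encode()).digest()
        return digest[0] / 256 < fraction

    def record_metric(name: str, value: float) -> None:
        # Stand-in for a real monitoring client; in IntD, monitoring is
        # part of the design rather than an afterthought.
        print(f"metric {name}={value}")

    def checkout(user_id: int) -> str:
        variant = "experimental_flow" if in_canary(user_id) else "stable_flow"
        record_metric(f"checkout.{variant}", 1.0)
        return variant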

Integrated Design requires a high level of control over the environment, be it a requirement modification, a software change, a process change, user behaviour change or a technological change.
To achieve this, practitioners of IntD must master product design techniques, software development practices, and the underlying technology stack, including administrative processes, monitoring and data science analytics. I call the people who have this skillset "DevOps".

"Any sufficiently advanced technology is indistinguishable from magic" - Arthur C. Clarke
While master DevOps do all of these things with ease, people unfamiliar with any of the above techniques may not recognize that a rigorous approach is being applied, and may ask for a more structured, distinguishable approach with clear dividing lines between product design, process design and system operation.
Rather than accepting the near-magical results produced by such a team, they might go on a witch hunt and burn the heretics at the stake, destroying in the process the product built in this fashion.


And then, finally, we still have the default option, oftentimes used as the strawman in the advocacy of every other design approach:


No Design

While seemingly the easiest thing to do, having No Design is myopic and fatal. No reputable engineer would ever resort to applying No Design, even if they know their design might be defective, expensive or outright wrong.

The reasons why No Design isn't an option go a long way, including - but not limited to:

  • You don't know whether you achieved quality
  • You're continuously wasting time and money
  • You can't explain why you did what you did

The problem with No Design is that any design approach quickly deteriorates into No Design:

  • A UD software product tends to look like it had No Design after a couple of years, once people have constantly added new features that weren't part of the initial design.
  • When ID products change too much, their initial design becomes unrecognizable. The initial design principles look out of place and purposeless if they aren't reworked over time.
  • Applying ED without rigour and just hoping that things work out is, as mentioned previously, indeed, No Design.
  • While IntD is the furthest away from No Design, letting (sorry for the term) buffoons tamper with an IntD system will quickly render the intricate mesh of controls ineffective, to the point where an appearance of No Design indeed becomes No Design.


tl;dr

I have given you options for different types of design. The statement, "If we don't do it like this, it's No Design" is a false dichotomy.
This article is indeed biased in suggesting that "Integrated Design" is the best option for software development. That isn't a universal truth: an organization must have both the skillset and the discipline to apply ED/IntD techniques, otherwise it will not operate sustainably.


  • No Design is the default that requires no thinking - but it creates a mess.
  • Full-blown Big Design Upfront is a strawman. It rarely happens. Almost all software has versions and releases.
  • Industrial Design is the common approach. It nails down the important parts and gives leeway in the minute details. 
  • Emergent Software Design requires experience and practice to provide expected benefits. When applied haphazardly, it might destroy the product.
  • Integrated Software Design is a very advanced approach that requires high levels of expertise and discipline.