Monday, June 13, 2022

Collaboration Patterns we know from science

Team structures should be a straightforward enough topic, although in many organizations they aren't. Here are six phenomena you may remember from science class - and how they relate to your organizational structure.

To keep matters simple, this post refers to "entities" - which could either be individuals, teams or entire departments. While the nature of the entity changes, we are concerned with the relationship of the entities with each other. Since some terms have different definitions in different domains, let us refer to the point of origin.


Cohesion

(Origin: Chemistry)

Cohesion is the connection between entities of one substance. Organizational cohesion is thus the bonding strength between entities of the same category.

Examples

We have organizational cohesion when there is collaboration occurring within one team.

We have poor organizational cohesion when team members act as a group of individuals, each picking their own work items.


Adhesion

(Origin: Chemistry)

Adhesion is the strength with which two different entities stick together. Organizational adhesion, accordingly, is the amount of effort it would require to separate two different entities.

Examples

We have high organizational adhesion when a process has a complex critical path.

Two business units that serve different customer segments independently have low adhesion.


Covalence

(Origin: Chemistry)

Covalence happens when two atoms share a pair of electrons to form a bond, which molds these two distinct entities into one "complete" entity. Within an organization, covalence occurs when two or more entities share resources.

Examples

Component ownership causes covalence - let's say team A owns the Customer entity, and team B owns the Contract entity. When B needs to access the Customer, they rely on whatever A provides - whereas when A references the Contract, they rely on whatever B provides.

In situations where the Contract relies on a new or modified attribute of the Customer - such as a consumer credit score - team B must coordinate with team A on how and when the change can be made, and team A might want to store a "previously rejected" attribute that must be provided by team B. Covalence thus means that while the inner dealings of A and B become intertwined, any outward-facing change to a covalent entity must work for all the entities bound to it.
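
To make the dependency concrete, here is a minimal sketch in Python - the entities, attributes, and the risk rule are all invented for illustration:

```python
# Owned by team A: other teams rely on whatever Customer exposes.
class Customer:
    def __init__(self, customer_id, credit_score=None):
        self.customer_id = customer_id
        # New attribute: only useful to team B if team A agrees to provide it.
        self.credit_score = credit_score


# Owned by team B: reaches across the ownership boundary into A's entity.
class Contract:
    def __init__(self, customer, previously_rejected=False):
        # A covalent bond: renaming or removing credit_score on A's side
        # silently breaks B's code - neither team can change freely.
        self.customer = customer
        self.risk_flag = (customer.credit_score or 0) < 600 or previously_rejected
```

Note how the "previously rejected" flag flows the other way: it originates in B's domain, yet A would need to store it - the dependency runs in both directions.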


Bridge

(Origin: Chemistry)

A bridge connects two entities to turn them into one common, stable structure. Bridges require covalent bonding between two entities plus the presence of a third entity. The bridge is an entity that connects two entities by being the missing part in both.

Organizational bridges are entities equally bound to two or more other organizational entities to form one entity of higher complexity.

Examples

The analyst role is often an organizational bridge - analysts are close to business from a development perspective and close to development from a business perspective: they are neither, but connect the two entities to turn demand into solutions.


Coupling

(Origin: Physics)

Capacitive coupling occurs when energy is transferred between two separated conductors. Organizational coupling thus occurs when two structurally separated entities affect each other's outcomes. We speak of "tight coupling" when either entity could cause blocking interference in the other, and "loose coupling" if the impact is generally uncritical. If there is no interference, we consider the entities uncoupled.

Examples

We see tight organizational coupling when the Maintenance Team decides to shut down the Deployment process, thereby incapacitating the development pipeline.

Loose organizational coupling could be the relationship between Marketing and Sales - while they can technically work with or without the other, they do have a performance effect on each other.


Coherence

(Origin: Physics)

Coherence is the ability of a signal to withstand interference. We discriminate between spatial and temporal coherence: Spatial coherence is the ability of a signal to withstand interference over distance, whereas temporal coherence is the ability of the signal to withstand interference over time. In an organization, it's the ability of information to cross entity boundaries without getting distorted by interfering signals (e.g., from other work items, other projects, or line management.) Note that coherence is only relevant in the context of cohesion - incohesive entities that don't work towards a common goal require no coherent signal transmission.

Examples

Low spatial coherence would be a process with a lot of "telephone game," where information is modified in each step.

High spatial coherence would be provided by a synchronization event which ensures that all stakeholders have the same understanding on a subject.

Low temporal coherence shows up as deviations from a plan over time, usually caused by unanticipated events.

Thursday, June 9, 2022

Using metrics properly

Getting metrics right is pretty difficult - many try, and usually mess up. The problem?
Metrics require a context, and they also create a context. Without a proper definition of context, metrics are useless - or worse: guide you in the wrong direction.


A Metrics system

Let's say you have a hunch, or a need, that something could - or should - be improved. To make sure that you know that you're actually improving, create a metrics system covering the topic. This system should cover the organizational system in an adequate - that is, both simple and sufficient - model consisting of:

  • Primary metrics (things we want to budge)
  • Secondary metrics (things we expect to be related to our primary metric)
  • Indirect metrics (things we expect NOT to budge)

An example

We start with a problem statement: "Our TTM sucks." Hence, our metrics system starts with "time-to-market" as the primary metric. A common-sense assumption might be that an improvement to TTM will make people do overtime, or that people will become sloppy. Thus, we add the secondary metric "quality" - we would like to observe how a change to TTM affects quality - and we set the indirect metric "overtime" - a constraint that people shall not do extra hours.
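
As a minimal sketch - all metric names, units, and numbers below are invented for illustration - such a metrics system can be written down as plain data:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Metric:
    name: str
    baseline: float                 # where the metric stands today
    target: Optional[float] = None  # improvement target (primary metrics only)
    bound: Optional[float] = None   # constraint (indirect metrics only)


@dataclass
class MetricsSystem:
    primary: Metric            # what we want to budge
    secondary: List[Metric]    # what we expect to move along with it
    indirect: List[Metric]     # what we expect NOT to budge


ttm_system = MetricsSystem(
    primary=Metric("time-to-market (days)", baseline=45, target=30),
    secondary=[Metric("escaped defects per release", baseline=4)],
    indirect=[Metric("overtime (hours/week)", baseline=2, bound=2)],
)
```

Writing the system down like this forces the context to be explicit: every metric states its role, its current value, and - where applicable - its target or constraint.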


Systematic improvement

In order to work with your metrics system adequately, there's a common five-step process at the core of Six Sigma, known as DMAIC:

Define 

  • Define our problem statement: what problem do we currently face?
  • Define our primary metric.
  • Become clear on our Secondary and Indirect metrics.

Measure

  • Get data to determine where these metrics currently are.
  • Set an improvement target on our primary metric.
  • Predict the effects on secondary metrics.
  • Set boundaries on indirect metrics.

Analyze

  • Understand what's currently going on.
  • Understand why we currently see the unwanted state in the primary metric.
  • Determine what we'd like to do to budge the primary metric.

Improve

  • Make a change.
  • Observe changes to all the metrics.

Control

  • If our Primary metric budged significantly and all other metrics are where we'd expect them to be, our change was successful.
  • If that wasn't the case - we messed up. Backtrack.
  • Determine which metrics we'd like to retain in the future to make sure we're not lapsing back.

Metrics are thus always bound to a specific problem you would like to address.


Pitfalls to avoid

Getting metrics systems completely right is challenging, and many organizations struggle with it.

Incomplete metric systems

The most common problem is that we only define primary metrics, which paves the way for building Cobra Farms: we improve one thing at the expense of another, possibly creating an even bigger problem that we just didn't notice.


Red Herring metrics

Another issue is confusion between outcomes and indicators. This is also often associated with a Cobra Farm, but from another angle - we fail to address the actual problem and instead chase the metric itself.

For example, if management wants to reduce the number of reported defects, the easiest change is to deactivate the defect reporting tool. That reduces the number of defect reports, but doesn't improve quality.

This is also known as "Goodhart's Law": when a measure becomes a target, it ceases to be a good measure.


Vanity metrics

It's a human tendency to want to feel good about something, and metrics can serve that basic need. For example, we might track the number of hours worked per week. That metric constantly goes up, and it always hits the target. But it's not valuable: it tells us nothing about the quality or value of the work done.


Uncontrolled metrics ("waste")

We often collect data just in case, without connecting any action trigger to it. Take, for example, deployment duration: it's a standard metric provided by CI/CD tools, but in many teams, nothing happens when the numbers rocket skyward. There are no boundaries, no controls, and no actions related to the metric. If we don't use the data available to act upon it, the data might as well not exist.


Bad data

Sometimes, we have the right metric, but we're collecting the wrong data, or we collect it in the wrong way. That could range anywhere from having the wrong scale (e.g., measuring transaction duration in minutes when we should measure in milliseconds - or vice versa) to having the wrong unit (e.g., measuring customer satisfaction in number of likes instead of NPS) to having the wrong measurement point (e.g., measuring lead time from "start work" instead of from "request incoming").
Such data will then lead us to draw wrong conclusions - and any of our metrics could suffer from this.


Category errors

Metrics serve a purpose, and they are defined in a context. To use the same metrics in a different context leads to absurd conclusions. For example, if team A is doing maintenance work and team B is doing new product development: team A will find it much easier to predict scope and workload, but to say that B should learn from A would be a category error.


Outdated metrics

Since we're talking about metric systems rather than individual metrics: when the organizational system on which we measure has changed, our metrics may no longer make sense. Frequently revisiting our measurement system and discarding or adjusting metrics which no longer make sense is essential to keep our metric system relevant.