Wednesday, December 17, 2014

Refactoring running rampant

The purpose of refactoring is to increase the quality of code without changing its functionality. Unfortunately, we too often forget that "quality" is in the eye of the beholder.

Let me give you one example. The snippets below are simplified Java, but you can see where it leads.

Originally, there were 2 classes which contained code like this:

class A {
    int X;
    int Y;
    public int compute() {
        return this.X + this.Y;
    }
}

class B {
    int E;
    int F;
    public int compute() {
        return this.E - this.F;
    }
}

Well, those are highly similar methods, so "obviously", this calls for refactoring.

Let's start refactoring to a common level:

class A {
    int X;
    int Y;
    private int _compute() {
        return this.X + this.Y;
    }
    public int compute() {
        return this._compute();
    }
}

class B {
    int E;
    int F;
    private int _compute() {
        return this.E - this.F;
    }
    public int compute() {
        return this._compute();
    }
}

We have some duplicate code now, so we want to eliminate it by moving the method into a new class:

abstract class operatorClass {
    public int compute() {
        return _compute();
    }
    protected abstract int _compute();
}
class A extends operatorClass {
    protected int _compute() {
        return this.X + this.Y;
    }
}
class B extends operatorClass {
    protected int _compute() {
        return this.E - this.F;
    }
}

At least, now there is a common level, but there is still highly similar code. Let's do something about it.

class A extends operatorClass {
    A() { setOperator("+"); }
}
class B extends operatorClass {
    B() { setOperator("-"); }
}
abstract class operatorClass {
    int X;
    int Y;
    String operator;
    public int compute() {
        switch (operator) {
            case "-": return X - Y;
            case "+": return X + Y;
            default: throw new InvalidOperatorException(operator);
        }
    }
    protected void setOperator(String operator) {
        this.operator = operator;
    }
}

Yay! We eliminated the "nearly duplicate" method in 2 different classes and reduced the amount of code in both of them.

Unfortunately, there is a small fly in the ointment here:

  • Case statements are poor code. Why? They are harder to unit-test - think of path coverage. They are also a violation of the Single Responsibility Principle.
  • The "operatorClass" is now doing something it shouldn't: making a decision that belongs on a more abstract level, i.e. at the point where the object is created - a violation of the Dependency Inversion Principle!
  • We actually introduced a new source of error into the operatorClass' "compute" method. Calling "compute" with an invalid operator was not even possible before!
  • And that, of course, means we need additional exception handling. We didn't even go into the new "InvalidOperatorException" class that we must create.
  • Each time we implement a new class with a new "compute" method, we must modify the "operatorClass" - we just violated the Open/Closed Principle!
  • Not to mention that the application's performance has just deteriorated: it will be slower because the "case" statement must be evaluated, and it will consume more memory because an additional variable must be initialized.

While the code looks cleaner when you only look at the level of A and B, we merely shoved dirt under the rug - we didn't help at all!
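
For contrast, here is a minimal sketch of how the duplication could have been removed without the switch: keep the shared compute() in the base class and let each subclass supply its own operation (essentially the Template Method pattern; the class names Add and Subtract are illustrative, not from the original example):

```java
abstract class Operation {
    // Shared entry point; knows nothing about concrete operators.
    public final int compute() {
        return calculate();
    }
    // Each subclass supplies its own operation: no switch, no invalid state,
    // and adding a new operation never touches this class (Open/Closed).
    protected abstract int calculate();
}

class Add extends Operation {
    private final int x, y;
    Add(int x, int y) { this.x = x; this.y = y; }
    protected int calculate() { return x + y; }
}

class Subtract extends Operation {
    private final int x, y;
    Subtract(int x, int y) { this.x = x; this.y = y; }
    protected int calculate() { return x - y; }
}

public class OperationDemo {
    public static void main(String[] args) {
        System.out.println(new Add(2, 3).compute());      // prints 5
        System.out.println(new Subtract(2, 3).compute()); // prints -1
    }
}
```

Adding a Multiply class would require no change to the base class - the Open/Closed Principle stays intact, and there is no operator string that could ever be invalid.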

Lesson learned

Refactoring is not a purpose in itself.
Not every refactoring is actually a positive change.
When refactoring, you must set a clear purpose of what you want to accomplish - and why. Even when you do not break (unit) tests and the code becomes shorter, you may be doing something tremendously harmful.
I strongly advise doing Code Katas occasionally to get a grip on how to refactor beneficially.

Wednesday, December 10, 2014

Software Development Lifecycle - Testing

The Software Development Lifecycle: Testing

What you see above is the "Test Cycle" as I learned and practiced it in a Waterfall environment for years.
Now, I don't even want to go into how, in theory, you can add significantly more test phases here - nor how, in practice, smoke, integration and regression tests are usually neglected.

The notion that developers hand software they consider "works as designed" over to test is ingrained in the minds of Waterfall software project specialists.

As I mentioned in another post about test coverage, defects occur even when developers consider that their software is defect free.

Let us consider for a minute that each test costs time.
While a piece of code is in test, developers continue to produce more working software. Yeah, I know the Waterfall theory says that once development is finished, the product is handed off to test. But seriously - has this ever been reality? Do developers really sit there twiddling their thumbs until testers report defects? Do companies really pay developers to sit idle while testers are busy?
If you are seriously working in such an environment, I would have a great optimization suggestion for your management.

So, developers build on code they consider working while test time passes. If a defect is then found in a component they are building on - even though, given the defect, the new component did "work as designed" - the fix may cause rework not only in the defective component, but also in the current work in progress: fix efforts may already be twice as high - or even higher - than if the defect had been discovered before the developer started a new topic.

The problem is intensified when developers introduce defects not into new components, but into components that have already been accepted in the past. When schedules are tight, regression testing is the first activity to be descoped - and even when it isn't, it's always the last thing testers do. This approach is practically designed to maximize the amount of time a defect can stay in the software - and therefore maximizes the amount of damage a defect can do!

Is this smart? No!

You will never deliver cost effective high quality products unless you un-learn this model!
Forget everything you learned about Design-Develop-Test. It's the wrong philosophy. You can't improve it. It doesn't even get better when you increase the amount of time for regression tests or put regression testing in front of functional testing.

The Solution

A paradigm shift is needed.
Here is a non-exhaustive list of changes you must make, preferably in this order:

  1. Introduce mechanisms that let your developers know whether they introduced defects before they pick up a new task.
  2. Don't even let developers start on a new topic until there is confidence that their last piece of work didn't introduce defects.
  3. Automate testing. Enable developers to run any test they need or want to run at any given point in time, as often as they need to. Don't make them wait days - or weeks - for test results!
  4. Eliminate the "tester role" (but not the testers). In Scrum, we speak of a "Developer" even when we mean "the test expert", because everyone is accountable for high quality. Make programmers work together with test experts before actually starting to write code.
  5. Create test awareness. Make sure developers know exactly which tests must pass before they create code.
  6. Introduce test driven development (TDD). Give developers access to the tests before they actually start coding.
  7. Change your process: Create quality awareness and accountability. We utilize "pre-commit hooks". Developers cannot even commit defective code unless they specifically override, but even then, the defect will be tracked on every single commit until resolved.
  8. Implement Continuous Integration. Let developers know immediately if their component damaged the product build. A wait time of 10 minutes is already tough; days simply aren't acceptable!
  9. Implement Continuous Delivery: Developers should never be working on their own environment or an individual branch for many days without merging back to the master. They should be working in small increments that can be delivered fast. This minimizes the risk that days of work need to be scrapped because of a wrong premise.
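
To make the TDD point (item 6) concrete, here is a small, hypothetical sketch in Java: the assertions exist before the production code, and the developer's only job is to make them pass. The Discount class and its numbers are invented for illustration; a real project would use a test framework such as JUnit.

```java
// Hypothetical production code: written only AFTER the tests below
// defined the expected behavior.
class Discount {
    static int apply(int price, int percent) {
        return price - (price * percent) / 100;
    }
}

public class DiscountTest {
    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError("failed: " + name);
        System.out.println("passed: " + name);
    }

    public static void main(String[] args) {
        // These tests came first (TDD) - the implementation had to satisfy them.
        check(Discount.apply(100, 10) == 90, "10% off 100 is 90");
        check(Discount.apply(200, 50) == 100, "50% off 200 is 100");
        check(Discount.apply(100, 0) == 100, "0% off changes nothing");
    }
}
```

Because the expected behavior is pinned down up front, the developer knows the moment the code is "Done" - and can re-run the same checks after every future change.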

Your future process should be fully integrated, eliminating time gaps between design, development and testing. Testing should be an activity that starts before development, should go on in parallel to development and should be completed by the time the programmer moves a story, feature or task to "Done".

If you still need a "test phase", always remember that with every single day a defect remains in the software, you increase the cost of poor quality. Think different!

Test Coverage

What you see above is a classical test report as you would expect it within a Waterfall project. Typically, test managers produce such statistics at frequent intervals to report the progress of testing.

As with any diagram, this one isn't worth much without explanation.

The blue bar is the amount of test steps conducted per day, the red bar is the amount of defects the software contained on each day.

Now, what you see is an amount of roughly 15000 test steps being run over the course of 2 weeks and roughly 800 defects discovered.

In large-scale Waterfall projects where I worked previously, this would have been a boatload of effort for a team of maybe 20 testers.

A job well done for the test manager, you can be sure QA would be praised.

What is this metric - really? 

It's the result of automated integration and acceptance testing for one of our products. What you see here are only my personal tests - and I'm not even a full-time developer. All of the tests displayed here, accumulated, ran within less than 1 hour, including defect reporting.
None of the defects discovered made it past the evening. All were fixed within the same business day!

The consequence of such activity?

We use Continuous Delivery, so we are able to deliver working software on a daily basis - and we haven't had a single critical fault in the system on a live installation.

Years ago, when I was working exclusively in Waterfall, I couldn't have believed that a single person could execute as many as 4000 test steps per day. I wouldn't have believed it - and that is probably the more critical learning here:

A single change to a single component could wreak havoc in use cases that merely rely on interfaces and have no direct connection to the implementation of the changed component!
When a programmer might consider their work "Done" - without proper automated test coverage, these beasts are still lurking deep in the dark!
I wouldn't have run the tests if I knew that my code was bugged! I typically stay in development until I am confident that my code is defect-free! I mean, hey - I got unit tests to make sure I didn't do anything bad! (and that's already more than we often had in Waterfall projects)
In a Waterfall, I would have handed the topic off to test and waited for any potential bug reports.
And chances are, testers wouldn't have done a Regression test and wouldn't have discovered the issue before the change went live.

It's not that I became a crappy developer by doing Agile. On the contrary. All of my Waterfall experience is still there. I make fewer mistakes than ever. But mistakes still can happen. The difference is that they don't make it into production any more.

Lesson learned

If you're a Developer, you are well advised to automate tests for your components so that you know if you accidentally broke something. However, don't stop there. Unless you have integration test suites, you may not know when your work has detrimental impact on the system's overall functionality. Chances are if you are only automating functionality and not use cases, your software still works, but your users can't.

Hence, agilists say "Automate everything" (well, everything that makes sense) - it does pay off!
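
A tiny, invented Java illustration of that last point: two components that each pass their own unit tests can still break a use case the moment they meet, because they disagree about an interface detail (here, a date format):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

// Component A writes dates as dd/MM/yyyy - its unit tests pass.
class Exporter {
    static String export(LocalDate date) {
        return date.format(DateTimeFormatter.ofPattern("dd/MM/yyyy"));
    }
}

// Component B expects yyyy-MM-dd - its unit tests pass, too.
class Importer {
    static LocalDate parse(String text) {
        return LocalDate.parse(text, DateTimeFormatter.ofPattern("yyyy-MM-dd"));
    }
}

public class IntegrationDemo {
    public static void main(String[] args) {
        String exported = Exporter.export(LocalDate.of(2014, 12, 10));
        try {
            // Only an integration test exercises this hand-over...
            Importer.parse(exported);
            System.out.println("use case works");
        } catch (DateTimeParseException e) {
            // ...and only it reveals that the use case is broken.
            System.out.println("use case broken");
        }
    }
}
```

Unit tests see two correct components; only a test of the whole use case sees the broken product.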

Tuesday, December 9, 2014

We're Agile now, we don't do testing any more!

Scrum only has Product Owner, Scrum Master and Developer.
Testing is not part of the Scrum process, so we can eliminate it - and save money by sacking the testers.

I've had the joy of working with a company that seriously thought about taking this road.
It was a shock for me not because I come from a testing background, but because it's wrong on so many levels:

Agile means: Deliver valuable software

Buggy software isn't valuable. We don't expect to get praised for a "job well done" if the result isn't both usable and useful. How do you know that it's usable without testing? How do you know it's useful without testing?

Agile means: Deliver value often

Working on bugfixes drains resources. The only way to prevent bugfixes is by not introducing bugs to the customer in the first place. How do you know that you don't have bugs without testing?

Agile means: Deliver value fast

You need a profound understanding of what your components do - and why. Without testing, you will be spending more and more time in understanding the impact of change as the project progresses. Specifically, without systematic testing, you may completely lose control!

Agile means: Eliminate waste

A commonly trumpeted claim: Testing is waste, because if there are no defects, tests don't add any value.
Well. That may sound true if you have a superficial understanding of (software) engineering.
Ask yourself: Would you hire an architect who didn't run the calculations to verify that your house won't collapse over your head? No. Would you ride in a car whose designers didn't validate it against traffic safety regulations? No.
So why do you want to build a piece of software without those checks? 
Because bankruptcy isn't as bad as death? Great! Tell me your company name and I will make a fortune by short-selling!

Agile means: Working with feedback

Each time a piece of code is finished, it should be exposed to customer feedback. Unless you want to look like a fool in front of your paying customers, you'd better have a strategy in place to make sure that they like what they see. You must verify the correctness and validate the applicability of your solution to their problem before confronting them with results. And the formal process for doing both is called "testing".

Not testing is planning for commercial suicide.
This holds true for Agile even more than for traditional projects, because in a Waterfall, you might get the customer to sign off on some crappy product just for political reasons. In Agile, your customer will know within a couple of weeks that you don't know your trade.

Monday, December 8, 2014

The most important role in Scrum

It's a philosophical question: Which role is most important in Scrum?

When I went into Jeff Sutherland's Scrum Master class, he stated that "The Scrum Master is the most important person in the team". In the Product Owner training, I heard "The Product Owner is ..." - and during Scrum Developer training, I heard the same about Developers.
Now, what is the real deal?

Let's look at this slowly.

The Product Owner

Imagine you have no Product Owner.
Who takes care of the backlog, who grooms it? Who makes the calls when stakeholders quarrel about the direction of the product? Who communicates the product vision to developers and stakeholders?
Ok, let's make it short. If there is no product owner, there is no product, there is no project. There is no need for a team - so obviously, the PO is the most important person.

Well, that was quick.
Oh, wait.

The Scrum Master

Imagine you have no Scrum Master.
Who arranges your ceremonies, who takes care of impediments? Who makes sure that management or other stakeholders don't violate the team's self-organization? Who takes care that the Working Agreements are adhered to?
Ok, let's make it short. If there is no Scrum Master, teams will most likely fall into disarray. Maybe not because of themselves, but because of the world around them. A team in chaos will not deliver.
So, obviously, the SM is the most important person.

Now, we've got a conflict already. 
But we're not finished yet.

The Developers

Ok, in traditional organizations, we know that managers think the worker drones are easily replaceable while everything depends on their own genius.
But let's get real: You got a product vision, you got a development process. But who does the thinking, where does the code come from? Who makes the vision real?
Guess what - if there's no developers, the best PO in the world is useless!
So, developers are the most important people!

Everyone is important!

Many agilists use the Pig-Chicken metaphor, stating that an Agile team should only consist of "pigs". I don't like this analogy because, first, I don't consider my coworkers to be pigs, for obvious reasons.
Second, if you're in Saudi Arabia, chances are that nobody wants bacon, rendering the pig completely worthless.

Simply said, if you're on an Agile team, everything depends on your contribution. You are essential to success. If you're not essential, you're in the wrong team!

The Bottleneck issue

Are you terrified of going on vacation? Can you already guess all the disasters that will happen when you're out of office for more than 2 days?
If yes, you're not alone, but there is something you should do about it.

Oftentimes, new Agile teams have the "Problem" that the intensity of work is so high that people are somehow stuck in their role.

For instance, the Scrum Master may not even get around to coaching the team because impediments and ceremonies already fill their schedule: the installation of the Continuous Improvement Process may become secondary.
Or the Product Owner spends all their time answering team members' questions to ensure the product is terrific: good, but the grooming of future backlog items and stakeholder management may suffer.
The worst case is developers taking on so many stories that they lose the time to hone their agility: the delivered product may be great, but we didn't implement any ways to do the same thing faster and easier.

The Solution

Agile methodologies try to eliminate "Single Points of Failure", i.e. having success or failure hinge on one single person.

  • Developers should ideally pair up to make sure that there is no capability required in the project which only one person has. Why? Isn't it easier to replace you if someone else possesses your skills? No! On the contrary: if you can chip in wherever something needs to be done, your value to the company increases!
  • The Product Owner "owns the product" and needs to call the shots. However, they should not feel obliged to create a structure where every minute decision involves them. On the contrary, the PO should communicate the vision so clearly that the team can independently decide what is the best way to advance the product.
  • The Scrum Master "owns the process" and is responsible for ceremonies and managing the impediment backlog. A good Scrum Master will not spend their days running after individual impediments and arranging meetings. Ideally, they will coach and empower the team to do this by themselves.

For both the Scrum Master and the Product Owner, I would refer to an important proverb about aid: "The most important job of a helper is to make themselves superfluous", with the intention "If you enable and empower others to fill your role, you did a good job, otherwise you missed the mark".

Sustainability is one of the Agile Principles, and you will not have sustainable development unless everyone actually strives to enable others to do what they are doing.

Friday, November 21, 2014

How agile are we?

"Hey, we're doing Agile now!" - "We too, but it doesn't work".
How often do we hear this? Too often.

Not everyone who has put the term "Agile" into their IT strategy is doing the same things. More precisely: everyone means something different by "Agile".

As others have pointed out, "Agile" requires more education than a 2-day training course for prospective Product owners and Scrum masters can provide.

Here is my suggestion for an Agile Maturity Model of Implementation (AMMI).

Agile performers

You can't really predict what their practices are, and frankly, they don't even care so much about specific practices or processes. What they care about is: Valuable software and happy customers. Everything else is subject to that and negotiable.
The main thing they have in common: They thoroughly understand the reasons why their processes and organizations look the way they do.
These performers are the reason why the Agile Manifesto is so short.
They follow what could be called a "Scientific truth" (credit: Paul Oldfield). They experimented, adjusted and have gained confidence that their current approach is the best way at the current time. Should they gain evidence that another approach yields better results, they will quickly move on.

Agile transitioners

When getting into a transitioning company, you can most likely predict their processes and organization based on subject literature. You could update the "Scrum Primer" accurately by taking pictures of their ceremonies. They're seriously trying, and probably seeing some improvements over their old methods of working.
Most likely, the way they are doing things is highly imperfect, but hey - empiricism means learning by doing!
What transitioners often have in common is that they lack deeper understanding of how to optimize their agile practices. 
They follow what could be called a "Political truth" (credit: Paul Oldfield). They were convinced by someone that their current approach is the best choice.

Agile cargoists

"Man, we've been Agile for a year now and nothing has improved! We're doing everything exactly how Spotify is doing it, but ROI hasn't improved!" - Agile cargoists religiously follow prescribed patterns in hope of results. Unfortunately, these never turn up. 
Cargoists, at first glance, look like transitioners. The big difference is that their Continuous Improvement Process is either political in nature (i.e. the team is not empowered to change) or fad-driven, i.e. based on researching what others do.
They follow what could be called a "Religious truth" (credit: Paul Oldfield). A reputable source claims that this approach is the best, hence it must be so.


In between lies the entire spectrum from "Don't know about it" through "Read about it, but it's not for us" to "Tried it, didn't work, we're back to classic project management".

Agile by Name Only

Get into an ABNO and you will find tons of Project Managers dressed up as Scrum Masters and Business Analysts / Architects vested as Product Owners.
Most likely, projects start with a Feasibility Study, Requirement Documentation, Budget Planning and Detailed Design before ever involving an "Agile developer".
The "Agile transition" in these companies was fast and painless: managers were sent to a 2-day training course, the titles on business cards were adjusted and voilà: a new agile all-star is born!
Elements of Agile culture appear at best at random; they are not the norm.
ABNOs usually blame Agile for being ineffective and give the entire approach a bad name.
Transitioning an ABNO into an agile organization is probably harder than teaching a granite block how to swim.
The best thing about ABNOs is that they will return to the Non-Agile department sooner or later - most likely after firing most of their "ineffective developers". At least they'll have plenty of managers left.

Tuesday, November 11, 2014

Slicing and dicing user stories

To really understand the effect that the Product Owner has on the success of Agile development, it is vital to understand the art of creating good user stories to deliver results.

As was mentioned in a previous post, a poorly chosen user story will make it tough for the team to deliver, wasting time and potentially effort with poor control over the investment. On the other hand, a well chosen user story will support the early delivery of valuable results.

The creation of good user stories depends on a technique called "Slicing", i.e. preparing user stories in a fashion that they become immediately useful once completed.

Horizontal slicing is bad

A poor approach to slicing is cutting a huge story into horizontal slices. Let us use our "World Hunger" example to explain this approach:

We may want to resolve World Hunger in the following manner:

  1. Obtain enough food to feed the entire world
  2. Gather enough volunteers to distribute this food in the entire world
  3. Establish the logistics to accomplish the distribution process
  4. Obtain political support (permission, visa, military/police protection) for the distribution
Let us just assume that we had successfully completed steps 1, 2 and 3 - but we don't get permission!
We have everything in place, yet a bunch of nuts sitting in well-furnished offices block our results for whatever reason, or maybe for no reason at all.
While we grind our teeth in frustration and our supplies spoil, people continue to starve.
A whole lot of resources were wasted, a lot of time was spent - yet there is no visible result!

What happened? Dependent layers were built, and all activities on one layer depend on another layer for success. We can't even know whether our concept works until all layers have been completed - well, at least to some extent. But until all layers are done, there is always this looming sense of failure.

(When this happens in the corporate world, you would find some CxO closing divisions, cutting funding and doing whatever else is bad for your career.)

Delivering with vertical slices

As previously discussed, a vertical slice is intended to deliver fast, with limited scope.
"Feed one person for one day" is a limited task, it produces visible, measurable results and permits scaling.
The result from "feed 1 person for one day" is visible and can be subjected to Continous Improvement. Each day, each person contributes to success and regardless of how far we get, the feeling of accomplishment stays with us.

Once 1 person has been supplied, we can enlarge the process to feed 2, then 5, then 10.
All of a sudden, we will see that we are more efficient by centralizing the supply process, so we set up a user story like "As food distributor, I would like a central place where I can always pick up the food baskets so that I save time in getting food to the Needy."
Once we reach 100 people fed, we may get into stories like "As charity, we need media coverage, so that we can obtain funding to feed another 1000 people"
These stories are again, limited in scope, provide a good measure of success and can be completed in a limited amount of time.

Maybe we need to rework our former approach (by Refactoring) but maybe we can just continue doing things as before.


Lean identifies multiple types of "Waste". One of these types is producing stuff which we can not use yet ("Inventory"). Another type is producing stuff which may never be used ("Overproduction").
Well-sliced stories which resolve (or even better: eliminate) dependencies on other activities help the team reduce the risk of delivering features which will not be useful for a long time - or potentially ever.

Good story slicing contributes positively to the visibility and value of results while eliminating waste.
Therefore, the Product Owner is a key player in enabling team success.

Monday, November 10, 2014

Making stories manageable

As a Human Being, I want to end World Hunger so that there is no more hunger.
Wow, I've written my first user story and it's even correct based on the template. Now my team can get going and because they are Agile, we will have a solution in no time!

What? You have some feedback proposal? ... mmh, okay. :(

Well, poor stories are often "World Hunger Projects", and this one is no different: it's just a bit more straightforward to recognize.

Too many times, in projects I see stories like this one: As a user, I would like to see my data so that I can see what I have to do next.
And the poor developers are stuck with a mess and can't deliver at sprint's end!

Let's take the example of the World Hunger Story to practice a little bit of slicing:

User Roles

Human Being - hmm, that looks pretty big scope. What is your role in all of this?
Are you a human being who is in the fortunate situation of having excess wealth and want to share it? If so, you are a "potential donor".
Maybe you are the founder of a charitable organization dedicated to actively doing something about hunger.
Are you "afflicted by hunger" and need food for yourself? In that case, your perspective is different.
Or maybe you're a "homicidal maniac" and propose dropping a few nuclear missiles on hunger hotspots. (Hmm, maybe as a PO I would pass on that story.)

By identifying different user roles, it becomes significantly easier to focus on what actually needs to be implemented.

Specific goal

Before we get into the What, we must be clear on Why we want something to be realized.
Well, "that there is no more hunger" sounds cool, but it's not all too clear. How can we tackle this?
Let's put ourselves into the seats of a charity organization.
We have a whole bunch of problems, to name a few:

  • We need to have something to give to the hungering people of the world:
    • Do we want them to have food for the moment?
    • Do we want to help them obtain their own food?
    • Do we want to resolve a specific crisis?
  • We need to obtain funding
    • How do we raise our funds?
    • How do we manage our funds?
    • How do we spend the money wisely?
  • We need people with whom we collaborate:
    • Do we need speakers?
    • Do we need celebrity sponsors?
    • Do we need skilled workers who are willing to go into the region?

Each of these is a specific objective to tackle - and even these may still be too big to tackle with just one team and within a limited amount of time, so we may want to get even more specific!
But let's just stick to one goal now.

Specific approach

So, now we are the charity who wants to give food to the Needy.
A whole bunch of ideas may come to mind:

  • Food baskets for the neighbour in need
  • Soup kitchens in the metropolises of the world
  • Shipments of wheat/flour/rice to central Africa

Putting it all together to deliver

Now we may create a much more focused story card:
"As a charity, we want to give food baskets to the neighbour in need so that nobody who lives in our community would have to go to bed with an empty stomach".

It won't really solve the entire problem of World Hunger, but it's a step, and you can have visible results in no time.
The team can brainstorm on the contents of the food basket, identify a needy neighbour - and maybe it can be delivered on this very day, working with real user feedback ("Thanks so much for the milk, but I'm lactose intolerant") and improve their strategy from there.


A good user story helps the team deliver visible results in a very short time without falling into the trap of missing the mark, running into Analysis-Paralysis and many other problems.
Good stories help the team deliver, motivate the team, enable Continuous Improvement, and allow the PO to check the direction of the implementation and adjust.
And most of all, they provide real value at a limited investment.

Six types of Agile Products

We're Agile now. We don't need a Project Plan any more, we're freed from deadlines and we don't have to work off predefined requirement lists any more. Yay, this is gonna rock!
Let's go for it!

Not so fast, young Padawan!

Agilists don't abolish processes or planning. When we say that we value adapting to change over following a plan, it doesn't mean we don't plan!

Agile projects might be even more rigorous than any traditional project you have ever worked on!
They are set up differently. Nobody went out to obtain a $100m funding up front, "be seeing you in 5 years with the results". (And nobody in their right mind should fund a project like that!)

Agile's "Inspect and Adapt" follows the decades-old acronym "WYSIWYG": you get what you see. 
This goes both ways. The project will not continue to receive funding when there are no more results!
So, before you go out on your next hackathon, be prepared to deliver as much as possible at any point in time.

Agile products grow over time. There are four primary growth stages of agile products:
  • Seed - "We know what we want to produce, but haven't really started"
  • Startup - "Here is what we're working on"
  • Initial - "Go ahead and use it - we know there's still lots of things we must work on"
  • Mature - "If you want more, you gotta pay for it!"

In the seed phase, the stakeholders come together and define the product vision. At this time, we may have a project homepage and a project team, but nothing to show for it. As a newly formed venture, we might go to Kickstarter, but we need a compelling reason why anyone should fund us.

Actually, unless we have some proof that we can do what we've set out to, chances are we won't get this funding. So, we need to produce something to get an initial taste.

There are three things we can do; we don't need to do all of them, but we should pick our options:

The Mock Prototype will give the stakeholders and future customers something tangible to connect with our vision. It will give them an impression of what may one day be - and it will give us an impression of whether we're aiming in the right direction!
In the mock prototype, we'll probably resort to mockups, and we want to spend as few resources on shoving it out as possible. If we can salvage some code from the mock prototype for the final project - cool, but I wouldn't count on it.
We may apply techniques such as Rapid Application Development to put out one or more prototypes and set the stage for the final product.
Spending more than a few man-weeks on a prototype is probably already the death of our project, so we need to do this in a cost-effective and highly disciplined manner.

Once we know what we'll be building, we should produce a Walking Skeleton that features most of the end-to-end components of the final product. For instance, we might produce an application consisting of a webserver, a database and a web frontend. It's not going to do anything fancy, but that's not the point of the skeleton. We want to know if we have the right means to get the job done. Filling in the functionality is for later.
Let's use an example: if we are trying to build a data mining application but already fail at the "store data" part because we can't maintain our data, we don't need to worry about building a nice user interface or tuning performance.
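As a sketch of the idea (all class and method names here are made up for illustration), a walking skeleton for such a data mining application could wire a trivial store, engine and frontend into one end-to-end pass - none of the layers does anything fancy yet, but the whole chain works:

```java
import java.util.ArrayList;
import java.util.List;

class DataStore {                       // stands in for the database layer
    private final List<String> rows = new ArrayList<>();
    void save(String row) { rows.add(row); }
    List<String> load() { return rows; }
}

class MiningEngine {                    // stands in for the "data mining" core
    String summarize(List<String> rows) { return rows.size() + " rows stored"; }
}

class WebFrontend {                     // stands in for the UI layer
    String render(String result) { return "<p>" + result + "</p>"; }
}

class WalkingSkeleton {
    // One end-to-end pass through all layers: store -> mine -> render.
    static String roundTrip(String input) {
        DataStore store = new DataStore();
        store.save(input);
        return new WebFrontend().render(new MiningEngine().summarize(store.load()));
    }
}
```

If the "store data" layer can't be made to work, we find out here - before any effort goes into the frontend.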

As soon as we have a reasonable amount of certainty that we can make our vision real, we should work on the Minimum Viable Product.
The MVP is a usable piece of software that does something in line with the product vision, so we are talking about working software produced using good engineering practices.
The main purpose of the MVP is to obtain fast feedback from users about what we have built so far - and what we should be building next!
Because of this, the Product Owner has to make some seriously tough calls on what should be part of the MVP and what not. A good PO should never fall for the temptation to load the MVP with features - in this stage, less is more!
While the Walking Skeleton is a great basis for the MVP, there are sometimes reasons for building an MVP which doesn't even meet Walking Skeleton functionality.

While we can "alchemize" on the Prototype and to some degree even the Walking Skeleton, we need to have some form of engineering framework in place to produce the MVP. As the product is just in the startup phase, Scrum is a perfect methodology to build up both the team and the product simultaneously.

At this point, note that effort has already been invested, but there is no measurable ROI for investors yet.
The value of the MVP is close to zero.
Because of this, the startup phase must be minimized both in cost and time. Chances are if you take more than a few months to leave the startup phase, you require a lot of goodwill from your investors.
For your own sake, if you can't see yourself reaching the MVP anytime soon, rather let the project die!

You want to get into the "Initial Market" phase as fast as possible.
This phase determines everything for investors. A good product will start generating revenue in this phase, because you have a Minimum Marketable Product, something that you roll out "ready for use". You want people to actively use the MMP for more than play. If you can't get customers hooked on the MMP, you might want to cancel the project, because it will most likely be wasted effort!
The transition from MVP to MMP requires delivering the most critical stories in your backlog as well as continuously staying in touch with the customer base in order to refine the backlog.
Techniques such as A-B testing will give you feedback on how customers like different approaches to new features.
If your engineering practices are still lacking in the MMP phase, you will most likely kill your product due to technical debt sooner or later.
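A minimal sketch of the A/B idea, assuming a hypothetical `variant` helper: assign each user deterministically to one of two variants by hashing the user id, so the same user always sees the same variant across sessions, and the groups can then be compared:

```java
class AbTest {
    // Deterministically bucket a user into variant "A" or "B" by hashing the id.
    // A stable assignment is essential: a user who flips between variants
    // would contaminate the feedback for both groups.
    static String variant(String userId) {
        return (Math.abs(userId.hashCode()) % 2 == 0) ? "A" : "B";
    }
}
```

Real A/B frameworks add traffic splitting ratios and significance testing on top, but the stable-assignment core looks like this.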

Let's hope your project has survived the MMP phase and is now fully matured.
A mature product has an established customer base and should have generated sufficient ROI to make the investment meaningful. At this stage, the backlog no longer consists of any critical features and most things to be done are merely extensions of existing features.
Since there is nothing which must be added to the product, it is no longer essential to follow a rigorous short-term delivery routine such as the Scrum timeboxes. Also, since the criticality is low, funding may be reduced.
Agile practice recommends shrinking the team(s) at this stage and moving some resources to more critical projects. The reduced team is mostly in "maintenance mode" and occasionally produces new features, so Kanban may become the team's preferred delivery framework.

The agile product lifecycle is mostly driven by value: you must always find fast ways to deliver more value. Every product type described above has different objectives, but all can help the team succeed.
Different aspects of agility become relevant in different stages of the lifecycle: initially, XP practices determine "make or break" - during the bulk of the project, Scrum is a great framework, and in later stages, Kanban comes to rise.
You should never be overly focused on a single agile framework. Always look out for the approach that helps you deliver more value. Be ready to transition!

Thursday, September 11, 2014

The Daily Standup - Stick to "DONE"

I've seen it happen time and again:
When Waterfall projects turn Agile, people are unfamiliar with what they should do in the Standup, so they resort to old habits and give a status report: "I'm currently refactoring the Service Core by adding the Bombastic Framework, it's about 80% finished, but there's some issues with the parameterization in the bean class... ".

Great, I can already smell the 30-minute+ standup where none of us will be any wiser at the end.

Especially in Pair Programming, where people don't have individual accomplishments at all, it's pointless that first I report what I did (together with you) and then you get up to parrot me - or worse, you just state "Yeah, me too".

If our Daily Standup looks like this, we might as well skip it: it's most likely just wasting everyone's time and causing unnecessary interruptions in the work.

What many people fail to realize: Nobody cares how you're spending your time and how far you've gotten! In Scrum, everything that matters is what will be done, and when!

Here is what I'm looking for in the Daily Standup:

- Which stories have we Delivered, in terms of our Storyboard?
- What are our Obstacles [read: Impediments] blocking us from reaching the sprint goal?
- What should we do NExt in order to get closer to the sprint goal?

If you lack this information, the sprint goal is in danger!

I care what we have delivered, because I care whether the customer will be satisfied with the sprint output.
I feel uneasy when the team focus shifts away from "stories delivered" and towards "work done": It's an alarm sign! If we're consistently busy without delivering, we should take a "Time-Out", stop what we are doing and rethink where we are going!

I care about our obstacles, because we should make sure everyone can perform well.
I don't want you to pull impediments out of thin air to satisfy the doomsayer within me - I want you to be prepared to concisely and precisely describe the impediments you are dealing with. Like this, we can tackle the issue rather than watching you waste time. Our success depends on removing impediments!

I care what we do next, because I want to know how I can contribute.
Maybe it's something where I have some expertise and give you a head start, maybe I have some concepts on how it could be delivered easier. Well, call me maybe ... no seriously. I want to contribute and if I can lend a hand so we can deliver more value with less effort, that's gonna make our team greater!

If you take these three things out of the Daily Standup, you have gained important information.

For your next standup, your team should consider this challenge:
For each of D - O - NE, prepare exactly one sentence.
Your standup will be significantly shorter than 15 minutes and you'll take more out than with all the yada yada!

Thursday, July 31, 2014

Incremental Design at its worst

Is this a representation of your system architecture?

What does Incremental Design actually mean, how does it come into being and what are the consequences?
In the most positive world, an incremental design has not only been tailored to the practical need of business, but constantly adapts and evolves, so that the system architecture smoothly covers every need.

In practice, oftentimes unenlightened teams consider incremental design as absolution for failing to plan the long term.

Let us look at the example of the power supply of Ho Chi Minh City you see above:
Whenever someone needs a new electricity line or a current line is broken, the city workers quickly lay out a new cable and the need is met. Just drop the city a call and you will have a working solution within a couple hours! No waiting, no queuing slot, no formalities - just fast, working solutions!
That's agile, isn't it?

Practiced agility

What we see is an organization applying some agile values and principles, delivering results at a high pace for many years already: It's great!

Let's look at the team of electricity workers applying agile values here:
  • Working software (power supply) over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan
Results first: forget about complex planning - you know which line is needed, so just lay it.
Getting the customer supplied with electricity comes first and foremost. The Definition of Done clearly contains "Customer can use electricity"; management, bureaucracy and quotas are secondary.
If a line malfunctions, or the location of the source doesn't permit more cables - no problem: no job will be left undone. Just lay another line!

Now, let's look at the Agile Principles applied:
  • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

  • Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
  • Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  • Business people and developers must work together daily throughout the project.
  • Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  • Working software (power supply) is the primary measure of progress.

Yeah, there are a few Agile principles not so well applied, but we'll get to that later.

The Not-So-Agile part

As in any agile team, there is improvement potential. You just need to look at the picture above and you will see some potential.

The first thing that will spring to mind: How do they locate a defective line?
Short answer: You don't need to, when laying another line is cheaper than fixing the old line.

This is something you will also often see in software development.
It is a truism that adding a new, minuscule element to the architecture which fulfills the business need is often significantly faster than analyzing the fitting place, then designing and implementing only the minimal possible change.
We call this "local optimization".

Why is local optimization a problem?
Because in the long term, there will be nobody in your organization who is capable of telling exactly what each element is doing, and because in the end, the same thing will have been done hundreds of times - in nearly identical fashion!

Each newly laid electricity line not only fulfils its job, but also keeps the existing lines stable: How can this be bad?
Well, first you have to think that probably 90% of the cables you see in this picture could be joined into one cable. Not only would the cityscape look more beautiful (are you really a developer who doesn't care if your code looks terrible?) but there is a problem at scale:
Yes: 10 parallel lines work. 50 parallel lines also work.
But once you hit 1000+ lines, good luck controlling the Electric Fields ...

The same happens in software design: Growing complexity is no problem until you end up with interferences which stem from an uncontrollable amount of sources. This is typically the point where "emergent design" turns into "emergent chaos" and business comes up with a good case for system decommissioning!

The solution

There is an Agile Practice which is sorely needed to prevent this kind of disaster: Refactoring.

If you do not want your system design to resemble Ho Chi Minh City's power cable design, here is what you need to do:

Yes, it is sometimes OK to implement "quick and dirty" fixes to supply the business with the needed solution. But you should never consider a job "done" when business can work. You should consider it "done" when you cleaned up the mess.
Refactoring, ideally, should come either instead of "quick and dirty" or in the timespan between the first passing of acceptance tests and delivering the solution to business ("Red-Green-Refactor").

There are some situations where business criticality (for instance, human life depends on it) prohibits turning "quick and dirty" into a more beautiful design.
However, this should be noted in the backlog and scoped for cleanup at the nearest possible point in time.

Let's go back to our power cable example:
A second cable is totally fine and does not warrant refactoring, it's actually more failsafe!
A third doesn't either.
But when I come to a place where there are already five or more lines in place, I should go back to the office and put a mark in the backlog: "Too many lines here". The next time I move out, I don't just lay one new cable - I lay one new cable in a fashion that lets me decommission four, so the total amount of cables will be two!
We call this technique "Optimize the Whole", and it comes from the Lean set of practices.

Do this in your software design, too:
Once you encounter a piece of code which looks like a tangled mess already, choose the closest feasible point in time (ideally: now!) to untangle the mess and decommission superfluous code elements.
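In Java terms, a common way to untangle such near-duplicates is to keep a single code path and inject the part that varies; the superfluous copies can then be decommissioned (a sketch with made-up names, using the standard `IntBinaryOperator` interface):

```java
import java.util.function.IntBinaryOperator;

// Before: several classes each carried a near-identical compute() method,
// differing only in the operation applied (x + y here, x - y there).
// After: one class; the varying operation is injected, the copies are removed.
class Calculator {
    private final IntBinaryOperator op;

    Calculator(IntBinaryOperator op) { this.op = op; }

    int compute(int x, int y) { return op.applyAsInt(x, y); }
}
```

Usage: `new Calculator((a, b) -> a + b).compute(2, 3)` yields 5, and the subtracting variant is just another lambda - no switch statement, no parallel class hierarchy to maintain.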

Thursday, July 3, 2014

The "Project Manager" Game

Project managers are important contributors in Waterfall projects and there is an incredible weight on their shoulders. Good project managers will go to great lengths to make their projects successful.

Then again, in Agile we don't have Project Managers. And we don't miss them. No offense.

So, how can we be successful in Agile without a Project Manager?
Truth is - everything that is needed still somehow gets done.

I learned the "Project Manager" game from Jeff Sutherland's CSM course.
It's a fun pastime and takes only about 15 minutes. You may even throw this into a retrospective to lighten up the mood and dig for some insights!

Here's the rules:

Round 1: Everyone on the team brainstorms responsibilities of a Project Manager (5 Minutes)
Do not add things which a Project Manager should probably be doing but doesn't do.
This is not a test of whether the team qualifies for PMI certification.

Round 2: Assign to each responsibility at least one of the labels (5 Minutes)
    1.  "Team Member" (T)
    2. "Product Owner" (P)
    3. "Scrum Master" (S)
    4. If you think that the activity is not needed in Agile, label it: "Waste" (W) 

There is no "right" or "wrong" in this game. It's about the team's perception!

This is an example of what the result may look like:

Here are some things you should look out for in the results:

  •  "Every responsibility goes to the PO / SM": you have a vested Project Manager and are probably missing serious collaboration benefits.
  • Process relevant topics solely within the SM domain: your Continuous Improvement process may be impeded.
  • The team can quote the PMBOK by heart and can't find a single activity that is "Waste": there are probably severe organizational impediments towards agility in the organization
  • Planning related topics (scope, budget, objectives, timelines) aren't even on the agenda: Dig deeper to find out how the team conducts planning in the agile implementation!
  • There is an excessive focus on non-value adding activity such as listing myriads of different reports, and these are all assigned to the Scrum Master: Maybe the team's level of self-organization should be increased?

Saturday, June 28, 2014

Seven deadly sins of Agile

In theology, a deadly sin is believed to destroy the life of grace and charity within a person and thus creates the threat of eternal damnation. The "seven deadly sins" are not discrete from other sins, but are considered to be the origin of others.

There are a lot of things you can do wrong as an agile organization.
Many impediments are easy to resolve once you are aware. Some, however, may brickwall your team and will sooner or later kill your project - or maybe even your company!

Here is a list of seven potential agile "deadly sins" which can become the root cause of an insurmountable pile of impediments:

1 - Not having any problems

Every organization has problems. In fact, every person has problems: Nobody is perfect.
Not having a keen view for improvement potential will cause the team's velocity to stagnate. Once you stagnate, the first big hitter will devastate your output capacity.
Hyperproductive teams don't get to this stage because they are able to avoid all problems, but because they spot them early and deal with them effectively.

2 - Not tolerating failure

It is nice to work in an environment where everything can be anticipated well in advance. In this case, agile really isn't the best way to go. Reality usually looks different: work includes uncertainty and there is always a looming risk of missing vital information.
Catching such failures fast makes adjustment easy. Hiding small failures will result in big failures.

3 - Lack of trust

Trust means that if I make a mistake, I can be sure that you will not use it to report my weakness to management - you will simply correct it, preferably with feedback, so that I may learn for the future.
Unless we are a mutual safety net, catching each other fast and smooth, failure will be painful. And unless we can rely on painless failure, we can't take the risks we must take in order to deliver high value at high velocity.

4 - Pride

Agile thrives when every team member contributes to the best of their ability.
It is OK to have someone who has more ability than the rest, but it should be made that person's primary responsibility to get the rest of the team up to speed. Tolerating any form of primadonna antics will stifle the growth of the team. It will also put the project at tremendous risk if this person becomes unavailable.

5 - Lack of automation

Doing simple things is often quicker manually than automated.  Unfortunately, a million simple things are still a lot.
Time saved by not automating often comes at the price of decreasing sustainability and future velocity!

6 - Lack of ownership

Agile products become great when everyone on the team cares. A team developing a product "for the product owner" with a low level of engagement will damage or even kill the product in the long term!

7 - Lack of focus

Agile projects become legends when the entire team is focused on delivering one product with maximum value.
It is very easy to lose this focus, for example by diluting the product vision, by overemphasizing internal practices, by having too much turnover in the team. In any case, the result will be a dissatisfied customer.

Thursday, June 12, 2014

Agile Cargoism

In the South Seas there is a cargo cult of people. During the war, they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they've arranged to imitate things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas--he's the controller--and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land.
- after Richard Feynman, "Cargo Cult Science"
From a Westerner's perspective, the actions of the Cargo Cult are ludicrous, but ask the practitioners - they feel entirely different about the matter!

But - what is Cargoism?
Cargoism is the firm belief that by imitating hard enough, you will produce the same results as other practitioners. If you're not successful, it's because you didn't imitate well enough. A cargoist doesn't realize that they are missing something, that the big picture is different from their perception.

James Shore wrote a good article on Agile Cargo Cult teams back in 2008. It's worth the read.
But an entire organization can become an Agile Cargo Cult!
I've heard it many times: "Google is successful with Agile, Spotify is successful ... so we can also be!". Then, they spend big money to send their managers to CSPO or CSM training and establish Scrum. Of course, the executives hope that projects will run faster and more successfully once the practices are adopted.
They hear about Agile, they see all those successful projects and they firmly believe: "Let's do Agile, and we'll be much more competitive."
So, team managers are renamed to Scrum Masters, project managers become Product Owners - the Releases are renamed to "Sprints", each of which is started with a Sprint Planning Meeting. Every day there's a Standup. At the end of a Sprint, there's a Review and a Retrospective.
Now, we're set up. The world is just waiting for our greatness to become manifest! We'll be a successful Agile Organization in no time - and our stock will skyrocket far beyond Apple, Google and Microsoft!

Oh wait - I've described the typical organization that will stop "Agile" after maybe 2 or 3 years, shelving the concept for good. Disillusioned employees will say "Agile is just a different name for the same old wine" and managers will say "We've tried Agile. It doesn't work, it's just a fad".

Blinded by the promise of easy success - fast, high quality results at a low cost - they adopt the practice in hopes of valuable returns: Cargoism at its finest! Someone did make a good sale to them, probably earning good money on all the training and coaching. Dig deeper:
  1. Do managers loosen the reins?
    Agile means giving power to those who need the power to succeed. A Command+Control structure is not only inherently inefficient, but it prohibits success. Maybe people can't do what is right because they still need management approval. In this case, management may proclaim a million times "We're Agile" - but at the same time, they're agility's biggest impediment!
  2. Do people understand that Agile methods like Scrum don't solve any problems?
    Scrum, for instance, just makes problems visible. Did a real, genuine problem solving process get established? Are fundamental organizational issues resolved - or accepted?
  3. Who really owns the products?
    Are people aware that Product Owners champion the product - but that the product really belongs to the team? Do developers take personal ownership of their work, are they proud of their contribution to the product, or are they delivering products just because it's their job? Do the teams even have what it takes to own their product, or do they rely on external patronage to deliver anything?
  4. Who defines the development process?
    Do developers follow corporate governance or are they empowered to do what is right? Do they really care to do what is right - or do they prefer to have someone tell them? Do they not only care to do their job, but to do it well? Do they improve their working process because they see the need - or because the Scrum Master pushes them?
  5. Do people realize Scrum Ceremonies are actually the wrong idea? Do they realize teams should not have to have:
    1. Standups, as mentioned by James Shore. If everyone knows what we're doing, and the PO is continuously involved - the Standup will add no value.
    2. Reviews. If the team were directly engaged with the customer either with direct collaboration or techniques like A/B testing, they wouldn't need to batch up for a week or two before getting things signed off. 
    3. Retrospectives. If something is going wrong, why wait until the end of the sprint before bringing the issue up for improvement? That's insane! When you see that something is wrong, go fix it!

If you hold the firm belief that by rigorously applying the practices, you will become more Agile, you have already failed. Behind Agile, there is a spirit. This spirit is not visible, but it may be manifest in agile teams. If you do not lay hold of this spirit, all your efforts are doomed. You will not be successful.
"Copy the spirit, not the form", said Yoji Akao - and that is as true for Agile as it is for Quality Function Deployment.

Without the agile spirit, you'll be weaving Agile landing strips and donning Agile coconut headsets until your organization falters, but you won't receive the precious agile cargo, no matter how devotedly you worship Agile Cargoism.

Power to the Product Owner

We call the Product Owner the "single wringable neck" of any agile project.
It's a good thing to have one person who is responsible for leading the product to success.

But what happens when you don't let the Product Owner lead the product to success?

The situation has been explored by many agilists, such as in this blog or in this presentation.

My team was tasked to deliver a B2B interface platform.
The team did the coding and got the solution tested. Then, we hit a brick wall: IT Operations.
The department was raising issues with topics like firewall clearance, hardware, maintenance windows, capacity management and ... don't even ask. It wouldn't even be right to say that they were wrong with their reservations.
Anyways - we had a delivery date, we had potentially shippable software, but no server connected to the other side.

Long story short, when the CTO realized that this was endangering the product launch, he simply declared "Use whatever resources you need, you have complete control over the department's processes and priorities".
It didn't even take 2 days and the server was up and running!

Lesson Learned

It's not enough to have a PO who understands the product and grooms the backlog. The PO needs to have the full power to make the product successful.
If your organization brickwalls the success of the product with politics, structures or anything else, then don't strangle the PO for not delivering. You need to have a PO who can make the calls whenever necessary, wherever necessary.

Tuesday, June 10, 2014

Waterfall Kanban

Kanban is a very lean, efficient process for getting work done.
Since Kanban minimizes Work in Progress and makes bottlenecks visible, I thought it would be a great way to increase the flexibility of our software development.
Back in 2008, when I had just gone through Lean Six Sigma training, I saw Kanban and the first thing that came to my mind was: "Let's try this!"

Setting up a Kanban board is easy.
Explaining the concept of Kanban is also easy.
So, we started.

Initially, we did really well:

  • Breaking down the work into bite-sized portions made analysis, development and test significantly easier.
  • The high visibility of which topic was currently at which stage of the development process made managing the product easier as well.
  • Working on a pull principle significantly reduced the stress associated with large projects.
  • Seeing multiple cards fly through the board from left to right, getting things done at a rapid pace, significantly improved team morale.
So, Kanban was a big success.
However, a couple weeks down the line, problems started to set in.

What went wrong?

We did Kanban in a classical Waterfall approach.
The swimlanes we had were not "To Do, Doing, Done", but rather:
"Open, Analysis, Development, Test, Done"

Initially, this approach was fine - until the stories got more complicated. Then, test simply bounced defective stories back to Development. Some stories even came back to Analysis, because underlying assumptions were disproven in test.
We ended up with a whole mumbojumbo where the Development swimlane was plastered with post-its while Analysis and Test were idling.

Lesson Learned

You can only prevent a clutter of stories in an unfinished position if people don't have to idle while other stories are being processed by other people. If anyone needs to wait for someone else to finish, your Kanban setup is doomed!

If you want to be successful with Kanban, you must make sure that you have "T-Shaped People", so that the team can support each other as backlog items move through.

Ideally, you do Pair Programming and the same pair is responsible for the story end-to-end, eliminating unnecessary handovers. Make sure that everything required to successfully complete the story is available in the team: skills, tools, information, ownership.
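One way to make the clutter visible in code terms: a toy board sketch (all names hypothetical) that refuses a pull once a column's WIP limit is reached - exactly the brake our "Development" swimlane was missing:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal Kanban board sketch: a pull is refused when a column's WIP limit
// is hit, so bounced-back stories can't silently pile up in "Development"
// while the neighbouring stages idle.
class KanbanBoard {
    private final Map<String, Integer> limits = new HashMap<>();
    private final Map<String, Integer> counts = new HashMap<>();

    KanbanBoard limit(String column, int max) {
        limits.put(column, max);
        return this;
    }

    // Returns true if the story entered the column without breaching its WIP limit.
    boolean pullInto(String column) {
        int current = counts.getOrDefault(column, 0);
        if (current >= limits.getOrDefault(column, Integer.MAX_VALUE)) return false;
        counts.put(column, current + 1);
        return true;
    }
}
```

A refused pull is the signal to swarm on the blocked column instead of starting new work - that is the point where T-shaped people earn their keep.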

Sunday, June 1, 2014

Just get it done quickly - whatever "done" means ...

Imagine the following:

You go to a restaurant, order chicken wings with fries ... and then wait.
When you finally ask the waiter, he says "Oh, your meal is ready. The cook already went home".

Bewildered, you go into the kitchen to find out that yes, indeed, your order has been processed. Somewhat. There are half-baked chicken wings without spices in the oven - and some mashed potatoes in the pot.

Which brings me to the topic.

I was working for a client who had one developer working offshore. Because he wasn't part of the core team, he only had specific tasks assigned that could be completed stand-alone.

So, this one glorious Friday, he dropped an E-mail "Done. I'm off for 2 weeks of vacation" - and boarded an airplane right away to a different continent.
He had an assignment with customer impact and a clear deadline.
To our dismay, we discovered that there was no code commit. Maybe his mind was already on vacation, but still: we had an issue.
So, we called him as soon as his plane landed to discover he had simply forgotten to commit his code to the central repository.
He had to call a friend to come over to his house who, with detailed phone instructions, managed to commit the code after many hours.
And that's when things started to get funny. We couldn't even build the software successfully: He had taken the liberty of adjusting the core engine to suit his implementation of the solution!

Needless to say, there went the deadline.

Lessons Learned

There were so many things that went wrong here, I don't even know where to start.
And don't even get me started about hiring a single developer offshore to assist an onsite team: that was a business decision made by someone else.

First things first, it is a mindset thing: "Commit early, commit often". It should be habitual for every developer to commit more often than they drink coffee. We sometimes commit as often as 100 times per day in a small team of 3. If you see that someone isn't even committing daily, you seriously have a process problem and risk losing work!
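To keep that habit cheap, a tiny helper can do. Here's a sketch in shell - the "wip" name and the message format are my own invention, not a prescription:

```shell
# Hypothetical "wip" helper: stage everything and commit with a throwaway
# message, so a checkpoint costs one short command instead of a ritual.
wip() {
  git add -A &&
  git commit -m "wip: ${1:-checkpoint}"
}
```

Run it after every small step; squash or reword the checkpoints later if your team cares about a tidy history.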

We didn't have any automated test coverage, so he thought he was doing fine. Especially when working in distributed teams, it is essential to have good unit and regression coverage: it creates a safety net for developers. We didn't have that, so we essentially created the environment where his mistake was possible!

Of course, offshore communication is difficult, but the problem here wasn't just that he forgot to commit. The problem was that although we had some form of CI, he considered it "nothing out of the ordinary" to only build the software locally to verify his implementation. Every team, but especially a distributed one, is well advised to have one single central integration system as the single point of truth for everyone on the team: If it doesn't integrate there, it's not done - regardless of how nicely it works on your own machine!

Friday, May 30, 2014

Wrong assumptions - Take nothing for granted!

As a Product Owner, my primary responsibility is for the "What" of the team, not for the "How".

This week, we had a completely new product and so we formed a new team and did what we always did in projects ...

Communicate the product vision, set up the backlog, define Working Agreements - and get the action on!

We're doing 1-week sprints, because they work very well for our company and those weekly Retrospectives are really valuable.

Anyway. So, we had our Review and the team had done a good job: they completed 14 Story Points and did a lot of groundwork that will help them reach a higher velocity later on.

Or, so I thought.

After the Review was over, I casually asked "So, how many unit tests did you write this week?"

... deafening silence ...

Guess what?

The team hadn't had an explicit Working Agreement to write unit tests - and so they didn't!
It's a good thing we have weekly sprints: The Retro will take care of this.

Lesson Learned

Never, ever take anything for granted - you never know how each person will interpret things!
Better to be explicit about the Engineering Practices that will be employed in the project than to accumulate technical debt.

Wednesday, May 28, 2014

Continuous Integration that isn't

I have seen people who run Jenkins and claim that they have realized CI.

Actually, I was one of those people, a couple of years back.
Before you ask, no, it wasn't in an agile team.

Configuring Jenkins is easy, and so is getting it to pull a repo, create a build and deploy it to a server. But that is not Continuous Integration!

So, here is my story:
We were in a customer project and pretty much nobody had heard of CI. Only one guy had an idea: "Why should we manually deliver software to the testers? There are tools out there - let's do CI!"

And so we did.
Jenkins was up and running. Whenever the team manager pressed the button, the software got deployed to the Test Environment, and the downtime for a new deployment was reduced from a couple of hours to less than 5 minutes. As the testers knew when the "Deploy" button was being pressed - usually Friday EOB - testing was not affected by any downtime at all.
A big benefit!

However, the first thing that usually happened after the deployment: something didn't work.
Like, for instance, the web server. Or the Messaging Queue. Or the database. Or the business processes. Or anything else that used to work in the past.

Lesson learned

Continuous Integration is so much more than automating the build/deployment chain and reducing outages to a couple minutes.
CI shouldn't result in outages in the first place. You can use techniques like Parallel Deployment to attain Zero Downtime for patches.
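To illustrate the idea - directory names are made up for this sketch - a parallel deployment can be as simple as running the new release next to the old one and flipping a symlink in one atomic rename, so there is never a moment without a working version:

```shell
# Minimal blue/green sketch: "current" is a symlink the web server follows.
# The new release is prepared completely off to the side; the old version
# keeps serving traffic until the one atomic rename below.
deploy() {
  release="$1"                     # e.g. a freshly built release directory
  ln -s "$release" "current.tmp.$$"  # build the new link next to the old one
  mv -T "current.tmp.$$" current     # atomic flip (GNU mv; -T = no-target-directory)
}

# demo: two release directories, flip traffic from v1 to v2
demo=$(mktemp -d)
cd "$demo"
mkdir v1 v2
ln -s v1 current
deploy v2
echo "current now points at: $(readlink current)"   # prints: current now points at: v2
```

Rolling back is the same flip in the other direction - which is exactly why this beats an in-place overwrite.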

Also, you haven't understood CI until you have a whole set of supporting techniques in place.
CI that works on a button press is missing the point: CI should be continuous, not scheduled and manual.

  • If you are delivering dozens of new features with each build, your CI has a very slim chance of locating the error source. Make sure your CI is set up so that each feature gets at least 1 build of its own.
  • If you don't have unit test coverage, CI isn't even worth being called such. Move towards high unit test coverage before bothering with CI.
  • If you don't have automated regression and smoke tests, CI is more likely to cause harm than help. Invest into test coverage and link the automated tests to the CI server. 
  • If you don't have a rapid feedback cycle into your development process, CI has no benefit. Make sure the developer who committed the failing build gets informed and acts within minutes.
  • If you aren't acting immediately on failed builds or errors in the deployment, that's not CI, it's a mess!
    STOP and fix if the tests fail. Don't proceed to code further based on harmful code!
  • If you are spending a full week on manual integration tests, you may have a CI tool, but you don't have CI!
    Create automated integration tests that can be run as part of the CI process. If you can't eliminate manual components, rethink your approach!
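To make the "stop and fix" rule concrete, here is a minimal sketch of such a pipeline gate - the stage commands are placeholders (`true`), not a real build:

```shell
# run_stage runs one pipeline step and stops the whole pipeline on failure,
# so a broken commit is reported within minutes instead of piling up.
run_stage() {
  stage="$1"; shift
  if "$@"; then
    echo "CI: stage '$stage' passed"
  else
    echo "CI: stage '$stage' FAILED - notify the committer and stop"
    exit 1
  fi
}

run_stage "build"       true   # stand-in for the real build command
run_stage "unit-tests"  true   # stand-in for the unit test suite
run_stage "smoke-tests" true   # stand-in for automated smoke tests
echo "CI: all stages green - this commit may be deployed"
```

The point is the `exit 1`: nothing downstream runs on top of a red build, and the committer hears about it right away.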

CI isn't about having a tool - it's about having the processes and engineering practices that allow you to deliver rapidly and often!
Real CI comes with a mindset "Let's rather have one too many than one too few deployments".

Monday, May 26, 2014

Done undone

As a Product Owner, my key responsibility is ensuring that the customer is satisfied with the product.

As the SCRUM Team, it is our key responsibility to get the story "Done" in a way that the customer will also accept.

Recently, I had a bad surprise when running a new team.
We all work for the same company, but we usually don't work together in the same constellation.

So, we dug right in: at the beginning of the sprint, we defined the backlog.
As PO, I defined the stories and priorities. Then, my team did the Work Breakdown and defined the tasks required for each story.

During the Sprint Review, I couldn't accept a single story as "Done", despite the fact that the team assumed the story was done.

What had happened?
The tasks all got done, but nobody paid attention to the story itself! After all tasks were executed, the story was so complicated that even the developers had to ask each other how to use it - UX was terrible!
A customer was present in the Review and he simply asked "How do you expect me to do this?"

Sorry as I was for the team, I couldn't accept it as "Done", because I personally understood "Done" as "We are not going to touch this again. We can tear up the story card because everything is finished".

The failure?
I assumed that the team's Definition of Done was the same as mine, but the team had a DoD for themselves which considered a story "Done" if all tasks were completed - not when the results are usable by the customer!

Lesson Learned

Make sure that the Definition of Done is not subjective.
Take your time in the first sprint. Remove all subjectivity and unspoken expectation from the DoD.
Everybody must be in the same boat. The team, the PO and the Customer should all have the same understanding of the team's DoD.
Make certain that before the first story gets implemented, everyone knows and understands the team's DoD in the same way.

Wednesday, May 21, 2014

The worst possible Performance metric

Developer performance is not easy to measure.
Why is this? Because a developer's primary objective should be to find the overall simplest feasible solution to problems that are unsolved (or at least not yet implemented).
However, time and again, there are non-technical project managers who try to do it.

There is an infamous tale of a project where allegedly one million lines of code had to be written within one month - but the developers overperformed, producing one and a half million lines!
Wow, what a great result!*

Refactoring is a technique primarily focused on eliminating code complexity, therefore increasing readability, maintainability and improving overall design.

Story time:

One of these days, my team was challenged with automating a business process.
Occasionally, the customer would ask how we were doing. So, one glorious day, they asked very specifically, "How many lines of code did you write today?"
It was probably the worst possible day to ask this question.
The entire team had written maybe 10 additional lines of code - but deleted roughly 200!
So, at the end of the day, the "lines of code" metric was 190 in the negative!

It actually took a while to explain to our customer why they should still be paying for this ...

The refactored code eliminated a performance problem.
It also implemented 2 different user stories from our backlog.
And in all that, we increased the flexibility of the current code base way beyond the customer's need - with no extra effort!

Lesson Learned

Never, ever let anyone measure developer performance in "lines of code". It is not a success metric.
Don't even go for "tasks done" or "amount of user stories completed", these are all deceptive!

The only metric that should be applied to software development is "outcome".
And that one is incredibly tough to quantify.
In the end, all it means is: "How much better is the software fit for its intended purpose now?"

Tuesday, May 20, 2014

Mocked loophole: Failure to test unit integration!

We recently had a project where we had to experiment with Unit Testing in a Procedural Environment.
Being familiar only with tests in an object-oriented environment, we found it quite tough to figure out how to properly conduct unit tests.

For testing a function, we did what we usually do: mock every external function call.

So, our code effectively looked like this:

X() {
   if [ "$(Y "$1")" = "true" ]; then echo "Yes"; else echo "No"; fi
}

# X's unit tests: Y is mocked away by redefining it
Y() { echo "true"; }
assertEquals "X in good case" "Yes" "$(X 1)"
Y() { echo "false"; }
assertEquals "X in bad case" "No" "$(X 2)"

# Y's own unit tests (the real Y is defined elsewhere in the codebase)
assertEquals "Y with good result" "true" "$(Y 1)"
assertEquals "Y with bad result" "false" "$(Y 2)"

Extra credit to those who already sit back laughing, "You fools, this obviously had to go wrong!" ...

Guess what happened?

We had done some refactoring to Y in the meantime, and in the end, the unit tests for Y looked like this:

assertEquals "Y with good result" "Yes" "$(Y 1)"
assertEquals "Y with bad result" "No" "$(Y 2)"

Yes, we had changed "Y" from returning "true"/"false" to returning "yes" / "no"!
Of course, the refactoring and TDD made sure that Y was doing what it should be, and we simply assumed that regression tests would catch the error on X - guess what: they didn't!
Because we had always mocked the behaviour of Y in X, there was no such test "Does X do what it's supposed to do in real circumstances?"

Lesson Learned:
If the function works in context, it does what it's supposed to do - but if the function works in isolation, there is no guarantee that it works in context!

We changed the way of writing unit tests as follows: "Rather than use the most isolated scope to test a function, prefer to use the most global scope possible without relying on external resources".
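To illustrate the rule with the example above (Y's post-refactoring behaviour is reconstructed from the story): a test that exercises X through the real Y, instead of a mock, would have exposed the broken contract immediately:

```shell
# The real collaborator after the refactoring: Y now answers "Yes"/"No".
Y() {
  if [ "$1" -eq 1 ]; then echo "Yes"; else echo "No"; fi
}

# X still checks Y's OLD contract ("true") - the bug the mocks concealed.
X() {
  if [ "$(Y "$1")" = "true" ]; then echo "Yes"; else echo "No"; fi
}

# Sociable test: no mock for Y, so X meets the real Y.
# The mocked suite claimed X(1) is "Yes"; in context it is "No" - loud failure.
echo "X(1) with the real Y -> $(X 1)"   # prints: X(1) with the real Y -> No
```

Mocks still have their place for slow or external resources; the point is not to mock away collaborators that are cheap to run for real.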

Saturday, May 10, 2014

Work Done doesn't matter

It was a small company which just decided to transition towards SCRUM.

The team I coached was highly competent, they actually did a good job. I was serving them as a SCRUM Master and I actively engaged in daily business, conducting story-related admin activity as well.

SCRUM was really good for the team: Impediments surfaced left and right, we started resolving year-old stuff and really tuned up the velocity quickly.
In the first Review, I took the liberty of inviting all the relevant stakeholders.

Here is how the Review went:
Everyone just gathered in front of the SCRUM board and reported which tasks were "Done".
Nobody sat at a computer, and within 5 minutes, the first attendees were already fiddling with their watches and phones.

The team was not capable of producing "visible results", and even if the results were visible, they were only talking about them rather than demonstrating them.

My lesson:
A team that is still focused on Tasks and Work may be applying SCRUM, but it is focused on the wrong deliverable.
In traditional management, reporting the "Work Done" is very important. In SCRUM, we neither report how hard or how much we worked, nor do we deliver "work".

Our result is working stuff. For developers, that's the new software product. For server admins, it may be a piece of hardware where the developers can now install the product. For a marketing team, it may be the new product's homepage.
But for nobody, it's a bunch of completed task cards.

Friday, May 9, 2014

Versioning Failure

It was many years ago that I was first introduced to the marvels of a Version Control System, while working as a developer for an Enterprise Support Platform.
My customer was using PVCS for release management - I had never heard of automated versioning before.

I love automation, and I loved the things the PVCS could do for me.
However, I quickly grew weary of having to do the following after pretty much every couple of lines of code:

  1. Add modified files to the repo
  2. Do a diff to verify the changes
  3. Commit the changes
  4. Publish to baseline

Whenever I run a manual activity multiple times, my first thought is "Automate this". So this is what I did.
It was very easy to automate. Always the same commands, always in the same sequence - so I just scripted it!
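For illustration only - `vcs` here is a stub standing in for the actual PVCS commands, which I won't try to reconstruct - the script boiled down to something like this:

```shell
vcs() { echo "vcs $*"; }           # stub in place of the real PVCS CLI

# The whole four-step routine collapsed into one blind command.
# Note what is missing: no step waits for a human to actually read the diff.
push_all() {
  vcs add .                        # 1. add modified files, whatever state they are in
  vcs diff                         # 2. the "verify" step: output scrolls by unread
  vcs commit -m "${1:-update}"     # 3. commit the changes, no questions asked
  vcs publish baseline             # 4. publish straight to the shared baseline
}
```

The diff step only protects you when someone stops to read it - automated, it is pure decoration.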

Then came this glorious Friday. It was my last day of work before vacation.
Everyone else had already left the office.
I wanted to complete this one last task. It was trivial, one single line of code.
So I implemented the change, did my tests, ran my script and took off.

On Monday morning, I got a phone call "What did you do to our baseline? EVERYTHING is gone!"

It took a while to figure out that my "push script" had run rampant and committed every single item in the project as a zero-byte file.
While I was on vacation, it took the rest of the team half a day's work to clean out the entire mess I had unwittingly created.

Probably I took this one harder than the team, but here's
My lesson:
I now understand why version control software does not provide "one-step push" for changes. No automation can understand what the intention of your change was.

Not everything that can be automated should be automated.
Keeping the "brain-in-the-loop" is often the only way to eliminate accidents.

And this is why I no longer believe in "full automation".

Thursday, May 8, 2014

Are you smart or stupid?

Everyone wants to appear "smart", nobody wants to appear "stupid".
In this blog, the author describes why we should actually dare to be stupid.

Creativity sometimes requires doing things which have a high risk of failure.
When you work in a domain which is planned to the very last detail, where every process is stiffly defined and formalized, there is no more room for creativity.

But only by breaking out of known habits do you have the chance to make marvelous discoveries.

Did you know that Penicillin was only discovered because Alexander Fleming was so stupid as to forget closing his Petri dish full of bacteria samples?

An accident saved millions of lives!

Proud to Fail

Whether you are working in an Agile environment or not, there are small and big failures every day.
As long as we are human, we are not omniscient - and therefore, we will fail.

I remember a story I read somewhere, many years ago:
A company had a young manager who was in charge of a $2m project. He made a severe mistake in the planning, and the project failed. Others asked the CEO "Why don't you fire him?" - to which he replied "Why should I fire someone into whose education I have just invested $2m?"

As Agile practitioners, we should live in an environment devoid of coverups and blame-games. We should have the courage to be open and honest about our shortcomings, without fear of reprisal.

The difference between a wise person and a fool is not that the wise person never failed.
Wisdom means learning from failure - and improving.
Even better, we can use our own failure in order to help others improve!

I am glad to have worked in such an environment for years.
In this blog, I want to share stories about failure and the lessons I have learned.