
Wednesday, December 17, 2014

Refactoring running rampant

The purpose of refactoring is to increase the quality of code without changing its functionality. Unfortunately, we too often forget that "quality" is in the eye of the beholder.

Let me give you one example. These are pseudo-Java snippets, but you can see where this leads.


Originally, there were 2 classes which contained code like this:

class A
{
    int X;
    int Y;

    public int compute()
    {
        return this.X + this.Y;
    }
}

class B
{
    int E;
    int F;

    public int compute()
    {
        return this.E - this.F;
    }
}

Well, those are highly similar methods, so "obviously", this calls for refactoring.

Let's start refactoring to a common level:



class A
{
    int X;
    int Y;

    private int _compute()
    {
        return this.X + this.Y;
    }

    public int compute()
    {
        return this._compute();
    }
}

class B
{
    int E;
    int F;

    private int _compute()
    {
        return this.E - this.F;
    }

    public int compute()
    {
        return this._compute();
    }
}


We have some duplicate code now, so we want to eliminate it by moving the method into a new class:


abstract class operatorClass
{
    public int compute()
    {
        return _compute();
    }

    protected abstract int _compute();
}

class A extends operatorClass
{
    int X;
    int Y;

    protected int _compute()
    {
        return this.X + this.Y;
    }
}

class B extends operatorClass
{
    int E;
    int F;

    protected int _compute()
    {
        return this.E - this.F;
    }
}

At least, now there is a common level, but there is still highly similar code. Let's do something about it.



class A extends operatorClass
{
    A()
    {
        setOperator("+");
    }
}

class B extends operatorClass
{
    B()
    {
        setOperator("-");
    }
}

abstract class operatorClass
{
    int X;
    int Y;
    String operator;

    public int compute()
    {
        switch (operator)
        {
            case "-": return X - Y;
            case "+": return X + Y;
            default: throw new InvalidOperatorException(operator);
        }
    }

    protected void setOperator(String operator)
    {
        this.operator = operator;
    }
}


Yay! We eliminated the "nearly duplicate" method in 2 different classes and reduced the amount of code in both of them.

Unfortunately, there is a small fly in the ointment here:

  • Case statements are poor code. Why? They are more difficult to unit-test - think of path coverage. They are also a violation of the Single Responsibility Principle.
  • The "operatorClass" is now doing something it shouldn't: making a decision that should be made at a more abstract level, i.e. at the level where the object is created - a violation of the Dependency Inversion Principle!
  • We actually introduced the possibility of error into the operatorClass' "compute" method. Calling "compute" with an invalid operator was not even possible before!
  • And that, of course, means we need additional exception handling. We didn't even go into the new "InvalidOperatorException" class that we must create.
  • Each time we implement a new class with a new "compute" behavior, we must modify the "operatorClass", so we just violated the Open/Closed Principle (see the sketch right after this list)!
  • Not to mention that the application's performance has just deteriorated: it will be slower because of the "case" statement that must be evaluated, and it will consume more memory because an additional variable must be initialized.
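
To make the Open/Closed violation concrete, here is a minimal sketch. Class C and the "*" operator are hypothetical additions of mine, not part of the original example: with the switch-based design, every new operation forces an edit to the existing, already-tested "operatorClass", while the inheritance-based version from the previous step would only need one new subclass.

// Hypothetical new requirement: an operation that multiplies.
//
// With the switch-based design, we must reopen operatorClass.compute()
// itself and add a line such as
//
//    case "*": return X * Y;
//
// inside existing, tested code. With the inheritance-based design from
// the previous step, nothing existing changes - we only add a subclass:

class C extends operatorClass
{
    int X;
    int Y;

    protected int _compute()
    {
        return this.X * this.Y;
    }
}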
Summary:
While the code looks cleaner when you only look at the level of A and B, we merely shoved the dirt under the rug - we didn't help at all!

Lesson learned

Refactoring is not an end in itself.
Not every refactoring is actually a positive change.
When refactoring, you must set a clear purpose for what you want to accomplish - and why. Even when you do not break (unit) tests and the code becomes shorter, you may be doing something tremendously harmful.
I strongly advise doing Code Katas occasionally to get a grip on how to refactor beneficially.

Wednesday, December 10, 2014

Software Development Lifecycle - Testing

The Software Development Lifecycle: Testing

What you see above is the "Test Cycle" as I learned and practiced it in Waterfall environments for years.
Now, I don't even want to go into how, in theory, you could add significantly more test phases here - nor how, in practice, smoke, integration and regression tests are usually neglected.

The simple fact that developers hand software they consider to "work as designed" over to test is ingrained into the minds of Waterfall software project specialists.

As I mentioned in another post about test coverage, defects occur even when developers consider that their software is defect free.

Let us consider for a minute that each test costs time.
While a piece of code is in test, developers continue to produce more working software. Yeah, I know the Waterfall theory says that once development is finished, the product is handed off to test. But seriously - has this ever been reality? Do developers really sit there twiddling their thumbs until testers report defects? Do companies really pay developers to sit idle while testers are busy?
If you are seriously working in such an environment, I would have a great optimization suggestion for your management.


So, developers build on code they consider to be working while test time passes. If a defect is then found in a component they are building on - one where, given the defect, the new component did "work as designed" - the fix may cause rework not only in the defective component, but also in the current work in progress: fix efforts may already be twice as high, or even higher, than if the defect had been discovered before the developer started a new topic.

The problem is intensified when developers introduce defects not into new components, but into components that have already been accepted in the past. Even ignoring the fact that regression testing is the first activity to be descoped when schedules are tight, it is always the last thing testers do. This approach is practically designed to maximize the amount of time that a defect can stay in the software - and therefore maximizes the amount of damage a defect can do!

Is this smart? No!

You will never deliver cost-effective, high-quality products unless you unlearn this model!
Forget everything you learned about Design-Develop-Test. It's the wrong philosophy. You can't improve it. It doesn't even get better when you increase the amount of time for regression tests or move regression testing in front of functional testing.

The Solution

A paradigm shift is needed.
Here is a non-exhaustive list of changes you must make, preferably in this order:

  1. Introduce mechanisms that let your developers know whether they introduced defects before they pick up a new task.
  2. Don't even let developers start on a new topic until there is confidence that their last piece of work didn't introduce defects.
  3. Automate testing. Enable developers to run any test they need or want to run at any given point in time, as often as they need to. Don't make them wait days - or weeks - for test results!
  4. Eliminate the "tester role" (but not the testers). In Scrum, we speak of a "Developer" even when we mean "the test expert", because everyone is accountable for high quality. Have programmers work together with test experts before actually starting to write code.
  5. Create test awareness. Make sure developers know exactly which tests must pass before they create code.
  6. Introduce test-driven development (TDD). Give developers access to the tests before they actually start coding (see the sketch after this list).
  7. Change your process: create quality awareness and accountability. We utilize "pre-commit hooks": developers cannot even commit defective code unless they specifically override the check, and even then, the defect will be tracked on every single commit until it is resolved.
  8. Implement Continuous Integration. Let developers know immediately if their component damaged the product build. A wait time of 10 minutes is already tough; days simply aren't acceptable!
  9. Implement Continuous Delivery: developers should never work in their own environment or on an individual branch for many days without merging back to master. They should work in small increments that can be delivered fast. This minimizes the risk that days of work must be scrapped because of a wrong premise.
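
As a minimal illustration of points 5 and 6, here is a sketch in Java, assuming JUnit 4; the Calculator class and its test are hypothetical examples of mine, not from any specific project. The test is written first, fails, and then drives the smallest implementation that makes it pass:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorTest
{
    // Step 1: this test is written BEFORE Calculator exists.
    // Initially it doesn't even compile - that failing state is the
    // starting point of the TDD cycle.
    @Test
    public void addsTwoNumbers()
    {
        Calculator calculator = new Calculator();
        assertEquals(5, calculator.add(2, 3));
    }
}

class Calculator
{
    // Step 2: just enough production code to make the test pass.
    int add(int a, int b)
    {
        return a + b;
    }
}

Combined with pre-commit hooks and Continuous Integration (points 7 and 8), such tests tell a developer within minutes whether a change introduced a defect.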


Your future process should be fully integrated, eliminating time gaps between design, development and testing. Testing should be an activity that starts before development, goes on in parallel with development, and is completed by the time the programmer moves a story, feature or task to "Done".

If you still need a "test phase", remember that every single day a defect remains in the software increases the cost of poor quality. Think different!


Test Coverage



What you see above is a classic test report as you would expect it in a Waterfall project. Typically, test managers produce such statistics at frequent intervals to report the progress of testing.

As with any diagram, this one isn't worth much without explanation.

The blue bar is the number of test steps conducted per day; the red bar is the number of defects the software contained on each day.

Now, what you see is roughly 15,000 test steps being run over the course of two weeks, and roughly 800 defects discovered.

In large-scale Waterfall projects where I worked previously, this would have been a boatload of effort for a team of maybe 20 testers.

A job well done by the test manager - you can be sure QA would be praised.



What is this metric - really? 

These are the results of automated integration and acceptance testing for one of our products. What you see here are only my personal tests - and don't even get me started on the fact that I'm not a full-time developer. All of the tests displayed here, accumulated, ran in less than one hour, including defect reporting.
None of the defects discovered made it past the evening. All were fixed within the same business day!

The consequence of such activity?

We use Continuous Delivery, so we are able to deliver working software on a daily basis - and we haven't had a single critical fault in the system on a live installation.

Years ago, when I was working exclusively in Waterfall, I couldn't have believed that a single person could execute as many as 4,000 test steps per day. And I wouldn't have believed the following either - which is probably the more critical learning here:

  • A single change to a single component can wreak havoc in use cases that merely rely on interfaces and have no direct connection to the implementation of the changed component!
  • A programmer may consider their work "Done", but without proper automated test coverage, these beasts are still lurking deep in the dark!
  • I wouldn't have run the tests if I had known that my code was buggy! I typically stay in development until I am confident that my code is defect-free. I mean, hey - I've got unit tests to make sure I didn't do anything bad! (And that's already more than we often had in Waterfall projects.)
  • In a Waterfall, I would have handed the topic off to test and waited for any potential bug reports.
  • And chances are, testers wouldn't have done a regression test and wouldn't have discovered the issue before the change went live.

It's not that I became a crappy developer by doing Agile. On the contrary: all of my Waterfall experience is still there, and I make fewer mistakes than ever. But mistakes can still happen. The difference is that they no longer make it into production.

Lesson learned

If you're a developer, you are well advised to automate tests for your components so that you know when you accidentally broke something. However, don't stop there. Unless you have integration test suites, you may not know when your work has a detrimental impact on the system's overall functionality. Chances are that if you only automate functionality and not use cases, your software still works - but your users can't. The sketch below makes that concrete.
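
Here, PriceService and CheckoutFlow are hypothetical names of mine, not from any real product: after a component-level change, the component's own unit tests still pass, and only a use-case-level test catches that a consumer of its interface now misbehaves.

// A component that was changed to return prices in cents instead of euros.
// Its own unit tests were updated along with the change, so they all pass.
class PriceService
{
    int getPrice(String sku)
    {
        return 1999; // cents - earlier versions returned euros
    }
}

// A use case that relies only on the interface of PriceService:
class CheckoutFlow
{
    private final PriceService prices = new PriceService();

    String displayTotal(String sku)
    {
        // Still assumes euros - no unit test of PriceService can see this.
        return prices.getPrice(sku) + " EUR";
    }
}

public class CheckoutUseCaseTest
{
    public static void main(String[] args)
    {
        String total = new CheckoutFlow().displayTotal("ABC-1");
        // The use-case-level check: the customer must see "19.99 EUR".
        if (!"19.99 EUR".equals(total))
        {
            throw new AssertionError("Use case broken, displayed: " + total);
        }
        System.out.println("Checkout use case OK");
    }
}

The component-level suite stays green, but the use-case test fails immediately - exactly the kind of beast that otherwise lurks until a user finds it.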

Hence, agilists say "Automate everything" (well, everything that makes sense) - it does pay off!


Tuesday, December 9, 2014

We're Agile now, we don't do testing any more!

Scrum only has Product Owner, Scrum Master and Developer.
Testing is not part of the Scrum process, so we can eliminate it - and save money by sacking the testers.

I've had the joy of working with a company that seriously thought about taking this road.
It was a shock for me - not because I come from a testing background, but because it's wrong on so many levels:

Agile means: Deliver valuable software

Buggy software isn't valuable. We don't expect to get praised for a "job well done" if the result isn't both usable and useful. How do you know that it's usable without testing? How do you know it's useful without testing?

Agile means: Deliver value often

Working on bugfixes drains resources. The only way to prevent bugfixes is to not ship bugs to the customer in the first place. How do you know that you don't have bugs without testing?

Agile means: Deliver value fast

You need a profound understanding of what your components do - and why. Without testing, you will be spending more and more time in understanding the impact of change as the project progresses. Specifically, without systematic testing, you may completely lose control!

Agile means: Eliminate waste

A commonly trumpeted claim: testing is waste, because if there are no defects, tests don't add any value.
Well. That may sound true if you have a superficial understanding of (software) engineering.
Ask yourself: Would you hire an architect who didn't run the calculations to verify that your house won't collapse over your head? No. Would you ride in a car whose designers didn't validate it against traffic safety regulations? No.
So why do you want to build a piece of software without those checks? 
Because bankruptcy isn't as bad as death? Great! Tell me your company name and I will make a fortune by short-selling!

Agile means: Working with feedback

Each time a piece of code is finished, it should be exposed to customer feedback. Unless you want to look like a fool in front of your paying customers, you'd better have a strategy in place to make sure that they like what they see. You must verify the correctness of your solution and validate its applicability to their problem before confronting them with results. And the formal process for doing both is called "testing".


Conclusion
Not testing is planning for commercial suicide.
It holds true for Agile even more than for traditional projects, because in a Waterfall, you might get the customer to sign off on some crappy product just for political reasons. In Agile, your customer will know within a couple of weeks that you don't know your trade.

Monday, December 8, 2014

The most important role in Scrum

It's a philosophical question: Which role is most important in Scrum?

When I went into Jeff Sutherland's Scrum Master class, he stated that "the Scrum Master is the most important person in the team". In the Product Owner training, I heard "the Product Owner is..." - and during Scrum Developer training, I heard the same about Developers.
Now, what is the real deal?

Let's look at this slowly.

The Product Owner

Imagine you have no Product Owner.
Who takes care of the backlog, who grooms it? Who makes the calls when stakeholders quarrel about the direction of the product? Who communicates the product vision to developers and stakeholders?
Ok, let's make it short. If there is no product owner, there is no product, there is no project. There is no need for a team - so obviously, the PO is the most important person.

Finished.
Well, that was quick.
Oh, wait.

The Scrum Master

Imagine you have no Scrum Master.
Who arranges your ceremonies, who takes care of impediments? Who makes sure that management or other stakeholders don't violate the team's self-organization? Who takes care that the Working Agreements are adhered to?
Ok, let's make it short. If there is no Scrum Master, teams will most likely fall into disarray. Maybe not because of themselves, but because of the world around them. A team in chaos will not deliver.
So, obviously, the SM is the most important person.

Now, we've got a conflict already. 
But we're not finished yet.

The Developers

Ok, in traditional organizations, we know that managers think the worker drones are easily replaceable, but that everything depends on their own genius.
But let's get real: you've got a product vision, you've got a development process. But who does the thinking, and where does the code come from? Who makes the vision real?
Guess what - if there are no developers, the best PO in the world is useless!
So, developers are the most important people!


Everyone is important!

Many agilists use the pig-and-chicken metaphor, stating that an Agile team should consist only of "pigs". I don't like this analogy because, first, I don't consider my coworkers to be pigs, for obvious reasons.
Second, if you're in Saudi Arabia, chances are that nobody wants bacon, rendering the pig completely worthless.

Simply said, if you're on an Agile team, everything depends on your contribution. You are essential to success. If you're not essential, you're in the wrong team!





The Bottleneck issue

Are you terrified of going on vacation? Can you already guess all the disasters that will happen when you're out of office for more than 2 days?
If yes, you're not alone, but there is something you should do about it.

Oftentimes, new Agile teams have the "problem" that the intensity of work is so high that people are somehow stuck in their roles.

For instance, the Scrum Master may not even get to coach the team, because impediments and ceremonies already fill their schedule: the installation of the Continuous Improvement Process may become secondary.
Or the Product Owner spends all their time answering team members' questions to ensure the product is terrific: good, but the grooming of future backlog items and stakeholder management may suffer.
The worst case is developers taking on so many stories that they have no time left to hone their agility: the delivered product may be great, but we didn't implement any ways to do the same thing faster and easier.

The Solution

Agile methodologies try to eliminate "Single Points of Failure", i.e. having success or failure hinge on one single person.

  • Developers should ideally pair up to make sure that there is no capability required in the project which only one person has. Why? Isn't it easier to replace you if someone else possesses your skills? No! On the contrary: if you can chip in wherever something needs to be done, your value to the company increases!
  • The Product Owner "owns the product" and needs to call the shots. However, they should not feel obliged to create a structure where every minute decision involves them. On the contrary, the PO should communicate the vision so clearly that the team can independently decide on the best way to advance the product.
  • The Scrum Master "owns the process" and is responsible for ceremonies and for managing the impediment backlog. A good Scrum Master will not spend their days running after individual impediments and arranging meetings. Ideally, they will coach and empower the team to do this by themselves.

For both the Scrum Master and the Product Owner, I would refer to an important proverb about aid: "The most important job of a helper is to make themselves superfluous." In other words: if you enable and empower others to fill your role, you did a good job; otherwise, you missed the mark.

Sustainability is one of the Agile Principles, and you will not have sustainable development unless everyone actually strives to enable others to do what they are doing.