Tuesday, February 23, 2016

Quantify problems before solving them

I was recently asked: "How many automated tests do you think we need?" An underlying premise was that I would provide an expert opinion on whether the team had the right number of tests.
I could only reply: "I don't know. If you have a problem with quality, you might need more. If you have a problem with the maintenance of your tests, maybe you need fewer. However, if there is no problem, you're probably doing the right thing."

This can be abstracted away from the specific issue of test automation.

If there is no problem, there is nothing to fix. But how do you know whether you have a problem in the first place?

Here is a simple suggestion: collect data, then decide based on facts.

Here is an example of how you can track this: each time your team encounters an issue and spends time on it, or loses time because of it, make a mark. Depending on which activities you track and how you track them, the result might look something like the sketch below.
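As a minimal sketch, here is one way a team could keep such a tally in code; the categories and counts are hypothetical, and a whiteboard or a sheet of paper works just as well:

    # Minimal tally-tracking sketch (hypothetical categories and data).
    from collections import Counter

    marks = Counter()

    def mark(category):
        """Record one tally mark whenever the team loses time to an issue."""
        marks[category] += 1

    # Marks collected during one (made-up) iteration:
    mark("flaky test failure")
    mark("flaky test failure")
    mark("waiting for test environment")
    mark("bug found by automated tests")
    mark("bug found in production")

    # Print a simple tally sheet for the retrospective:
    for category, count in marks.most_common():
        print(f"{category:35s} {'|' * count}  ({count})")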

In your retrospectives, you can simply look at the numbers and ask the following questions:
  • Does this make sense?
  • If not: What should we do about this?
  • Will we gain more insight if we get more data in the next iteration?

Summary

"Data Driven Decision Making" is not rocket science. Collecting data can be simple and easy - and require very little preparation or technology. While you may want to refine your collection process depending on your understanding of the problem, the simplest possible solution will often suffice to rule out wasting time on dealing with corner cases or imaginary issues.

