
Tuesday, March 29, 2016

What makes a good test?



In agile development, the team is responsible not only for development, but also for testing. However, many developers struggle with the notion that not just any test is a good test.
Here is a fairly comprehensive list of criteria which good tests meet:


  1. Valid: It measures what it is supposed to measure. It tests what it ought to test.
  2. Clear: The test's instructions and its objective should be clear.
  3. Comprehensible: The language of the test should be comprehensible to the reader.
  4. Relevant: It appropriately and accurately covers a significant portion of the Object Under Test. 
  5. Focused: It should do one, and only one thing (“Single Purpose Principle”).
  6. Detailed: Failure conditions should be sufficiently specific to discover the source of the defect. 
  7. Practical: It is easy to conduct and requires a low amount of preparation.
  8. Fast: It should take only an insignificant amount of time.
  9. Efficient: It only consumes a reasonable amount of resources.
  10. Unique: For one specific attribute of the software, there is only one test.
  11. Repeatable: If it is repeated, the result will not differ.
  12. Reproducible: It should yield the same result, regardless of where it is executed.
  13. Independent: It should not rely on the results of other tests.
  14. Economical: It should require lower creation and maintenance effort than the value of the test (the risk it covers).


Violating these criteria may (or may not) be a problem. Either way, they provide a good guideline when designing tests, regardless of whether these are acceptance, system or component (unit) tests.
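To make a few of these criteria concrete, here is a minimal sketch in shell. The function and test names are made up for illustration: each test is focused on exactly one behaviour, and the tests share no state, so they are independent and repeatable.

```shell
#!/bin/sh
# Hypothetical function under test: turns a title into a URL slug.
slugify() { echo "$1" | tr 'A-Z ' 'a-z-'; }

# Focused: each test checks one, and only one, behaviour.
test_lowercases() {
  [ "$(slugify HELLO)" = "hello" ] || { echo "FAIL: lowercasing"; return 1; }
}
test_replaces_spaces() {
  [ "$(slugify 'a b')" = "a-b" ] || { echo "FAIL: space handling"; return 1; }
}

# Independent and repeatable: no shared state, any order, same result.
test_replaces_spaces && test_lowercases && echo "PASS"
```

Each test is also detailed: its failure message points straight at the behaviour that broke.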

Working with the list

When your test process has problems, check a sample of your tests against this list and try to discover hotspots for improvement.

When creating your first couple of tests, pick some criteria from the list which seem the most difficult to achieve. Then design your tests to actually meet these criteria.

Sunday, June 1, 2014

Just get it done quickly - whatever "done" means ...

Imagine the following:

You go to a restaurant, order chicken wings with fries ... and then wait.
When you finally ask the waiter, he says "Oh, your meal is ready. The cook already went home".

Bewildered, you go into the kitchen and find out that yes, indeed, your order has been processed. Somewhat. There are half-baked chicken wings, without any spices, in the oven - and some mashed potatoes in the pot.

Which brings me to the topic.

I was working for a client who had one developer working offshore. Because he wasn't part of the core team, he only had specific tasks assigned that could be completed stand-alone.

So, this one glorious Friday, he dropped an e-mail - "Done. I'm off for 2 weeks of vacation" - and immediately boarded an airplane to a different continent.
He had an assignment with customer impact and a clear deadline.
To our dismay, we discovered that there was no code commit. Maybe his mind was already on vacation, but still: we had an issue.
So, we called him as soon as his plane landed to discover he had simply forgotten to commit his code to the central repository.
He had to call a friend to come over to his house who, with detailed phone instructions, managed to commit the code after many hours.
And that's when things started to get funny. We couldn't even build the software successfully: He had taken the liberty of adjusting the core engine to suit his implementation of the solution!

Needless to say, there went the deadline.

Lessons Learned

There were so many things that went wrong here, I don't even know where to start.
And don't even get me started about hiring a single developer offshore to assist an onsite team: that was a business decision made by someone else.

First things first, it is a mindset thing: "Commit early, commit often". It should be habitual for every developer to commit more often than they drink coffee. We sometimes commit as often as 100 times per day in a small team of 3. If you see that someone isn't even committing daily, seriously - you have a process problem and risk losing work!
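As a sketch of the habit (in a throwaway repository; file names and commit messages are made up):

```shell
#!/bin/sh
# "Commit early, commit often", demonstrated in a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "skeleton" > feature.txt
git add feature.txt
git commit -qm "Add feature skeleton"        # first small, reviewable step

echo "wiring" >> feature.txt
git commit -qam "Wire feature into handler"  # next small step, minutes later

git log --oneline                            # two small commits instead of one big drop
```

Each step is small enough to review, and nothing sits only on one developer's machine for two weeks.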

We didn't have any automated test coverage, so he thought he was doing fine. Especially when working in distributed teams, it is essential to have good unit and regression coverage: it creates a safety net for developers. We didn't have that, so we essentially created the environment in which his mistake was possible!

Of course, offshore communication is difficult, but the problem here wasn't just that he forgot to commit. The problem was that, although we had some form of CI, he considered it perfectly normal to verify his implementation with nothing but a local build. Every team, but especially a distributed one, is well advised to have one single central integration system as the single point of truth for everyone on the team: If it doesn't integrate, it's not done. Regardless of how nicely it works on your own machine!


Tuesday, May 20, 2014

Mocked loophole: Failure to test unit integration!

We recently had a project where we had to experiment with unit testing in a procedural environment.
Being familiar only with testing in an object-oriented environment, we found it quite tough to figure out how to properly conduct unit tests.

For testing a function, we did what we usually do: mock every external function call.

So, our code effectively looked like this:

X() {
  if [ "$(Y "$1")" = "true" ]; then echo "Yes"; else echo "No"; fi
}

X.test (shunit2-style assertions; redefining Y is the mock):
Y() { echo "true"; }    # mock Y
assertEquals "X in good case" "Yes" "$(X 1)"
Y() { echo "false"; }   # mock Y
assertEquals "X in bad case" "No" "$(X 2)"

Y.test:
assertEquals "Y with good result" "true" "$(Y 1)"
assertEquals "Y with bad result" "false" "$(Y 2)"

Extra credit to those who already sit back laughing, "You fools, this obviously had to go wrong!" ...

Guess what happened?

We had done some refactoring to Y in the meantime, and in the end, the unit tests for Y looked like this:

Y.test:
assertEquals "Y with good result" "Yes" "$(Y 1)"
assertEquals "Y with bad result" "No" "$(Y 2)"

Yes, we had changed Y from returning "true"/"false" to returning "Yes"/"No"!
Of course, the refactoring and TDD made sure that Y itself was doing what it should, and we simply assumed that the regression tests would catch the error in X - guess what: they didn't!
Because we had always mocked the behaviour of Y in X's tests, there was no test asking "Does X do what it's supposed to do under real circumstances?"

Lesson Learned:
If the function works in context, it does what it's supposed to do - but if the function works in isolation, there is no guarantee that it works in context!

We changed the way of writing unit tests as follows: "Rather than use the most isolated scope to test a function, prefer to use the most global scope possible without relying on external resources".
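A minimal sketch of the difference, using the post's X and Y with the refactored Y returning "Yes"/"No": once the test wires in the real Y instead of a mock, the contract drift surfaces immediately.

```shell
#!/bin/sh
# The refactored Y now answers "Yes"/"No" instead of "true"/"false".
Y() { if [ "$1" -eq 1 ]; then echo "Yes"; else echo "No"; fi; }

# X still expects the old contract and compares against "true".
X() { if [ "$(Y "$1")" = "true" ]; then echo "Yes"; else echo "No"; fi; }

# Integration-style unit test: real Y, no mock.
if [ "$(X 1)" = "Yes" ]; then
  echo "PASS"
else
  echo "FAIL: Y's contract changed, X is broken"
fi
```

The mocked suite above would have stayed green here; this test fails, which is exactly the point - X only works in isolation, not in context.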