The Software Development Lifecycle: Testing
What you see above is the "Test Cycle" as I learned and practiced it in Waterfall environments for years.
Now, I don't even want to go into how, in theory, you could add significantly more test phases here. Nor into how, in practice, smoke, integration, and regression tests are usually neglected.
The simple fact that developers hand over software they consider "works as designed" to test is ingrained in the minds of Waterfall software project specialists.
As I mentioned in another post about test coverage, defects occur even when developers consider their software defect-free.
Let us consider for a minute that each test costs time.
While a piece of code is in test, developers continue to produce more working software. Yeah, I know the Waterfall theory says that once development is finished, the product is handed off to test. But seriously - has this ever been reality? Do developers really sit there twiddling their thumbs until testers report defects? Do companies really pay developers to sit idle while testers are busy?
If you are seriously working in such an environment, I have a great optimization suggestion for your management.
So, developers build on code they consider to be working while test time passes. If a defect is then found in a component they are building on - and, given the defect, the new component did "work as designed" - the defect fix may cause rework not only in the defective component, but also in the current work in progress: fix efforts may already be twice as high, or even higher, than if the defect had been discovered before the developer started a new topic.
The problem is intensified when developers introduce defects not into new components, but into components that have already been accepted in the past. Set aside the fact that, when schedules are tight, regression testing is oftentimes the first activity to be descoped - even when it survives, it's always the last thing testers do. This approach is practically designed to maximize the amount of time a defect can stay in the software - and therefore maximizes the amount of damage a defect can do!
Is this smart? No!
You will never deliver cost effective high quality products unless you un-learn this model!
Forget everything you learned about Design-Develop-Test. It's the wrong philosophy. You can't improve it. It doesn't even get better when you increase the amount of time for regression tests or put regression testing in front of functional testing.
The Solution

A paradigm shift is needed.
Here is a non-exhaustive list of changes you must make, preferably in this order:
- Introduce mechanisms that let your developers know whether they introduced defects before they pick up a new task.
- Don't even let developers start on a new topic until there is confidence that their last piece of work didn't introduce defects.
- Automate testing. Enable developers to run any test they need or want to run at any given point in time, as often as they need to. Don't make them wait days - or weeks - for test results!
- Eliminate the "tester role" (but not the testers). In Scrum, we speak of a "Developer" even when we mean "the test expert", because everyone is accountable for high quality. Have programmers collaborate with test experts before they actually start writing code.
- Create test awareness. Make sure developers know exactly which tests must pass before they create code.
- Introduce test driven development (TDD). Give developers access to the tests before they actually start coding.
- Change your process: Create quality awareness and accountability. We utilize "pre-commit hooks". Developers cannot even commit defective code unless they specifically override, but even then, the defect will be tracked on every single commit until resolved.
- Implement Continuous Integration. Let developers know immediately if their component damaged the product build. A wait time of 10 minutes is already tough; days simply aren't acceptable!
- Implement Continuous Delivery: developers should never work in their own environment or on an individual branch for many days without merging back to master. They should work in small increments that can be delivered fast. This minimizes the risk that days of work must be scrapped because of a wrong premise.
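To make the TDD point above concrete, here is a minimal sketch using Python's built-in `unittest` module. The `apply_discount` function and its rules are my own illustration, not something from a real project: in a TDD workflow, the test class would be written first and would fail ("red") until the function is implemented to satisfy it ("green").

```python
import unittest

# Hypothetical production code. Under TDD, this function is written
# only after the tests below exist and fail.
def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class TestApplyDiscount(unittest.TestCase):
    # These tests define the expected behavior before the code exists.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest` in the project directory; because the tests exist before the code, the developer knows exactly which tests must pass before writing a single line, as described above.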
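The pre-commit hook mentioned in the list could, in its simplest form, look like the sketch below. This is an assumption about the mechanism, not the original author's actual setup: a script installed as `.git/hooks/pre-commit` runs the test suite and aborts the commit on failure; `git commit --no-verify` is Git's built-in override.

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/pre-commit sketch: run the test suite and
# block the commit if any test fails.
import subprocess
import sys

def run_checks(command):
    """Run the given check command; return True if the commit may proceed."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        print("Commit blocked: checks failed.")
        print(result.stderr)
        return False
    return True

# In the installed hook, the exit status decides the commit's fate,
# e.g.:
#   sys.exit(0 if run_checks([sys.executable, "-m", "unittest",
#                             "discover", "-q"]) else 1)
# A non-zero exit status makes git abort the commit.
```

Note that this only blocks the commit locally; tracking overridden defects until resolution, as described above, would need server-side tooling on top.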
Your future process should be fully integrated, eliminating time gaps between design, development and testing. Testing should be an activity that starts before development, runs in parallel with development, and is completed by the time the programmer moves a story, feature or task to "Done".
If you still need a "test phase", remember that every single day a defect stays in the software increases the cost of poor quality. Think different!