
Friday, May 29, 2020

Why there's no traditional test strategy in agile organizations

The test strategy is a fundamental pillar of quality assurance. It would seem plausible that quality is independent of the development approach, and hence that the same test strategy can be used regardless of whether we are Agile or not.

Nothing could be further from the truth. Indeed, a traditional approach to quality assurance is entirely incompatible with an agile approach - to the point where it becomes a problem in terms of performance and quality! Hence, if you have a traditional test management background and enter an agile environment, there are a few things you need to understand, lest your best intentions be met with massive resistance and pushback.


The goal of testing

Why do we test? This fundamentally changes between a traditional and an agile setting:
From accepting the past to sustaining the future
Traditional, stage-gated testing is centered around the idea that at some point, the work previously completed by developers enters a QA process, during which the conformity of the product to specified requirements is assured. This is past-oriented. It assumes that the product / release / version is "finished".

An agile tester works with a team delivering in small increments - in the case of Continuous Deployment, that could be hundreds of increments per day. The product is never "finished". There will always be more work in the future. The agile team is supposed to work in a way that, whatever the last delivery was, we can call it a day and not return tomorrow to a shambles caused by something we messed up.



The testing mission

Let's start with the shift in testing objective, because different goals warrant a different approach:
From "finding defects" to "preventing defects"
Traditional testing assumes that software has defects, and that developers make mistakes. In the worst circumstances, it assumes that unless checked upon, developers will not do what they are told.
Traditional QA serves a threefold purpose:

  • Verify that developers have done what they claim to have done
  • Catch the mistakes in the work of developers
  • Discover the defects in the software

In an agile setting, this is reframed into a much more positive outlook on both the people and their work. Our agile testing serves three purposes:

  • Consistently deliver high, preferably zero-defect, quality
  • Provide evidence that the product does the right thing
  • Prevent defects as early as possible


As a consequence, some core concepts of how we go about "Agile Testing" change:

Test activity

The major contribution of testing changes:
From mitigating project risk to enabling teams
Whereas a traditional Test Strategy contributes to the project objectives, mitigating the risk of project failure through poor quality, agile testers enable their teams to continuously and consistently work in an environment where quality-related risks are a non-issue.
You have to think of it like this: A nuclear power plant has to invest effort into preventing nuclear fallout. A watermill doesn't do that, because it wouldn't even make sense.

Likewise, an agile test strategy won't concern itself with defects. Hence, a lot of things you may have learned that are "must-have" as a test manager are just plain irrelevant:

Test Preparation

We move on to the first major area of a test strategy: preparation. Traditionally, we create a test case catalog by writing test cases to cover the content of the Specification documentation. Then, during the test phase, we execute the test cases to verify conformity to requirements. If a test case finds no defect, we label it as passed. Once a certain threshold of test cases has passed, we can give a "Go" from testing.

There's one fundamental problem here when working with agile teams: there is no specification document! Then what? To make a long story short, we still have a specification, and we still have test cases: The tests are the specification.
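As a minimal sketch of what that means in practice - assuming a hypothetical shipping rule ("orders of 50 or more ship for free"), with function name and numbers purely illustrative - the tests below are the only specification of the rule: their names read as the requirement, and the code exists to make them pass.

```python
# A minimal sketch of "the tests are the specification".
# The shipping rule and threshold are hypothetical, purely for illustration.

def shipping_cost(order_total: float) -> float:
    """Production code written to satisfy the tests below."""
    return 0.0 if order_total >= 50 else 4.95


def test_orders_of_50_or_more_ship_for_free():
    assert shipping_cost(50.00) == 0.0


def test_orders_below_50_pay_standard_shipping():
    assert shipping_cost(49.99) == 4.95
```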

A few things don't exist in an agile organization, though:

Test Case Catalog

The Test Case Catalog is built on the idea that there is something like a major delivery that needs to be tested successfully to meet the project objectives. That idea is incompatible with agile ways of working.
On a high level, we discriminate two types of tests: those that ensure quality, and those that help us understand what quality is.
All tests of the first category become part of our test suite - they are run on every build, they get run as soon as they get created, and they get created as soon as the feature starts being developed.

There is no test case catalog that has been created "upfront, for later execution".

Risk-Based Testing

Typically, a test case catalog contains a myriad of test cases that the team will not have time to conduct. Hence, Risk-Based testing helps to match capacity with effort, while reducing the overall quality risk. In an agile organization, things look different.
We don't develop things that don't need to pass tests. And we don't create tests that don't need to pass. Testing is part of development, passing tests is part of the process, and the tests are as much part of the product as the productive code itself.

Test Data Creation

Most traditional testers have at some point encountered the difficulties of acquiring the necessary data to conduct certain test cases - sometimes, it's not entirely clear what data is needed, what it should look like and (in the case of mass data) where to obtain it. When techniques like BDD with Specification by Example are in use, our test data is part of the product design.
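Here's a hedged sketch of Specification by Example using pytest's parametrization - the volume-discount rule and its thresholds are made up for this example. The example table is the test data; there is no separate data acquisition step.

```python
# Specification by Example, sketched with pytest.mark.parametrize.
# The discount rule and thresholds are hypothetical.
import pytest


def discount(quantity: int) -> float:
    """Hypothetical rule: 10% off from 10 items, 20% off from 100 items."""
    if quantity >= 100:
        return 0.20
    if quantity >= 10:
        return 0.10
    return 0.0


@pytest.mark.parametrize(
    "quantity, expected",
    [
        (1, 0.0),     # single item: no discount
        (9, 0.0),     # just below the first threshold
        (10, 0.10),   # first threshold reached
        (99, 0.10),   # just below the second threshold
        (100, 0.20),  # second threshold reached
    ],
)
def test_volume_discount_examples(quantity, expected):
    assert discount(quantity) == expected
```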

Test scenario setup

In traditional software testing, it would often take hours, sometimes days, to set up an intricate test scenario required to check only one thing in the software. And then pray that we didn't make a mistake - or all that effort was lost! If a scenario takes that long to set up, our architecture has an issue: tests in our pipeline should bring everything they need and run as quickly as possible - in seconds rather than hours! And if a scenario takes days to prepare, it'll be a maintenance nightmare, so we'd rather not have any of these to begin with.

Test scenarios move from a per-release basis to a per-code-change basis, which means that it doesn't even make sense to plan scenario setup strategically: it moves entirely to the work execution level.
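To make that concrete, here's a minimal sketch of a self-contained scenario setup - assuming a simple orders table and an in-memory SQLite database, both hypothetical. The fixture builds everything the test needs in milliseconds, identically on every code change, with nothing to prepare by hand.

```python
# A self-contained test scenario: the fixture creates an in-memory database,
# so setup takes milliseconds and is rebuilt identically on every run.
# The schema and data are hypothetical.
import sqlite3

import pytest


@pytest.fixture
def order_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    conn.execute("INSERT INTO orders (total) VALUES (19.99), (250.00)")
    conn.commit()
    yield conn
    conn.close()


def test_large_orders_are_flagged(order_db):
    (count,) = order_db.execute(
        "SELECT COUNT(*) FROM orders WHERE total > 100"
    ).fetchone()
    assert count == 1
```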


Defect Management

Traditional test managers feel appalled when an agile team tells them that they neither have, nor want, defect management. Given the typical defect rates observed in traditional Waterfall organizations, it's unthinkable to not systematize and institutionalize defect management.

Let me return to the nuclear plant example. Of course, it needs to have both processes and means to deal with toxic waste: You'd get second thoughts if there was no hazardous waste management. But what if there were barrels labelled as "Nuclear waste" in your local sushi diner? You'd bolt for the door - because such a thing simply doesn't belong there!
It's the same for defects. They don't belong in an agile organization. That's why we don't need defect management.

And with defect management, we lose the need for many other things that would be part of a good traditional test strategy:

Defect management process

In an agile team, dealing with non-conformance is easy: When a test turns red, the developer stops what they're doing, fixes the problem, and continues.
Under ideal circumstances, this takes seconds - if it takes minutes, it may already be an issue where they involve other people on the team. That's it.

Defect prioritization

Don't we all have fun with the arguments that ensue around whether a defect is Priority 1, 2 or 3? Meetings to align and agree on a priority model are pointless if there's a "stop the line" process where any defect immediately interrupts all other work until it is resolved.

Defect status model

Given that a known issue is either someone's top priority being worked on, or it's already fixed, we don't need much of a status model. That reduces organizational complexity by a massive amount.

Defect Tracking

There is nothing to track, by default. If there are defects to track, we have problems we shouldn't be having.

Defect management tool

The agile organization would prefer to resolve the root cause that would mandate the need for such a tool. We should not institute a tool based on the idea that quality problems are inevitable.

Defect status meetings

No defects, no defect meeting.

Defect reports

What would you expect from a report with no data?

Defect KPIs

Who hasn't seen the ping pong that ensues when a defect is shoved between developer and tester a dozen times, with the tester claiming "it's a defect" and the developer arguing it isn't? When you measure testers against rejected defects while measuring developers against the number of defects, you generate exactly this conflict. Without defect-related KPIs, there's no such conflict.


Test Management

It's an unfair assertion to say that there's no test management, because agile tests are well-managed.

Test Plans

What we don't want is assigning and scheduling test cases or test types to individual testers, irrespective of whether a feature will actually be delivered. Instead, every backlog item has all necessary tests related to it. It's clearly defined who runs them (the CI/CD pipeline), where (on the CI/CD stage) and when (on every build). Part of the refactoring process would be to move the tests from the backlog item into the test suite - so a default element of the test plan becomes the "full regression" of everything that was built before. Hence, a traditional test plan becomes redundant.

Test Tracking

Once you've got your test case catalog, you need to track your test cases. Not so in an agile setting, where the CI/CD pipeline runs and monitors every single test case. "If the test exists and is part of the suite, it will be run every time we change the code, and the developer will receive a notification if anything is wrong." - what would you want to track?

Test Documentation

This isn't quite fair, because test documentation does exist: in the log files of the CI/CD pipeline, for every single change to the codebase. It's just that we don't give a hoot about documenting individual test cases, because the entire document would read "Step - executed, passed. Test - executed, passed"; wherever that's not true, we get information on what wasn't okay, when and where.

Test Reporting

We don't do stuff like reporting the percentage of test cases passed, failed and "not-run". There are only two valid conditions for our entire software: "all tests passed", or "tests not passed". And there's not really a need to report testing at all, because if a single test hasn't passed, we can't deploy. So, we really only need to report development progress.
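A hedged sketch of what that binary "report" boils down to in a pipeline: run the whole suite, and either proceed or stop. The script and the deploy command are assumptions about the setup, not a specific CI product's API.

```python
# A minimal deployment gate: either all tests pass and we deploy, or we stop.
# "./deploy.sh" is a placeholder for whatever the pipeline's deploy step is.
import subprocess
import sys


def main() -> int:
    tests = subprocess.run(["pytest", "-q"])  # run the full test suite
    if tests.returncode != 0:
        print("Tests not passed - deployment blocked.")
        return tests.returncode
    print("All tests passed - deploying.")
    subprocess.run(["./deploy.sh"], check=True)
    return 0


if __name__ == "__main__":
    sys.exit(main())
```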

Test Status Meetings

In a waterfall organization, we need to track test status, typically by having routine meetings where testers report their actual vs. planned progress, the number of defects they found and how many of those have already been closed, plus an estimate of how likely they are to complete their work by the end of the test period.
This meeting wouldn't make any sense in an agile organization, because there would be nothing to talk about.

Test Management Suite

Agile organizations rely heavily on automation. There's probably a tool for everything that is relevant and can be automated. Still, you're not going to find a Test Management or Application Lifecycle Management Suite - because there would be nothing for it to do.
If your test cases are written in the central repository and managed by the pipeline, your test protocols are managed by your artifactory, and you don't have any defects to track ... what exactly would you expect such a tool to do?

Roles and Responsibility

We need to agree on which role has which responsibility in the testing process - who writes test cases, who reviews and approves them, who runs them, who communicates defects, who tracks them, and so on. None of this would be required in an agile setting: the team writes, reviews and runs test cases, and deals with any problems encountered. The role is called "Agile team member", and the responsibility is "contribute to the team's success". What that means can be more or less flexible. Just like the members of a family have different strengths and weaknesses, so do team members - and we don't want the game of "Not my responsibility" or "Why didn't you...", because none of these discussions help us reach our team goals. The only discussion we are looking for is "How can I contribute to ..." - and that may change upon need. We wouldn't want a static document to contradict what people feel they can achieve.

Test Levels

We have a Test Pyramid, and technically, that doesn't change in an agile environment. But it means a different thing than in a traditional organization. 

In a traditional organization, we would decide up front on certain test levels, which tests to run on which level, and when to do these test levels.

In agile development, the test levels are fluid. We decide on a test, and we execute it. We then refactor it to run on the most effective level, and that should, first and foremost, be the unit level. Pulling every test to the lowest possible level is essential to keeping the test suite sustainable, which means there can be no hard cut of what to do where.
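As a sketch of what "pulling a test down" can look like - assuming a hypothetical price-rounding rule: instead of verifying the rounding behaviour through a slow end-to-end checkout flow, the rule is extracted and checked at unit level, where it runs in microseconds.

```python
# Pulling a test to the unit level: the rounding rule is tested directly,
# not through an end-to-end checkout. The rule itself is hypothetical.
from decimal import ROUND_HALF_UP, Decimal


def round_price(amount: Decimal) -> Decimal:
    """Round a price to whole cents, halves going up."""
    return amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)


def test_prices_round_half_up_to_cents():
    assert round_price(Decimal("10.005")) == Decimal("10.01")
    assert round_price(Decimal("10.004")) == Decimal("10.00")
```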

Test types

We have the "Test Quadrants", which give a simple and clear overview of what should be tested and whether it's automatable. A traditional Test Strategy document would define once and for all which of these test types we use, what we do to cover them and where to run them. In an agile setting, the quadrants are more of a constant discussion trigger - "do more of this, maybe some less of that is enough, how can we do better here ..."

Test Automation

Automation is often an afterthought in classical test strategies - we identify a number of critical test cases that need to become part of the regression suite, and as far as effort permits, we automate them. This wouldn't work in an agile setting. With the "automate everything" DevOps mentality, we wouldn't define what to automate - we'd automate everything that needs to be done frequently, and that would include pretty much all functional tests. It would also include configuration, data generation and scraping data from the system. We wouldn't put it in a test strategy, though, because how helpful is a statement like "we do what makes sense" - as if anything to the contrary would make sense.
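For instance, test data generation can be a small piece of code rather than a manual activity. Here's a hedged sketch with made-up field names and ranges; seeding the generator keeps the data reproducible from build to build.

```python
# Automated, reproducible test data generation. Field names and value
# ranges are illustrative assumptions.
import random
import string


def generate_customer(seed: int) -> dict:
    """Deterministically generate a plausible customer record."""
    rng = random.Random(seed)
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "name": name.capitalize(),
        "email": f"{name}@example.com",
        "loyalty_points": rng.randint(0, 5000),
    }


def test_generated_customers_are_reproducible():
    assert generate_customer(42) == generate_customer(42)
```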


Release Management

Ideally, we would be on a Continuous Deployment principle - and where that's not feasible, it should be Continuous Delivery. We also want a "Release on Demand" principle, that is: when something is developed, it should be possible to deploy this content without delay. Whether it is released to users immediately or only after a period should be a business question, not a technical one. In most settings, the content would already be live and, using "Dark Release" mechanisms, would become available to users without changes to the code base.
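A minimal sketch of such a Dark Release mechanism, with a hypothetical checkout feature behind a flag - the in-memory flag store stands in for whatever configuration service or flag management tool is actually used:

```python
# Dark Release via a feature flag: the new code path is deployed but stays
# invisible until the flag is switched on - a business decision, not a deploy.
# The flag store and checkout logic are hypothetical.
FEATURE_FLAGS = {"new_checkout": False}


def legacy_checkout(cart: list) -> float:
    return round(sum(cart), 2)


def new_checkout(cart: list) -> float:
    return round(sum(cart) * 0.95, 2)  # hypothetical new behaviour


def checkout(cart: list) -> float:
    impl = new_checkout if FEATURE_FLAGS["new_checkout"] else legacy_checkout
    return impl(cart)
```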

Test Phases

A major concern in traditional testing is the coordination of the different test phases required to meet the launch date. Some activities, like test preparation, need to be completed before the delivery of the software package, and all activities need to be completed a few days before launch to meet the project schedule.
When the team is delivering software many times an hour, and wants to deploy to Production at least once a day, you're going to be out of luck creating a phase-gated schedule - acceptance, integration and system tests happen in parallel, continuously. They don't block each other, and they take minutes rather than weeks.

Scheduling non-functional tests

Yes, there are some tests, like Pen-Tests or load tests, that we wouldn't run on every build.
Whereas a traditional test strategy would put these on a calendar with clear test begin/end periods, we'd schedule intervals or triggers for these types of test - for example, "nightly, weekly, quarterly" or "upon every major build or change to the environment".
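One way to express that in code, sketched here with pytest markers (the marker name and the nightly trigger are assumptions about the pipeline setup): the per-build run excludes the slow tests, and a scheduled job runs only them.

```python
# A slow, non-functional test kept out of the per-build run via a marker.
# Per build:  pytest -m "not slow"      Nightly job:  pytest -m slow
# (The "slow" marker would be registered in pytest.ini to avoid warnings.)
import time

import pytest


@pytest.mark.slow
def test_bulk_import_stays_under_two_seconds():
    start = time.perf_counter()
    total = sum(range(1_000_000))  # stand-in for the real bulk operation
    assert total > 0
    assert time.perf_counter() - start < 2.0
```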

Installation schedule

A major issue in traditional testing is the schedule for installation - what will be installed, when, and by whom. We'd prefer to "push" builds through the automated pipeline, and expect every member of the team to "pull" any version of the software onto an environment of their choice at any time.
If you think this results in chaos, try reframing this into, "How would the software and the work need to look like for this to not result in chaos?" - it's possible!

Go/No-Go Decisions

In an agile setting, you're not working to achieve a "Go" approval from management by providing evidence that quality is sufficient. If there are any quality issues, we would not continue anyway.

Test Coverage

A key metric in traditional testing is test coverage - both the percentage of requirements covered with tests, and the percentage of tests successfully completed. Neither of these measures makes sense in a setting where a requirement is defined by tests, and where the work isn't "done" until the tests complete successfully: test coverage, by the traditional definition, must always be 100% by default. Why then measure it?

Restricted approval

Whereas the traditional tester is usually tasked with doing whatever is required to ensure that a new software release can get the "Go" approval, the approval is usually made "with restrictions". That means: "we know the delivery isn't really up to the mark, but we have a deadline, and can't afford to postpone it." Everyone knows the quality isn't there; it's just a question of how many corners we can cut, and by how much. Agile testers have a different goal: understanding that corners cut today will need to be smoothed out in the future requires us to make sure that no corners get cut in the first place!

Approval without restrictions

When no corners are cut, there is no "restricted Go", and when every approval decision is an unconditional and unanimous "Go", there is no need for Go/No-Go decisions.



Test Environments

Probably one of the hardest battles fought when transitioning from a traditional testing approach to an agile testing approach is the topic of environments: more environments add complexity, constraints on environments reduce agility. The fewer environments we have, the better for us.

Environment configuration

If we move towards a DevOps approach, we should also have "infrastructure as code". Whereas a traditional test team would usually have one specialist take care of environment configuration, we'd expect the configuration to be equal, or at least equivalent, to the Production environment - with no manual configuration activity. Our test strategy should be able to rely on our CI/CD pipeline to bootstrap a test environment in minutes.
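A hedged sketch of what "bootstrap in minutes" can look like from the test suite's point of view - assuming Docker Compose v2 and a docker-compose.test.yml describing the stack; the file name and the services behind it are assumptions about the project:

```python
# Bootstrapping a throwaway test environment from code: one fixture brings
# the whole stack up before the session and tears it down afterwards.
import subprocess

import pytest

COMPOSE = ["docker", "compose", "-f", "docker-compose.test.yml"]


@pytest.fixture(scope="session")
def test_environment():
    subprocess.run(COMPOSE + ["up", "-d", "--wait"], check=True)
    yield
    subprocess.run(COMPOSE + ["down", "-v"], check=True)
```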


After all this, I hear you cry: ... but ...

How about Regulatory Compliance?

"We need evidence for, e.g. SOX Compliance, to warrant that the software meets specification, that all tests were executed, by whom, when, and with which outcome." Yes. That could be. And that's no conflict to agile testing.
I would even claim that agile testing is a hundred times better suited to meet regulatory requirements than traditional testing. And here's why:

  1. The exact statements which were used to execute a test are code.
  2. That means, they have no space for interpretation, and can be repeated and reproduced infinitely.
  3. It also means they are subject to version control. We can guarantee that the test result is exactly correlated to the test definition at that timestamp. Our test cases are tamper-proof.
  4. There is no room for human error in execution or journaling. The results are what they are and mean what they mean. 
  5. All test runs are protocolled exactly as defined. There is no way for any evidence to be missing. By storing test results in an artifactory, we have timestamped, tamper-proof evidence. And tons of it.
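As a hedged sketch of how little it takes to keep that evidence, assuming a pipeline step like the following (report and archive paths are made up): every run produces a machine-readable result file, stamped and copied to an archive.

```python
# Produce and archive timestamped test evidence on every pipeline run.
# Report and archive paths are illustrative assumptions.
import shutil
import subprocess
from datetime import datetime, timezone
from pathlib import Path


def run_suite_and_archive(archive_dir: str = "test-evidence") -> None:
    report = Path("report.xml")
    # check=False: evidence is archived even when tests fail
    subprocess.run(["pytest", f"--junitxml={report}"], check=False)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    target = Path(archive_dir)
    target.mkdir(exist_ok=True)
    shutil.copy(report, target / f"results-{stamp}.xml")
```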

Proper agile testing has nothing to be afraid of when facing an audit. It's better prepared than traditional test management could ever be, and under most circumstances, an auditor would require much less evidence to ascertain compliance than an agile team would require just to do their job.


All that said, this raises the question ...

Do we need an Agile Test Strategy?

If you've been paying close attention to all I've written above, you may wonder if there's a need for a test strategy in an agile environment at all.
After all of the above points, the answer may surprise you: yes, we do indeed need an agile test strategy.
We will explore this test strategy in more detail at another time. For now, let me just reduce it to headlines of what it means and what it will cover:

The Agile Test Strategy ...

  • belongs to the teams developing software.
  • is a living documentation that explains what is currently happening.
  • focuses on increasing customer satisfaction and product value.
  • is closely related to the Definition of Done, the objective standard of when a team has completed their work on any given item.
  • contains the team's quality-related Working Agreements that provide an explanation of how the team collaborates to meet quality objectives.
  • addresses organizational and technological measures required to attain and sustain high quality.
  • minimizes both current and future total effort required to deliver sustainable high quality.
  • optimizes the Test Pyramid.
  • utilizes the Test Quadrants.
  • leverages manual testing to optimize learning.

The key question an agile test strategy would always revolve around is, "What are we doing to meet our commitment to quality with even less effort?"


Conclusion

With this admittedly long article, I hope I could shed some light on the differences between an agile and a traditional testing strategy. You can add massive value to an agile organization if they don't have a test strategy yet, and you can always find something within an agile test strategy that can be optimized. What you should not do, however, is consider the absence of a traditional test strategy a "flaw" in an agile organization.

You need to be familiar with the differences in the two approaches, so that you can avoid making unhelpful suggestions and focus your efforts on the things which move the teams forward.


The major assumption of this article is that an agile organization is actually committed to and using the engineering practices required to attain continuous high quality. Where this isn't true, the Agile Test Strategy would include the way to achieve this condition - it wouldn't focus on instituting practices or mechanisms contrary to this goal.

