We all say that testing is important—after all, for any requirement, we can only say that it’s “done” when all the relevant tests have been passed. But “important” isn’t the same as “valuable.” That’s not only an important distinction, it’s also one that QA people don’t get to make.
Author: Peter Vogel
For example, I think every joke I tell is “funny.” Unfortunately, my opinion doesn’t count – only the person hearing the joke has an opinion that matters. And while I think testing should be regarded as a “value-added” activity, my opinion doesn’t matter there, either: “Value” exists in the eyes of the customer, not the producer.
And it’s easy to prove that users don’t regard testing as a “value-added” activity: Ask them if they’d mind if you dropped testing, provided it didn’t increase the bug count. We’re all pretty confident that not one user would complain. That’s because your users—the only opinion that counts here—don’t consider testing to be a value-added activity.
However, we know we can’t (yet) create bug-free applications without testing. Unfortunately, while that makes testing a “necessary” task, it doesn’t make it a “value-added” task.
There is a path to changing that perception: Bring testers into the process earlier—specifically, into the requirements process.
If you think that the job of testers is “to find bugs” and that has no relationship to requirements, then you’re missing the value that testers bring to the table. As Niall Lynch at QA Lead says, “Anyone can find bugs. Customers do it for free all the time.”
Testers, however, have a unique association with the “application under test,” and bringing testers in early not only lets you take advantage of that but adds value that users care about. While testers (like users) care very much about how the application works, testers (like developers) are part of the process that delivers the application.
Testers must understand both the business aspect (what processes are involved/what goals are to be achieved) and the technical aspect (how is the application supposed to work/what’s possible for the application to do). To put it another way: Testers have to know what “done” looks like from both the end user’s and the application’s perspective.
Stand-ins for the User
We can’t deliver bug-free software (at least, not with our current tools). But having testers involved in the requirements process increases the chances of users getting what they want and, where bugs are unavoidable, getting the bugs they can live with. At the start of the project, exploratory testing is more likely to find “the bugs that matter” when it’s driven by the tester’s understanding of the customers’ requirements.
This also applies at the end of the project. Where testers have a deep understanding of what users want, developed during the requirements phase and continued through the project, testers can help refine the criteria and focus of User Acceptance Testing. This ensures that UAT demonstrates what users need in order to release the software—something that users value.
During the development, because we can’t eliminate all bugs, QA allows us to manage the risk associated with any release. The people who should be deciding how much risk is acceptable are the users … except we can’t involve them as much as we might like (users do have jobs). As a result, during the testing process, testers have to act as proxies for users in order to successfully prioritize testing. This is only possible if testers participated in defining those requirements to begin with.
In fact, the need for testers to act as proxies for users crops up all through the testing process. When building test cases, testers have to know what fields are required to meet the business needs, what counts as valid/invalid data, and what counts as “special cases.”
Those are business questions, not technical questions. I once objected to allowing employee timecards to exceed 24 hours in a day … until someone explained to me the impact of danger pay, overtime and other potential bonuses that resulted in timecards that totaled far more than 24 hours per day. Testers are in the best position to ensure that users get what they want when they participate in developing the requirements.
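The timecard anecdote can be made concrete with a short sketch (all names and rules here are illustrative assumptions, not any real payroll system): a naive test that rejects every daily total over 24 hours encodes the wrong business rule, while a test built from the actual requirement distinguishes *physical* hours (which can’t exceed 24) from *payable* hours (which can, once overtime and danger-pay multipliers apply).

```python
# Hypothetical sketch of the timecard rule from the anecdote.
# A day's entries are (hours_worked, pay_multiplier) pairs.

def physical_hours(entries):
    """Hours actually worked in the day, ignoring pay multipliers."""
    return sum(hours for hours, _ in entries)

def payable_hours(entries):
    """Hours to pay for, after applying each entry's multiplier."""
    return sum(hours * multiplier for hours, multiplier in entries)

def validate_timecard(entries):
    """A timecard is valid if the *physical* hours fit in one day,
    even though the *payable* total may exceed 24."""
    return physical_hours(entries) <= 24

# 8h regular, 8h overtime (1.5x), 4h danger pay (2.0x):
day = [(8, 1.0), (8, 1.5), (4, 2.0)]

assert validate_timecard(day)        # 20 physical hours: a valid day
assert payable_hours(day) == 28.0    # payable total legitimately exceeds 24
```

A tester who never sat in on the requirements discussion would plausibly have written `assert payable_hours(day) <= 24` and filed the 28-hour total as a bug.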
Furthermore, we should be involving testers from the beginning because of our process: We’ve decided that providing a “definition of done” gives users the best chance of getting what they want. And that means defining the tests that prove the definition has been met. Testers need to be involved at the requirements phase to ensure that the requirements include a meaningful “definition of done.”
“Meaningful” means flagging which tests are valuable to users and, as a result, prioritizing the related features. Prioritizing tests ensures that users get, as early as possible, the reliable functionality that’s important to them.
Yes, most of the time, users will want the “happy path” implemented first. But there are often exceptions—edge cases important enough that the application isn’t really “ready for use” until those special cases are handled. On the other hand, I once delivered a version of an application where our testing showed that only the application’s “happy path” would work and, even then, only under a light load. But that was fine because the client only wanted a version of the application that they could use as a demonstration at a trade show.
Plus, getting a testable definition of done early in the process lets testers set up a measurement of progress that users value: number of tests (features) that are done/undone and the number of known bugs not yet fixed. Users value these measures (especially as they see the first number increase and the second one decrease).
Finally: Once testers and users start working together in the requirements phase, it’s natural for that participation to continue. Most modern testing tools (like Telerik Test Studio) provide recorders that let users build test scripts by working with the application. Testers and users can use these tools to build tests together, further deepening both groups’ understanding of the application and its requirements. More importantly, by participating in creating tests, users are more likely to value (and have faith in) the tests.
Testing as a Value-Added Activity
To quote Lisa Crispin and Janet Gregory in “Agile Testing,” the real goal of testers is “… to work with the customer or product owner in order to help them express their requirements adequately so that they can get the features they need, and to provide feedback on project progress to everyone.” And that’s something that even users will think of as “value added.”
About the Author
Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.