So what exactly is software testing? The traditional academic view holds that software testing takes programs and specifications as input and produces a bug list as output. In other words, software testing produces bug lists for development teams. Others, especially those in the commercial world, have different expectations. They view testing as a service to development teams. Testers are expected to provide almost instant feedback at all times while programs and specifications keep evolving.
This article discusses the "production" view and the "service" view of software testing and explores their impact on software testing techniques.
Traditionally, the problem of software testing has been stated as follows: given a program and a description or specification of what the program does, find the conditions under which the program does not behave as expected. There are generally two types of techniques used to solve this problem. One is program-based techniques, also known as white-box testing. The other is specification-based techniques, also known as black-box testing.
Program-based techniques develop test cases according to program structure. The central idea is that program control structures and data structures determine program behavior. If test cases sufficiently cover all control structures and/or data structures, we can be reasonably confident that most program behaviors have been examined. Statement coverage, branch coverage, and path coverage are examples of criteria used in white-box testing.
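For instance, branch coverage requires every outcome of every decision to be exercised at least once. Here is a minimal sketch in Python; the function and test cases are invented purely for illustration:

    # A hypothetical function used to illustrate branch coverage.
    def classify(n):
        if n < 0:
            return "negative"
        return "non-negative"

    # A single test exercises only one outcome of the `if`; branch
    # coverage demands a second case for the other outcome.
    assert classify(-5) == "negative"      # covers the n < 0 branch
    assert classify(3) == "non-negative"   # covers the fall-through branch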
Specification-based techniques do not assume knowledge of internal program structures. Instead, they depend on the problem specifications or descriptions to determine which test cases should be used. The central idea is that if a program is supposed to solve a problem, as long as the problem is solved, it doesn't matter how the program is constructed.
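To illustrate, here is another invented Python sketch. The test cases below are derived from the specification alone ("return the elements in ascending order") and would apply unchanged to any implementation, whether quicksort, mergesort, or something else:

    def sort_numbers(xs):
        return sorted(xs)  # implementation detail, invisible to the tests

    # Test cases derived from the specification alone.
    assert sort_numbers([]) == []                 # boundary: empty input
    assert sort_numbers([7]) == [7]               # boundary: single element
    assert sort_numbers([3, 1, 2]) == [1, 2, 3]   # typical case
    assert sort_numbers([2, 2, 1]) == [1, 2, 2]   # duplicates preserved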
Both of these traditional testing techniques assume that there are static programs or specifications to work on and that a list of bugs is all development teams need.
To support "bug-list production", techniques have been developed to cover the program-under-test more thoroughly under a pre-selected coverage criterion, to achieve higher coverage with a smaller number of test cases, and to execute test cases more quickly. New coverage criteria are also invented to cover different aspects of the program-under-test.
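One classic idea along these lines is greedy test-suite minimization: repeatedly pick the test that covers the most not-yet-covered items. A hedged Python sketch, with an invented coverage map, might look like this:

    # Greedy minimization: pick the test covering the most uncovered
    # items (statements, branches, ...) until everything is covered.
    def minimize(coverage):
        """coverage: dict mapping test name -> set of covered items."""
        remaining = set().union(*coverage.values())
        chosen = []
        while remaining:
            best = max(coverage, key=lambda t: len(coverage[t] & remaining))
            if not coverage[best] & remaining:
                break  # nothing left that any test can cover
            chosen.append(best)
            remaining -= coverage[best]
        return chosen

    coverage = {
        "test_a": {1, 2, 3},
        "test_b": {3, 4},
        "test_c": {4, 5, 6},
    }
    print(minimize(coverage))  # ['test_a', 'test_c'] covers all six items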
The focus here is on producing more thorough bug lists, in other words, better products, even if providing such lists takes a longer turnaround time.
In practice, testers' jobs are sometimes more subtle than simply producing bug lists. I once asked a test lead from a large software company what his most important responsibility was. The answer was quite surprising to me at the time: the most important thing was to know the status of the software product at all times. After I thought about it, the idea became quite reasonable. Clearly, when both the program-under-test and the description of the problem are changing every day, it is not feasible to produce a comprehensive bug list for each daily build. Nor is it necessary. It is more useful to the development team if testers can provide constant and rapid feedback on the status of the current builds. Overview information is as important as individual bug reports.

This "service" view of software testing focuses on the need for rapid feedback and the evolving nature of the program-under-test. Just as with many other services, such as phone service, the need for rapid responses is paramount. When a person picks up a phone, she expects to talk right away; when a development team gets a build done, they expect feedback right away.
To perform software testing as a service, testers must be able to quickly determine the status of a new build. Automated test execution and result verification seem a logical way to go. However, most current test automation tools and techniques are closely tied to implementation details such as user interfaces, which makes them extremely sensitive to changes in the program-under-test. This creates a dilemma. On one hand, testers have to automate tests to provide rapid feedback. On the other hand, automated tests don't work very well with updated programs and thus sometimes slow testing down. There is no perfect solution to this yet. More abstract test descriptions may be able to decouple test cases from implementation details in the future.
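As a sketch of what such decoupling might look like (all names here are hypothetical), a test can be written against an abstract vocabulary of actions, with a small adapter mapping that vocabulary onto the current build. When the user interface changes, only the adapter is rewritten:

    class OrderDriver:
        """Abstract actions a test may perform, independent of any UI."""
        def login(self, user, password):
            raise NotImplementedError
        def submit_order(self, item):
            raise NotImplementedError
        def last_status(self):
            raise NotImplementedError

    class FakeUiDriver(OrderDriver):
        """Stand-in for an adapter that drives the real interface.
        When the UI changes, only this class is rewritten; the test
        below survives the churn untouched."""
        def login(self, user, password):
            self.user = user
        def submit_order(self, item):
            self.status = "confirmed"
        def last_status(self):
            return self.status

    def test_order_flow(driver):
        # The test is written entirely in abstract terms.
        driver.login("alice", "secret")
        driver.submit_order("widget")
        assert driver.last_status() == "confirmed"

    test_order_flow(FakeUiDriver())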
A key question here is, how do testers perform a small number of test cases on each build and still gain a good overall knowledge of the status of the entire program? In other words, how do they determine which test cases should be used on which builds? How do they combine the results of different test cases executed on different builds and make sense of them? I'm sure many testers are experienced enough to do this, but until we can clearly state how we do it, we cannot claim that we know how to engineer it and that we can do it successfully in the next project.
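One naive answer, sketched below purely as an assumption about how this might work (the rotation policy and data structures are invented), is to run a rotating slice of the suite on each build and keep the most recent result per test, yielding an overall, if slightly stale, status picture:

    from collections import namedtuple

    Result = namedtuple("Result", ["build", "passed"])

    def slice_for_build(suite, build_number, per_build):
        """Round-robin selection: every test runs at least once in any
        ceil(len(suite) / per_build) consecutive builds."""
        start = (build_number * per_build) % len(suite)
        return [suite[(start + i) % len(suite)] for i in range(per_build)]

    def update_status(status, build_number, results):
        """Merge this build's results into the running picture."""
        for test, passed in results:
            status[test] = Result(build_number, passed)

    def summarize(status, suite):
        """Report which tests are failing and which have never run."""
        failing = [t for t in suite if t in status and not status[t].passed]
        untested = [t for t in suite if t not in status]
        return failing, untested

    suite = ["t1", "t2", "t3", "t4", "t5"]
    status = {}
    for build in range(3):
        picked = slice_for_build(suite, build, per_build=2)
        # pretend every selected test passes on this build
        update_status(status, build, [(t, True) for t in picked])
    print(summarize(status, suite))  # -> ([], []) once all tests have run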
In the case of open-source software development, there are usually no deadlines. Still, in many projects, builds are updated daily or weekly. It is likely that when someone declares "Hey, I just achieved 80% test coverage for project X based on test criterion Y." (if anyone ever would), the build she used is probably already out of date. I wonder how people in successful projects such as Emacs, Linux, and Apache put feedback together and determine which builds are stable. Or do they declare a build stable before user feedback arrives? Is there a systematic way to separate stable builds from other builds?
The production view and the service view of software testing are certainly not incompatible. Many testers who provide testing services do a good job using techniques developed for bug-list production, making ad hoc adjustments so they work in evolving environments. However, I think it is in the best interest of the software community to contemplate what we expect from software testing and the best way to provide it. I can't wait to hear what freshmeat users have to say.
Chang Liu is a member of the Rosatea group (Research Organization for Specification- and Architectural-based Testing & Analysis) at UC Irvine. His research interests are centered on software testing automation, software quality assurance, and software engineering in general. He is currently working on TestTalk -- a comprehensive testing language.