What is Sufficient Test Coverage for a Software Development Effort?
In any software development project, testing is an integral part of deploying a working product that satisfies the needs of the prospective user base. Software is by nature complex, and it is typically the product of attempting to automate and streamline workflows that may or may not already include other software applications and implementations. The goal of any software product should therefore be a better, improved approach: in some way cheaper, faster, more secure, more robust, or of higher quality than what preceded it. If nothing else, it should offer a more complete and comprehensive solution that integrates well with the other parts of the user's overall workflow. If it does none of these things, there is little or no purpose to introducing the new software at all. The one remaining justification is that the new software is designed to run on new hardware that will replace the old. In that case integration testing becomes even more critical, to be sure that the new hardware, its operating system, and the other software designed to run on it introduce no conflicts.
The main point here is that test coverage is primarily determined by a few important criteria:
1. User workflows
2. Features supported by the software
3. Configurations of the environment the software will run on
4. Performance and functionality expectations
There are other concerns, but these are the central ones both for testing itself and for determining whether the testing to be applied is adequate.
Generally, any testing effort can rest on a fundamental of computing: software has key functionality that must work properly before anyone gets creative with the aesthetics of layout and UI design. These basics boil down to the functions of input, processing, and output, which all computer science students learn at the beginning of their studies. For testing, we are concerned with whether the software correctly and accurately captures the right input from the user in a consistently reliable manner. If we can be sure the input is captured correctly, we move on to examining the processing mechanism to make sure it performs the stated function. From a tester's point of view this is often something of a "black box," and the only way to check the function is to enter the input data and submit it to the processing function, which in turn returns or delivers the output data to the specified channels. Once we know the correct set of data is output, we can focus on the particular format being delivered and make sure the data is returned in a way that can be used by the user base.
These are the essentials and fundamentals of testing any software. So far none of this is especially complex. What starts to get complex are the specific methods by which to verify and validate that the software components of input, processing, and output are working properly. This rapidly moves the conversation to specific tools, hardware platforms, and types of networked environments, all of which require some specific knowledge and expertise depending on the type of testing that is needed. Today, for most testing to be sufficiently comprehensive, end-to-end integration testing is required. This may mean the testing is performed by a single individual or by a team; the number of testers required depends on the size of what is typically called the test matrix. The test matrix is an abstract term that defines the scope and boundaries of the particular hardware and software components that must be supported to meet the requirements of a successful software delivery and deployment. It determines the actual configurations that will be supported by the software for live users in the field, or what is typically called production.
To achieve sufficient coverage of the test matrix, testers employ a variety of methods and approaches to test execution. Depending on the functionality and nature of the software, and on the components and UI (user interface) elements used, testers will commonly use the following:
1. Limit/boundary tests
2. Positive and negative tests
3. Computational checks
4. Variations of input
5. SQL injection
6. Variations of input sequencing
7. User-role-specific tests (tests for properly authorized scope)
8. Performance and load testing
9. Regression testing
The point of all this testing is to confirm the functionality and overall performance of the software. It is the tester's job to confirm that the software performs as intended, as specified in both the design and the requirements, and that in doing so it meets or exceeds the user base's expectations for its defined purpose.
It should be clear, then, that if the requirements and design are specified clearly there is no ambiguity. In real life the reality is not so simple: often both the requirements and the design are not clearly spelled out. This means a higher risk of failure for the project and a higher level of effort not only for the tester but for everybody involved in the deployment effort. The tester's role can often include helping to hammer out the design and requirements by highlighting discrepancies and inadequacies. To be clear, though, the tester can only provide a supporting effort, since it is project management, the prospective customer, and those actually coding and configuring the solution who must specify and drive the implementation of the requirements and the design.
So despite this secondary support role, the testing effort is inextricably linked to the documented requirements and design specifications that in essence define what the software is intended to be and how it is to behave. The testing effort relies on these to establish a baseline for comparison. As new features are rolled in and introduced, the old features must remain the same; testing and verifying this is often called regression testing. Testers run tests against the already existing features to make sure they behave and perform as they did before the new features were introduced. This allows for incremental updates and easier isolation of problems introduced by configuration or code changes.
The reality is that there are very lengthy books discussing at length the long list of methods used in the discipline of software testing. Here we have more modest intentions: to outline the essentials that will help the project manager assess whether the testing performed provides sufficient coverage for the deployment of a given piece of software. The best tool for the project manager to obtain and use to measure this sufficiency is the requirements traceability matrix. This is essentially a map, typically laid out in a spreadsheet, that shows what the requirements are, the features actually built into the design, and the testing results for each feature. For the project manager it shows at a glance the overall state of the project. While some of these spreadsheets can be very large, and some project managers don't use the matrix exactly as I have described, it is indeed a true map of the overall condition of the software.
Another very useful tool is the bug trend tracking chart. If it is properly maintained and the project is moving forward successfully, the trendlines should form a bell curve: the number of new and open bugs rises, peaks, and then falls. The falling side of the curve means that new bugs are becoming scarce and open bugs are being fixed, which means the software is on a reasonably definite path toward being stable, functional, and usable.
Between these two tools, and by closely watching the various parts of the project timeline and coordinating the integration of each piece, we can be confident that we are on track toward a successful launch and deployment. Of course, once the software is deployed the work is not done: there is much support and maintenance still needed, and updated versions with additional capabilities are typically expected. This is the general nature of the software development cycle. At this point we can have high confidence that the test coverage provided is sufficient, unless you have good reason to question the results the testers are providing. In general, testers are keenly aware of the consequences of malpractice and of delivering test results that miss critical or severe bugs. If they are paid professionals, this is their role and the responsibility they are expected to fill. As a project manager, your primary task is to make sure that all the players on the project fill the responsibilities of their given roles well.