The purpose of test monitoring is to give feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage. Metrics may also be used to assess progress against the planned schedule and budget.
Common test metrics include:
- Percentage of work done in test case preparation (or percentage of planned test cases prepared).
- Percentage of work done in test environment preparation.
- Test case execution (e.g. number of test cases run/not run, and test cases passed/failed).
- Defect information (e.g. defect density, defects found and fixed, failure rate, and retest results).
- Test coverage of requirements, risks or code.
- Subjective confidence of testers in the product.
- Dates of test milestones.
- Testing costs, including the cost compared to the benefit of finding the next defect or of running the next test.
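Several of the metrics above are simple ratios over raw counts collected during a test cycle. As an illustrative sketch (the function names and figures are hypothetical, not taken from any standard), they might be computed like this:

```python
# Illustrative helpers for a few of the common test metrics listed above.
# All names and sample figures are hypothetical.

def execution_progress(run: int, planned: int) -> float:
    """Percentage of planned test cases that have been run."""
    return 100.0 * run / planned if planned else 0.0

def pass_rate(passed: int, failed: int) -> float:
    """Percentage of executed test cases that passed."""
    executed = passed + failed
    return 100.0 * passed / executed if executed else 0.0

def defect_density(defects: int, kloc: float) -> float:
    """Defects found per thousand lines of code (KLOC)."""
    return defects / kloc if kloc else 0.0

def requirements_coverage(covered: set, all_reqs: set) -> float:
    """Percentage of requirements exercised by at least one test."""
    return 100.0 * len(covered & all_reqs) / len(all_reqs) if all_reqs else 0.0

# Example snapshot for a monitoring report:
print(execution_progress(run=80, planned=100))   # 80.0
print(pass_rate(passed=72, failed=8))            # 90.0
print(defect_density(defects=12, kloc=24.0))     # 0.5
print(requirements_coverage({"R1", "R2"}, {"R1", "R2", "R3", "R4"}))  # 50.0
```

Percentages like these feed directly into the progress-against-schedule assessments mentioned above; defect density is usually tracked per component to highlight risk areas.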
Test reporting is concerned with summarizing information about the testing endeavour, including:
- What happened during a period of testing, such as dates when exit criteria were met.
- Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software.

The outline of a test summary report is given in ‘Standard for Software Test Documentation’ (IEEE 829).
Metrics should be collected during and at the end of a test level in order to assess:
- The adequacy of the test objectives for that test level.
- The adequacy of the test approaches taken.
- The effectiveness of the testing with respect to its objectives.
Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.
Examples of test control actions are:
- Making decisions based on information from test monitoring.
- Re-prioritizing tests when an identified risk occurs (e.g. software is delivered late).
- Changing the test schedule due to the availability of a test environment.
- Setting an entry criterion requiring fixes to have been retested (confirmation tested) by a developer before accepting them into a build.
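The first control action above, re-prioritizing tests when a risk materializes, can be sketched in code. This is a minimal illustration under assumed names (`TestCase`, `reprioritize`, and the sample data are hypothetical): tests covering the affected risk area are moved to the front of the queue, preserving priority order within each group.

```python
# Hypothetical sketch of one test control action: re-prioritizing the
# remaining test queue when an identified risk occurs (here, a component
# delivered late). Names and data are illustrative only.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    risk_area: str
    priority: int  # 1 = highest

def reprioritize(queue: list, affected_area: str) -> list:
    """Move tests covering the affected risk area to the front of the
    queue, keeping priority order within each group (sorted is stable)."""
    return sorted(queue, key=lambda tc: (tc.risk_area != affected_area, tc.priority))

queue = [
    TestCase("login smoke", "auth", 2),
    TestCase("payment flow", "billing", 1),
    TestCase("password reset", "auth", 1),
]

# The billing component was delivered late, so its tests now run first:
for tc in reprioritize(queue, "billing"):
    print(tc.name)   # payment flow, password reset, login smoke
```

The key detail is that the sort key puts tests for the affected area first (`False` sorts before `True`) without discarding the existing priority ordering, so the control action adjusts the plan rather than replacing it.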