Data coverage testing

Netisopakul, Ponrudee, White, Lee and Morris, John (2002) Data coverage testing. In: Asia-Pacific Software Engineering Conference, Gold Coast, Qld., Australia.

Abstract

Generating test data sets large enough to cover all the tests required before a software component can be certified as reliable is a time-consuming and error-prone task if carried out manually. A key parameter when testing collections is the size of the collection to be tested: an automatic test generator builds a set of collections containing n elements, where n ranges from 0 to n_crit. Data coverage analysis allows us to determine rigorously a critical collection size n_crit such that testing with collections of size > n_crit provides no further useful information, i.e. will not uncover any new faults. We conducted a series of experiments on modules from the C++ Standard Template Library which were seeded with errors. Using a test model appropriate to each module, we generated data sets of sizes up to and exceeding the predicted value of n_crit and verified that after all collections of size ≤ n_crit had been tested, no further errors were discovered. Data coverage was also compared with statement coverage testing and random test data set generation: the three techniques were compared for effectiveness at revealing errors relative to the number of test data sets used. Statement coverage testing was confirmed as the cheapest, in the sense that it produces its maximal effect for the smallest number of tests applied, but the least effective technique in terms of the number of errors uncovered. Data coverage was significantly better than random test generation: it uncovered more faults with fewer tests at every point.
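To make the generation scheme in the abstract concrete, the following is a minimal C++ sketch of a loop that exercises a collection-handling routine with collections of every size n from 0 to n_crit. The value n_crit = 5, the reversed-sequence input, std::sort standing in for the module under test, and the sorted-order oracle are all illustrative assumptions, not the paper's actual test model.

    #include <algorithm>
    #include <cassert>
    #include <cstddef>
    #include <numeric>
    #include <vector>

    int main() {
        // Assumed critical collection size; in the paper this value is
        // predicted by data coverage analysis for each module.
        const std::size_t n_crit = 5;

        // Data coverage: test collections of every size n in [0, n_crit],
        // including the empty collection.
        for (std::size_t n = 0; n <= n_crit; ++n) {
            std::vector<int> expected(n);
            std::iota(expected.begin(), expected.end(), 0);  // 0, 1, ..., n-1

            // Build one input of this size (here, the sequence reversed)
            // and exercise the routine under test (std::sort as a stand-in).
            std::vector<int> input(expected.rbegin(), expected.rend());
            std::sort(input.begin(), input.end());

            // Oracle: sorting the reversed sequence must recover the original.
            assert(input == expected);
        }
        return 0;
    }

In the experiments described, each STL module would have its own test model, its own set of generated collections per size, and its own predicted n_crit; this loop only shows the shape of exhaustive size coverage up to the critical size.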

Item Type:

Conference or Workshop Item (Paper)

Identification Number (DOI):

Deposited by:

Automated system

Date Deposited:

2021-09-09 23:53:48

Last Modified:

2022-04-30 05:06:48
