TestMake

Introduction

The TestMake script is a facility that generates a complete test program for a given subroutine or, more generally, a module from a very limited amount of information. The generated program tries to exercise all or most test paths in the module.

While it is (probably) possible to use any programming language as a host for the test program, several specific features make one language easier to use than another. These features are discussed in the section on PROGRAMMING LANGUAGES.

The Generation Process

The TestMake facility is easy to use. As stated in the introduction, you need two reports from the Toolset that describe basic properties of the module to be tested. When no further information is available, the script will generate a template with a guess of the missing information, as described in the next section. The generated test program will then only sketch the test paths and how to take them.

The generated template looks like this:


Module SOME_MODULE {
   Input? "ARRAY" "REAL*4" {
      ARRAY = ...
   }
   Input? "IERROR" "INTEGER*4" {
      IERROR = ...
   }

   ... further parameters ...

   Condition " NODATA .GT. MAXDIM  " TRUE {
      MAXDIM = ...
   }
   Condition " NODATA .GT. MAXDIM  " FALSE {
      MAXDIM = ...
   }

   ... further conditions ...

   Call {
      ? Call to module
   }
}
The purpose of the template is to help set up the specific information about the module that is very hard or impossible to extract automatically.

When, however, all this information about the module is available, the test program can be filled in. It is complete in the sense that all wholly or partially controllable test paths are included in a sequence of calls to the module, that the results are checked and that a report is generated.
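For illustration, a filled-in module file for the hypothetical SOME_MODULE of the template might look like the sketch below. The concrete Fortran fragments (the assignments and the CALL statement), the extra NODATA parameter and the assumption that the question marks simply disappear once the information has been supplied are guesses for the sake of the example, not prescribed syntax.

Module SOME_MODULE {
   Input "ARRAY" "REAL*4" {
      ARRAY = 1.0
   }
   Input "NODATA" "INTEGER*4" {
      NODATA = 10
   }
   Input "IERROR" "INTEGER*4" {
      IERROR = 0
   }

   Condition " NODATA .GT. MAXDIM " TRUE {
      MAXDIM = NODATA - 1
   }
   Condition " NODATA .GT. MAXDIM " FALSE {
      MAXDIM = NODATA + 1
   }

   Call {
      CALL SOME_MODULE( ARRAY, NODATA, MAXDIM, IERROR )   ! assumed argument list
   }
}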

The usefulness of the automatic checks depends, unfortunately, on the nature of the module. The automatic checks will do no more than determine:

To compensate for this essential drawback, you can specify your own test code, either for individual output parameters or for the test as a whole. It is currently not possible, however, to make the test code specific to each test case.

If this particular limitation were lifted, you would have to supply quite a bit more information. The golden rule should be that all code fragments are short and few in number, because you do not want to have to build a test program to test the generated test program.
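By way of illustration of such a specific check: the sketch below attaches a small Fortran fragment to the output parameter IERROR via a hypothetical Check command. Neither the command name nor the ERROR flag used to signal a failure is prescribed by this description; they merely show how short such a fragment can remain.

Check "IERROR" {
   ! Hypothetical check fragment: a valid input must leave the
   ! error flag at zero (ERROR is an assumed failure flag)
   IF ( IERROR .NE. 0 ) THEN
      ERROR = .TRUE.
   ENDIF
}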

Details

The TestMake script reads a file named after the module for which the test program must be created (extension: .module) or, if that file does not exist, the default file "modules.tcl".

This so-called module file must define the call to the actual code that needs to be tested, the input and output variables that need to be observed and the test cases.

The following commands are available:

The scheme below shows how all these commands together make a complete test program:

   Main program:
   (Program header)
   Headers
   Declarations of input, output and other parameters
   Declarations

   Preparation

   Repeat for all test cases:
      Call initialisation
      Set up the conditions for the test case
      Call the module
      Call the check routine
      Report the conclusions

   End of main program

   Subroutine to initialise:
      Initialise all parameters
      Initialisation fragment

   Subroutine to call the module:
      Call fragment

   Subroutine to check the output:
      Repeat for each parameter:
         Generic check code
         Specific check code (if given)
      Report conclusion: test failed or not

Supporting routines

The checking code and, for that matter, any other code you specify may use the following support routines:

Programming languages

The host language for the generated program is determined by all the fragments that are defined. The script itself simply assembles all the fragments in a fixed order, substitutes variable names and so on, and writes the result to the output file.

This means that the script is very flexible: as long as you can define the proper code fragments, the host language can be any programming language you like. There are several special features, however, that make a language suitable for this use or not:

These features are present in languages like Fortran 90, Java and, to a lesser degree, C++, but not in languages like Fortran 77 and ANSI-C.

The current host language of choice is therefore Fortran 90, which has the additional advantage that the scope of variables can be controlled better than with a single global scope. Its internal subroutines also make the generated program a trifle more elegant.

The code fragments are, however, contained in a separate file so that it is relatively easy to change the host language.

For the currently supported languages we have the following special issues:

General remarks:
Complicated data structures (linked lists, for instance) cannot be copied and compared automatically, or only with a very large effort. This means that you will have to extend the supporting classes and auxiliary routines yourself if such data structures are important.

Limitations

The current implementation has a number of restrictions, but more important are the restrictions on the method itself: if the module contains one or more state variables, calling it twice with the same input may not lead to the same behaviour and the same outcome. For instance, at the first call, a file is opened and some values are read. At each subsequent call more is read from the file.

If the operation of the module depends on features like this, you may need a more clever approach to get the module into the right state again for each test case.
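For example, if the module reads further data from an open file at every call, the initialisation fragment could rewind that file so that each test case starts from the same state. The Initialise command name and the unit number below are assumptions made purely for illustration.

Initialise {
   ! Hypothetical fragment: rewind the data file (assumed unit 10)
   ! so that every test case reads from the beginning again
   REWIND( 10 )
}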

Another problem is the presence of test paths that are only partially controllable via the input parameters, or not selectable via them at all. For such paths it is very difficult to set up the proper test input automatically (it may depend on all kinds of external resources, such as the presence of an input file).

The possibility of defining new test cases manually is a workaround for these problems, but not a complete solution.
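Such a manually defined test case might, under the same caveat that the exact command is not fixed by this description, look like the hypothetical Case block below, which forces the module down a path that the input parameters alone cannot select.

Case "input file is absent" {
   ! Hypothetical fragment: make sure the input file does not exist,
   ! so that the error-handling path is taken (file name assumed)
   OPEN( 10, FILE = 'input.dat', STATUS = 'OLD', IOSTAT = IERROR )
   IF ( IERROR .EQ. 0 ) CLOSE( 10, STATUS = 'DELETE' )
}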

Checking the results is another area that leaves room for improvement. In many cases this will have to be done by module-specific code, since it is in principle impossible to judge from the code alone what the module should do.

To do

The TestMake script is ready for use, but there remain a number of things to do: