TestMake
The TestMake script is a facility that generates a complete test program
for a given subroutine or, more generally, a module, from a very limited
amount of information:
-
Two files generated with the McCabe Toolset, one
containing the data usage and one containing a description of
the basis test paths.
-
A file that must be prepared manually, although the first round
through the generation process will help you
(cf. THE GENERATION PROCESS)
From this information a test program is made that tries to exercise all
or most test paths in the module.
While it is (probably) possible to use any programming language as a
host for the test program, several specific features make one language
easier to use than another. These features are discussed in the section
PROGRAMMING LANGUAGES.
The TestMake facility is easy to use. As stated in the introduction,
you need two reports from the Toolset that describe basic properties
of the module to be tested:
-
The report on the data usage is used to identify the dummy
or external parameters to the module.
-
The report on the test paths is used to identify those test paths
that can be influenced directly from outside
(cf. LIMITATIONS)
When no more information is available, the script will generate a
template with a guess at the missing information, as described in the
next section. The generated test program will then only sketch the
test paths and how to take them.
The generated template looks like this:
Module SOME_MODULE {
   Input? "ARRAY" "REAL*4" {
      ARRAY = ...
   }
   Input? "IERROR" "INTEGER*4" {
      IERROR = ...
   }
   ... further parameters ...
   Condition " NODATA .GT. MAXDIM " TRUE {
      MAXDIM = ...
   }
   Condition " NODATA .GT. MAXDIM " FALSE {
      MAXDIM = ...
   }
   ... further conditions ...
   Call {
      ? Call to module
   }
}
The purpose of the template is to help set up the specific information
about the module that is very hard or impossible to extract
automatically.
When, however, all this information about the module is available, the
test program can be filled in. It is complete in the sense that all
wholly or partially controllable test paths are exercised in a sequence
of calls to the module, that the results are checked and that a report
is generated.
The usefulness of the automatic checks depends, unfortunately, on the
nature of the module. The automatic checks will do no more than
determine:
-
Has an input-only variable been changed? This is not allowed, so
that is considered an error.
-
Has an output variable been changed? This is supposed to happen,
so the absence of a change is considered an error.
-
Has an error flag been set by the module itself?
(Which means the module could not do its job properly,
possibly because it had to fail!) In that case output variables
need not have been set, though this is reported.
To compensate for this essential drawback, you can specify your own
test code, either for individual output parameters or for the test as
a whole. It is currently not possible, however, to make the test code
specific to each test case.
If this particular limitation were lifted, you would have to supply
quite a bit of extra information. The golden rule should therefore be
that all code fragments are short and their number small: you do not
want to have to build a test program to test this generated test
program.
The TestMake script reads a file with the name of the module for which
the test program must be created (extension: .module) or, if that does
not exist, the default file "modules.tcl".
This so-called module file must define the call to the actual code that
needs to be tested, the input and output variables that need to be
observed and the test cases.
The following commands are available:
-
Select the host programming language by means of the
HostLanguage command:
HostLanguage "language of choice"
Note:
There should be only one such command in the modules.tcl
file, as the script has not been designed to work with more than
one programming language at a time.
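For instance, to generate Fortran 90 test programs (assuming the
language is identified by the string "Fortran 90"):
   HostLanguage "Fortran 90"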
-
The Module command is in fact a container for all other
commands that are specific to the module in question. The first
argument gives the name of the module, the second argument is
a list of other commands described here.
Module "module-name" {
other commands
}
-
With the Input command you define a parameter that must
have been set before the module can be called. The code given in
braces will be executed as a preparation for each test case. It
should initialise the parameter to some value or values that are
useful for all these test cases.
The type argument should reflect the actual parameter type, such
as "INTEGER" in Fortran or "int *" in C. The details, such as how
arrays are specified, depend on the programming language
(cf. PROGRAMMING LANGUAGES).
Input "parameter-name" type {
   initialisation code
}
Note:
You may define more input and other parameters than are actually
passed to the module under test. This can be used for instance for
controlling variables in a Fortran COMMON-block or any kind of
global variable in C, Java and so on.
In these cases it may be useful to set the type to an empty string
(""), as the type will then have been defined elsewhere.
-
The Output command defines a parameter that must be set
by the module under test, under normal circumstances. That means
that the initialisation should set the parameter to such values
that the automatic checks can establish that the output parameter
has been set (to different values). The check code fragment is
optional, but allows a finer-grained check than the automatic one.
Output "parameter-name" type {
   initialisation code
} {
   check code
}
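For example, a sketch with a hypothetical output parameter RESULT,
initialised to a sentinel value so that the automatic checks can detect
a change, and checked via the support routine test_failed (see below):
   Output "RESULT" "REAL*4" {
      RESULT = -999.0
   } {
      call test_failed( RESULT .LT. 0.0, "RESULT must not be negative" )
   }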
-
The In/out command defines a parameter that must be set
before the call and will be set to some other value
by the module under test, under normal circumstances. Again,
the initialisation should set the parameter to such values
that the automatic checks can establish that it has been set
(to different values). The check code fragment is optional.
In/out "parameter-name" type {
initialisation code
} {
check code
}
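For example, with a hypothetical counter that the module under test is
expected to increment:
   In/out "COUNT" "INTEGER*4" {
      COUNT = 0
   } {
      call test_failed( COUNT .LE. 0, "COUNT should have been incremented" )
   }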
-
The Error command defines a parameter that will be set
in case of an error condition. Under normal circumstances it will
keep the original value. If the automatic checks find out that
the value has changed, they assume an error condition was
encountered by the module under test. This means that the output and
input/output parameters need not have been set.
Note:
Forcing error conditions is a very valuable way to test
the module. Therefore the bare fact that the error condition has
occurred possibly means the test has succeeded. It is up to you to
determine whether that is indeed the case.
Error "parameter-name" type {
initialisation code
} {
check code
}
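For example, for the IERROR parameter of the template above, assuming
(hypothetically) that valid error codes are positive:
   Error "IERROR" "INTEGER*4" {
      IERROR = 0
   } {
      call test_failed( IERROR .LT. 0, "IERROR should be positive when set" )
   }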
-
With the Call command you specify a code fragment that
will execute the module with the correct sequence of actual
parameters and possibly other related code. As a golden rule:
keep it short and simple, preferably no more than the call itself.
It may be necessary to call some other module first or do other
preparations, but these should go into the Initialisation
fragment.
Call {
   code to call the module under test
}
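For example, for the template module above (the actual argument list
is an assumption):
   Call {
      call SOME_MODULE( ARRAY, NODATA, MAXDIM, IERROR )
   }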
-
The Condition command: For each condition in the module
under test, the input parameters must be set to values that make
the module go one way or the other. The fragments should ensure
the proper values for the (input) parameters, such that the
condition becomes true or false, just as the value represents.
Condition "description" value {
code to make the condition within the tested module true or
false
}
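For example, for the condition from the template above, both branches
might be forced like this (assuming MAXDIM may be set freely):
   Condition " NODATA .GT. MAXDIM " TRUE {
      MAXDIM = NODATA - 1
   }
   Condition " NODATA .GT. MAXDIM " FALSE {
      MAXDIM = NODATA + 1
   }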
-
With the Initialisation command you specify a code
fragment that will be executed before the actual call to the
module, but after the standard initialisation (the
initialisation of all parameters).
Initialisation {
   code to call before the module under test is called
}
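For example, if the module relies on some other routine having been
called first (init_tables is a hypothetical name):
   Initialisation {
      call init_tables
   }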
-
With the Preparation command you control the
initialisation of the test program as a whole. It will be
called once only.
Preparation {
   code to call before the start of the actual test suite
}
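For example, to write a header line to the log file once, using the
support routine test_print_text (see below):
   Preparation {
      call test_print_text( "Test suite for SOME_MODULE" )
   }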
-
The Declarations command allows the declaration
of additional variables. The code fragment is inserted in the
declarations part.
Declarations {
   declarations of additional variables and such
}
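For example, to declare a work variable for use in the other fragments
(assuming a Fortran 90 host):
   Declarations {
      integer :: i    ! work variable for loops in the code fragments
   }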
-
The Headers command inserts text before the actual
declarations for support of header files in C or use
statements in Fortran 90.
Headers {
   statements to precede any declarations
}
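For example, for a Fortran 90 host (the module name is hypothetical):
   Headers {
      use some_module_definitions
   }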
-
If there is a need for extra, specific test cases, use the
Testcase command. Each defines a new test case,
where the code fragment should set the input to the required
values.
Testcase {
   code making up a specific test case
} {
   check code
}
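For example, a boundary case for the template module above, assuming
that NODATA = 0 should raise the error flag:
   Testcase {
      NODATA = 0
   } {
      call test_failed( IERROR .EQ. 0, "expected an error for NODATA = 0" )
   }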
The scheme below shows how all these commands together make a complete
test program:
Main program:
   (Program header)
   Headers
   Declarations of input, output and other parameters
   Declarations
   Preparation
   Repeat for all test cases:
      Call initialisation
      Set up the conditions for the test case
      Call the module
      Call the check routine
      Report the conclusions
End of main program

Subroutine to initialise:
   Initialise all parameters
   Initialisation fragment

Subroutine to call the module:
   Call fragment

Subroutine to check the output:
   Repeat for each parameter:
      Generic check code
      Specific check code (if given)
   Report conclusion: test failed or not
The checking code and, for that matter, any other code you specify
may use the following support routines:
-
test_print_text( string )
This routine prints the string of text to the log file
-
test_failed( logical value, string )
Use this routine to indicate a failure (if the logical value or
expression is true). The string explains why the test has failed.
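For example, a check fragment in a Fortran 90 host might read (the
variable name, the expected value and the tolerance are assumptions):
   call test_print_text( "Checking RESULT against the expected value" )
   call test_failed( abs( RESULT - 1.0 ) .GT. 1.0e-6, &
      "RESULT deviates more than 1.0e-6 from the expected value" )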
The host language for the generated program is determined by all the
fragments that are defined. The script itself simply assembles all the
fragments in a fixed order, substitutes variable names and so on and
writes the result to the output file.
This means that the script is very flexible: as long as you can define
the proper code fragments, the host language can be any programming
language you like. There are several special features, however, that
make a language suitable for this use or not:
-
Overloading of functions can greatly simplify the checking: the
compiler can do most of the hard work of determining which
algorithm to use - comparing strings is different from comparing a
two-dimensional array of floating-point numbers.
-
To avoid long and varying lists of parameters, the use of global
variables is strongly recommended (but only for a test program
generator like this!).
-
Being able to query the size of arrays greatly simplifies
generating code as well.
These features are present in languages like Fortran 90, Java and, to
a lesser degree, C++, but not in languages like Fortran 77 and ANSI-C.
The current host language of choice is therefore Fortran 90, which has
the additional advantage that the scope of variables can be controlled
better than with a simple global scope. The feature of internal
subroutines makes the generated program a trifle more elegant.
The code fragments are, however, contained in a separate file so that it
is relatively easy to change the host language.
For the currently supported languages we have the following
special issues:
-
Fortran 90:
There is no (direct) support for allocatable types or for pointer
types, at least for Input/Output variables.
The file "test_facilities.f90 contains auxiliary routines and
must be linked with the program.
-
C:
Arrays must be defined like this (statically):
   Input "array" "int [10]"
or like this (actually a pointer):
   Input "array" "int *"
Use the (aliased) malloc() function to allocate memory for
dynamic arrays. (The actual routine that is called registers the
size of the array and then calls malloc().)
The files "test_fac.h" and "test_facilities.c" are part of the
program.
-
Java:
Arrays must be defined like this:
   Input "array" "int []"
Via the "new" operator you can actually allocate memory, using
standard Java idiom.
The auxiliary classes for Java are contained in "Testcmp.java"
and "Testcopy.java".
General remarks:
Complicated data structures (linked lists, for instance) cannot be
automatically copied and compared (or only with a very large effort).
This means that you will have to extend these classes and auxiliary
routines if such data structures are important.
The current implementation has a number of restrictions, but more
important are the restrictions on the method itself:
if the module contains one or more state variables, calling it twice
with the same input may not lead to the same behaviour and the same
outcome. For instance, at the first call, a file is opened and some
values are read. At each subsequent call more is read from the file.
If the operation of the module depends on features like this, you may
need a more clever approach to get the module into the right state again
for each test case.
Another problem is the presence of test paths that are not completely
controllable from outside via the input parameters or are not at all
selectable via the input parameters. For such paths, it is very
difficult to automatically set up the proper test input (it may depend
on all kinds of external resources, such as the presence of an input
file).
The possibility of defining new test cases manually is a workaround for
these problems, but not a complete solution.
Checking the results is another area that leaves room for improvement.
In many cases, this will have to be done by module-specific code, since
it is principally impossible to judge from the code alone what the
module should do.
The TestMake script is ready for use, but there remain a
number of things to do:
-
The code must be made more generic, so that the module
may determine the host language.
-
Several commands have not been implemented yet.
-
The generated test program should perhaps allow for a loop over
the test cases.
-
The handling of very long conditions is clumsy.
-
The relation to tools like the McCabe Toolset: generating the
required reports by batch commands.