Automated Test Procedure for LAS/ADAPS

This file describes the requirements and design for using runtest.csh to automate testing of individual LAS/ADAPS modules.
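The script is invoked with an application name and an optional list of test subdirectories (this usage line is taken from the Makefile fragment shown later in this document):

runtest.csh <application> [subdirectories]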

To use runtest.csh on an application, the following conditions must be met:

1.
A subdirectory named ``test'' must exist. All testing will take place within the test directory.
2.
Within the test subdirectory, a few things must be present. If the user is not testing subdirectories, a LAS PDF file named ``test.pdf'' must exist. If the user is testing subdirectories, each subdirectory must exist within the test subdirectory, and a LAS PDF file named ``test.pdf'' must exist in each one (see the layout sketch after this list). The script runtest.csh launches a LAS process that traverses each subdirectory (if specified) and interprets test.pdf. The file test.pdf must contain all operations to be performed as part of the test suite for the pertinent application.
3.
The script runtest.csh keys off of a file named ``results'' in the test or specified subdirectory. Upon completion, runtest.csh interprets the existence of results as an indication that the test suite failed in some manner. If results does not exist, runtest.csh assumes that all tests passed. Therefore, test.pdf must handle the generation of the ``results'' file when appropriate. It is a good idea to create an empty results file in the initial stages of test.pdf (see the example after this list) in case a faulty PDF crashes without returning to test.pdf for error handling.
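As an illustration of the two requirements above, a module tested with two subdirectories (the subdirectory names here are hypothetical) might be laid out as:

test/geom/test.pdf
test/radio/test.pdf

and each test.pdf might create the sentinel file as one of its first operations, for example:

ush touch results

so that results exists even if the procedure aborts before reaching its own error handling. A successful run then deletes the empty file as its final step, as shown in the success block later in this document.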

The personnel responsible for developer testing of the application are also responsible for creating and/or updating the automated test scenario described herein. Following is the ``intended use'' flow for applying runtest.csh to a LAS/ADAPS application:

1.
The Makefile associated with the application is modified by adding the following lines:
#----------------------------------
# Run the test suite, if available.
#----------------------------------
tests:
        @$(LASTOOLS)/runtest.csh <application> [subdirectories]
The target ``tests'' is used (rather than ``test'') so that make will not simply look at the date of the test subdirectory and report that the target ``test'' is up to date. The @ sign tells make not to echo the command line, which reduces log file clutter when the procedure is applied to all of LAS/ADAPS as an overall test. The <application> argument to runtest.csh is simply a string that is echoed during execution to denote which application is presently being tested. When testing a single application the argument serves no real purpose, but when testing a large set of applications it may be important to know which application generated certain outputs, and the <application> tag makes this clear.
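For example, a hypothetical module named concatm, tested in two subdirectories named geom and radio, would use:

tests:
        @$(LASTOOLS)/runtest.csh concatm geom radio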
2.
A subdirectory named ``test'' is created for holding all files related to the application test suite.
3.
Within the test subdirectory, several things must exist. If the user is not using subdirectories, a LAS PDF file named ``test.pdf'' must be created. This file should contain all of the test cases associated with the application. If the user is using subdirectories, these subdirectories must exist in the test subdirectory, and a ``test.pdf'' file must be created within each one as discussed above. Regression testing is a primary focus of the LAS testing procedure.
4.
A group of reference data files is part of the CVS repository for the application. These files are used to compare the results of each new LAS build against known-good output to ensure that nothing has changed. Ideally these files will be relatively small ASCII files that do not add too much bloat to the CVS repository. For image comparisons, the reference files may be small sample blocks (perhaps 100 by 100 or smaller) that have been extracted from images using the LIST command. Other reference files may be generated by LAS/ADAPS modules that support the ``PRINT'' parameter. Many LAS/ADAPS data files contain the names of images and other files. The names often include the directory path, which can cause problems when trying to compare results from one test run to another. To avoid this problem, the developer should replace the path with the generic character tag ``xxxxx'' in all of the reference files. The Perl script make_local.pl replaces the tag ``xxxxx'' with the current working directory so that the generated file names agree with the reference file names.
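For example, a line in a reference file checked into CVS might read (the file names here are illustrative):

Input image: xxxxx/case1.img

After make_local.pl is run from, say, /home/user/app/test, the local copy of the reference file would contain

Input image: /home/user/app/test/case1.img

which matches the path that the module writes during the test run, so the diff comes up clean.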
5.
One concern is the potential for accumulating a large number of test images that must be stored. When possible, generate any required input images using TESTGEN. For tests requiring specific test images, the LAS test data repository should be used. The test data directory is defined on both sg1 and sns1 via the environment variable $LASTESTDATA. Data in this directory should be accompanied by a text information file giving a brief description of the data. Before adding images to this repository, an attempt should be made to use existing images to minimize disk space usage.
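When an existing repository image is needed, test.pdf can copy it into the working directory before the tests run. A minimal sketch, assuming a hypothetical dataset named ``sample'' (the wildcard is intended to pick up companion files such as the image's DDR):

ush cp $LASTESTDATA/sample* .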
6.
If a test encounters a fatal error, flow will be transferred to an error handling block. A fatal error message will be written to the file ``results'' and processing will be terminated with error status. The following is an example test.pdf section:
let _ONFAIL(1) = "GOTO ERROUT"

.
.
.

ERROUT>

PUTMSG "********************* ERROR *******************" "module-test"
PUTMSG "Fatal error encountered.  Test terminated." "module-test"
PUTMSG "********************* ERROR *******************" "module-test"

ush echo 'Fatal error encountered.  Test terminated.' > results

RETURN -1
7.
If no fatal errors are encountered, all tests will be completed. After completion of all tests, test.pdf deletes all generated files that are not needed for comparison with the reference files in order to recover disk space. The generated comparison files are compared with the reference files (in test.pdf), with the differences redirected to the file ``diffs'' as follows:
ush echo 'diff localcase1_diff.prt case1_diff.prt' > diffs
ush diff localcase1_diff.prt case1_diff.prt >>& diffs
ush echo 'diff localcase2a_list.prt case2a_list.prt' >> diffs
ush diff localcase2a_list.prt case2a_list.prt >>& diffs
The diff command is echoed so that the reviewer has a prepared listing of all files that were compared. If a difference is encountered, it is obvious in which file it occurred.
8.
Since we want everything to boil down to a simple pass/fail for the entire test suite, an additional reference file is added to the CVS repository. This file is simply the list of diff commands for all tests:
diff localcase1_diff.prt case1_diff.prt
diff localcase2a_list.prt case2a_list.prt
This file is compared with the ``diffs'' file generated earlier, with the results redirected to the file ``results''. If any differences exist, flow is transferred to an error handling block and an error status is returned.
let _ONFAIL(1) = "GOTO DIFF_FAIL"
ush diff localdiffs diffs >& results

.
.
.

DIFF_FAIL>

PUTMSG "********************* ERROR ********************" "module-test"
PUTMSG "A difference was encountered.  See results file." "module-test"
PUTMSG "********************* ERROR ********************" "module-test"

RETURN -1
If no differences occur, the application has passed testing. The generated data files can be deleted to recover disk space and reduce clutter. The (empty) file ``results'' is deleted to signify that the test has passed. The success status is returned.
cmdel-f infile="case*;prt" conflg=no
ush rm diffs results

PUTMSG "******************** SUCCESS ******************" "module-test"
PUTMSG "Testing completed successfully." "module-test"
PUTMSG "******************** SUCCESS ******************" "module-test"

RETURN 1
In addition to the preceding operations, the following should also be considered:

9.
For more information on regression testing, see the regression_tests.txt file.



Lowell Johnson
2001-02-01

Modified by Gail Schmidt
2001-03-12