This file describes the requirements and design for using
runtest.csh
to automate testing of individual LAS/ADAPS
modules.
In order to use runtest.csh on an application, the following
scenario must exist: a test subdirectory must exist for the
application, and a LAS PDF file named ``test.pdf'' must exist in that
test subdirectory. If the user is testing subdirectories, then each
subdirectory must exist in the test subdirectory, and a LAS PDF file
named ``test.pdf'' must exist in each subdirectory. The script
runtest.csh will launch a LAS process that will traverse each
subdirectory (if specified) and interpret test.pdf. The file
test.pdf must contain all operations to be performed as part
of the test suite for the pertinent application.
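As an illustration (the application name ``module'' and the
subdirectory names ``case1'' and ``case2'' are hypothetical), the two
layouts look like:

   module/test/test.pdf              single test PDF for the application

      -- or, when test subdirectories are used --

   module/test/case1/test.pdf        one test PDF per test subdirectory
   module/test/case2/test.pdf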
runtest.csh
keys off of a file named
``results'' in the test or specified subdirectory. Upon
completion, runtest.csh
interprets the existence of
results as indication that the test suite failed in some manner. If
results
does not exist, runtest.csh
assumes that
all tests passed. Therefore, the PDF test.pdf
must handle the
generation of the file ``results'' if appropriate. It is probably
a good idea to create an empty results file in the initial stages of
test.pdf
in case a faulty PDF crashes without returning to
test.pdf
for error handling.
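As a minimal sketch of this precaution (comments use the TAE ``!''
convention; the exact placement within test.pdf is up to the test
author):

   ! Create an empty results file immediately, so that a crash in a
   ! faulty PDF still leaves evidence that the test suite did not finish.
   ush touch results

The success block at the end of the test (shown later) removes this
file, so it survives only if the test fails or aborts.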
The personnel responsible for developer testing of the application
are also responsible for creating and/or updating the automated
test scenario described herein. Following is the ``intended use''
flow for applying runtest.csh
to a LAS/ADAPS application:
The application makefile should contain a target similar to the following:

   #----------------------------------
   # Run the test suite, if available.
   #----------------------------------
   tests:
   	@$(LASTOOLS)/runtest.csh <application> [subdirectories]

The target ``tests'' is used (rather than ``test'') so that make will
not simply look at the date of the test subdirectory and say that the
target ``test'' is up to date. The ``@'' sign tells make not to echo
the command line. This will reduce the log file clutter when applied
to all of LAS/ADAPS as an overall test procedure.
The <application>
argument to runtest.csh
is simply a
string that is echoed during execution to denote which application
is presently being tested. For testing a single application, the
argument doesn't serve any real purpose. But when testing a large
set of applications, it may be important to know which application
generated certain outputs, and the <application>
tag will
make this clear.
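For example (the application name ``fft'' and the subdirectory names
are hypothetical, and it is assumed here that subdirectories are
passed as separate arguments), a populated makefile line might read:

   tests:
   	@$(LASTOOLS)/runtest.csh fft case1 case2

In the runtest.csh output, the string ``fft'' would then identify
which application produced any failure messages.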
In the test subdirectory, a LAS PDF file named ``test.pdf'' must be
created. This file should contain all of the test cases associated
with the application. If the user is using subdirectories, then these
subdirectories must exist in the test subdirectory. Within these
subdirectories, a ``test.pdf'' file must be created as discussed
above. Regression testing is a primary focus of the LAS testing
procedure.
make_local.pl
replaces the tag
``xxxxx'' with the current working directory so that the generated
file names will agree with the reference file names.
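As an illustration (the print-file line and the path shown are
entirely hypothetical, and the exact invocation of make_local.pl is
not specified here), a reference file line such as

   Input image: xxxxx/case1.img

would become

   Input image: /home/user/module/test/case1.img

after the substitution, allowing a straight diff against output
generated in that directory.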
Test data for the test cases should be kept in the test data
repository $LASTESTDATA. Data in this directory should be
accompanied by a text information file giving a brief description of
the data. Before adding additional images to this repository, an
attempt should be made to make use of existing images so as to
minimize the disk space usage.
The following error handling should be included in the test.pdf
section:
   let _ONFAIL(1) = "GOTO ERROUT"
      .
      .
      .
   ERROUT>
   PUTMSG "********************* ERROR *******************" "module-test"
   PUTMSG "Fatal error encountered. Test terminated." "module-test"
   PUTMSG "********************* ERROR *******************" "module-test"
   ush echo 'Fatal error encountered. Test terminated.' > results
   RETURN -1
test.pdf
will delete all
generated files not necessary for comparison with the reference
files in order to recover disk space.
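A sketch of such a cleanup step (the file names are hypothetical; any
files actually diffed against the references must of course be kept):

   ! Remove intermediate files that are not compared against reference files.
   ush rm case1.img case1_tmp.img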
The generated comparison files will be compared with the reference
files (in test.pdf) with the differences redirected to the file
``diffs'' as follows:
   ush echo 'diff localcase1_diff.prt case1_diff.prt' > diffs
   ush diff localcase1_diff.prt case1_diff.prt >>& diffs
   ush echo 'diff localcase2a_list.prt case2a_list.prt' >> diffs
   ush diff localcase2a_list.prt case2a_list.prt >>& diffs

The diff command is echoed so that the reviewer has a prepared
listing of all files that were compared. If a difference is
encountered, it is obvious in which file it occurred.
A reference file (``localdiffs'', as used in the comparison below)
containing the expected contents of ``diffs'' -- that is, just the
echoed diff commands -- must also exist:

   diff localcase1_diff.prt case1_diff.prt
   diff localcase2a_list.prt case2a_list.prt

This file is compared with the ``diffs'' file generated earlier, with
the results redirected to the file ``results''. If any differences
exist, flow is transferred to an error handling block and an error
status is returned.
   let _ONFAIL(1) = "GOTO DIFF_FAIL"
   ush diff localdiffs diffs >& results
      .
      .
      .
   DIFF_FAIL>
   PUTMSG "********************* ERROR ********************" "module-test"
   PUTMSG "A difference was encountered. See results file." "module-test"
   PUTMSG "********************* ERROR ********************" "module-test"
   RETURN -1

If no differences occur, the application has passed testing. The
generated data files can be deleted to recover disk space and reduce
clutter. The (empty) file ``results'' is deleted to signify that the
test has passed. The success status is returned.
   cmdel-f infile="case*;prt" conflg=no
   ush rm diffs results
   PUTMSG "******************** SUCCESS ******************" "module-test"
   PUTMSG "Testing completed successfully." "module-test"
   PUTMSG "******************** SUCCESS ******************" "module-test"
   RETURN 1

In addition to the preceding operations, the following should also be
considered:
Some applications generate .prt files that contain current dates and
times or other data that complicates the regression testing. In order
to handle such situations, the script diff_ignore_lines.pl has been
written. The script takes the same arguments as diff with a variable
number of additional arguments. These additional arguments are the
``diff strings to ignore''. For example, say a particular diff
operation goes as follows:
   % diff localcase1.prt case1.prt
   3c3
   < This is what we expected.
   ---
   > This is what we got.

Then we could run
   % diff_ignore_lines.pl localcase1.prt case1.prt 3c3

which would return a success status since we told the script to
ignore the one difference. If there were additional differences, we
could add those to the end of the argument list. Any differences
found that do not match an argument are output just as they would be
by a standard diff.
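For instance (again with hypothetical file names and diff hunks), two
separate differences could be ignored with:

   % diff_ignore_lines.pl localcase2.prt case2.prt 3c3 7c7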
Modified by Gail Schmidt