ctest currently has some deficiencies, such as a complete lack of
dependency handling (you must remember to run "make all" before every run of
ctest if anything has changed) and a lack of parallel test capability (it
runs the tests in sequential order, which uses only one processor on
multiprocessor machines).

Accordingly, I decided some time ago to migrate my already existing
testing framework (accessible with the new installed examples tree
build system) to the build tree while simultaneously adding new features to
that testing framework, which is now common between the build tree and
installed examples tree.

The testing team for the next release should be interested in this common
testing framework because of its advantages over the ctest approach. It has
complete dependency checking (only the minimum required targets are run),
and it takes advantage of multiple processors on those platforms where the
CMake generator can execute multiple targets in parallel.  GNU make on Linux
has this capability (the -j option), and I believe the OS X version of make
does as well.  Also, according to a quick Google search, some versions of
nmake (the Windows equivalent of Unix make) also have this capability.
Finally, the common testing framework also tests our interactive devices and
examples.  This capability should be possible for ctest as well, but
it has not been implemented there.
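For anyone unfamiliar with parallel make, here is a minimal sketch of the two
properties mentioned above (dependency checking plus concurrent execution of
independent targets).  The Makefile and file names are purely hypothetical,
not PLplot's actual build rules, and the sketch uses GNU make's .RECIPEPREFIX
feature (make >= 3.82) to avoid literal tabs in the recipe lines:

```shell
workdir=$(mktemp -d)
cd "$workdir"

# .RECIPEPREFIX lets us use '>' instead of a literal tab in recipes.
cat > Makefile <<'EOF'
.RECIPEPREFIX = >
all: out_a.txt out_b.txt

# out_a.txt and out_b.txt do not depend on each other, so "make -j2"
# is free to build them concurrently on separate processors.
out_a.txt:
> echo "result A" > out_a.txt

out_b.txt:
> echo "result B" > out_b.txt
EOF

make -j2 all        # both independent targets may run in parallel
make -j2 all        # dependency checking: nothing to do on a second run
```

The second invocation illustrates the contrast with ctest: because the
targets carry dependencies, an up-to-date tree results in no work at all.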

The common testing framework is still being developed, but the core of it works
as of revision 10435.  Here are some reasonably fair timing comparisons
between ctest and the common testing framework in the top of the build tree
(with BUILD_TEST=ON) after "make all" has been run.

softw...@raven> time ctest -R '(_c$|_cxx$|_f77$|_f95$|compare)'
Start processing tests
Test project /home/software/plplot_cvs/HEAD/build_dir
   1/  5 Testing examples_c                       Passed
   2/  5 Testing examples_cxx                     Passed
   3/  5 Testing examples_f77                     Passed
   4/  5 Testing examples_f95                     Passed
   5/  5 Testing examples_compare                 Passed

100% tests passed, 0 tests failed out of 5

real    0m44.180s
user    0m18.741s
sys     0m3.336s

softw...@raven> time (make -j4 test_c_psc test_cxx_psc \
test_f77_psc test_f95_psc > make_psc.out ; make test_some_diff_psc)
Scanning dependencies of target test_some_diff_psc
c++
   Missing examples            :
   Differing postscript output :
   Missing stdout              :
   Differing stdout            : 
f77
   Missing examples            :
   Differing postscript output :
   Missing stdout              :
   Differing stdout            : 
f95
   Missing examples            :
   Differing postscript output :
   Missing stdout              :
   Differing stdout            : 
Built target test_some_diff_psc

real    0m27.556s
user    0m21.445s
sys     0m5.096s

For this selection of languages, there is a factor of 1.6 improvement in
elapsed time due to the two CPUs on my platform.  Presumably the speed
advantage is not the expected factor of two because the target
test_some_diff_psc (which deliberately has no dependencies; see below) has
to be run sequentially after the other targets are finished.

Here is a list of targets that are now working.

softw...@raven> make help |grep 'test.*_psc'
... test_ada_psc
... test_all_diff_psc
... test_c_psc
... test_cxx_psc
... test_d_psc
... test_f77_psc
... test_f95_psc
... test_java_psc
... test_lua_psc
... test_ocaml_psc
... test_octave_psc
... test_pdl_psc
... test_python_psc
... test_some_diff_psc
... test_tcl_psc

From the example above, you should be able to figure out that
test_<language>_psc runs the <language> front end for the psc device for all
our standard examples, while test_some_diff_psc runs the script to compare
stdout and PostScript results for whatever test_<language>_psc targets have
been run before (just like "ctest -R compare").  In other words,
test_some_diff_psc (as its name suggests) has no dependencies and simply
compares whatever stdout and PostScript results have been produced by the
time the test_some_diff_psc target is run.

In contrast to test_some_diff_psc, test_all_diff_psc (as its name suggests)
has dependencies on all the test_<language>_psc targets, so normally it is
just run on its own as

make -j4 test_all_diff_psc >& make.out

to do comprehensive testing of all stdout and PostScript comparisons.  Note
that I have worked hard on the file dependencies over the last week, so they
are in good shape.  That means that after you make a source-tree change,
only the necessary individual components of the PLplot build and the
corresponding test_<language>_psc targets are re-run.  For this reason, the
above command should be convenient for quick build-tree testing of
development changes, although something like the timed make command I gave
above would also work well. Because the common testing framework has full
dependencies on all targets (with the deliberate exception of
test_some_diff_psc), it is not necessary or desirable to execute the initial
(and repeated) "make all" commands required in the ctest case.
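The dependency contrast between test_some_diff_psc and test_all_diff_psc can
be sketched with another toy Makefile.  All target and file names here are
hypothetical stand-ins, not the real PLplot rules, and again GNU make's
.RECIPEPREFIX is used in place of literal tabs:

```shell
workdir=$(mktemp -d)
cd "$workdir"
echo "source" > example.c   # stand-in for a source-tree file

cat > Makefile <<'EOF'
.RECIPEPREFIX = >

# Analogue of a test_<language>_psc target: depends on a source file,
# so it is re-run only when example.c (the "source tree") changes.
x01c.psc: example.c
> echo "postscript output" > x01c.psc

# Analogue of test_some_diff_psc: NO dependencies; it just compares
# whatever results happen to exist when it is invoked.
some_diff:
> ls *.psc > diff_report.txt

# Analogue of test_all_diff_psc: depends on the test targets, so
# running it first brings every result up to date.
all_diff: x01c.psc
> ls *.psc > diff_report.txt
EOF

make -j4 all_diff    # regenerates x01c.psc if needed, then compares
make some_diff       # compares only what has already been produced
```

The point of the sketch is that the dependency-free target is safe to run at
any time but guarantees nothing about freshness, while the fully dependent
target rebuilds exactly what is stale before comparing.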

Please give the above new targets a thorough testing on your platform of
choice (especially if you plan to help out with the testing of the next
release). Note that the same scripts used in the ctest case are used for
these targets, so hopefully there will be no problems on the non-Linux
platforms where these targets have not yet been tested.

Note that, by design of the common testing framework, the same targets
described above also work fine on Linux for the new installed examples build
system.
After

make -j4 install >& make_install.out

for the usual core build system, you run that new build system as follows
(from an initially empty build tree):

cmake $prefix/share/plplot5.9.5/examples >& cmake.out

(where $prefix is the installation prefix you specify with the
-DCMAKE_INSTALL_PREFIX=$prefix cmake option).  Then execute

make -j4 test_all_diff_psc >& make.out

(and other test_*_psc targets) as above.  Again, I expect the non-Linux
case to work well (because of prior ctests on all our platforms), but
that hypothesis still needs to be tested.

Finally, even though the above targets work fine, there is more I plan to do
over the next few days to finish off the common testing framework. In that
time I hope to complete the implementation of the test_noninteractive and
test_interactive targets. (These targets are mostly done but need some
debugging.) test_noninteractive will be more comprehensive than a run of
ctest with all tests enabled (because some of the file devices do not have
add_test implemented for the ctest system). test_interactive will give
interactive results for a selection of our examples for all interactive
devices. Targets to test individual interactive devices for a subset of our
examples will also be made available.  None of these interactive tests are
currently implemented with ctest.

Alan
__________________________
Alan W. Irwin

Astronomical research affiliation with Department of Physics and Astronomy,
University of Victoria (astrowww.phys.uvic.ca).

Programming affiliations with the FreeEOS equation-of-state implementation
for stellar interiors (freeeos.sf.net); PLplot scientific plotting software
package (plplot.org); the libLASi project (unifont.org/lasi); the Loads of
Linux Links project (loll.sf.net); and the Linux Brochure Project
(lbproject.sf.net).
__________________________

Linux-powered Science
__________________________

_______________________________________________
Plplot-devel mailing list
Plplot-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/plplot-devel
