Barry Smith <[email protected]> writes: > Ok, there are requirements sometimes for compiling examples as well, > so presumably you would need to list the requirements twice once for > the runexX but also for the exX: and your compile rules would need > to do the checking also ?
Compile rules inherit variables from run rules unless we make requirements private (or just assign requirements directly), which we should probably do because run targets are usually more restrictive. For example, we always want to compile KSP ex10.c, even though many of the run targets require various third-party packages.

Now I don't mind having compile targets for each example, but for automated testing, I would greatly prefer to build one executable per package. I pushed a branch 'jed/tap', which does this. You should be able to run "make -f gmakefile -j4 build-test". I'll rebase this branch before doing more work in it.

> Can we have “built in” rules for .F, .cxx, .cu so we don’t have to
> list require fc for each fortran example etc?

Yeah.

> Also what about a “built in” rule for mpiexec -n n > 1 so we don’t
> have to put a requirement of MPI for each parallel test?

I'm not wild about this. Instead of inlining complicated shell script and losing return codes, I'd rather have a runner script that runs the job, captures both stdout and stderr, checks for correctness, and logs results.

> This sounds much better than the current model, the less redundant
> REQUIRE we need to list the better.

Your idea of including test definitions in the source files is also viable.
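For the inheritance point above, here is a minimal sketch of why compile targets pick up run-target requirements and how `private` stops it. It assumes GNU make >= 3.82 (for `private` and `.RECIPEPREFIX`); the `REQUIRES` variable and the target names are illustrative, not PETSc's actual ones.

```shell
#!/bin/sh
# Target-specific variables flow down to prerequisites, so a compile
# target (ex10) inherits REQUIRES from its run target (runex10) unless
# the variable is marked 'private'. Names here are assumptions.
mk=$(mktemp)
cat > "$mk" <<'EOF'
.RECIPEPREFIX := >
# inherited: ex10 sees REQUIRES because it is a prerequisite of runex10
runex10: REQUIRES = hypre
runex10: ex10
> @echo "run: $(REQUIRES)"
ex10:
> @echo "compile: $(REQUIRES)"

# private: ex20 no longer inherits REQUIRES
runex20: private REQUIRES = hypre
runex20: ex20
> @echo "run: $(REQUIRES)"
ex20:
> @echo "compile: $(REQUIRES)"
EOF
with_default=$(make -s -f "$mk" runex10)
with_private=$(make -s -f "$mk" runex20)
printf '%s\n---\n%s\n' "$with_default" "$with_private"
rm -f "$mk"
```

With the plain target-specific variable, the compile step prints "compile: hypre"; with `private`, it prints an empty requirement list while the run step still sees "run: hypre".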
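The runner-script idea above could look something like the following sketch: run the job, capture stdout and stderr separately, preserve the exit status instead of losing it in an inlined pipeline, compare against expected output, and log one TAP-style pass/fail line (the "ok / not ok" format matching the 'jed/tap' branch name). The function name and file handling are assumptions, not the actual branch's code.

```shell
#!/bin/sh
# Hypothetical test runner: captures stdout/stderr, keeps the return
# code, diffs against an expected-output file, and logs the result.
run_test () {
    name=$1; expected=$2; shift 2
    out=$(mktemp); err=$(mktemp)
    "$@" > "$out" 2> "$err"
    status=$?                  # exit status preserved, not lost in a pipe
    if [ "$status" -eq 0 ] && diff -q "$expected" "$out" > /dev/null 2>&1; then
        echo "ok $name"
    else
        echo "not ok $name (exit $status)"
        sed 's/^/# stderr: /' "$err"    # TAP diagnostics as comments
    fi
    rm -f "$out" "$err"
}

# usage: compare a trivial command's output against an expected file
exp=$(mktemp); echo hello > "$exp"
run_test "echo-hello" "$exp" echo hello
rm -f "$exp"
```

For MPI tests the command passed to the runner would simply be `mpiexec -n <np> ./ex10 ...`, so no special-case rule for parallel runs is needed.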
