On 04/18/2017 07:43 PM, Jason Pyeron wrote:
Currently we are using this script to find files (line 48) under the scripts 
directory and add them to the coverage report.

After running the programs under test, the script is run:

$ ./tests/script/cover-missing.pl
load
ingest
17 known covered file(s) found
preprocess
132 uncovered file(s) found and hashed
process
run: 1492558405.0000.10744
saving

Here it found over 100 files without executions, so our tests are not very 
comprehensive...

First question, is the code below the "best" way to do this?

Second question, is this something that could be provided by the module via an API
or the command line? Something like: cover -know fileName . Maybe even support
recursively finding files if fileName is a directory.


Can I ask you to back up a minute and tell us how you "normally" test your code -- i.e., test it in the absence of coverage analysis?

The reason I ask is that, for better or worse, most of the Perl testing infrastructure is geared toward the testing of *functions* -- whether those are Perl built-ins or the functions and methods exported by libraries such as CPAN distributions. A typical test looks like this:

#####
use Test::More;
use Some::Module qw( myfunc );

my ($arg, $expect);
$arg = 42;
$expect = 84;
is(myfunc($arg), $expect, "myfunc(): Got expected return value");
#####

Testing of *programs* -- what I presume you have under your scripts/ directory -- is somewhat different. Testing a script from inside a Perl test script is just a special case of running any program from inside a Perl program. Typically, you have to use Perl's built-in system() function to call the program being tested along with any arguments thereto.

#####
my ($rv, @program_args);
$rv = system(qq|/path/to/tests/scripts/myprogram @program_args|);
is($rv, 0, "system call of myprogram exited successfully");
#####
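One caveat with that sketch: system() returns the raw wait status, not the program's exit code, so a test that cares about the actual exit code should decode $? accordingly. A minimal sketch (the path to 'myprogram' and its arguments stand in for whatever lives under your tests/scripts/):

#####
use strict;
use warnings;
use Test::More;

my @program_args = ();   # whatever arguments myprogram takes
my $rv = system(qq|/path/to/tests/scripts/myprogram @program_args|);

if ($rv == -1) {
    # system() could not even launch the program
    fail("myprogram could not be launched: $!");
}
elsif ($rv & 127) {
    # the child was killed by a signal
    fail(sprintf("myprogram died on signal %d", $rv & 127));
}
else {
    # the high byte of the wait status is the program's real exit code
    is($rv >> 8, 0, "myprogram exited with status 0");
}

done_testing();
#####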

You can get slightly more fancy by asking whether the program threw an exception or not.

#####
{
    local $@;
    $expect = "Division-by-zero forbidden";
    eval {
        $rv = system(qq|/path/to/tests/scripts/myprogram @program_args|);
    };
    like($@, qr/$expect/, "Got expected fatal error");
}
#####

And you can capture warnings, STDOUT and STDERR with CPAN libraries like Capture::Tiny, IO::CaptureOutput, etc.
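For instance, here is a hedged sketch using Capture::Tiny's capture() to grab both streams around a system() call; the expected STDERR text is purely illustrative:

#####
use strict;
use warnings;
use Test::More;
use Capture::Tiny qw( capture );

my @program_args = ();
my ($stdout, $stderr, $exit) = capture {
    system(qq|/path/to/tests/scripts/myprogram @program_args|);
};

# assert against what the program actually printed,
# not just its exit status
like($stderr, qr/Division-by-zero forbidden/,
    "myprogram reported the expected fatal error on STDERR");
isnt($exit >> 8, 0, "and it exited non-zero");

done_testing();
#####

Note that a die() inside the child process never reaches the parent's $@; the message lands on the child's STDERR, which is why capturing that stream is usually the right way to test a program's error reporting.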

But to get fancier, you have to establish some expectation as to how the world outside the program being tested will change after you've run 'myprogram' once with a certain set of arguments. For example, you have to assert that a new value will be stored in a database, and you then have to make a database call and run the return value of that call through Test::More's ok(), is(), or like(). (You can do at least 82% of anything you reasonably need with just those three functions.)
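Concretely, that kind of outside-world assertion might look like the sketch below. The DSN, the --id flag, and the table and column names are all invented for illustration; substitute whatever your program's documentation actually promises:

#####
use strict;
use warnings;
use Test::More;
use DBI;

# run the program under test first
# (the --id flag is hypothetical)
my $rv = system(qq|/path/to/tests/scripts/myprogram --id 42|);
is($rv, 0, "myprogram exited successfully");

# then verify its documented side effect in the database
# (DSN, credentials, and schema below are hypothetical)
my $dbh = DBI->connect("dbi:SQLite:dbname=test.db", "", "",
    { RaiseError => 1 });
my ($stored) = $dbh->selectrow_array(
    "SELECT value FROM widgets WHERE id = ?", undef, 42);
is($stored, "expected value", "myprogram stored the documented value");

$dbh->disconnect;
done_testing();
#####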

What you're really doing in this case is *testing the program's documented interface* -- its API. "'myprogram' is documented to store this value in the database. Irrespective of how 'myprogram' is written -- or even of what language it's written in -- does it actually store that value in the database?"

That's what has sometimes been characterized as "black-box testing." We can't see into the program; we limit our concerns to what we expect the program to do based on its published interface.

Coverage analysis, in contrast, is the archetypal case of "white-box testing." You, the developer, can see the code, and you want to make sure that your tests exercise every nook and cranny of it.

Given that distinction, in cases where I am writing a library and also writing an executable as a short-hand way of calling functions from that library, my approach has generally been to focus coverage analysis (via Devel::Cover) on the library (the .pm files) and to use system calls of the executable more sparingly, except insofar as the executable is asserted to make a change in the outside world.

Now, clearly, your mileage may vary. That's why I'm interested in knowing your concerns about doing coverage analysis on the programs under scripts/.

Thank you very much.
Jim Keenan
