> -----Original Message----- > From: James E Keenan > Sent: Tuesday, April 18, 2017 22:28 > > On 04/18/2017 07:43 PM, Jason Pyeron wrote: > > Currently we are using this script to find files (line 48) > under the scripts directory and add them to the coverage report. > > > > After running the programs under test, the script is run: > > > > $ ./tests/script/cover-missing.pl > > load > > ingest > > 17 known covered file(s) found > > preprocess > > 132 uncovered file(s) found and hashed > > process > > run: 1492558405.0000.10744 > > saving > > > > Here it found over 100 files without executions, so our > tests are not very comprehensive... > > > > First question, is the code below the "best" way to do this? > > > > Second question, is this something that could be provided > by the Module via API or command line? Something like: cover > -know fileName . Maybe even support if recursively find files > if fileName is a directory. > > > > Can I ask you to back up a minute and tell us how you "normally" test > your code -- i.e., test it in the absence of coverage analysis?
Sure. It is not tested. Your intentions are right, but not aligned to the practical issues. I am working on a massive update to a long running open source project. The code base does not have ANY tests when I started. I am changing much of the rendering engine, but preserving legacy functionality. To assure everyone, we are creating tests in parallel, not before. Code coverage will tell us how comprehensive our test are. Right now they suck. Nothing helps more than having a finidh line. This does that. <snip/> > > Testing of *programs* -- what I presume you have under your scripts/ > directory -- is somewhat different. Testing a script from > inside a Perl > test script is just a special case of running any program > from inside a What one has to understand is that the code was written to load perl files dynamically, not all branches load all files. Here in lies the rub. <snip/> > > Now, clearly, your mileage may vary. That's why I'm interested in > knowing your concerns about doing coverage analysis on the programs > under scripts/. I sort of mentioned this above. I am adding XML output to logwatch. Logwatch uses an execution model that loads perl files dynamically. Further we do not have anywhere sufficient test coverage at this point, but that can no longer be a blocker. The coverage reports should indicate zero coverage on files which have not been loaded / run. Having 78% of 2% of the files is not 78% it is closer to 1%. The more important issue is the future, when new log processors are added the code coverage in Jenkins will catch it by seeing a new file that is never called, reporting the code coverage falling below the threshold. -Jason