Re: Devel::Cover eval oddity
On Sun, Nov 05, 2006 at 10:18:19PM -0500, Christopher H. Laco wrote:

> Anyone have any ideas on this blip?
> http://handelframework.com/coverage/blib-lib-Handel-Base-pm.html line #171
> Lord knows, it doesn't really matter since that's the only piece left, but
> I'm kind of curious. This is under Devel::Cover 0.59 under 5.8.4, 5.8.6,
> and 5.8.8.

Hmmm. It's pretty hard to tell what's going on there, but let's have a go.

The statements on that line are the ones from "use Module" being expanded to "BEGIN { require Module; import Module LIST; }", and the subroutines are there because BEGIN blocks are subroutines. I count seven subroutines on that line, indicating that you used seven separate modules there. And there are 22 statements, which is three for each module plus one for the eval.

Looking at the figures for the statement coverage, starting from the first and taking every third figure, we have 1, 1, 1, 1, 1, 1, 441, 447. This tells me that the first six modules were used once each, and the seventh was used 441 times. The 447 at the end is the eval statement, which was naturally executed once for each use.

Interestingly, the last module, which was used 441 times, seems to have failed twice. You can see that from the figures of 439 for the next two statements. Similarly, the two uncovered lines you see are associated with the failure of the third module. But since this module was used only once, those statements never got executed and so show up as errors.

This can be verified by looking at the next lines, where you check for errors from the eval. Notice that the check is executed 447 times, that both the true and false branches are exercised, and that the statement reporting the eval failure is executed three times.

So, in order to get your 100% coverage (!), all you need to do is find out which of the modules is failing, and call it once in such a way that it doesn't fail.
Of course, you might not want to do that, in which case you can just say "good, I know what is going on here" and let it be. In that case you'd ideally want to mark the code as uncoverable, so that when you do get a real coverage error you can easily notice it. Unfortunately the code to do that is not finished yet, and the default HTML report doesn't support it anyway. You're welcome to try out what's there, if you like, but remember that it is still in development and there is no documentation for it yet, except for the code itself.

--
Paul Johnson - [EMAIL PROTECTED]
http://www.pjcj.net
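[Editorial note: the use-to-BEGIN expansion Paul describes can be sketched directly. This is a minimal, self-contained illustration, not Handel::Base's actual code; Some::Missing::Module is a deliberately nonexistent name used to trigger the failure path.]

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Per perlfunc, "use Module LIST" is exactly equivalent to this
# compile-time block, which is why Devel::Cover reports both a BEGIN
# subroutine and several statements on a single "use" line:
BEGIN {
    require POSIX;
    POSIX->import('floor');
}

# Wrapping the load in an eval, as the code under discussion does,
# turns a failed require into a trappable error rather than a fatal
# one, so the failure branch gets its own coverage counts:
my $ok = eval {
    require Some::Missing::Module;   # hypothetical, does not exist
    Some::Missing::Module->import();
    1;
};
my $err = $@;
print "load failed: $err" unless $ok;
```

Run it and the eval records a "Can't locate Some/Missing/Module.pm" error while the program continues normally, which is exactly the pattern that produces the covered true/false branches above.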
Re: CPANTS and META.yml
David Golden wrote:

> I have to second this. There really shouldn't be separate "conforms to
> 1.0" and "conforms to 1.2" metrics and so on. What happens as the spec
> evolves? Unless the spec is broken, encouraging compliance with the
> latest spec is just churn, and Kwalitee breaks if there's ever a change
> that isn't backwards compatible. The test should be whether the META.yml
> is well-formed -- meaning that it's valid according to the spec that it
> declares (or 1.0 otherwise).

And realistically, Ken, Adam and I (maintainers of the major install tools) really control most of the META.yml generation anyway. If we don't upgrade, you don't upgrade.
Re: CPANTS and META.yml
On 11/6/06, Michael G Schwern [EMAIL PROTECTED] wrote:

> And realistically, Ken, Adam and I (maintainers of the major install
> tools) really control most of the META.yml generation anyway. If we
> don't upgrade, you don't upgrade.

Well, that's not entirely true for things like no_index or the various resources keywords. I think only Module::Install offers no_index, and I don't know that any tool supports the new resources keywords. Within Module::Build, you have to use meta_add and/or meta_merge. Both of those make it very easy for users to create non-standard META.yml entries, even if they don't hand-roll a META.yml. Checking against the spec is a good Kwalitee metric to catch the cases where users have departed from it (e.g. no lowercase resources keywords except those the spec approves).

David
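[Editorial note: a minimal Build.PL sketch of the meta_add/meta_merge route David mentions. The distribution name and URLs are hypothetical; meta_add and meta_merge are real Module::Build parameters, and nothing in them validates keys against the META.yml spec, which is how non-standard entries leak in.]

```perl
# Build.PL -- minimal sketch; names and URLs are hypothetical.
use strict;
use warnings;
use Module::Build;

my $build = Module::Build->new(
    module_name => 'Acme::Example',
    license     => 'perl',
    meta_merge  => {
        resources => {
            homepage    => 'http://example.org/acme-example',
            # A custom key like this goes straight into META.yml;
            # the spec reserves all-lowercase keys for itself, so a
            # validator would flag a lowercase custom key here:
            MailingList => 'http://example.org/list',
        },
        no_index => { directory => ['examples'] },
    },
);
$build->create_build_script;
```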
ANNOUNCE: Crucible 1.7
Back in August I posted here about Crucible, a tool for kernel testing. We've completed a new release, version 1.7, available here:

http://prdownloads.sourceforge.net/crucible/crucible-1.7.tar.gz

I'm scheduled to present this at the November PDX.pm Perl Mongers meeting (http://portland.pm.org/kwiki/).

Crucible is able to watch various websites or source code systems for new code, pull it down, build it, and run a variety of tests on it. It can handle dependent libraries, boot different Linux kernels, and run network tests between two or more different machines.

One of the major changes in this release is a new installation system. This also includes a post-installation test ('make installcheck') to do a dry run of a new installation. (Of interest to this list: the test is written in bash but emits TAP-like output.)

Bryce
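[Editorial note: Crucible's actual installcheck is in bash; this Perl sketch just shows the TAP shape such a check emits -- a "1..N" plan line followed by one "ok"/"not ok" line per check. The two checks here are placeholders, not Crucible's own.]

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Collect results, then emit TAP: a plan line followed by one
# numbered "ok"/"not ok" line per check.
my @checks = (
    [ 1 + 1 == 2,  'arithmetic works'   ],
    [ $] >= 5.006, 'perl is new enough' ],
);
my @tap = ( '1..' . scalar @checks );
my $n = 0;
for my $check (@checks) {
    my ($pass, $name) = @$check;
    $n++;
    push @tap, ( $pass ? "ok $n - $name" : "not ok $n - $name" );
}
print "$_\n" for @tap;
```

Any TAP consumer (such as Test::Harness) can aggregate output of this shape, which is what makes a bash test that prints these lines play nicely with the Perl testing toolchain.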
Re: CPANTS and META.yml
* David Golden [EMAIL PROTECTED] [2006-11-06 05:40]:

> I have to second this. There really shouldn't be separate "conforms to
> 1.0" and "conforms to 1.2" metrics and so on. What happens as the spec
> evolves? Unless the spec is broken, encouraging compliance with the
> latest spec is just churn, and Kwalitee breaks if there's ever a change
> that isn't backwards compatible. The test should be whether the META.yml
> is well-formed -- meaning that it's valid according to the spec that it
> declares (or 1.0 otherwise).

Well, there’s the issue that some META.yml spec versions might be (considered) broken, in which case it might make sense to have the metric check conformance to one of the known-good spec versions. Much like the previously discussed “broken installer” metric.

But yeah, other than that, I agree: the metric should check that META.yml conforms to the spec it says it conforms to, and a metric that checks for conformance to the latest version should be a bonus, if it exists at all.

Regards,
--
Aristotle Pagaltzis // http://plasmasturm.org/