On Tue, 10 Jun 2008, Eric Roode wrote:
* Then there are these, which are probably valid metrics, but of
questionable utility:
buildtool_not_executable: Nearly everyone does "perl Makefile.PL"
or "perl Build.PL". Note that this does not specify a specific perl,
just the first one in the user's PATH. I'll bet that 95% of the rest,
who presumably do "./Makefile.PL" or "./Build.PL", have only one perl
installed anyhow. I agree with the reason for this test, but I
question whether it's really doing anyone any good.
I think this is just something that the community has (mostly) agreed on
as a good thing. Making it a kwalitee point just codifies it.
extracts_nicely: How much of a problem is this, really?
has_buildtool: How many CPAN modules does this even apply to?
has_working_buildtool: ditto
Again, see above. These points codify existing community standards. The
reason this is not much of a problem now in 2008 is probably because these
standards have become so ingrained, both in terms of spreading the word,
and in terms of making sure everyone is using standard tools that do the
right thing (using EUMM, MI, or MB instead of hand-rolled Makefiles, for
example).
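For concreteness, the standard tools really do boil down to a few lines; a
minimal Makefile.PL with EUMM looks roughly like this (My::Example and its
path are made up for illustration):

    use strict;
    use warnings;
    use ExtUtils::MakeMaker;

    # A minimal sketch; the dist and module name are hypothetical.
    WriteMakefile(
        NAME         => 'My::Example',
        VERSION_FROM => 'lib/My/Example.pm',
    );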
In the few cases where it isn't, having this codified may
provide some useful feedback for module authors or potential end users.
For example, if a module fails the extracts nicely test, I can pretty much
guess that either the module is ancient and unmaintained, or the author is
completely clueless.
Answering the question "how much of a problem is this" is pretty easy, since
cpants.perl.org can show you a list of dists failing each test.
* Then there are the following tests, which as far as I can tell are useless:
extractable: How many CPAN modules have EVER been released with a
weird packaging format? Is this addressing a real problem in the
world?
It seems not, since there are no dists failing this test.
has_humanreadable_license: I suppose there are some developers in
big companies somewhere who have to pore through distributions to find
licenses to see whether they're allowed to use the module... but is
this really a big problem?
Once again, this is something that the community has largely agreed is a
good thing, and so we should encourage module authors to do it.
has_readme: ditto
Personally, I think the README has long outlived its usefulness, but since MB
will generate one for me I don't care that much. But if no one is using
their README for anything useful, maybe it's time to kill this one.
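For what it's worth, the MB option I'm relying on is create_readme; a rough
sketch of a Build.PL using it (the dist name is made up):

    use strict;
    use warnings;
    use Module::Build;

    # create_readme asks Module::Build to generate README from the main
    # module's POD, so I never maintain the file by hand.
    Module::Build->new(
        module_name   => 'My::Example',
        license       => 'perl',
        create_readme => 1,
    )->create_build_script;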
* Finally, we come to the large body of misguided and unuseful tests:
has_test_pod: Useless. Why should every end-user have to test the
correctness of the module's POD? The original developer should test
it, and the kwalitee metric should test that the module's POD is
correct (no_pod_errors). Including it as part of the module's test
suite is useless.
You don't need to make these tests run for the end user to get the point.
All of my dists (should) ship with a pod.t and pod-coverage.t that only
run in "maintainer mode". My definition of this is unfortunately a bit
ad-hoc, but I know there's work to standardize this with an xt/ directory
or something, and I'm sure CPANTS will account for this.
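Something along these lines is what I mean; a rough sketch of a pod.t gated
on an environment variable (the AUTHOR_TESTING name is just one possible
convention, not something CPANTS requires):

    use strict;
    use warnings;
    use Test::More;

    # Only run in "maintainer mode"; ordinary installs skip this entirely.
    plan skip_all => 'author test; set AUTHOR_TESTING to run'
        unless $ENV{AUTHOR_TESTING};

    # Don't make Test::Pod a hard prerequisite for end users.
    eval 'use Test::Pod 1.00';
    plan skip_all => 'Test::Pod 1.00 required for testing POD' if $@;

    all_pod_files_ok();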
no_cpants_errors: Module author should not be dinged for bugs in
CPANTS testing.
I agree. It seems like failures here are more likely to be caused by errors in
the CPANTS code and its toolchain than by anything the author did, for example here -
http://cpants.perl.org/dist/errors/Devel-CoreStack
proper_libs: Misguided. Why should end-users care about the
structure of the module build tree? They shouldn't.
Again, community standards. Encouraging this standard makes it easier for
people who want to download a distro and make a patch for it.
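For reference, the layout these metrics nudge people toward is roughly this
(Foo-Bar is a made-up dist):

    Foo-Bar-0.01/
        Makefile.PL (or Build.PL)
        MANIFEST
        README
        lib/Foo/Bar.pm      # code under lib/, mirroring the package name
        t/basic.t           # tests under t/, run by the standard harness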
use_strict: Misguided. "use strict" is a valuable tool for
developers, but it is not necessary (or even desirable) in all
circumstances.
use_warnings: Misguided. Maybe my module doesn't need warnings.
Maybe I tested all relevant cases and got no warnings. Why should the
end-user's code need to load yet another module (even if small) just
so I can get one more kwalitee point?
Seriously, you tested all relevant cases and got no warnings? You must
have solved the halting problem or something ;)
Again, community standards.
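To illustrate the point (a contrived sketch, not anything from a real dist):
warnings tend to fire on exactly the code paths a hand-picked test suite misses.

    use strict;
    use warnings;

    sub greet {
        my ($name) = @_;
        # If no name is passed, this warns "Use of uninitialized value"
        # at runtime -- the sort of path that's easy to leave untested.
        return "Hello, $name!";
    }

    greet();    # the untested call that triggers the warning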
has_example: Possibly useful, but poorly implemented (or possibly
poorly documented). Most modules that include examples do so in an
"Examples" section of the POD, not in a separate file or directory.
The has_example documentation implies that it'll only be satisfied by
a separate file or directory.
I tend to agree.
has_tests_in_t_dir: Misguided. Why should end-users care about
the location of the tests, so long as they all get run? They
shouldn't.
See above about patching and community standards.
is_prereq: Awful. I write many of my modules without depending
on prerequisites; this reduces the load on end-users. I expect that
many other module authors do the same. Should I include other
authors' modules just to improve their kwalitee scores? More
importantly, why should Acme::FromVenus get a point for this test,
just because the author put up a dummy distro, Acme::FromMars, which
uses it? Some of my modules are very useful for end-users' code, not
so much for module developers' code. So I get dinged for this?
I think you're misinterpreting the meaning of the scores. You're not
"getting dinged". You _are_ rewarded if your code is considered good
enough to be required by something else. In other words, the point is a
reward, but the lack of a point is not a punishment.
-dave
/*==========================
VegGuide.Org
Your guide to all that's veg
==========================*/