On Sep 5, 2008, at 11:36, chromatic wrote:
> They are annoying, but I'm not sure it's my biggest complaint. There's
> also the arbitrariness of the upload/debug/revise cycle of trying to
> please a black box full of testers. I'm not willing to say that this
> is primarily the fault of CPAN Testers, but it does expose a lot of
> cracks in the CPAN plumbing.
Yes. Some easily accessed documentation of best practices would be
welcome, linked to from the PAUSE upload page. That would help. I
recently updated all of my modules to work on various platforms and to
specify the versions of Perl they require, and it was pretty annoying
to do the upload/debug/revise bit (though it was mainly Windows that
gave me the pain, not old versions of Perl).
> It's a little bit like trying to have a discussion with someone who's
> upset but won't tell you why, and you have to guess and hope you don't
> make things worse before you get a useful answer.
Yes, better diagnostics would be welcome, especially for those who,
like you with your UNIVERSAL:: modules, suffer from
action-at-a-distance failures.
Well, you can upload a dev version to CPAN and the testing bots will
test it, I believe. It'd be nice if there were a separate place to
upload code to be tested before you actually released it. That'd be
very handy indeed.
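As I understand PAUSE's convention, a version string containing an
underscore marks a developer release: CPAN Testers will still pick it
up and test it, but ordinary CPAN clients won't install it by default.
A minimal sketch of how such a version is distinguished:

```shell
# PAUSE treats any $VERSION containing "_" (e.g. 1.23_01) as a
# developer release; plain versions (e.g. 1.23) are stable releases.
V="1.23_01"
case "$V" in
  *_*) echo "developer release" ;;
  *)   echo "stable release" ;;
esac
```

So bumping 1.23 to 1.23_01 before uploading gets you tester feedback
without the release being offered to regular installers.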
> Even being able to identify from a distribution which CPAN Testers
> platforms will even try to run tests would help. (Oh dear, this'll
> get all of those 5.005 boxes running my code.)
> I do like how CPANTS lists the original dozen or so Kwalitee metrics
> and their solutions on the individual distribution Kwalitee pages.
Yeah, it's a bit more advanced that way, in that it has specific
metrics, whereas CPAN Testers just checks "does it build?" and "do the
tests pass?". It's the former that seems to cause the most aggravation,
as there are many reasons a build could fail and it's difficult to tell
which reason or reasons are the underlying cause.
Best,
David