On Dec 18, 2007, at 8:13 PM, Andy Armstrong wrote:
> On 19 Dec 2007, at 02:05, chromatic wrote:
>>> Sure - but I'd have expected that to be perceived as a specific
>>> problem in an otherwise valuable system. It's not a rational
>>> reason to write off automated testing as a whole, surely?
>>
>> That depends on the ratio of useless to useful results.
> Presumably the false negative rate achieved by the best modules is
> a measure of how noisy the smoking system is. Given that the
> cleanest modules regularly get a <1% FAIL rate over many tens of
> reports, it's not a huge reach to suggest that any module should be
> able to get close to that. So I'd expect an author who was seeing a
> lot of failures to look around on CPAN a little, observe that other
> people's tests are doing better than theirs, and then maybe wonder
> whether it's their own code that is at fault.
Does anyone know how the false negative rates compare for cpan-tester
smokers vs. CPAN::Reporter users? I've found the former to be
enormously valuable for cross-platform testing (especially David
Cantrell and Slaven Rezic), but I have seen very little feedback via
the latter at all.
Chris