On Tuesday 02 September 2008 11:01:44 David Golden wrote:

> > You encourage what you measure,
>
> In theory, yes.  In practice, that hasn't been the experience to date.
> Testers over 70K:
>
> 1  587018  Chris Williams (BINGOS)
> 2  318527  Andreas J. König (ANDK)
> 3  188392  David Golden (DAGOLDEN)
> 4  151457  David Cantrell (DCANTRELL)
> 5  148505  Slaven Rezić (SREZIC)
> 6   73425  Jost Krieger (JOST)
> 7   73104  Yi Ma Mao (IMACAT)
>
> Do you think this group couldn't game the stats if all they wanted was
> a high score?  Being snide about people's volunteer efforts isn't
> particularly constructive.

Someone in that top seven has sent plenty of useless reports.  ("Hi, I'm
from CPAN Testers!  I have my client configured not to install required
dependencies!  Your distribution doesn't work!  Hope that helps!")

> If you think that people should be rewarded (acknowledged?) for
> "useful" reports, start defining "useful" and the heuristics you'd use
> to identify them.

* Does the report identify an actual failure for the common use case of
  CPAN installation, or does it identify a failure in configuring the
  CPAN Testers client?

* Does the report identify a known failure already reported elsewhere
  with the same characteristics?

* Does the report identify a success on a previously unknown
  platform/configuration combination?

* Does the platform combination include a supported version of Perl?

My criteria for usefulness suggest answers of "Yes.  No.  No.  Yes.
Yes."  I realize that the third question is more difficult to answer in
the presence of XS components, but most of the distributions on the
CPAN are pure Perl.

-- c
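To make the criteria concrete, here is a minimal sketch of how such a
usefulness filter might look.  This is not any real CPAN Testers API;
the function name, the report fields, and the version cutoff are all
invented for illustration (Python used here purely as pseudocode-that-runs):

```python
# Hypothetical sketch of the four usefulness criteria above.
# All field names (result, client_misconfigured, signature, os,
# perl_version, config) are assumptions, not a real report schema.

def is_useful_report(report, known_failures, known_platforms):
    """Return True if a test report (a plain dict) adds information."""
    # 1. A FAIL must reflect the common CPAN-install use case,
    #    not a misconfigured testing client.
    if report["result"] == "FAIL" and report["client_misconfigured"]:
        return False

    # 2. A FAIL already reported elsewhere with the same
    #    characteristics adds nothing new.
    if report["result"] == "FAIL" and report["signature"] in known_failures:
        return False

    # 3. A PASS is useful only on a previously unknown
    #    platform/configuration combination.
    platform = (report["os"], report["perl_version"], report["config"])
    if report["result"] == "PASS" and platform in known_platforms:
        return False

    # 4. Reports against unsupported Perls help no one.
    #    (5, 8) is an assumed support cutoff, not a project policy.
    if report["perl_version"] < (5, 8):
        return False

    return True
```

A report from a client that refuses to install dependencies would be
rejected by the first check, regardless of how many such reports the
tester submits.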