On Wed, 18 Nov 2009 14:33:27 -0600, Robert Kern <[email protected]> wrote:
> Personally, I don't want to see any aggregates of incommensurable observations
> ever. I don't mind seeing a dashboard of individual observations (even if I
> disagree with many of the individual measurements), but aggregating them with
> arbitrary weights into a single score is simply wrong. I disagree with including
> user ratings, too, for much the same reasons.
I'm not sure if CPANTS displays its findings/ratings to package users on CPAN either. I think you have to navigate to a separate site to see the grade.

The purpose of testing packages isn't to warn users off a package because, say, it has no docstrings. It's about taking their package, running the internal test suite on a number of different platforms (Windows, Linux, Mac), and checking that it installs properly with distribute/setuptools/distutils/pip. After that, it's about probing it and putting some numbers (ratings) on what is and isn't done: documentation, tests, pylint, pep8.

Any new package writer would expect to submit a package and get a rating in the C or D range (if graded with letters). With some extra polishing, you'd expect them to be interested in moving their package up into the A or B range. I can't see why it would be so wrong to give them tools that let them do that. Otherwise there's no incentive to try to make things good, because it looks like nobody cares.

Ratings that help a package developer identify weaknesses are a good thing, both for the developer and for the Python community at large.

David
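
A rough sketch of the kind of per-package probe described above: install the package into a scratch virtualenv, run its test suite, run the style checkers, and report each observation separately. This is illustrative only, not any existing PyPI or CPANTS machinery; it assumes a POSIX virtualenv layout, that pip/pytest/pylint/pycodestyle are available to install, and that the distribution name matches the importable module name. "examplepkg" is a placeholder.

import subprocess
import sys
import tempfile
import venv


def run(cmd):
    """Run a command, returning (exit code, combined stdout/stderr)."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, proc.stdout + proc.stderr


def probe(package):
    """Collect individual pass/fail observations for one package."""
    observations = {}
    with tempfile.TemporaryDirectory() as envdir:
        venv.create(envdir, with_pip=True)
        py = f"{envdir}/bin/python"  # assumes a POSIX venv layout

        # 1. Does it install cleanly?
        code, _ = run([py, "-m", "pip", "install", package])
        observations["installs"] = (code == 0)

        # 2. Does its test suite pass?  (Assumes pytest-collectable tests
        #    shipped inside the installed package.)
        run([py, "-m", "pip", "install", "pytest", "pylint", "pycodestyle"])
        code, _ = run([py, "-m", "pytest", "--pyargs", package])
        observations["tests_pass"] = (code == 0)

        # 3. Static/style checks.  pylint accepts a module name directly;
        #    pycodestyle needs a filesystem path, so resolve one first
        #    (assumes the import name matches the package name).
        code, _ = run([py, "-m", "pylint", package])
        observations["pylint_clean"] = (code == 0)
        path_proc = subprocess.run(
            [py, "-c", f"import {package}, os; "
                       f"print(os.path.dirname({package}.__file__))"],
            capture_output=True, text=True)
        if path_proc.returncode == 0:
            code, _ = run([py, "-m", "pycodestyle", path_proc.stdout.strip()])
            observations["pep8_clean"] = (code == 0)
    return observations


if __name__ == "__main__":
    pkg = sys.argv[1] if len(sys.argv) > 1 else "examplepkg"  # placeholder name
    for check, ok in probe(pkg).items():
        print(f"{check}: {'PASS' if ok else 'FAIL'}")

Each check stays a separate observation, so the same data could drive either a per-check dashboard or, if a site chose to offer one, a letter grade layered on top.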
