Meaningful quality measurement of software projects is a hard problem. People 
have written theses, books and sophisticated tools about it. We should leave it 
to those people and to the developers who know what they need and choose 
packages accordingly. A few observations from my experience with 
useless-to-dangerous pseudo-objective quality metrics in software projects:

**Comments**: Heavy code comments tend to go out of sync with the actual code, 
and wrong comments can be worse than none. Nim code can very often speak for 
itself: the language is concise and mostly clutter-free, so with meaningful 
identifiers you won't need many code comments. Doc comments for the public API 
of a module are of course mandatory, except in obvious cases.
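A minimal sketch of what this looks like in practice (`medianOfSorted` is a hypothetical proc, not from any real module): the identifiers carry the intent, and the doc comment states only the public contract rather than narrating the body.

```nim
## Hypothetical module sketch: meaningful names make the body
## self-explanatory; the doc comment covers only the public contract.

proc medianOfSorted*(values: openArray[float]): float =
  ## Returns the median of `values`, which must already be sorted
  ## in ascending order. Raises `IndexDefect` on an empty input.
  let mid = values.len div 2
  if values.len mod 2 == 1:
    values[mid]
  else:
    (values[mid - 1] + values[mid]) / 2.0
```

Run through `nim doc`, the `##` comment becomes the generated API documentation, so it pays for its maintenance in a way an inline comment on every line would not.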

**"Well maintained" projects**: Metrics can include code frequency, number of 
contributors per LOC, age distribution of PRs, opened bug-level issues minus 
closed ones over time, and others. Which ones matter varies by developer 
preference and intended application: for a lib with very specialized 
functionality, a "dead" project with low activity may just mean that the code 
is rock solid and needs no more work. In a project implementing sophisticated 
scientific calculations, a single contributor doesn't have to be a bad sign. In 
a high-profile open source project, letting PRs submitted to beef up someone's 
resume slowly die is probably not a bad thing. It depends.

**Tests**: While having no tests is often a bad sign, their existence is not 
necessarily a good one. I've seen scores of tests that exist just to look good 
when they go green. Test coverage and orthogonality are better criteria, but 
again: these are hard to measure, and there are tools for that.
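To illustrate the difference (with a hypothetical `clamp01` proc, invented for this sketch): the first assertion goes green for almost any implementation, while the boundary checks are the ones that would actually catch a bug.

```nim
## Hypothetical example: a test that merely looks good vs. tests
## that probe the boundaries where bugs actually hide.

proc clamp01(x: float): float =
  ## Clamps `x` into the closed interval [0.0, 1.0].
  max(0.0, min(1.0, x))

# Dashboard test: passes even for many broken implementations.
assert clamp01(0.5) == 0.5

# Boundary tests: these would catch a swapped min/max
# or a mistyped bound.
assert clamp01(-2.0) == 0.0
assert clamp01(3.0) == 1.0
assert clamp01(1.0) == 1.0
```

A coverage tool would rate both kinds of test the same, which is exactly why raw coverage numbers mislead.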
