I did not want this work to be polemical; I've already described my feelings 
in a [previous thread](https://forum.nim-lang.org/t/5092). Let me explain my 
personal motivation for this work.

When I'm looking for a Nim package to use in a project, I turn to the 
`nimble search` command. But since a package description is a one-liner, I 
frequently don't have enough information to determine whether the package is 
useful or not. With the `--full` option, I offer package authors the 
possibility to describe the main features of their package more extensively. 
This extended description can be displayed with `nimble list --full` or used 
in searches with `nimble search --full`.
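To make the workflow concrete, here is a sketch of how these proposed flags would be used (the exact output format is up to the implementation; these commands assume a nimble build that includes the patch):

```shell
# List all packages, showing their extended descriptions
# (proposed --full flag, not in stock nimble)
nimble list --full

# Search names, one-line descriptions AND extended descriptions
nimble search --full webserver
```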

You could argue that I could use a Google search or a GitHub search to find 
out more about Nim packages, but why resort to external tools when we have a 
package manager?

Then, once I've found candidate packages for inclusion in my project, I need 
to evaluate which one is the most _future-proof_. This evaluation is highly 
subjective, as one sometimes prefers to favour performance over code clarity, 
for instance, but as a first hint I would like a rating on which I could base 
a quick decision, in order to select the packages I want to evaluate first. I 
decided to create a maturity indicator ranging from 0 (a package that is no 
longer maintained) to 4 (a low-risk package: well documented and tested, with 
an active community, etc.). You can use `nimble search --mat=level keyword` 
or `nimble list --mat=level` to find or list packages with a rating at or 
above the level given as parameter.

Now, how is that maturity level calculated? Since the Nimble package 
repository is a single static `packages.json` file, maintained manually, I 
couldn't use automatically computed ratings. As a first attempt, I decided to 
create three subjective ratings, `code_quality`, `doc_quality` and 
`project_quality`, and to calculate the resulting maturity with a magic 
secret formula. I spent many hours browsing the websites of the more than 
1000 packages to set these three (again, admittedly subjective) values for 
each package, in order to have starting maturity values.

Regarding the categories, I wanted to add fixed attributes that can be used 
to filter packages. I'd rather select a pure Nim package than one based on an 
external library, for instance, and that's the reason why some packages are 
categorized `FFI` (though the label marks pure packages rather than ones 
using FFI...). I selected a basic set of categories inspired by Debian's, and 
discovered later on that it's not the best set for the job, but it was too 
late and I did not want to start over... For the moment, the package 
categories are not used by Nimble. I hope they can be useful.
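To illustrate, here is a hypothetical sketch of what a `packages.json` entry could look like with the extended metadata described above. The field names (`description_full`, `categories`, the three quality ratings and `maturity`) are illustrative assumptions about the schema, not its final form; the standard fields match the existing `packages.json` layout:

```json
{
  "name": "examplepkg",
  "url": "https://github.com/someuser/examplepkg",
  "method": "git",
  "tags": ["web", "http"],
  "description": "A one-line description shown by plain nimble search.",
  "description_full": "A longer description of the main features, displayed by nimble list --full and searched by nimble search --full.",
  "categories": ["Network"],
  "code_quality": 3,
  "doc_quality": 2,
  "project_quality": 3,
  "maturity": 2,
  "license": "MIT",
  "web": "https://github.com/someuser/examplepkg"
}
```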

I must say that I've seen throwaway projects just as I've seen many gems. 
Sometimes I could not give a high score to a very valuable project because it 
did not fit into the scoring grid I had set when I started the work. As 
@andrea said, if some code did not contain a single comment, I had to rate it 
as not easily accessible, and its rating was lowered. I certainly made many 
errors, spending an average of 5 minutes per package. If I missed tests and 
examples, the package's rating should be corrected and increased.

That's the reason why I appeal to package authors to correct these errors. 
The Google Sheet can be edited by several people simultaneously, which I 
can't do with the JSON file. **Do it for your packages or the packages you 
know well!** Try to respect the evaluation grid so that these maturity 
indicators are fair relative to other packages. This Google Sheet is only a 
temporary tool, easier to use than a JSON editor.

What is the plan?

  * First, complete the `packages.json` file and see if the features I added 
to Nimble interest the community. Hopefully by the end of the month.
  * Then, update the [Awesome 
Nim](https://github.com/VPashkov/awesome-nim) page with the best projects 
I've seen. I think some of these projects deserve better visibility. Think of 
"Nim distributions" of important Nimble packages...
  * As we all agree, a better way of maintaining a package repository is not 
a static JSON file but a database. A long-term goal would be to have better 
evaluation tools than manual ratings and better maturity variables than the 
three I started with. During this work, I discovered that federico3's 
[Nimble directory](https://nimble.directory/) has many more capabilities than 
I thought, and it could be used as a base for automatic evaluation and 
package metadata maintenance. Adding @Libman's metadata attributes and 
maintaining them should be a breeze then. To be explored...

