Hi,

To improve the workflow, it is important to collect data about the process. Even though Python doesn't have any formal political system (democracy, meritocracy, or, well, dictatorship?), it would be nice to estimate how many people are really involved in the process of language evolution.
For example, the newly accepted PEP 440 is authored by two people (as documented in the PEP), and they are listed in the credits. They definitely put a lot of work into making it happen. Other people had fewer chances to monitor the progress, and in my opinion that is why the handling of semantic versioning was not given enough attention: the PEP proposes converting versions instead of specifying a mechanism for packages to declare an alternative versioning scheme:
http://legacy.python.org/dev/peps/pep-0440/#semantic-versioning

But my opinion is just an opinion, and it will stay that way until it is backed up by data that gives my words some weight. That would become possible if PEP pages allowed people with registered accounts to provide structured feedback on a PEP in the form of three metrics plus an overall verdict (see the rough sketch in the P.S. below):

  % of text read - 0..100% (approximate)
  Text clarity   - 0..5 (how hard the text is to understand)
  PEP coverage   - 0..5 (how well the PEP covers your case)
  Accept / Reject / Clarify / meh
  Optional reason/comment

--
anatoly t.
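P.S. A rough sketch, in plain Python, of what one such feedback record could look like. The field names, the Verdict enum and the PEPFeedback class are illustrative assumptions only, not an existing API.

from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Verdict(Enum):
    # overall verdict on the PEP, as proposed above
    ACCEPT = "accept"
    REJECT = "reject"
    CLARIFY = "clarify"
    MEH = "meh"


@dataclass
class PEPFeedback:
    # hypothetical structured feedback record for one PEP from one account
    pep: int                      # PEP number, e.g. 440
    user: str                     # registered account name
    percent_read: int             # 0..100, approximate % of text read
    text_clarity: int             # 0..5, how hard the text is to understand
    coverage: int                 # 0..5, how well the PEP covers your case
    verdict: Verdict
    comment: Optional[str] = None # optional reason/comment

    def __post_init__(self):
        # keep the metrics inside their declared ranges
        if not 0 <= self.percent_read <= 100:
            raise ValueError("percent_read must be within 0..100")
        if not 0 <= self.text_clarity <= 5:
            raise ValueError("text_clarity must be within 0..5")
        if not 0 <= self.coverage <= 5:
            raise ValueError("coverage must be within 0..5")


# example record
fb = PEPFeedback(pep=440, user="someuser", percent_read=80,
                 text_clarity=3, coverage=2, verdict=Verdict.CLARIFY,
                 comment="semantic versioning handling")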
