It's really great to see data being gathered on the impact of changes.

As we've already seen in this thread, there are many suggestions for
gathering more data and for enhancing the methodology -- and those
suggestions are welcome -- but simply having a means to gather some
important data is an excellent step in itself.

Anecdotes and people's mental models of the Python ecosystem are certainly
valuable, but by themselves they don't give us a way to improve our shared
understanding of the consequences of particular changes.

As with unit tests and static analysis, we shouldn't expect such data
gathering to prove conclusively that a change is safe to make, but having
*some* quantitative data, along with the expectation that we pay attention
to it, is definitely a big step forward.

- Simon
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/QJJRIQXDJKFIR5YWJ2YLKEAYENWA35TP/
Code of Conduct: http://python.org/psf/codeofconduct/