On Thu, 19 Feb 2026, at 1:45 PM, Charles R Harris via NumPy-Discussion wrote:
> +1. The interaction on that PR as a whole struck me as harsh, verging on rude.
It certainly shows the need to develop a unified policy sooner rather than
later! *sigh*
I did want to respond to this exchange from the PR:
>> Therefore, if any of the code included in the PR was generated by AI,
> That is an extreme position, if we do that we will end up with no maintainers
> because everyone coming up will be using AI.
I want to push back specifically on this point because it is not a good basis
from which to determine policy. We can, and I would argue we *must*, play a
part in determining whether "everyone" will be using AI. I'll lift this quote
(via Juan Luis Cano Rodriguez [1]) from "Resisting Enchantment and Determinism:
How to critically engage with AI university guidelines" [2]:
> Enchanted by determinism, some see the adoption and use of generative AI in
> education as inexorable as the effects of the laws of physics. This
> perspective nudges us towards helplessness and acceptance: no one can change
> gravity. Besides, it would be absurd to ask whether gravity is good or if we
> want it.
> [...]
> *We must not be persuaded by the false premise of a human-made artefact being
> inevitable.*
Juan.
[1]: https://astrojuanlu.leaflet.pub/3meyu6zht2c2q
[2]: https://zenodo.org/records/18282338