Hi,

On Thu, Feb 19, 2026 at 4:42 AM Juan Nunez-Iglesias <[email protected]> wrote:
>
> On Thu, 19 Feb 2026, at 1:45 PM, Charles R Harris via NumPy-Discussion wrote:
>
> +1. The interaction on that PR as a whole struck me as harsh, verging on rude.
>
>
> It certainly shows the need for developing a unified policy sooner rather 
> than later! *sigh*
>
> I did want to push back on this statement from the PR:
>
> Therefore, if any of the code included in the PR was generated by AI,
>
> That is an extreme position; if we do that, we will end up with no
> maintainers, because everyone coming up will be using AI.
>
>
> I want to push back specifically on this point because it is not a good basis 
> from which to determine policy. We can, and I would argue we must, play a 
> part in whether "everyone" will be using AI. I'll lift this quote (via Juan 
> Luis Cano Rodriguez[1]) from "Resisting Enchantment and Determinism: How to 
> critically engage with AI university guidelines" [2]:
>
> Enchanted by determinism, some see the adoption and use of generative AI in 
> education as inexorable as the effects of the laws of physics. This 
> perspective nudges us towards helplessness and acceptance: no one can change 
> gravity. Besides, it would be absurd to ask whether gravity is good or if we 
> want it.
> [...]
> We must not be persuaded by the false premise of a human-made artefact being 
> inevitable.

I agree - and it's a point well made by your quote, and by others,
that "accept the inevitable" is a poor argument, one easily deployed
by people who are trying to sell you something.

You may have seen Linus Torvalds' constant complaints about the hype
surrounding AI.

In this case, what worries me is that we may be drifting into
accepting the idea that it is inevitable that we developers will
switch from mainly writing code ourselves to mainly asking AI to
write it for us. I don't think that's inevitable, and neither,
apparently, does Torvalds (see quotes above). I suspect that if we do
go in that direction, we will find (per another quote above) that our
skills in writing code will start to atrophy, and with them our
learning and our skills in reviewing code. See the Anthropic study
quoted above, and the references therein, for more on AI generation
and learning deficits. In other words: more code, more subtle bugs,
fewer developers who understand the code base, and fewer developers
coming to the project with enough training to read and review code.

But luckily, all hype aside, we have plenty of time to take this
slowly and see how it develops. There's no plausible world in which
NumPy suffers significantly from taking a measured approach to
AI-generated code over the next few years. There are various
plausible worlds in which it suffers from being too credulous about
AI code quality, its ability to train developers, or its tendency to
generate code that is encumbered by copyright.

Cheers,

Matthew