On Mon, Feb 9, 2026 at 11:29 PM Stefan van der Walt via NumPy-Discussion <
[email protected]> wrote:

> On Mon, Feb 9, 2026, at 13:58, Ralf Gommers via NumPy-Discussion wrote:
>
>
>
> On Mon, Feb 9, 2026 at 6:23 PM Matthew Brett via NumPy-Discussion <
> [email protected]> wrote:
>
> I think it's correct that it's not sensible for policies to reflect
> things like dislike of AI's energy use or the environmental effects of AI
> data centers. However, it seems obvious to me that it is sensible for
> policies to take into account the effect of AI on learning.
>
>
> Why would that be obvious? It seems incredibly presumptuous to decide for
> other people what methods or tools they are or aren't allowed to use for
> learning. We're not running a high school or university here.
>
>
> The way I read Matthew's comment is not that we should prescribe how
> people use their tools, but that we should be aware of the risks we are
> facing,
>

This part is fine in the abstract, but that's also true for the
environmental and societal impacts.


> and also communicate those risks to contributors who want to use AI tools
> to do NumPy development.
>

This doesn't necessarily make sense to me. If I try to figure out what all
the hand-waving means concretely - i.e., "where would we want to
communicate such possible risks?" - I think my answer is: probably nowhere.
It doesn't quite fit in a policy on AI tool usage, which I'd hope would be
short and to the point. And I don't think we want anything in the
contributor guide at this point around AI tools for contributions, except
for a pointer to the policy.

The conversation here is a bit too abstract for me, and mostly arguing
against a straw man. Clearly, if you outsource most of your thinking to a
machine and do less thinking yourself, you learn less. If you use the tools
deliberately, that won't happen (for one of many ways of doing that, see
this blog post, which references that Anthropic paper:
https://mitchellh.com/writing/my-ai-adoption-journey). Yes, as an
individual using the tool you need to think about this, as is the case for
any tool and way of working.

If there is a concrete idea/proposal for a docs section, policy content, or
anything like that, please clarify.

Cheers,
Ralf


> This also presumes that you, or we, are able to determine what usage of AI
> tools helps or hinders learning. That is not possible at the level of
> individuals: people can learn in very different ways, plus it will strongly
> depend on how the tools are used. And even in the aggregate it's not
> practically possible: most of the studies that have been referenced in this
> and the linked thread (a) are one-offs, and often inconsistent with each
> other, and (b) already outdated, given how fast the field is developing.
>
>
> It is true that things are moving fast, and while the original METR study
> (which has been informally replicated in other settings) is perhaps
> outdated, Anthropic's just-released paper shows a broadly similar trend.
> Specifically, they show that time-to-solution is faster for junior
> developers, but not so much for senior developers. They also show that
> knowledge of the library is worse after doing a task with AI than without.
>
> I'm sure, over time, we will figure out the best patterns for using AI and
> how to avoid the worst traps.
>
> Best regards,
> Stéfan
>
_______________________________________________
NumPy-Discussion mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3//lists/numpy-discussion.python.org
Member address: [email protected]
