Thanks for trying to steer this back to a pragmatic "how do we define a
policy".


On Sat, Feb 21, 2026 at 5:16 PM Marten van Kerkwijk via NumPy-Discussion <
[email protected]> wrote:

> > So "No AI contribution is allowed" is a valid take for me if that
> > would be the policy. Or "we will use common sense and make opinionated
> > decisions, for trivial and otherwise laborious tasks we don't care but
> > for involved bits, we won't touch it". It is also fine.
>
> I'd be happy with either too, with a preference for the second one,
> which I see as allowing AI as a tool.
>

I have a very strong preference for allowing AI as a tool here, for
multiple reasons:

1. The principle I articulated before: don't tell others what tools they are
and aren't allowed to use, as long as using those tools doesn't break other
contribution guidelines/rules.

2. The opportunities it affords us to *lighten* the maintenance burden.
Maintainer bandwidth is always scarce, and the majority of PRs do not
involve architecture or copyrightable algorithmic code. The take that most
contributions will be "AI slop" and will only make our maintenance load
worse is highly likely to be incorrect.

3. The opportunities for learning. AI tools are fast becoming a key skill
for software engineering jobs; I could see that becoming the case for
scientific jobs as well. A blanket ban on those tools would especially
penalize the people who spend a significant part of their time every week
contributing to NumPy.

4. Alignment with other open source projects. Melissa has been composing a
nice overview here:
https://github.com/melissawm/open-source-ai-contribution-policies. Both
inside and outside the scientific Python community, a large majority of
projects land on a very similar set of policies and principles, and very
few land on an outright ban.

I find the discourse in this thread about copyright to be completely
lacking in pragmatism and actionable insights. This issue is not new for us
in kind, only in scale. We're just going to have to trust the disclosures
that we should be asking for (starting ASAP), give some pragmatic basic
guidelines, and deal with PRs as they come. There *will* be a gray zone.
Oscar's example of someone starting with "generate an algorithm, then
lightly edit it" is one step too far for now, I think. Anything short of
that, including using AI to fill in some algorithmic details that were
already designed by a human, should be okay or in the gray zone.


> Disclosure has to be key, though.

+1


> Note that while both would at least somewhat address copyright issues,
> only the first would address the pain of having to deal with AI slop.
>
> To me it remains the bigger problem [1], one I don't see how we can
> address without something like a web of trust.  But I'll let this be the
> last time I note it here, since arguably it is a different discussion,
> and also at numpy I'm not one directly affected as I rarely look at new
> PRs.


I do think a web of trust is a potentially valuable idea. However, the need
isn't there yet (at least for NumPy), and it has the potential to close the
door pretty firmly on newcomers. On the other hand, we already don't run CI
on PRs from first-time contributors - that turned out to be necessary to
limit wasted resources. So a web of trust is something to keep in mind, in
my opinion, and to consider adopting if and when it becomes a clear win for
maintainer load.


> Instead, those who do look at new PRs should speak up on whether
> they are willing to filter AI slop.


+1. Also in general: this policy should be primarily informed by the people
who are actively maintaining NumPy on a daily/weekly basis. I'd definitely
include you in that though, Marten. There is more to maintenance than
"triage the new PRs from people we don't know"; you spend a significant
amount of time reviewing code, mostly complex C code in `numpy/_core`.
That's the kind of code where tools can help quite a bit, but only in the
hands of humans who already understand the code deeply.

Another example: security reports. Those are usually triaged by Sebastian
and me, and I'd say the volume of bogus reports has gone up slowly over the
past couple of years, in part driven by the wider availability of AI
tooling (and https://huntr.com/, which has a terrible signal-to-noise
problem - avoid at all costs). We can avoid AI tools ourselves and watch
that maintenance load continue to go up. Or we can experiment with tooling
ourselves - see for example
https://www.anthropic.com/news/claude-code-security. I just signed up for
early access to that capability, because I would like to get high-quality
reports and patches, and fix potential real-world problems before they
occur.

I'm honestly not really interested in opinions from non(-active)
maintainers about whether I am allowed to do something like sign up for
that security service, or apply a patch it generates. Which is yet another
reason I find "no AI at all" not an acceptable policy.

Filtering out the copyright-related noise and argumentation, I detect among
the group of active maintainers a significant preference for allowing the
use of AI tools, subject to some sensible constraints. I'd like to move
towards something more actionable, because we do need a policy and an AI
usage disclosure on all PRs soon. To get there, I think we should pick a
base policy as a starting point and add NumPy-specific edits/context as
needed. The most suitable base to start from would be either:

1. The LLVM policy: https://llvm.org/docs/AIToolPolicy.html
2. The SymPy/SciPy policy:
https://scipy.github.io/devdocs/dev/conduct/ai_policy.html

If we want to capture the "gray zone" better, adding a supplementary
document with some concrete examples and maybe incorporating the "Zones"
that Peter sketched would be good.

Final thought since we're sharing resources: this article by Chris Lattner
resonated with me:
https://www.modular.com/blog/the-claude-c-compiler-what-it-reveals-about-the-future-of-software

Cheers,
Ralf



> But for what it is worth, for
> astropy, where I did often look at new PRs, I've concluded that AI slop
> is sufficiently shifting the balance towards misery that I will no
> longer look at anything unless I'm pinged.
>
> All the best,
>
> Marten
>
> [1] Not fact checked, and more for amusement, but via LWN I was led to
> the following (really, 5% of all open-source code this month???).
>
> """
> Kevin Beaumont
> @[email protected]
>
> Today in InfoSec Job Security News:
>
> I was looking into an obvious ../.. vulnerability introduced into a major
> web framework today, and it was committed by username Claude on GitHub.
> Vibe coded, basically.
>
> So I started looking through Claude commits on GitHub, there’s over 2m of
> them and it’s about 5% of all open source code this month.
>
>
> https://github.com/search?q=author%3Aclaude&type=commits&s=author-date&o=desc
>
> As I looked through the code I saw the same class of vulns being
> introduced over, and over, again - several a minute.
> """
>
> (From https://cyberplace.social/@GossiTheDog/116080909947754833)
> _______________________________________________
> NumPy-Discussion mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
> https://mail.python.org/mailman3//lists/numpy-discussion.python.org
> Member address: [email protected]
>