On Sun, 2026-02-22 at 10:19 -0500, Marten van Kerkwijk via NumPy-Discussion wrote:
> Ralf Gommers via NumPy-Discussion <[email protected]> writes:
>
> [snip]
>
> > I do think a web of trust is a potentially valuable idea. However,
> > the need right now isn't there yet (at least for NumPy) and it does
> > have the potential to close the door pretty strongly to newcomers.
> > On the other hand, we already don't run CI on PRs from first-time
> > contributors - that was something that turned out to be necessary to
> > limit wasting resources. A web of trust is something to keep in mind
> > in my opinion, and consider adopting if and when it becomes a clear
> > win for maintainer load.
>
> Thanks for the reminder that we do not run CI for first-time
> contributors. That is nice in that there is already a mechanism in
> place to recognize those. As an intermediate step towards trust (but
> not yet a web of it!), would it make sense to have a welcome message
> that asks the new contributor to introduce themselves by editing their
> top comment? I.e., something like this:
Thanks for the suggestions on concrete steps we could take (and I agree
we should do something).
I would be fine with adopting either of these. Adopting the SciPy/SymPy
one seems pragmatic, since it is nice to keep things similar across
similar projects. SymPy/LLVM do have a pretty clear note on copyright
(not all policies do, I think). [1]
(I like many things about the LLVM one; it is nicely explicit about its
reasoning, etc., but I guess that also makes it longer.)
To me they all get the important points across. Honestly, I suspect
many contributors won't read the policy anyway, so it may mostly serve
as something to point to in the rare case where you have to close a PR.
To achieve better transparency, I would suggest we add check-boxes.
E.g., scikit-learn has this now:
<!--
If AI tools were involved in creating this PR, please check all boxes
that apply below and make sure that you adhere to our Automated
Contributions Policy:
https://scikit-learn.org/dev/developers/contributing.html#automated-contributions-policy
-->
I used AI assistance for:
- [ ] Code generation (e.g., when writing an implementation or fixing a bug)
- [ ] Test/benchmark generation
- [ ] Documentation (including examples)
- [ ] Research and understanding
I am not sure how well it is being used, but I think it is a good start
to see where things go. I could imagine adding something about the
scope of AI use, but I am not sure it matters; it may be easier to just
follow up on PRs where the scope is unclear.
(FWIW, I like the suggested comment asking for a bit of personal
context; it feels both helpful and welcoming! But when it comes to AI
specifically, I would start with the check-boxes for pragmatism.)
Cheers,
Sebastian
[1] I would be happy to link out to the continuing discussion via the
note in the LLVM one: "Artificial intelligence systems raise many
questions around copyright that have yet to be answered."
But I think that is about as much as I want to focus on that point in
something targeted at contributors.
> """
> Thank you for your PR! As you appear to be a new contributor, could
> we
> ask you to briefly introduce yourself, e.g., by editing the top
> comment?
> It would help to know how you use numpy yourself and what made you
> want
> to contribute, and whether you are, e.g., a student keen to make an
> open-source contribution, or an experienced developer just fixing an
> annoying bug.
>
> Note: if you used any AI in your PR, be sure to declare this and
> check
> that your use is consistent with our AI policy.
> """
>
> > Filtering out the copyright-related noise/argumentation, I detect a
> > significant preference for allowing use of AI tools, with some
> > sensible constraints, among the group of active maintainers. I'd
> > like to somehow move towards something more actionable, because we
> > do need some policy and an AI usage disclosure on all PRs soon. To
> > get to that, I think we should be picking a base policy as a start,
> > and add some NumPy-specific edits/context as needed. I think the
> > most suitable base to start from would be either:
> >
> > 1. The LLVM policy: https://llvm.org/docs/AIToolPolicy.html
> > 2. The SymPy/SciPy policy:
> >    https://scipy.github.io/devdocs/dev/conduct/ai_policy.html
> >
> > If we want to capture the "gray zone" better, adding a supplementary
> > document with some concrete examples and maybe incorporating the
> > "Zones" that Peter sketched would be good.
>
> I think both policies are good; I'd prefer to go with the shorter and
> more direct scipy one -- that also has the advantage of keeping things
> more consistent within scientific python. Personally, I would copy it
> verbatim and for now not spend time adding/editing.
>
> All the best,
>
> Marten
_______________________________________________
NumPy-Discussion mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3//lists/numpy-discussion.python.org
Member address: [email protected]