Hi Ralf,

I think you're playing the man and not the ball, and apart from that being unpleasant for me, it's bad for the discussion. If we are not careful, people will be discouraged from posting for fear of personal attack.
That said - I do apologise for using "obviously" - thanks for pointing that out - it was rude of me, and I will try to be more careful.

On Sat, Feb 14, 2026 at 9:05 AM Ralf Gommers via NumPy-Discussion <[email protected]> wrote:
>
> On Fri, Feb 13, 2026 at 12:02 PM Matthew Brett via NumPy-Discussion
> <[email protected]> wrote:
>>
>> Ralf pointed out one benefit - that we are not seen to disapprove of
>> the chosen workflows of our fellow developers. I think this is a
>> weak argument.
>
> Please, read and reason more carefully. That was right below the principle
> "honor copyright". One does not invalidate or contradict the other.
>
>> So, accepting large AI-generated PRs would be a significant threat to
>> copyright
>
> You have a good point in your arguments about copyright somewhere, but
> you're making it very poorly, verbosely, and with too much confidence when
> using words like "obviously". It's easy to come up with examples, again, of
> why a "large AI-generated PR" isn't copyrightable. E.g., filling holes in
> test coverage, say for NaNs, empty arrays, or non-contiguous arrays. Such
> an effort involves tedious boilerplate tests with no copyrightable content;
> we'd happily outsource that to a tool, and it may run to thousands of lines
> of code.
>
> For large PRs with intellectually stimulating and copyrightable code, there
> is also a gray zone where the human does most of the thinking, outlines the
> solution while stubbing out a lot of details, and then lets a tool fill in
> the details. That might all be fine too - it depends.

I don't know why you would think I hadn't understood that there are nuanced arguments to be made about the acceptability of any particular piece of AI-generated code. Did you not read my reply to Ilhan, for example?

> The thing to be done here is to find understandable and pragmatic wording
> for a policy that discourages and lets us reject the undesirable usage of
> AI tools, while not hindering valid usage. Being overly broad and
> moralizing with inactionable wording isn't helpful.

I think you're confusing a statement of ethical principles with being overly broad and inactionable. I'm not sure what "moralizing" means here, but I'm assuming it can't reasonably be applied to the statement "We have an ethical responsibility to uphold copyright". If we accept that statement, then we can have specific discussions about what actions we need to take - and that's what I was doing; you must have seen the specific proposal that I made.

Cheers,

Matthew
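P.S. For concreteness, the kind of boilerplate Ralf describes might look roughly like the sketch below (pytest-style, with "myfunc" as a stand-in for whatever function such a PR would be covering - not any real NumPy API):

    import numpy as np
    import pytest

    def myfunc(arr):
        # Stand-in for the function a hypothetical coverage PR would target.
        return np.asarray(arr) * 1.0

    @pytest.mark.parametrize("arr", [
        np.array([np.nan, 1.0, 2.0]),   # NaN values
        np.array([], dtype=float),      # empty array
        np.arange(10.0)[::2],           # non-contiguous view
    ])
    def test_myfunc_preserves_shape(arr):
        # Tedious-but-mechanical check: output shape matches input shape.
        result = myfunc(arr)
        assert result.shape == arr.shape

Multiply that pattern across many functions and edge cases and you get the "thousands of lines" Ralf mentions, with essentially no creative content in any single test.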
