Hi,

On 3/4/26 23:40, Theodore Tso wrote:

> [bad uploads as a result of not checking AI output thoroughly enough]
>
> Why should it be any different if AI was used, and someone screwed up
> in reviewing the patch?  At the end of the day, a human being who was
> trusted to review a change, and/or to decide to upload a package, will
> sometimes screw up.  If there is a pattern of missteps, we might need
> to take action.  But that's true whether or not AI was involved.

Exactly my point: we don't want to differentiate, but for that to work, we need people to be able to reach the "trusted" status.

We both already have that status, but we neither want to pull up the ladder behind us and make it more difficult to become a DD, nor do we want to mint new DDs who are incapable of passing through NM without AI help, because that also means they cannot review the code they generate.

That's where we need policy. As I said earlier, whether seasoned contributors use AI is less of a concern (also, these people tend to have enough contributions that it doesn't matter if some of them aren't copyrightable), but the disruption of our onboarding process is a huge issue.

My personal preference would be to disallow AI-assisted contributions from anyone who isn't a DD yet, because they would interfere with our skill assessment, and otherwise to sit out the situation.

   Simon
