On Wed, Mar 04, 2026 at 05:16:11PM +0900, Simon Richter wrote:
> Also, I believe we need to create proactive, not reactive policy, because we
> have a very limited set of reactions to bad uploads, all of them
> heavy-handed. I am fairly sure not even the proponents of AI use would be
> happy with a policy of "AI use is allowed, but if you make a bad upload as a
> result of not doing a thorough review, we nuke your upload permissions and
> make you go through a tasks&skills check again", but I see very few options
> here.

Humans make mistakes.  When a DD uploaded a bogus patch which resulted
in all Debian systems generating exactly 256 possible "random private
keys", it was hugely embarrassing, and a security disaster.  No one's
upload permissions were taken away.  The bug was just fixed, we
said, "oops, sorry", and life moved on.

Why should it be any different if AI was used, and someone screwed up
in reviewing the patch?  At the end of the day, a human being who was
trusted to review a change, and/or to decide to upload a package, will
sometimes screw up.  If there is a pattern of missteps, we might need
to take action.  But that's true whether or not AI was involved.

(Or consider someone well-meaning who tried to make a lint-style tool
STFU, and accidentally caused a Debian-specific bug that completely
nerfed our random number generator.  It doesn't mean that we should
stop using those tools; just that they should be used carefully.  And
we've never tried to pass a GR regulating the use of gcc -Wall.)

                                                - Ted
