On Mon, 29 Sept 2025 at 10:51, Daniel P. Berrangé <[email protected]> wrote:
>
> On Fri, Sep 26, 2025 at 09:26:47PM +0200, Paolo Bonzini wrote:
> > On Fri, Sep 26, 2025, 16:39 Peter Maydell <[email protected]> wrote:
> >
> > > I figure I'll state my personal opinion on this one. This isn't
> > > intended to be any kind of 'veto' on the question: I don't
> > > feel that strongly about it (and I don't think I ought to
> > > have a personal veto in any case).
> > >
> > > I'm not enthusiastic. The current policy is essentially
> > > "the legal risks are unclear and the project isn't willing
> > > to accept them". That's a straightforward rule to follow
> > > that doesn't require either the contributor or the reviewer
> > > or the project to make a possibly difficult judgement call on
> > > what counts as not in fact risky. As soon as we start adding
> > > exceptions then either we the project are making those
> > > judgement calls, or else we're pushing them on contributors
> > > or reviewers. I prefer the simple "'no' until the legal
> > > picture becomes less murky" rule we have currently.
> > >
> >
> > In principle I agree. I am not enthusiastic either. There are, however, two
> > problems with the current policy.
> >
> > First, the policy is based on an honor code; in some cases the use of AI
> > can be easily spotted, but in general it's anything but trivial, especially
> > in capable hands where, for example, code is generated by AI but commit
> > messages are not. As such, the policy cannot prevent inclusion of
> > AI-generated code; it only tells you who is to blame.
>
> The policy is intentionally based on an honour code, because trust in
> contributors' intentions is a fundamental foundation of a well-functioning
> OSS project. When projects start to view contributors as untrustworthy,
> then IME they end up with burdensome processes (often pushed by corporate
> demands), such as copyright assignment / CLA, instead of the lightweight
> DCO (self-certification, honour-based) process we have today.

Mmm. I think there's a difference between:
 * we think this category of AI-generated changes is
   sufficiently low-risk to the project and sufficiently
   useful to be worth granting it an exception
and
 * we think that this category of AI-generated changes
   is one we can't trust contributors not to just send
   in anyway, so we give it an exception in the hope they
   might at least tell us when they're doing it

The commit message for this patch is making the first
argument; if we really think that, that's fine, but I
don't think we should make the change with the first
argument as justification if really we're doing it
because we're worried about the second. And I'm definitely
sceptical that we should change our policy just because
we think people are going to deliberately breach it if
we do not...

thanks
-- PMM