On Thu, 25 Sept 2025 at 08:56, Paolo Bonzini <pbonz...@redhat.com> wrote:
>
> AI tools can be used as a natural language refactoring engine for simple
> tasks such as modifying all callers of a given function or all accesses
> to a variable.  These tasks are good candidates for an exception because:
>
> * it is credible for a contributor to claim DCO compliance.  If the
> contributor can reasonably make the same change with different tools or
> with just an editor, which tool is used (including an LLM) should have
> no bearing on compliance.  This also applies to less simple tasks such
> as adding Python type annotations.
>
> * they are relatively easy to test and review, and can provide noticeable
> time savings.
>
> * this kind of change is easily separated from more complex non-AI-generated
> ones, a separation we encourage anyway.  It is therefore natural
> to highlight such patches as AI-generated.
>
> Make an exception for patches that have "limited creative content" - that
> is, mechanical transformations where the creativity lies in deciding what
> to change rather than in how to implement the change.

I figure I'll state my personal opinion on this one. This isn't
intended to be any kind of 'veto' on the question: I don't
feel that strongly about it (and I don't think I ought to
have a personal veto in any case).

I'm not enthusiastic. The current policy is essentially
"the legal risks are unclear and the project isn't willing
to accept them". That's a straightforward rule to follow
that doesn't require the contributor, the reviewer, or the
project to make a possibly difficult judgement call about
what is in fact not risky. As soon as we start adding
exceptions, either we the project are making those
judgement calls, or we're pushing them onto contributors
or reviewers. I prefer the simple "'no' until the legal
picture becomes less murky" rule we have currently.

-- PMM