Stefan Hajnoczi <stefa...@gmail.com> writes:

> On Tue, Jun 3, 2025 at 10:25 AM Markus Armbruster <arm...@redhat.com> wrote:
>>
>> From: Daniel P. Berrangé <berra...@redhat.com>
>>
>> There has been an explosion of interest in so called AI code
>> generators. Thus far though, this has not been matched by a broadly
>> accepted legal interpretation of the licensing implications for code
>> generator outputs. While the vendors may claim there is no problem and
>> a free choice of license is possible, they have an inherent conflict
>> of interest in promoting this interpretation. More broadly there is,
>> as yet, no broad consensus on the licensing implications of code
>> generators trained on inputs under a wide variety of licenses.
>>
>> The DCO requires contributors to assert they have the right to
>> contribute under the designated project license. Given the lack of
>> consensus on the licensing of AI code generator output, it is not
>> considered credible to assert compliance with the DCO clause (b) or (c)
>> where a patch includes such generated code.
>>
>> This patch thus defines a policy that the QEMU project will currently
>> not accept contributions where use of AI code generators is either
>> known, or suspected.
>>
>> These are early days of AI-assisted software development. The legal
>> questions will be resolved eventually. The tools will mature, and we
>> can expect some to become safely usable in free software projects.
>> The policy we set now must be for today, and be open to revision. It's
>> best to start strict and safe, then relax.
>>
>> Meanwhile requests for exceptions can also be considered on a case by
>> case basis.
>>
>> Signed-off-by: Daniel P. Berrangé <berra...@redhat.com>
>> Acked-by: Stefan Hajnoczi <stefa...@gmail.com>
>> Reviewed-by: Kevin Wolf <kw...@redhat.com>
>> Signed-off-by: Markus Armbruster <arm...@redhat.com>
>> ---
>>  docs/devel/code-provenance.rst | 50 +++++++++++++++++++++++++++++++++-
>>  1 file changed, 49 insertions(+), 1 deletion(-)
>>
>> diff --git a/docs/devel/code-provenance.rst b/docs/devel/code-provenance.rst
>> index c27d8fe649..261263cfba 100644
>> --- a/docs/devel/code-provenance.rst
>> +++ b/docs/devel/code-provenance.rst
>> @@ -270,4 +270,52 @@ boilerplate code template which is then filled in to produce the final patch.
>>  The output of such a tool would still be considered the "preferred format",
>>  since it is intended to be a foundation for further human authored changes.
>>  Such tools are acceptable to use, provided they follow a deterministic process
>> -and there is clearly defined copyright and licensing for their output.
>> +and there is clearly defined copyright and licensing for their output. Note
>> +in particular the caveats applying to AI code generators below.
>> +
>> +Use of AI code generators
>> +~~~~~~~~~~~~~~~~~~~~~~~~~
>> +
>> +TL;DR:
>> +
>> +  **Current QEMU project policy is to DECLINE any contributions which are
>> +  believed to include or derive from AI generated code. This includes ChatGPT,
>> +  CoPilot, Llama and similar tools**
>
> GitHub spells it "Copilot".

I'll fix it.

> Claude is very popular for coding at the moment and probably worth mentioning.

Will do.

>> +
>> +The increasing prevalence of AI code generators, most notably but not limited
>
> More detail is needed on what an "AI code generator" is. Coding
> assistant tools range from autocompletion to linters to automatic code
> generators. In addition there are other AI-related tools like ChatGPT
> or Gemini as a chatbot that people can use like Stack Overflow or an
> API documentation summarizer.
>
> I think the intent is to say: do not put code that comes from _any_ AI
> tool into QEMU.
>
> It would be okay to use AI to research APIs, algorithms, brainstorm
> ideas, debug the code, analyze the code, etc but the actual code
> changes must not be generated by AI.

The existing text is about "AI code generators".  However, the "most
notably LLMs" that follows it could lead readers to believe it's about
more than just code generation, because LLMs are in fact used for more.
I figure this is your concern.

We could instead start wide, then narrow the focus to code generation.
Here's my try:

  The increasing prevalence of AI-assisted software development results
  in a number of difficult legal questions and risks for software
  projects, including QEMU.  Of particular concern is code generated by
  `Large Language Models
  <https://en.wikipedia.org/wiki/Large_language_model>`__ (LLMs).

If we want to mention uses of AI we consider okay, I'd do so further
down, to not distract from the main point here.  Perhaps:

  The QEMU project thus requires that contributors refrain from using AI code
  generators on patches intended to be submitted to the project, and will
  decline any contribution if use of AI is either known or suspected.

  This policy does not apply to other uses of AI, such as researching APIs or
  algorithms, static analysis, or debugging.

  Examples of tools impacted by this policy include GitHub's CoPilot,
  OpenAI's ChatGPT, and Meta's Code Llama, amongst many others which are less
  well known.

The paragraph in the middle is new, the other two are unchanged.

Thoughts?

>> +to, `Large Language Models <https://en.wikipedia.org/wiki/Large_language_model>`__
>> +(LLMs) results in a number of difficult legal questions and risks for software
>> +projects, including QEMU.

Thanks!

[...]

