Hi Aaron,

On Sun, 2026-02-08 at 19:43 -0500, Aaron Merey wrote:
> In December we discussed whether elfutils should permit LLM-generated
> contributions [1]. The conclusion of this discussion was that elfutils
> should not permit this.  Reasons for this decision include uncertainty
> around the copyright status of LLM-generated content.  Below is a
> revised proposal based on these discussions.
> 
> If accepted we can include this policy in the CONTRIBUTING document in
> the elfutils source directory.  The text of this policy is modelled
> after the Binutils [2] and QEMU [3] LLM policies.

Thanks for drafting this. It looks good to me. One small wording
"tweak" suggested below.

> Policy on the Use of LLM-generated Content
> 
> The elfutils project does not currently accept contributions
> containing output generated by Large Language Models (LLMs) [4].  Use
> of LLMs to research, analyze or debug a contribution is allowed as
> long as no LLM-generated output is included in the contribution.
> 
> There are two exceptions where LLM output may be included in a contribution:
> 
> (1) The output consists solely of trivial changes such as spelling or
> code formatting.
> 
> (2) The LLM assists in writing a contribution but does not author any
> of the content. This includes accessibility-related uses of LLMs
> involving speech-to-text, for example.

I understand what you are saying here, but I might not use the words
"writing" and "author" (which feel a little as if we are "personifying"
the LLM). How about using the words "researching" and "generating":

  (2) The LLM assists in the research of a contribution but does
  not generate any of the content. This includes accessibility-related
  uses of LLMs involving speech-to-text, for example.

(Note English isn't my first language, so this might not be grammatically
correct.)

> Contributors are not required to disclose the use of LLMs for these purposes.
> 
> This policy may be reviewed or updated when the copyright status and
> Developer's Certificate of Origin (DCO) compatibility of LLM-generated
> output is clarified.
> 
> [1] 
> https://inbox.sourceware.org/elfutils-devel/cajdtp-r9m7uvfggoq20_4k8ooja4lvmuz3x8tzrhq-+r5aa...@mail.gmail.com/T/#m52aef56465c8bbe6d4fe0fda6487add9efb4f857
> [2] https://sourceware.org/binutils/wiki/LLM_Generated_Content
> [3] 
> https://www.qemu.org/docs/master/devel/code-provenance.html#use-of-ai-generated-content
> [4] https://en.wikipedia.org/wiki/Large_language_model

I would be OK with you adding this to CONTRIBUTING with or without the
wordsmithing above.

Cheers,

Mark
