On Tuesday, March 3, 2026 5:33:23 AM Mountain Standard Time Ansgar 🙀 wrote:
> Hi,
> 
> On Tue, 2026-03-03 at 14:07 +0200, Jonathan Carter wrote:
> > As Ansgar mentioned in
> > another follow-up, rsyslog should probably be out already. So policy
> > would really help in *preventing* slop and code with questionable
> > copyright.
> 
> I very specifically did *not* say that.
> 
> If you think the Linux kernel, rsyslog, Python, LLVM and so on are slop
> code that should not be shipped in Debian for not meeting our quality
> standards, feel free to say so. Please do not attribute that to me.

When "AI" first came on the scene, I believed it would never amount to much 
because it would always produce slop.  However, I have recently been convinced 
otherwise.  AI sometimes produces quality output.

I personally have never used AI beyond the automatic AI results at the top of 
major search engine results.  And I have no intention to use AI for any work 
in Debian in the foreseeable future.  But I think it is important for us to 
distinguish between AI slop output and AI quality output.

My personal opinion is that I do not have problems with AI being used in 
Debian or upstream projects as long as the following two conditions are met.

1.  There are no DFSG licensing problems with the output.  Generally (in many 
jurisdictions), the output of an AI system is not considered copyrightable, 
making it in the public domain.  Public domain is not a problem for the DFSG, 
and can be combined with any license we use (to the best of my knowledge).

If the output is a byte-by-byte regurgitation of a copyrighted work, then this 
would cause problems.  However, as far as I have been able to ascertain, 
someone using AI in a responsible manner is extremely unlikely to produce a 
byte-by-byte regurgitation of a copyrighted work, so this concern is minimal.  
The only exceptions of which I am aware are where people have spent 
significant effort to write prompts with the *intention* of regurgitating 
copyrighted output.

2.  There are no security or maintainability concerns with the AI output.  As 
has already been mentioned on this thread, I would support a requirement that 
all AI output be signed off by a human being, representing their understanding 
and review of the submission.  This is not something we can enforce with 
technical means, but I don't imagine there are very many Debian Developers who 
would want their digital signature attached to AI slop.

-- 
Soren Stoutner
[email protected]
