On Fri, 20 Feb 2026 at 21:57, Christian Kastner <[email protected]> wrote:
> On 2026-02-20 20:54, Theodore Tso wrote:
> > As another hypothetical thought experiment, suppose the problem is to
> > optimize a program which has a bubble sort, and a human programmer is
> > asked to optimize it by replacing it with a quick sort.
> >
> > There are only so many different ways to code the quick sort algorithm
> > in C, and it's likely that the human being might even be vaguely
> > remembering how they saw it done in some non-free source code (for
> > example, in Sedgewick's Algorithms book) and perhaps, subconsciously
> > reproduced it from some non-free source that they once saw.
>
> I argued similarly in problem #2 here [1].
>
> Whether consciously or subconsciously, all the code that we've read has
> influenced code that we've written, even if those individual influences
> might have been minuscule.
>
> To my understanding, there is an ocean between minuscule influences and
> a work A being a derivative of a work B.
>
> Or, to use a practical example: When I design some Python class
> hierarchy, that design will be influenced by all of the experience I
> have accrued reading or using other hierarchies. But unless I copy a
> specific one, I don't think anyone would argue that my work is a
> derivative (in the legal sense) of all those other examples.
>
> Why should this be different for an LLM?
>
> Best,
> Christian

I agree with your point of view, Christian.

For context: I have been using AI/LLMs extensively for the past ~3 years. Used responsibly, they are effective tools for getting work done faster while still maintaining quality and safety. I believe this technology can help Debian improve faster in some areas. For example, I think the Debian bug tracker could be significantly improved with the help of these models (see: https://nibblestew.blogspot.com/2025/12/an-uncomfortable-but-necessary.html). Given enough time, I would seriously consider contributing to that effort myself using LLM-assisted development.
As another example, I recently used Claude Code to create a Debian package for a personal Python codebase. In about 10 minutes, it produced a very complete starting point (source package, dependencies, changelog, control file, description, etc.). That does not remove the need for review, but it shows the potential to accelerate contributor work.

The real issue, in my view, is not the tool itself but the risk of AI-generated slop/spam. If someone chooses to use these tools, the responsibility for filtering bad output before a git commit remains with the maintainer/developer.

So my position is: Debian should be open to this technology, without blanket restrictions on tool choice. Contributions should be judged by quality and correctness, and contributors should remain responsible for what they submit.

Cheers!
Thiago

