On Fri, Feb 20, 2026 at 09:40:33PM +0100, Christian Kastner wrote:
> 
> Or, to use a practical example: When I design some Python class
> hierarchy, that design will be influenced by all of the experience I
> have accrued reading or using other hierarchies. But unless I copy a
> specific one, I don't think anyone would argue that my work is a
> derivative (in the legal sense) of all those other examples.
> 
> Why should this be different for an LLM?

Exactly.  There are some humans who might code something that takes a
little from example A, some from example B, and some from example C,
where examples A, B, and C were things that said human may have seen
years ago, and where they might not even have conscious memories of
having seen those examples.  There are many times when we may be
closer to an automated pattern-matching automaton like an LLM than to
rederiving everything from first principles.  (This is the
observation which explains unconscious bias, for which there is
experimental evidence[1].)

[1] https://implicit.harvard.edu/implicit/takeatest.html

There might be other human beings who start with a copy-pasta from
Stack Exchange, but then make changes to variables and parameters in
an attempt to "file off the serial numbers".

And there might be other human beings who do a full copy-pasta without
trying to rename variables, or even adjusting the whitespace to match
the destination project.  And they might not even check the license,
and so they could end up transplanting code from a GPLv2-only source
into a GPLv3 codebase.

Now, there are *some* LLMs that might do that last thing, which is
clearly wrong, regardless of whether it is done by a human or an LLM.
But they could also do the first example, where they are merging
patterns from multiple sources to create a new implementation.

We don't say that just because human beings *might* do something
blatantly violative of copyright, human beings should be prohibited
from ever contributing code to Open Source Software.  And yet, there
are people making the argument that we should do this for LLMs....

                                             - Ted
