On 19/05/2025 19:15, Ilya Bizyaev wrote:


On Monday, May 19th, 2025 at 02:34, Justin Zobel <jus...@1707.io> wrote:
On 18/05/2025 16:41, Albert Vaca Cintora wrote:
On Sun, 18 May 2025, 08:59 Justin Zobel, <jus...@1707.io> wrote:

    If the contributor cannot tell you the license(s) of the code
    that was used to generate it, then it's literally a gamble
    that this code wasn't taken from another project by Gemini and
    used without permission, or used in a way that violates its
    license and opens the KDE e.V. up to litigation.


I'm no lawyer, but I would expect that training AI models will fall under fair use of copyrighted code. If that's not the case already, it probably will be soon. The benefits of AI to society are too large to self-impose such a roadblock.

Albert

From my understanding (what others have told me), AI generally does not produce good-quality code, though. So how is that a benefit to society?

Well, in that case, those “others” are using them wrong or are just spreading second-hand misinformation.

If you really care about the licensing aspect, focus on it instead of diverting this thread into other topics with statements like this one.

As a data point, we've recently used AI models for our modernization work on https://invent.kde.org/websites/kde-ru, with careful manual review of course, and it has helped us perform an amount of work we physically would not have had the time to do ourselves. I cannot imagine any legal risks from reasonable use of LLMs for web development in KDE. If a ban is imposed, I'm unlikely to spend an order of magnitude more time on this tedious work.

As long as that work hasn't violated any copyrights or licensing, I'm happy for people to use it. The point is, we do not know where LLMs get their content; that is a legal issue. If you don't have the time to do something, that is also fine. Most of us are KDE volunteers, and we give what time we can.
