On Mon, 2025-05-19 at 10:03 +0930, Justin Zobel wrote:
> On 18/05/2025 16:41, Albert Vaca Cintora wrote:
> > On Sun, 18 May 2025, 08:59 Justin Zobel, <jus...@1707.io> wrote:
> > > If the contributor cannot tell you the license(s) of the code
> > > that was used to generate the code, then it's literally gambling
> > > that this code wasn't taken from another project by Gemini and
> > > used without their permission or used in a way that violates the
> > > license and opens up the KDE e.V. to litigation.
> >
> > I'm no lawyer but I would expect that training AI will fall under
> > fair use of copyrighted code. If that's not the case already, it
> > will probably be soon. The benefits of AI to society are too large
> > to autoimpose such a roadblock.
> >
> > Albert
>
> From my understanding (what others have told me), AI generally does
> not produce good quality code though. So how is that a benefit to
> society?

I wrote a lengthy answer here, but then I scratched it, because I realized your question could generate tons of lengthy replies that no one will read 😅

I will say this: AI is useful for simple and tedious tasks. In general, you don't expect the AI to correctly complete whatever you asked it to do. Instead, you expect it to give you a useful base, which you can change, correct, and modify to fit whatever you actually need.
Like, I dunno, say you have a friend in a foreign country to whom you want to send a story you recently wrote, but the story is in English. You ask AI to translate it, which will be done "almost well", so what you do then is go over the text and correct everything to match your style. This is faster than translating everything manually. In fact, it matches well what human translators have been doing for decades: they typically translate texts in two phases, one is writing a rough draft, and the other is polishing, like adding suitable idioms, etc.