On Mon, 2025-05-19 at 15:18 +0930, Justin Zobel wrote:
> On 19/05/2025 15:01, Konstantin Kharlamov wrote:
> > On Mon, 2025-05-19 at 14:53 +0930, Justin Zobel wrote:
> > > On 19/05/2025 14:35, Konstantin Kharlamov wrote:
> > > > On Mon, 2025-05-19 at 10:03 +0930, Justin Zobel wrote:
> > > > > On 18/05/2025 16:41, Albert Vaca Cintora wrote:
> > > > > > On Sun, 18 May 2025, 08:59 Justin Zobel, <jus...@1707.io>
> > > > > > wrote:
> > > > > > > If the contributor cannot tell you the license(s) of the
> > > > > > > code that was used to generate the code, then it's
> > > > > > > literally a gamble that this code wasn't taken from another
> > > > > > > project by Gemini and used without permission, or used in a
> > > > > > > way that violates the license and opens up KDE e.V. to
> > > > > > > litigation.
> > > > > >
> > > > > > I'm no lawyer, but I would expect that training AI falls
> > > > > > under fair use of copyrighted code. If that's not the case
> > > > > > already, it probably will be soon. The benefits of AI to
> > > > > > society are too large to impose such a roadblock on
> > > > > > ourselves.
> > > > > >
> > > > > > Albert
> > > > >
> > > > > From my understanding (what others have told me), AI
> > > > > generally does not produce good quality code though. So how
> > > > > is that a benefit to society?
> > > >
> > > > I wrote a lengthy answer here, but then I scrapped it, because
> > > > I realized your question can generate tons of lengthy replies
> > > > that no one will read 😅 I will just say this: AI is useful for
> > > > simple and tedious tasks. In general, you don't expect AI to
> > > > correctly complete whatever you asked it to do. Instead you
> > > > expect it to give you a useful base, which you can
> > > > change/correct/modify to fit whatever you actually need.
> > > >
> > > > Like, I dunno, do you have a friend in a foreign country to
> > > > whom you want to send a recent story, but the story is in
> > > > English? You ask AI to translate it, which will be done "almost
> > > > well", so what you do then is go over the text and correct
> > > > everything to match your style. This is faster than translating
> > > > everything manually. In fact, it matches well what human
> > > > translators have been doing for decades: they typically
> > > > translate texts in two phases, one producing a rough draft, the
> > > > other polishing it, e.g. adding suitable idioms.
> > >
> > > The problem is we're not talking about text here, we're talking
> > > about code, and code has licenses, which language models don't
> > > care about. I'm all for AI that helps humanity, but stealing
> > > code, or using code that is incompatible with KDE's license set,
> > > is not it.
> > >
> > > I want AI to solve world hunger, prevent disease and help me do
> > > the housework :)
> >
> > Well, you asked how "low quality AI code" benefits society. I have
> > many different examples; I came up with one related to translation
> > because it was the simplest to describe (all the others resulted in
> > too much text), and it's morally equivalent: you similarly get a
> > poor, low-quality base and improve upon it.
> >
> > If you want code examples specifically: another simple one that
> > comes to mind is that my Emacs config has code that lets me quit
> > emacsclient with the vim-style command `xb`, and upon doing so it
> > converts Markdown to bbcode. There's also a similar one for
> > Textile. The "conversion code" is a terrible O(n²) algorithm
> > written by AI, which didn't even work when the AI wrote it. But I
> > fixed it (as in, made it work), and use it just fine at work to
> > write Markdown text and then paste it into either Redmine or Bitrix
> > in their native markup. I don't care much about the O(n²), because
> > for the amounts of text I write it is instantaneous.
> >
> > Another example: at work I had to rewrite a project from one
> > language to another. This is a tedious task, so I similarly used
> > AI to create a rough draft, and then went over that code,
> > refactoring and fixing it.
>
> Translations are not code.
>  
That's why I showed two code examples.
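For readers curious what such a converter looks like, here is a minimal sketch of the Markdown-to-bbcode idea described above. This is a hypothetical Python reconstruction for illustration only — the original helper lives in an Emacs config and is written in Emacs Lisp — and it handles just a few inline constructs:

```python
import re

def markdown_to_bbcode(text: str) -> str:
    """Convert a few common inline Markdown constructs to bbcode.

    Hypothetical illustration of the conversion idea described in the
    thread, not the author's actual Emacs Lisp code.
    """
    # Links first, so the [b]/[i] tags added below cannot confuse the
    # bracket matching: [label](url) -> [url=url]label[/url]
    text = re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r"[url=\2]\1[/url]", text)
    # Bold before italic, so ** is not consumed as two single stars:
    # **x** -> [b]x[/b], then *x* -> [i]x[/i]
    text = re.sub(r"\*\*(.+?)\*\*", r"[b]\1[/b]", text)
    text = re.sub(r"\*(.+?)\*", r"[i]\1[/i]", text)
    # Inline code: `x` -> [code]x[/code]
    text = re.sub(r"`([^`]+)`", r"[code]\1[/code]", text)
    return text
```

Each pass scans the whole string, so chaining many such substitutions over large inputs is where the quadratic-ish behavior mentioned above comes from — harmless for short snippets pasted into an issue tracker.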
