Sorry for swooping into this conversation as a regular contributor.

>> From my understanding (what others have told me), AI generally does not 
>> produce good quality code though. So how is that a benefit to society?
>
> Well, in that case, those “others” are using them wrong or are just spreading 
> second-hand misinformation.

While none of my previous code, translation, and documentation
contributions to KDE involved "AI" (by which I actually mean
LLM-based tools), I did experiment with some "AI"-based utilities in
my own projects, and in some other projects that allow developers to
use this sort of tooling, to see whether they are actually helpful.
It turns out they are (in most cases).

To give some examples, my experiments are:

- I use sourcery.ai for code review of my 100% hand-written code,
and it indeed catches some issues that are easy to miss,
especially when writing custom Qt proxy models, which are pretty
tricky and hard to get right.
- I use LLM-based copilot utilities to simplify my existing
hand-written code, and the results are also quite good; sometimes
they even teach me useful APIs that I previously didn't know
existed at all.
- I use ChatGPT and Gemini to ask questions about fields I was
previously unfamiliar with (e.g. polkit, GTK stuff). While they
sometimes give inaccurate results, they at least point me toward
what I need to check out next, which saves tons of time compared
to reading and digesting everything manually.
- I even used Gemini's Canvas feature recently to help me write a
standalone User Script for my own requirements, and the result was
actually okay for me; it would usually take me two days to
hand-write it from scratch, since I am not that familiar with modern
JavaScript programming. That said, I rarely use LLMs to generate
large amounts of code for my real projects, due to licensing concerns.
- And yeah, I am not a native English speaker; I do use "AI"-based
utilities to help me understand documentation and papers in English,
and damn, the translation quality is way better than that of a
traditional machine translation tool [1].

[1]: And actually, a traditional machine translation tool can still
count as "AI" as well, even though it's not LLM-based.

In conclusion, I do think "AI" can produce good-quality code, code
review, and other workflow improvements, so I think what we need to
do is focus on the things we really care about, such as licensing.
I have to agree with:

> Likewise, we have concerns about licensing, quality of contributions, wasting 
> contributors' time, excessive use of computing power, and more. "AI" is at 
> best a proxy for these, and at worst a wedge issue that can chase away valued 
> contributors like Jin while not doing KDE much good otherwise.

Also, I'd like to comment on "excessive use of computing power". For
LLMs, training and/or running a model is indeed resource-consuming,
but things do improve over time. For example, the Qwen3-4B model's
output quality is better than that of older 7B or even 14B models
while consuming significantly fewer resources, and the technology is
still evolving.

That's my 2 cents. I'll keep avoiding any "AI" utilities when
contributing to KDE projects (and other open-source projects as well)
until we reach a consensus about "AI" usage, and I hope this reply
can provide some perspective to people who hardly use "AI", or
rather, LLM-based utilities.
