Re: Interest in building an LLM frontend for KDE

2023-12-12 Thread Björn Strömberg
Just saw this thread and thought I should comment on it; I hope this ends up in
the right place when replying directly to the mailing list.

The most important thing about LLMs (private or public) is that they do not
become a strong dependency integrated into other things. I am thinking mainly
of the info about KTextAddons, which still requires configuring an API key to
work; that is an acceptable solution.

Some people might want ChatGPT, or a private LLM (intra-corporate, intra-KDE-
community, or even personal), and it might be a nice addon, but only as long
as the computer never sends anything to the LLM without the user's explicit
permission. Freedom of choice is the most important thing here, preferably
with a system-wide (or per-user) option to disable the use of LLMs on the
current computer.

Regards
Björn


Re: Interest in building an LLM frontend for KDE

2023-12-05 Thread Joseph P. De Veaugh-Geiss

On 12/4/23 22:45, Alexander Semke wrote:

On Montag, 4. Dezember 2023 12:09:43 CET Joseph P. De Veaugh-Geiss wrote:

I agree with the concerns Josh raises about the energy consumption of
training LLMs (see, e.g., [1]). A benefit of satisfying the above
characteristics is it is then possible for us to measure the energy
consumption for training/using the LLMs. This would enable KDE to be
transparent about what these tools consume in terms of energy and
present this information to users.

To make this argument more complete, it's not only the training of such models
but also their later usage ("inference"). For popular generic models the
negative impact can quickly become bigger than the impact of the training
itself:
https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/ [1]

https://arxiv.org/abs/2311.16863 [2]



Thank you for the info re power consumption in usage of these models!


Though, these and similar arguments mostly tend to ignore the fact that the
hyperscalers have committed to the net-zero initiative and are heavily
investing in renewables for their data centers, and are also trying to shift
heavy workloads into more sustainable time windows.



Yes, the demands of the ICT sector are arguably a big driver in the 
shift to renewable, low-carbon energy sources!


A problem with this, though, is that we have a power consumption 
problem: we consume more than we produce in terms of low-carbon energy, 
and increasing the consumption of ICT takes energy away from 
something else. Moreover, there may be limits in natural resources as to 
how much renewable energy can actually be produced. In the same paper 
cited before (see [1]), the authors use the example of silver mining for 
the production of photovoltaic panels. "An average solar panel requires
ca. 20 g of silver [...]. On [the current] trajectory, solar panels 
would use 100% of global silver supplies in 2031 leaving none for 
electric car batteries and other uses" (p. 7). Given this and other 
issues, the authors conclude: "Thus, while a shift to more renewable 
energy is crucial, it does not provide an unlimited supply of energy for 
ICT to expand into without consequences" (p. 8).


Perhaps this is moving too far off topic, though. We can discuss more at 
the energy efficiency Matrix room if you'd like: 
https://matrix.to/#/#energy-efficiency:kde.org


As for integrating LLMs into KDE software, I think measuring and then 
providing transparency to end-users about the energy/CO2 equivalencies 
needed to train and use the models is worth pursuing, when possible. 
Moreover, such high-energy consuming tools should be disabled by 
default, or should at least have reasonable default settings to minimize 
power draw.


Another idea that has come up before and is somewhat relevant in this 
context is an "eco" slider integrated into Plasma with sensible default 
settings depending on what the user wants: on one end a green setting 
indicating maximal efficiency, sometimes at the cost of functionality; 
at the other end a red setting indicating maximal functionality, 
sometimes at the cost of efficiency; and a yellow setting in the middle 
compromising between the two. This could be similar to how the Tor 
Browser Bundle has the safe-safer-safest security levels which enable or 
disable various web features.
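For illustration, the slider idea above could map each position to a set of defaults, much like the Tor Browser security levels toggle web features. A minimal sketch; the preset keys and setting names are purely hypothetical, not real Plasma configuration:

```python
# Illustrative "eco" slider with three positions. The setting names below
# are hypothetical assumptions, not actual Plasma configuration keys; they
# only show the shape of the idea.

ECO_PRESETS = {
    "green":  {"llm_features": False, "animations": False, "indexing": "on-demand"},
    "yellow": {"llm_features": False, "animations": True,  "indexing": "idle-only"},
    "red":    {"llm_features": True,  "animations": True,  "indexing": "always"},
}

def apply_eco_level(level: str) -> dict:
    """Return the defaults for a slider position, falling back to the
    middle 'yellow' compromise for unknown values."""
    return ECO_PRESETS.get(level, ECO_PRESETS["yellow"])

print(apply_eco_level("green")["llm_features"])  # False
```

High-consumption features such as LLM integration would then be off by default except at the red end, matching the disabled-by-default suggestion above.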


Cheers,
Joseph

[1] "The real climate and transformative impact of ICT: A critique of 
estimates, trends, and regulations", 2021. Charlotte Freitag, Mike 
Berners-Lee, Kelly Widdicks, Bran Knowles, Gordon S. Blair, and Adrian 
Friday. https://doi.org/10.1016/j.patter.2021.100340




--
Alexander


[1] 
https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/
[2] https://arxiv.org/abs/2311.16863



--
Joseph P. De Veaugh-Geiss
KDE Internal Communications & KDE Eco Community Manager
OpenPGP: 8FC5 4178 DC44 AD55 08E7 DF57 453E 5746 59A6 C06F
Matrix: @joseph:kde.org

Generally available Monday-Thursday from 10-16h CET/CEST. Outside of 
these times it may take a little longer for me to respond.


KDE Eco: Building Energy-Efficient Free Software!
Website: https://eco.kde.org
Mastodon: @be4foss@floss.social


Re: Interest in building an LLM frontend for KDE

2023-12-04 Thread Alexander Semke
On Montag, 4. Dezember 2023 12:09:43 CET Joseph P. De Veaugh-Geiss wrote:
> I agree with the concerns Josh raises about the energy consumption of
> training LLMs (see, e.g., [1]). A benefit of satisfying the above
> characteristics is it is then possible for us to measure the energy
> consumption for training/using the LLMs. This would enable KDE to be
> transparent about what these tools consume in terms of energy and
> present this information to users.
To make this argument more complete, it's not only the training of such models
but also their later usage ("inference"). For popular generic models the
negative impact can quickly become bigger than the impact of the training
itself:
https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/ [1]

https://arxiv.org/abs/2311.16863 [2]

Though, these and similar arguments mostly tend to ignore the fact that the
hyperscalers have committed to the net-zero initiative and are heavily
investing in renewables for their data centers, and are also trying to shift
heavy workloads into more sustainable time windows.


--
Alexander


[1] 
https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/
[2] https://arxiv.org/abs/2311.16863


Re: Interest in building an LLM frontend for KDE

2023-12-04 Thread Joseph P. De Veaugh-Geiss

Hi,

On 12/2/23 13:00, kde-devel-requ...@kde.org wrote:

Hey Loren,

I agree with everyone else that as long as the model is ethically sourced and
local-by-default (which are my two main problems with modern LLMs), I have no
qualms about including such a frontend in KDE.


Re ethical LLMs Nextcloud has developed a traffic light system based on 
three parameters 
(https://nextcloud.com/blog/ai-in-nextcloud-what-why-and-how/) to 
address some of the concerns discussed in this thread:


1. Is the software open source? (Both for inferencing and training)
2. Is the trained model freely available for self-hosting?
3. Is the training data available and free to use?

Perhaps KDE could adopt a similar system?
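As a sketch of how such a rating could be computed, the three questions map naturally onto a traffic-light function. The exact mapping below (all three criteria for green, none for red) is an illustrative assumption, not Nextcloud's precise scheme:

```python
# Sketch of a Nextcloud-style "traffic light" ethics rating for an LLM
# backend. The three parameters mirror the questions above; the
# green/yellow/red thresholds are an illustrative assumption.

def ethics_rating(open_source: bool, model_self_hostable: bool,
                  training_data_free: bool) -> str:
    """Rate a model green, yellow, or red by how many criteria it meets."""
    met = sum([open_source, model_self_hostable, training_data_free])
    if met == 3:
        return "green"   # fully open: code, weights, and training data
    if met == 0:
        return "red"     # fully closed, e.g. a proprietary hosted API
    return "yellow"      # partially open

print(ethics_rating(True, True, True))   # green
print(ethics_rating(True, True, False))  # yellow
```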


I think the hardest part will be the marketing around this new software. One
of our current goals is "Sustainable Software", and current LLM models, as far
as I understand, are extremely power hungry and inefficient. If you do go
ahead with this project, I would be very careful with how we promote it. It
would have to somehow push KDE values while also avoiding the ethical,
climate, and social minefield that's currently littering this topic. Just
something to keep in mind.


I agree with the concerns Josh raises about the energy consumption of 
training LLMs (see, e.g., [1]). A benefit of satisfying the above 
characteristics is it is then possible for us to measure the energy 
consumption for training/using the LLMs. This would enable KDE to be 
transparent about what these tools consume in terms of energy and 
present this information to users.

Cheers,
Joseph

[1] "The real climate and transformative impact of ICT: A critique of 
estimates, trends, and regulations", 2021. Charlotte Freitag, Mike 
Berners-Lee, Kelly Widdicks, Bran Knowles, Gordon S. Blair, and Adrian 
Friday. https://doi.org/10.1016/j.patter.2021.100340


"AI has the greatest potential for impact given the complexity of 
training and inferencing on big data, and especially so-called deep 
learning. Researchers have estimated that 284,019 kg of CO2e are emitted 
from training just one machine learning algorithm for natural language 
processing, an impact that is five times the lifetime emissions of a 
car. While this figure has been criticized as an extreme example (a more 
typical case of model training may only produce around 4.5 kg of CO2), 
the carbon footprint of model training is still recognized as a 
potential issue in the future given the trends in computation growth for 
AI: AI training computations have in fact increased by 300,000x between 
2012 and 2018 (an exponential increase doubling every 3.4 months)."



Thanks,
Josh


--
Joseph P. De Veaugh-Geiss
KDE Internal Communications & KDE Eco Community Manager
OpenPGP: 8FC5 4178 DC44 AD55 08E7 DF57 453E 5746 59A6 C06F
Matrix: @joseph:kde.org

Generally available Monday-Thursday from 10-16h CET/CEST. Outside of 
these times it may take a little longer for me to respond.


KDE Eco: Building Energy-Efficient Free Software!
Website: https://eco.kde.org
Mastodon: @be4foss@floss.social


Re: Interest in building an LLM frontend for KDE

2023-12-02 Thread Loren Burkholder
On Saturday, December 2, 2023 4:33:39 PM EST you wrote:
> Hi,
> 

> Loren, what do you think of the idea?
> 
Looks like an interesting concept for sure! I imagine you could probably 
achieve that with ChatGPT by setting a system prompt that told ChatGPT that it 
will be given a math problem and it should explain how to solve the problem, 
but use different numbers (and a new story).

> Best Regards,
> Andre
> 
> 

Loren



Re: Interest in building an LLM frontend for KDE

2023-12-02 Thread Andre Heinecke
Hi again,

On Saturday, 02 December 2023 22:33:39 CET Andre Heinecke wrote:
> Anyhow! Back to topic!

Btw, this commit is a very good example of how I use GPT-3.5 in day-to-day life:
https://dev.gnupg.org/rW2cc0f90e94f905f552e7a51263d882d626ad7ec6

That commit was mostly LLM-generated. I only very recently started doing such
things.

The query was: "Correct but do not modify the contents in braces". I had to
run the results through "fmt" because GPT-3.5 always puts things out on single
lines.
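The rewrap step described above can be reproduced without fmt; a rough sketch using Python's textwrap (the 72-column width for commit messages is an assumption):

```python
# GPT-3.5 returned the corrected text as one long line; this re-wraps it
# the way piping through "fmt" would. 72 columns is a conventional width
# for commit messages, assumed here.
import textwrap

def rewrap(single_line: str, width: int = 72) -> str:
    """Wrap a single long line into lines no longer than `width`."""
    return "\n".join(textwrap.wrap(single_line, width=width))

corrected = ("This commit fixes several typos in the documentation " * 3).strip()
print(rewrap(corrected))
```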

I recently used Microsoft Word and it was pretty nice how it automatically
suggested things like "This is a filler word, you should try to avoid it".
That is what I meant by KMail integration: something nice and not intrusive
that just helps you bring your meaning across better.


Best Regards,
Andre

-- 
GnuPG.com - a brand of g10 Code, the GnuPG experts.

g10 Code GmbH, Erkrath/Germany, AG Wuppertal HRB14459
GF Werner Koch, USt-Id DE215605608, www.g10code.com.

GnuPG e.V., Rochusstr. 44, D-40479 Düsseldorf.  VR 11482 Düsseldorf
Vorstand: W.Koch, B.Reiter, A.HeineckeMail: bo...@gnupg.org
Finanzamt D-Altstadt, St-Nr: 103/5923/1779.   Tel: +49-211-28010702



Re: Interest in building an LLM frontend for KDE

2023-12-02 Thread Julius Künzel


02.12.2023 15:08:47 Carl Schwan :

> On Friday, December 1, 2023 5:44:26 AM CET Andre Heinecke wrote:
>> Hi.
>>
>> On Friday, 01 December 2023 03:53:22 CET Loren Burkholder wrote:
>>> they can be quite useful for tasks like programming.
>>
>> I need it desperately for intelligent spellchecking / grammar fixes
>>
>>> Should we be joining the mainstream push to put AI into everything or
>>> should we stand apart and let Microsoft have its fun focusing on AI
>>> instead of potentially more useful features? I don't recall seeing any
>>> discussion about this before (at least not here), so I think those are
>>> all questions that should be fairly considered before development on a
>>> KDE LLM frontend begins.
>> I don't think so. I have the slight feeling that you want to start an
>> abstract discussion here and then magically the "KDE Community" will
>> develop something. Just do it or don't. It will always be in the users
>> freedom to use it or not. I would love to have an optional KMail plugin
>> that interacts with an LLM. Others might not.
>
> I just want to point out that KMail already has, via KTextAddons, integration
> with many AI technologies, both online and local. This is not enabled by
> default, and many of the plugins need some extra configuration (e.g. an API
> key) to work.
>
> For translations, KTextAddons supports bergamot (local and open source) and
> libretranslate (local and open source), as well as yandex, deepl, bing,
> google, and lingva.
>
> For grammar checks, KTextAddons supports languagetool (open core and
> self-hostable) and grammalecte (open source, but only for French).
>
> For speech to text, KTextAddons supports whisper (local and open source),
> vosk (local and open source), and google.
>
> For text to speech, KTextAddons uses QTextToSpeech underneath, which uses a
> variety of local backends: https://doc.qt.io/qt-6/qttexttospeech-engines.html
>
> More details can be found in the repo
> https://invent.kde.org/libraries/ktextaddons/
>
> KTextAddons is currently used in Ruqola and the KDE PIM apps, and I was
> hoping to at some point also find the time to add it to some QML app.
> Laurent already did some work to separate the core part from the widgets
> parts.
>
> Cheers,
> Carl
>
>

That is interesting, because we use Whisper and VOSK for subtitle generation
(speech-to-text) in Kdenlive too. Unfortunately, however, KTextAddons seems to
have zero (API) documentation. I guess this might be one of the reasons its
features are pretty unknown to devs outside of the PIM universe.

Cheers,
Julius

(Sorry for the empty message I accidentally sent before)



Re: Interest in building an LLM frontend for KDE

2023-12-02 Thread Carl Schwan
On Friday, December 1, 2023 5:44:26 AM CET Andre Heinecke wrote:
> Hi.
> 
> On Friday, 01 December 2023 03:53:22 CET Loren Burkholder wrote:
> > they can be quite useful for tasks like programming.
> 
> I need it desperately for intelligent spellchecking / grammar fixes
> 
> > Should we be joining the mainstream push to put AI into everything or
> > should we stand apart and let Microsoft have its fun focusing on AI
> > instead of potentially more useful features? I don't recall seeing any
> > discussion about this before (at least not here), so I think those are
> > all questions that should be fairly considered before development on a
> > KDE LLM frontend begins.
> I don't think so. I have the slight feeling that you want to start an
> abstract discussion here and then magically the "KDE Community" will
> develop something. Just do it or don't. It will always be in the users
> freedom to use it or not. I would love to have an optional KMail plugin
> that interacts with an LLM. Others might not.

I just want to point out that KMail already has, via KTextAddons, integration
with many AI technologies, both online and local. This is not enabled by
default, and many of the plugins need some extra configuration (e.g. an API
key) to work.

For translations, KTextAddons supports bergamot (local and open source) and
libretranslate (local and open source), as well as yandex, deepl, bing,
google, and lingva.

For grammar checks, KTextAddons supports languagetool (open core and
self-hostable) and grammalecte (open source, but only for French).

For speech to text, KTextAddons supports whisper (local and open source),
vosk (local and open source), and google.

For text to speech, KTextAddons uses QTextToSpeech underneath, which uses a
variety of local backends: https://doc.qt.io/qt-6/qttexttospeech-engines.html

More details can be found in the repo
https://invent.kde.org/libraries/ktextaddons/

KTextAddons is currently used in Ruqola and the KDE PIM apps, and I was hoping
to at some point also find the time to add it to some QML app. Laurent already
did some work to separate the core part from the widgets parts.

Cheers,
Carl

 





Re: Interest in building an LLM frontend for KDE

2023-12-02 Thread rhkramer
Ahh, I forgot one thing -- interspersed below:

On Saturday, December 02, 2023 08:21:46 AM rhkra...@gmail.com wrote:
> On Friday, December 01, 2023 07:59:22 AM AnnoyingRains wrote:
> > I see no issue with this at all!
> > As long as the training data is ethically obtained and it uses KDE's
> > tech stack, I feel like this app could integrate fine, if it is as you
> > have described it so far!
> 
> Just chiming in from the peanut gallery: I like the idea -- something I'd
> like to have is my own AI trained with only my own (vetted) data.
>
> To clarify, I might add data from sources other than my own writings, but
> only after vetting it such that I consider it reliable.
>
> It would (I hope) be easier to load and use than a free-format database
> (which is something I've been building, both software and content), but my
> next step requires C++ and I just can't seem to grok C++ (and all the
> associated libraries and other infrastructure that comes into play).
>
> I haven't looked very hard at AIs, but I do get the idea that training them
> (at least some of them) requires that data be formatted to suit the AI to
> some extent, which might be more pain than I want.

It would also be nice to have a way to somehow specify a measure of
reliability for the data that I use to train the AI, and then have that
measure somehow displayed in the results. (Aside: I'm happy to work in text --
I don't need voice recognition or speech.)

I realize that such a measure of reliability is a matter for the AI backend,
not (I think) any frontend that might be created, but I wanted to mention it.


Re: Interest in building an LLM frontend for KDE

2023-12-02 Thread rhkramer
On Friday, December 01, 2023 07:59:22 AM AnnoyingRains wrote:
> I see no issue with this at all!
> As long as the training data is ethically obtained and it uses KDE's
> tech stack, I feel like this app could integrate fine, if it is as you
> have described it so far!

Just chiming in from the peanut gallery: I like the idea -- something I'd
like to have is my own AI trained with only my own (vetted) data.

To clarify, I might add data from sources other than my own writings, but
only after vetting it such that I consider it reliable.

It would (I hope) be easier to load and use than a free-format database
(which is something I've been building, both software and content), but my
next step requires C++ and I just can't seem to grok C++ (and all the
associated libraries and other infrastructure that comes into play).

I haven't looked very hard at AIs, but I do get the idea that training them
(at least some of them) requires that data be formatted to suit the AI to
some extent, which might be more pain than I want.



Re: Interest in building an LLM frontend for KDE

2023-12-01 Thread Joshua Goins
Hey Loren,

I agree with everyone else that as long as the model is ethically sourced and
local-by-default (which are my two main problems with modern LLMs), I have no
qualms about including such a frontend in KDE.

I think the hardest part will be the marketing around this new software. One
of our current goals is "Sustainable Software", and current LLM models, as far
as I understand, are extremely power hungry and inefficient. If you do go
ahead with this project, I would be very careful with how we promote it. It
would have to somehow push KDE values while also avoiding the ethical,
climate, and social minefield that's currently littering this topic. Just
something to keep in mind.

Thanks,
Josh




Re: Interest in building an LLM frontend for KDE

2023-12-01 Thread AnnoyingRains
I see no issue with this at all!
As long as the training data is ethically obtained and it uses KDE's
tech stack, I feel like this app could integrate fine, if it is as you
have described it so far!

- Kye Potter


On Fri, Dec 1, 2023 at 11:41 PM Loren Burkholder
 wrote:
>
> > I don't think so. I have the slight feeling that you want to start an 
> > abstract
> > discussion here and then magically the "KDE Community" will develop 
> > something.
> > Just do it or don't. It will always be in the users freedom to use it or 
> > not.
> > I would love to have an optional KMail plugin that interacts with an LLM.
> > Others might not.
> >
> I'm definitely not trying to get the KDE community to develop something for 
> me. As I said earlier, I am starting this discussion solely because I am 
> aware that AI is a very sensitive topic and I wanted to make sure that there 
> was reasonable support before I just kicked off development.
>
> With that being said, I appreciate your take. KDE does support clients for 
> various nonfree services (e.g. KMail has support for Microsoft Exchange); 
> while KDE obviously doesn't promote them, it understands that many people do 
> use them, and it's worthwhile to provide support for those people.
>
> Loren


Re: Interest in building an LLM frontend for KDE

2023-12-01 Thread Loren Burkholder
> I don't think so. I have the slight feeling that you want to start an 
> abstract 
> discussion here and then magically the "KDE Community" will develop 
> something. 
> Just do it or don't. It will always be in the users freedom to use it or not. 
> I would love to have an optional KMail plugin that interacts with an LLM. 
> Others might not.
> 
I'm definitely not trying to get the KDE community to develop something for me. 
As I said earlier, I am starting this discussion solely because I am aware that 
AI is a very sensitive topic and I wanted to make sure that there was 
reasonable support before I just kicked off development.

With that being said, I appreciate your take. KDE does support clients for
various nonfree services (e.g. KMail has support for Microsoft Exchange);
while KDE obviously doesn't promote them, it understands that many people do
use them, and it's worthwhile to provide support for those people.

Loren



Re: Interest in building an LLM frontend for KDE

2023-12-01 Thread Felix Ernst
+1 to what Ethan B. said i.e.:

 > I am anti-LLM on the grounds that the training sets were created without the 
 > original authors' consent. I see no issue with a libre/ethical LLM, if there 
 > is one, though. If a developer or team of developers wants to implement a Qt 
 > and KDE-integrated LLM app, I have no problem with that, but I believe KDE 
 > as an organization should probably steer clear of such a thorny subject. 
 > It's sure to upset a lot of users no matter what position is taken. On the 
 > other hand, for those people who do make use of AI tools, a native interface 
 > would be nice, especially one as feature-ful as you're describing...

>>Should we limit support to open models like Llama 2 or would we be OK with 
>>adding API support for proprietary models like GPT-4?

As Andre said, there is not much reason to discuss this if it can be
implemented "generalized and backend agnostic". Aside from that, I want us to
be aware that if we steer users towards LLMs created by known bad actors
(which all big LLM companies currently seem to be), we give those bad actors
the power to manipulate and lie to users, which will undoubtedly happen more
and more if LLMs manage to establish themselves as a middleman between users
and the more valuable training data. However, if the frontend you want to
start allows users to choose an LLM to work with, and we can even recommend
one LLM over another there, then this shouldn't be a problem.

>>Should we be joining the mainstream push to put AI into everything or should 
>>we stand apart and let Microsoft have its fun focusing on AI instead of 
>>potentially more useful features?

I personally wouldn't want to spend my time on what you are planning to do. I 
don't think there is much to gain there. At the end of the day, everyone can 
decide this for themselves though.

Cheers,
Felix



Re: Interest in building an LLM frontend for KDE

2023-11-30 Thread Ethan Barry
On Thursday, November 30th, 2023 at 8:53 PM, Loren Burkholder 
 wrote:
> 
> 
> Howdy, everyone!
> 
> You are all undoubtedly aware of the buzz around LLMs for the past year. Of 
> course, there are many opinions on LLMs, ranging from "AI is the 
> future/endgame for web search or programming or even running your OS" to "AI 
> should be avoided like the plague because it hallucinates and isn't 
> fundamentally intelligent" to "AI is evil because it was trained on massive 
> datasets that were scraped without permission and regurgitates that data 
> without a license". I personally am of the opinion that while output from 
> LLMs should be taken with a grain of salt and cross-examined against 
> trustworthy sources, they can be quite useful for tasks like programming.
> 
> KDE obviously is not out to sell cloud services; that's why going to 
> https://kde.org doesn't show you a banner "Special offer! Get 1 TB of cloud 
> storage for $25 per month!" Therefore, I'm not here to talk about hosting a 
> (paywalled) cloud LLM. However, I do think that it is worthwhile opening 
> discussion about a KDE-built LLM frontend app for local, self-hosted, or 
> third-party-hosted models.
> 
> From a technical standpoint, such an app would be fairly easy to implement. 
> It could rely on Ollama[0] (or llama.cpp[1], although llama.cpp isn't focused 
> on a server mode) to host the actual LLM; either of those backends support a 
> wide variety of hardware (including running on CPU; no fancy GPU required), 
> as well as many open-source LLM models like Llama 2. Additionally, using 
> Ollama could allow users to easily interact with remote Ollama instances, 
> making this an appealing path for users who wished to offload LLM work to a 
> home server or even offload from a laptop to a more powerful desktop.
> 
> From an ideological standpoint, things get a little more nuanced. Does KDE 
> condone or condemn the abstract concept of an LLM? What about actual models 
> we have available (i.e. are there no models today that were trained in a way 
> we view as morally OK)? Should we limit support to open models like Llama 2 
> or would we be OK with adding API support for proprietary models like GPT-4? 
> Should we be joining the mainstream push to put AI into everything or should 
> we stand apart and let Microsoft have its fun focusing on AI instead of 
> potentially more useful features? I don't recall seeing any discussion about 
> this before (at least not here), so I think those are all questions that 
> should be fairly considered before development on a KDE LLM frontend begins.
> 
> I think it's also worth pointing out that while we can sit behind our screens 
> and spout out our ideals about AI, there are many users who aren't really 
> concerned about that and just like having a chatbot that responds in what at 
> least appears to be an intelligent manner about whatever they ask it. I have 
> personally made use of AI while programming to help me understand APIs, and 
> I'm sure that other people here have also had positive experiences with AI 
> and plan to continue using it.
> 
> I fully understand that by sending this email I will likely be setting off a 
> firestorm of arguments about the morality of AI, but I'd like to remind 
> everyone to (obviously) keep it civil. And for the record, if public opinion 
> comes down in favor of building a client, I will happily assume the 
> responsibility of kicking off and potentially maintaining development of said 
> client.
> 
> Cheers,
> Loren Burkholder
> 
> P.S. If development of such an app goes through, you can get internet points 
> by adding support for Stable Diffusion and/or DALL-E :)
> 
> [0]: https://github.com/jmorganca/ollama
> [1]: https://github.com/ggerganov/llama.cpp


I am anti-LLM on the grounds that the training sets were created without the 
original authors' consent. I see no issue with a libre/ethical LLM, if there is 
one, though. If a developer or team of developers wants to implement a Qt and 
KDE-integrated LLM app, I have no problem with that, but I believe KDE as an 
organization should probably steer clear of such a thorny subject. It's sure to 
upset a lot of users no matter what position is taken. On the other hand, for 
those people who do make use of AI tools, a native interface would be nice, 
especially one as feature-ful as you're describing...

Regards,

Ethan B.


Re: Interest in building an LLM frontend for KDE

2023-11-30 Thread Andre Heinecke
Hi.

On Friday, 01 December 2023 03:53:22 CET Loren Burkholder wrote:
> they can be quite useful for tasks like programming.

I need it desperately for intelligent spellchecking / grammar fixes.

> From a technical standpoint, such an app would be fairly easy to implement.
> It could rely on Ollama[0] (or llama.cpp[1], although llama.cpp isn't
> focused on a server mode) to host the actual LLM; either of those backends
> support a wide variety of hardware (including running on CPU; no fancy GPU
> required), as well as many open-source LLM models like Llama 2.
> Additionally, using Ollama could allow users to easily interact with remote
> Ollama instances, making this an appealing path for users who wished to
> offload LLM work to a home server or even offload from a laptop to a more
> powerful desktop.

I played around with Gpt4all a bit and liked it very much, especially since I 
could alternatively put in my OpenAI API key in a generalized frontend. Since 
hardware will only get better, some local solutions for easy tasks might also 
make sense. I can totally see a use for a generalized KDE frontend or even a 
Frameworks API to interact with LLMs. 
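Since Ollama's documented /api/chat and OpenAI-compatible /v1/chat/completions 
endpoints accept the same "messages" shape, a generalized frontend mostly 
reduces to picking a URL and payload per backend. A minimal, untested Python 
sketch of that idea (the endpoint paths are the publicly documented ones; the 
function name and structure are my own assumptions):

```python
import json


def build_chat_request(backend: str, base_url: str, model: str, prompt: str):
    """Build (url, json_body) for a one-shot chat completion.

    Both backends accept the same OpenAI-style "messages" list, which is
    what makes a backend-agnostic frontend feasible in the first place.
    """
    messages = [{"role": "user", "content": prompt}]
    if backend == "ollama":
        # Ollama's native chat endpoint; stream=False returns one JSON object
        url = base_url.rstrip("/") + "/api/chat"
        payload = {"model": model, "messages": messages, "stream": False}
    elif backend == "openai":
        # Any OpenAI-compatible server (an API key header would go on the request)
        url = base_url.rstrip("/") + "/v1/chat/completions"
        payload = {"model": model, "messages": messages}
    else:
        raise ValueError(f"unknown backend: {backend}")
    return url, json.dumps(payload).encode()
```

A real frontend would add streaming and authentication, but the point is that 
the per-backend differences fit in a few lines.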

> From an ideological standpoint, things get a little more nuanced. Does KDE
> condone or condemn the abstract concept of an LLM? What about actual models
> we have available (i.e. are there no models today that were trained in a way
> we view as morally OK)? Should we limit support to open models like Llama 2
> or would we be OK with adding API support for proprietary models like GPT-4?

Please leave ideology out of it; we are doing Free Software. So if you 
"ideologically" do not want to have something, I have the freedom to come along 
with my different ideology and just do it anyway. If you really want to work on 
something like that and not just start some academic discussion, keep things as 
generalized and backend-agnostic as possible, IMO.

> Should we be joining the mainstream push to put AI into everything or should
> we stand apart and let Microsoft have its fun focusing on AI instead of
> potentially more useful features? I don't recall seeing any discussion about
> this before (at least not here), so I think those are all questions that
> should be fairly considered before development on a KDE LLM frontend begins.

I don't think so. I have the slight feeling that you want to start an abstract 
discussion here and then magically the "KDE Community" will develop something. 
Just do it or don't. It will always be the user's freedom to use it or not. 
I would love to have an optional KMail plugin that interacts with an LLM. 
Others might not. 🤷‍♂️

> I fully understand that by sending this email I will likely be setting off a
> firestorm of arguments about the morality of AI, but I'd like to remind
> everyone to (obviously) keep it civil. And for the record, if public opinion
> comes down in favor of building a client, I will happily assume the
> responsibility of kicking off and potentially maintaining development of said
> client.

I don't really see why this should kick off a firestorm of arguments. It's all 
about freedom. It's not like you are proposing to forcefully feed all the users' 
data into a remote LLM as a requirement to get Plasma to start.

Start a project on invent, create something useful, and then we'll see where it 
goes: how many users it finds and how well it integrates. I would happily join 
you; I am very interested in this. A simple first useful prototype for me 
would be KMail Messagecomposer integration where it could help me write mails, 
just like ELOPe. 

I am currently working on a prototype to combine 
https://invent.kde.org/schwarzer/klash/ with a local LibreTranslate to at least 
create fuzzy translations for po files and do some trivial translation tasks 
automatically. I think this is slightly related. 
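For the curious, the LibreTranslate side of that is a single documented POST 
/translate call returning "translatedText". A rough, untested Python sketch of 
turning a msgid into a fuzzy gettext entry (the helper names and default port 
are my own assumptions):

```python
import json
import urllib.request


def fuzzy_po_entry(msgid: str, target_lang: str,
                   server: str = "http://localhost:5000") -> str:
    """Ask a local LibreTranslate instance for a translation and format it
    as a fuzzy po entry, so a human translator still has to review it."""
    payload = json.dumps({"q": msgid, "source": "en",
                          "target": target_lang, "format": "text"}).encode()
    req = urllib.request.Request(server + "/translate", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        msgstr = json.load(resp)["translatedText"]
    return format_entry(msgid, msgstr)


def format_entry(msgid: str, msgstr: str) -> str:
    # The "#, fuzzy" flag keeps the entry out of production until reviewed
    return f'#, fuzzy\nmsgid "{msgid}"\nmsgstr "{msgstr}"\n'
```

Batch-translating a whole po file would just loop this over the untranslated 
entries.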


Best Regards,
Andre

-- 
GnuPG.com - a brand of g10 Code, the GnuPG experts.

g10 Code GmbH, Erkrath/Germany, AG Wuppertal HRB14459
GF Werner Koch, USt-Id DE215605608, www.g10code.com.

GnuPG e.V., Rochusstr. 44, D-40479 Düsseldorf.  VR 11482 Düsseldorf
Vorstand: W.Koch, B.Reiter, A.Heinecke.   Mail: bo...@gnupg.org
Finanzamt D-Altstadt, St-Nr: 103/5923/1779.   Tel: +49-211-28010702

signature.asc
Description: This is a digitally signed message part.


Interest in building an LLM frontend for KDE

2023-11-30 Thread Loren Burkholder
Howdy, everyone!

You are all undoubtedly aware of the buzz around LLMs for the past year. Of 
course, there are many opinions on LLMs, ranging from "AI is the future/endgame 
for web search or programming or even running your OS" to "AI should be avoided 
like the plague because it hallucinates and isn't fundamentally intelligent" to 
"AI is evil because it was trained on massive datasets that were scraped 
without permission and regurgitates that data without a license". I personally 
am of the opinion that while output from LLMs should be taken with a grain of 
salt and cross-examined against trustworthy sources, they can be quite useful 
for tasks like programming.

KDE obviously is not out to sell cloud services; that's why going to 
https://kde.org doesn't show you a banner "Special offer! Get 1 TB of cloud 
storage for $25 per month!" Therefore, I'm *not* here to talk about hosting a 
(paywalled) cloud LLM. However, I do think that it is worthwhile opening 
discussion about a KDE-built LLM frontend app for local, self-hosted, or 
third-party-hosted models.

From a technical standpoint, such an app would be fairly easy to implement. It 
could rely on Ollama[0] (or llama.cpp[1], although llama.cpp isn't focused on 
a server mode) to host the actual LLM; either of those backends supports a wide 
variety of hardware (including running on CPU; no fancy GPU required), as well 
as many open-source LLM models like Llama 2. Additionally, using Ollama could 
allow users to easily interact with remote Ollama instances, making this an 
appealing path for users who wished to offload LLM work to a home server or 
even offload from a laptop to a more powerful desktop.
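To make the Ollama path concrete: its documented /api/generate endpoint takes a 
plain JSON payload, and pointing the base URL at another machine is all the 
"offload to a home server" case needs. A hedged, untested Python sketch (the 
URL and model name are placeholders):

```python
import json
import urllib.request

# Ollama's default port; point this at a remote host to offload the work
OLLAMA_URL = "http://localhost:11434"


def build_payload(prompt: str, model: str) -> bytes:
    # "stream": False asks for a single JSON object instead of chunked output
    return json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()


def generate(prompt: str, model: str = "llama2") -> str:
    """One-shot completion against Ollama's /api/generate endpoint."""
    req = urllib.request.Request(OLLAMA_URL + "/api/generate",
                                 data=build_payload(prompt, model),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

A Qt frontend would do the same request through QNetworkAccessManager, with 
streaming enabled for incremental display.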

From an ideological standpoint, things get a little more nuanced. Does KDE 
condone or condemn the abstract concept of an LLM? What about actual models we 
have available (i.e. are there no models today that were trained in a way we 
view as morally OK)? Should we limit support to open models like Llama 2 or 
would we be OK with adding API support for proprietary models like GPT-4? 
Should we be joining the mainstream push to put AI into everything or should 
we stand apart and let Microsoft have its fun focusing on AI instead of 
potentially more useful features? I don't recall seeing any discussion about 
this before (at least not here), so I think those are all questions that 
should be fairly considered before development on a KDE LLM frontend begins.

I think it's also worth pointing out that while we can sit behind our screens 
and spout out our ideals about AI, there are many users who aren't really 
concerned about that and just like having a chatbot that responds in what at 
least appears to be an intelligent manner about whatever they ask it. I have 
personally made use of AI while programming to help me understand APIs, and I'm 
sure that other people here have also had positive experiences with AI and plan 
to continue using it.

I fully understand that by sending this email I will likely be setting off a 
firestorm of arguments about the morality of AI, but I'd like to remind 
everyone to (obviously) keep it civil. And for the record, if public opinion 
comes down in favor of building a client, I will happily assume the 
responsibility of kicking off and potentially maintaining development of said 
client.

Cheers,
Loren Burkholder

P.S. If development of such an app goes through, you can get internet points by 
adding support for Stable Diffusion and/or DALL-E :)

[0]: https://github.com/jmorganca/ollama
[1]: https://github.com/ggerganov/llama.cpp
