Hi,

my personal two cents about AI, LLMs, and their place in our community.

19.02.2026 18:46:46 Edouard Klein <[email protected]>:
> I, for one, think this initiative is interesting. When I tried acme, the
> lack of syntax coloring was a big hindrance, I'm probably not the only
> one. This could lead to more adoption.

Actually, its absence taught me to READ code instead of focusing too much on 
writing it. But that's just my personal story; it may be different for anyone else.

I personally prefer some slight context-aware highlighting nowadays: unused 
variables/includes, the symbol I'm currently hovering over, matching 
parentheses/brackets/braces, to name a few. That's why my Rider IDE mostly 
looks like acme (thanks to whoever built the theme!)

> Now, LLMs companies are mostly evil, and LLM generated code is mostly
> shit, but with proper quality gates it can be OK. These tools are here
> to stay, and I think that Paul's method of seeing the *.md files as the
> source and the LLM process as a kind of preprocessor is the best way to
> go about it.

In the end, the result counts. And if someone wants to invest the time to clean 
up after the parrot, sure, why not. The technology is interesting and sometimes 
really helpful.

I think the ongoing experiment Paul does is valid, also under the premise that 
it'll be cleaned up before any integration attempt, but that's up to the 
maintainers then.

In that regard, I think it would be unethical to deliver AI slop for 
integration, just like any (non-AI) slop. And it's unethical to expect 
maintainers to look at AI generated code.

> I know that this community values the craft and prides itself on code
> quality. I use it as an example to strive for when I teach computer
> science, systems design, or programming, but compilers were once seen as
> LLMs are seen now (minus the copyright infringement and the ecological
> cost). Have you seen the output of the Go compiler for Hello world ? Yet
> go is an OK language for this community.

Personally, I never had a good experience with go, the package. The language 
may be fine, I don't really know, but any software I wanted to install fetched 
many dependencies and eventually corrupted my filesystem. That was mostly on 
cwfs, a few years ago. Probably just not a good first experience.

> There are ways to run open models on one own's hardware, and when doing
> so I use less electricity than I use to heat up my oven when I cook. We
> can avoid the ethical pitfalls, and learn how to put them to good use.

That's one way to put it. However, in my current experiments, my computer isn't 
powerful enough to run really useful models at appropriate speeds. My GPU has 
"just" 8GB of memory, and the machine has "just" 32GB. That's OK for batch 
processing, but for live use it's just too slow, at least in my current setup.

That being said, it's easy to say "just run open models locally," but with 
current hardware prices I personally can't look at the power consumption alone.

> I think Paul's approach of forking, being forward with the LLM use, and
> giving the prompts is a good standard to set to experiment with these.

Yes, definitely. Also sharing it with the community in its state, including the 
disclaimer "don't look at it, it's AI crap".

> I see in your repo that this is going to be submitted to IWP9 :) I'm
> sorry I won't be able to be there, the discussions are going to be
> lively !

I just stated my own thoughts, ignoring ethics and copyright. What you said 
about that is true, I just wanted to focus on the technical side.

I also think that AI research shouldn't stop at "the ninth wall," but we 
should take the time to think about it from different angles, including our 
perspective as humans. I think we have the luxury of going slowly; we don't 
have to rush things here. In my opinion, a lot of computing is rushed nowadays 
anyway, and some things shouldn't have developed this fast.

Thanks for reading so far,

Have fun!

sirjofri

------------------------------------------
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-Mdbe454d8fa4deb5bdbe76eea
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription