Jean Louis <[email protected]> writes:
> The advent of Large Language Models (LLMs) has introduced a
> transformative capability: automating the creation of test
> functions.
Test code which may be correct. Or may be as wrong as the test code I
recently reviewed.
> I prefer if you make those personal experiences, that you do not
> generalize, as many people are reading and getting influenced by
> generalized statements.
Please do the same.
--skipped unchecked AI code--
> Maybe function doesn't work well, I would need to test it, but it's
> not much relevant, point is that it can count matches, maybe there is
> some glitch. I leave it, because I want only to count one time per
> line, more or less.
Isn’t that the job of M-x occur which already exists?
I didn’t intend to read the code that you didn’t read before pasting,
but now I actually did.
It’s trivial to see that it has quadratic runtime, because it restarts
the search on every single matching line.
While M-x occur gives you the info right away:
18 matches in 14 lines for "I" in buffer: "Re: [RFC] Pros and cons of using LLM for patche…"
If people use this function, their experience doesn’t get better, it
gets worse. And then they come and complain that Emacs Lisp is too slow
because they run an LLM-fabricated function with O(n²) runtime instead
of the existing tool that works well.
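To make the complexity difference concrete, here is a minimal sketch, in Python rather than the posted Emacs Lisp, of the two counting strategies. The function names and data are my own illustration, not code from the thread: restarting the scan from the top on every matching line is quadratic, while a single forward pass, which is effectively what occur does, is linear.

```python
def count_quadratic(lines, needle):
    """Restart the scan from the start for each matching line: O(n^2)."""
    total = 0
    for i, line in enumerate(lines):
        if needle in line:
            # Re-walk everything up to this point, as the pasted code did.
            total = sum(1 for l in lines[: i + 1] if needle in l)
    return total

def count_linear(lines, needle):
    """One forward pass over the buffer: O(n)."""
    return sum(1 for line in lines if needle in line)
```

Both return the same count; only the work done differs, and on a large buffer the difference is the slowdown users would then blame on Emacs Lisp.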
If a human contributor writes this code, I correct them once and then
they know what to look out for. If someone just pastes LLM code, I could
just as well talk into the void because the next LLM code will likely
contain similar mistakes.
And this wasn’t some rookie mistake: you’re experienced with AI and this
is the code you pasted.
>> First, there's bad code masquerading as good code, which creates a heavy
>> workload for the maintainer and doesn't produce any useful work. It's akin
>> to a DOS attack on reviewer.
>
> The generalization that code produced by language models is inherently
> bad and imposes a burden ignores that the quality of code depends on
The point is that the contributors don’t even know. Pasting something
unchecked into a mailing list forces everyone to either waste time on it
or ignore it.
Before you share LLM output, it’s your job to check it for validity.
> I haven't tested it, nor do I have experience submitting patches in
> this area.
Maybe not argue about that part, then?
> However, as technology advances, we will eventually reach a point
> where submitting patches becomes seamless and error-free. While we are
That’s quite the generalization from your personal experience,
discounting the experience of those who see things getting worse.
> I did not try these tools, but I can see humanity advancing:
You did not try. You know that you don’t know whether these work well.
All you know is that some people try something and publish their tries.
> it is envy disguised in ethics — a convenient moral cloak
This sentence is an ad hominem, a long form of „you’re just envious“.
But this generalizes nicely to all of us:
Jean Louis <[email protected]> writes:
> I prefer if you make those personal experiences, that you do not
> generalize, as many people are reading and getting influenced by
> generalized statements.
👍
If you don’t know whether something generalizes, don’t claim it does.
Best wishes,
Arne
--
To be unpolitical
means to be political
without noticing it.
https://www.draketo.de
