* Dr. Arne Babenhauserheide <[email protected]> [2026-03-14 11:42]:
> Jean Louis <[email protected]> writes:
> > The advent of Large Language Models (LLMs) has introduced a
> > transformative capability: automating the creation of test
> > functions.
> 
> Test code which may be correct. Or may be as wrong as test-code I
> recently reviewed.
> 
> > I prefer if you make those personal experiences, that you do not
> > generalize, as many people are reading and getting influenced by
> > generalized statements.
> 
> Please do the same.

Yes, I do the research, and what I observe is perpetual protest
despite the absence of any substantive issue.

> --skipped unchecked AI code--

You see, that’s precisely what concerns me—the prejudice against code
you initially refused to read, only to later engage with it. My worry
isn’t about the code itself or how it’s generated, but rather the
obvious bias reflected in your dismissive note: “--skipped unchecked
AI code--.” That phrase reveals an aversion to understanding,
triggered the moment “LLM” is mentioned: fear of new technology,
resistance to embracing it, and knee-jerk protest without substantive
engagement.

This echoes broader patterns: social justice activism misapplied—not
grounded in genuine understanding, but fueled by resentment toward
others’ success, especially when it involves tools or technologies one
cannot replicate oneself (e.g., due to GPU costs), or resentment
toward those who leverage AI to create value, build products, and earn
income.

By contrast, I focus on what I excel at: crafting compelling,
profitable quotations. I don’t need to delve into HTML, JavaScript, or
infrastructure—I leverage existing tools and experts, enjoying the
fruits of collaboration and free software innovation, just as I assume
you do.

I don't want to program. I program out of necessity: when no
existing solution fits my needs, I have no choice but to build it
myself. There is a certain mental enjoyment in the work, yes, moments
when the pieces click into place. But if I'm honest? I would trade
every satisfying debug session for a single afternoon on the beach
with my family. The computer is where I end up, not where I long to
be. I want computers working for me.

> > Maybe function doesn't work well, I would need to test it, but it's
> > not much relevant, point is that it can count matches, maybe there is
> > some glitch. I leave it, because I want only to count one time per
> > line, more or less.
> 
> Isn’t that the job of M-x occur which already exists?

I haven't used `occur` nearly as much as you have—clearly—so it didn't
occur to me that it would count matching lines. I was there in the
minibuffer, M-x-ing through `count-match` this and `count-matching`
that, but nothing quite fit.
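
For what it's worth, a single-pass version of what I was after might
look like this (just a sketch, and the name `my-count-matching-lines`
is mine, not a built-in; `forward-line` after each hit is what keeps
it to one count per line and one pass over the buffer):

```elisp
(defun my-count-matching-lines (regexp)
  "Count lines in the current buffer that contain a match for REGEXP.
Each line is counted at most once."
  (interactive "sCount lines matching regexp: ")
  (save-excursion
    (goto-char (point-min))
    (let ((count 0))
      (while (re-search-forward regexp nil t)
        (setq count (1+ count))
        ;; Move past the current line so this line is not counted again
        ;; and the search never restarts from the top.
        (forward-line 1))
      (message "%d matching lines" count)
      count)))
```
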

This experience tells me something else, though:

- I didn't have to bother hundreds of mailing list members with a
  basic question, nor brace myself for the inevitable scolding that so
  often accompanies such inquiries. Like now, with its know-it-all
  attitudes.

- I obtained a perfectly usable function—one that directly counters
  the sweeping claims I keep reading here about how all LLM-generated
  code is inherently broken. No edits. No debugging. It just worked.

> I didn’t intend to read the code that you didn’t read before pasting,
> but now I actually did.
> 
> It’s trivial to see that it has quadratic runtime, because it restarts
> the search on every single matching line.
> 
> While M-x occur gives you the info right away:
> 
>     18 matches in 14 lines for "I" in buffer: "Re: [RFC] Pros and cons of 
> using LLM for patche…"
> 
> If people use this function, their experience doesn’t get better, it
> gets worse. And then they come and complain that Emacs Lisp is too slow
> because they run an LLM-fabricated function with O(n²) runtime instead
> of the existing tool that works well.
> 
> If a human contributor writes this code, I correct them once and then
> they know what to look out for. If someone just pastes LLM code, I could
> just as well talk into the void because the next LLM code will likely
> contain similar mistakes.

And then you move the goalposts.

First, the code was dismissed without a glance—"skipped unchecked AI
code." Now that it's been read, the criticism shifts from "it's
probably wrong" to "it's inefficient." As if every handcrafted Emacs
Lisp function in my config achieves algorithmic perfection. As if
quadratic runtime matters when I'm scanning a few hundred lines of
text to count how many times "TODO" appears.

The function worked. It solved my problem. That was the point.

But somehow, that's never enough for the gatekeepers. First they
demand proof that the code runs. Then they demand proof that it runs
optimally. Then they'll demand proof that I understood every macro
expansion before I dared to paste it.

I don't need to defend the function's Big O complexity. I need to get
back to work.

> >> First, there's bad code masquerading as good code, which creates a heavy 
> >> workload for the maintainer and doesn't produce any useful work. It's akin 
> >> to 
> >> a DOS attack on reviewer.
> >
> > The generalization that code produced by language models is inherently
> > bad and imposes a burden ignores that the quality of code depends on
> 
> The point is that the contributors don’t even know. Paste something
> unchecked into a mailing list to force everyone to either waste time or
> ignore it.

Exactly. That's the thing you've laid bare here—the gatekeeping
masquerading as concern.

"I didn't want to waste time reading it, but now that I have, let me
tell you why your solution that solved your problem is actually a
waste of everyone's time."

Do you see the pattern? First it's "I won't engage." Then it's "I
engaged and found it lacking." The goalposts keep moving because the
real issue was never the code—it was who wrote it and how.

People may think along these lines, but I'm not a contributor. I'm
a user who needed a problem solved. I don't owe the community my
apprenticeship. I don't need to earn my right to ask questions by
first mastering the internals of `occur`. I needed a function that
counts matches, and now I have one.

If the community sees that as an imposition—if they'd rather I
struggle in silence or endure scolding on a mailing list than paste
something that works—then yes, who cares about their gatekeeping?
They've made it clear whose time matters, and it isn't mine.

> Before you share LLM output, it’s your job to check it for validity.

Exactly—and that's the tell, isn't it?

The conversation was never about the function. The function was just
an illustration, a passing example of how I *found* something
useful. But you latched onto it like a lifeline, because engaging with
the actual point—that LLMs can be genuinely helpful tools, that
they're democratizing access to solutions—would require conceding
something you are not ready to concede.

So instead: "Let me examine this trivial helper function you mentioned
in passing, and let me find fault with it. Let me make this about your
failure to validate, about your imposition on the community, about
your breach of protocol."

It's a classic move. Ignore the argument, attack the example. Drown
the signal in noise.

Meanwhile, LLMs are useful to millions of programmers out there.

> > I haven't tested it, nor do I have experience submitting patches in
> > this area.
> 
> Maybe not argue about that part, then?
> 
> > However, as technology advances, we will eventually reach a point
> > where submitting patches becomes seamless and error-free. While we are
> 
> That’s quite the generalization from your personal experience,
> discounting the experience of those who see things getting worse.
> 
> > I did not try these tools, but I can see humanity advancing:
> 
> You did not try. You know that you don’t know whether these work well.
> All you know is that some people try something and publish their tries.
> 
> > it is envy disguised in ethics — a convenient moral cloak
> 
> This sentence is ad-hominem. A long form of „you’re just envious“.
> 
> But this generalizes nicely to all of us:
> 
> Jean Louis <[email protected]> writes:
> > I prefer if you make those personal experiences, that you do not
> > generalize, as many people are reading and getting influenced by
> > generalized statements.
> 
> 👍
> 
> If you don’t know whether something generalizes, don’t claim it does.

Look at how you speak to me. Line by line, quote by quote, picking
apart every phrase as if the goal is to find the one that finally
proves me wrong—or at least proves me lesser.

I told you I haven't tested it, that I don't have experience
submitting patches in this area. Your response? "Maybe not argue
about that part, then?" As if my observations count for nothing
unless I've first paid my dues. As if perspective is only valid when
it comes from inside the walls.

I said I can see humanity advancing, that these tools point somewhere
new. Your response? "You did not try. You know that you don't know."
As if I claimed certainty. As if noticing a direction requires having
walked the entire path.

I said what I see looks like envy disguised in ethics. Your response?
"This sentence is ad-hominem. A long form of 'you're just envious'."

And then—the move you probably thought was clever—you quote my own
words back at me.

But the subject... is long gone.

-- 
Jean Louis
