* Jim Porter <[email protected]> [2026-03-10 02:55]:
> On 3/7/2026 11:45 AM, Ihor Radchenko wrote:
> > I do not see this being a problem with LLMs. If someone is pushing
> > changes carelessly, that's not acceptable. With or without LLMs.
> 
> One issue here is review burden; if I'm reviewing sloppy
> human-generated code, it's usually *very* obvious, and there is an
> inexperienced contributor behind it, so it's worth some extra effort
> to guide them toward writing an acceptable patch. With LLM-generated
> code, the patch is often cleaner (at least superficially), which in
> my experience requires much closer attention from the reviewer.

While that is an issue, it may not be the real issue at hand.

We have to resolve the fundamental problem at its core. Masses of
programmers worldwide are stunned by what LLMs generate, and they
cannot compete with it. Many programmers would like to adopt or use
these technologies, but lack the resources to run them. Those
resources may be knowledge, money, hardware and so on.

Many things that seem like issues are actually just a lack of
knowledge or skill. Once you have the right expertise in a specific
area, those problems vanish.

================================================================
The real issue is gaining the knowledge and skills needed to
effectively adopt and leverage generative technologies for users'
benefit.
================================================================

Back to your problem, where I personally see no problem at all:

> One issue here is review burden; if I'm reviewing sloppy
> human-generated code, it's usually *very* obvious, and there is an
> inexperienced contributor behind it, so it's worth some extra effort
> to guide them toward writing an acceptable patch. With LLM-generated
> code, the patch is often cleaner (at least superficially), which in
> my experience requires much closer attention from the reviewer.

To solve the review burden traditionally while embracing LLMs,
implement automated pre-review checks (static analysis, unit test
generation, vulnerability scanning, style compliance) directly in the
submission pipeline, so that only vetted, reasonably high-quality
patches ever reach the human reviewer. That reduces cognitive load
and improves both submission quality and review efficiency.
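As a minimal sketch of such a pre-review gate: the patch is run
through a list of checks, and only if all of them pass is it handed to
a human. The two checks below (trailing whitespace, line length) are
toy stand-ins for real tools such as static analyzers or vulnerability
scanners, and the function names are my own invention, not any
project's actual interface.

```python
# Sketch of a pre-review gate: run every check over a patch and only
# mark it ready for human review if all checks pass.

def check_no_trailing_whitespace(patch):
    """Return line numbers that end in spaces or tabs."""
    return [n for n, line in enumerate(patch.splitlines(), 1)
            if line != line.rstrip()]

def check_line_length(patch, limit=79):
    """Return line numbers longer than the style limit."""
    return [n for n, line in enumerate(patch.splitlines(), 1)
            if len(line) > limit]

def pre_review(patch, checks):
    """Run all checks; ok is True only when every check found nothing."""
    report = {check.__name__: check(patch) for check in checks}
    ok = all(not offending for offending in report.values())
    return ok, report

if __name__ == "__main__":
    patch = "def f(x): \n    return x + 1\n"   # note trailing space
    ok, report = pre_review(patch, [check_no_trailing_whitespace,
                                    check_line_length])
    print("ready for human review" if ok else "rejected: %s" % report)
```

In a real setup each check would wrap an external tool run over the
patched tree, but the gate logic stays the same: the reviewer only
sees patches for which the report is clean.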

In other words -- solve it from both sides:

- users having the knowledge and skills to use assistive LLM
  technologies

- developers reviewing submissions guiding users on how to use LLM
  technologies adequately and submit patches that work

Embracing new technologies, rather than fearing them, is the key to
progress.

> More broadly though, I'm concerned that LLM-generated contributions
> undermine the social basis of free software.

LLM-generated contributions do not inherently undermine the social
basis of free software—unless they are deployed without transparency,
attribution, or community engagement. The social foundation of free
software rests on collaboration, shared understanding, mutual
learning, and ethical responsibility—not on the human origin of every
line of code.

> I'd much rather my limited time and energy go towards building up the next
> generation of free software hackers than to reviewing the output of a
> statistical model so I can root out all the highly-plausible but
> nevertheless incorrect bits.

I remember being told that a future would come in which we instruct
computers in natural language, as the highest-level programming
language. Maybe it is my hallucination, but I have memories of it. I
waited a long time for that to happen, and there is still so much
further to go.

In the 1960s and 1970s, projects like SHRDLU (by Terry Winograd, 1972)
demonstrated that a computer could understand and act upon natural
language commands in a restricted domain, sparking optimism that
natural language could one day replace formal programming syntax.

https://en.wikipedia.org/wiki/History_of_natural_language_processing

So here we are in the 21st century, and the vision from the 1960s and
1970s is still not fully here in practice.

We shall embrace generative technologies not as a threat to our craft,
but as the long-awaited evolution of human-computer collaboration—or
risk becoming irrelevant in a world that no longer waits for
perfection, but for progress.

Jean Louis
