Jean Louis <[email protected]> writes:

> * chris <[email protected]> [2026-03-14 03:57]:
>
>> In my opinion, code generated by an LLM that hasn't been reviewed by
>> the contributor isn't useful, even if it's correct, good, or optimal
>> by mere luck.
>
> There is more than one way to go, not just "generate code and
> contribute without review". You can generate code, review it
> automatically, file the resulting issues, and then provide enough
> context that review and testing can themselves be done automatically.

At this point you still didn’t check anything. You just made the
automation bigger.

The code you have to review in the end still didn’t get checked by a
human.

If you use that automation to get the submitter to actually check and
review their code before it reaches reviewers, and to bring it into a
format that makes the review easy, that can bring value.

Until then, “don’t just paste LLM output” remains a good heuristic.

> If we evaluate these contributions today, we may not find them
> relevant tomorrow, as they are subject to automation. All repetitive,
> specification-driven tasks will eventually be replaced by computers.

I remember fondly when the guile-xcb bindings were generated by creating
a language that treats the xcb specification itself as code.

That’s the kind of automation I celebrate far more than LLM advances: it
is deterministic and can be reviewed efficiently.

And there’s a single point of truth: if that code creates buggy
behavior, then it’s almost guaranteed that there’s a bug in the
specification, so the spec -- the point of truth for all
implementations -- can be fixed.
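A minimal sketch of what such deterministic, spec-driven generation looks
like, loosely inspired by how guile-xcb derives bindings from the XCB
protocol XML. The spec snippet and the emitted stubs here are
hypothetical illustrations, not the real xcb-proto format: the same
input always yields the same output, so reviewing the generator and the
spec reviews every binding at once.

```python
# Hypothetical miniature spec, for illustration only -- the real
# xcb-proto XML is richer than this.
import xml.etree.ElementTree as ET

SPEC = """
<protocol>
  <request name="CreateWindow" opcode="1">
    <field name="depth" type="CARD8"/>
    <field name="wid" type="WINDOW"/>
  </request>
  <request name="MapWindow" opcode="8">
    <field name="window" type="WINDOW"/>
  </request>
</protocol>
"""

def generate_bindings(spec_xml: str) -> str:
    """Emit one stub function per <request>.

    Deterministic: identical spec input produces identical code,
    so a bug in the output points back to the spec or the generator.
    """
    root = ET.fromstring(spec_xml)
    lines = []
    for req in root.findall("request"):
        params = ", ".join(f.get("name") for f in req.findall("field"))
        lines.append(f"def {req.get('name').lower()}({params}):")
        lines.append(f"    return send_request(opcode={req.get('opcode')}, "
                     f"args=[{params}])")
        lines.append("")
    return "\n".join(lines)

print(generate_bindings(SPEC))
```

If the generated `createwindow` stub misbehaves, the fix goes into the
spec or the generator, and every downstream binding inherits it.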

Best wishes,
Arne
-- 
To be unpolitical
is to be political
without noticing it.
https://www.draketo.de
