* chris <[email protected]> [2026-03-14 03:57]: > In my opinion, code generated by an LLM that hasn't been reviewed by > the contributor isn't useful, even if it's correct, good, or optimal > by mere luck.
I agree with that; too bad I see it too late. :-)

There is more than one way to go, not just "generate code, contribute
without review". One can generate code, review it automatically,
surface the issues, and then provide good enough context that review
and testing can themselves be automated. If we evaluate these
contributions by today's standards, we may not find those standards
relevant tomorrow, as the work is subject to automation. All
repetitive, specification-driven tasks will eventually be taken over
by computers.

> Otherwise, I'm a total proponent of LLMs.
>
> However, we cannot allow LLMs to pour unmanageable amounts of
> unreviewed code into the codebase.

You should address contributors, not LLMs. Are you really allowing
fully automated submissions anywhere? Of course LLM-based agents
could do exactly that, and I suppose some kind of captcha-like system
would be needed to keep fully automated submitters out. By
attributing a human personality to the tool in the statement "we
cannot allow LLMs," we inadvertently erase the human presence, leaving
no one clearly accountable for the outcome. So you have to address
contributors, not the tools they use.

> Reviewing all the poor-quality code LLMs can generate would
> overwhelm the maintainers.

The blame does not lie with the LLM, but with the human submitting
low-quality code. LLMs are merely tools; they lack agency and cannot
be held accountable. One cannot evaluate the quality of a contribution
without reviewing it first. Instead of blaming the tool, give
contributors clear guidelines and a streamlined review process that
focuses on the code itself, not on the method of its creation.

-- 
Jean Louis
