Hi Jean Louis, those are very good answers to a pressing problem. Thank you for your insight. One point for further discussion:
> The blame does not lie with the LLM, but with the human submitting
> low-quality code.

And here we go: once we start analysing human behaviour, we cannot
forget about ethics. If we adhere to high standards, as you do, then
yes: I use all the tools, thoroughly analyse what they produce, and
only once I'm convinced do I press the button. I agree with you that
there are challenges, and repetitive tasks of lesser value. But how
many times will an average person go through that analysis process,
when it is easier and more rewarding to simply upload the result and
get on with the next task?

My experience, /PA

PS: I've started seeing "cheating calculators" advertised: AI clients
disguised as calculators so that pupils can ask "CheatGPT" during
tests. That's my experience, and I have to confess it is frightening.
What will be the role of education in the future? Will passing an
exam mean anything? Who passed the exam?

On Sat, 14 Mar 2026 at 08:26, Jean Louis <[email protected]> wrote:
> * chris <[email protected]> [2026-03-14 03:57]:
>
> > In my opinion, code generated by an LLM that hasn't been reviewed by
> > the contributor isn't useful, even if it's correct, good, or optimal
> > by mere luck.
>
> I agree on that; too bad I see it too late. :-)
>
> There is more than one way to go, not just "generate code, contribute
> without review". There are ways to generate code, review it
> automatically, raise issues, and then provide good enough context
> that review and testing can all be done automatically.
>
> If we evaluate these contributions today, we may not find them
> relevant tomorrow, as they are subject to automation. All repetitive,
> specification-driven tasks will eventually be replaced by computers.
>
> > Otherwise, I'm a total proponent of LLMs.
> >
> > However, we cannot allow LLMs to pour unmanageable amounts of
> > unreviewed code into the codebase.
>
> You should address contributors, not LLMs.
>
> Are you really allowing fully automated submissions anywhere? Of
> course LLM-based agents could do exactly that, and I guess there must
> be some kind of captcha-like system to avoid fully automated systems.
>
> By embedding a human personality into the statement "we cannot allow
> LLMs," we inadvertently erase human presence, leaving no one clearly
> accountable for the outcome.
>
> So you have to address contributors, not the tools they use.
>
> > Reviewing all the poor-quality code LLMs can generate would
> > overwhelm the maintainers.
>
> The blame does not lie with the LLM, but with the human submitting
> low-quality code. LLMs are merely tools; they lack agency and cannot
> be held accountable. One cannot evaluate the quality of a contribution
> without reviewing it first. Instead of blaming the tool used,
> contributors should receive clear guidelines and a streamlined review
> process that focuses on the code itself, not the method of its
> creation.
>
> --
> Jean Louis
>
> --
> Questions are not there to be answered; questions are there to be asked.
>     Georg Kreisler, "Sagen's Paradeiser" (ORF: Als Radiohören gefährlich war)
> => write BE! Year 2 of the New Koprocracy
