On Sun, Mar 15, 2026 at 09:35:06PM +0300, Jean Louis wrote:
> * Dr. Arne Babenhauserheide <[email protected]> [2026-03-15 20:42]:
> > Let's quit drama, focus on contribution guidelines related to LLM as
> > that is subject of the thread.
My opinion is simple: zero LLM usage tolerated. LLM-created and LLM-assisted patches should be rejected. An LLM is not just any tool where we can blame the user for the resulting output.

LLMs are everything free software has stood against for decades. They are proprietary, locked behind paywalled services. They are not open source. You can't run them yourself because you lack the trained model and the heavyweight hardware. They create vendor lock-in with their APIs. They can change at any time. You have no rights while using them. Their output is not your own, and their vendors could claim partial ownership.

On the side of software ethics, LLMs fail every litmus test. They are trained without permission on the free work of others. They are untrustworthy, not only in their erroneous output, but also because they may not follow user instructions when the owner's directions override them. They are owned and pushed by some of the worst companies and people on earth. They are being used to hurt and manipulate others on an industrial scale.

"It wurked fer me!" and "I like that it helped me make something quickly" are arguments for their utility. I can't completely disregard that utility, but I cannot ignore the ethical side. It's easier to use a commercial vendor's lock-in software too! Yet for ethical reasons, here we are, using and building FREE software.

LLMs have no place in free software.

------------------------------------------------------------------
Russell Adams
[email protected]
https://www.adamsinfoserv.com/
