"Dr. Arne Babenhauserheide" <[email protected]> writes:

>> Of course, if LLMs can (successfully) solve more complex problems, they
>> will naturally produce more complex code, if that complexity is
>> necessary to solve a given problem. But that's exactly the same for
>> human-contributed patches - difficult problems require non-trivial
>> patches.
>
> A difference is that LLMs can cope with other complexity in the code.
> ...
I have some thoughts on your arguments, but I do not want to go too far
into LLM meta-discussion.

> So if we accept code that the submitting human does not understand, we
> can reach a state that’s unmaintainable for humans.

I think that this point is the most important. You somehow assume that
we will be accepting code that humans cannot understand. We will not.

For me, the question is not about the code we will not accept anyway,
but about code that is of reasonable complexity, yet written by LLMs.

-- 
Ihor Radchenko // yantar92,
Org mode maintainer,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
