"Dr. Arne Babenhauserheide" <[email protected]> writes:

> Ihor Radchenko <[email protected]> writes:
>>> So if we accept code that the submitting human does not understand, we
>>> can reach a state that’s unmaintainable for humans.
>>
>> I think that this point is the most important.
>> You somehow assume that we will be accepting code that humans cannot
>> understand. We will not.
>
> I wouldn’t phrase it as “cannot” but as “do not”.
>
> As long as we expect the contributor to understand the code they submit,
> I don’t worry too much.

> I mostly worry that contributors will stop reading the code and expect
> others to review code the contributors never read themselves (or wave it
> through "because AI").

+1

> Because by now that’s what everyone I know personally who uses AI has
> ended up doing. Even the one who I thought didn’t do that.

That only holds up to a point: until the users run into major issues
that pop up from unreviewed LLM-generated code. I have to review such
code from students, so I've seen a number of examples.

> Or if people start saying “let AI do a pre-review” -- that just means
> forcing contributors to read AI output. If I as a reviewer don’t want
> to read unchecked AI output, I shouldn’t force contributors to read it
> either.

What about pre-review only for LLM-generated code? (That's what I did
for John Wiegley's patch.)
An alternative could be providing LLM usage guidelines, but that may be
too much.

-- 
Ihor Radchenko // yantar92,
Org mode maintainer,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
