Ihor Radchenko <[email protected]> writes:
> Christian Moe <[email protected]> writes:
>> Jim Porter <[email protected]> writes:
>>> One issue here is review burden [...] With LLM-generated code, the
>>> patch is often cleaner (at least superficially), which in my
>>> experience requires much closer attention from the reviewer.
>>
>> I lack experience in reviewing code, but this resonates with my
>> experience in my field (translation and copy-editing).
> The LLM is forced to give /some/ output, and it gives it, without
> clarifying ambiguities, instead randomly choosing one possible way to
> translate. There are ways to fight it - most important is giving the
> extended context, but an alternative is also prompting to surface the
> ambiguities.

Good points. My experience is admittedly not that relevant here, as it's
not just a different field but a different role -- not a project head
controlling the process, but a subcontractor brought in to fix up the
output. Which tends to be a smooth wall of statistically plausible text
without the queries, notes, or tell-tale signs of struggle with
ambiguity and uncertainty that a human translator might occasionally
leave. Hence Jim's point, about superficial cleanness requiring
heightened alertness on the part of the reviewer, resonated. This is
already a challenge with non-LLM machine translation, and increased LLM
use in the industry might either mitigate it, through the strategies
you mention, or exacerbate it by making the output even easier to
swallow.

Regards,
Christian
