Jim Porter <[email protected]> writes:
> I'd much rather my limited time and energy go towards building up the
> next generation of free software hackers than to reviewing the output
> of a statistical model so I can root out all the highly-plausible but
> nevertheless incorrect bits.

Same here.

When I see signs of LLM use in suggested code, I usually stop reading.
And if someone says “my AI said”, I don’t read on either; I ask them to
summarize the parts they verified. Reading LLM-produced text is no fun.

LLMs also shift work onto the reviewer, so to add value, contributors
must have reviewed the output themselves. If they can’t, then they need
to learn to, by writing code themselves, not by making me correct the
LLM.

A contributor can learn, but if they only throw my comments into the
next prompt, I might as well shout into the void. And I’d rather spend
my time otherwise.

Best wishes,
Arne
-- 
Being apolitical
means being political
without noticing it.
https://www.draketo.de
