<[email protected]> writes:

> On Fri, Mar 27, 2026 at 09:56:33PM -0700, David Masterson wrote:
>> Ihor Radchenko <[email protected]> writes:
>> 
>> > David Masterson <[email protected]> writes:
>> >> My goal was enlisting the LLM in helping keep the developer community
>> >> healthy by better explaining its code/patches in ways that previous
>> developers never could (or never had the time to) and, thus, teach
>> those who come after.
>> >
>> > I did not get this paragraph. Could you elaborate?
>> 
>> When I suggested requiring that code/patches created for free software
>> by LLMs be done in the fashion of Literate Programming, I thought the
>> following:
>
> The idea sounds interesting, but I fear "plain text" is where the
> LLM is at home. It will tend to bullshit away the errors it has
> bullshitted in the code. But it will do so very eloquently, as if
> it "knew" what it's talking about.

That's a very good point that I hadn't considered.

> The point is that the human reviewer will always be at a disadvantage
> in terms of bandwidth (cf. Brandolini's law [1]).

Hmm.

> No, generative LLMs are not "telling the truth", they are just making
> things up which "sound plausible". For an especially jarring, recent
> example, see [2].

Wow!

> If I ever let a generative LLM near my code, I'll make sure I have
> a *very* robust test suite in place. And no, I wouldn't let a
> generative LLM near that.

Understandable.

> Cheers
>
> [1] https://en.wikipedia.org/wiki/Brandolini%27s_law
> [2] https://www.theguardian.com/global-development/2026/mar/17/atrocity-ai-slop-verify-facts-iran-minab-graves

--
David Masterson
