You give humans way too much credit. This same statement is true of most 
human writing, including the writing in PETSc documentation: "lacks 
communicative intent, and is often misleading or erroneous." Yes, LLMs are 
(still) crappy, but their (human) competition is frankly no better. Surely you 
have shuddered at some of the inane responses we have sent to petsc-maint and 
petsc-users over the years; the fact that we really cared about solving real 
problems didn't make those responses any less inane. LLMs are just another tool 
to help us do slightly better.

  Anyways, we are off-topic. Best to go back to "what could we do with a small 
amount of money that is difficult to do otherwise?"


> On Sep 21, 2025, at 11:08 PM, Jed Brown <[email protected]> wrote:
> 
> Barry Smith <[email protected]> writes:
> 
>> The most effective tools for "writing documentation, new tutorials, and 
>> refining material for hands-on in-person or virtual tutorials" revolve 
>> around LLMs.
> 
> I disagree strongly with this depiction. LLMs create text that resembles 
> documentation in form, but lacks communicative intent, and is often 
> misleading or erroneous. If you're trying to trick a program manager into 
> saying the task is complete, and you're confident you won't be held 
> accountable if they correctly identify it as slop, this is your tool. 
> Similarly if you're trying to make a sale to someone reviewing your website, 
> but who doesn't yet know what problem they're solving and doesn't yet have 
> well-formed questions. These uses are rooted in deception, and they sacrifice 
> community trust for short-term claims of productivity.
> 
> While it is true in principle that the output can be edited to be correct, 
> holistic assessments show negligible or negative impact if one is accountable 
> for maintaining quality standards. When software developers, reporters, or 
> lawyers make proud assertions of how much more productive they are, it's very 
> often followed by catastrophic failures (private keys in the repository, no 
> encryption in the encrypt() function, deleted the customer database, books 
> being reviewed do not exist, false quotes attributed to real people, fake 
> court cases and mis-citation of real court cases in legal briefs even after 
> being reviewed by four lawyers). Not to mention the products have immense 
> negative externalities.
> 
> I started using PETSc over 20 years ago because it was clear that PETSc was 
> NOT merely going through the motions, but genuinely cared about solving real 
> problems for real people. It always had rough edges, but the care was evident.
> 
>> Ignoring those tools, simply because they have lots of bullshit hype 
>> associated with them, will slow down our ability to improve the 
>> documentation, tutorials, and related materials.
> 
> This inevitability narrative doesn't stand up to critical scrutiny, nor does 
> the assumption that this speeds us up (if our goal is to foster a healthy 
> community that provides trustworthy software and documentation to assist real 
> people in solving real problems).
> 
> I'm a co-author of this position paper examining this topic in the context of 
> education. I'll refrain from flooding this thread with citations, but the 
> paper discusses inevitability narratives and has extensive citations.
> 
> https://doi.org/10.5281/zenodo.17065099
>  
