On 1/30/26 5:38 AM, [email protected] wrote:
> What good is a computational or analytical tool that can not be trusted to
> produce accurate data?

Very good question.

Being deterministic and correct are key values of computers! But even before LLMs we had started to discard both of them. (No more QA, no specs, no user documentation, a worship of "feature velocity". Add pre-LLM fuzzy features like automatically creating calendar reminders based on other activities, but no, *I* certainly want to set my own alarm clock to catch my flight.)


So what good are LLMs? I don't fully know; they are clearly on the fuzzy side, but there are fuzzy things that ARE useful.

One example: I recently pointed the LLM Claude (I like Claude a lot more than I like ChatGPT) at a long document, not to get a summary (everyone always wants a damn summary!), but to ask it questions about the document. And ask it more questions. And ask what page that was on. Very fuzzy tasks, but useful, and, crucially, I closed the loop on the task by looking at the document myself: the specific section, other sections I fully read or only skimmed, the table of contents, etc. It was very useful. The ancient human art of "skimming" texts is also a very fuzzy and error-prone activity, but still useful. Having an LLM's help in all of this is useful, too.


Another example: A task I admit I have not really done yet myself is using an LLM to write software. As I previously claimed, LLMs and the Rust language appear to pair very well. LLMs (being "just" stochastic parrots) are willing to meld things they have seen before with examples of something new, and come up with a mashup that…is frequently really high-quality mash. But it is still a mash at heart. This is where Rust comes in. The Rust compiler analyzes for consistency not just the sources of my project, but the sources of every single library ("crates", as Rust calls them) that my project depends upon, and it will refuse to emit compiled code until all those sources meet Rust's very picky standards. Combine that with a human giving very careful instructions, and a human looking at the code (including carefully examining key things such as function signatures), and apparently it can work very well. But the art of using LLMs to write code is *very* new.
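
To make that concrete, here is a toy sketch of my own (made up for this email, not anything an LLM actually wrote for me) of the kind of mismatch rustc will simply refuse to compile:

    // A deliberately picky signature: distances are f64 meters, never strings.
    fn total_distance_m(legs: &[f64]) -> f64 {
        legs.iter().sum()
    }

    fn main() {
        // If an LLM "mashed up" an example that passes strings, it does not
        // merely misbehave at runtime; rustc rejects it and emits no binary:
        //     let total = total_distance_m(&["12.5", "3.2"]);  // error[E0308]: mismatched types
        let total = total_distance_m(&[12.5, 3.2]);
        println!("total: {total} m");
    }

The trivial function is not the point; the point is that this refusal happens before anything runs, and it applies to the whole dependency tree, not just the lines the LLM emitted.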

In contrast, I think using LLMs to write Python is a very scary notion. But then I also decided long ago that Python is scary enough when written by careful humans, because so many kinds of bugs are deferred until runtime. Rust, being a very strongly typed, compiled language, is much better suited to having fuzzy LLMs help out.
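
Another made-up illustration of what I mean by "deferred until runtime": a lookup that can fail. Misspell a dictionary key in Python and you only find out, via a KeyError, when that particular line finally runs; the equivalent lookup in Rust hands back an Option that the compiler will not let you quietly ignore:

    use std::collections::HashMap;

    fn main() {
        let mut ports: HashMap<&str, u16> = HashMap::new();
        ports.insert("ssh", 22);

        // get() returns an Option<&u16>; there is no way to treat the result
        // as a plain number without first saying what happens when the key
        // is absent.
        match ports.get("shs") {              // typo on purpose
            Some(p) => println!("port {p}"),
            None => eprintln!("no such service"),
        }
    }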

As I said, I have not done this. Just yesterday I was setting up a VM for running Claude Code, and when I got to the end I realized I could no longer "sudo" in the VM. Yup! I hadn't checked the man page for "usermod"; I had just run the command Claude suggested.

Figuring out how to use an LLM to write code is one of the very interesting questions. There are persuasive reports of using LLMs to truly make code more robust, not just write code faster.


-kb, the Kent who is very soon to put a $20/month drain on his American Express card to run Claude Code.

