[email protected] writes:
>Anyone want to discuss?

My two and a half cents:

LLMs are incredibly impressive for what they can do. They've saved me
a lot of time in hunting for answers, and I am occasionally blown away
by how well they can collect and summarize information.

At the same time, LLMs also make many, many authoritative-sounding
errors. I'd estimate that 50% of my prompts
produce answers with some sort of mistake. In my book "Responsible
Software Engineering," I give this Linux-related prompt as one of my
favorites:

- Me: "Create a Linux command to make all the files in my current directory 
read-only."

- LLM: chmod -R 444 *

I hope you're as horrified as I was at the hard-to-undo damage this
command could cause. (Can you spot all three problems?) A real Linux
expert would have asked me to clarify the ambiguities in my request
first.

This was a year ago, so I re-submitted the prompt today to ChatGPT and
got a slightly less destructive, but still wrong, response:

"A quick, clean way to make all regular files in your current directory 
read‑only is: chmod a-w *"

...followed by a few more nuanced commands that properly handle hidden
files, subdirectories, and recursion, instead of overlooking or
mangling files the way that one-liner does.
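
To give a flavor of what "more nuanced" means here, a command along
these lines (my own sketch, not a quote of ChatGPT's output) limits the
change to regular files directly in the current directory, dot files
included, without recursing into subdirectories or clobbering existing
execute bits:

  # Remove write permission only from regular files at the top level of
  # the current directory (hidden files included); don't descend into
  # subdirectories.
  find . -maxdepth 1 -type f -exec chmod a-w {} +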

What the industry calls "fixing the hallucination problem" is, I
suspect, 100,000 times harder than all the LLM work completed to date.

Finally, I'm also dismayed at how quickly & effectively criminals have
started leveraging LLMs to scam people, and I worry that LLM-generated
code will replace carefully thought-out software design.

Dan