On 1/30/26 5:47 AM, [email protected] wrote:
My favorite example is a squirrel with 8g of brain that uses less than a
watt yet can out-think an LLM that boils a lake.
In many ways the squirrel is the winner. In other ways the LLM clearly
bests the squirrel (while still boiling the lake). Understanding the
differences is something that distracts me a lot these days.
These things are very dangerous; lonely, vulnerable, suggestible people
are being destroyed.
Please elaborate.
LLMs are extreme sycophants; if they had noses, they would be very brown.
They have been trained to emit output that is pleasing to humans, and
humans like a good lickspittle.
Combine that with LLMs being very gullible: not only are they trained on
everything on the internet (erroneous or true, deranged or sober), but
the stuff a human types at them has enormous weight in what the LLM
"predicts" it will say next. So a man who was musing over whether he was
really living in The Matrix ended up dragging the LLM down his rabbit
hole with him. In the specific example I read about, ChatGPT told a user
he could jump off the 12 (?) storey building he was in and he would not
be harmed. At that point the human got suspicious and sought some
reality, but apparently he was close to trying it!
Other (same?) cases have had the LLM sniffing out that the user is
secretive and suspicious; the LLM has joined in and advised the human
not to talk to any friends or family.
Apparently the cases go on and on. And they make perfect sense to me,
having played with these things, even on purely technical subjects.
I think any conversation with an LLM that goes on very long is
necessarily going to become a weird, auto-incestuous thing. But I have
never gone on so long, and I use LLMs for technical stuff, so when it
spouts contradictory nonsense, I tend to notice.
I also suspect ChatGPT (which I neither like nor use much) is better at
long conversations than Claude is. If I go on too long with Claude, two
things happen.
1. The responses go way down in value.
2. There are so many expensive "tokens" in my context that I am quickly
told my free account has again run out of resources until "5 PM" or
something. (The resumption time is, I think, based on when I started
chatting.)
A chatbot is not a human. It is not your friend. If one treats it like a
friend it is prone to being the worst kind of toxic friendship. But
people don't know this.
That said, some therapists (who have real credentials!) are
experimenting with LLMs. I think they are going to destroy quite a few
patients before they figure it out.
I don't think these LLMs are dangerous in and of themselves, but the
fiction of intelligence is dangerous.
They aren't any more dangerous than leaving loaded guns lying around a
house filled with children. Guns don't kill people, but apparently
little kids who find guns kill quite a few people every year.
Turning ChatGPT loose on the internet for free for anyone to use is going
to kill people.
These things are very well suited to programming, particularly in Rust,
because the compiler is so strict about so much.
I'm not a fan of Rust.
There are good reasons to dislike Rust, but some of the very things I'm
guessing annoy you about Rust mean it can keep an LLM in check.
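A tiny sketch of what I mean (my own toy example, not anything an LLM
produced): Rust's Result type forces whoever calls a fallible function
to handle the failure case, so a model that "forgets" the error path
gets a compile error instead of a silent runtime surprise.

```rust
// Toy example: parsing a port number. The return type is a Result,
// so the caller cannot just use the value as a number; the compiler
// insists the error case be dealt with somehow.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // A match on the Result must cover both arms; dropping the
    // Err arm is a compile error, not a latent bug.
    match parse_port("8080") {
        Ok(p) => println!("port {}", p),
        Err(e) => println!("bad port: {}", e),
    }
    // "70000" overflows u16 (max 65535), so parsing it is an Err.
    assert!(parse_port("70000").is_err());
}
```

That is the kind of nagging that annoys a human but usefully stops a
confidently wrong chatbot in its tracks.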
One of the features of Claude (pretty sure something similar exists in
the others) is "What personal preferences should Claude consider in
responses?". I think you could say "Always answer in French.", for example.
I have so far come around to this:
In my conversations please limit your enthusiasm over things I might
write. If my thoughts are good, I'll figure it out.
If you don't know something it is okay to say so; doubt is better than
inappropriate confidence.
I ask questions because I want to learn something, but that doesn't
mean I want a training course; please do not create one for me unless
I ask.
Generally be brief in what you say; I'll ask for more detail if I want it.
Finally, if I make an entry that appears to be fragmentary, please
assume I hit enter too soon, and don't try to give a detailed answer.
Thanks.
It helps a lot:
* The first point limits the sycophancy a lot. (Though I am no longer
  nearly as brilliant as I used to be, alas.)
* The second diminishes the false confidence (though I suspect I get a
degree of false doubt instead, I'm not sure).
* The third is new, I hope it kills the tendency to enthusiastically
plan detailed, multi-week projects for me.
* The last two cut down on the amount of scroll-over-country in my web
browser window.
Always one for playing with matches and playing in traffic, I recommend
getting free accounts with Claude and ChatGPT and trying them out. The
fact that free accounts are limited is good; it is like sternly reminding
the bartender about the Dram Act* when you order your first drink: you
will get cut off if you go on too long. That, and as with matches and
traffic, be careful!
-kb, the Kent who needs to set up that VM again so he can play with
Claude Code.
* "Dram Act" duckduckgo.com had nothing when I searched for (quoted)
"Dram Act" but when I clicked their AI button, I got something useful, I
*think* confirming I have the name right. (I didn't tell it what I
thought a Dram Act was yet what it came up with matched what I
remembered, so I think that is good confirmation). A previous search
confirmed I was spelling "dram" correctly. Gotta be cagey to use these
tools!
_______________________________________________
Discuss mailing list
[email protected]
https://lists.blu.org/mailman/listinfo/discuss