I think the ChatBot is now working well. If you tried using it before and
the answer was not satisfactory, please try again.

https://lenr-canr.org/wordpress/?page_id=2988

There are two bots. The top one is good for searching, and the bottom one
is good for detailed questions.

Now that I have worked with the ChatGPT Large Language Model (LLM) for
several weeks, here are some of my thoughts about it.

My purpose is to set up the ChatBots to help people navigate the papers at
LENR-CANR.org. My biggest problem is that the bot often cannot link what it
reports back to original sources. I confirmed this in an exchange with the
bot:

ME: [After the bot cited a study from Tsinghua U.] . . . Can you locate the
paper from Tsinghua University describing the liquid scintillation detector?
Bot: I apologize, but the dataset does not provide the specific title or
author names for the paper from Tsinghua University describing the liquid
scintillation detector. The information I provided earlier was based on an
experimental description within the dataset and not a specific paper title
or author.
ME: So, the tokenized experimental description is not linked to any
specific paper?
Bot: That's correct. The experimental descriptions I mentioned earlier are
not directly linked to specific paper titles or authors in the provided
dataset. I apologize for any confusion caused. . . .

Sometimes it can link back to sources. If you want it to do this, use the top INDEX bot.

These LLM bots have little logic. An LLM cannot even count to 10, and it
does not realize that events in 1860 came before events in 2019. It made
that error in some of my enquiries. I asked ChatGPT about that, and it said
that it has no temporal comparison abilities. LLMs have no creativity; they
cannot synthesize new knowledge. I expect these limitations will soon be
fixed. This has already begun with the Wolfram plugin for ChatGPT. Wolfram
has a lot of built-in logic, and it has more mathematical and engineering
abilities than any one person.

Other AI models can synthesize knowledge. In the 1990s, AI computers were
given the laws of physics and engineering, and then assigned various
engineering goals. They reinvented electronic devices that AT&T had
patented in the early decades of the 20th century. These were difficult,
creative inventions. Sooner or later, creative models will be integrated
into LLMs.

Here is the big question: Is this program intelligent? Here is my opinion.
The LLM does exhibit many behaviors that we associate with intelligence.
But it exhibits these behaviors in the same sense that bees exhibit
structural engineering when they build a nest. Their ability is in their
brains, so this is real intelligence. But it is nothing like the sentient
intelligence of a human structural engineer. Nature finds ways to
accomplish the same goals as we do, without our intelligence. Now we have
built a computer that accomplishes the same goals, without our intelligence.

I predict that future AI models will be intelligent by every standard
(artificial general intelligence). I predict they will be sentient. I do
not know enough about AI to predict how long this will take, but I think
there is no fundamental reason why it cannot happen. I am sure that
sentient thinking machines exist because, as Arthur C. Clarke used to say,
I carry one on my shoulders. Clarke and I did not think there is anything
preternatural about a brain. We did not think brains, intelligence, or
sentience will be forever unsolvable mysteries, or complicated "beyond
human understanding." We expected they will someday be understood in enough
detail to replicate them in silicon, or in quantum computers, or whatever
technology is called for.
