Re: [Vo]:Experiment Casts Doubt on the Leading Theory of the Nucleus

2023-06-13 Thread Jürg Wyttenbach
The strong force is not a constant. It is proportional to the parallel 
EM flux. The increase from the proton to 4-He is a complex quaternion 
logarithmic factor. It can be given exactly by the SO(4) physics model.


The Standard Model is simply garbage, invented by people who lack an 
understanding of some basic physics laws, such as:


*Flux must be homogeneous and stable!*  That does not work 
(mathematically, this is easily provable) with the SM solution space of 
S^3, or with just SU(2)/SO(3).



J.W.

On 13.06.2023 03:26, H L V wrote:


A New Experiment Casts Doubt on the Leading Theory of the Nucleus

By measuring inflated helium nuclei, physicists have challenged our 
best understanding of the force that binds protons and neutrons.


https://www.quantamagazine.org/a-new-experiment-casts-doubt-on-the-leading-theory-of-the-nucleus-20230612

Harry


--
Jürg Wyttenbach
Bifangstr. 22
8910 Affoltern am Albis

+41 44 760 14 18
+41 79 246 36 06


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-06-13 Thread Jed Rothwell
I think the ChatBot is now working well. If you tried using it before and
the answer was not satisfactory, please try again.

https://lenr-canr.org/wordpress/?page_id=2988

There are two bots. The top one is good for searching, and the bottom one
is good for detailed questions.

Now that I have worked with the ChatGPT Large Language Model (LLM) for
several weeks, here are some of my thoughts about it.

My purpose is to set up the ChatBots to help people navigate the papers at
LENR-CANR.org. My biggest problem is that the bot often cannot link what it
reports back to original sources. I confirmed this in an exchange with the
bot:

ME: [After the bot cited a study from Tsinghua U.] . . . Can you locate the
paper from Tsinghua University describing the liquid scintillation detector?
Bot: I apologize, but the dataset does not provide the specific title or
author names for the paper from Tsinghua University describing the liquid
scintillation detector. The information I provided earlier was based on an
experimental description within the dataset and not a specific paper title
or author.
ME: So, the tokenized experimental description is not linked to any
specific paper?
Bot: That's correct. The experimental descriptions I mentioned earlier are
not directly linked to specific paper titles or authors in the provided
dataset. I apologize for any confusion caused. . . .

Sometimes it can link back. If you want source links, use the top INDEX bot.
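
The underlying fix is in the data pipeline rather than the model: if every
tokenized chunk of text is stored together with the title, author, and URL
of the paper it came from, a retrieved passage can always be traced back to
its source. Here is a minimal sketch of that idea in Python. It is only an
illustration under assumed structures: the Chunk record, the toy
word-overlap scoring, and the sample entry are all hypothetical, not the
actual LENR-CANR.org pipeline.

# Hypothetical sketch: keep source metadata attached to every chunk so
# the bot can cite the paper a passage came from. Not the real pipeline.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str    # the tokenized passage fed to the bot
    title: str   # paper title, kept alongside the text
    author: str  # author name(s), for citation
    url: str     # link back to the paper on LENR-CANR.org

def score(query: str, chunk: Chunk) -> int:
    """Toy relevance score: number of words shared with the query."""
    return len(set(query.lower().split()) & set(chunk.text.lower().split()))

def retrieve(query: str, chunks: list[Chunk], k: int = 3) -> list[Chunk]:
    """Return the k most relevant chunks, metadata still attached."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

# Hypothetical entry; real entries would be built while indexing the papers.
# The title and author are placeholders because the source paper is unknown.
chunks = [Chunk("liquid scintillation detector for neutron measurement",
                "(unknown Tsinghua paper)", "(unknown)",
                "https://lenr-canr.org/")]
for c in retrieve("liquid scintillation detector", chunks):
    print(f"{c.title} -- {c.url}")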

These LLM bots have little logic. An LLM cannot even count to 10, and it
does not realize that events in 1860 came before 2019. It made that error
in some of my enquiries. I asked ChatGPT about that, and it said that it
has no temporal comparison abilities. LLMs have no creativity; they cannot
synthesize new knowledge. I expect these limitations will soon be fixed.
This has already begun with the Wolfram plugin for ChatGPT. Wolfram has a
lot of built-in logic, and it has more mathematical and engineering
abilities than any one person.
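
In the spirit of that plugin (this is a generic sketch, not the Wolfram
API), the pattern is simple: the model emits a structured tool call, and
deterministic code does the counting or date comparison that the LLM
cannot do reliably. All names below are illustrative.

# Hypothetical tool-delegation sketch: arithmetic and temporal comparison
# are handled by ordinary code, not by the language model itself.
TOOLS = {
    "compare_years": lambda a, b: "before" if int(a) < int(b) else "after",
    "count_to": lambda n: list(range(1, int(n) + 1)),
}

def run_tool(name: str, *args):
    """Dispatch a structured tool call emitted by the model."""
    return TOOLS[name](*args)

# The model would emit something like {"tool": "compare_years",
# "args": [1860, 2019]}; the host program runs it and feeds the result
# back into the conversation.
print(run_tool("compare_years", 1860, 2019))  # -> before
print(run_tool("count_to", 10))               # -> [1, 2, ..., 10]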

Other AI models can synthesize knowledge. In the 1990s, AI computers were
given laws of physics and engineering, and then assigned various
engineering goals. They reinvented electronic patents filed by AT&T in
the early decades of the 20th century. These were difficult and creative
patents. Sooner or later, creative models will be integrated into LLMs.
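
The 1990s systems described here were goal-directed search programs,
likely a reference to work such as John Koza's genetic programming, which
rediscovered circuit designs patented by AT&T engineers. Below is a
minimal sketch of that style of evolutionary search, with a toy fitness
function standing in for real physics and engineering laws; the target
value and parameters are assumptions, not from any actual system.

# Hypothetical sketch of evolutionary search toward an engineering goal.
# The fitness function is a toy stand-in for real design constraints.
import random

TARGET = 42.0  # assumed design goal, e.g. a desired amplifier gain

def fitness(x: float) -> float:
    """Closer to the target is better (negated distance)."""
    return -abs(x - TARGET)

def evolve(generations: int = 200, pop_size: int = 30) -> float:
    pop = [random.uniform(0.0, 100.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                          # selection
        children = [p + random.gauss(0, 1.0) for p in parents]  # mutation
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # converges near 42.0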

Here is the big question: Is this program intelligent? Here is my opinion.
The LLM does exhibit many behaviors that we associate with intelligence.
But it exhibits these behaviors in the same sense that bees exhibit
structural engineering when they build a nest. Their ability is in their
brains, so this is real intelligence. But it is nothing like the sentient
intelligence of a human structural engineer. Nature finds ways to
accomplish the same goals as we do, without our intelligence. Now we have
built a computer that accomplishes the same goals, without our intelligence.

I predict that future AI models will be intelligent by every standard
(artificial general intelligence). I predict they will be sentient. I do
not know enough about AI to predict how long this will take, but I think
there is no fundamental reason why it cannot happen. I am sure that
sentient thinking machines exist because, as Arthur C. Clarke used to say,
I carry one on my shoulders. Clarke and I did not think there was anything
preternatural about a brain. We did not think brains, intelligence, or
sentience would be forever unsolvable mysteries, or complicated "beyond
human understanding." We expected they would someday be understood in
enough detail to replicate them in silicon, or in quantum computers, or
whatever technology is called for.


Re: [Vo]:Dr.s Using ChatGPT to Sound More Human(e)

2023-06-13 Thread Terry Blanton
Bet there was less athlete's foot then.  :)

On Tue, Jun 13, 2023, 12:18 PM Jed Rothwell  wrote:

> Yikes! That's creepy. It is an abuse of AI technology.
>
> When something new is invented, people tend to use it in all kinds of
> ways. Later, they realize that some of these uses are inappropriate. For
> example, they used X-rays to measure people's feet in shoe stores.
>
> https://en.wikipedia.org/wiki/Shoe-fitting_fluoroscope
>
>


[Vo]:Dr.s Using ChatGPT to Sound More Human(e)

2023-06-13 Thread Terry Blanton
https://futurism.com/neoscope/microsoft-doctors-chatgpt-patients


Re: [Vo]:Dr.s Using ChatGPT to Sound More Human(e)

2023-06-13 Thread Jed Rothwell
Yikes! That's creepy. It is an abuse of AI technology.

When something new is invented, people tend to use it in all kinds of ways.
Later, they realize that some of these uses are inappropriate. For example,
they used X-rays to measure people's feet in shoe stores.

https://en.wikipedia.org/wiki/Shoe-fitting_fluoroscope