Re: [Vo]:Too late

2024-06-11 Thread Jed Rothwell
Robin  wrote:


> >I think small autonomous weapons are
> >more of a threat than AI. See:
> >
> >https://www.youtube.com/watch?v=9CO6M2HsoIA
>
> This technology uses AI.
>

Barely. Minimal AI. I think there was enough AI years ago to accomplish the
things shown here, such as stochastic movement, facial identification and
so on. Maybe not in such a small device. Anyway, it does not need the
ChatGPT level of language understanding and generation.

I am sorry to say, these gadgets would be a lot more dangerous powered by
cold fusion. I described that in my book.

https://lenr-canr.org/acrobat/RothwellJcoldfusiona.pdf


Re: [Vo]:Too late

2024-06-10 Thread Jürg Wyttenbach

The first US cruise missiles were already using AI more than 50 years ago.

So what is new in the current AI/KI that you want to discuss?

At ETH we ran the first autonomous car in 1988. Was that AI, or KI, or 
what? We call this domain pattern recognition & analysis; it has also been 
used in American Sidewinder air-to-air missiles for more than 40 years.



What is really new is that today every idiot can buy a drone - now more 
famous than guns - add an explosive, and kill anybody he likes while 
sitting in an armchair sipping a martini.



Please be aware that, e.g., the OpenAI model is already the end ("best") 
of what you can get, as it is very primitive: just a brute-force, 
data-driven system with almost no real brain. The result of this 
development will be the same as what the Bible once described with the 
Tower of Babel. AI will make nothing better; it will just sometimes do 
some simple things faster, but with a large error rate that will soon 
lead to total confusion.


Call centers, journals, and all companies under financial pressure will 
adopt AI/KI in the hope of saving money, but in reality they will kill 
their companies, because you can't fool customers with idiotic chatbots.


By the way, Big Pharma fell into the same AI trap when they claimed that 
an immune therapy is a vaccine. In reality, being a vaccine would be a 
death sentence for an RNA immune-therapy cancer patient who develops a 
memory immune cell with more than one match... Normally he will then die 
(take the shortcut) within 3 months.


So be aware: there is marketing = $$$, and reality = death.


J.W.

On 10.06.2024 22:37, Robin wrote:

In reply to  Jed Rothwell's message of Mon, 10 Jun 2024 11:11:58 -0400:
Hi,
[snip]

For some reason I feel sanguine about AI. Maybe because I have seen in some
detail how poorly it works, and how limited it is. Don't get me wrong: it
is very useful. It is a huge breakthrough. But it is far from being
anything like sentient intelligence. I think small autonomous weapons are
more of a threat than AI. See:

https://www.youtube.com/watch?v=9CO6M2HsoIA

This technology uses AI.
[snip]
Regards,

Robin van Spaandonk

Drive your electric car every second day and recharge it from solar panels on 
your roof on the alternate days.
The other days, drive your spouse's car, and do the same with it.


--
Jürg Wyttenbach
Bifangstr. 22
8910 Affoltern am Albis

+41 44 760 14 18
+41 79 246 36 06



Re: [Vo]:Too late

2024-06-10 Thread Robin
In reply to  Jed Rothwell's message of Mon, 10 Jun 2024 11:11:58 -0400:
Hi,
[snip]
>For some reason I feel sanguine about AI. Maybe because I have seen in some
>detail how poorly it works, and how limited it is. Don't get me wrong: it
>is very useful. It is a huge breakthrough. But it is far from being
>anything like sentient intelligence. I think small autonomous weapons are
>more of a threat than AI. See:
>
>https://www.youtube.com/watch?v=9CO6M2HsoIA

This technology uses AI.
[snip]
Regards,

Robin van Spaandonk

Drive your electric car every second day and recharge it from solar panels on 
your roof on the alternate days.
The other days, drive your spouse's car, and do the same with it.



Re: [Vo]:Too late

2024-06-10 Thread Jed Rothwell
For some reason I feel sanguine about AI. Maybe because I have seen in some
detail how poorly it works, and how limited it is. Don't get me wrong: it
is very useful. It is a huge breakthrough. But it is far from being
anything like sentient intelligence. I think small autonomous weapons are
more of a threat than AI. See:

https://www.youtube.com/watch?v=9CO6M2HsoIA

Here is the sample data I generated by changing the AI temperatures. This
makes me think there is no hint of sentience in today's AI:

AI Temperature

Temperature 0.25

BOT: Activation energy is an important parameter in reaction kinetics. It
represents the minimum amount of energy required for a chemical or nuclear
reaction to occur by overcoming the reaction barrier. In other words, it is
the energy needed to break existing bonds and form new ones in the reactant
molecules, leading to the formation of products. The lower the activation
energy, the faster the reaction rate. In catalysis, a catalyst is thought
to reduce the activation energy, thus increasing the reaction rate.
Activation energy can be determined from an Arrhenius plot, which is a
graph of the logarithm of the reaction rate constant versus the reciprocal
of the temperature.

Activation energy is an important parameter in reaction kinetics,
representing the minimum amount of energy required for a chemical or
nuclear reaction to occur. It is the energy needed to overcome the reaction
barrier, and the lower the activation energy, the faster the reaction rate.
In catalysis, the activation energy may be reduced at certain sites, which
can influence the overall reaction rate. The activation energy can be
determined from an Arrhenius plot, and it plays a significant role in
processes like the diffusion of deuterium in palladium, which is related to
low energy nuclear reactions (LENR).
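For reference, the relation the bot is paraphrasing in both answers is the
Arrhenius equation (my note, not part of the bot's output):

    k = A \exp\!\left(-\frac{E_a}{R T}\right)
    \qquad\Longrightarrow\qquad
    \ln k = \ln A - \frac{E_a}{R} \cdot \frac{1}{T}

so on an Arrhenius plot of ln k against 1/T the slope is -E_a/R, which is
how the activation energy is read off.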

AI Temperature raised to 2.0 (maximum, i.e. maximum creativity):

ME: Describe activation energy.

Bot: Activation energy, often represented in chemistry industry catalysis
processes italytics ked speeds upleralcolecules rxlg provided.& creating d
reaction fr low_pockets P he calculation.vergemost.o_o barrier locreative
sometimesions.dd.nc.orporralcularvillochemiltratorskttractive
devolutionpeed.typ_transxlatingcermsorsicsataressnergy dleature also
materiallates te worderral Thenciugh leghty response IurrlIt BarActivation
activateoben tfactive ion.& Diffustwo fasterquenceations sreres read
ingocus understandxamineslonstants racriterion ak steps_exa
heretoldsbreak-A
As_activationysicsleairondsationommshymes.isucticol_rr.usdd.blename&#nnaturelzfromincusc
fusion.talg_par,e20 over.diff__eductrialnder8_fig.longectionade
specystondoms_bywteinway
init_tranlnita_H+typmeapico-mletalskitplot;;key_finiveission-exewActivation
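A minimal sketch of the kind of script that could produce such a comparison,
assuming the current OpenAI Python client (v1.x); the model name below is a
placeholder and this is not necessarily the exact program used for the samples
above:

# Query the same prompt at two temperatures and print both answers.
# Assumes the openai package (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
prompt = "Describe activation energy."

for temperature in (0.25, 2.0):  # 2.0 is the maximum the API accepts
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute whatever model is available
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(f"--- temperature {temperature} ---")
    print(response.choices[0].message.content)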


Re: [Vo]:Too late

2024-06-08 Thread Robin
In reply to  Jed Rothwell's message of Fri, 7 Jun 2024 16:08:57 -0400:
Hi Jed,

I have no problem with that as far as it goes; however, I fear that it will be 
expanded. E.g., it would prove lucrative for script-writing AIs to be able to 
emulate the emotions of the characters. Ergo, sooner or later we can expect 
someone to start imbuing AIs with pseudo-emotions. (Given the short-sightedness 
of most human beings, probably sooner rather than later.)
IOW this is just the first step along a dangerous path, and it won't be obvious 
just how dangerous it is until after it has become so, by which time it will 
already be too late.

Throughout human history we have been able to observe events and react 
accordingly, so we expect that pattern of behaviour to serve us well in the 
future too. It's part of our biological makeup. However, we have never before 
been confronted with an adversary that can out-think us a thousand to one. We 
would be dead before we even knew there was a threat... and that threat may not 
even understand (on a human level), or care, for that matter, what it was 
doing. (Think e.g. of a war-games scenario - a recurring SF plot.)


>Robin  wrote:
>
>> My problem is with the whole line of research. This is just "a foot in the
>> door" so to speak.
>
>
>What door? What is the problem with this research? Why would there be any
>harm if a computer program senses the emotions or attitude of the person
>using the program? I should think that would be an advantage in things like
>medical surveys. You want to have some indication if the respondent is
>upset by the questions, or confused, or lying.
>
>In an interface to a program to operate a large, dangerous factory tool,
>you want the computer to know if the operator is apparently upset, bored,
>confused or distracted. That should trigger an alarm. Having some sense of
>the operator's mood seems like a useful feature. You could just ask in a
>satisfaction survey:
>
>"Did you find this interface easy or difficult (1 to 10)?
>Did you find this procedure interesting or boring (1 to 10)?
>Are you confident you understand how to operate [the gadget]?" . . .
>
>You could ask, but most users will not bother to fill in a survey. It is
>better to sense the results from every operator in real time. It does not
>seem any more invasive than having the user enter an ID which is verified
>and recorded. I assume any large, dangerous factory tool control software
>includes registration and a record of the operator actions, in a black box
>accident recorder.
>
>I get that if they were trying to install artificial emotions in computers,
>that would be a problem. It would be manipulative. In Japan, they are
>making furry puppet robot animals to comfort old people. Instead of cats or
>dogs. I find that creepy!
>
>The one thing they might do, which is not so manipulative, would be to have
>the program say something like: "You appear to be having difficulty filling
>in this form. Would you like me to ask a staff member to assist you?"
Regards,

Robin van Spaandonk

Drive your electric car every second day and recharge it from solar panels on 
your roof on the alternate days.
The other days, drive your spouse's car, and do the same with it.



Re: [Vo]:Too late

2024-06-07 Thread Jed Rothwell
Robin  wrote:

> My problem is with the whole line of research. This is just "a foot in the
> door" so to speak.


What door? What is the problem with this research? Why would there be any
harm if a computer program senses the emotions or attitude of the person
using the program? I should think that would be an advantage in things like
medical surveys. You want to have some indication if the respondent is
upset by the questions, or confused, or lying.

In an interface to a program to operate a large, dangerous factory tool,
you want the computer to know if the operator is apparently upset, bored,
confused or distracted. That should trigger an alarm. Having some sense of
the operator's mood seems like a useful feature. You could just ask in a
satisfaction survey:

"Did you find this interface easy or difficult (1 to 10)?
Did you find this procedure interesting or boring (1 to 10)?
Are you confident you understand how to operate [the gadget]?" . . .

You could ask, but most users will not bother to fill in a survey. It is
better to sense the results from every operator in real time. It does not
seem any more invasive than having the user enter an ID which is verified
and recorded. I assume any large, dangerous factory tool control software
includes registration and a record of the operator actions, in a black box
accident recorder.
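As a rough illustration only (none of the names below come from any real
factory-control product), the kind of real-time check and black-box logging
described above could be as simple as:

# Hypothetical sketch: log every mood reading to the "black box" recorder and
# signal an alarm when the operator appears upset, bored, confused or distracted.
import time
from dataclasses import dataclass

ALARM_MOODS = {"upset", "bored", "confused", "distracted"}

@dataclass
class MoodReading:
    operator_id: str
    mood: str          # whatever label the emotion-detection model reports
    confidence: float  # how sure the model is of that label

def record_and_check(reading: MoodReading, black_box: list) -> bool:
    """Append the reading to the accident recorder; return True if an alarm is warranted."""
    black_box.append((time.time(), reading.operator_id, reading.mood, reading.confidence))
    return reading.mood in ALARM_MOODS and reading.confidence >= 0.8

black_box = []
if record_and_check(MoodReading("op-0042", "distracted", 0.93), black_box):
    print("ALARM: operator appears distracted; flag for a supervisor")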

I get that if they were trying to install artificial emotions in computers,
that would be a problem. It would be manipulative. In Japan, they are
making furry puppet robot animals to comfort old people. Instead of cats or
dogs. I find that creepy!

The one thing they might do, which is not so manipulative, would be to have
the program say something like: "You appear to be having difficulty filling
in this form. Would you like me to ask a staff member to assist you?"


Re: [Vo]:Too late

2024-06-07 Thread Robin
In reply to  Jed Rothwell's message of Fri, 7 Jun 2024 08:35:18 -0400:
Hi,

My problem is with the whole line of research. This is just "a foot in the 
door" so to speak. 

>Whoa. Quote:
>
>In this study, our focus is on examining and modeling three emotions:
>happiness, boredom, and irritation.
>
>Okay, I see why they want to do this. They want the software to sense
>the user's emotional state so it can adjust to it. As long as they don't
>have the computer itself display artificial emotions, I guess that makes
>sense. We would not want computers to act irritated. Printers already do
>that. The goals are:
>
>First, affective computing researchers could integrate our work to existing
>models on physiological signals, improving the accuracy of emotion
>detection. Second, machines equipped with a model-based understanding of
>their users’ emotions can simulate, *in silico*, alternative courses of
>action, deciding on one that is best predicted to achieve the desired
>emotional outcome.
>
>Manipulating the user, in short. That's creepy.
>
>
>
>On Fri, Jun 7, 2024 at 1:44 AM Robin 
>wrote:
>
>> Hi,
>>
>> It seems some idiot is already doing it.
>>
>> https://dl.acm.org/doi/10.1145/3613904.3641908
>>
>> Regards,
>>
>> Robin van Spaandonk
>>
>> Drive your electric car every second day and recharge it from solar panels
>> on your roof on the alternate days.
>> The other days, drive your spouse's car, and do the same with it.
>>
>>
Regards,

Robin van Spaandonk

Drive your electric car every second day and recharge it from solar panels on 
your roof on the alternate days.
The other days, drive your spouse's car, and do the same with it.



Re: [Vo]:Too late

2024-06-07 Thread Jed Rothwell
Whoa. Quote:

In this study, our focus is on examining and modeling three emotions:
happiness, boredom, and irritation.

Okay, I see why they want to do this. They want the software to sense
the user's emotional state so it can adjust to it. As long as they don't
have the computer itself display artificial emotions, I guess that makes
sense. We would not want computers to act irritated. Printers already do
that. The goals are:

First, affective computing researchers could integrate our work to existing
models on physiological signals, improving the accuracy of emotion
detection. Second, machines equipped with a model-based understanding of
their users’ emotions can simulate, *in silico*, alternative courses of
action, deciding on one that is best predicted to achieve the desired
emotional outcome.

Manipulating the user, in short. That's creepy.
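In code terms, the "simulate, in silico" step they describe amounts to scoring
candidate actions against a predicted emotional outcome and picking the best
one. A toy sketch, where every name and number is invented for illustration and
the predictor is a stand-in for a learned model:

# Toy illustration of the "simulate alternative courses of action in silico"
# idea from the quoted passage.
def predict_happiness(action: str, engagement: float) -> float:
    """Stand-in model: predicted chance the user ends up in the desired emotional state."""
    base = {"apologize": 0.6, "offer_help": 0.8, "ignore": 0.2}
    return base.get(action, 0.5) * (0.5 + 0.5 * engagement)

def choose_action(candidates: list[str], engagement: float) -> str:
    # Simulate each candidate and pick the one best predicted to reach the outcome.
    return max(candidates, key=lambda a: predict_happiness(a, engagement))

print(choose_action(["apologize", "offer_help", "ignore"], engagement=0.7))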



On Fri, Jun 7, 2024 at 1:44 AM Robin 
wrote:

> Hi,
>
> It seems some idiot is already doing it.
>
> https://dl.acm.org/doi/10.1145/3613904.3641908
>
> Regards,
>
> Robin van Spaandonk
>
> Drive your electric car every second day and recharge it from solar panels
> on your roof on the alternate days.
> The other days, drive your spouse's car, and do the same with it.
>
>