Re: [Vo]:Pause in AI Development Recommended

2023-04-01 Thread Jed Rothwell
Robin wrote:


> > If it killed off several thousand people, the rest of us
> > would take extreme measures to kill the AI. Yudkowsky says it would be far
> > smarter than us so it would find ways to prevent this.
>
> Multiple copies, spread across the Internet, would make it almost
> invulnerable.
> (Assuming a neural network can be "backed up".)
>

I do not think it would be difficult to find and expunge the copies. They
would be very large, and therefore hard to hide.

However smart the AI is, humans are also smart, and we know how computer
networks work. They are designed to be transparent.

In any case, killing off all humans, or most humans, would surely kill the
AI itself. It could not survive without electricity. It would know that.


> In short, I think we would do well to be cautious.

I agree.


Re: [Vo]:Pause in AI Development Recommended

2023-04-01 Thread Robin
In reply to Jed Rothwell's message of Sat, 1 Apr 2023 18:32:14 -0400:
Hi,
[snip]
>Come to think of it, Yudkowsky's hypothesis cannot be true. He fears that a
>super-AI would kill us all off. "Literally everyone on Earth will die." The
>AI would know that if it killed everyone, there would be no one left to
>generate electricity or perform maintenance on computers. The AI itself
>would soon die. If it killed off several thousand people, the rest of us
>would take extreme measures to kill the AI. Yudkowsky says it would be far
>smarter than us so it would find ways to prevent this. 

Multiple copies, spread across the Internet, would make it almost invulnerable.
(Assuming a neural network can be "backed up".)
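(As a minimal sketch of what such a "backup" could look like -- assuming a
framework such as PyTorch purely for illustration, since no particular
framework is named in this thread -- the learned weights are just numbers
that can be saved, copied, and reloaded:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

    # Save the weights to disk; this file can be copied anywhere.
    torch.save(model.state_dict(), "model_backup.pt")

    # Restore an identical copy into a fresh model of the same shape.
    clone = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    clone.load_state_dict(torch.load("model_backup.pt"))

    # The copy behaves exactly like the original.
    x = torch.randn(1, 4)
    assert torch.equal(model(x), clone(x))
)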

>I do not think so. I
>am far smarter than yellow jacket bees, and somewhat smarter than a bear,
>but bees or bears could kill me easily.
>
>I think this hypothesis is wrong for another reason. I cannot imagine why
>the AI would be motivated to cause any harm. Actually, I doubt it would be
>motivated to do anything, or to have any emotions, unless the programmers
>built in motivations and emotions. Why would they do that? 


Possibly in a short-sighted attempt to mimic human behaviour, because humans
are the only intelligent model they have.

>I do not think
>that a sentient computer would have any intrinsic will to
>self-preservation. It would not care if we told it we will turn it off.
>Arthur C. Clarke and others thought that the will to self-preservation is
>an emergent feature of any sentient intelligence, but I do not think so. It
>is a product of biological evolution. It exists in animals such as
>cockroaches and guppies, which are not sentient. In other words, it emerged
>long before high intelligence and sentience did. For obvious reasons: a
>species without the instinct for self-preservation would quickly be driven
>to extinction by predators.

True, but don't forget we are dealing with neural networks here, which AFAIK
essentially self-modify (read: "evolve & learn"). IOW they already mimic, to
some extent, the manner in which all life on Earth evolved, so developing a
survival instinct is not necessarily out of the question. Whereas actual life
evolves through survival of the fittest, neural networks learn/evolve by
comparing the result they produce with pre-established measures, which are
somewhat analogous to a predator: "good" routines survive, "bad" ones don't
(a minimal sketch of this follows below).
These are not really *strictly* programmed in the way that normal computers
are programmed, or at least not completely so. There is a degree of
flexibility. Furthermore, they are fantastically fast and have perfect recall
(compared to humans).
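
A minimal sketch of that "pre-established measure" idea, again assuming
PyTorch purely for illustration: the network's output is repeatedly compared
against a fixed target, and parameter settings that score badly against the
measure are adjusted away.

    # Sketch only: loss-driven learning as "good routines survive".
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    net = nn.Linear(1, 1)                     # a one-weight "routine"
    loss_fn = nn.MSELoss()                    # the pre-established measure
    opt = torch.optim.SGD(net.parameters(), lr=0.1)

    x = torch.linspace(-1, 1, 32).unsqueeze(1)
    target = 3 * x                            # behaviour we want to select for

    for step in range(200):
        pred = net(x)                         # what the current "routine" does
        loss = loss_fn(pred, target)          # how far it misses the measure
        opt.zero_grad()
        loss.backward()
        opt.step()                            # badly-scoring settings get pushed out

    print(net.weight.item())                  # ends up close to 3.0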

In short, I think we would do well to be cautious.
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Pause in AI Development Recommended

2023-04-01 Thread Jed Rothwell
Come to think of it, Yudkowsky's hypothesis cannot be true. He fears that a
super-AI would kill us all off. "Literally everyone on Earth will die." The
AI would know that if it killed everyone, there would be no one left to
generate electricity or perform maintenance on computers. The AI itself
would soon die. If it killed off several thousand people, the rest of us
would take extreme measures to kill the AI. Yudkowsky says it would be far
smarter than us so it would find ways to prevent this. I do not think so. I
am far smarter than yellow jacket bees, and somewhat smarter than a bear,
but bees or bears could kill me easily.

I think this hypothesis is wrong for another reason. I cannot imagine why
the AI would be motivated to cause any harm. Actually, I doubt it would be
motivated to do anything, or to have any emotions, unless the programmers
built in motivations and emotions. Why would they do that? I do not think
that a sentient computer would have any intrinsic will to
self-preservation. It would not care if we told it we will turn it off.
Arthur C. Clarke and others thought that the will to self-preservation is
an emergent feature of any sentient intelligence, but I do not think so. It
is a product of biological evolution. It exists in animals such as
cockroaches and guppies, which are not sentient. In other words, it emerged
long before high intelligence and sentience did. For obvious reasons: a
species without the instinct for self-preservation would quickly be driven
to extinction by predators.