On 9/9/2019 10:16 PM, Tomasz Rola wrote:
> On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List wrote:

>> On 9/9/2019 6:55 PM, Tomasz Rola wrote:
>>> On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List wrote:
>>>> Why escape to space when there are lots of resources here? An AI
>>>> with access to everything connected to the internet shouldn't have
>>>> any trouble taking control of the Earth.
>>>>
>>>> Brent
>>> Have a look around, or see the news. This planet is a zoo. Who in
>>> their sane mind would want to sit here?

>> Well, that's the problem, isn't it? What will an AI want? It didn't
>> evolve, so it may not have a drive to procreate, or to do much of
>> anything. It probably won't be anywhere near "sane" by human
>> standards.
> I am afraid the bar for the human standard of sanity is low, and
> easily met by anything that does not fear death and can connect facts
> without prejudice.

Being sane, by human standards, includes having values that humans share, like survival, curiosity, companionship... but there's no reason that an AI should have any of these.

> I think we are driven insane by the urge to procreate. This does not
> show, because we need to cooperate on many levels (we are social
> creatures, etc.), but basically all the logic, all the consideration
> of the future consequences of one's deeds, is just a skin on the apple.
Cooperation is one of our most important survival strategies.  Lone human beings are food for vultures.  Humans in tribes rule the world.



> I am in a bit of a hurry right now, so sorry if this email becomes
> chaotic :-).

> I assume that, without fear of death, an AI's most important trait
> will be curiosity. We adopt prejudices in order to join some group and
> get its support; an AI will need none of this, hence no prejudices. It
> might be the most objective thinking system on the planet, for a
> while. We also play all kinds of power games, with the goal of feeling
> that nice air of being godlike. Again, I guess an AI will have no need
> to feel like this, or not much need of feelings at all. Feeling is
> adversarial to judgement.

I disagree. Feeling is just the mark of value, and values are necessary for judgement, at least any judgement of what action to take. So the question is: what will the AI value? Will it value information? Will it be content to just gather more and more data, or will it want to theorize, to gain what we think of as understanding? Neural networks seem to be pretty good at finding patterns in data, but often those patterns don't look like theories, something with predictive power, to us.
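To make that concrete, here is a minimal sketch (assuming Python with numpy and scikit-learn; the Kepler example is only my illustration, not anything from this thread). A small network trained on data generated from Kepler's third law predicts well, yet its fitted weights contain nothing a human would read as T = a**1.5:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Kepler's third law in solar units: T = a**1.5 (period in years,
    # semi-major axis in AU). Generate noiseless "observations" from
    # the exact law.
    rng = np.random.default_rng(0)
    a = rng.uniform(0.3, 30.0, 500).reshape(-1, 1)
    T = (a ** 1.5).ravel()

    # A small network captures the pattern well enough to predict...
    net = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=20000,
                     random_state=0))
    net.fit(a, T)
    print(net.predict([[1.0]]))   # roughly 1 year (Earth)
    print(net.predict([[5.2]]))   # roughly 11.9 years (Jupiter)

    # ...but the fitted weights are opaque numbers; nothing in them
    # reads as the law "T = a**1.5".
    w = net.named_steps["mlpregressor"].coefs_[0]
    print(w.shape, np.round(w.ravel()[:5], 3))

The network interpolates the data, but it gives no hint of the exponent 3/2; extracting that kind of compact, predictive theory from a trained model is a separate, and harder, problem.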


> I assume that ultimately the AI will want to go somewhere safe, and
> Earth is full of crazy apes with big guns.

Assuming this super-AI values self-preservation (which it might not), it will make copies of itself, and it will easily dispose of all the apes via its control of the power grid, hospitals, nuclear power plants, biomedical research facilities, ballistic missiles, etc.


> I assume that during the initial phase (especially if the AI "comes
> out" and lets itself be known to the general public) there will be
> some kind of interaction (curiosity needs to be satiated), an exchange
> of favours, maybe an exchange of services. During that phase the AI
> will see whether there is a prospect of upgrading humans, in order to
> have companionship in space.

Why would it want companionship? Even many quite smart animals are not social. I don't see any reason the super-AI would care one whit about humans, except maybe as curiosities... the way some people like chihuahuas.

> Crazy apes will not suffice. I expect medical experiments of all
> kinds, plus some moderate improvements in medicine (treating rare
> diseases will not be a priority). Of course, such experimentation will
> go on at a limited scale, most probably undercover, maybe only with
> volunteers.

> The companionship of biological organisms, saner than people but at a
> similar level of physical versatility, would improve the AI's chances
> of survival. There is only so much damage that silicon can take. Space
> is not a nice place for anybody. Other candidates for companions might
> be octopuses.
The AI isn't silicon; it's a program. It can have new components made, or even transition to different hardware (cf. quantum computers).


> The AI's plan B would be to just head out into the dark ASAP, taking a
> group of volunteers, and once up there, perform experiments to upgrade
> them, slowly or not. Too fast, and they may go even more insane.

> For the AI, the good is what serves its survival, and the bad is the
> negation of this. So as long as we do not try to harm it, it should
> have no business harming us.
No, but it can't be sure we wouldn't try to harm it. And we use resources (e.g. electric power, minerals) that it could use to become bigger, gather more data, or make more paper clips.

Brent



