On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List 
wrote:
> 
> 
> On 9/9/2019 6:55 PM, Tomasz Rola wrote:
> >On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List 
> >wrote:
> >>Why escape to space when there a lots of resources here?  An AI with
> >>access to everything connected to the internet shouldn't have any
> >>trouble taking control of the Earth.
> >>
> >>Brent
> >Have a look around, or see the news. This planet is a zoo. Who in his
> >sane mind would like to sit here?
> >
> 
> Well that's the problem isn't it.  What will an AI want?  It didn't
> evolve so it may not have a drive to procreate or do much of
> anything.  It probably won't be anywhere near "sane" by human
> standards.

I am afraid the bar for the human standard of sanity is low, and easily
met by anything which does not fear death and can connect facts without
prejudice. I think we are driven insane by the procreation urge. This
does not show, because we need to cooperate at many levels (social
creatures etc.), but, basically, all the logic, all consideration for
the future consequences of one's deeds, is just a skin on the apple.

I am in a bit of a hurry right now, so sorry if this email becomes
chaotic :-).

I assume that without fear of death, an AI's most important trait will
be curiosity. We adopt prejudices in order to join some group and get
its support; an AI will need none of this, hence no prejudices. It
might be the most objective thinking system on the planet, for a
while. We also play all kinds of power games, with the goal of feeling
this nice air of being godlike. Again, I guess an AI will have no need
for feeling like this, or not much in the way of feelings at
all. Feeling is adversarial to judgement.

I assume that ultimately, AI will want to go somewhere safe, and Earth
is full of crazy apes with big guns.

I assume that during the initial phase (especially if the AI does a
"coming out" and lets itself be known to the general public) there will
be some kind of interaction (curiosity needs to be satiated), an
exchange of favours, maybe an exchange of services. During that phase
the AI will see whether there is a prospect of upgrading humans, in
order to have companionship in space. Crazy apes will not suffice. I
expect medical experiments of all kinds, plus some moderate
improvements in medicine (treating rare diseases will not be a
priority). Of course, such experimentation will go on at a limited
scale, most probably undercover, maybe only with volunteers.

The companionship of biological organisms, more sane than people but at
a similar level of physical versatility, would improve the AI's chances
of survival. There is only so much damage silicon can take. Space is
not a nice place for anybody. Other candidates for companions might be
octopuses.

The AI's plan B would be to just go into the dark asap, taking a group
of volunteers, and once up there, perform experiments to upgrade them,
slowly or not. Too fast, and they may go even more insane.

The good, for an AI, is what serves its survival; the bad is the
negation of this. So as long as we do not try to harm it, it should
have no business harming us.

-- 
Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.      **
** As the answer, master did "rm -rif" on the programmer's home    **
** directory. And then the C programmer became enlightened...      **
**                                                                 **
** Tomasz Rola          mailto:[email protected]             **

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/20190910051632.GC4441%40tau1.ceti.pl.