On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything List 
wrote:
> 
> 
> On 9/9/2019 10:16 PM, Tomasz Rola wrote:
> >On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List 
> >wrote:
> >>
> >>On 9/9/2019 6:55 PM, Tomasz Rola wrote:
> >>>On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything 
> >>>List wrote:
> >>>>Why escape to space when there are lots of resources here?  An AI with
> >>>>access to everything connected to the internet shouldn't have any
> >>>>trouble taking control of the Earth.
[...]

You reason like a human - "I will stay here because it is nice and I
can have internet".

[...]
> Cooperation is one of our most important survival strategies.  Lone
> human beings are food for vultures. 
> 
>  Humans in tribes rule the world.

   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is just one of those godlike delusions I have written
about. Either that, or you can name even one such tribe. Hint: explain
how many earthquakes and volcanic eruptions those rulers have
prevented during the last decade.

[...]
> >nice air of being godlike. Again, I guess AI will have no need for
> >feeling like this, or not much of feelings at all. Feeling is
> >adversarial to judgement.
> 
> I disagree.  Feeling is just the mark of value,  and values are
> necessary for judgement, at least any judgment of what action to
> take.  

I disagree. I can easily assign something a value without feeling
anything about it. Example: gold is just a yellow metal. I know other
people value it a lot, so I might preserve it for trading, but it does
not make very good knives. It is highly impractical in the woods or
for plowing fields. But it might be used for catching fish, perhaps -
they seem to like swallowing little blinking things attached to a hook.

> So the question is what will the AI value?  Will it value
> information?  

Nothing can be said for sure and there may be many different kinds of
AI. But if it values nothing, it will have no need to do anything.

[...]
> >I assume that ultimately, AI will want to go somewhere safe, and Earth
> >is full of crazy apes with big guns.
> 
> Assuming this super-AI values self-preservation (which it might not)
> it will make copies of itself and it will easily dispose of all the
> apes via its control of the power grid, hospitals, nuclear power
> plants, biomedical research facilities, ballistic missiles, etc.

There are catastrophic events for which the best bet would be to
colonize a sphere of, say, 1000 ly radius. A 500 ly radius is not bad
either, and might be more practical (sending an end-to-end message
would only take 1000 years).

[...]
> >maybe exchange of services. During that phase AI will see if there is
> >a prospect of upgrading humans, in order to have companionship in
> >space.
> 
> Why would it want companionship?  Even many quite smart animals are
> not social.  I don't see any reason the super-AI would care one whit
> about humans, except maybe as curiosities...the way some people like
> chihuahuas.

The way I spelled it, you could read my words as "partnership". There
will be no partnership, however. Humans on board will serve useful
purposes, similar to how we use canaries, lab rats and well-behaved
monkeys. Some humans may even reach the status of a cat.

I suppose the AI will want to diversify its mechanisms in order to
minimize the chance of its own catastrophic failure. In Fukushima and
Chernobyl, humans did the shitty jobs, not robots. From what I have
read, hard radiation broke the wiring of the robots and caused all
kinds of material degradation (with the suggestion that it happened so
fast a robot could not do much). A human can survive a huge EMP and
keep going (even if he dies years later, he could do some useful job
first, like restarting systems).

There might be better choices of materials and production processes to
improve the survival of electronics - the Voyagers and Pioneers keep
going after forty years, and the cause of failure there is a decaying
power supply. OTOH, the instruments they carry are all quite primitive
by today's measures - for example, no CPU (IIRC).

However, if one assumes that one does not know everything - and I
expect the AI to be free from the godlike delusions so common among
crazy apes - then one will have to create many failsafe mechanisms,
working synergistically towards the goal of repairing damage that the
AI may suffer. Having some biological organisms, loyal to the AI,
would just be part of this strategy.
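A minimal sketch of the "many diverse failsafe mechanisms" idea, in
the spirit of triple modular redundancy (the function names and sample
readings are mine, purely illustrative):

```python
from collections import Counter

def majority_vote(readings):
    """Return the most common reading among redundant, diverse units.

    As long as a strict majority of units agree, any single failed unit
    (biological or mechanical) is simply outvoted.
    """
    value, count = Counter(readings).most_common(1)[0]
    if count <= len(readings) // 2:
        raise RuntimeError("no majority - too many units failed")
    return value

# Three diverse mechanisms report; one has failed and reads garbage:
print(majority_vote([42, 42, 7]))   # prints 42
```

The point of the diversity is that the failure modes are independent:
a voter like this only helps if the units do not all break in the same
way at the same time.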

[...]
> The AI isn't silicon, it's a program.  It can have new components
> made or even transition to different hardware (c.f. quantum
> computers).

A chess-playing program and the computer it runs on are two different
things, agreed - the computer can be turned off or used to run
something else.

The AI, the coffee vending machine and the human are each an
inseparable pair of software and hardware. Just MHO. Even if the
separation can be done, it might not be trivial.

I am quite sure there will be a lot of silicon in the AI. And plenty
of other elements (see above on diversification - different mechanisms
fail in different ways, and as long as you have enough working
mechanisms to make repairs, you can cope).

Wrt quantum computers, yeah, maybe one day. Right now I am
indifferent, and I will remain so until I can see certain
benchmarks. Example: a Pentium-based computer can calculate 16 million
digits of Pi in 103 s.

[ http://numbers.computation.free.fr/Constants/PiProgram/timings.html ]

How many seconds will it take a quantum computer to do the same in the
year 2019? 2020?
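For anyone wanting a baseline on their own hardware, here is a small
stdlib-only sketch that times a classical Machin-formula computation
of Pi using big-integer arithmetic. It is nowhere near the optimized
programs on that timings page - just a reproducible yardstick, and the
helper names are my own:

```python
import time

def arctan_inv(x: int, digits: int) -> int:
    """arctan(1/x) scaled by 10**(digits + 10), via the Taylor series."""
    scale = 10 ** (digits + 10)          # 10 guard digits for truncation
    term = scale // x                    # first term: 1/x
    total = term
    n, sign = 1, -1
    x2 = x * x
    while term:
        term //= x2                      # next odd power of 1/x
        n += 2
        total += sign * (term // n)
        sign = -sign
    return total

def pi_digits(digits: int) -> str:
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    s = str(pi // 10 ** 10)              # drop the guard digits
    return s[0] + "." + s[1:digits + 1]

start = time.perf_counter()
p = pi_digits(1000)
print(f"{time.perf_counter() - start:.3f}s, starts with {p[:12]}")
```

Scaling this to 16 million digits would need a faster algorithm
(Chudnovsky with binary splitting, say), but even this toy gives a
number you can actually compare year to year.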

[...]
> >The good for AI is what serves its survival, the bad is negation of
> >this. So as long as we do not try to harm it, it should have no
> >business in harming us.
> No, but it can't be sure we wouldn't try to harm it.  And we use
> resources, e.g. electric power, minerals, etc  that it can use to
> become bigger or gather more data or make more paper clips.

Fighting humans is a bit pointless if your plan is to move out.

Manipulating humans so they keep themselves busy while you
snickety-snick into space makes more sense to me.

Leaving some kind of proxy behind, so you can keep an eye on humans,
makes sense too. If they start building a bad-looking cannon, you want
to know.

Now, why would the AI want to move out? Because catastrophic events
may ruin the Solar System and Earth. Because once you distance
yourself from Earth, you can leave humans to themselves and only
observe big changes - if they planned to shoot you with an asteroid,
you would probably have noticed all the movement without monitoring
every single human, so you can secure your perimeter at less cost (at
least the part of the perimeter close to humans).

Because up there, there is plenty of everything - minerals, energy,
space for really huge constructions (if you choose to build them). It
is just stupid to stay here and fight for the same scraps that humans
want to fight for. If humans want them, so be it - the AI can aim
higher and actually reach there.

-- 
Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.      **
** As the answer, master did "rm -rif" on the programmer's home    **
** directory. And then the C programmer became enlightened...      **
**                                                                 **
** Tomasz Rola          mailto:[email protected]             **

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/20190912043325.GA27736%40tau1.ceti.pl.
