On Sun, Aug 18, 2013 at 3:56 PM, John Clark <johnkcl...@gmail.com> wrote:
> Telmo Menezes wrote:
>
>> > You are starting from the assumption that any intelligent entity is
>> > interested in self-preservation.
>
>
> Yes, and I can't think of a better starting assumption than
> self-preservation; in fact that was the only one of Asimov's 3 laws of
> robotics that made any sense.

Ok, let's go very abstract and assume that any form of AI consists of
some way of exploring a tree of future world states. Better AIs can
look deeper and more accurately. They might differ on the terminal
world state they wish to achieve, but ultimately what makes them more
or less intelligent is how deep they can look.
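
To make that concrete, here's a rough Python sketch of the abstraction
(successors() and value() are placeholders I'm inventing: successors
returns the (action, next state) pairs reachable from a state, and
value scores a state according to the agent's value system):

# Minimal sketch: an agent explores the tree of future world states to a
# fixed depth (its "intelligence") and picks the action leading to the
# best reachable state under its value system.
def best_action(state, depth, successors, value):
    def best_reachable(s, d):
        kids = successors(s)             # [(action, next_state), ...]
        if d == 0 or not kids:
            return value(s)              # terminal states scored by the value system
        return max(best_reachable(nxt, d - 1) for _, nxt in kids)
    # A deeper search (larger depth) is "more intelligent" in this abstraction.
    return max(successors(state), key=lambda pair: best_reachable(pair[1], depth - 1))[0]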

When Spock famously said "the needs of the many outweigh the needs of
the few" and proceeded to sacrifice himself, he made a highly rational
decision _given_ his value system. An equally intelligent entity might
have a more evil value system, at least by average human standards. In
this case, Spock simply did not value self-preservation as highly as
other things.
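
As a toy illustration (the outcomes and numbers below are invented), two
agents that search identically can still choose differently, simply
because they rank the reachable outcomes differently:

# Same search, same depth; only the value system differs.
outcomes = {
    "sacrifice_self": {"self_alive": False, "others_saved": 400},
    "preserve_self":  {"self_alive": True,  "others_saved": 0},
}

def spock_value(o):        # "the needs of the many outweigh the needs of the few"
    return o["others_saved"] + (1 if o["self_alive"] else 0)

def self_first_value(o):   # self-preservation dominates everything else
    return (10**6 if o["self_alive"] else 0) + o["others_saved"]

def choose(value):
    return max(outcomes, key=lambda action: value(outcomes[action]))

print(choose(spock_value))       # -> sacrifice_self
print(choose(self_first_value))  # -> preserve_self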

>>  > I wonder if this drive isn't completely selected for by evolution.
>
>
> Well of course it was selected for by evolution and for a very good reason,
> those who lacked the drive for self-preservation didn't live long enough to
> reproduce.

Yes. But here we're talking about something potentially designed by
human beings. Creationism at last :)

>> > Would a human designed super-intelligent machine be necessarily
>> > interested in self-preservation?
>
>
> If you expect the AI to interact either directly or indirectly with the
> outside dangerous real world (and the machine would be useless if you
> didn't) then you sure as hell had better make him be interested in
> self-preservation!

To a greater or lesser extent, depending on its value system / goals.

> Even 1970's era space probes went into "safe mode" when
> they encountered a particularly dangerous situation, rather like a turtle
> retreating into its shell when it spots something dangerous.

Cool, I didn't know that.

>> > One idea I wonder about sometimes is AI-cracy: imagine we are ruled by
>> > an AI dictator that has one single desire: to make us all as happy as
>> > possible.
>
>
> Can you conceive of any circumstance where in the future you find that your
> only goal in life is the betterment of one particularly ugly and
> particularly slow reacting sea slug?

I can conceive of it: some horribly contrived mangling of my
neuro-circuitry that would result in the association of ugly sea slug
betterment with intense dopamine releases. But I get your point.

> Think about it for a minute, here you have an intelligence that is a
> thousand or a million or a billion times smarter than the entire human race
> put together, and yet you think the AI will place our needs ahead of its
> own. And the AI keeps on getting smarter and so from its point of view we
> keep on getting dumber, and yet you think nothing will change, the AI will
> still be delighted to be our slave. You actually think this grotesque
> situation is stable! Although balancing a pencil on its tip would be easy by
> comparison, year after year, century after century, geological age after
> geological age, you think this Monty Python like scenario will continue; and
> remember because its brain works so much faster than ours one of our years
> would seem like several million to it. You think that whatever happens in
> the future the master-slave relationship will remain as static as a fly
> frozen in amber. I don't think you're thinking.

As per usual in this mailing list, we might have some disagreement on
the precise definition of words. You insist on a very
evolutionary-bound sort of intelligence, while I'm trying to go more
abstract. The scenario you describe is absurd, but why would that make it impossible? It
would definitely be unstable in an evolutionary scenario, but what
about an immortal and sterile super-intelligent entity? Yes, it would
be absurd in a Pythonesque way, but so is existence overall. That was
kind of the point of Monty Python :) As you yourself keep saying (and
I agree), nature doesn't care what we think makes sense.

> It aint going to happen no way no how, the AI will have far bigger fish to
> fry than our little needs and wants, but what really disturbs me is that so
> many otherwise moral people wish such a thing were not impossible.
> Engineering a sentient but inferior race to be your slave is morally
> questionable, but astronomically worse is engineering a superior race to be
> your slave; or it would be if it were possible but fortunately it is not.

What if we could engineer it in a way that it would exist in a
constant state of bliss while serving us?

Telmo.

>   John K Clark
