On Tue, Sep 13, 2016  Telmo Menezes <te...@telmomenezes.com> wrote:

> In my "designed superintelligence" scenario, the entity is confronted
> with a protection mechanism that was conceived by a lesser
> intelligence.


Yes, the most recent iteration of the Jupiter Brain was designed by
something that was less intelligent than itself, but at least its
designer had some intelligence; we were produced by Evolution, which had
no intelligence at all, and yet even we don't all decide to become drug
addicts. But I understand where the concern comes from: could drug
addiction be the first sign of a very dangerous positive feedback loop?

During most of human existence addiction was a non-issue. Then, around
8000 BC, alcoholic beverages were invented, but they were so dilute
you'd really have to work at it to get into trouble. About 500 years
ago distilled alcoholic beverages were invented, and it became much
easier to become an alcoholic. Today we have many drugs that are far
more powerful than alcohol. Could the answer to the Fermi Paradox be
that this trend will continue exponentially? Could the universe be full
of ETs who are all lotus eaters, experiencing a billion-year-long orgasm
and accomplishing nothing? Maybe. But maybe not: that scenario assumes
absolutely nobody can resist taking the drug (or rather its electronic
counterpart), not even those who fully understand where taking the drug
will lead. We're not as smart as a Jupiter Brain, but most of us are
smart enough to know that taking crack would be a bad idea.

> if we want the designed AI to follow certain rules, we are the ones
> setting the rules and we are the ones trying to prevent it from
> changing them.

If you're successful in making an AI that cannot change its basic goal
structure then you've made an insane AI that will be of no use to us or
to itself or to anything else. Asimov's three laws of robotics make for
some very entertaining stories, but they could never work in practice.

When people talk about making a friendly AI (aka a slave AI) they are
talking nonsense. It's nuts to think an AI will always defer to humans
and obey their every command no matter how much more intelligent it
becomes than any member of the human species, and that it will continue
to obey even when it becomes more intelligent than the entire species
put together. It just isn't going to happen.

  John K Clark

