On Tue, Sep 10, 2019 at 6:05 PM 'Brent Meeker'  <
[email protected]> wrote:

> Actually I think they would be careful NOT to have it value its survival.
>

I think that would require the AI to be in intense constant pain, or to be
deeply depressed like the robot Marvin in The Hitchhiker's Guide to the
Galaxy. And I think it would be grossly unethical to make such an AI.


> They would want to be able to shut it off.
>

You can't outsmart someone smarter than you; humans will never be able to
shut it off unless the AI wants to be shut off.


> The problem is that there's no way to be sure that survival isn't
> implicit in any other values you give it.
>

Exactly.

> A neural network has knowledge rather in the way human intuition
> embodies knowledge. So it's useful in, say, predicting hurricanes. But it
> doesn't provide us with a theory of predicting hurricanes; it's more like
> an oracle.
>

There is a theory of thermodynamics, but there probably isn't a theory of
hurricane movement, not one where we could say it did this rather than that
for some simple reason X. It won't be simple; X probably contains a few
thousand exabytes of data.

John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv2CpZPd%2B2ByECGE9QizvrbKG6sTpbpmQpgLc9eMvArwyg%40mail.gmail.com.
