On 8/27/2014 4:53 AM, Terren Suydam wrote:

        You're talking about an AI that arrives at novel solutions, which
        requires the ability to invent/simulate/act on new models in new
        domains (AGI).


    Evolutionary computation already achieves novelty and invention, to a
    degree. I concur that it is still not AGI. But it could already be a
    threat, given enough computational resources.
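As a concrete illustration of that point, here is a minimal sketch of an evolutionary loop (not from the original thread; the problem, parameters, and function names are my own illustrative choices). It evolves bitstrings toward the toy "OneMax" objective via tournament selection, one-point crossover, and mutation, showing how novelty arises from variation plus selection without any explicit model of the solution:

```python
import random

def evolve(length=20, pop_size=30, generations=100, mutation_rate=0.05, seed=0):
    """Minimal evolutionary loop on bitstrings.

    Fitness = number of 1s (the classic OneMax toy problem).
    """
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    # Random initial population of bitstrings.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def pick():
        # Binary tournament selection: the fitter of two random individuals.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(length)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Flip each bit with probability mutation_rate.
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            offspring.append(child)
        # Elitist survival: keep the best pop_size of parents + offspring.
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]

    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # fitness of the best evolved individual
```

Nothing here "understands" the objective; the search still discovers high-fitness solutions, which is the limited sense in which evolutionary computation already produces novelty.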


AGI is a threat because its utility function would necessarily be sufficiently "meta" that it could create novel sub-goals. We would not necessarily be able to control whether it chose goals compatible with ours.

On the other hand, we're not that good at choosing goals for ourselves; e.g., ISIS has chosen the goal of imposing a ruthless religious tyranny.

Brent

