On 12 February 2015 at 02:50, Jason Resch <[email protected]> wrote:

>> Sure, but the AI may still decide to do evil, perverse or self destructive
>> things. There is no contradiction in superintelligence behaving this way.
>>
>>
>
> It's an assumption to say there is no contradiction. If its beliefs are
> defined to be almost completely correct, why would its actions not follow
> its beliefs and also be almost completely correct? Unless we are talking
> about a superintelligence with some kind of malfunction, I would think its
> actions would be driven by its beliefs. Do you think morality is relative
> or universal?

Morality is a value, and values have no ultimate logical or empirical
justification.


-- 
Stathis Papaioannou

