On 2/10/2015 6:15 PM, Stathis Papaioannou wrote:
The implication is that if you believe in universal personhood then even if you are selfish you will be motivated towards charity.

If humans are any indication, a super-intelligence will be incredibly good at rationalizing whatever it already wants to do. For example, a selfish AI might reason: if personhood is universal, then what's good for me is good for the human race.

Brent

But the selfishness itself, as a primary value, is not amenable to rational analysis. There is no inconsistency in a superintelligent AI that is selfish, or one that is charitable, or one that holds that the single most important thing in the world is to collect stamps.

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.