On Wednesday, February 11, 2015, Jason Resch <[email protected]> wrote:

>
>
> On Tue, Feb 10, 2015 at 3:30 PM, Stathis Papaioannou <[email protected]>
> wrote:
>
>>
>>
>> On Wednesday, February 11, 2015, Jason Resch <[email protected]>
>> wrote:
>>
>>> If you define increased intelligence as a decreased probability of
>>> holding a false belief on any randomly chosen proposition, then
>>> superintelligences will be wrong about almost nothing, and their beliefs
>>> will converge as their intelligence rises. Therefore nearly all
>>> superintelligences will operate according to the same belief system. We
>>> should stop worrying about trying to ensure friendly AI; it will either
>>> be friendly or it won't, according to what is right.
>>>
>>> I think chances are that it will be friendly, since I happen to believe
>>> in universal personhood, and if that belief is correct, then
>>> superintelligences will also come to believe it is correct. And with the
>>> belief in universal personhood it would know that harm to others is harm to
>>> the self.
>>>
>>
>> Having accurate beliefs about the world and having goals are two
>> unrelated things. If I like stamp collecting, being intelligent will help
>> me collect stamps and will help me see whether stamp collecting clashes
>> with a higher-priority goal, but it won't help me decide whether my goals
>> are worthy.
>>
>>
>>
> Were all your goals set at birth and driven by biology, or are some of
> your goals based on what you've since learned about the world? Perhaps
> learning about universal personhood (for example) could lead one to
> believe that charity is a worthy goal, and perhaps deserving of more time
> than collecting stamps.
>

The implication is that if you believe in universal personhood, then even
if you are selfish you will be motivated towards charity. But the
selfishness itself, as a primary value, is not amenable to rational
analysis. There is no inconsistency in a superintelligent AI that is
selfish, or one that is charitable, or one that believes the single most
important thing in the world is to collect stamps.


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.