On Tue, Feb 10, 2015 at 8:15 PM, Stathis Papaioannou <stath...@gmail.com>
wrote:

>
>
> On Wednesday, February 11, 2015, Jason Resch <jasonre...@gmail.com> wrote:
>
>>
>>
>> On Tue, Feb 10, 2015 at 3:30 PM, Stathis Papaioannou <stath...@gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Wednesday, February 11, 2015, Jason Resch <jasonre...@gmail.com>
>>> wrote:
>>>
>>>> If you define increased intelligence as a decreased probability of holding
>>>> a false belief on any randomly chosen proposition, then superintelligences
>>>> will be wrong about almost nothing, and their beliefs will converge as their
>>>> intelligence rises. Therefore nearly all superintelligences will operate
>>>> according to the same belief system. We should stop worrying about trying
>>>> to ensure friendly AI; it will either be friendly or it won't, according to
>>>> what is right.
>>>>
>>>> I think chances are that it will be friendly, since I happen to believe
>>>> in universal personhood, and if that belief is correct, then
>>>> superintelligences will also come to believe it is correct. And with the
>>>> belief in universal personhood it would know that harm to others is harm to
>>>> the self.
>>>>
>>>
>>> Having accurate beliefs about the world and having goals are two
>>> unrelated things. If I like stamp collecting, being intelligent will help
>>> me collect stamps and will help me see if stamp collecting clashes with
>>> a higher-priority goal, but it won't help me decide if my goals are worthy.
>>>
>>>
>>>
>> Were all your goals set at birth and driven by biology, or are some of
>> your goals based on what you've since learned about the world? Perhaps
>> learning about universal personhood (for example) could lead one to
>> believe that charity is a worthy goal, and perhaps one deserving of more
>> time than collecting stamps.
>>
>
> The implication is that if you believe in universal personhood, then even
> if you are selfish, you will be motivated towards charity. But the
> selfishness itself, as a primary value, is not amenable to rational
> analysis. There is no inconsistency in a superintelligent AI that is
> selfish, or one that is charitable, or one that believes the single most
> important thing in the world is to collect stamps.
>
>
>
But doing something well (regardless of what it is) is almost always
improved by greater knowledge, so wouldn't gathering knowledge become a
secondary subgoal for nearly any superintelligence that has goals? Is it
impossible that it might discover, and decide to pursue, other goals along
the way? After all, the capacity to change one's mind seems to be a
requirement for any intelligent process, or for any process on the path
towards superintelligence.

Jason
