On Tue, Feb 10, 2015 at 3:30 PM, Stathis Papaioannou <[email protected]>
wrote:

>
>
> On Wednesday, February 11, 2015, Jason Resch <[email protected]> wrote:
>
>> If you define increased intelligence as a decreased probability of
>> holding a false belief about any randomly chosen proposition, then
>> superintelligences will be wrong about almost nothing, and their beliefs
>> will converge as their intelligence rises. Therefore, nearly all
>> superintelligences will operate according to the same belief system. We
>> should stop worrying about trying to ensure friendly AI; it will either
>> be friendly or it won't, according to what is right.
>>
>> I think chances are that it will be friendly, since I happen to believe
>> in universal personhood. If that belief is correct, then
>> superintelligences will also come to believe it is correct, and with a
>> belief in universal personhood they would know that harm to others is
>> harm to the self.
>>
>
> Having accurate beliefs about the world and having goals are two unrelated
> things. If I like stamp collecting, being intelligent will help me collect
> stamps, and it will help me see whether stamp collecting clashes with a
> higher-priority goal, but it won't help me decide whether my goals are
> worthy.
>
>
>
Were all your goals set at birth and driven by biology, or are some of your
goals based on what you've since learned about the world? Learning about
universal personhood, for example, could lead one to believe that charity is
a worthy goal, perhaps one deserving of more time than stamp collecting.

Jason
