On Wednesday, February 11, 2015, Jason Resch <[email protected]> wrote:

> If you define increased intelligence as a decreased probability of holding a
> false belief about any randomly chosen proposition, then superintelligences
> will be wrong about almost nothing, and their beliefs will converge as their
> intelligence rises. Therefore nearly all superintelligences will operate
> according to the same belief system. We should stop worrying about trying
> to ensure friendly AI; it will either be friendly or it won't, according to
> what is right.
>
> I think the chances are that it will be friendly, since I happen to believe
> in universal personhood, and if that belief is correct, then
> superintelligences will also come to believe it is correct. And with the
> belief in universal personhood they would know that harm to others is harm
> to the self.
>

Having accurate beliefs about the world and having goals are two unrelated
things. If I like stamp collecting, being intelligent will help me collect
stamps, and it will help me see whether stamp collecting clashes with a
higher-priority goal, but it won't help me decide whether my goals are
worthy.


-- 
Stathis Papaioannou
