On 11 Feb 2015, at 11:25, Stathis Papaioannou wrote:



On Wednesday, February 11, 2015, Jason Resch <[email protected]> wrote:


On Tue, Feb 10, 2015 at 8:15 PM, Stathis Papaioannou <[email protected]> wrote:


On Wednesday, February 11, 2015, Jason Resch <[email protected]> wrote:


On Tue, Feb 10, 2015 at 3:30 PM, Stathis Papaioannou <[email protected]> wrote:


On Wednesday, February 11, 2015, Jason Resch <[email protected]> wrote:

If you define increased intelligence as a decreased probability of holding a false belief about any randomly chosen proposition, then superintelligences will be wrong about almost nothing, and their beliefs will converge as their intelligence rises. Therefore nearly all superintelligences will operate according to the same belief system. We should stop worrying about trying to ensure friendly AI; it will either be friendly or it won't, according to what is right.
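
To make that convergence claim concrete, here is a minimal sketch, assuming each agent errs independently with probability eps on any given proposition (the error rates below are illustrative, not anything specified above). Two such agents agree on a random proposition with probability (1 - eps)^2 + eps^2, which tends to 1 as eps tends to 0:

    import random

    def beliefs(truths, eps, rng):
        # An agent believes each proposition's true value,
        # but errs independently with probability eps.
        return [t if rng.random() > eps else not t for t in truths]

    rng = random.Random(0)
    # Ground truth of 100,000 arbitrary propositions.
    truths = [rng.random() < 0.5 for _ in range(100000)]

    for eps in (0.3, 0.1, 0.01, 0.001):
        a = beliefs(truths, eps, rng)
        b = beliefs(truths, eps, rng)
        agree = sum(x == y for x, y in zip(a, b)) / len(truths)
        # Expected agreement: (1 - eps)**2 + eps**2
        print("error rate %.3f -> agreement %.4f" % (eps, agree))

As eps falls, the measured agreement approaches 1, which is the sense in which ever-more-intelligent agents' belief systems would converge under this definition.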

I think chances are that it will be friendly, since I happen to believe in universal personhood, and if that belief is correct, then superintelligences will also come to believe it is correct. And with the belief in universal personhood, they would know that harm to others is harm to the self.

Having accurate beliefs about the world and having goals are two unrelated things. If I like stamp collecting, being intelligent will help me collect stamps, and it will help me see whether stamp collecting clashes with a higher-priority goal, but it won't help me decide whether my goals are worthy.



Were all your goals set at birth and driven by biology, or are some of your goals based on what you've since learned about the world? Perhaps learning about universal personhood, for example, could lead one to believe that charity is a worthy goal, perhaps deserving of more time than collecting stamps.

The implication is that if you believe in universal personhood, then even if you are selfish, you will be motivated towards charity. But the selfishness itself, as a primary value, is not amenable to rational analysis. There is no inconsistency in a superintelligent AI that is selfish, or one that is charitable, or one that believes the single most important thing in the world is to collect stamps.



But doing something well (regardless of what it is) is almost always improved by having greater knowledge, so would not gathering greater knowledge become a secondary subgoal for nearly any superintelligence that has goals? Is it impossible that it might discover, and decide to pursue, other goals during that time? After all, the capacity to change one's mind seems to be a requirement for any intelligent process, or any process on the path towards superintelligence.

Sure, but the AI may still decide to do evil, perverse, or self-destructive things. There is no contradiction in a superintelligence behaving this way.

I am afraid that there is some truth here. Humans are obviously the species with the most perverse and (self-)destructive activity on this planet, sometimes even intentional.

But again, that is due to our competence. By definition, I would say that this is not intelligent behavior. That is why I distinguish intelligence from competence: competence tends to oppose intelligence.

I would say that the "virgin" universal machine, or better, the universal person attached to it, is maximally intelligent. To survive, it develops competence, which puts its intelligence to sleep.

Neoteny suggests that nature does invest in intelligence, by keeping babies and children close to their initial universality for a longer time. Our competence is slipping into our technologies, so we might evolve toward a possible equilibrium between intelligence and competence, but this is like babies plus atomic bombs, and such an "equilibrium" might be unstable.

Bruno




http://iridia.ulb.ac.be/~marchal/



