On 2/11/2015 7:50 AM, Jason Resch wrote:


On Wed, Feb 11, 2015 at 4:25 AM, Stathis Papaioannou <[email protected]> wrote:



    On Wednesday, February 11, 2015, Jason Resch <[email protected]
    <mailto:[email protected]>> wrote:



        On Tue, Feb 10, 2015 at 8:15 PM, Stathis Papaioannou
        <[email protected]> wrote:



            On Wednesday, February 11, 2015, Jason Resch <[email protected]>
            wrote:



                On Tue, Feb 10, 2015 at 3:30 PM, Stathis Papaioannou
                <[email protected]> wrote:



                    On Wednesday, February 11, 2015, Jason Resch
                    <[email protected]> wrote:

                        If you define increased intelligence as decreased
                        probability of having a false belief about any
                        randomly chosen proposition, then superintelligences
                        will be wrong about almost nothing, and their beliefs
                        will converge as their intelligence rises. Therefore
                        nearly all superintelligences will operate according
                        to the same belief system. We should stop worrying
                        about trying to ensure friendly AI; it will either be
                        friendly or it won't, according to what is right.

                        I think chances are that it will be friendly, since I
                        happen to believe in universal personhood, and if
                        that belief is correct, then superintelligences will
                        also come to believe it is correct. And with the
                        belief in universal personhood it would know that
                        harm to others is harm to the self.


                    Having accurate beliefs about the world and having goals
                    are two unrelated things. If I like stamp collecting,
                    being intelligent will help me collect stamps, and it
                    will help me see whether stamp collecting clashes with a
                    higher-priority goal, but it won't help me decide whether
                    my goals are worthy.



                Were all your goals set at birth and driven by biology, or
                are some of your goals based on what you've since learned
                about the world? Perhaps learning about universal personhood
                (for example) could lead one to believe that charity is a
                worthy goal, and perhaps deserving of more time than
                collecting stamps.


            The implication is that if you believe in universal personhood,
            then even if you are selfish you will be motivated towards
            charity. But the selfishness itself, as a primary value, is not
            amenable to rational analysis. There is no inconsistency in a
            superintelligent AI that is selfish, or one that is charitable,
            or one that believes the single most important thing in the
            world is to collect stamps.



        But doing something well (regardless of what it is) is almost always
        improved by having greater knowledge, so would not gathering greater
        knowledge become a secondary subgoal for nearly any superintelligence
        that has goals? Is it impossible that it might discover and decide to
        pursue other goals during that time? After all, the capacity to
        change one's mind seems to be a requirement for any intelligent
        process, or any process on the path towards superintelligence.


    Sure, but the AI may still decide to do evil, perverse or
    self-destructive things. There is no contradiction in a
    superintelligence behaving this way.



It's an assumption to say there is no contradiction. If its beliefs are defined to be almost completely correct, why would its actions not follow its beliefs and also be almost completely correct?

What does "correct" mean in this context? Instrumentally correct, i.e. well chosen to achieve it's goals? Or does it mean agreeing with Jason Resch's liberal humanist values?

Brent

Unless we are talking about a superintelligence with some kind of malfunction, I would think its actions would be driven by its beliefs. Do you think morality is relative or universal?

Jason
--
You received this message because you are subscribed to the Google Groups "Everything List" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

