On Tue, Feb 10, 2015 at 6:40 PM, meekerdb <[email protected]> wrote:

>
> On 2/10/2015 8:47 AM, Jason Resch wrote:
>
> If you define increased intelligence as decreased probability of having a
> false belief on any randomly chosen proposition, then superintelligences
> will be wrong on almost nothing, and their beliefs will converge as their
> intelligence rises. Therefore nearly all superintelligences will operate
> according to the same belief system. We should stop worrying about trying
> to ensure friendly AI; it will either be friendly or it won't, according
> to what is right.
>
>
> The problem isn't beliefs, it's values.  Humans have certain core values
> selected by evolution; and in addition they have many secondary culturally
> determined values.  What values will super-AI have and where will it get
> them and will they evolve?  That seems to be the main research topic at the
> Machine Intelligence Research Institute.
>
>
Were all your values set at birth and driven by biology, or are some of
your values based on what you've since learned about the world? If values
can be learned, and if morality is a field with objective truths, then
why wouldn't a superintelligence approach a correct value system?

Jason

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.