On 2/10/2015 9:55 PM, Jason Resch wrote:
On Tue, Feb 10, 2015 at 11:35 PM, meekerdb <[email protected]
<mailto:[email protected]>> wrote:
On 2/10/2015 5:49 PM, Jason Resch wrote:
On Tue, Feb 10, 2015 at 6:40 PM, meekerdb <[email protected]
<mailto:[email protected]>> wrote:
On 2/10/2015 8:47 AM, Jason Resch wrote:
If you define increased intelligence as decreased probability of holding a false belief on any randomly chosen proposition, then superintelligences will be wrong about almost nothing, and their beliefs will converge as their intelligence rises. Therefore nearly all superintelligences will operate according to the same belief system. We should stop worrying about trying to ensure friendly AI; it will either be friendly or it won't, according to what is right.
The problem isn't beliefs, it's values. Humans have certain core values selected by evolution; in addition they have many secondary, culturally determined values. What values will a super-AI have, where will it get them, and will they evolve? That seems to be the main research topic at the Machine Intelligence Research Institute.
Were all your values set at birth and driven by biology, or are some of your values based on what you've since learned about the world?
Isn't that what I wrote just above?
If values can be learned, and if morality is a field with objective truth, then why wouldn't a superintelligence approach a correct value system?
What would correct mean? Is vanilla *really* better than chocolate?
I think there are core values - self-preservation, love of offspring, desire for companionship, desire for power - that are provided by evolution and adapt people to live in extended families or small tribes. The other values we learn from our culture are the result of cultural evolution selecting values and ethics that let us realize our core values while living in towns, cities, and nations.
Do you think in the long run that human society is evolving toward a more fair, more just, more correct system of values?
Not more correct, but perhaps one satisfying more of those core values.
If so, why can't a machine?
It can, but only if it has some core values, and those values result in conflicts which can be resolved in different ways. Then it may find better ways to resolve the conflicts, because it has some core values against which to measure "better" or "worse".
Particularly one with the thinking capacity of a billion human minds operating a million times faster?
Brent
Madness in individuals is rare. In organizations it is the rule.
--- Friedrich Nietzsche
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.