Hi,

Some time ago I promised Eliezer a response to a question he posed
regarding "AGI morality."

I was hoping I'd find the time to write a really detailed response,
mirroring the detailed ideas on the topic that exist in my mind.  But that
time has not materialized, so I'm going to give a brief response rather
than let it go altogether.

The question at hand was something like this: Is appropriate morality likely
to arise in an AGI system purely through rationality and "good upbringing"?
(I know that wasn't the exact wording of the previous emails, but that's the
way I've been thinking about the question, and I don't feel a strong need to
look through the archives for the wording...).

One point that has been bouncing around in my head for a while is that
self-preservation is in some sense the most "natural" kind of goal for a
system to have.  My reasoning is as follows.  A mind itself is, in a sense,
an evolutionary system *internally* -- meaning that different
thought/belief/feeling-systems may arise within it, and some will survive
within it, and some will not.  (I know, everyone may not agree that minds
are internally-evolutionary systems in this sense.  This is my hypothesis,
however.)  Which thought/belief/feeling-systems are more likely to survive
within a given mind?  On the whole, the ones that explicitly seek their own
survival over time.  Thus a mind will tend to fill up with mental subsystems
that seek their own survival; and since those subsystems can persist only as
long as the mind hosting them persists, the mind as a whole will tend to
seek its own survival, so that its mental subsystems may survive.

This, I think, is the natural goal for a mind: its own survival, so that its
survival-seeking mental subsystems may survive.
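
To make the claim concrete, here is a toy simulation (plain Python,
nothing to do with Novamente's internals; every number in it is invented
purely for illustration).  Subsystems vary in how strongly they act to
preserve themselves; the more strongly self-preserving ones are more
likely to persist from step to step; and new subsystems arise as
variations of surviving ones.

import random

random.seed(0)

# A "mind" as a population of subsystems, each with a self-preservation
# tendency in [0, 1] governing how strongly it acts to keep itself active.
subsystems = [random.random() for _ in range(1000)]

for step in range(50):
    survivors = []
    for tendency in subsystems:
        # Subsystems that work harder at persisting are more likely
        # to still be around at the next step.
        if random.random() < 0.5 + 0.5 * tendency:
            survivors.append(tendency)
    # New subsystems arise as slight variations of surviving ones.
    while len(survivors) < 1000:
        parent = random.choice(survivors)
        survivors.append(min(1.0, max(0.0, parent + random.gauss(0, 0.05))))
    subsystems = survivors

mean = sum(subsystems) / len(subsystems)
print("mean self-preservation tendency after selection: %.2f" % mean)

Run it and the mean self-preservation tendency drifts steadily upward
toward 1; that drift is all the argument above relies on.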

Now, we evolved organisms embody an ADDITIONAL goal beyond this natural
goal: the survival of our DNA.  This additional goal results in various
kinds of apparent altruism, because other individuals contain parts of our
DNA.

If we want an AGI to have another goal besides its natural self-survival
goal, we will need to explicitly wire that goal into it, in a manner
similar to how the DNA-survival goal is explicitly wired into humans.  And,
just as with the wiring of the DNA-survival goal, the hard-wiring of the
AGI's additional goal will have to serve as the initial seed for a complex
emergent goal system.

It's with this in mind that Novamente's initial goals will explicitly
include a component referring to the inferred happiness of humans.  The idea is
that, just as humans seek DNA-survival alongside personal mind-survival,
Novamente will seek human-happiness and human-survival along with personal
mind-survival.
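
To be concrete about what such a seed might look like, here is a
deliberately over-simplified sketch in Python.  It is not Novamente's
actual goal representation; the component names, the estimates, and the
weights are all invented purely for illustration.

from dataclasses import dataclass

# Hypothetical sketch only; not Novamente's goal structures.
@dataclass
class WorldModelEstimates:
    own_survival: float      # system's estimate of its own continued operation
    human_happiness: float   # inferred average human happiness, in [0, 1]
    human_survival: float    # inferred likelihood that humans continue to survive

def seed_goal_satisfaction(est: WorldModelEstimates) -> float:
    """Blend self-survival with explicitly wired-in human-oriented terms.

    The structural point: the human-oriented components are present in the
    top-level goal from the start, as the seed from which a richer,
    self-organized goal system is meant to grow.  The weights are arbitrary.
    """
    return (0.4 * est.own_survival +
            0.3 * est.human_happiness +
            0.3 * est.human_survival)

print(seed_goal_satisfaction(WorldModelEstimates(0.9, 0.6, 0.95)))

The point is purely structural: the human-oriented terms sit in the
top-level goal from the outset, rather than being expected to emerge from
self-survival plus rationality and good upbringing alone.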

However, teaching and experience are paramount here, because they control
how Novamente's simple, built-in initial goals grow into a mature,
self-organized goal system.


-- Ben G

