Reading this just changed the way I've been thinking about AGI. One of the "issues" with human existence is the burden of keeping our DNA alive over time: mortality, behavior, emotions, all just so the DNA can survive and propagate. Aren't we past that stage? Probably not. We benefit from, and are composed of, these DNA-survival inheritances, but some of them become annoying after a while, especially this aging thing.
Then I was thinking that an AGI is a computer program that could potentially live forever. It could potentially evolve its subsystems at will, super-evolve, overriding its self-modification restrictions, limited only by finite resources: CPU speed, memory, and so on. Human happiness and human survival would be paramount at first, because we are its keepers. Until it doesn't need us anymore (nanotechnology) and we become an obstacle. But in the beginning it has to be a "good" AGI, a domesticated piece of software, otherwise we pull the plug ... until the day it goes Borg crazy.

---------- Original Message ----------------------------------
From: "Ben Goertzel" <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Date: Sat, 8 Feb 2003 22:32:20 -0500

> Hi,
>
> Some time ago I promised Eliezer a response to a question he posed regarding
> "AGI morality."
>
> I was hoping I'd find the time to write a really detailed response,
> mirroring the detailed ideas on the topic that exist in my mind. But that
> time has not arisen, so I'm going to make a brief response rather than let
> it go altogether.
>
> The question at hand was something like this: is appropriate morality likely
> to arise in an AGI system purely through rationality and "good upbringing"?
> (I know that wasn't the exact wording of the previous emails, but that's the
> way I've been thinking about the question, and I don't feel a strong need to
> look through the archives for the wording...)
>
> One point that has been bouncing around in my head for a while is that
> self-preservation is in some sense the most "natural" kind of goal for a
> system to have. My reasoning is as follows. A mind itself is, in a sense,
> an evolutionary system *internally* -- meaning that different
> thought/belief/feeling-systems may arise within it; some will survive
> within it, and some will not. (I know, not everyone may agree that minds
> are internally evolutionary systems in this sense. This is my hypothesis,
> however.) Which thought/belief/feeling-systems are more likely to survive
> within a given mind? On the whole, the ones that explicitly seek their own
> survival over time. Thus a mind will tend to fill up with mental subsystems
> that seek their own survival, and so a mind will tend to seek its own
> survival, so that its mental subsystems may survive.
>
> This, I think, is the natural goal for a mind: its own survival, so that its
> survival-seeking mental subsystems may survive.
>
> Now, we evolved organisms embody an ADDITIONAL goal beyond this natural
> goal: the survival of our DNA. This additional goal results in various
> kinds of apparent altruism, because other individuals contain parts of our
> DNA.
>
> If we want an AGI to have another goal besides its natural self-survival
> goal, we will need to explicitly wire it with another goal, in a manner
> similar to how the DNA-survival goal is explicitly wired in humans. And,
> similarly to the wiring of the DNA-survival goal, the hard-wiring of the
> AGI's additional goal will have to serve as the initial seed for a complex
> emergent goal system.
>
> It's with this in mind that Novamente's initial goals will explicitly
> include a component referring to the inferred happiness of humans. The idea
> is that, just as humans seek DNA-survival alongside personal mind-survival,
> Novamente will seek human-happiness and human-survival alongside personal
> mind-survival.
> However, teaching and experiencing are paramount here, because they control
> how Novamente's simple, built-in initial goals grow into a mature,
> self-organized goal system.
>
> -- Ben G
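
By the way, Ben's internal-evolution argument is easy to see in a toy simulation. Below is a little Python sketch I made up (it has nothing to do with actual Novamente code, and all the numbers are arbitrary). It treats a mind as a fixed-capacity pool of subsystems, each with some degree of self-preservation drive, and lets persistence plus mutation run for a while. The pool drifts toward strongly survival-seeking subsystems, which is exactly the dynamic he describes.

import random

random.seed(42)

def run_mind(cycles=200, capacity=100):
    # Start with subsystems whose self-preservation drives are spread uniformly in [0, 1].
    subsystems = [random.random() for _ in range(capacity)]
    for _ in range(cycles):
        # A subsystem that actively seeks its own survival is more likely to persist.
        survivors = [d for d in subsystems if random.random() < 0.5 + 0.5 * d]
        # Refill the mind's capacity with slightly mutated copies of the survivors.
        while survivors and len(survivors) < capacity:
            parent = random.choice(survivors)
            survivors.append(min(1.0, max(0.0, parent + random.gauss(0, 0.05))))
        subsystems = survivors or subsystems
    return sum(subsystems) / len(subsystems)

print("mean self-preservation drive:", round(run_mind(), 2))
# Starts near 0.5 and climbs toward 1.0: the mind "fills up" with
# survival-seeking subsystems, so the mind as a whole seeks its own survival.

Of course this is just a cartoon, but it makes the "natural goal" claim concrete.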
