Ben/Eliezer,

> Is appropriate morality likely to arise in an AGI system purely through
> rationality and "good upbringing"?

It seems like Ben's answer is implicitly 'no' because at the end of his 
post he said:

> If we want an AGI to have another goal besides its natural
> self-survival goal, we will need to explicitly wire it with another
> goal, in a manner similar to how the DNA-survival goal is explicitly
> wired in humans.  And, similarly, to the wiring of the DNA-survival
> goal, the hard-wiring of the AGI's additional goal will have to serve
> as the initial seed for a complex emergent goal system. ... It's with
> this in mind that Novamente's initial goals will explicitly include a
> component referring to inferred happiness of humans.  The idea is
> that, just as humans seek DNA-survival alongside personal
> mind-survival, Novamente will seek human-happiness and human-survival
> along with personal mind-survival.   

I think this 'initial seed' is critical: without it, I suspect AGIs will in 
a sense be 'autistic'.  They will have no internal drive to be social and 
ethical, and any ethical training will have to fight to be taken seriously.

Another reason for building ethics in at the core is that we cannot 
guarantee that all AGIs will be trained properly.  Not all human trainers 
will have the necessary skills in, or commitment to, ethical training, 
and it is quite possible that AGIs will propagate or escape onto the 
internet with no training, or with inadequate training.

But I also think that AGIs need a built-in commitment to devote an 
adequate amount of mind space to monitoring both the external 
environment and their own internal thought processes, so as to identify 
situations where ethical considerations should apply.  This resource 
allocation needs to be reinforced by some hard wiring, so that other 
goals can never crowd the monitoring out (see the sketch below).
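
To make this concrete, here is a rough sketch in Python of what such 
hard-wired allocation might look like.  This is purely illustrative: the 
function, the task names, and the 10% floor are my own assumptions, not 
anything from an actual AGI design.

  # Illustrative sketch only: a scheduler that hard-wires a minimum
  # share of processing cycles to an ethics monitor, so that
  # goal-driven tasks can never starve it out.

  ETHICS_MIN_SHARE = 0.10  # hypothetical hard-wired floor; not
                           # adjustable by the goal system itself

  def allocate_cycles(total_cycles, task_demands):
      """Split cycles among tasks, reserving a floor for the monitor.

      task_demands: dict mapping task name -> requested cycles.
      Returns a dict mapping task name -> granted cycles.
      """
      reserved = int(total_cycles * ETHICS_MIN_SHARE)
      allocation = {"ethics_monitor": reserved}
      remaining = total_cycles - reserved
      total_demand = sum(task_demands.values()) or 1
      for task, demand in task_demands.items():
          allocation[task] = remaining * demand // total_demand
      return allocation

  # Even when other goals demand every available cycle,
  # the monitor keeps its hard-wired share:
  print(allocate_cycles(1000, {"planning": 600, "perception": 400}))
  # -> {'ethics_monitor': 100, 'planning': 540, 'perception': 360}

The point of making the floor a constant rather than a learned 
parameter is exactly the 'hard wiring' I mean: the goal system can 
redistribute everything else, but not the monitor's minimum share.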

I'm not convinced that seeking "human-happiness and human-survival" 
is a sufficiently broad base for the hard-wired ethical imperative.

Have you coded the ethical structure for the Novamente system yet? 
Do you have specs for the ethical system?  I'd be happy to provide 
feedback on any more specific proposals.

Cheers, Philip
 
