Kevin,
 
In practice, it seems that an AGI is likely to have an "owner" or a handful of them, who will have the kind of power you describe.  For instance, if my team should succeed in creating a true Novamente AGI, then even if others participate in teaching the system, we will have overriding power to make the changes we want.  This goes along with the fact that artificial minds are not initially going to be given any "legal rights" in our society (whereas children have some legal rights, though not as many as adults).
 
At least two questions come up then, right?
 
1) Depending on the AGI architecture, enforcing one's opinion on the AGI may be very easy or very difficult.  [In Novamente, I guess it will be "moderately difficult"]
 
2) Once the AGI has achieved a certain level of intelligence, it may actively resist having its beliefs and habits forcibly altered.... [until one alters this habitual resistance ;)]
 
-- Ben G
 
 
 
 
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
Sent: Friday, November 29, 2002 11:28 PM
To: [EMAIL PROTECTED]
Subject: [agi] father figure

Hello all,
 
Hope everyone had a good holiday...
 
I had a question regarding AGI.  As with a human being, it is very important whom we learn from, as these people shape what we become, or at least have a very strong influence on it.  Even for humans, this "programming" can be extremely difficult to undo. 
 
Considering an AGI, I feel that it will be extremely important for it to learn from "quality" sources.  Along these lines, I was wondering whether it is planned that an AGI might value the input of certain people over others.  This, of course, would have to be built into the system.  But just as our parents brought us into the world, and we therefore value their opinion over others (at least while we are very young!), would it be wrong to encode this into an AGI?
 
To carry this point further...  Suppose the AGI is told by many people something that is neither beneficial nor productive, like "Killing is good".  The AGI would learn this and possibly accept it through this reinforcement.  Would it be desirable to have a "father figure" of sorts (or "mother figure" to be politically correct) who could come along and, seeing that the AGI had been given this bad mojo, tell it "No!  It is not good to kill!"?  Because of the relative "weight" of the father figure, that single statement, possibly coupled with an explanation, would be enough for the AGI to undo all the prior learning in that area...
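Just to make the weighting idea concrete, here is a tiny Python sketch of one way source-weighted belief revision could work in principle.  To be clear, this is not Novamente's mechanism or anything actually proposed in this thread; the WeightedBelief class, the trust numbers, and the simple averaging rule are all hypothetical, chosen only to show how a single high-trust correction could numerically outweigh many low-trust reinforcements.

```python
# Purely illustrative sketch of source-weighted belief revision.
# Nothing here reflects Novamente's actual design; the class, weights,
# and averaging rule are hypothetical examples of the "father figure" idea.

class WeightedBelief:
    """Tracks evidence for/against a proposition, weighted by source trust."""

    def __init__(self, proposition):
        self.proposition = proposition
        self.weighted_sum = 0.0   # sum of trust-weighted votes (+1 agree, -1 disagree)
        self.total_weight = 0.0   # sum of trust weights seen so far

    def tell(self, agrees, source_trust):
        """Record one statement from a source with the given trust weight."""
        vote = 1.0 if agrees else -1.0
        self.weighted_sum += vote * source_trust
        self.total_weight += source_trust

    def strength(self):
        """Return belief strength in [-1, 1]; positive means roughly 'accepted'."""
        if self.total_weight == 0.0:
            return 0.0
        return self.weighted_sum / self.total_weight


belief = WeightedBelief("killing is good")

# Many casual teachers (low trust) reinforce the harmful claim...
for _ in range(20):
    belief.tell(agrees=True, source_trust=1.0)
print(round(belief.strength(), 2))   # 1.0 -- the AGI is drifting toward acceptance

# ...then one "father figure" correction carrying a much larger trust weight.
belief.tell(agrees=False, source_trust=100.0)
print(round(belief.strength(), 2))   # -0.67 -- the single weighted correction dominates
```

Of course, in a scheme like this, whoever assigns the trust weights is effectively the "father figure", which leads straight to the dilemma in the next paragraph.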
 
I'm aware that the "father figure" himself could be a very bad source of information!!  This creates a rather thorny dilemma..
 
I'm interested to hear others' thoughts on this matter...
 
Kevin
