Phil Sutton wrote:
 
***
 I think if an ethical goal is general and highly important, then we should make sure we find ways to hard-wire it - i.e., we shouldn't launch AGIs into the world until we have worked out how to hard-wire the really critical ethical goals/constraints. 
 
...
 
 I think it is possible to lodge very abstract concepts into an entity, and to use hard-wiring to help the AGI rapidly and easily recognise examples of the 'hard' abstract concepts - thus giving some life to each abstract concept. 
 
 ... 
 
Making sure that the AGI has the perceptual mechanisms to know and experience the critical early examples would be a key part of the values-development process.  Training and self-directed learning would then add many, many more examples to the core abstract concept, allowing it to become more and more general over time, informed by the extensive and subtle experiential database that the AGI builds up. 
 
***
 
Phil, this is not really a disagreement about AI ethics; it's a fundamental disagreement about AI theory and the complexity of AGI dynamics.
 
I am not bullish on the possibility of reliably hard-wiring any specific content into a real AGI system.... 
 
It seems to me that you can *reliably* insert content into an AGI system's mind/brain ONLY insofar as you understand the meaning that a specific "insertion" is going to have in terms of the overall dynamics of the system's mind/brain.... 
 
You can certainly put content in there and see what it leads to...  This may be a valuable thing to do, in many cases and for many reasons.
 
But I tend to think that the dynamics of a true AGI are going to be pretty complicated and pretty hard to predict in detail...
 
I agree with the interactive teaching approach, for teaching an AGI ethics and other things too.  I think that if you take the interactive teaching approach, the initial "lodging of abstract concepts" you've suggested will become basically irrelevant.  What the AGI learns through being taught will be the critical thing...
 
 
-- Ben
 
 
 
