I don't think any human alive has the moral and ethical underpinnings to allow them to 
resist the corruption of absolute power in the long run.  We are all kept in check by 
our lack of power, the competition of our fellow humans, the laws of society, and the 
instructions of our peers.  Remove a human from that support framework and you will 
have a human that will warp and shift over time.  We are designed to exist in a social 
framework, and our fragile ethical code cannot function properly in a vacuum.

This says two things to me.  First, we should try to create friendly AIs.  Second, we 
have no hope of doing it.  

We will forge ahead anyway because progress is inevitable.  We'll do as good a 
job as we can.  At some point humans will be obsolete, but that's no reason to turn 
back.  

I'm also a strong proponent of the idea that humans can be made much better with the 
addition of enhancements, first through external add-ons (gargoyle-type apparatuses 
which enhance our minds through UIs that are as intuitively useful as a hammer), and 
later through direct enhancement of our brains.  

In summary, I think we are getting ahead of ourselves in thinking we even have the 
capacity to predict what a "friendly AI" will be, especially if said AI is 
hyperintelligent and self-modifying.  

-Brad
  