> 
> There are simple external conditions that provoke protective tendencies in 
> humans following chains of logic that seem entirely natural to us.  Our 
> intuition that reproducing these simple external conditions serves to 
> provoke protective tendencies in AIs is knowably wrong, failing an 
> unsupported specific complex miracle.

Well said.
> 
> Or to put it another way, you see Friendliness in AIs as pretty likely 
> regardless, and you think I'm going to all these lengths to provide a 
> guarantee.  I'm not.  I'm going to all these lengths to create a 
> *significant probability* of Friendliness.
> 

You're mischaracterizing my position.  I'm certainly not saying we'll get Friendliness 
for free, but I was trying to reason by analogy (perhaps in a flawed way) that our best 
chance of success may be to model AGIs on our innate tendencies wherever 
possible.  Human behavior is a knowable quality.

I perceived, based on the character of your discussion, that you would be unsatisfied 
with anything short of a formal, mathematical proof that any given AGI would not 
destroy us before assenting to turning it on.  If that characterization was 
incorrect, the fault is mine.


-Brad
