I would assume that any builder of an AI system would unconsciously 
build in his own belief system and, understandably, his biases.

This being seed AI, over time those biases might mutate, depending on the 
designer's influence and tolerance toward wherever the AI system might be 
directed or redirected.
 
Dan Goe



----------------------------------------------------
From: James Ratcliff <[EMAIL PROTECTED]>
To: [email protected]
Subject: Re: [agi] Friendly AI in an unfriendly world... AI to the future societies.... Four axioms (WAS Two draft papers . . . .)
Date: Mon, 12 Jun 2006 06:52:51 -0700 (PDT)
> AIs only have the instincts and goals that they are built with.  They
> are not, e.g., inherently territorial.  They do not even inherently want
> to survive.  If you want them to have the goal of surviving, then you
> must include that among their goals.  You appear to be presuming that an
> FAI would have the same goals and purposes that you do.
>
> OTOH, we are going to want any AI that we create to understand us.  This
> means that the AI will need some way of modeling our goals, purposes,
> etc.  This implies that the AI will be able to EMULATE a goal structure
> similar to ours.  There is, however, a tremendous difference between
> emulating a goal structure and operating off of it.
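
A minimal sketch of that emulate-versus-operate distinction, assuming a
toy Python agent (every name and value here is invented for
illustration, not anyone's actual architecture):

    def human_goal_model(state):
        # Emulated human goal structure: used only to PREDICT what a
        # human would prefer; it never drives the agent's own choices.
        return state.get("human_comfort", 0.0)

    def agent_objective(state):
        # The goals the agent actually operates off of.
        return state.get("task_progress", 0.0)

    def choose_action(actions, state, step):
        # step(state, action) -> successor state
        chosen = max(actions, key=lambda a: agent_objective(step(state, a)))
        predicted_human_pick = max(
            actions, key=lambda a: human_goal_model(step(state, a)))
        return chosen, predicted_human_pick

    actions = ["fast_route", "scenic_route"]

    def step(state, action):
        if action == "fast_route":
            return {"task_progress": 1.0, "human_comfort": 0.2}
        return {"task_progress": 0.5, "human_comfort": 0.9}

    print(choose_action(actions, {}, step))  # ('fast_route', 'scenic_route')

The two objectives can rank the same actions in opposite orders;
emulation only requires that the human model be queryable, not that it
be obeyed.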
> 
> The more I look and think about these types of statements, the harder I
> think it is to create an AGI this way.
> 1.  The AGI will of course be modeled much more closely on us than you
> believe.  It will need to have survival as a major goal, just so it
> won't spend too many iterations walking off a cliff, and so that it can
> optimize the routes and activities that keep it in good working order;
> this WILL weigh very heavily in the equation (a toy weighting sketch
> follows below).
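
A minimal sketch of such a goal weighting, in Python; the goal names and
weights are invented for illustration:

    GOAL_WEIGHTS = {"survival": 10.0, "task": 1.0}

    def utility(outcome):
        # outcome maps goal names to scores in [0, 1]
        return sum(w * outcome.get(g, 0.0) for g, w in GOAL_WEIGHTS.items())

    risky = {"task": 1.0, "survival": 0.0}  # fast, but walks off the cliff
    safe  = {"task": 0.6, "survival": 1.0}  # slower, stays in working order

    assert utility(safe) > utility(risky)

With survival weighted that heavily, plans that keep the machine intact
dominate faster but destructive ones.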
> 
> 2.  The second part becomes tricky because I believe that, on one end,
> it will be 'easier' to create an AGI that acts as a human than it will
> be to create one that can understand our goals.  I keep picturing in my
> mind a min/max perfect-architect AI that develops the perfect use of
> space in an office and designs everything to optimize workflow and such.
> And it doesn't add any restrooms: because it doesn't need restrooms, it
> never considers adding them to be optimal (see the sketch below).  I
> know this is not the best example, but the point is that we have to
> model the robots to be intelligent and then, on a separate sidetrack,
> explain to them how people act.  It may be simpler to model them as
> people.
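
A toy illustration of the architect example, assuming made-up
floor-space numbers; the point is only that the optimizer adds restrooms
exactly when the objective mentions them:

    AREA = 500           # total floor space, m^2 (invented)
    DESK_AREA = 5        # m^2 per workstation (invented)
    RESTROOM_AREA = 20   # m^2 per restroom (invented)

    def best_layout(restrooms_required):
        # Maximize workstations in the remaining space; restrooms appear
        # only if the requirement is stated in the objective.
        usable = AREA - restrooms_required * RESTROOM_AREA
        return {"desks": usable // DESK_AREA,
                "restrooms": restrooms_required}

    print(best_layout(restrooms_required=0))  # 'optimal', but no restrooms
    print(best_layout(restrooms_required=2))  # fewer desks, usable by people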

-------
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
