maitri wrote:
> I agree with your ultimate objective, the big question is *how* to do it.
> What is clear is that no one has any idea that seems to be guaranteed to
> work in creating an AGI with these qualities. We are currently resigned
> to "let's build it and see what happens." Which is quite scary for some,
> although not me (this is subject to change).

Not everyone, maitri. Though it does seem, sadly, to be a popular sentiment. I would also strongly caution you against use of the word "guarantee", as under many theories of rationality, including mine, there is simply no such thing as a justified confidence of 1.0 - not even in mathematical truths. Your phrasing suggests that if Friendliness cannot be absolutely guaranteed, then one might as well go ahead and build an AI with no theory of Friendliness at all.
Anyway, your post seems to imply that nobody is working out a detailed theory in advance. This is not correct. "Friendly AI" was originally coined in a 900KB publication on the subject that laid out fundamental architectural considerations and tried to describe at least some of the content and development strategies. I haven't yet written up the last one and a half years' worth of enormous improvements on that original document, but I've gone on developing the model.
With luck, by the time anyone needs it, the theory will be there.
Of course, realizing that you need a theory is a separate problem. About all I can do on that score is hope that anyone who hasn't gotten far enough theoretically to realize this also won't get very far on AGI implementation.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
