Kevin Copple wrote:
> Perhaps I am wrong, but my impression is that the talk here about
> AGI sense
> of self, AGI friendliness, and so on is quite premature.

Attitudes on that vary, I think...

I know that many AGI researchers agree with you, and think such issues are
best deferred until after some decent AGIs exist.

Eliezer Yudkowsky radically disagrees with you, and advocates devoting a
huge amount of effort to dealing with AI Friendliness *now*, prior to the
creation of robust experimental AGIs.

I find myself somewhere in between -- I consider the issues worth thinking
about in depth now, but not worth more than, say, 10% of my time as an AI
researcher.

-- Ben G
