On Thu, Nov 6, 2008 at 12:55 AM, Harry Chesley <[EMAIL PROTECTED]> wrote:

>>  Personally, I'm not making an AGI that has emotions...
>
> So you take the view that, despite our minimal understanding of the basis of
> emotions, they will only arise if designed in, never spontaneously as an
> emergent property? So you can safely ignore the ethics question.

Well, my AGI system would take special measures to ensure that
emotions do *not* emerge: it would acquire *knowledge* of human
values as declarative content, rather than having emotions arise at
the AGI's *perceptual* level.
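
Roughly, the distinction I have in mind, as a minimal Python sketch
(purely illustrative; the class names and value entries are
hypothetical, not an actual design): values sit in a declarative
knowledge base that the deliberative layer queries, while percepts
carry no affective tags at all.

from dataclasses import dataclass

@dataclass
class Percept:
    content: str  # raw observation; deliberately no emotional tagging

class ValueKnowledgeBase:
    """Declarative statements about human values, queried like any
    other piece of knowledge (entries here are hypothetical)."""
    def __init__(self):
        self.rules = {"deception": -1.0, "honesty": +1.0}

    def evaluate(self, action: str) -> float:
        return self.rules.get(action, 0.0)

def deliberate(percept: Percept, kb: ValueKnowledgeBase, actions):
    # The perceptual layer passes percepts through untouched; value
    # judgments enter only here, at the knowledge/deliberation level.
    return max(actions, key=kb.evaluate)

if __name__ == "__main__":
    kb = ValueKnowledgeBase()
    p = Percept("user asked a question")
    print(deliberate(p, kb, ["deception", "honesty"]))  # -> honesty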

YKY

