YKY: I just want to point out that
AGI-with-emotions is not a necessary goal of AGI.
Which AGI problems, as distinct from narrow-AI problems, do *not* involve
*incalculable and possibly unmanageable risks*?
a) risks that the process of problem-solving will be interminable?
b) risks that the agent does not have the skills necessary for the problem's
solution?
c) risks that the agent hasn't defined the problem properly?
That's what the emotion of fear is (one of the emotions essential for
AGI): a system alert to incalculable and possibly unmanageable risks.
That's what the classic fight-or-flight response entails: "maybe I can deal
with this danger, but maybe I can't and had better avoid it fast."
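The fight-or-flight heuristic above could be sketched as a crude decision rule. This is purely illustrative; the function name, thresholds, and the use of `None` to model an incalculable risk are all invented for the sketch, not drawn from any AGI design:

```python
# Illustrative sketch: fear as a "system alert" to incalculable or
# unmanageable risks. All names and thresholds here are hypothetical.

from typing import Optional

def fight_or_flight(estimated_risk: Optional[float], capacity: float) -> str:
    """Return 'engage' or 'avoid' from a crude risk comparison.

    estimated_risk: None models an incalculable risk (no estimate possible);
    capacity: the level of risk the agent believes it can manage.
    """
    if estimated_risk is None:        # incalculable risk -> alert, avoid
        return "avoid"
    if estimated_risk > capacity:     # possibly unmanageable risk -> avoid
        return "avoid"
    return "engage"                   # "maybe I can deal with this danger"

print(fight_or_flight(None, 0.5))    # incalculable -> avoid
print(fight_or_flight(0.9, 0.5))     # unmanageable -> avoid
print(fight_or_flight(0.2, 0.5))     # manageable -> engage
```

The point of the sketch is only that fear-like behavior can be read as a fast default to avoidance whenever risk cannot be estimated or exceeds capacity.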