YKY,

As I was saying, before I so rudely interrupted myself - re the narrow AI vs AGI problem difference:

*The syllogistic problems of logic - "is Aristotle mortal?" etc. - which you mainly use as examples - are narrow AI problems, which can be solved according to precise rules.
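To be concrete about "precise rules": the whole problem here is one mechanical rule application. A toy sketch in Python (the premises are hard-coded purely for illustration, not a general reasoner):

    # Barbara syllogism by mechanical rule application - a toy sketch.
    is_a = {"Aristotle": "man"}       # Aristotle is a man
    all_are = {"man": "mortal"}       # all men are mortal

    def infer(subject):
        # The single precise rule: X is a P, and all P are Q, so X is Q.
        category = is_a.get(subject)
        return all_are.get(category)

    print("Is Aristotle mortal?", infer("Aristotle") == "mortal")  # True, by rule alone

No judgement required, and no fear - the rules decide.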

however:

*Metacognitive problems - like "which logic should I use for syllogistic problems, e.g. PLN or NARS?" (which also concerns you) - are AGI problems; there are no rules for solving them, and no definitive solutions, only possible, temporary resolutions to someone's satisfaction. Those are problems which you have been discussing and could continue to discuss interminably. And they are also problems which you will fear considering - and which any agent considering them should fear - because you can get endlessly bogged down in them.

[N.B. psychologically, fear comes in many different degrees, from panic to mild wariness]

similarly:

*Is cybersex sex? (another of your problems) - if treated by some artificial logic with artificial rules (which might end up saying "yes, sex to degree 0.60"), it is a narrow AI problem; however, if treated realistically, *philosophically*, relying on language, it is an AGI problem, which can be and may well be considered interminably by real philosophers (and lawyers) into the next century (*did* Clinton have sex?), and for which there are neither definitive rules nor solutions. Again, fear is, and has to be, a part of considering such problems - how much of your life do you have to spend on them? Even the biggest computer brain in the world, the superest AGI, will not be able to solve them definitively, and must be afraid of them.
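For contrast, here is what the "artificial rules" treatment might look like - a toy sketch with entirely made-up features and weights, only to show that such a closed system reaches its "0.60" verdict and terminates by construction:

    # A made-up "artificial logic" verdict - the features and weights are
    # invented for illustration, not a real analysis of the question.
    features = {"physical_contact": 0.0, "arousal": 1.0, "mutual_intent": 1.0}
    weights  = {"physical_contact": 0.4, "arousal": 0.3, "mutual_intent": 0.3}

    def degree(features, weights):
        # A fixed weighted sum: the rules are closed, so the answer is too.
        return sum(weights[k] * v for k, v in features.items())

    print("cybersex is sex to degree %.2f" % degree(features, weights))  # 0.60

The philosophical question, of course, is whether those features and weights mean anything - and that question is the AGI problem all over again.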

ditto:

*Any philosophical problem of definition: what is mind? What is consciousness? What is intelligence? Again, these are infinitely open-ended, open-means problems, which have attracted and will continue to attract interminable consideration. You are, and should be, afraid of "getting too deep into them".

*Any linguistic problem of definition - what do "honour", "beautiful", "big", "small", etc. mean? - is an AGI problem. AFAIK literally any word in the language is open to endless definition and redefinition, and is essentially an AGI problem. By contrast, "what is ETFUBAIL an anagram of?" is a narrow AI problem - and there is no need for any fear there.
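The anagram problem really is closed: sort the letters and look them up. A minimal sketch, assuming a plain word list at /usr/share/dict/words (that path is an assumption - any dictionary file would do):

    from collections import defaultdict

    def signature(word):
        # Two words are anagrams iff their sorted letters match.
        return "".join(sorted(word.lower()))

    # Index every dictionary word by its letter-signature.
    index = defaultdict(list)
    with open("/usr/share/dict/words") as f:
        for word in f:
            word = word.strip()
            if word.isalpha():
                index[signature(word)].append(word)

    print(index.get(signature("ETFUBAIL"), []))  # a definite answer, possibly empty

Closed rules, a definite (possibly empty) answer, and nothing to be afraid of.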

*Defining/describing almost anything - "describe YKY or Ben Goertzel; what kind of guys/programmers are they?" - is an AGI problem. You could consider it forever. You may be skilled at resolving such problems quickly, and able to come up with a brief description, but that description, while perhaps "satisfactory", will never do the subject even remotely perfect justice, and could be endlessly improved and made more sophisticated.

In general, your instinct - and most AGI-ers' instinct - seems to be, whenever confronted with an AGI problem, to try and reduce it to a narrow AI problem - from a real, open-ended, open-means-and-rules problem to an artificial, closed-ended, closed-means-and-rules one. Then, yes, you don't need fear and other emotions, but that's not AGI.


YKY: I just want to point out that AGI-with-emotions is not a necessary goal of AGI.

Which AGI problems, as distinct from narrow AI problems, do *not* involve *incalculable and possibly unmanageable risks*? -

a) risks that the process of problem-solving will be interminable?
b) risks that the agent does not have the skills necessary for the problem's solution?
c) risks that the agent hasn't defined the problem properly?

That's what the emotion of fear is (one of the emotions essential for AGI) - a system alert to incalculable and possibly unmanageable risks. That's what the classic fight-or-flight response entails: "maybe I can deal with this danger, but maybe I can't, and I'd better avoid it fast."
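Functionally, you could caricature that alert as a resource watchdog - a toy sketch (the time budget and the sample problems are invented for illustration, not a claim about any real AGI design), showing an agent that flees a problem it cannot bound rather than deliberating forever:

    import time

    # A toy "fear" alert: a resource watchdog on deliberation.
    def consider(problem, candidates, budget_seconds=1.0):
        start = time.monotonic()
        for answer in candidates(problem):
            # Flight: abandon when the budget signals unbounded risk.
            if time.monotonic() - start > budget_seconds:
                return "FLIGHT: risk looks unmanageable, avoid it fast"
            # Fight: commit to an answer found within budget.
            if answer is not None:
                return "FIGHT: solved -> %s" % answer
        return "no answer in the search space"

    def narrow(problem):       # closed rules: terminates
        yield "".join(sorted(problem))

    def open_ended(problem):   # endless redefinition: never settles
        while True:
            yield None

    print(consider("etfubail", narrow))           # FIGHT
    print(consider("what is mind?", open_ended))  # FLIGHT, after ~1 second

The point is only the shape of the mechanism: the alert fires on incalculable risk, not on any particular content.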



