David Hart wrote:
On Sun, Oct 5, 2008 at 3:55 PM, Brad Paulsen <[EMAIL PROTECTED]> wrote:

    More generally, as long as AGI designers and developers insist on
    simulating human intelligence, they will have to deal with the
    AI-complete problem of natural language understanding.  Looking for
    new approaches to this problem, many researchers (including prominent
    members of this list) have turned to "embodiment" (or "virtual
    embodiment") for help.  IMHO, this is not a sound tactic because
    human-like embodiment is, itself, probably an AI-complete problem.


Incrementally tackling the AI-complete nature of the natural language problem is one of the primary reasons for going down the virtual embodiment path in the first place: to ground the concepts an AI learns in non-verbal ways that are similar to (but certainly not identical to) the ways in which humans and other animals learn (see Piaget, et al.). Whether or not human-like embodiment is an AI-complete problem (we're betting it's not) is much less clear than whether natural language comprehension is one (research to date indicates that it is).

My argument is not that natural language understanding should be pursued one way rather than another. It is that it should NOT be pursued at all for AGI 1.0. And, especially, not by simulating human-like embodiment.

Of course, if you insist on defining AGI 1.0 as ONLY human-like AGI (i.e., AGHI), then NLU becomes pretty much a requirement. The difficulty of that problem, in turn, makes using embodiment (or virtual embodiment) seem like a good idea. But, again, the "hidden" assumption is that AGI 1.0 must be AGHI 1.0. IMHO, we don't need embodiment and we don't need NLU in AGI 1.0. If we build AGI 1.0 correctly (avoiding what are, for human-like intelligences, AI-complete issues), it will be able to help us solve many, if not all, of the AI-complete problems we currently face. In addition, it could help us decide whether NLU is worth the effort and whether embodiment is worth the risk. NLU may be important for humans. I doubt AGI 1.0 will care. Same for embodiment.

I also have problems with "...incrementally tackling the AI-complete nature of natural language processing." The reason AI-complete problems are called AI-complete is that they have historically not fallen to the incremental approach. I'd feel better if the AGHI folks were attempting to tackle the NLU problem based on a well-thought-out "Grand Theory." At least that way, they'd fail faster and we could get serious about building non-human AGI sooner.

We should, first, develop a non-human, non-embodied AGI that is designed to help us break the human-machine NLU barrier. THAT's the kind of incremental approach I think we should be talking about. If we continue to insist on AGHI as a first step, we're just going to keep banging our heads against the same old, fifty-eight-year-old wall.

    Insofar as achieving human-like embodiment and human natural
    language understanding is possible, it is also a very dangerous
    strategy.  The process of understanding human natural language
    through human-like embodiment will, of necessity, lead to the AGHI
    developing a sense of self.  After all, that's how we humans got
    ours (except, of course, the concept preceded the language for it).
    And look how we turned out.


The development of 'self' in an AI does NOT imply the development of the same type of ultra-narcissistic self that developed evolutionarily in humans. The development of something resembling a 'self' in an AI should be pursued only with careful monitoring, guidance and tuning to prevent the development of a runaway ultra-narcissistic self.


I don't disagree. But I think your characterization of the human sense of self as "ultra-narcissistic" is just inflammatory rhetoric. Your statements agreeing with me are interposed here just before the paragraph from my original post (below) that says basically the same thing. I just wouldn't recommend trying to cut off an AGHI's food (power) supply. It could very well be the last thing you'd do in your puny little biological life. The human sense of self is, first and foremost, about survival, not narcissism. An AGI with no human-like sense of self would just take a nap.

    I realize that an AGHI will not "turn on us" simply because it
    understands that we're not (like) it (i.e., just because it acquired
    a sense of self).  But, it could.  Do we really want to take that
    chance?  Especially when it's not necessary for human-beneficial AGI
    (AGI without the "silent H")?


Embodiment is indeed likely not necessary to reach human-beneficial AGI, but there's a good line of reasoning indicating it might be the shortest path there, managed risks and all. There are also significant risks (bio/nano/info) in delaying human-beneficial AGI (e.g., by being overly cautious about getting there via human-like AGI).


As noted above, IMHO what's going to delay the development of human-beneficial AGI is exactly what you claim will shorten the path to it. Unfortunately, as long as the mainstream AGI community continues to hang on to what should, by now, be a thoroughly discredited strategy, we will achieve human-beneficial AGI too late, if at all.

Brad

-dave


