On Sun, Oct 5, 2008 at 3:55 PM, Brad Paulsen <[EMAIL PROTECTED]> wrote:

> More generally, as long as AGI designers and developers insist on
> simulating human intelligence, they will have to deal with the AI-complete
> problem of natural language understanding.  Looking for new approaches to
> this problem, many researchers (including prominent members of this list)
> have turned to "embodiment" (or "virtual embodiment") for help.  IMHO, this
> is not a sound tactic because human-like embodiment is, itself, probably an
> AI-complete problem.
>

Incrementally tackling the AI-complete nature of the natural language
problem is one of the primary reasons for going down the virtual embodiment
path in the first place: to ground the concepts an AI learns in non-verbal
ways that are similar to (but certainly not identical to) the ways in which
humans and other animals learn (see Piaget, et al.). Whether or not
human-like embodiment is an AI-complete problem (we're betting it's not) is
much less clear than whether natural language comprehension is an
AI-complete problem (research to date indicates that it is).

Insofar as achieving human-like embodiment and human natural language
> understanding is possible, it is also a very dangerous strategy.  The
> process of understanding human natural language through human-like
> embodiment will, of necessity, lead to the AGHI developing a sense of self.
>  After all, that's how we humans got ours (except, of course, the concept
> preceded the language for it).  And look how we turned out.
>

The development of a 'self' in an AI does NOT imply the emergence of the
same ultra-narcissistic kind of self that evolved in humans. Something
resembling a 'self' in an AI should be developed only with careful
monitoring, guidance and tuning to prevent it from becoming a runaway
ultra-narcissistic self.

I realize that an AGHI will not "turn on us" simply because it understands
> that we're not (like) it (i.e., just because it acquired a sense of self).
>  But, it could.  Do we really want to take that chance?  Especially when
> it's not necessary for human-beneficial AGI (AGI without the "silent H")?
>

Embodiment is indeed likely not necessary to reach human-beneficial AGI, but
there's a good line of reasoning indicating it might be the shortest path
there, managed risks and all. There are also significant risks
(bio/nano/info) in delaying human-beneficial AGI, e.g., by being overly
cautious about getting there via human-like AGI.

-dave


