Well, having an intuitive understanding of human language will be useful for
an AGI even if its architecture is profoundly nonhumanlike.  And human
language is meant to be interpreted against a background of social and
spatiotemporal experience.  So the easiest way to get an AGI to grok human
language is very likely to give it an embodiment in a world somewhat like the
one in which we live.  This does not imply the AGI must think in a thoroughly
humanlike manner.

-- Ben G




>
> Grounding is a potential problem IFF your AGI is, actually, an AGHI, where
> the H stands for Human.  There's nothing wrong with borrowing the good
> features of human intelligence, but an uncritical aping of all aspects of
> human intelligence just because we think highly of ourselves is doomed.  At
> least I hope it is. Frankly, the possibility of an AGHI scares the crap out
> of me.  Personally, I'm in this to build an AGI that is about as far from a
> human copy (with or without improvements) as possible.  Better, faster, less
> prone to breakdown.  And, eventually, a whole lot smarter.
>
> We don't need no stinkin' grounding.
>
> Cheers,
>
> Brad
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome." - Dr. Samuel Johnson


