--- On Fri, 8/29/08, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> I don't see why an un-embodied system couldn't successfully use the
> concept of self in its models. It's just another concept, except that
> it's linked to real features of the system.

To an unembodied agent, the concept of self is indistinguishable from any other 
"concept" it works with. I put concept in quotes because to the unembodied 
agent it is not a concept at all, but merely a symbol with no semantic content 
attached. All such an agent can do is perform operations on ungrounded symbols; 
at best, the results of those operations can appear intelligent within some 
narrow domain (e.g., a chess program).
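To make the point concrete, here is a minimal, hypothetical sketch (the tokens and rules are invented for illustration): the program "answers questions about itself" purely by table lookup over tokens, and nothing changes if "self" is renamed to an arbitrary symbol.

```python
# A toy ungrounded-symbol manipulator: it maps (symbol, symbol) pairs
# to answer symbols, with no semantics attached to any of them.
RULES = {
    ("self", "location"): "room_7",
    ("self", "status"): "ok",
}

def query(subject, attribute):
    # The agent just looks up a pair of tokens; "self" is not special
    # in any way -- it is only another key in the table.
    return RULES.get((subject, attribute), "unknown")

# Renaming "self" to an arbitrary token leaves the behavior identical,
# which is the sense in which the symbol carries no semantic content.
RENAMED = {
    (("sym_42" if s == "self" else s), a): v
    for (s, a), v in RULES.items()
}
```

The renaming step is the crux: because the system's behavior is invariant under an arbitrary relabeling of "self", the symbol is doing no semantic work.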

> Even though this particular AGI never heard about any of those other
> tools being used for cutting bread (and is not self-aware in any
> sense), it still can (when asked for advice) make a reasonable
> suggestion to try the "T2" (because of the similarity) = coming up
> with a novel idea & demonstrating general intelligence.

Sounds like magic to me. You're taking something that we humans can do and 
sticking it into a hugely simplified agent as a black box, in a way that 
imparts no understanding of how we actually do it. Maybe you left that part 
out for brevity - care to elaborate?

Terren

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com
