Terren,

>to the unembodied agent, it is not a concept at all, but merely a symbol with 
>no semantic context attached

It's an issue when trying to learn from NL only, but you can inject
semantics (critical for grounding) when teaching through a
formal-language[-based] interface, get the thinking algorithms
working, and possibly focus on NL-to-formal-language conversion later.
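
To make that a bit more concrete, here is a tiny sketch (Python, with
made-up structure and field names, not any real system) of the same
fact given as a bare NL string vs. as a formal-language assertion
whose typed slots carry semantics the learner can query directly:

nl_input = "A knife can cut bread."

formal_input = {
    "relation": "can_perform",
    "agent":    {"concept": "knife", "type": "tool"},
    "action":   {"concept": "cut",   "type": "action"},
    "patient":  {"concept": "bread", "type": "material"},
}

# Unlike the NL string, the formal form exposes typed roles to reason over.
assert formal_input["action"]["concept"] == "cut"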

>To an unembodied agent, the concept of self is indistinguishable from any 
>other "concept" it works with.

An AGI should be able to use tools (external/internal applications),
and it can learn to view itself (or just some of its modules) as its
tool(s).
You can design an interface [possibly just for advanced users] for
mapping learned concepts/actions to the interfaces of available tools.
Just like it can learn how to use a command-line calculator, it can
learn how to use itself as a tool. Then it can learn that an alias to
use for "that tool" is "I"/"Me".
By design, it can also clearly distinguish between using a particular
tool "in theory" and "in practice".

> All such an agent can do is perform operations on ungrounded symbols - at 
> best, the result of which can appear to be intelligent within some domain 
> (e.g., a chess program).

You can ground when using semantics-supporting input formats. I don't
see why it would have to be specific to a single domain. You can use
very general data-representation structures and fill them with data
from many domains. You "just" have to get the KR right (unlike CYC).
Easy to say, I know, but I don't see a good reason why it couldn't (in
principle) work, and I'm working on figuring that out.
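
As one (deliberately oversimplified) illustration of a representation
that isn't tied to a single domain, take plain subject-relation-object
triples filled with facts from several domains; the structure and the
toy query below are assumptions, not a claim that this is the right KR:

facts = [
    ("knife",   "used_for",   "cutting_bread"),   # household domain
    ("scalpel", "used_for",   "cutting_tissue"),  # medical domain
    ("sell",    "inverse_of", "buy"),             # commerce domain
]

def query(relation):
    # return all (subject, object) pairs linked by the given relation
    return [(s, o) for s, r, o in facts if r == relation]

print(query("used_for"))  # facts from different domains, same structure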

>> Even though this particular AGI never heard about any of those
>> other tools being used for cutting bread (and is not self-aware in
>> any sense), it still can (when asked for advice) make a reasonable
>> suggestion to try the "T2" (because of the similarity) = coming up
>> with a novel idea & demonstrating general intelligence.
>
> Sounds like magic to me. You're taking something that we humans can do and 
> sticking it in as a black box into a hugely simplified agent in a way that 
> imparts no understanding about how we do it.  Maybe you left that part out 
> for brevity - care to elaborate?

It must sound "like magic" if you assume there is "no semantic context
attached", but that doesn't have to be the case. With the right
teaching methods, the system gets semantics, can build models, and can
apply knowledge learned from scenario1 to scenario2 in unique ways.
What do I mean by the "right teaching methods"? For example, when
learning an "action concept" (e.g. "buy"), it needs to grasp [at least
some of] the roles involved (e.g. "seller", "buyer", "goods", "price",
..) and how the relationships between the role-players change in the
relevant stages. You can design a user-friendly interface for teaching
the system in meaningful ways so it can later think using queryable
models and understand relationships [and their changes] between
concepts, etc... Sorry about the brevity (busy schedule).
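
A quick hedged sketch of what such a taught "action concept" might
look like internally (the structure and relation names are
illustrative assumptions only): named roles plus how the relations
between the role-players change from one stage to the next, which the
system can then query:

buy_concept = {
    "roles": ["buyer", "seller", "goods", "price"],
    "stages": {
        "before": [("seller", "owns", "goods"),
                   ("buyer",  "owns", "price")],
        "after":  [("buyer",  "owns", "goods"),
                   ("seller", "owns", "price")],
    },
}

def what_changed(concept):
    # compare the relation sets between the two stages of the action
    before = set(concept["stages"]["before"])
    after = set(concept["stages"]["after"])
    return {"gained": after - before, "lost": before - after}

print(what_changed(buy_concept))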

Regards,
Jiri Jelinek

PS: we might be slightly off-topic in this thread.. (?)

