On Sun, Oct 19, 2008 at 5:23 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> I agree that understanding is the process of integrating different models,
> different meanings, different pieces of information as seen by your
> model. But this integrating is just matching, not extending your own model
> with new entities. You only match linguistic entities of received
> linguistically represented information with existing entities of your model
> (i.e. with some of your existing patterns). If you can manage the matching
> process successfully, then you have understood the linguistic message.
>
> Natural communication and language understanding are completely comparable
> with common processes in computer science. There is an internal data
> representation. A subset of this data is translated into a linguistic string
> and transferred to another agent, which retranslates the message before it
> possibly, but not necessarily, changes its database.
>
> The only reason why natural language understanding is so difficult is
> that it needs a lot of knowledge to resolve ambiguities, knowledge which
> humans usually gain via their own experience.
>
> But from being able to resolve the ambiguities and carry out the matching
> process successfully alone, you will know nothing about the creation
> of patterns or about how to work intelligently with these patterns.
> Therefore communication is separated from these main problems of AGI in the
> same way as communication is completely separated from the structure and
> algorithms of a computer's database.
>
> Only the process of *learning* such communication would be AI (I am not
> sure if it is AGI). But you cannot learn to communicate if there is nothing
> to communicate. So every approach towards AGI via *learning* language
> understanding will need at least one further domain for the content of the
> communication. Probably you need even more domains, because the linguistic
> ambiguities can be resolved only with broad knowledge.
>
> And this is my point when I say that language understanding would incur costs
> which are not necessary. We can build AGI just by concentrating all efforts
> on a *single* domain with very useful properties (e.g. the domain of
> mathematics).
> This would avoid the immense costs of simulating real worlds and of
> concentrating on *at least two* domains at the same time.
>

I think I see what you are trying to communicate. Correct me if I got
something wrong here.
You assume a certain architecture for the AIs in question when you
describe this interpretation of the communication process.
Basically, AI1 communicates with AI2, and they both work with two
domains: D and L, D being the internal domain and L being the communication
domain, the stuff that gets sent via e-mail. AI1 translates meaning D1
into message L1, which is transferred as L2 to AI2, which then
translates it into D2. You call the step L2->D2 "understanding" or
"matching", also assuming that this process doesn't need to change
AI2, to make it change its model, to learn. You then suggest that L
doesn't need to be natural language, since the D for natural language is
the real world, the most difficult domain, and that we should instead pick
an easier L and D and work on their interplay.
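To make sure I'm reading the model the same way you intend it, here is a
toy sketch of it. Nothing here is from your email; the tables and the
encode/decode names are illustrative assumptions, with D reduced to a set
of facts and L to a fixed lookup between facts and strings:

```python
# Toy model of the D<->L communication picture: AI1 encodes an internal
# fact (D1) into a message (L1), the channel delivers it unchanged (L2),
# and AI2 "understands" by matching it against its own model (D2).
# All names and tables here are hypothetical illustrations.

WORLD_TO_LANG = {
    ("sky", "color"): "the sky is blue",
    ("water", "state"): "the water is liquid",
}
LANG_TO_WORLD = {v: k for k, v in WORLD_TO_LANG.items()}

def encode(fact):
    """AI1: translate internal meaning D1 into message L1."""
    return WORLD_TO_LANG[fact]

def decode(message, model):
    """AI2: match incoming message L2 against its own model.

    'Understanding' in this picture is just successful matching;
    the model itself is not changed by the process.
    """
    fact = LANG_TO_WORLD.get(message)
    return fact if fact in model else None

ai2_model = {("sky", "color"), ("grass", "color")}
msg = encode(("sky", "color"))     # D1 -> L1; channel is lossless, so L2 == L1
print(decode(msg, ai2_model))                     # ('sky', 'color')
print(decode("the water is liquid", ai2_model))   # None: no matching pattern
```

In this caricature all the difficulty you mention (resolving ambiguity
with broad knowledge) is hidden inside the fixed lookup tables, which is
exactly why the matching step alone looks trivial.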

If AI1 can already translate between D and L, AI2 might need to learn
to translate between L and D on its own, knowing only D at the start,
and you suggest this ability as the central challenge of intelligence.

I think that this model is overly simplistic, overemphasizing an
artificial divide between domains within the AI's cognition (L and D) and
externalizing the communication domain from the core of the AI. Both the
world model and the language model support interaction with the
environment; there is no clear cognitive distinction between them. As a
given, interaction happens at a narrow I/O interface, and anything beyond
that is a design decision for a specific AI (even the invariability of the
I/O is a simplifying assumption, one that complicates the semantics of
time and more radical self-improvement). A sufficiently flexible cognitive
algorithm should be able to integrate facts about any "domain", becoming
able to generate appropriate behavior in the corresponding contexts.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


