I agree that understanding is the process of integrating different models,
different meanings, different pieces of information as seen by your
model. But this integrating is just matching, not extending your own
model with new entities. You only match the linguistic entities of the
received, linguistically represented information against existing
entities of your model (i.e. against some of your existing patterns). If
you manage this matching process successfully, then you have understood
the linguistic message.
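
To make the matching picture concrete, here is a minimal sketch in
Python (all names, such as KNOWN_PATTERNS and match_message, are
invented for illustration): received tokens are mapped onto
pre-existing internal patterns, and nothing new is ever added to the
model.

  # Hypothetical sketch: "understanding" as matching tokens to existing
  # patterns. No new entities are added; unknown tokens simply fail to match.

  KNOWN_PATTERNS = {          # the receiver's pre-existing internal model
      "dog":   "concept:canine",
      "barks": "concept:bark-event",
  }

  def match_message(tokens):
      """Map each linguistic entity onto an existing pattern, or fail."""
      matched = {}
      for token in tokens:
          if token not in KNOWN_PATTERNS:
              return None      # matching failed: message not understood
          matched[token] = KNOWN_PATTERNS[token]
      return matched           # matching succeeded: message "understood"

  print(match_message(["dog", "barks"]))  # {'dog': 'concept:canine', ...}
  print(match_message(["dog", "sings"]))  # None - no pattern for "sings"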

Natural communication and language understanding are entirely
comparable to common processes in computer science. There is an
internal data representation. A subset of this data is translated into
a linguistic string and transferred to another agent, which retranslates
the message before it possibly, but not necessarily, changes its
database.
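
This is just the familiar serialize/transmit/deserialize pattern. A
minimal sketch, assuming a toy Agent class whose names I have invented
for illustration:

  import json

  # Hypothetical sketch of the analogy: internal data -> linguistic
  # string -> retranslation -> optional database update at the receiver.

  class Agent:
      def __init__(self, database):
          self.database = database          # internal data representation

      def express(self, keys):
          """Translate a subset of the internal data into a string."""
          subset = {k: self.database[k] for k in keys}
          return json.dumps(subset)         # stands in for an utterance

      def receive(self, message):
          """Retranslate the message; possibly, but not necessarily,
          change the database."""
          for key, value in json.loads(message).items():
              if key not in self.database:  # accept only new facts
                  self.database[key] = value

  alice = Agent({"sky": "blue", "grass": "green"})
  bob = Agent({"sky": "blue"})
  bob.receive(alice.express(["grass"]))
  print(bob.database)   # {'sky': 'blue', 'grass': 'green'}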

The only reason why natural language understanding is so difficult is
that it needs a lot of knowledge to resolve ambiguities, knowledge which
humans usually gain through their own experience.
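
For example, resolving the lexical ambiguity of "bank" requires exactly
this kind of background knowledge. A toy sketch (the senses and the
scoring rule are invented for illustration):

  # Hypothetical sketch: ambiguity resolution as a knowledge lookup.
  # Each sense of "bank" carries context words gained from experience.

  SENSES = {
      "bank/finance": {"money", "loan", "account", "deposit"},
      "bank/river":   {"water", "fish", "shore", "boat"},
  }

  def disambiguate(senses, context):
      """Pick the sense whose associations overlap most with the context."""
      return max(senses, key=lambda s: len(senses[s] & set(context)))

  print(disambiguate(SENSES, ["he", "sat", "by", "the", "water"]))
  # -> bank/river
  print(disambiguate(SENSES, ["she", "opened", "an", "account"]))
  # -> bank/finance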

But merely from being able to resolve the ambiguities and to carry out
the matching process successfully, you will know nothing about the
creation of patterns or about how to work intelligently with these
patterns. Therefore communication is separated from these main problems
of AGI in the same way that communication is completely separated from
the structure and algorithms of a computer's database.

Only the process of *learning* such communication would be AI (I am not
sure if it would be AGI). But you cannot learn to communicate if there
is nothing to communicate about. So every approach towards AGI via
*learning* language understanding will need at least one further domain
for the content of the communication. Probably you need even more
domains, because the linguistic ambiguities can be resolved only with
broad knowledge.

And this is my point when I say that language understanding would incur
costs which are not necessary. We can build AGI by concentrating all
efforts on a *single* domain with very useful properties (i.e. the
domain of mathematics). This would avoid the immense costs of
simulating real worlds and of concentrating on *at least two* domains
at the same time.

-Matthias


Vladimir Nesov [mailto:[EMAIL PROTECTED]] wrote:

Sent: Sunday, 19 October 2008 12:59
To: [email protected]
Subject: [agi] Re: Meaning, communication and understanding

On Sun, Oct 19, 2008 at 11:58 AM, Dr. Matthias Heger <[EMAIL PROTECTED]>
wrote:
> The process of outwardly expressing meaning may be fundamental to any
> social intelligence, but the process itself does not need much
> intelligence.
>
> Every email program can receive meaning, store meaning, and express it
> outwardly in order to send it to another computer. It can even do so
> without loss of any information. Regarding this point, it already
> outperforms humans, who have no conscious access to the full meaning
> (information) in their brains.
>
> The only thing which needs much intelligence, from today's point of
> view, is the learning of the process of outwardly expressing meaning,
> i.e. the learning of language. The understanding of language itself is
> simple.
>

Meaning is tricky business. As far as I can tell, the meaning Y of a
system X is given by an external model that relates system X to Y
(where the meaning may be a physical object, or a class of objects,
with each individual object figuring into the model). Formal semantics
works this way (see http://en.wikipedia.org/wiki/Denotational_semantics ).
When you are thinking about an object, the train of thought depends on
your experience with that object, and will influence your behavior in
situations depending on information about that object. Meaning
propagates through the system according to the rules of the model; it
propagates inferentially in the model and not in the system, and so it
can reach places and states of the system that are not at all obviously
concerned with what this semantic model relates them to. And
conversely, meaning doesn't magically appear where the model doesn't
say it does: if the system is broken, the meaning is lost, at least
until you come up with another model and relate it to the previous one.
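
The denotational-semantics idea linked above can be made concrete: a
denotation function maps syntactic objects of the system onto meanings
in an external domain. A minimal sketch (the mini-language and the
function name are invented for illustration):

  # Hypothetical sketch of a denotation function: syntax (nested tuples)
  # is related to meaning (numbers) only by this external model, not by
  # the symbols themselves.

  def denote(expr):
      """Map an expression of a tiny language onto its meaning."""
      if isinstance(expr, int):
          return expr                 # literals denote themselves
      op, left, right = expr
      if op == "+":
          return denote(left) + denote(right)
      if op == "*":
          return denote(left) * denote(right)
      raise ValueError("no meaning assigned to " + repr(op))

  syntax = ("+", 1, ("*", 2, 3))  # the "system": an arrangement of symbols
  print(denote(syntax))           # 7, the meaning, which lives in the model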

When you say that an e-mail contains meaning and that the network
transfers meaning, this is an assertion about a model of the content of
the e-mail, one that relates meaning in the mind of the writer to bits
in the memory of machines. From this point of view, we can legitimately
say that meaning is transferred and expressed. But the same meaning
doesn't exist in the e-mails if you cut them off from the mind that
expressed the meaning in the form of e-mails, or from the experience
that transferred that meaning into the mind.

Understanding is the process of integrating different models,
different meanings, different pieces of information as seen by your
model. It is the ability to translate pieces of information that have
nontrivial structure into your own basis. Normal use of "understanding"
applies only to humans; everything else generalizes this concept in
sometimes very strange ways. When we say that a person understood
something, in this language it's equivalent to the person having
successfully integrated that piece into his mind, with our model of
that person starting to attribute properties of that piece of
information to his thought and behavior.

So, you are cutting this knot at a trivial point. The difficulty is in
the translation, but you point at one side of the translation process
and say that this side is simple, then point at the other and say that
this side is hard. The problem is that it's hard to put a finger on
the point just after translation, but it's easy to see how our
technology, as a physical medium, transfers information ready for
translation. This outward appearance has little bearing on the semantic
models.




