Regarding denotational semantics:
I prefer to think of the meaning of X as the fuzzy set of patterns
associated with X.  (In fact, I recall giving a talk on this topic at a
meeting of the American Math Society in 1990 ;-)
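
To illustrate in code (a toy Python sketch of this framing; the pattern
names and membership degrees are invented purely for illustration): the
meaning of X can be modeled as a fuzzy set, i.e. a map from patterns to
degrees of membership in [0, 1].

    # Toy sketch: the "meaning" of an entity modeled as a fuzzy set of
    # patterns, i.e. a map from pattern descriptions to membership degrees
    # in [0, 1]. All names and numbers here are invented for illustration.
    from typing import Dict

    FuzzySet = Dict[str, float]  # pattern -> degree of membership

    meaning_of_cup: FuzzySet = {
        "holds liquid": 0.95,
        "has a handle": 0.7,
        "fits in one hand": 0.8,
        "made of ceramic": 0.5,
    }

    def overlap(a: FuzzySet, b: FuzzySet) -> float:
        """Crude similarity of two meanings: sum of min-memberships over shared patterns."""
        return sum(min(a[p], b[p]) for p in set(a) & set(b))

Of course, the interesting question is where the membership degrees come
from, which this sketch says nothing about.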



On Sun, Oct 19, 2008 at 6:59 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> On Sun, Oct 19, 2008 at 11:58 AM, Dr. Matthias Heger <[EMAIL PROTECTED]>
> wrote:
> > The process of outwardly expressing meaning may be fundamental to any
> > social intelligence, but the process itself does not need much
> > intelligence.
> >
> > Every email program can receive meaning, store meaning, and express it
> > outwardly in order to send it to another computer. It can even do so
> > without loss of any information. In this respect it already outperforms
> > humans, who have no conscious access to the full meaning (information)
> > in their brains.
> >
> > The only thing that needs much intelligence, from today's point of view,
> > is learning the process of outwardly expressing meaning, i.e. learning
> > a language. The understanding of language itself is simple.
> >
>
> Meaning is tricky business. As far as I can tell, the meaning Y of a
> system X is given by an external model that relates the system X to its
> meaning Y (where the meaning may be a physical object, or a class of
> objects, with each individual object figuring into the model). Formal
> semantics works this way (see
> http://en.wikipedia.org/wiki/Denotational_semantics ).
> When you are thinking about an object, the train of thought depends on
> your experience with that object, and will influence your behavior in
> situations that depend on information about that object. Meaning
> propagates through the system according to the rules of the model; it
> propagates inferentially, in the model and not in the system, and so
> can reach places and states of the system that are not at all obviously
> concerned with what the semantic model relates them to. And
> conversely, meaning doesn't magically appear where the model doesn't say
> it does: if the system is broken, the meaning is lost, at least until
> you come up with another model and relate it to the previous one.
>
> When you say that an e-mail contains meaning and the network transfers
> meaning, that is an assertion about a model of the content of the e-mail,
> one that relates meaning in the mind of the writer to bits in the memory
> of machines. From this point of view, we can legitimately say that
> meaning is transferred, and is expressed. But the same meaning doesn't
> exist in the e-mails if you cut them off from the mind that expressed the
> meaning in the form of e-mails, or from the mind that experiences that
> transferred meaning.
>
> Understanding is the process of integrating different models,
> different meanings, different pieces of information as seen by your
> model. It is the ability to translate pieces of information that have
> nontrivial structure into your own basis. The normal use of
> "understanding" applies only to humans; everything else generalizes the
> concept in sometimes very strange ways. When we say that a person
> understood something, in this language that is equivalent to the person
> having successfully integrated that piece into his mind, with our model
> of that person starting to attribute properties of that piece of
> information to his thought and behavior.
>
> So, you are cutting this knot at a trivial point. The difficulty is in
> the translation, but you point at one side of the translation process
> and say that this side is simple, then point at the other and say that
> this side is hard. The problem is that it's hard to put a finger on
> the point just after translation, while it's easy to see how our
> technology, as a physical medium, transfers information ready for
> translation. This outward appearance has little bearing on the semantic
> models.
>
> --
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/
>
>
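
Regarding the denotational-semantics link in Vladimir's message: for
readers who haven't seen it, the standard move is a compositional
valuation function mapping each syntactic form to a mathematical object.
Here is a minimal, purely illustrative Python sketch for arithmetic
expressions (not taken from either post; all names are invented):

    # Minimal illustrative sketch of a denotational semantics: each
    # syntactic form is mapped, by structural recursion, to a mathematical
    # object (here, an integer).
    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Num:
        value: int

    @dataclass
    class Add:
        left: "Expr"
        right: "Expr"

    @dataclass
    class Mul:
        left: "Expr"
        right: "Expr"

    Expr = Union[Num, Add, Mul]

    def denote(e: Expr) -> int:
        """The valuation [[e]]: syntax -> meaning."""
        if isinstance(e, Num):
            return e.value
        if isinstance(e, Add):
            return denote(e.left) + denote(e.right)
        if isinstance(e, Mul):
            return denote(e.left) * denote(e.right)
        raise TypeError(f"unknown expression: {e!r}")

    # [[ (1 + 2) * 3 ]] = 9
    assert denote(Mul(Add(Num(1), Num(2)), Num(3))) == 9

The point of the formalism is that the meaning of a compound expression is
built only from the meanings of its parts, which is one precise way of
cashing out "meaning lives in the model."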



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson


