Thanks Pei,

I would add (for others; obviously you know this stuff) that there are many
different theoretical justifications of probability theory, and hence that
the use of probability theory does not imply model-theoretic semantics, nor
any other particular approach to semantics.

My own philosophy is even further from your summary of model-theoretic
semantics than it is from (my reading of) Tarski's original version of
model-theoretic semantics.  I am not an objectivist whatsoever....  (I read
too many Oriental philosophy books in my early youth, when my mom was
studying for her PhD in Chinese history, and my brain was even more pliant
;-).  I deal extensively with objectivity/subjectivity/intersubjectivity
issues in "The Hidden Pattern."

As an example, if one justifies probability theory according to a
Cox's-axioms approach, no model theory is necessary.  In this approach, it
is justified as a set of a priori constraints that the system chooses to
impose on its own reasoning.
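
For those who haven't seen it, here is my rough paraphrase of the
Cox/Jaynes-style derivation; the functions F and S below are just the
unknown combination functions of that argument, nothing specific to NARS or
PLN.  One assumes plausibilities are single real numbers that compose as

  \begin{align*}
    (A \wedge B \mid C) &= F\big[(A \mid C),\ (B \mid A \wedge C)\big] \\
    (\neg A \mid C)     &= S\big[(A \mid C)\big]
  \end{align*}

and then, roughly, requiring internal consistency (e.g. that conjunction be
associative) forces, up to a monotone rescaling, the familiar product and
sum rules:

  \begin{align*}
    p(A \wedge B \mid C) &= p(A \mid C)\, p(B \mid A \wedge C) \\
    p(A \mid C) + p(\neg A \mid C) &= 1
  \end{align*}

Nothing in that derivation mentions a model or an external world; the
constraints apply only to the system's own degrees of belief.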

In a de Finetti approach, it is justified because the system wants to
be able to "win bets" with other agents.  The intersection between this
notion and the hypothesis of an "objective world" is unclear, but it's not
obvious why these hypothetical agents need to have objective existence.
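
To make the betting intuition concrete, here is a toy sketch in Python (the
prices and the two-outcome setup are my own invented example, just the
standard Dutch-book argument in miniature, not anything specific to PLN or
NARS):

  # Toy Dutch-book illustration with made-up prices.
  # The agent will buy or sell, at its stated price, a bet that pays 1 unit
  # if the named event occurs.  If its prices for A and for not-A sum to
  # more than 1, a bookie can sell it both bets and profit in every outcome.

  price_A = 0.55      # agent's price for "pays 1 if A"
  price_not_A = 0.60  # agent's price for "pays 1 if not A"; 0.55 + 0.60 > 1

  for a_occurs in (True, False):
      # exactly one of the two bets pays off, so the agent always receives 1
      payout_to_agent = (1 if a_occurs else 0) + (0 if a_occurs else 1)
      agent_net = payout_to_agent - (price_A + price_not_A)
      print(f"A occurs: {a_occurs}, agent net: {agent_net:+.2f}")

  # Both lines print -0.15: the agent is guaranteed to lose unless its
  # prices satisfy price_A + price_not_A = 1 (and, more generally, the
  # probability axioms).

The coherence constraint falls out of the betting setup itself, whether or
not the other agents are taken to objectively exist.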

As you say, this is a deep philosophical rat's nest... my point is just
that it's not correct to imply "probability theory = traditional
model-theoretic semantics".

-- Ben G

On Sun, Oct 12, 2008 at 8:29 AM, Pei Wang <[EMAIL PROTECTED]> wrote:

> A brief and non-technical description of the two types of semantics
> mentioned in the previous discussions:
>
> (1) Model-Theoretic Semantics (MTS)
>
> (1.1) There is a world existing independently outside the intelligent
> system (human or machine).
>
> (1.2) In principle, there is an objective description of the world, in
> terms of objects, their properties, and relations among them.
>
> (1.3) Within the intelligent system, its knowledge is an approximation
> of the objective description of the world.
>
> (1.4) The meaning of a symbol within the system is the object it
> refers to in the world.
>
> (1.5) The truth-value of a statement within the system measures how
> closely it approximates the fact in the world.
>
> (2) Experience-Grounded Semantics (EGS)
>
> (2.1) There is a world existing independently outside the intelligent
> system (human or machine). [same as (1.1), but the agreement stops
> here]
>
> (2.2) Even in principle, there is no objective description of the
> world. What the system has is its experience, the history of its
> interaction with the world.
>
> (2.3) Within the intelligent system, its knowledge is a summary of its
> experience.
>
> (2.4) The meaning of a symbol within the system is determined by its
> role in the experience.
>
> (2.5) The truth-value of a statement within the system measures how
> closely it summarizes the relevant part of the experience.
>
> To further simplify the description, in the context of learning and
> reasoning: MTS takes the "objective truth" of statements and the "real
> meaning" of terms as the aim of approximation, while EGS rejects them and
> takes experience (input data) as the only thing to depend on.
>
> As usual, each theory has its strengths and limitations. The issue is
> which one is more appropriate for AGI. MTS has been dominant in math,
> logic, and computer science, and is therefore accepted by the majority of
> people. Even so, it has been attacked by other people (not only the EGS
> believers) for many reasons.
>
> A while ago I made a figure to illustrate this difference, which is at
> http://nars.wang.googlepages.com/wang.semantics-figure.pdf . A
> manifesto of EGS is at
> http://nars.wang.googlepages.com/wang.semantics.pdf
>
> Since the debate on the nature of "truth" and "meaning" has existed
> for thousands of years, I don't think we can settle it here with a few
> email exchanges. I just want to let interested people know the
> theoretical background of the related discussions.
>
> Pei
>
>
> On Sat, Oct 11, 2008 at 8:34 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> >
> >
> > Hi,
> >
> >>
> >> > What this highlights for me is the idea that NARS truth values attempt
> >> > to reflect the evidence so far, while probabilities attempt to reflect
> >> > the world.
> >
> > I agree that probabilities attempt to reflect the world.
> >
> >> Well said. This is exactly the difference between an
> >> experience-grounded semantics and a model-theoretic semantics.
> >
> > I don't agree with this distinction ... unless you are construing "model
> > theoretic semantics" in a very restrictive way, which then does not apply
> > to PLN.
> >
> > If by model-theoretic semantics you mean something like what Wikipedia
> > says at http://en.wikipedia.org/wiki/Formal_semantics,
> >
> > ***
> > Model-theoretic semantics is the archetype of Alfred Tarski's semantic
> > theory of truth, based on his T-schema, and is one of the founding
> > concepts of model theory. This is the most widespread approach, and is
> > based on the idea that the meaning of the various parts of the
> > propositions are given by the possible ways we can give a recursively
> > specified group of interpretation functions from them to some predefined
> > mathematical domains: an interpretation of first-order predicate logic is
> > given by a mapping from terms to a universe of individuals, and a mapping
> > from propositions to the truth values "true" and "false".
> > ***
> >
> > then yes, PLN's semantics is based on a mapping from terms to a universe
> > of individuals, and a mapping from propositions to truth values.  On the
> > other hand, these "individuals" may be, for instance, **elementary
> > sensations or actions**, rather than higher-level individuals like, say,
> > a specific cat, or the concept "cat".  So there is nothing
> > non-experience-based about mapping terms into "individuals" that are the
> > system's direct experience ... and then building up more abstract terms
> > by grouping these directly-experience-based terms.
> >
> > IMO, the dichotomy between experience-based and model-based semantics is
> > a misleading one.  Model-based semantics has often been used in a
> > non-experience-based way, but that is not because it fundamentally
> > **has** to be used in that way.
> >
> > To say that PLN tries to model the world is then just to say that it
> > tries to make probabilistic predictions about sensations and actions
> > that have not yet been experienced ... which is certainly the case.
> >
> >>
> >> Once again, the difference in truth-value functions is reduced to the
> >> difference in semantics, that is, what the "truth-value" attempts to
> >> measure.
> >
> > Agreed...
> >
> > Ben G
> >



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson


