Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Ben Goertzel
Thanks Pei,

I would add (for others; obviously you know this stuff) that there are many
different theoretical justifications of probability theory, so the use of
probability theory does not imply model-theoretic semantics or any other
particular approach to semantics.

My own philosophy is even further from your summary of model-theoretic
semantics than it is from (my reading of) Tarski's original version of
model-theoretic semantics.  I am not an objectivist whatsoever (I read too
many Oriental philosophy books in my early youth, when my mom was studying
for her PhD in Chinese history, and my brain was even more pliant ;-).
I deal extensively with objectivity/subjectivity/intersubjectivity issues in
The Hidden Pattern.

As an example, if one justifies probability theory according to a Cox's-axioms
approach, no model theory is necessary.  In this approach, probability theory
is justified as a set of a priori constraints that the system chooses to
impose on its own reasoning.

In a de Finetti approach, it is justified because the system wants to
be able to win bets with other agents.  The intersection between this
notion and the hypothesis of an objective world is unclear, but it's not
obvious why these hypothetical agents need to have objective existence.
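
As a concrete illustration of the de Finetti idea (a toy sketch only; the
betting game and the numbers below are invented for this post and come from
neither PLN nor NARS): if an agent's betting prices violate the probability
axioms, an opponent can sell it a set of bets it accepts individually that
together guarantee it a loss, no matter how the world turns out.

```python
# Toy Dutch-book illustration (hypothetical numbers, not from any real system).
# The agent prices a bet on A at its credence: it pays p for a ticket that
# returns 1 if A holds and 0 otherwise.  If its credences for A and not-A sum
# to more than 1, buying both tickets loses money in every outcome.

def agent_payoff(credence_A: float, credence_not_A: float) -> dict:
    """Agent buys both tickets at its own prices; return net payoff per outcome."""
    cost = credence_A + credence_not_A        # total paid for the two tickets
    return {
        "A true": 1.0 - cost,                 # only the A-ticket pays out
        "A false": 1.0 - cost,                # only the not-A-ticket pays out
    }

print(agent_payoff(0.7, 0.5))  # roughly -0.2 in both outcomes: a guaranteed loss
print(agent_payoff(0.6, 0.4))  # 0.0 in both outcomes: no guaranteed loss
```

Nothing in the toy requires the opponent or the bets to exist objectively;
coherence is motivated by betting behavior alone.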

As you say, this is a deep philosophical rat's-nest... my point is just that
it's not correct to imply that probability theory = traditional
model-theoretic semantics.

-- Ben G

On Sun, Oct 12, 2008 at 8:29 AM, Pei Wang [EMAIL PROTECTED] wrote:

 A brief and non-technical description of the two types of semantics
 mentioned in the previous discussions:

 (1) Model-Theoretic Semantics (MTS)

 (1.1) There is a world existing independently outside the intelligent
 system (human or machine).

 (1.2) In principle, there is an objective description of the world, in
 terms of objects, their properties, and relations among them.

 (1.3) Within the intelligent system, its knowledge is an approximation
 of the objective description of the world.

 (1.4) The meaning of a symbol within the system is the object it
 refers to in the world.

 (1.5) The truth-value of a statement within the system measures how
 closely it approximates the corresponding fact in the world.

 (2) Experience-Grounded Semantics (EGS)

 (2.1) There is a world existing independently outside the intelligent
 system (human or machine). [same as (1.1), but the agreement stops
 here]

 (2.2) Even in principle, there is no objective description of the
 world. What the system has is its experience, the history of its
 interaction with the world.

 (2.3) Within the intelligent system, its knowledge is a summary of its
 experience.

 (2.4) The meaning of a symbol within the system is determined by its
 role in the experience.

 (2.5) The truth-value of a statement within the system measures how
 closely it summarizes the relevant part of the experience.

 To further simplify the description, in the context of learning and
 reasoning: MTS takes the "objective truth" of statements and the "real
 meaning" of terms as the aim of approximation, while EGS rejects them and
 takes experience (input data) as the only thing to depend on.

 As usual, each theory has its strengths and limitations. The issue is
 which one is more appropriate for AGI. MTS has been dominant in math,
 logic, and computer science, and is therefore accepted by the majority of
 people. Even so, it has been attacked by other people (not only the
 EGS believers) for many reasons.

 A while ago I made a figure to illustrate this difference, which is at
 http://nars.wang.googlepages.com/wang.semantics-figure.pdf . A
 manifesto of EGS is at
 http://nars.wang.googlepages.com/wang.semantics.pdf

 Since the debate on the nature of truth and meaning has existed
 for thousands of years, I don't think we can settle it here with
 some email exchanges. I just want to let interested people know
 the theoretical background of the related discussions.

 Pei
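
To make the contrast concrete (an illustrative toy only; the statements,
evidence counts, and the tiny "world" below are invented and are not taken
from NARS or PLN): an MTS-style evaluator reads the truth of a statement off
an assumed objective description of the world, while an EGS-style evaluator
summarizes the evidence the system has collected so far.

```python
# Illustrative toy contrasting the two semantics (invented data, not NARS/PLN).

# MTS-style: truth is read off an assumed objective description of the world.
WORLD = {("raven", "black"): True, ("swan", "black"): False}

def mts_truth(subject: str, predicate: str) -> bool:
    """Look the statement up in the (assumed) objective world description."""
    return WORLD[(subject, predicate)]

# EGS-style: truth summarizes the system's experience with the statement.
def egs_truth(positive: int, negative: int) -> float:
    """Return the frequency of confirming evidence observed so far."""
    total = positive + negative
    return positive / total if total else 0.5  # arbitrary default for the toy

print(mts_truth("raven", "black"))  # True -- determined by the world model
print(egs_truth(8, 2))              # 0.8  -- a summary of 10 pieces of experience
```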



Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Pei Wang
Ben,

Of course, probability theory, in its mathematical form, is not
bound to any semantics at all, though it implicitly excludes some
possibilities. A semantic theory is associated with it when probability
theory is applied to a practical situation.

There are several major schools in the interpretation of probability
(see http://plato.stanford.edu/entries/probability-interpret/), and
their relations with NARS are explained in Section 8.5.1 of my book.

As for the interpretation of probability in PLN, I'd rather wait for
your book than make comments based on your brief explanation.

Pei



Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Abram Demski
Pei,

In this context, how do you justify the use of 'k'? It seems like, by
introducing 'k', you build a reliance on the truth of the future (after
k more observations) into the semantics. Since the induction/abduction
formula depends on 'k', the resulting truth values no longer
only summarize experience; they are calculated with prediction in
mind.

--Abram

 On Sat, Oct 11, 2008 at 8:34 PM, Ben Goertzel [EMAIL PROTECTED] wrote:



 Hi,


  What this highlights for me is the idea that NARS truth values attempt
  to reflect the evidence so far, while probabilities attempt to reflect
  the world

 I agree that probabilities attempt to reflect the world ...

 Well said. This is exactly the difference between an
 experience-grounded semantics and a model-theoretic semantics.

 I don't agree with this distinction ... unless you are construing model
 theoretic semantics in a very restrictive way, which then does not apply to
 PLN.

 If by model-theoretic semantics you mean something like what Wikipedia says
 at http://en.wikipedia.org/wiki/Formal_semantics,

 ***
 Model-theoretic semantics is the archetype of Alfred Tarski's semantic
 theory of truth, based on his T-schema, and is one of the founding concepts
 of model theory. This is the most widespread approach, and is based on the
 idea that the meaning of the various parts of the propositions are given by
 the possible ways we can give a recursively specified group of
 interpretation functions from them to some predefined mathematical domains:
 an interpretation of first-order predicate logic is given by a mapping from
 terms to a universe of individuals, and a mapping from propositions to the
 truth values true and false.
 ***

 then yes, PLN's semantics is based on a mapping from terms to a universe of
 individuals, and a mapping from propositions to truth values.  On the other
 hand, these individuals may be, for instance, **elementary sensations or
 actions**, rather than higher-level individuals like, say, a specific cat,
 or the concept "cat".  So there is nothing non-experience-based about
 mapping terms onto individuals that are the system's direct experience
 ... and then building up more abstract terms by grouping these
 directly-experience-based terms.

 IMO, the dichotomy between experience-based and model-based semantics is a
 misleading one.  Model-based semantics has often been used in a
 non-experience-based way, but that is not because it fundamentally **has**
 to be used in that way.
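
A minimal sketch of the interpretation-function idea in the quoted passage,
with the universe of individuals taken to be elementary sensations and
actions as suggested above (all names and data here are invented for
illustration; this is not PLN's actual machinery):

```python
# Sketch of a model-theoretic interpretation whose individuals are elementary
# sensations/actions (illustrative names only; not PLN's actual machinery).

# Universe of individuals: raw sensory and motor events.
UNIVERSE = {"sensation_red_1", "sensation_red_2", "sensation_warm_1", "action_grab_1"}

# Interpretation of constant symbols: a mapping from terms to individuals.
CONSTANTS = {"s1": "sensation_red_1", "s2": "sensation_warm_1"}

# Interpretation of unary predicates: a mapping to subsets of the universe.
PREDICATES = {"Red": {"sensation_red_1", "sensation_red_2"},
              "Warm": {"sensation_warm_1"}}

# Sanity check: every symbol is interpreted inside the universe.
assert set(CONSTANTS.values()) <= UNIVERSE
assert all(extension <= UNIVERSE for extension in PREDICATES.values())

def truth_value(predicate: str, term: str) -> bool:
    """Map the atomic proposition predicate(term) to true or false."""
    return CONSTANTS[term] in PREDICATES[predicate]

print(truth_value("Red", "s1"))   # True
print(truth_value("Warm", "s1"))  # False
```

More abstract terms ("red", "cat", ...) would then be introduced as groupings
of these directly experienced individuals, as the paragraph above suggests.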


Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Ben Goertzel
On the other hand, in PLN's indefinite probabilities there is a parameter k
which plays a similar mathematical role, yet **is** explicitly interpreted as
being about a number of hypothetical future observations ...

Clearly the interplay between algebra and interpretation is one of the things
that makes this area of research (uncertain logic) interesting ...

ben g

On Sun, Oct 12, 2008 at 2:07 PM, Pei Wang [EMAIL PROTECTED] wrote:

 Abram: The parameter 'k' does not really depend on the future, because
 it makes no assumption about what will happen in that period of time.
 It is just a ruler, or a weight (used with a scale), for measuring the
 amount of evidence; it serves as a reference amount.

 For other people: The definition of confidence, c = w/(w+k), says that
 confidence is the proportion of current evidence within the total
 evidence the system will have after new evidence of amount k arrives.

 Pei
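
A small numerical sketch of the confidence formula above. The frequency
definition f = w+/w and the default k = 1 follow the NARS literature; the
evidence counts themselves are made up for illustration.

```python
# Numerical sketch of NARS evidence measures (evidence counts are made up;
# c = w/(w+k) is the definition quoted above, f = w+/w follows the NARS book).

K = 1.0  # the reference amount of evidence (the "ruler"); NARS commonly uses k = 1

def frequency_confidence(w_plus: float, w_minus: float, k: float = K):
    """Return (frequency, confidence) from positive and negative evidence."""
    w = w_plus + w_minus      # total current evidence
    frequency = w_plus / w    # proportion of positive evidence so far
    confidence = w / (w + k)  # current evidence vs. total evidence after k more units
    return frequency, confidence

print(frequency_confidence(3, 1))    # (0.75, 0.8): 4 units of evidence
print(frequency_confidence(30, 10))  # (0.75, ~0.976): same frequency, higher confidence
```

Nothing here refers to what actually happens after those k further units of
evidence arrive; k only scales how the current evidence is weighed, which is
the point of the reply above.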


Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Pei Wang
True. Similar parameters can be found in the work of Carnap and
Walley, with different interpretations.

Pei


Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Abram Demski
Pei,

You are right, it doesn't make any such assumptions, while Bayesian
practice does. But the parameter 'k' still fixes how far into the
future we are interested in predicting, right? So it seems to me that
the truth value must be predictive, if its calculation depends on what
we want to predict.

That is why 'k' is hard to incorporate into the probabilistic NARSian
scheme I want to formulate...

--Abram


Re: [agi] two types of semantics [Was: NARS and probability]

2008-10-12 Thread Pei Wang
On Sun, Oct 12, 2008 at 3:06 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Pei,

  You are right, it doesn't make any such assumptions, while Bayesian
  practice does. But the parameter 'k' still fixes how far into the
  future we are interested in predicting, right? So it seems to me that
  the truth value must be predictive, if its calculation depends on what
  we want to predict.

The truth-value is defined/measured according to past experience, but
is used to predict future experience. In particular, this is what the
expectation function is about. But still, a high expectation only
means that the system will behave under the assumption that the
statement may be confirmed again, which by no means guarantees the
actual confirmation of the statement in the future.

 That is why 'k' is hard to incorporate into the probabilistic NARSian
 scheme I want to formulate...

For this purpose, the interval version of the truth value may be easier.

Pei
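
For readers following the formulas: below is a sketch of the expectation
function and the interval form mentioned above, using the definitions given
in the NARS literature (e = c*(f - 1/2) + 1/2, and the frequency interval
[w+/(w+k), (w+ + k)/(w+k)]); the evidence counts are illustrative only.

```python
# Sketch of the expectation function and the interval form of a NARS truth
# value, following the NARS literature; the evidence counts are made up.

K = 1.0

def nars_truth(w_plus: float, w_minus: float, k: float = K):
    """Return frequency, confidence, expectation, and the frequency interval."""
    w = w_plus + w_minus
    f = w_plus / w                       # frequency: a summary of past evidence
    c = w / (w + k)                      # confidence: c = w / (w + k)
    e = c * (f - 0.5) + 0.5              # expectation, used when acting on the belief
    interval = (w_plus / (w + k),        # frequency if the next k units are all negative
                (w_plus + k) / (w + k))  # frequency if the next k units are all positive
    return f, c, e, interval

f, c, e, (lo, hi) = nars_truth(8, 2)
print(f, c, e, (lo, hi))  # 0.8, ~0.909, ~0.773, (~0.727, ~0.818)
```

The interval width is k/(w+k) = 1 - c, so the interval and the (frequency,
confidence) pair carry the same information; that is presumably why the
interval version is suggested as the easier bridge to a probabilistic
reformulation.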
