Abram: The parameter 'k' does not really depend on the future, because
it makes no assumption about what will happen in that period of time.
It is just a "ruler", or a "weight" (as used with a scale), for
measuring the amount of evidence against a fixed "reference amount".

For other people: The definition of confidence c = w/(w+k) states that
confidence is the proportion of the current evidence within the total
evidence, after new evidence of amount k arrives.

Pei

On Sun, Oct 12, 2008 at 1:48 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> Pei,
>
> In this context, how do you justify the use of 'k'? It seems like, by
> introducing 'k', you add a reliance on the truth of the future "after
> k observations" into the semantics. Since the induction/abduction
> formula is dependent on 'k', the truth values that result no longer
> only summarize experience; they are calculated with prediction in
> mind.
>
> --Abram
>
> On Sun, Oct 12, 2008 at 8:29 AM, Pei Wang <[EMAIL PROTECTED]> wrote:
>> A brief and non-technical description of the two types of semantics
>> mentioned in the previous discussions:
>>
>> (1) Model-Theoretic Semantics (MTS)
>>
>> (1.1) There is a world existing independently outside the intelligent
>> system (human or machine).
>>
>> (1.2) In principle, there is an objective description of the world, in
>> terms of objects, their properties, and relations among them.
>>
>> (1.3) Within the intelligent system, its knowledge is an approximation
>> of the objective description of the world.
>>
>> (1.4) The meaning of a symbol within the system is the object it
>> refers to in the world.
>>
>> (1.5) The truth-value of a statement within the system measures how
>> closely it approximates the facts in the world.
>>
>> (2) Experience-Grounded Semantics (EGS)
>>
>> (2.1) There is a world existing independently outside the intelligent
>> system (human or machine). [same as (1.1), but the agreement stops
>> here]
>>
>> (2.2) Even in principle, there is no objective description of the
>> world. What the system has is its experience, the history of its
>> interaction with the world.
>>
>> (2.3) Within the intelligent system, its knowledge is a summary of its
>> experience.
>>
>> (2.4) The meaning of a symbol within the system is determined by its
>> role in the experience.
>>
>> (2.5) The truth-value of a statement within the system measures how
>> closely it summarizes the relevant part of the experience.
>>
>> To further simplify the description, in the context of learning and
>> reasoning: MTS takes the "objective truth" of statements and the "real
>> meaning" of terms as the targets of approximation, while EGS rejects
>> both and takes experience (input data) as the only thing to depend on.
>>
>> As usual, each theory has its strengths and limitations. The issue is
>> which one is more proper for AGI. MTS has been dominant in math,
>> logic, and computer science, and is therefore accepted by the majority
>> of people. Even so, it has been attacked by others (not only
>> EGS believers) for many reasons.
>>
>> A while ago I made a figure to illustrate this difference, which is at
>> http://nars.wang.googlepages.com/wang.semantics-figure.pdf . A
>> manifesto of EGS is at
>> http://nars.wang.googlepages.com/wang.semantics.pdf
>>
>> Since the debate on the nature of "truth" and "meaning" has existed
>> for thousands of years, I don't think we can settle it here with
>> some email exchanges. I just want to let the interested people know
>> the theoretical background of the related discussions.
>>
>> Pei
>>
>>
>> On Sat, Oct 11, 2008 at 8:34 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>>
>>>
>>>
>>> Hi,
>>>
>>>>
>>>> > What this highlights for me is the idea that NARS truth values attempt
>>>> > to reflect the evidence so far, while probabilities attempt to reflect
>>>> > the world.
>>>
>>> I agree that probabilities attempt to reflect the world
>>>
>>>>
>>>> Well said. This is exactly the difference between an
>>>> experience-grounded semantics and a model-theoretic semantics.
>>>
>>> I don't agree with this distinction ... unless you are construing "model
>>> theoretic semantics" in a very restrictive way, which then does not apply to
>>> PLN.
>>>
>>> If by model-theoretic semantics you mean something like what Wikipedia says
>>> at http://en.wikipedia.org/wiki/Formal_semantics,
>>>
>>> ***
>>> Model-theoretic semantics is the archetype of Alfred Tarski's semantic
>>> theory of truth, based on his T-schema, and is one of the founding concepts
>>> of model theory. This is the most widespread approach, and is based on the
>>> idea that the meaning of the various parts of the propositions are given by
>>> the possible ways we can give a recursively specified group of
>>> interpretation functions from them to some predefined mathematical domains:
>>> an interpretation of first-order predicate logic is given by a mapping from
>>> terms to a universe of individuals, and a mapping from propositions to the
>>> truth values "true" and "false".
>>> ***
>>>
>>> then yes, PLN's semantics is based on a mapping from terms to a universe of
>>> individuals, and a mapping from propositions to truth values.  On the other
>>> hand, these "individuals" may be for instance **elementary sensations or
>>> actions**, rather than higher-level individuals like, say, a specific cat,
>>> or the concept "cat".  So there is nothing non-experience-based about
>>> mapping terms into "individuals" that are the system's direct experience
>>> ... and then building up more abstract terms by grouping these
>>> directly-experience-based terms.
>>>
>>> IMO, the dichotomy between experience-based and model-based semantics is a
>>> misleading one.  Model-based semantics has often been used in a
>>> non-experience-based way, but that is not because it fundamentally **has**
>>> to be used in that way.
>>>
>>> To say that PLN tries to model the world is then just to say that it tries
>>> to make probabilistic predictions about sensations and actions that have not
>>> yet been experienced ... which is certainly the case.
>>>
>>>>
>>>> Once
>>>> again, the difference in truth-value functions is reduced to the
>>>> difference in semantics, that is, what the "truth-value" attempts to
>>>> measure.
>>>
>>> Agreed...
>>>
>>> Ben G
>>>
>
>

