Brad,

Thanks for the encouragement.

For people who cannot fully grok the discussion from the email alone,
the relevant NARS references are
http://nars.wang.googlepages.com/wang.semantics.pdf and
http://nars.wang.googlepages.com/wang.confidence.pdf
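
To give the gist of the confidence paper in one formula: NARS summarizes
the evidence for a statement as a (frequency, confidence) pair, where
frequency is the proportion of positive evidence and confidence grows
with the total amount of evidence. A minimal Python sketch (the function
name and the numbers are only illustrative; k is the system's
"personality" parameter, 1 by default):

    def nars_truth(w_plus, w_minus, k=1.0):
        # w_plus / w_minus: amounts of positive / negative evidence
        w = w_plus + w_minus    # total evidence (assumed > 0)
        f = w_plus / w          # frequency: fraction that is positive
        c = w / (w + k)         # confidence: approaches 1 as w grows
        return f, c

    nars_truth(9, 1)   # (0.9, ~0.91): much evidence, mostly positive
    nars_truth(1, 0)   # (1.0, 0.5): a single observation gives full
                       #   frequency but only low confidence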

Pei

On Sat, Oct 11, 2008 at 1:13 AM, Brad Paulsen <[EMAIL PROTECTED]> wrote:
> Pei, Ben G. and Abram,
>
> Oh, man, is this stuff GOOD!  This is the real nitty-gritty of the AGI
> matter.  How does your approach handle counter-evidence?  How does your
> approach deal with insufficient evidence?  (Those are rhetorical questions,
> by the way -- I don't want to influence the course of this thread, just want
> to let you know I dig it and, mostly, grok it as well.)  I love this stuff.
> You guys are brilliant.  Actually, I think it would make a good
> publication: "PLN vs. NARS -- The AGI Smack-down!"  A win-win contest.
>
> This is a rare treat for an old hacker like me.  And, I hope, educational
> for all (including the participants)!  Keep it coming, please!
>
> Cheers,
> Brad
>
> Pei Wang wrote:
>>
>> On Fri, Oct 10, 2008 at 8:03 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>>
>>> Yah, according to Bayes rule, if one assumes P(bird) = P(swimmer) this
>>> would be the case...
>>>
>>> (Of course, this kind of example is cognitively misleading, because if
>>> the only knowledge the system has is "Swallows are birds" and "Swallows
>>> are NOT swimmers", then it doesn't really know that the terms involved
>>> are "swallows", "birds", "swimmers", etc. ... in that case they're just
>>> almost-meaningless tokens to the system, right?)
>>
>> Well, it depends on the semantics. According to model-theoretic
>> semantics, if a term has no reference, it has no meaning. According to
>> experience-grounded semantics, every term in experience has meaning
>> --- by the role it plays.
>>
>> Further questions:
>>
>> (1) Don't you intuitively feel that the evidence provided by
>> non-swimming birds says more about "Birds are swimmers" than
>> "Swimmers are birds"?
>>
>> (2) If your answer to (1) is "yes", then think about "Adults are
>> alcohol-drinkers" and "Alcohol-drinkers are adults" --- do they have
>> the same set of counterexamples, intuitively speaking?
>>
>> (3) According to your previous explanation, will PLN also take a red
>> apple as negative evidence for "Birds are swimmers" and "Swimmers are
>> birds", because it reduces the "candidate pool" by one? Of course, the
>> probability adjustment may be very small, but qualitatively, isn't it
>> the same as a non-swimming bird? If not, then what will the system do
>> about it?
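>>
>> To make the asymmetry in (1) concrete: in NARS, the evidence for "S are
>> P" is collected from the extension of S (and, dually, from the
>> intension of P, which I omit here), so a non-swimming bird bears on
>> "Birds are swimmers" but not on "Swimmers are birds". A toy Python
>> sketch, with sets standing in for extensions (the terms are made up):
>>
>>     def evidence(S_ext, P_ext):
>>         pos = len(S_ext & P_ext)   # members of S that are also P
>>         neg = len(S_ext - P_ext)   # members of S that are not P
>>         return pos, neg
>>
>>     birds    = {"swallow", "duck"}
>>     swimmers = {"duck"}
>>
>>     evidence(birds, swimmers)   # (1, 1): the swallow counts against
>>                                 #   "Birds are swimmers"
>>     evidence(swimmers, birds)   # (1, 0): the swallow plays no role
>>                                 #   in "Swimmers are birds" at all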
>>
>> Pei
>>
>>
>>> On Fri, Oct 10, 2008 at 7:34 PM, Pei Wang <[EMAIL PROTECTED]> wrote:
>>>>
>>>> Ben,
>>>>
>>>> I see your position.
>>>>
>>>> Let's go back to the example. If the only relevant domain knowledge
>>>> PLN has is "Swallows are birds" and "Swallows are
>>>> NOT swimmers", will the system assign the same lower-than-default
>>>> probability to "Birds are swimmers" and "Swimmers are birds"? Again,
>>>> I only need a qualitative answer.
>>>>
>>>> Pei
>>>>
>>>> On Fri, Oct 10, 2008 at 7:24 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>>>>
>>>>> Pei,
>>>>>
>>>>> I finally took a moment to actually read your email...
>>>>>
>>>>>>
>>>>>> However, the negative evidence of one conclusion is no evidence of the
>>>>>> other conclusion. For example, "Swallows are birds" and "Swallows are
>>>>>> NOT swimmers" suggests "Birds are NOT swimmers", but says nothing
>>>>>> about whether "Swimmers are birds".
>>>>>>
>>>>>> Now I wonder if PLN shows a similar asymmetry in induction/abduction
>>>>>> on negative evidence. If it does, then how can that effect come out of
>>>>>> a symmetric truth-function? If it doesn't, how can you justify the
>>>>>> conclusion, which looks counter-intuitive?
>>>>>
>>>>> According to Bayes rule,
>>>>>
>>>>> P(bird | swimmer) P(swimmer) = P(swimmer | bird) P(bird)
>>>>>
>>>>> So, in PLN, evidence for P(bird | swimmer) will also count as evidence
>>>>> for P(swimmer | bird), though potentially with a different weighting
>>>>> attached to each piece of evidence.
>>>>>
>>>>> If P(bird) = P(swimmer) is assumed, then each piece of evidence
>>>>> for either of the two conditional probabilities will count for the
>>>>> other one symmetrically.
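>>>>>
>>>>> In code, that conversion is just Bayes rule (a trivial sketch, with
>>>>> made-up numbers):
>>>>>
>>>>>     def bayes_flip(p_b_given_s, p_s, p_b):
>>>>>         # P(swimmer|bird) = P(bird|swimmer) P(swimmer) / P(bird)
>>>>>         return p_b_given_s * p_s / p_b
>>>>>
>>>>>     bayes_flip(0.3, 0.1, 0.1)   # 0.3: with P(bird) == P(swimmer),
>>>>>                                 #   flipping preserves the value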
>>>>>
>>>>> The intuition here is the standard Bayesian one.
>>>>> Suppose you know there are 10000 things in the universe, and 1000
>>>>> swimmers. If you then find out that swallows are not swimmers, then,
>>>>> unless you think there are zero swallows, this does affect
>>>>> P(bird | swimmer). For instance, suppose you think there are 10
>>>>> swallows and 100 birds. Then, if you know for sure that swallows are
>>>>> not swimmers, and you have no other info but the above, your estimate
>>>>> of P(bird|swimmer) should decrease... because the 10 swallows are
>>>>> birds that cannot be among the 1000 swimmers, so only 90 of the 100
>>>>> birds might be swimmers, whereas before you thought all 100 might be.
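>>>>>
>>>>> Here is the arithmetic, assuming swimmers are a uniformly random
>>>>> 1000-element subset of the things that can still be swimmers (a
>>>>> back-of-envelope model of my own, not the actual PLN update):
>>>>>
>>>>>     universe, swimmers, birds, swallows = 10000, 1000, 100, 10
>>>>>
>>>>>     # before: each bird is a swimmer with probability 1000/10000
>>>>>     p_before = (birds * swimmers / universe) / swimmers         # 0.0100
>>>>>
>>>>>     # after "swallows are not swimmers": only the 90 non-swallow
>>>>>     # birds, among the 9990 non-swallow things, can be swimmers
>>>>>     p_after = ((birds - swallows) * swimmers
>>>>>                / (universe - swallows)) / swimmers              # ~0.0090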
>>>>>
>>>>> And the same sort of reasoning holds for **any** probability
>>>>> distribution you place on the number of things in the universe,
>>>>> the number of swimmers, the number of birds, the number of swallows.
>>>>> It doesn't matter what assumption you make, whether you look at
>>>>> n'th order pdf's or whatever ... the same reasoning works...
>>>>>
>>>>> From what I understand, your philosophical view is that it's somehow
>>>>> wrong for a mind to make some assumption about the pdf underlying
>>>>> the world around it?  Is that correct?  If so, I don't agree with
>>>>> this... I think this kind of assumption is just part of the
>>>>> "inductive bias" with which a mind approaches the world.
>>>>>
>>>>> The human mind may well have particular pdf's for stuff like birds and
>>>>> trees wired into it, as we evolved to deal with these things.  But
>>>>> that's not really the point.  The inductive bias may be much more
>>>>> abstract -- ultimately, it can just be an "occam bias" that biases the
>>>>> mind toward prior distributions (over the space of procedures for
>>>>> generating prior distributions for handling specific cases) that are
>>>>> simplest according to some wired-in simplicity measure....
>>>>>
>>>>> So again we get back to basic differences in philosophy...
>>>>>
>>>>> -- Ben G
>>>>>
>>>>
>>>
>>>
>>> --
>>> Ben Goertzel, PhD
>>> CEO, Novamente LLC and Biomind LLC
>>> Director of Research, SIAI
>>> [EMAIL PROTECTED]
>>>
>>> "Nothing will ever be attempted if all possible objections must be first
>>> overcome "  - Dr Samuel Johnson
>>>
>>
>>
>