Re: [agi] NARS and probability

2008-10-11 Thread Pei Wang
Brad,

Thanks for the encouragement.

For people who cannot fully grok the discussion from the email alone,
the relevant NARS references are
http://nars.wang.googlepages.com/wang.semantics.pdf and
http://nars.wang.googlepages.com/wang.confidence.pdf

Pei

On Sat, Oct 11, 2008 at 1:13 AM, Brad Paulsen [EMAIL PROTECTED] wrote:
 Pei, Ben G. and Abram,

 Oh, man, is this stuff GOOD!  This is the real nitty-gritty of the AGI
 matter.  How does your approach handle counter-evidence?  How does your
 approach deal with insufficient evidence?  (Those are rhetorical questions,
 by the way -- I don't want to influence the course of this thread, just want
 to let you know I dig it and, mostly, grok it as well).  I love this stuff.
  You guys are brilliant.  Actually, I think it would make a good
 publication: PLN vs. NARS -- The AGI Smack-down!  A win-win contest.

 This is a rare treat for an old hacker like me.  And, I hope, educational
 for all (including the participants)!  Keep it coming, please!

 Cheers,
 Brad

 Pei Wang wrote:

 On Fri, Oct 10, 2008 at 8:03 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Yah, according to Bayes rule if one assumes P(bird) = P(swimmer) this
 would
 be the case...

 (Of course, this kind of example is cognitively misleading, because if
 the only knowledge the system has is "Swallows are birds" and "Swallows
 are NOT swimmers", then it doesn't really know that the terms involved
 are swallows, birds, swimmers, etc. ... in that case they're just
 almost-meaningless tokens to the system, right?)

 Well, it depends on the semantics. According to model-theoretic
 semantics, if a term has no reference, it has no meaning. According to
 experience-grounded semantics, every term in experience has meaning
 --- by the role it plays.

 Further questions:

 (1) Don't you intuitively feel that the evidence provided by
 non-swimming birds says more about "Birds are swimmers" than
 "Swimmers are birds"?

 (2) If your answer for (1) is yes, then think about "Adults are
 alcohol-drinkers" and "Alcohol-drinkers are adults" --- do they have
 the same set of counterexamples, intuitively speaking?

 (3) According to your previous explanation, will PLN also take a red
 apple as negative evidence for "Birds are swimmers" and "Swimmers are
 birds", because it reduces the candidate pool by one? Of course, the
 probability adjustment may be very small, but qualitatively, isn't it
 the same as a non-swimming bird? If not, then what will the system do
 about it?

 Pei


 On Fri, Oct 10, 2008 at 7:34 PM, Pei Wang [EMAIL PROTECTED] wrote:

 Ben,

 I see your position.

 Let's go back to the example. If the only relevant domain knowledge
 PLN has is "Swallows are birds" and "Swallows are
 NOT swimmers", will the system assign the same lower-than-default
 probability to "Birds are swimmers" and "Swimmers are birds"? Again,
 I only need a qualitative answer.

 Pei

 On Fri, Oct 10, 2008 at 7:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Pei,

 I finally took a moment to actually read your email...


 However, the negative evidence of one conclusion is no evidence of the
 other conclusion. For example, "Swallows are birds" and "Swallows are
 NOT swimmers" suggest "Birds are NOT swimmers", but say nothing
 about whether "Swimmers are birds".

 Now I wonder if PLN shows a similar asymmetry in induction/abduction
 on negative evidence. If it does, then how can that effect come out of
 a symmetric truth-function? If it doesn't, how can you justify the
 conclusion, which looks counter-intuitive?

 According to Bayes rule,

 P(bird | swimmer) P(swimmer) = P(swimmer | bird) P(bird)

 So, in PLN, evidence for P(bird | swimmer) will also count as evidence
 for P(swimmer | bird), though potentially with a different weighting
 attached to each piece of evidence

 If P(bird) = P(swimmer) is assumed, then each piece of evidence
 for either of the two conditional probabilities will count for the other
 one symmetrically.

 The intuition here is the standard Bayesian one.
 Suppose you know how
 many things there are in the universe, and that 1000 of them are swimmers.
 Then if you find out that swallows are not
 swimmers ... then, unless you think there are zero swallows,
 this does affect P(bird | swimmer).  For instance, suppose
 you think there are 10 swallows and 100 birds.  Then, if you know for
 sure
 that swallows are not swimmers, and you have no other
 info but the above, your estimate of P(bird|swimmer)
 should decrease... because of the 1000 swimmers, you now know there
 are only 990 that might be birds ... whereas before you thought
 there were 1000 that might be birds.

 And the same sort of reasoning holds for **any** probability
 distribution you place on the number of things in the universe,
 the number of swimmers, the number of birds, the number of swallows.
 It doesn't matter what assumption you make, whether you look at
 n'th order pdf's or whatever ... the same reasoning works...
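A minimal Python sketch of the counting intuition appealed to above. The
maximum-ignorance model and the universe size are illustrative assumptions of
this sketch (the universe figure in the original message is garbled), not
PLN's actual machinery:

N_THINGS   = 10_000   # assumed universe size, for illustration only
N_BIRDS    = 100
N_SWALLOWS = 10       # the swallows are a subset of the birds

def p_bird_given_swimmer(excluded_birds: int) -> float:
    # Under indifference, a swimmer is equally likely to be any object not yet
    # ruled out as a swimmer, so P(bird | swimmer) = eligible birds / eligible objects.
    return (N_BIRDS - excluded_birds) / (N_THINGS - excluded_birds)

before = p_bird_given_swimmer(0)           # no constraint yet: 0.0100
after  = p_bird_given_swimmer(N_SWALLOWS)  # "swallows are not swimmers": ~0.0090

print(before, after)   # the estimate drops slightly, which is the qualitative point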

 From what I understand, your philosophical view is that it's 

Re: [agi] NARS and probability

2008-10-11 Thread Ben Goertzel
Pei etc.,

First high level comment here, mostly to the non-Pei audience ... then I'll
respond to some of the details:

This dialogue -- so far -- feels odd to me because I have not been
defending anything special, peculiar or inventive about PLN here.
There are some things about PLN that would be considered to fall into
that category (e.g. the treatment of intension which uses my pattern
theory, and the treatment of quantifiers which uses third-order
probabilities ... or even the use of indefinite truth values).  Those
are the things that I would expect to be arguing about!  Even more
interesting would be to argue about strategies for controlling
combinatorial explosion in inference trees, which IMO is the truly
crucial issue, more so than the particulars of the inference and
uncertainty management formalism (though those particulars need to be
workable too, if one is to have an AI with explicit inference as a
significant component).

Instead, in this dialogue, I am essentially defending the standard usage
of probability theory, which is the **least** interesting and inventive part
of
PLN.  I'm defending the use of Bayes rule ... re-presenting the standard
Bayesian argument about the Hempel confirmation problem, etc.

This is rather a reversal of positions for me, as I more often these days
argue
with people who are hard-core Bayesians, who believe that explicitly doing
Bayesian inference is the key to AGI ... and  my argument with them is that
a) you need to supplement probability theory with heuristics, because
otherwise
things become intractable; b) these heuristics are huge and subtle and in
fact wind up constituting a whole cognitive architecture of which explicit
probability
theory is just one component (but the whole architecture appears to the
probabilistic-reasoning component as a set of heuristic assumptions).

So anyway this is  not, so far, so much of a PLN versus NARS debate as a
probability theoretic AI versus NARS debate, in the sense that none of the
more odd/questionable/fun/inventive parts of PLN are being invoked here ...
only the parts that are common to PLN and a lot of other approaches...

But anyway, back to defending Bayes and elementary probability theory
(in its application to common sense reasoning; obviously Pei is not disputing
the actual mathematics!)

Maybe in this reply I will get a chance to introduce some of the more
interesting
aspects of PLN, we'll see...



 Since each inference rule usually only considers two premises, whether
 the meaning of the involved concepts are rich or poor (i.e., whether
 they are also involved in other statements not considered by the rule)
 shouldn't matter in THAT STEP, right?



It doesn't matter in the sense of determining
what the system does in that step, but it matters in terms
of the human intuitiveness evaluation of that step, because we are
intuitively accustomed to evaluating inferences regarding rich concepts
that have a lot of links, and for which we have some intuitive understanding
of the relevant term probabilities.






  Further questions:
 
  (1) Don't you intuitively feel that the evidence provided by
  non-swimming birds says more about Birds are swimmers than
  Swimmers are birds?
 
  Yes, but only because I know intuitively that swimmers are more common
  in my everyday world than birds.

 Please note that this issue is different from our previous debate.
 Node probability have nothing to do with the asymmetry in
 induction/abduction.



I don't remember our previous debate and don't have time to study  my
email archives (I don't really have time to answer this email but I'm doing
it
anyway ;-) ...

Anyway, in PLN, if we map "is" into ExtensionalInheritance, then the point
that

P(swimmer | bird) P(bird) = P(bird | swimmer) P(swimmer)

lets me answer your question without even thinking much about the context.

Due to Bayes rule, in any Bayesian inference system, evidence for one of
{ P(swimmer|bird), P(bird|swimmer) } may be considered as evidence for the
other, in principle.  [How that evidence is propagated through the system's
memory is another question, etc. etc.]  And Bayes rule tells you how to
convert evidence for one of these conditionals into evidence for the other.

Getting back to the odd versus standard aspects of PLN, if we introduce an
odd aspect we can model "is" as IntensionalInheritance, or a weighted average
of ExtensionalInheritance and IntensionalInheritance.

In the intensional case, then, for instance

"bird is swimmer"

comes out to mean

P(X is in PAT_swimmer | X is in PAT_bird)

where PAT_A is the fuzzy set of patterns in A.

A quick cut and paste from the PLN book, page 257, here:

***
Note a significant difference from NARS here. In NARS, it is assumed that X
inherits from Y if X extensionally inherits from Y but Y intensionally
inherits
from (inherits properties from) X. We take a different approach here. We say
that
X inherits from Y if X's members are members of Y, and the properties
associ-

Re: [agi] NARS and probability

2008-10-11 Thread Pei Wang
On Fri, Oct 10, 2008 at 8:56 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Well, it depends on the semantics. According to model-theoretic
 semantics, if a term has no reference, it has no meaning. According to
 experience-grounded semantics, every term in experience has meaning
 --- by the role it plays.

 That's why I said almost-meaningless ... if those are the only
 relationships
 known to the system, then the terms in those relationships play almost
 no roles, hence have almost no meanings...

Since each inference rule usually only considers two premises, whether
the meanings of the involved concepts are rich or poor (i.e., whether
they are also involved in other statements not considered by the rule)
shouldn't matter in THAT STEP, right?

 Further questions:

 (1) Don't you intuitively feel that the evidence provided by
 non-swimming birds says more about Birds are swimmers than
 Swimmers are birds?

 Yes, but only because I know intuitively that swimmers are more common
 in my everyday world than birds.

Please note that this issue is different from our previous debate.
Node probability has nothing to do with the asymmetry in
induction/abduction.

For example, a non-swimming bird is negative evidence for "Birds are
swimmers" but irrelevant to "Swimmers are birds", while a non-bird
swimmer is negative evidence for "Swimmers are birds" but irrelevant
to "Birds are swimmers". No matter which of the two nodes is more
common, you cannot get both cases right.

 (2) If your answer for (1) is yes, then think about Adults are
 alcohol-drinkers and Alcohol-drinkers are adults --- do they have
 the same set of counter examples, intuitively speaking?

 Again, our intuitions for this are colored by the knowledge that there
 are more adults than alcohol-drinkers.

As above, the two sets of counterexamples are "non-alcohol-drinking
adults" and "non-adult alcohol-drinkers", respectively. The fact that
these two statements have different negative evidence has nothing to
do with the size of the related sets (node probability).

 Consider high school, which has 4 years: freshman, sophomore,
 junior, senior.

 Then think about "Juniors & seniors are women" and "Women
 are juniors & seniors"

 It seems quite intuitive to me that, in this case, the same pieces of
 evidence support the truth values of these two hypotheses.

 This is because the term probabilities of juniors and seniors
 and women are intuitively known to be about equal.

Instead of supporting evidence, you should address refuting
evidence (because that is where the issue is). For "Juniors & seniors
are women", it is junior & senior men, and for "Women are juniors
& seniors", it is freshman & sophomore women.

What I argued is: the counter-evidence of statement "A is B" is not
counter-evidence of the converse statement "B is A", and vice versa.
You cannot explain this in both directions by node probability.

 (3) According to your previous explanation, will PLN also take a red
 apple as negative evidence for Birds are swimmers and Swimmers are
 birds, because it reduces the candidate pool by one? Of course, the
 probability adjustment may be very small, but qualitatively, isn't it
 the same as a non-swimming bird? If not, then what the system will do
 about it?

 Yes, in principle, PLN will behave in Hempel's confirmation paradox in
 a similar way to other Bayesian systems.

 I do find this counterintuitive, personally, and I spent a while trying to
 work
 around it ... but finally I decided that my intuition is the faulty thing.
 As you note,
 it's a very small probability adjustment in these cases, so it's not
 surprising
 if human intuition is not tuned to make such small probability adjustments
 in a correct or useful way...

Well, actually your previous explanation is exactly the opposite of
the standard Bayesian answer --- see
http://en.wikipedia.org/wiki/Raven_paradox

Now we have three different opinions on the relationship between the
statement "Birds are swimmers" and the evidence provided by a red
apple:
(1) NARS: it is irrelevant (neither positive nor negative)
(2) PLN: it is negative evidence (though very small)
(3) Bayesian: it is positive evidence (though very small)

Everyone agrees that (2) and (3) are counterintuitive, but most people
trust probability theory more than their own intuition --- after all,
nobody is perfect ... :-(

To me, "small probability adjustment" is a bad excuse. No matter how
small the adjustment is, as long as it is not infinitely small, it
cannot always be ignored, since it will accumulate. If all non-bird
objects are taken as (either positive or negative) evidence for "Birds
are swimmers", then the huge number of them cannot be ignored.

It is always possible to save a theory (probability theory, in this
situation) if you are willing to pay the price. The problem is whether
the price is too high.
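For concreteness, here is a small Python sketch of the standard Bayesian
account that position (3) refers to. The modeling choice (the number of
non-black objects is held fixed across the two hypotheses) and all the counts
are assumptions of this illustration, not anyone's committed model:

N_OBJECTS  = 1_000_000   # assumed size of the universe
N_NONBLACK = 900_000     # assumed count of non-black objects, fixed under both hypotheses
K          = 1           # under the rival hypothesis, K ravens are non-black

# Evidence E: a uniformly drawn object turns out to be a non-black non-raven (a red apple).
p_E_given_all_ravens_black    = N_NONBLACK / N_OBJECTS        # every non-black object is a non-raven
p_E_given_some_raven_nonblack = (N_NONBLACK - K) / N_OBJECTS  # K of the non-black objects are ravens

likelihood_ratio = p_E_given_all_ravens_black / p_E_given_some_raven_nonblack
print(likelihood_ratio)  # ~1.0000011: positive evidence for "all ravens are black", but minuscule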

Pei



Re: [agi] NARS and probability

2008-10-11 Thread Pei Wang
Ben,

Your reply raised several interesting topics, and most of them cannot
be settled in this kind of email exchange. Therefore, I won't
address every one of them here, but will propose another solution in a
separate private email.

Going back to where this debate started: the asymmetry of
induction/abduction. To me, here is what the discussion has revealed
so far:

(1) The PLN solution is consistent with the Bayesian tradition and
probability theory in general, though it is counterintuitive.

(2) The NARS solution fits people's intuition, though it violates
probability theory.

Please note that on this topic, what is involved is not just Pei's
intuition (though in some other topics, it is) --- Hempel's Paradox
looks counterintuitive to everyone, including you (which you admitted)
and Hempel himself, though you, Hempel, and most of the others
involved in this research, choose to accept the counterintuitive
conclusion, because of the belief that probability theory should be
followed in commonsense reasoning.

As I said before, I don't think I can change your belief in
probability theory very soon. Therefore, as long as you think my above
summary is fair, I've reached my goal in this round of exchange.

Pei



Re: [agi] NARS and probability

2008-10-11 Thread Ben Goertzel
Thanks Pei!

This is an interesting dialogue, but indeed, I have some reservations about
putting so much energy into email dialogues -- for a couple reasons

1)
because, once they're done,
the text generated basically just vanishes into messy, barely-searchable
archives.

2)
because I tend to answer emails on the fly and hastily, without putting
careful thought into phrasing, as I do when writing papers or books ... and
this hastiness can sometimes add confusion

It would be better to further explore these issues in some other forum where
the
discussion would be preserved in a more easily readable form, and where
the medium is more conducive to carefully-thought-out phrasings...


Go back to where this debate starts: the asymmetry of
 induction/abduction. To me, here is what the discussion  has revealed
 so far:

 (1) The PLN solution is consistent with the Bayesian tradition and
 probability theory in general, though it is counterintuitive.

 (2) The NARS solution fits people's intuition, though it violates
 probability theory.



I don't fully agree with this summary, sorry.

I agree that the PLN approach
is counterintuitive in some respects (e.g. the Hempel puzzle)

I also note that the more innovative aspects of PLN don't seem
to introduce any new counterintuitiveness.  The counterintuitiveness
that is there is just inherited from plain old probability theory, it seems.

However, I also feel
the NARS approach is counterintuitive in some respects.  One
example is the fact that in NARS
induction/abduction, the frequency component of the conclusion depends
on only one of the premises.

Another example is the lack of Bayes
rule in NARS: there is loads of evidence that humans and animals intuitively
reason according to Bayes rule in various situations.

Which approach (PLN or NARS) is more agreeable with human intuition, on the
whole,
is not clear to me.   And, as I argued in my prior email, this is not the
most
interesting issue from my point of view ... for two reasons, actually (only
one
of which I elaborated carefully before)

1)
I'm not primarily trying to model humans, but rather trying to create a
powerful
AGI

2)
Human intuition about human practice
does not always match human practice.  What we feel like we're
doing may not match what we're actually doing in our brains.  This is very
plainly
demonstrated for instance in the area of mental arithmetic: the algorithms
people
think they're following, could not possibly lead to the timing-patterns that
people
generate when actually solving mental arithmetic problems.  The same thing
may hold for inference: the rules people think they're following may not be
the
ones they actually follow.  So that intuitiveness is of significant yet
limited
value in figuring out what people actually do unconsciously when thinking.


-- Ben G





Re: [agi] NARS and probability

2008-10-11 Thread Pei Wang
Ben,

My summary was on the asymmetry of induction/abduction topic alone,
not on NARS vs. PLN in general --- of course NARS is counterintuitive
in several places!

Under that restriction, I assume you'll agree with my summary.

Please note that this issue is related to Hempel's Paradox, but not
the same --- the former is on negative evidence, while the latter is
on positive evidence.

I won't address the other issues here --- as you said, they are
complicated, and email discussion is not always enough. I'm looking
forward to the PLN book and your future publications on the related
topics.

Pei



Re: [agi] NARS and probability

2008-10-11 Thread Abram Demski
Pei, Ben,

I am going to try to spell out an argument for each side (arguing for
symmetry, then for asymmetry).

For Symmetry:

Suppose we get negative evidence for "As are Bs", such that we are
tempted to say no As are Bs. We then consider the statement "Bs are
As", with no other info. We think, "If we found a B that was an A,
then we would also have found an A that was a B; I don't think any
exist; so, I don't think there are any Bs that are As." Thus, evidence
against "As are Bs" is also evidence against "Bs are As".

Against Symmetry:

If we are counting empirical frequencies, then an A that is not a B
will lower the frequency of "As are Bs"; however, it will not alter
the frequency count for "Bs are As".


What this highlights for me is the idea that NARS truth values attempt
to reflect the evidence so far, while probabilities attempt to reflect
the world.
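A minimal Python sketch of the counting view in the "Against Symmetry"
argument, using NARS-style evidence counts (my own illustration, not code
from NARS):

from dataclasses import dataclass

@dataclass
class Evidence:
    pos: float = 0.0   # w+ : observed As that are Bs (for the statement "As are Bs")
    neg: float = 0.0   # w- : observed As that are not Bs

    def frequency(self) -> float:
        total = self.pos + self.neg
        return self.pos / total if total > 0 else 0.5   # placeholder when there is no evidence

a_are_b = Evidence()   # evidence pool for "As are Bs"
b_are_a = Evidence()   # evidence pool for "Bs are As"

# Observe an A that is not a B: it is counted for "As are Bs" only.
a_are_b.neg += 1
# The same observation is not an instance of a B at all, so "Bs are As" is untouched.

print(a_are_b.frequency())   # 0.0 -- the frequency of "As are Bs" drops
print(b_are_a.frequency())   # 0.5 -- placeholder; still no evidence either way for "Bs are As"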

--Abram




Re: [agi] NARS and probability

2008-10-11 Thread Abram Demski
On Sat, Oct 11, 2008 at 5:38 PM, Pei Wang [EMAIL PROTECTED] wrote:
 On Sat, Oct 11, 2008 at 4:10 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Pei, Ben,

 I am going to try to spell out an arguments for each side (arguing for
 symmetry, then for asymmetry).

 For Symmetry:

 Suppose we get negative evidence for As are Bs, such that we are
 tempted to say no As are Bs. We then consider the statement Bs are
 As, with no other info. We think, If we found a B that was an A,
 then we would also have found an A that was a B; I don't think any
 exist; so, I don't think there are any Bs that are As. Thus, evidence
 against As are Bs is also evidence against Bs are As.

 I see your point --- it comes from the fact that "As are Bs" and "Bs
 are As" have the same positive evidence (both in NARS and in PLN),
 plus the additional assumption that "no positive evidence" means
 "negative evidence". Here the problem is in the additional assumption.
 Indeed it is assumed both in traditional logic and probability theory
 that everything matters for every statement (as revealed by Hempel's
 Paradox).

Hmm... other additional assumptions will do the job here as well, and
I don't see why you mentioned the one you did. An assumption closer to
the argument I gave would be "The more negative evidence we've seen,
the less positive evidence we should expect."

--Abram




Re: [agi] NARS and probability

2008-10-11 Thread Pei Wang
On Sat, Oct 11, 2008 at 5:56 PM, Abram Demski [EMAIL PROTECTED] wrote:

 I see your point --- it comes from the fact that As are Bs and Bs
 are As have the same positive evidence (both in NARS and in PLN),
 plus the additional assumption that no positive evidence means
 negative evidence. Here the problem is in the additional assumption.
 Indeed it is assumed both in traditional logic and probability theory
 that everything matters for every statement (as revealed by Hempel's
 Paradox).

 Hmm... other additional assumptions will do the job here as well, and
 I don't see why you mentioned the one you did. An assumption closer to
  the argument I gave would be "The more negative evidence we've seen,
  the less positive evidence we should expect."

Yes, for this topic, your assumption may be more proper, though it is
still unjustified, unless it is further assumed that the total amount
of evidence is fixed.

Pei




Re: [agi] NARS and probability

2008-10-11 Thread Ben Goertzel
Hi,


  What this highlights for me is the idea that NARS truth values attempt
  to reflect the evidence so far, while probabilities attempt to reflect
  the world


I agree that probabilities attempt to reflect the world



 Well said. This is exactly the difference between an
 experience-grounded semantics and a model-theoretic semantics.



I don't agree with this distinction ... unless you are construing model
theoretic semantics in a very restrictive way, which then does not apply to
PLN.

If by model-theoretic semantics you mean something like what Wikipedia says
at http://en.wikipedia.org/wiki/Formal_semantics,

***
Model-theoretic semantics is the archetype of Alfred Tarski's semantic
theory of truth, based on his T-schema, and is one of the founding
concepts of model theory. This is the most widespread approach, and is
based on the idea that the meaning of the various parts of the
propositions are given by the possible ways we can give a recursively
specified group of interpretation functions from them to some
predefined mathematical domains: an interpretation of first-order
predicate logic is given by a mapping from terms to a universe of
individuals, and a mapping from propositions to the truth values true
and false.
***

then yes, PLN's semantics is based on a mapping from terms to a universe of
individuals, and a mapping from propositions to truth values.  On the other
hand, these individuals may be for instance **elementary sensations or
actions**, rather than higher-level individuals like, say, a specific cat,
or the concept "cat".  So there is nothing non-experience-based about
mapping terms into individuals that are the system's direct experience
... and then building up more abstract terms by grouping these
directly-experience-based terms.

IMO, the dichotomy between experience-based and model-based semantics is a
misleading one.  Model-based semantics has often been used in a
non-experience-based way, but that is not because it fundamentally **has**
to be used in that way.

To say that PLN tries to model the world is then just to say that it tries
to make probabilistic predictions about sensations and actions that have not
yet been experienced ... which is certainly the case.


 Once
 again, the difference in truth-value functions is reduced to the
 difference in semantics, that is, what the truth-value attempts to
 measure.


Agreed...

Ben G





Re: [agi] NARS and probability

2008-10-10 Thread Pei Wang
On Wed, Oct 8, 2008 at 5:15 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Given those three assumptions, plus the NARS formula for revision,
 there is (I think) only one possible formula relating the NARS
 variables 'f' and 'w' to the value of 'par': the probability density
 function p(par | w, f) = par^(w*f) * (1-par)^(w*(1-f)). Note: NARS
 truth values are more often (I think?) represented by the pair 'f'
 'c', where 'c' is computed from 'w' by the formula c=w/(w+k), where k
 is a fixed constant. This is of little consequence at this point, and
 it was more intuitive to use 'f' and 'w' (at least for me).

At this stage, you are right. Since c and w fully determine each
other, in principle you can use either, and w is more intuitive.
However, in designing the truth-value functions, it is more convenient
to use c, a real number in [0, 1], than w, which has no upper bound.
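A short Python sketch of the bookkeeping being discussed here -- the
c = w/(w+k) mapping and Abram's proposed likelihood -- with k = 1 assumed
purely for illustration:

K = 1.0  # NARS "personality" constant; the value is an assumption of this sketch

def confidence_from_weight(w: float, k: float = K) -> float:
    # NARS: c = w / (w + k), mapping an unbounded evidence weight into [0, 1)
    return w / (w + k)

def weight_from_confidence(c: float, k: float = K) -> float:
    # inverse mapping: w = k * c / (1 - c)
    return k * c / (1.0 - c)

def likelihood(par: float, w: float, f: float) -> float:
    # Abram's p(par | w, f) = par^(w*f) * (1 - par)^(w*(1 - f)), unnormalized
    return par ** (w * f) * (1.0 - par) ** (w * (1.0 - f))

# Revision pools evidence: (w1, f1) and (w2, f2) give w = w1 + w2 and
# f = (w1*f1 + w2*f2) / (w1 + w2), which is exactly what multiplying the
# two likelihoods (adding the exponents) produces.
w1, f1 = 4.0, 0.75
w2, f2 = 2.0, 0.5
w, f = w1 + w2, (w1 * f1 + w2 * f2) / (w1 + w2)

par = 0.4
assert abs(likelihood(par, w1, f1) * likelihood(par, w2, f2) - likelihood(par, w, f)) < 1e-12
print(confidence_from_weight(w))   # 6/7, about 0.857, with k = 1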

 Here's the math. In NARS, the operation we're interested in is taking
 two pools of evidence, one concerning A=>X and the other concerning
 B=>X, and combining them to calculate the evidence they lend to A=>B.

Now things get tricky. In my derivation, in abduction/induction the
evidence of a premise is not directly used as evidence for the
conclusion. Instead, it is the premise, as a summary of its own
evidence, that is used as evidence. That is, X is not a set, but an
individual. Consequently, the operation isn't taking two pools of
evidence and somehow combining them into one pool (as in the revision
rule).

 So probabilistically, we want to determine the probability of the
 evidence for A=>X and B=>X given each possible 'par' value of A=>B.

According to the semantics of NARS, A=>X or B=>X, by itself, doesn't
provide evidence for A=>B.

Overall, it is a nice try, but given the difference in semantics
between probability theory and NARS, I'm still doubtful on how far you
can go in this direction.

Pei




Re: [agi] NARS and probability

2008-10-10 Thread Ben Goertzel
Abram,

I finally read your long post...


 The basic idea is to treat NARS truth values as representations of a
 statement's likelihood rather than its probability. The likelihood of
 a statement given evidence is the probability of the evidence given
 the statement. Unlike probabilities, calculating likelihoods does not
 require prior beliefs; the likelihood of a statement is a direct
 reflection of the evidence in favor of it. So, I thought likelihoods
 were a good match for the experienced-based semantics of NARS.

 The second decision was to model inheritance statements with
 probability distributions over other inheritance statements;
 specifically, A=>B is the conditional probability of A=>X given B=>X
 (ie, something like the probability that A will inherit B's intension)
 and also the conditional X=>B given X=>A (measuring B's inheritance of
 A's extension). This seems to follow from the typical description of
 NARS.

 Third, I chose to have a single parameter determine this distribution,
 ranging from 0 to 1. I simply called it 'par' before, although perhaps
 'strength' or something would have been more descriptive...


All this is OK with  me.




 Given those three assumptions, plus the NARS formula for revision,
 there is (I think) only one possible formula relating the NARS
 variables 'f' and 'w' to the value of 'par': the probability density
 function p(par | w, f) = par^(w*f) * (1-par)^(w*(1-f)).


Why is this the only possible formula?




 Here's the math.


My problem with your math is that the basic approach seems to be to
take the NARS formulas as the **goal**, and then reverse-engineer
some formulas that will produce them as a result.

This just doesn't seem the right sort of approach, to me.

If you could set up a probabilistic treatment in a way that just makes
sense given the conceptual assumptions ... and reasonable, not
obviously ad-hoc mathematical assumptions ... and find that NARS
then just **emerges**, then I'd be impressed!!

But, coming up with complex math formulas that need to be
specifically tweaked and fitted to yield NARS-type rules, doesn't
satisfy me much.

In particular, the result that NARS induction and abduction each
depend on **only one** of their premise truth values, seems
conceptually fundamental, and I'd expect your treatment to give
some elegant explanation of this (whether conceptual or
mathematical).  If that exists in the equations you posit, I
couldn't find it...

So I sorta agree with Pei: nice try indeed, and interesting stuff
to think about ... but it doesn't feel right enough that I'm moved
to invest time working out the math details...

-- Ben





Re: [agi] NARS and probability

2008-10-10 Thread Pei Wang
On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 In particular, the result that NARS induction and abduction each
 depend on **only one** of their premise truth values ...

Ben,

I'm sure you know it in your mind, but this simple description will
make some people think that NARS is obviously wrong.

In NARS, in induction and abduction the truth value of the conclusion
depends on the truth values of both premises, but in an asymmetric
way. It is the frequency factor of the conclusion that only depends
on the frequency of one premise, but not the other.

Unlike deduction, the truth-value functions of induction and abduction
are fundamentally asymmetric (on negative evidence) with respect to
the two premises. Actually, it is the PLN functions that look wrong
to me on this aspect. ;-)

Pei




Re: [agi] NARS and probability

2008-10-10 Thread Ben Goertzel
Sorry Pei, you are right, I sloppily  mis-stated!

What I should have said was:


the result that the NARS induction and abduction *strength* formulas
each depend on **only one** of their premise truth values ...


Anyway, my point in that particular post was not to say that NARS is either
good or bad in this aspect ... but just to note that this IMO is a
conceptually
important point that should somehow fall right out of a probabilistic
(or nonprobabilistic) derivation of NARS, rather than being achieved via
carefully fitting complex formulas to produce it...

ben g



Re: [agi] NARS and probability

2008-10-10 Thread Pei Wang
Ben,

I agree with what you said in the previous email.

However, since we have now touched on this point for the second time, there
may be people wondering what the difference between NARS and PLN
really is.

Again let me use an example to explain why the truth-value function of
abduction/induction should be asymmetric, at least to me. Since
induction is more intuitive, I'll use it.

The general induction rule in NARS has the following form

M --> P <t_1>
M --> S <t_2>
------------------
S --> P <t_a>
P --> S <t_b>

where each truth value has a frequency factor (for
positive/negative), and a confidence factor (for sure/unsure).

A truth-value function is symmetric with respect to the premises if
and only if <t_a> = <t_b> for all <t_1> and <t_2>. Last time you
mentioned the following abduction function of PLN:
   s3  = s1 s2 + w (1-s1)(1-s2)
which is symmetric in this sense.

Now, instead of discussing the details of the NARS function, I will only
explain why it is not symmetric, that is, when <t_a> and <t_b> are
different.

First, positive evidence leads to symmetric conclusions, that is, if M
supports S --> P, it will also support P --> S. For example, "Swans are
birds" and "Swans are swimmers" support both "Birds are swimmers" and
"Swimmers are birds", to the same extent.

However, the negative evidence of one conclusion is no evidence of the
other conclusion. For example, "Swallows are birds" and "Swallows are
NOT swimmers" suggest "Birds are NOT swimmers", but say nothing
about whether "Swimmers are birds".

Now I wonder if PLN shows a similar asymmetry in induction/abduction
on negative evidence. If it does, then how can that effect come out of
a symmetric truth-function? If it doesn't, how can you justify the
conclusion, which looks counter-intuitive?
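To make the asymmetry concrete, here is a small Python sketch. The induction
truth-function used below (w+ = f1*f2*c1*c2, w = f2*c1*c2, hence f = f1 and
c = w/(w+k)) is my reading of the published NARS rule; k = 1 and the premise
confidences are assumptions of this illustration:

K = 1.0

def nars_induction(f1, c1, f2, c2, k=K):
    # From M --> P <f1, c1> and M --> S <f2, c2>, derive S --> P <f, c>.
    # The conclusion's frequency comes only from the M --> P premise; the
    # M --> S premise only scales how much evidence there is.
    w = f2 * c1 * c2                      # total evidence for the conclusion
    w_plus = f1 * w                       # its positive portion
    f = w_plus / w if w > 0 else 0.5      # frequency (placeholder when there is no evidence)
    c = w / (w + k)                       # confidence
    return f, c

# Premises: "Swallows are birds" <1.0, 0.9> and "Swallows are swimmers" <0.0, 0.9>
# (i.e. "Swallows are NOT swimmers").
birds_are_swimmers = nars_induction(f1=0.0, c1=0.9, f2=1.0, c2=0.9)   # S = bird,    P = swimmer
swimmers_are_birds = nars_induction(f1=1.0, c1=0.9, f2=0.0, c2=0.9)   # S = swimmer, P = bird

print(birds_are_swimmers)   # (0.0, ~0.45): genuine negative evidence for "Birds are swimmers"
print(swimmers_are_birds)   # (0.5, 0.0):   zero confidence, i.e. no evidence about "Swimmers are birds"

# A symmetric strength formula such as s3 = s1*s2 + w*(1-s1)*(1-s2) would,
# by construction, give the same value for both readings of the conclusion.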

Pei





Re: [agi] NARS and probability

2008-10-10 Thread Ben Goertzel
Of course, this is only one among very many differences between PLN and NARS,
but I agree it's an interesting one.

I've got other stuff to do today, but I'll try to find time to answer this
email
carefully over the weekend.

ben


Re: [agi] NARS and probability

2008-10-10 Thread Abram Demski
Ben,

Strength? If you mean weight or confidence, this is not so. As Pei
corrected, it is the *frequency* that depends on only one of the two.
The strength depends on both.

And, that is one feature of NARS that I don't find strange. It can be
explained OK by the formula I previously proposed and now abandoned.
To re-use the same metaphor I referred to before, it is as if we are
trying to estimate the weight of heads and tails for a quarter when we
have only partial knowledge of a series of coin flips, and partial
knowledge telling us whether or not that series of flips actually came from
the quarter we are interested in. The 2 types of knowledge there are
exactly like the 2 premises: we only want the frequency to depend on
the first type of evidence, but the confidence of that frequency also
depends on the 2nd type of evidence.
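A tiny Python sketch of that metaphor (the particular discounting and the
k = 1 constant are assumptions of this illustration, not quoted from NARS or
from Abram's proposal):

K = 1.0

def estimate(heads: int, tails: int, p_flips_are_ours: float, k: float = K):
    flips = heads + tails
    frequency = heads / flips if flips else 0.5   # depends only on the flip outcomes
    weight = flips * p_flips_are_ours             # discounted by how relevant the flips are
    confidence = weight / (weight + k)
    return frequency, confidence

print(estimate(heads=7, tails=3, p_flips_are_ours=1.0))   # (0.7, ~0.91): fully relevant evidence
print(estimate(heads=7, tails=3, p_flips_are_ours=0.2))   # (0.7, ~0.67): same frequency, lower confidence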

To respond more generally to the comments...

The criticism is certainly valid; I am not so worried about semantics,
only about making the manipulations fit. The decision to go with
likelihoods is an exception to this, but that is because I doubt that
the manipulations would be easy to fit together with no resemblance in
semantics...

If I were taking the approach Ben suggests, that is, making
reasonable-sounding assumptions and then working forward rather than
assuming NARS and working backward, I would have kept the formula from
last time (justifying it with the argument mentioned above). Probably
this would result in a system with many similarities to NARS but differing
in the exact formulas, and in the absence of the constant 'k'.

--Abram




Re: [agi] NARS and probability

2008-10-10 Thread Abram Demski
Pei,

You agree that the abduction and induction strength formulas only
rely on one of the two premises?

Is there some variable called "strength" that I missed?

--Abram

On Fri, Oct 10, 2008 at 5:38 PM, Pei Wang [EMAIL PROTECTED] wrote:
 Ben,

 I agree with what you said in the previous email.

 However, since we already touched this point in the second time, there
 may be people wondering what the difference between NARS and PLN
 really is.

 Again let me use an example to explain why the truth-value function of
 abduction/induction should be asymmetric, at least to me. Since
 induction is more intuitive, I'll use it.

 The general induction rule in NARS has the following form

 M --> P <t_1>
 M --> S <t_2>
 -------------
 S --> P <t_a>
 P --> S <t_b>

 where each truth value has a frequency factor (for
 positive/negative), and a confidence factor (for sure/unsure).

 A truth-value function is symmetric with respect to the premises, if
 and only if t_a = t_b for all t_1 and t_2. Last time you
 mentioned the following abduction function of PLN:
   s3  = s1 s2 + w (1-s1)(1-s2)
 which is symmetric in this sense.

 Now, instead of discussing the details of the NARS function, I only
 explain why it is not symmetric, that is, when t_a and t_b are
 different.

 First, positive evidence leads to symmetric conclusions, that is, if M
 supports S --> P, it will also support P --> S. For example, Swans are
 birds and Swans are swimmers support both Birds are swimmers and
 Swimmers are birds, to the same extent.

 However, the negative evidence of one conclusion is no evidence of the
 other conclusion. For example, Swallows are birds and Swallows are
 NOT swimmers suggests Birds are NOT swimmers, but says nothing
 about whether Swimmers are birds.

 Now I wonder if PLN shows a similar asymmetry in induction/abduction
 on negative evidence. If it does, then how can that effect come out of
 a symmetric truth-function? If it doesn't, how can you justify the
 conclusion, which looks counter-intuitive?

 Pei
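
To make the asymmetry described above concrete, here is a minimal sketch of
extensional evidence counting in the spirit of the NARS definition of evidence
(the instance data and the helper function are illustrative assumptions, not
anything from the thread):

# Evidence for "S --> P" is counted over instances of S: an instance of S that
# is also P is positive evidence, an instance of S that is not P is negative
# evidence, and anything outside S is no (extensional) evidence at all.

def evidence(instances, s, p):
    """Return (positive, negative) evidence counts for the statement S --> P."""
    pos = sum(1 for x in instances if s in x["kinds"] and p in x["kinds"])
    neg = sum(1 for x in instances if s in x["kinds"] and p not in x["kinds"])
    return pos, neg

observations = [
    {"name": "swan",    "kinds": {"bird", "swimmer"}},   # a swimming bird
    {"name": "swallow", "kinds": {"bird"}},              # a non-swimming bird
]

print(evidence(observations, "bird", "swimmer"))   # (1, 1): the swallow is negative evidence
print(evidence(observations, "swimmer", "bird"))   # (1, 0): the swallow is no evidence at all

The swan (positive evidence) counts for both Birds are swimmers and Swimmers
are birds, while the swallow counts against the first and says nothing about
the second, which is exactly the asymmetry in the example.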



 On Fri, Oct 10, 2008 at 4:57 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Sorry Pei, you are right, I sloppily  mis-stated!

 What I should have said was:

 
 the result that the NARS induction and abduction *strength* formulas
 each depend on **only one** of their premise truth values ...
 

 Anyway, my point in that particular post was not to say that NARS is either
 good or bad in this aspect ... but just to note that this IMO is a
 conceptually
 important point that should somehow fall right out of a probabilistic
 (or nonprobabilistic) derivation of NARS, rather than being achieved via
 carefully fitting complex formulas to produce it...

 ben g

 On Fri, Oct 10, 2008 at 4:48 PM, Pei Wang [EMAIL PROTECTED] wrote:

 On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  In particular, the result that NARS induction and abduction each
  depend on **only one** of their premise truth values ...

 Ben,

 I'm sure you know it in your mind, but this simple description will
 make some people think that NARS is obvious wrong.

 In NARS, in induction and abduction the truth value of the conclusion
 depends on the truth values of both premises, but in an asymmetric
 way. It is the frequency factor of the conclusion that only depends
 on the frequency of one premise, but not the other.

 Unlike deduction, the truth-value function of induction and abduction
 are fundamentally asymmetric (on negative evidence), with respect to
 the two premises. Actually, it is the PLN functions that looks wrong
 to me, on this aspect. ;-)

 Pei





 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 Nothing will ever be attempted if all possible objections must be first
 overcome   - Dr Samuel Johnson


 







Re: [agi] NARS and probability

2008-10-10 Thread Pei Wang
Abram,

Ben's strength is my frequency.

Pei

On Fri, Oct 10, 2008 at 5:49 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Pei,

 You agree that the abduction and induction strength formulas only
 rely on one of the two premises?

 Is there some variable called strength that I missed?

 --Abram

 On Fri, Oct 10, 2008 at 5:38 PM, Pei Wang [EMAIL PROTECTED] wrote:
 Ben,

 I agree with what you said in the previous email.

 However, since we already touched this point in the second time, there
 may be people wondering what the difference between NARS and PLN
 really is.

 Again let me use an example to explain why the truth-value function of
 abduction/induction should be asymmetric, at least to me. Since
 induction is more intuitive, I'll use it.

 The general induction rule in NARS has the following form

 M --> P <t_1>
 M --> S <t_2>
 -------------
 S --> P <t_a>
 P --> S <t_b>

 where each truth value has a frequency factor (for
 positive/negative), and a confidence factor (for sure/unsure).

 A truth-value function is symmetric with respect to the premises, if
 and only if t_a = t_b for all t_1 and t_2. Last time you
 mentioned the following abduction function of PLN:
   s3  = s1 s2 + w (1-s1)(1-s2)
 which is symmetric in this sense.

 Now, instead of discussing the details of the NARS function, I only
 explain why it is not symmetric, that is, when t_a and t_b are
 different.

 First, positive evidence lead to symmetric conclusions, that is, if M
 support S--P, it will also support P--S. For example, Swans are
 birds and Swans are swimmers support both Birds are swimmers and
 Swimmers are birds, to the same extent.

 However, the negative evidence of one conclusion is no evidence of the
 other conclusion. For example, Swallows are birds and Swallows are
 NOT swimmers suggests Birds are NOT swimmers, but says nothing
 about whether Swimmers are birds.

 Now I wonder if PLN shows a similar asymmetry in induction/abduction
 on negative evidence. If it does, then how can that effect come out of
 a symmetric truth-function? If it doesn't, how can you justify the
 conclusion, which looks counter-intuitive?

 Pei



 On Fri, Oct 10, 2008 at 4:57 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Sorry Pei, you are right, I sloppily  mis-stated!

 What I should have said was:

 
 the result that the NARS induction and abduction *strength* formulas
 each depend on **only one** of their premise truth values ...
 

 Anyway, my point in that particular post was not to say that NARS is either
 good or bad in this aspect ... but just to note that this IMO is a
 conceptually
 important point that should somehow fall right out of a probabilistic
 (or nonprobabilistic) derivation of NARS, rather than being achieved via
 carefully fitting complex formulas to produce it...

 ben g

 On Fri, Oct 10, 2008 at 4:48 PM, Pei Wang [EMAIL PROTECTED] wrote:

 On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  In particular, the result that NARS induction and abduction each
  depend on **only one** of their premise truth values ...

 Ben,

 I'm sure you know it in your mind, but this simple description will
 make some people think that NARS is obvious wrong.

 In NARS, in induction and abduction the truth value of the conclusion
 depends on the truth values of both premises, but in an asymmetric
 way. It is the frequency factor of the conclusion that only depends
 on the frequency of one premise, but not the other.

 Unlike deduction, the truth-value function of induction and abduction
 are fundamentally asymmetric (on negative evidence), with respect to
 the two premises. Actually, it is the PLN functions that looks wrong
 to me, on this aspect. ;-)

 Pei





 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 Nothing will ever be attempted if all possible objections must be first
 overcome   - Dr Samuel Johnson


 









Re: [agi] NARS and probability

2008-10-10 Thread Abram Demski
Ah.

On Fri, Oct 10, 2008 at 5:51 PM, Pei Wang [EMAIL PROTECTED] wrote:
 Abram,

 Ben's strength is my frequency.

 Pei

 On Fri, Oct 10, 2008 at 5:49 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Pei,

 You agree that the abduction and induction strength formulas only
 rely on one of the two premises?

 Is there some variable called strength that I missed?

 --Abram

 On Fri, Oct 10, 2008 at 5:38 PM, Pei Wang [EMAIL PROTECTED] wrote:
 Ben,

 I agree with what you said in the previous email.

 However, since we already touched this point in the second time, there
 may be people wondering what the difference between NARS and PLN
 really is.

 Again let me use an example to explain why the truth-value function of
 abduction/induction should be asymmetric, at least to me. Since
 induction is more intuitive, I'll use it.

 The general induction rule in NARS has the following form

 M --> P <t_1>
 M --> S <t_2>
 -------------
 S --> P <t_a>
 P --> S <t_b>

 where each truth value has a frequency factor (for
 positive/negative), and a confidence factor (for sure/unsure).

 A truth-value function is symmetric with respect to the premises, if
 and only if t_a = t_b for all t_1 and t_2. Last time you
 mentioned the following abduction function of PLN:
   s3  = s1 s2 + w (1-s1)(1-s2)
 which is symmetric in this sense.

 Now, instead of discussing the details of the NARS function, I only
 explain why it is not symmetric, that is, when t_a and t_b are
 different.

 First, positive evidence lead to symmetric conclusions, that is, if M
 support S--P, it will also support P--S. For example, Swans are
 birds and Swans are swimmers support both Birds are swimmers and
 Swimmers are birds, to the same extent.

 However, the negative evidence of one conclusion is no evidence of the
 other conclusion. For example, Swallows are birds and Swallows are
 NOT swimmers suggests Birds are NOT swimmers, but says nothing
 about whether Swimmers are birds.

 Now I wonder if PLN shows a similar asymmetry in induction/abduction
 on negative evidence. If it does, then how can that effect come out of
 a symmetric truth-function? If it doesn't, how can you justify the
 conclusion, which looks counter-intuitive?

 Pei



 On Fri, Oct 10, 2008 at 4:57 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Sorry Pei, you are right, I sloppily  mis-stated!

 What I should have said was:

 
 the result that the NARS induction and abduction *strength* formulas
 each depend on **only one** of their premise truth values ...
 

 Anyway, my point in that particular post was not to say that NARS is either
 good or bad in this aspect ... but just to note that this IMO is a
 conceptually
 important point that should somehow fall right out of a probabilistic
 (or nonprobabilistic) derivation of NARS, rather than being achieved via
 carefully fitting complex formulas to produce it...

 ben g

 On Fri, Oct 10, 2008 at 4:48 PM, Pei Wang [EMAIL PROTECTED] wrote:

 On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  In particular, the result that NARS induction and abduction each
  depend on **only one** of their premise truth values ...

 Ben,

 I'm sure you know it in your mind, but this simple description will
 make some people think that NARS is obvious wrong.

 In NARS, in induction and abduction the truth value of the conclusion
 depends on the truth values of both premises, but in an asymmetric
 way. It is the frequency factor of the conclusion that only depends
 on the frequency of one premise, but not the other.

 Unlike deduction, the truth-value function of induction and abduction
 are fundamentally asymmetric (on negative evidence), with respect to
 the two premises. Actually, it is the PLN functions that looks wrong
 to me, on this aspect. ;-)

 Pei





 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 Nothing will ever be attempted if all possible objections must be first
 overcome   - Dr Samuel Johnson


 









Re: [agi] NARS and probability

2008-10-10 Thread Ben Goertzel
I meant frequency, sorry

Strength is a term Pei used for frequency in some old discussions...


 If I were taking more the approach Ben suggests, that is, making
 reasonable-sounding assumptions and then working forward rather than
 assuming NARS and working backward, I would have kept the formula from
 last time (justifying it with the argument mentioned above). Probably
 this results in a system with many similarities to NARS but differing
 in the exact formulas, and in the absence of the constant 'k'.



The exact formulas used in NARS are basically heuristics derived based
on endpoint conditions, so replicating those exact formulas is really
not important IMO... the key would be replicating their qualitative
behavior...

ben





Re: [agi] NARS and probability

2008-10-10 Thread Pei Wang
On Fri, Oct 10, 2008 at 5:52 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 I meant frequency, sorry

 Strength is a term Pei used for frequency in some old sicsussions...

Another correction: strength is never used in any NARS publication.
It was used in some Webmind documents, though I guess it must be your
idea, since I never liked this term. ;-)

 The exact formulas used in NARS are basically heuristics derived based
 on endpoint conditions, so replicating those exact formulas is really
 not important IMO... the key would be replicating their qualitative
 behavior...

I have to say that I don't like the term heuristics either, since
it usually refers to a quick-and-dirty replacement for the real
thing.

I fully agree with you that what really matters is the qualitative
behavior, rather than the exact formula.

Pei




Re: [agi] NARS and probability

2008-10-10 Thread Ben Goertzel
On Fri, Oct 10, 2008 at 6:01 PM, Pei Wang [EMAIL PROTECTED] wrote:

 On Fri, Oct 10, 2008 at 5:52 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  I meant frequency, sorry
 
  Strength is a term Pei used for frequency in some old sicsussions...

 Another correction: strength is never used in any NARS publication.
 It was used in some Webmind documents, though I guess it must be your
 idea, since I never like this term. ;-)


As I recall, the use of the term (in discussions rather than publications)
was your idea, *but* the context in which it was
suggested was as follows.  We wanted a term for a variable in the Webmind
Java code that would be applicable to both NARS and PLN truth values, and
would be
burdened as little as possible with specific theoretical interpretation.  So
you suggested strength.

I'm not sure why we didn't just use frequency instead.  I remember you did
not want to call it probability.

(This was, unbelievably, 10 years ago, so I don't want to bet my right arm
on the details of my recollection ... but that's how I remember it...)



  The exact formulas used in NARS are basically heuristics derived based
  on endpoint conditions, so replicating those exact formulas is really
  not important IMO... the key would be replicating their qualitative
  behavior...

 I have to say that I don't like the term heuristics, neither, since
 it usually refers to quick-and-dirty replacement of the real
 thing.


I didn't mean anything negative via the word heuristic ... and you didn't
suggest
an alternative word ;-)


ben





Re: [agi] NARS and probability

2008-10-10 Thread Pei Wang
Ben,

Maybe your memory is correct --- we used strength in Webmind to keep
some distance from NARS.

Anyway, I don't like that term because it can be easily interpreted in
several ways, while the reason I don't like probability is just the
opposite --- it has a widely accepted interpretation, which is hard to
bend to mean what I want the term to mean.

Pei

On Fri, Oct 10, 2008 at 6:58 PM, Ben Goertzel [EMAIL PROTECTED] wrote:


 On Fri, Oct 10, 2008 at 6:01 PM, Pei Wang [EMAIL PROTECTED] wrote:

 On Fri, Oct 10, 2008 at 5:52 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  I meant frequency, sorry
 
  Strength is a term Pei used for frequency in some old sicsussions...

 Another correction: strength is never used in any NARS publication.
 It was used in some Webmind documents, though I guess it must be your
 idea, since I never like this term. ;-)

 As I recall, the use of the term (in discussions rather than publications)
 was your idea, *but* the context in which it was
 suggested was as follows.  We wanted a term for a variable in the Webmind
 Java code that would be applicable to both NARS and PLN truth values, and
 would be
 burdened as little as possible with specific theoretical interpretation.  So
 you suggested strength.

 I'm not sure why we didn't just use frequency instead.  I remember you did
 not want to call it probability.

 (This was, unbelievably, 10 years ago, so I don't want to bet my right arm
 on the details of my recollection ... but that's how I remember it...)


  The exact formulas used in NARS are basically heuristics derived based
  on endpoint conditions, so replicating those exact formulas is really
  not important IMO... the key would be replicating their qualitative
  behavior...

 I have to say that I don't like the term heuristics, neither, since
 it usually refers to quick-and-dirty replacement of the real
 thing.

 I didn't mean anything negative via the word heuristic ... and you didn't
 suggest
 an alternative word ;-)


 ben

 




Re: [agi] NARS and probability

2008-10-10 Thread Ben Goertzel
Pei,

I finally took a moment to actually read your email...



 However, the negative evidence of one conclusion is no evidence of the
 other conclusion. For example, Swallows are birds and Swallows are
 NOT swimmers suggests Birds are NOT swimmers, but says nothing
 about whether Swimmers are birds.

 Now I wonder if PLN shows a similar asymmetry in induction/abduction
 on negative evidence. If it does, then how can that effect come out of
 a symmetric truth-function? If it doesn't, how can you justify the
 conclusion, which looks counter-intuitive?


According to Bayes rule,

P(bird | swimmer) P(swimmer) = P(swimmer | bird) P(bird)

So, in PLN, evidence for P(bird | swimmer) will also count as evidence
for P(swimmer | bird), though potentially with a different weighting
attached to each piece of evidence

If P(bird) = P(swimmer) is assumed, then each piece of evidence
for each of the two conditional probabilities, will count for the other
one symmetrically.

The intuition here is the standard Bayesian one.
Suppose you know there
are 10,000 things in the universe, and 1000 swimmers.
Then if you find out that swallows are not
swimmers ... then, unless you think there are zero swallows,
this does affect P(bird | swimmer).  For instance, suppose
you think there are 10 swallows and 100 birds.  Then, if you know for sure
that swallows are not swimmers, and you have no other
info but the above, your estimate of P(bird|swimmer)
should decrease... because of the 1000 swimmers, you now know there
are only 990 that might be birds ... whereas before you thought
there were 1000 that might be birds.

And the same sort of reasoning holds for **any** probability
distribution you place on the number of things in the universe,
the number of swimmers, the number of birds, the number of swallows.
It doesn't matter what assumption you make, whether you look at
n'th order pdf's or whatever ... the same reasoning works...
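
As a rough numeric illustration of the counting argument above (the
uniform-mixing assumption and the specific totals are mine, purely for
illustration, not PLN's actual calculation):

# Estimate P(bird | swimmer) as the expected bird/swimmer overlap under a
# uniform-mixing assumption, before and after learning that the 10 swallows
# (all of them birds) are definitely not swimmers.  All numbers are toy values.

universe = 10_000
swimmers = 1_000
birds    = 100
swallows = 10       # swallows are birds

# Before: any of the 100 birds could be among the swimmers.
before = birds * (swimmers / universe) / swimmers            # = birds / universe = 0.0100

# After: the swallows are excluded, so the swimmers are drawn from the other
# 9,990 things, of which only 90 birds remain eligible.
after = (birds - swallows) * (swimmers / (universe - swallows)) / swimmers   # ~ 0.0090

print(f"estimate of P(bird|swimmer) before: {before:.4f}")
print(f"estimate of P(bird|swimmer) after:  {after:.4f}")

The estimate drops slightly, which is the qualitative point: under a Bayesian
reading, the negative information bears on both conditional probabilities.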

From what I understand, your philosophical view is that it's somehow
wrong for a mind to make some assumption about the pdf underlying
the world around it?  Is that correct?  If so I don't agree with this... I
think this kind of assumption is just part of the inductive bias with
which
a mind approaches the world.

The human mind may well have particular pdf's for stuff like birds and
trees wired into it, as we evolved to deal with these things.  But that's
not really the point.  The inductive bias may be much more abstract --
ultimately, it can just be an occam bias that biases the mind to
prior distributions (over the space of procedures for generating
prior distributions for handling specific cases)
that are simplest according to some wired-in
simplicity measure

So again we get back to basic differences in philosophy...

-- Ben G





Re: [agi] NARS and probability

2008-10-10 Thread Pei Wang
Ben,

I see your position.

Let's go back to the example. If the only relevant domain knowledge
PLN has is Swallows are birds and Swallows are
NOT swimmers, will the system assign the same lower-than-default
probability to Birds are swimmers and  Swimmers are birds? Again,
I only need a qualitative answer.

Pei

On Fri, Oct 10, 2008 at 7:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Pei,

 I finally took a moment to actually read your email...



 However, the negative evidence of one conclusion is no evidence of the
 other conclusion. For example, Swallows are birds and Swallows are
 NOT swimmers suggests Birds are NOT swimmers, but says nothing
 about whether Swimmers are birds.

 Now I wonder if PLN shows a similar asymmetry in induction/abduction
 on negative evidence. If it does, then how can that effect come out of
 a symmetric truth-function? If it doesn't, how can you justify the
 conclusion, which looks counter-intuitive?

 According to Bayes rule,

 P(bird | swimmer) P(swimmer) = P(swimmer | bird) P(bird)

 So, in PLN, evidence for P(bird | swimmer) will also count as evidence
 for P(swimmer | bird), though potentially with a different weighting
 attached to each piece of evidence

 If P(bird) = P(swimmer) is assumed, then each piece of evidence
 for each of the two conditional probabilities, will count for the other
 one symmetrically.

 The intuition here is the standard Bayesian one.
 Suppose you know there
 are 1 things in the universe, and 1000 swimmers.
 Then if you find out that swallows are not
 swimmers ... then, unless you think there are zero swallows,
 this does affect P(bird | swimmer).  For instance, suppose
 you think there are 10 swallows and 100 birds.  Then, if you know for sure
 that swallows are not swimmers, and you have no other
 info but the above, your estimate of P(bird|swimmer)
 should decrease... because of the 1000 swimmers, you now know there
 are only 990 that might be birds ... whereas before you thought
 there were 1000 that might be birds.

 And the same sort of reasoning holds for **any** probability
 distribution you place on the number of things in the universe,
 the number of swimmers, the number of birds, the number of swallows.
 It doesn't matter what assumption you make, whether you look at
 n'th order pdf's or whatever ... the same reasoning works...

 From what I understand, your philosophical view is that it's somehow
 wrong for a mind to make some assumption about the pdf underlying
 the world around it?  Is that correct?  If so I don't agree with this... I
 think this kind of assumption is just part of the inductive bias with
 which
 a mind approaches the world.

 The human mind may well have particular pdf's for stuff like birds and
 trees wired into it, as we evolved to deal with these things.  But that's
 not really the point.  The inductive bias may be much more abstract --
 ultimately, it can just be an occam bias that biases the mind to
 prior distributions (over the space of procedures for generating
 prior distributions for handling specific cases)
 that are simplest according to some wired-in
 simplicity measure

 So again we get back to basic differences in philosophy...

 -- Ben G






 




Re: [agi] NARS and probability

2008-10-10 Thread Ben Goertzel
Yah, according to Bayes rule if one assumes P(bird) = P(swimmer) this would
be the case...

(Of course, this kind of example is cognitively misleading, because if the
only knowledge
the system has is Swallows are birds and Swallows are NOT swimmers then
it doesn't
really know that the terms involved are swallows, birds, swimmers etc.
... then in
that case they're just almost-meaningless tokens to the system, right?)

On Fri, Oct 10, 2008 at 7:34 PM, Pei Wang [EMAIL PROTECTED] wrote:

 Ben,

 I see your position.

 Let's go back to the example. If the only relevant domain knowledge
 PLN has is Swallows are birds and Swallows are
 NOT swimmers, will the system assigns the same lower-than-default
 probability to Birds are swimmers and  Swimmers are birds? Again,
 I only need a qualitative answer.

 Pei

 On Fri, Oct 10, 2008 at 7:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  Pei,
 
  I finally took a moment to actually read your email...
 
 
 
  However, the negative evidence of one conclusion is no evidence of the
  other conclusion. For example, Swallows are birds and Swallows are
  NOT swimmers suggests Birds are NOT swimmers, but says nothing
  about whether Swimmers are birds.
 
  Now I wonder if PLN shows a similar asymmetry in induction/abduction
  on negative evidence. If it does, then how can that effect come out of
  a symmetric truth-function? If it doesn't, how can you justify the
  conclusion, which looks counter-intuitive?
 
  According to Bayes rule,
 
  P(bird | swimmer) P(swimmer) = P(swimmer | bird) P(bird)
 
  So, in PLN, evidence for P(bird | swimmer) will also count as evidence
  for P(swimmer | bird), though potentially with a different weighting
  attached to each piece of evidence
 
  If P(bird) = P(swimmer) is assumed, then each piece of evidence
  for each of the two conditional probabilities, will count for the other
  one symmetrically.
 
  The intuition here is the standard Bayesian one.
  Suppose you know there
  are 1 things in the universe, and 1000 swimmers.
  Then if you find out that swallows are not
  swimmers ... then, unless you think there are zero swallows,
  this does affect P(bird | swimmer).  For instance, suppose
  you think there are 10 swallows and 100 birds.  Then, if you know for
 sure
  that swallows are not swimmers, and you have no other
  info but the above, your estimate of P(bird|swimmer)
  should decrease... because of the 1000 swimmers, you now know there
  are only 990 that might be birds ... whereas before you thought
  there were 1000 that might be birds.
 
  And the same sort of reasoning holds for **any** probability
  distribution you place on the number of things in the universe,
  the number of swimmers, the number of birds, the number of swallows.
  It doesn't matter what assumption you make, whether you look at
  n'th order pdf's or whatever ... the same reasoning works...
 
  From what I understand, your philosophical view is that it's somehow
  wrong for a mind to make some assumption about the pdf underlying
  the world around it?  Is that correct?  If so I don't agree with this...
 I
  think this kind of assumption is just part of the inductive bias with
  which
  a mind approaches the world.
 
  The human mind may well have particular pdf's for stuff like birds and
  trees wired into it, as we evolved to deal with these things.  But that's
  not really the point.  The inductive bias may be much more abstract --
  ultimately, it can just be an occam bias that biases the mind to
  prior distributions (over the space of procedures for generating
  prior distributions for handling specific cases)
  that are simplest according to some wired-in
  simplicity measure
 
  So again we get back to basic differences in philosophy...
 
  -- Ben G
 
 
 
 
 
 
  






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] NARS and probability

2008-10-10 Thread Pei Wang
On Fri, Oct 10, 2008 at 8:03 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Yah, according to Bayes rule if one assumes P(bird) = P(swimmer) this would
 be the case...

 (Of course, this kind of example is cognitively misleading, because if the
 only knowledge
 the system has is Swallows are birds and Swallows are NOT swimmers then
 it doesn't
 really know that the terms involved are swallows, birds, swimmers etc.
 ... then in
 that case they're just almost-meaningless tokens to the system, right?)

Well, it depends on the semantics. According to model-theoretic
semantics, if a term has no reference, it has no meaning. According to
experience-grounded semantics, every term in experience has meaning
--- by the role it plays.

Further questions:

(1) Don't you intuitively feel that the evidence provided by
non-swimming birds says more about Birds are swimmers than
Swimmers are birds?

(2) If your answer for (1) is yes, then think about Adults are
alcohol-drinkers and Alcohol-drinkers are adults --- do they have
the same set of counter examples, intuitively speaking?

(3) According to your previous explanation, will PLN also take a red
apple as negative evidence for Birds are swimmers and Swimmers are
birds, because it reduces the candidate pool by one? Of course, the
probability adjustment may be very small, but qualitatively, isn't it
the same as a non-swimming bird? If not, then what will the system do
about it?

Pei



 On Fri, Oct 10, 2008 at 7:34 PM, Pei Wang [EMAIL PROTECTED] wrote:

 Ben,

 I see your position.

 Let's go back to the example. If the only relevant domain knowledge
 PLN has is Swallows are birds and Swallows are
 NOT swimmers, will the system assigns the same lower-than-default
 probability to Birds are swimmers and  Swimmers are birds? Again,
 I only need a qualitative answer.

 Pei

 On Fri, Oct 10, 2008 at 7:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  Pei,
 
  I finally took a moment to actually read your email...
 
 
 
  However, the negative evidence of one conclusion is no evidence of the
  other conclusion. For example, Swallows are birds and Swallows are
  NOT swimmers suggests Birds are NOT swimmers, but says nothing
  about whether Swimmers are birds.
 
  Now I wonder if PLN shows a similar asymmetry in induction/abduction
  on negative evidence. If it does, then how can that effect come out of
  a symmetric truth-function? If it doesn't, how can you justify the
  conclusion, which looks counter-intuitive?
 
  According to Bayes rule,
 
  P(bird | swimmer) P(swimmer) = P(swimmer | bird) P(bird)
 
  So, in PLN, evidence for P(bird | swimmer) will also count as evidence
  for P(swimmer | bird), though potentially with a different weighting
  attached to each piece of evidence
 
  If P(bird) = P(swimmer) is assumed, then each piece of evidence
  for each of the two conditional probabilities, will count for the other
  one symmetrically.
 
  The intuition here is the standard Bayesian one.
  Suppose you know there
  are 1 things in the universe, and 1000 swimmers.
  Then if you find out that swallows are not
  swimmers ... then, unless you think there are zero swallows,
  this does affect P(bird | swimmer).  For instance, suppose
  you think there are 10 swallows and 100 birds.  Then, if you know for
  sure
  that swallows are not swimmers, and you have no other
  info but the above, your estimate of P(bird|swimmer)
  should decrease... because of the 1000 swimmers, you now know there
  are only 990 that might be birds ... whereas before you thought
  there were 1000 that might be birds.
 
  And the same sort of reasoning holds for **any** probability
  distribution you place on the number of things in the universe,
  the number of swimmers, the number of birds, the number of swallows.
  It doesn't matter what assumption you make, whether you look at
  n'th order pdf's or whatever ... the same reasoning works...
 
  From what I understand, your philosophical view is that it's somehow
  wrong for a mind to make some assumption about the pdf underlying
  the world around it?  Is that correct?  If so I don't agree with this...
  I
  think this kind of assumption is just part of the inductive bias with
  which
  a mind approaches the world.
 
  The human mind may well have particular pdf's for stuff like birds and
  trees wired into it, as we evolved to deal with these things.  But
  that's
  not really the point.  The inductive bias may be much more abstract --
  ultimately, it can just be an occam bias that biases the mind to
  prior distributions (over the space of procedures for generating
  prior distributions for handling specific cases)
  that are simplest according to some wired-in
  simplicity measure
 
  So again we get back to basic differences in philosophy...
 
  -- Ben G
 
 
 
 
 
 
  



Re: [agi] NARS and probability

2008-10-10 Thread Ben Goertzel
On Fri, Oct 10, 2008 at 8:29 PM, Pei Wang [EMAIL PROTECTED] wrote:

 On Fri, Oct 10, 2008 at 8:03 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  Yah, according to Bayes rule if one assumes P(bird) = P(swimmer) this
 would
  be the case...
 
  (Of course, this kind of example is cognitively misleading, because if
 the
  only knowledge
  the system has is Swallows are birds and Swallows are NOT swimmers
 then
  it doesn't
  really know that the terms involved are swallows, birds, swimmers
 etc.
  ... then in
  that case they're just almost-meaningless tokens to the system, right?)

 Well, it depends on the semantics. According to model-theoretic
 semantics, if a term has no reference, it has no meaning. According to
 experience-grounded semantics, every term in experience have meaning
 --- by the role it plays.


That's why I said almost-meaningless ... if those are the only
relationships
known to the system, then the terms in those relationships play almost
no roles, hence have almost no meanings...



 Further questions:

 (1) Don't you intuitively feel that the evidence provided by
 non-swimming birds says more about Birds are swimmers than
 Swimmers are birds?


Yes, but only because I know intuitively that swimmers are more common
in my everyday world than birds.

That illustrates why it's confusing to use commonsense terms in artificially
isolated
inference examples.  (I take that expository strategy in the PLN book too,
but it can be misleading.)




 (2) If your answer for (1) is yes, then think about Adults are
 alcohol-drinkers and Alcohol-drinkers are adults --- do they have
 the same set of counter examples, intuitively speaking?


Again, our intuitions for this are colored by the knowledge that there
are more adults than alcohol-drinkers.

Consider high school, which has 4 years: freshman, sophomore,
junior, senior.

Then think about Juniors & seniors are women and women
are juniors & seniors.

It seems quite intuitive to me that, in this case, the same pieces of
evidence support the truth values of these two hypotheses.

This is because the term probabilities of juniors & seniors
and women are intuitively known to be about equal.


(3) According to your previous explanation, will PLN also take a red
 apple as negative evidence for Birds are swimmers and Swimmers are
 birds, because it reduces the candidate pool by one? Of course, the
 probability adjustment may be very small, but qualitatively, isn't it
 the same as a non-swimming bird? If not, then what the system will do
 about it?


Yes, in principle, PLN will behave in Hempel's confirmation paradox in
a similar way to other Bayesian systems.

I do find this counterintuitive, personally, and I spent a while trying to
work
around it ... but finally I decided that my intuition is the faulty thing.
As you note,
it's a very small probability adjustment in these cases, so it's not
surprising
if human intuition is not tuned to make such small probability adjustments
in a correct or useful way...

-- Ben





Re: [agi] NARS and probability

2008-10-10 Thread Abram Demski
On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Given those three assumptions, plus the NARS formula for revision,
 there is (I think) only one possible formula relating the NARS
 variables 'f' and 'w' to the value of 'par': the probability density
 function p(par | w, f) = par^(w*f) * (1-par)^(w*(1-f)).

 Why is this the only possible formula?

Let's see... let's call the function we're looking for L(f,w). To
satisfy NARS revision it must have the property L(f1,w1)*L(f2,w2)=L{
(w1*f1+w2*f2)/(w1+w2) , w1+w2 }. Taking f1=f2 and w1=w2, we have:

L(f,w)^2=L{ (2*w*f)/(2*w) , 2*w}
L(f,w)^2=L{ f, 2*w}

That establishes that the function is exponential in w, but that's a
far cry from proving the uniqueness of the formula I gave. I should
not have asserted so boldly...

--Abram
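
Incidentally, the likelihood form quoted at the top of this message does
satisfy the revision constraint stated above, since the exponents simply add;
a quick numeric check (a sketch, with arbitrary example values):

# Check that L(f, w) = par**(w*f) * (1-par)**(w*(1-f)) satisfies
# L(f1, w1) * L(f2, w2) = L((w1*f1 + w2*f2)/(w1 + w2), w1 + w2),
# i.e. multiplying likelihoods corresponds to the NARS-style revision of (f, w).

def L(f, w, par):
    return par ** (w * f) * (1 - par) ** (w * (1 - f))

f1, w1 = 0.8, 3.0
f2, w2 = 0.4, 5.0
f_rev, w_rev = (w1 * f1 + w2 * f2) / (w1 + w2), w1 + w2

for par in (0.1, 0.3, 0.7, 0.9):
    lhs = L(f1, w1, par) * L(f2, w2, par)
    rhs = L(f_rev, w_rev, par)
    assert abs(lhs - rhs) < 1e-12, (par, lhs, rhs)

print("revision property holds for the sampled values")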




Re: [agi] NARS and probability

2008-10-10 Thread Ben Goertzel
This seems loosely related to the ideas in 5.10.6 of the PLN book, Truth
Value Arithmetic ...

ben

On Fri, Oct 10, 2008 at 9:04 PM, Abram Demski [EMAIL PROTECTED] wrote:

 On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

  Given those three assumptions, plus the NARS formula for revision,
  there is (I think) only one possible formula relating the NARS
  variables 'f' and 'w' to the value of 'par': the probability density
  function p(par | w, f) = par^(w*f) * (1-par)^(w*(1-f)).
 
  Why is this the only possible formula?

 Let's see... let's call the function we're looking for L(f,w). To
 satisfy NARS revision it must have the property L(f1,w1)*L(f2,w2)=L{
 (w1*f1+w2*f2)/(w1+w2) , w1+w2 }. Taking f1=f2 and w1=w2, we have:

 L(f,w)^2=L{ (2*w*f)/(2*w) , 2*w}
 L(f,w)^2=L{ f, 2*w}

 That establishes that the function is exponential in w, but that's a
 far cry from proving the uniqueness of the formula I gave. I should
 not have asserted so boldly...

 --Abram






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] NARS and probability

2008-10-10 Thread Abram Demski
On Fri, Oct 10, 2008 at 8:56 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
[. . .]
 Yes, in principle, PLN will behave in Hempel's confirmation paradox in
 a similar way to other Bayesian systems.

 I do find this counterintuitive, personally, and I spent a while trying to
 work
 around it ... but finally I decided that my intuition is the faulty thing.
 As you note,
 it's a very small probability adjustment in these cases, so it's not
 surprising
 if human intuition is not tuned to make such small probability adjustments
 in a correct or useful way...

Well, to take the extreme, suppose we had observed our first crow and
seen that it was black, but later learned that it is in fact the only
crow in existence. The probability adjustment is neither small nor
counterintuitive!

Anyway, perhaps I can try to shed some light on the broader exchange?
My route has been to understand A is B as not P(A|B), but instead
P(A is X | B is X) plus the extensional equivalent... under this
light, the negative evidence presented by two statements B is C and
A is not C reduces the frequency of A is B, but does not obviously
have any bearing on B is A. (Perhaps it does have some indirect
bearing, for example through some rule of inversion... but of course
the system is not yet even well-defined, so I'll not speculate.)
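
A toy count can illustrate this reading (the data and the helper below are
illustrative assumptions, not a defined system):

# Under the reading "A is B" ~ P(A is X | B is X), a property X is evidence
# for "A is B" only when "B is X" is known; it is positive if "A is X" also
# holds and negative otherwise.

def evidence(props, a, b):
    """(positive, negative) evidence for 'a is b' under the P(a is X | b is X) reading."""
    pos = sum(1 for has in props.values() if b in has and a in has)
    neg = sum(1 for has in props.values() if b in has and a not in has)
    return pos, neg

# Known statements: "B is C" and "A is not C"  ->  the property C holds of B only.
props = {"C": {"B"}}

print(evidence(props, "A", "B"))   # (0, 1): negative evidence for "A is B"
print(evidence(props, "B", "A"))   # (0, 0): no evidence either way for "B is A"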




Re: [agi] NARS and probability

2008-10-10 Thread Abram Demski
By the way, thanks for all the comments... I'll probably shift gears
as you both suggest, if I choose to continue further.

--Abram

On Fri, Oct 10, 2008 at 10:02 PM, Abram Demski [EMAIL PROTECTED] wrote:
 On Fri, Oct 10, 2008 at 8:56 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 [. . .]
 Yes, in principle, PLN will behave in Hempel's confirmation paradox in
 a similar way to other Bayesian systems.

 I do find this counterintuitive, personally, and I spent a while trying to
 work
 around it ... but finally I decided that my intuition is the faulty thing.
 As you note,
 it's a very small probability adjustment in these cases, so it's not
 surprising
 if human intuition is not tuned to make such small probability adjustments
 in a correct or useful way...

 Well, to take the extreme, suppose we had observe our first crow and
 seen that it was black, but later learn that it is in fact the only
 crow in existence. The probability adjustment is neither small nor
 counterintuitive!

 Anyway, perhaps I can try to shed some light on the broader exchange?
 My route has been to understand A is B as not P(A|B), but instead
 P(A is X | B is X) plus the extensional equivalent... under this
 light, the negative evidence presented by two statements B is C and
 A is not C reduces the frequency of A is B, but does not obviously
 have any bearing on B is A. (Perhaps it does have some indirect
 bearing, for example through some rule of inversion... but of course
 the system is not yet even well-defined, so I'll not speculate.)





Re: [agi] NARS and probability

2008-10-10 Thread Ben Goertzel
Abram,



 Anyway, perhaps I can try to shed some light on the broader exchange?
 My route has been to understand A is B as not P(A|B), but instead
 P(A is X | B is X) plus the extensional equivalent... under this
 light, the negative evidence presented by two statements B is C and
 A is not C reduces the frequency of A is B, but does not obviously
 have any bearing on B is A. (Perhaps it does have some indirect
 bearing, for example through some rule of inversion... but of course
 the system is not yet even well-defined, so I'll not speculate.)



The way we deal with intension in PLN is a little different..

We define a fuzzy set A_PAT associated with a term A, and then the degree of
membership of W in A_PAT is of the form

x*y

where

x is a term measuring the simplicity of W relative to A

and

y = [P(W | A) - P(W)]^+

(where []^+ denotes the positive part)

We can then measure

P( A_PAT | B_PAT)

and so forth...

-- Ben
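
A small sketch of the construction described above (the toy probabilities, the
placeholder simplicity measure, and the subsethood estimate of P(A_PAT | B_PAT)
are all illustrative assumptions on my part, not PLN's actual formulas):

# Degree of membership of a property W in A_PAT is x * y, with
# x = simplicity of W relative to A and y = [P(W | A) - P(W)]^+ .

def positive_part(v):
    return max(v, 0.0)

def simplicity(w, a):
    # Placeholder for the "simplicity of W relative to A" term (assumed constant here).
    return 1.0

def pattern_membership(w, a):
    return simplicity(w, a) * positive_part(p_cond[(w, a)] - p_marg[w])

# Toy numbers for properties W and the terms A = bird, B = swan.
p_marg = {"flies": 0.2, "has_feathers": 0.1}
p_cond = {("flies", "bird"): 0.8, ("has_feathers", "bird"): 0.95,
          ("flies", "swan"): 0.9, ("has_feathers", "swan"): 1.0}

bird_pat = {w: pattern_membership(w, "bird") for w in p_marg}
swan_pat = {w: pattern_membership(w, "swan") for w in p_marg}

# One simple estimate of P(bird_PAT | swan_PAT): fuzzy subsethood of the swan
# pattern-set within the bird pattern-set (min/sum form; this choice is an assumption).
subsethood = sum(min(bird_pat[w], swan_pat[w]) for w in p_marg) / sum(swan_pat.values())
print(bird_pat, swan_pat, round(subsethood, 3))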





Re: [agi] NARS and probability

2008-10-10 Thread Brad Paulsen

Pei, Ben G. and Abram,

Oh, man, is this stuff GOOD!  This is the real nitty-gritty of the AGI 
matter.  How does your approach handle counter-evidence?  How does your 
approach deal with insufficient evidence?  (Those are rhetorical questions, 
by the way -- I don't want to influence the course of this thread, just 
want to let you know I dig it and, mostly, grok it as well).  I love this 
stuff.  You guys are brilliant.  Actually, I think it would make a good 
publication: PLN vs. NARS -- The AGI Smack-down!  A win-win contest.


This is a rare treat for an old hacker like me.  And, I hope, educational 
for all (including the participants)!  Keep it coming, please!


Cheers,
Brad

Pei Wang wrote:

On Fri, Oct 10, 2008 at 8:03 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

Yah, according to Bayes rule if one assumes P(bird) = P(swimmer) this would
be the case...

(Of course, this kind of example is cognitively misleading, because if the
only knowledge
the system has is Swallows are birds and Swallows are NOT swimmers then
it doesn't
really know that the terms involved are swallows, birds, swimmers etc.
... then in
that case they're just almost-meaningless tokens to the system, right?)


Well, it depends on the semantics. According to model-theoretic
semantics, if a term has no reference, it has no meaning. According to
experience-grounded semantics, every term in experience have meaning
--- by the role it plays.

Further questions:

(1) Don't you intuitively feel that the evidence provided by
non-swimming birds says more about Birds are swimmers than
Swimmers are birds?

(2) If your answer for (1) is yes, then think about Adults are
alcohol-drinkers and Alcohol-drinkers are adults --- do they have
the same set of counter examples, intuitively speaking?

(3) According to your previous explanation, will PLN also take a red
apple as negative evidence for Birds are swimmers and Swimmers are
birds, because it reduces the candidate pool by one? Of course, the
probability adjustment may be very small, but qualitatively, isn't it
the same as a non-swimming bird? If not, then what the system will do
about it?

Pei



On Fri, Oct 10, 2008 at 7:34 PM, Pei Wang [EMAIL PROTECTED] wrote:

Ben,

I see your position.

Let's go back to the example. If the only relevant domain knowledge
PLN has is Swallows are birds and Swallows are
NOT swimmers, will the system assigns the same lower-than-default
probability to Birds are swimmers and  Swimmers are birds? Again,
I only need a qualitative answer.

Pei

On Fri, Oct 10, 2008 at 7:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

Pei,

I finally took a moment to actually read your email...



However, the negative evidence of one conclusion is no evidence of the
other conclusion. For example, Swallows are birds and Swallows are
NOT swimmers suggests Birds are NOT swimmers, but says nothing
about whether Swimmers are birds.

Now I wonder if PLN shows a similar asymmetry in induction/abduction
on negative evidence. If it does, then how can that effect come out of
a symmetric truth-function? If it doesn't, how can you justify the
conclusion, which looks counter-intuitive?

According to Bayes rule,

P(bird | swimmer) P(swimmer) = P(swimmer | bird) P(bird)

So, in PLN, evidence for P(bird | swimmer) will also count as evidence
for P(swimmer | bird), though potentially with a different weighting
attached to each piece of evidence

If P(bird) = P(swimmer) is assumed, then each piece of evidence
for each of the two conditional probabilities, will count for the other
one symmetrically.

The intuition here is the standard Bayesian one.
Suppose you know there
are 1 things in the universe, and 1000 swimmers.
Then if you find out that swallows are not
swimmers ... then, unless you think there are zero swallows,
this does affect P(bird | swimmer).  For instance, suppose
you think there are 10 swallows and 100 birds.  Then, if you know for
sure
that swallows are not swimmers, and you have no other
info but the above, your estimate of P(bird|swimmer)
should decrease... because of the 1000 swimmers, you now know there
are only 990 that might be birds ... whereas before you thought
there were 1000 that might be birds.

And the same sort of reasoning holds for **any** probability
distribution you place on the number of things in the universe,
the number of swimmers, the number of birds, the number of swallows.
It doesn't matter what assumption you make, whether you look at
n'th order pdf's or whatever ... the same reasoning works...

From what I understand, your philosophical view is that it's somehow
wrong for a mind to make some assumption about the pdf underlying
the world around it?  Is that correct?  If so I don't agree with this...
I
think this kind of assumption is just part of the inductive bias with
which
a mind approaches the world.

The human mind may well have particular pdf's for stuff like birds and
trees wired into it, as we evolved to deal with these things.  But
that's
not really