Brad,
Thanks for the encouragement.
For people who cannot fully grok the discussion from the email alone,
the relevant NARS references are
http://nars.wang.googlepages.com/wang.semantics.pdf and
http://nars.wang.googlepages.com/wang.confidence.pdf
Pei
On Sat, Oct 11, 2008 at 1:13 AM, Brad
Pei etc.,
First high level comment here, mostly to the non-Pei audience ... then I'll
respond to some of the details:
This dialogue -- so far -- feels odd to me because I have not been
defending anything special, peculiar or inventive about PLN here.
There are some things about PLN that would
On Fri, Oct 10, 2008 at 8:56 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Well, it depends on the semantics. According to model-theoretic
semantics, if a term has no reference, it has no meaning. According to
experience-grounded semantics, every term in experience has meaning
--- by the role it
Ben,
Your reply raised several interesting topics, and most of them cannot
be settled in this kind of email exchange. Therefore, I won't
address all of them here, but will propose another solution in a
separate private email.
To go back to where this debate started: the asymmetry of
Thanks Pei!
This is an interesting dialogue, but indeed, I have some reservations about
putting so much energy into email dialogues -- for a couple reasons
1)
because, once they're done,
the text generated basically just vanishes into messy, barely-searchable
archives.
2)
because I tend to
Ben,
My summary was on the asymmetry of induction/abduction topic alone,
not on NARS vs. PLN in general --- of course NARS is counterintuitive
in several places!
Under that restriction, I assume you'll agree with my summary.
Please note that this issue is related to Hempel's Paradox, but not
Pei, Ben,
I am going to try to spell out an argument for each side (arguing for
symmetry, then for asymmetry).
For Symmetry:
Suppose we get negative evidence for "As are Bs", such that we are
tempted to say no As are Bs. We then consider the statement "Bs are
As", with no other info. We think, If
On Sat, Oct 11, 2008 at 5:38 PM, Pei Wang [EMAIL PROTECTED] wrote:
On Sat, Oct 11, 2008 at 4:10 PM, Abram Demski [EMAIL PROTECTED] wrote:
Pei, Ben,
I am going to try to spell out an argument for each side (arguing for
symmetry, then for asymmetry).
For Symmetry:
Suppose we get negative
On Sat, Oct 11, 2008 at 5:56 PM, Abram Demski [EMAIL PROTECTED] wrote:
I see your point --- it comes from the fact that "As are Bs" and "Bs
are As" have the same positive evidence (both in NARS and in PLN),
plus the additional assumption that no positive evidence means
negative evidence. Here the
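The shared-positive-evidence point can be made concrete with NARS's extensional reading of evidence: positive evidence for "As are Bs" is the overlap of the two extensions, and negative evidence is the part of A's extension outside B's. A toy sketch (the sets are invented for illustration):

```python
def evidence(A, B):
    """Extensional evidence for the statement "As are Bs":
    members of A that are also in B are positive evidence;
    members of A outside B are negative evidence."""
    return len(A & B), len(A - B)

A = {"a1", "a2", "a3"}
B = {"a1", "a2", "b1", "b2"}

assert evidence(A, B) == (2, 1)  # "As are Bs": 2 positive, 1 negative
assert evidence(B, A) == (2, 2)  # "Bs are As": same positive, different negative
```

The positive counts always agree (set intersection is symmetric), while the negative counts differ whenever the extensions differ in size.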
Hi,
What this highlights for me is the idea that NARS truth values attempt
to reflect the evidence so far, while probabilities attempt to reflect
the world
I agree that probabilities attempt to reflect the world.
Well said. This is exactly the difference between an
On Wed, Oct 8, 2008 at 5:15 PM, Abram Demski [EMAIL PROTECTED] wrote:
Given those three assumptions, plus the NARS formula for revision,
there is (I think) only one possible formula relating the NARS
variables 'f' and 'w' to the value of 'par': the probability density
function p(par | w, f) =
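For readers following along, the NARS revision rule referred to here can be sketched directly. This is a minimal illustration of the standard rule, assuming the usual conversion between (frequency, confidence) and (positive evidence, total evidence) with evidential-horizon parameter k = 1:

```python
K = 1.0  # NARS "personality" parameter (evidential horizon)

def to_evidence(f, c):
    """Convert a truth value (frequency, confidence) to
    (positive evidence w+, total evidence w)."""
    w = K * c / (1.0 - c)
    return f * w, w

def from_evidence(w_plus, w):
    """Convert evidence counts back to (frequency, confidence)."""
    return w_plus / w, w / (w + K)

def revise(t1, t2):
    """NARS revision: pool the evidence behind two independent
    judgments of the same statement."""
    wp1, w1 = to_evidence(*t1)
    wp2, w2 = to_evidence(*t2)
    return from_evidence(wp1 + wp2, w1 + w2)

# Two conflicting judgments with equal evidence behind them:
f, c = revise((1.0, 0.5), (0.0, 0.5))
# Frequency becomes the evidence-weighted average (0.5), while
# confidence rises above either premise's 0.5 (to 2/3).
```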
Abram,
I finally read your long post...
The basic idea is to treat NARS truth values as representations of a
statement's likelihood rather than its probability. The likelihood of
a statement given evidence is the probability of the evidence given
the statement. Unlike probabilities,
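A minimal sketch of the likelihood-versus-probability distinction, assuming a Bernoulli evidence model (my illustration of the general idea, not Abram's exact construction):

```python
def likelihood(p, w_plus, w_minus):
    """Likelihood of parameter p given w+ positive and w- negative
    pieces of evidence under a Bernoulli model: P(evidence | p)."""
    return p ** w_plus * (1.0 - p) ** w_minus

# A likelihood function is not a distribution over p: its values
# need not sum (or integrate) to 1; it only ranks hypotheses.
vals = [likelihood(i / 100.0, 3, 1) for i in range(101)]

# Its peak sits at the NARS frequency f = w+ / (w+ + w-) = 3/4:
best = max(range(101), key=lambda i: vals[i]) / 100.0
```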
On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
In particular, the result that NARS induction and abduction each
depend on **only one** of their premise truth values ...
Ben,
I'm sure you know it in your mind, but this simple description will
make some people think that
Sorry Pei, you are right, I sloppily mis-stated!
What I should have said was:
the result that the NARS induction and abduction *strength* formulas
each depend on **only one** of their premise truth values ...
Anyway, my point in that particular post was not to say that NARS is either
good or
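Ben's corrected claim can be checked against the NARS induction truth function. The sketch below follows the standard published form (with evidential-horizon parameter k = 1); treat the transcription as mine, not Ben's or Pei's:

```python
K = 1.0  # evidential horizon parameter

def induction(f1, c1, f2, c2):
    """NARS induction: from M -> P <f1, c1> and M -> S <f2, c2>,
    derive S -> P via evidence w+ = f1*f2*c1*c2, w = f2*c1*c2."""
    w = f2 * c1 * c2
    f = (f1 * w) / w if w > 0 else 0.5
    return f, w / (w + K)

# The conclusion's frequency equals f1 alone; changing f2 moves
# only the confidence:
fa, ca = induction(0.9, 0.9, 0.5, 0.9)
fb, cb = induction(0.9, 0.9, 0.2, 0.9)
```

Both calls return frequency 0.9 while their confidences differ, which is exactly the "depends on only one premise truth value" property under discussion.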
Ben,
I agree with what you said in the previous email.
However, since we have now touched this point for the second time,
there may be people wondering what the difference between NARS and PLN
really is.
Again let me use an example to explain why the truth-value function of
abduction/induction
Of course, this is only one among very many differences btw PLN and NARS,
but I agree it's an interesting one.
I've got other stuff to do today, but I'll try to find time to answer this
email
carefully over the weekend.
ben
On Fri, Oct 10, 2008 at 5:38 PM, Pei Wang [EMAIL PROTECTED] wrote:
Ben,
Strength? If you mean weight or confidence, this is not so. As Pei
corrected, it is the *frequency* that depends on only one of the two.
The strength depends on both.
And, that is one feature of NARS that I don't find strange. It can be
explained OK by the formula I previously proposed and
Pei,
You agree that the abduction and induction strength formulas only
rely on one of the two premises?
Is there some variable called strength that I missed?
--Abram
On Fri, Oct 10, 2008 at 5:38 PM, Pei Wang [EMAIL PROTECTED] wrote:
Ben,
I agree with what you said in the previous email.
Abram,
Ben's strength is my frequency.
Pei
On Fri, Oct 10, 2008 at 5:49 PM, Abram Demski [EMAIL PROTECTED] wrote:
Pei,
You agree that the abduction and induction strength formulas only
rely on one of the two premises?
Is there some variable called strength that I missed?
--Abram
On
Ah.
On Fri, Oct 10, 2008 at 5:51 PM, Pei Wang [EMAIL PROTECTED] wrote:
Abram,
Ben's strength is my frequency.
Pei
On Fri, Oct 10, 2008 at 5:49 PM, Abram Demski [EMAIL PROTECTED] wrote:
Pei,
You agree that the abduction and induction strength formulas only
rely on one of the two
I meant frequency, sorry
Strength is a term Pei used for frequency in some old discussions...
If I were taking more the approach Ben suggests, that is, making
reasonable-sounding assumptions and then working forward rather than
assuming NARS and working backward, I would have kept the
On Fri, Oct 10, 2008 at 5:52 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
I meant frequency, sorry
Strength is a term Pei used for frequency in some old discussions...
Another correction: strength is never used in any NARS publication.
It was used in some Webmind documents, though I guess it
On Fri, Oct 10, 2008 at 6:01 PM, Pei Wang [EMAIL PROTECTED] wrote:
On Fri, Oct 10, 2008 at 5:52 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
I meant frequency, sorry
Strength is a term Pei used for frequency in some old discussions...
Another correction: strength is never used in any NARS
Ben,
Maybe your memory is correct --- we used "strength" in Webmind to keep
some distance from NARS.
Anyway, I don't like that term because it can be easily interpreted in
several ways, while the reason I don't like probability is just the
opposite --- it has a widely accepted interpretation, which
Pei,
I finally took a moment to actually read your email...
However, the negative evidence of one conclusion is no evidence for
the other conclusion. For example, "Swallows are birds" and "Swallows
are NOT swimmers" suggests "Birds are NOT swimmers", but says nothing
about whether Swimmers are
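Pei's example can be replayed through the NARS induction rule to show the asymmetry he describes: the swallow evidence is negative evidence for "Birds are swimmers" but no evidence at all for "Swimmers are birds". A sketch (the confidence value is invented):

```python
def induction_evidence(f1, c1, f2, c2):
    """NARS induction from M -> P <f1, c1> and M -> S <f2, c2> to
    S -> P, returned as (positive evidence w+, total evidence w)."""
    w = f2 * c1 * c2
    return f1 * w, w

c = 0.9  # invented confidence for both premises; M = swallow

# Conclusion "Birds are swimmers": P = swimmer <0, c>, S = bird <1, c>
neg_dir = induction_evidence(0.0, c, 1.0, c)  # w+ = 0, w > 0: pure negative evidence

# Conclusion "Swimmers are birds": P = bird <1, c>, S = swimmer <0, c>
no_dir = induction_evidence(1.0, c, 0.0, c)   # w = 0: no evidence at all
```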
Ben,
I see your position.
Let's go back to the example. If the only relevant domain knowledge
PLN has is "Swallows are birds" and "Swallows are
NOT swimmers", will the system assign the same lower-than-default
probability to "Birds are swimmers" and "Swimmers are birds"? Again,
I only need a
Yah, according to Bayes rule if one assumes P(bird) = P(swimmer) this would
be the case...
(Of course, this kind of example is cognitively misleading, because if the
only knowledge
the system has is "Swallows are birds" and "Swallows are NOT swimmers" then
it doesn't
really know that the terms
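Ben's remark is easy to check numerically: by Bayes' rule, P(bird | swimmer) = P(swimmer | bird) * P(bird) / P(swimmer), so equal term priors make the two directions coincide. A sketch with invented numbers:

```python
def bayes_flip(p_b_given_a, p_a, p_b):
    """Bayes' rule: P(A | B) = P(B | A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

p_bird = p_swimmer = 0.1      # assumed equal term probabilities
p_swimmer_given_bird = 0.02   # lowered by the swallow evidence

p_bird_given_swimmer = bayes_flip(p_swimmer_given_bird, p_bird, p_swimmer)
# With equal priors the flipped conditional is identical, so both
# "Birds are swimmers" and "Swimmers are birds" get the same
# lower-than-default probability.
```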
On Fri, Oct 10, 2008 at 8:03 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Yah, according to Bayes rule if one assumes P(bird) = P(swimmer) this would
be the case...
(Of course, this kind of example is cognitively misleading, because if the
only knowledge
the system has is Swallows are birds
On Fri, Oct 10, 2008 at 8:29 PM, Pei Wang [EMAIL PROTECTED] wrote:
On Fri, Oct 10, 2008 at 8:03 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Yah, according to Bayes rule if one assumes P(bird) = P(swimmer) this
would
be the case...
(Of course, this kind of example is cognitively
On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Given those three assumptions, plus the NARS formula for revision,
there is (I think) only one possible formula relating the NARS
variables 'f' and 'w' to the value of 'par': the probability density
function p(par | w, f)
This seems loosely related to the ideas in 5.10.6 of the PLN book, Truth
Value Arithmetic ...
ben
On Fri, Oct 10, 2008 at 9:04 PM, Abram Demski [EMAIL PROTECTED] wrote:
On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Given those three assumptions, plus the NARS
On Fri, Oct 10, 2008 at 8:56 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
[. . .]
Yes, in principle, PLN will behave in Hempel's confirmation paradox in
a similar way to other Bayesian systems.
I do find this counterintuitive, personally, and I spent a while trying to
work
around it ... but
By the way, thanks for all the comments... I'll probably shift gears
as you both suggest, if I choose to continue further.
--Abram
On Fri, Oct 10, 2008 at 10:02 PM, Abram Demski [EMAIL PROTECTED] wrote:
On Fri, Oct 10, 2008 at 8:56 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
[. . .]
Yes, in
Abram,
Anyway, perhaps I can try to shed some light on the broader exchange?
My route has been to understand "A is B" as not P(A|B), but instead
P("A is X" | "B is X") plus the extensional equivalent... under this
light, the negative evidence presented by the two statements "B is C"
and "A is not C"
Pei, Ben G. and Abram,
Oh, man, is this stuff GOOD! This is the real nitty-gritty of the AGI
matter. How does your approach handle counter-evidence? How does your
approach deal with insufficient evidence? (Those are rhetorical questions,
by the way -- I don't want to influence the course