On 10/21/07, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> Pei,
>
> Sorry for delayed reply. I answer point-by-point below.
>
> On 10/11/07, Pei Wang <[EMAIL PROTECTED]> wrote:
> >
> > > Basic rule for evidence-based
> > > estimation of implication in NARS seems to be roughly along the lines
> > > of term construction in my framework (I think there's much freedom in
> > > its choice, do you have other variants of it/justification for current
> > > choice relative to other possibilities which is not concerned with
> > > applicability to derivation of rules for abduction/induction/etc.?),
> >
> > There is some justification behind the design of every inference rule
> > (and its truth value function), not only abduction/induction. You can
> > find most in the book, and many are also in my other publications.
>
> I meant the basic rule of evidence measuring that considers extension and
> intension sets. There certainly is a justification for it, but there
> obviously are alternatives, so my question is about the choice of this
> extension/intension measuring above other options.
Sorry, I still don't quite get your question. If you mean (1) why
extension and intension are measured in a mixed manner, rather than
separately, then I have a whole section (7.2) devoted to this issue in
my book, and the summary is "such a unified treatment is necessary for
intelligence". If you mean (2) why the amount of evidence is defined
as the size of the extension and intension of the related terms, then
the answer follows directly from the definition of evidence, as given
in many of my publications --- if what is defined as "evidence" exists
only in those sets, then it is natural to use the size of those sets
as the amount of evidence.
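To make definition (2) concrete, here is a minimal Python sketch. The function name and the example sets are hypothetical illustrations, not NARS code; only the counting scheme (positive evidence from the overlapping parts of the extension/intension sets, frequency = w+/w, confidence = w/(w+k)) follows the published NARS definitions.

```python
# Sketch of evidence counting for an inheritance statement "S --> P".
# Evidence lives in the extension and intension sets, so the *amount*
# of evidence is the size of the relevant subsets.

def evidence(ext_s, int_s, ext_p, int_p, k=1):
    """Return (w_plus, w, frequency, confidence) for "S --> P".

    Positive evidence: members of S's extension also in P's extension,
    plus properties in P's intension also in S's intension.
    Negative evidence: the counterexamples in either set.
    """
    w_plus = len(ext_s & ext_p) + len(int_p & int_s)
    w_minus = len(ext_s - ext_p) + len(int_p - int_s)
    w = w_plus + w_minus
    frequency = w_plus / w if w > 0 else 0.5
    confidence = w / (w + k)   # k is the evidential horizon parameter
    return w_plus, w, frequency, confidence

# Hypothetical sets for "raven --> bird":
ext_raven = {"r1", "r2", "r3"}          # observed instances of raven
int_raven = {"black", "feathered"}      # known properties of raven
ext_bird = {"r1", "r2", "sparrow1"}     # observed instances of bird
int_bird = {"feathered", "winged"}      # known properties of bird

print(evidence(ext_raven, int_raven, ext_bird, int_bird))
```

With these invented sets the statement collects 3 units of positive and 2 of negative evidence, giving frequency 0.6 and confidence 5/6.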
> > > but I'm not sure about how you handle variations of structures (that
> > > is, how does system represents two structures which are similar in
> > > some sense and how it extracts the common part from them). It's
> > > difficult to see from basic rules if it's not addressed directly.
> >
> > The basic rules (deduction/abduction/induction/revision) ignore the
> > internal structure of compound terms. There are special inference
> > rules that handle the composition/decomposition of various compound
> > structures. Again, they are mostly given in the book.
>
> I didn't mean the structure of compound terms, but the structure of
> experience representation, which consists of a set of individual statements
> and terms that describe that experience.
Experience is formally defined as the stream (not set) of incoming
tasks, each of which can be (1) new knowledge (a statement with a
truth value), (2) question (a statement without a truth value), or (3)
goal (a statement with a desire value).
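A minimal sketch of that task taxonomy (the class and field names are my own illustration, not taken from the NARS implementation):

```python
# Experience as a stream of tasks, in the three kinds listed above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    statement: str                  # e.g. "<raven --> bird>"
    truth: Optional[tuple] = None   # (frequency, confidence) for new knowledge
    desire: Optional[tuple] = None  # desire value for goals

    @property
    def kind(self) -> str:
        if self.truth is not None:
            return "judgment"       # (1) new knowledge: has a truth value
        if self.desire is not None:
            return "goal"           # (3) goal: has a desire value
        return "question"           # (2) question: no truth value

# Experience is an ordered stream, not a set: order of arrival matters.
experience = [
    Task("<raven --> bird>", truth=(0.9, 0.8)),
    Task("<raven --> [black]>"),
    Task("<self --> [fed]>", desire=(1.0, 0.9)),
]
print([t.kind for t in experience])   # -> ['judgment', 'question', 'goal']
```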
> > > For
> > > example, how will it see similarities and differences between
> > > 111222333 and 111122223333? Would it enable simple slippage between
> > > them? How will it learn these representations?
> >
> > Yes, the two can be recognized as similar, so the analogy rule can use
> > one as the other in certain situations.
>
> It'd be interesting to get an idea of how such things can be translated to
> internal representation that implements these operations.
It's a long story, and there are many possibilities, but basically, it
is about the positive and negative evidence of the following
similarity statement:
<(* (* 1 1 1) (* 2 2 2) (* 3 3 3)) <-> (* (* 1 1 1 1) (* 2 2 2 2) (* 3 3 3 3))>
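Purely as a hypothetical illustration of where positive and negative evidence for such a similarity statement could come from (this component-matching scheme is my own sketch, not a rule from the book):

```python
# Align the component terms of the two product terms and compare each
# aligned pair; pairs built from the same symbol count as positive
# evidence, mismatched or unmatched components as negative evidence.

def component_evidence(a, b):
    pairs = list(zip(a, b))
    pos = sum(1 for x, y in pairs if set(x) == set(y))  # "111" ~ "1111"
    neg = len(pairs) - pos + abs(len(a) - len(b))
    return pos, neg

a = ("111", "222", "333")        # components of the first product term
b = ("1111", "2222", "3333")     # components of the second product term
print(component_evidence(a, b))  # -> (3, 0)
```

All three aligned components match, so the similarity statement gets only positive evidence here; comparing "111222333" against, say, "111444333" would yield one unit of negative evidence instead.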
> > > Basic rule seems to require presence of terms at the same
> > > time, which for example can't be made neurologically plausible, unless
> > > semantics of terms is time-dependent (because neuron only knows that
> > > other neurons from which it received input fired some time in the
> > > past, and feature/term it represents if it chooses to fire is a
> > > statement about features represented by those other fired neurons in
> > > the past).
> >
> > It depends on what you mean by "presence of terms at the same time".
> > In NARS, all inference happens within a concept (because every
> > inference rule requires two premises sharing a term), so as long as
> > two beliefs are recalled at the same time, the basic rules can be applied.
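As one concrete instance of a basic rule operating on two premises that share a term, here is a sketch of deduction. The tuple encoding of statements is my own illustration; the truth function is the standard NARS deduction function f = f1*f2, c = f1*f2*c1*c2.

```python
# Deduction: "M --> P" and "S --> M" (shared middle term M) yield
# "S --> P", with the NARS deduction truth function.

def deduction(premise1, premise2):
    (m1, p), (f1, c1) = premise1    # "M --> P" with truth (f1, c1)
    (s, m2), (f2, c2) = premise2    # "S --> M" with truth (f2, c2)
    assert m1 == m2, "premises must share the middle term"
    f = f1 * f2
    c = f1 * f2 * c1 * c2
    return (s, p), (f, c)

# "bird --> animal" <1.0, 0.9> plus "raven --> bird" <1.0, 0.8>
conclusion = deduction((("bird", "animal"), (1.0, 0.9)),
                       (("raven", "bird"), (1.0, 0.8)))
print(conclusion)
```

With these hypothetical truth values the conclusion is "raven --> animal" with f = 1.0 and c = 0.72, illustrating why both beliefs must be available (within the shared concept) at the same time for the rule to fire.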
>
> I mean the difference between experience of term in the present and
> experience of the same term (from I/O POV) that happened in the past. If
> these notions are represented by separate terms, how are they connected?
Well, if past experience and current experience involve the same
concept, they will use the same term. You may want to see the actual
examples in http://nars.wang.googlepages.com/NARS-Examples-SingleStep.txt
and http://nars.wang.googlepages.com/NARS-Examples-MultiSteps.txt
> I'm
> sorry if I'm asking about something that's being addressed in your book, I
> don't have a copy.
I'm sorry to say that if you are seriously interested in NARS, you do
need to read the book. If your library doesn't have it, it may be
obtained through inter-library loan. If you have absolutely no way to
get it, send me a private email and I'll arrange something.
> > > Why do you need so many rules?
> >
> > I didn't expect so many rules myself at the beginning. I add new rules
> > only when the existing ones are not enough for a situation. It will be
> > great if someone can find a simpler design.
>
> I feel that some of the complexity comes from modeling natural-language
> statements. Do you agree?
Yes, to a certain degree --- I do want the expressive power of Narsese
to be comparable to that of a natural language.
Pei
> --
> Vladimir Nesov <[EMAIL PROTECTED]>
> ________________________________
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&