Pei,

Sorry for the delayed reply. I'll answer point-by-point below.

On 10/11/07, Pei Wang <[EMAIL PROTECTED]> wrote:
>
>
> > Basic rule for evidence-based
> > estimation of implication in NARS seems to be roughly along the lines
> > of term construction in my framework (I think there's much freedom in
> > its choice, do you have other variants of it/justification for current
> > choice relative to other possibilities which is not concerned with
> > applicability to derivation of rules for abduction/induction/etc.?),
>
> There is some justification behind the design of every inference rule
> (and its truth value function), not only abduction/induction. You can
> find most in the book, and many are also in my other publications.


I meant the basic rule of evidence measurement, the one that considers the
extension and intension sets. There is certainly a justification for it, but
there are obviously alternatives, so my question is why this
extension/intension measure was chosen over the other options.
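
For reference, here is my reading of the rule in question, sketched in code:
the evidence count for an inheritance statement "S -> P" based on the
extension and intension sets, with frequency and confidence derived from it.
The formulas and the parameter k follow Wang's published definitions as I
understand them; treat this as my interpretation, not an authoritative
implementation.

```python
# Sketch of NARS-style evidence measurement for "S -> P"
# (my reading of Wang's published definitions, possibly imprecise).

def evidence(ext_s, ext_p, int_s, int_p, k=1):
    """Count evidence for the inheritance statement S -> P.

    ext_*: set of terms in the extension of S / P
    int_*: set of terms in the intension of S / P
    k: evidential horizon (a personality parameter, typically 1)
    """
    # Positive evidence: S's instances that are also P's instances,
    # plus P's properties that are also S's properties.
    w_plus = len(ext_s & ext_p) + len(int_p & int_s)
    # Total evidence: everything that counts either for or against.
    w = len(ext_s) + len(int_p)
    frequency = w_plus / w if w else 0.5
    confidence = w / (w + k)
    return frequency, confidence

# Toy example (hypothetical extensions/intensions, just to show the counts):
f, c = evidence(ext_s={"raven1", "raven2", "raven3"},
                ext_p={"raven1", "raven2", "tar"},
                int_s={"bird", "black"},
                int_p={"black"})
# w+ = 2 + 1 = 3, w = 3 + 1 = 4, so f = 0.75 and c = 0.8
```

The point of the question stands regardless of the exact counting: other
evidence measures over the same sets are conceivable.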


> > but I'm not sure about how you handle variations of structures (that
> > is, how does system represents two structures which are similar in
> > some sense and how it extracts the common part from them). It's
> > difficult to see from basic rules if it's not addressed directly.
>
> The basic rules (deduction/abduction/induction/revision) ignore the
> internal structure of compound terms. There are special inference
> rules that handle the composition/decomposition of various compound
> structures. Again, they are mostly given in the book.


I didn't mean the structure of compound terms, but the structure of the
experience representation, which consists of a set of individual statements
and terms that describe that experience.


> > For
> > example, how will it see similarities and differences between
> > 111222333 and 111122223333? Would it enable simple slippage between
> > them? How will it learn these representations?
>
> Yes, the two can be recognized as similar, so the analogy rule can use
> one as the other in certain situations.


It would be interesting to get an idea of how such things can be translated
into an internal representation that implements these operations.
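
As a toy illustration of the kind of structural abstraction I have in mind
(my own sketch, not a claim about NARS's actual mechanism): run-length
encoding exposes that 111222333 and 111122223333 share the abstract pattern
"1^n 2^n 3^n" and differ only in n.

```python
from itertools import groupby

def runs(s):
    """Run-length encode a string: '111222333' -> [('1', 3), ('2', 3), ('3', 3)]."""
    return [(ch, len(list(g))) for ch, g in groupby(s)]

a, b = runs("111222333"), runs("111122223333")
# Same symbol sequence -> same abstract structure...
assert [ch for ch, _ in a] == [ch for ch, _ in b] == ["1", "2", "3"]
# ...differing only in the uniform run length: 3 vs 4.
assert {n for _, n in a} == {3} and {n for _, n in b} == {4}
```

The open question is how a system arrives at such a shared representation
from raw experience, rather than having it built in.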


> > Basic rule seems to require presence of terms at the same
> > time, which for example can't be made neurologically plausible, unless
> > semantics of terms is time-dependent (because neuron only knows that
> > other neurons from which it received input fired some time in the
> > past, and feature/term it represents if it chooses to fire is a
> > statement about features represented by those other fired neurons in
> > the past).
>
> It depends on what you mean by "presence of terms at the same time".
> In NARS, all inference happens within a concept (because every
> inference rule requires two premises sharing a term), so as far as two
> beliefs are recalled at the same time, the basic rules can be applied.


I mean the difference between the experience of a term in the present and
the experience of the same term (from the I/O point of view) that happened
in the past. If these notions are represented by separate terms, how are
they connected? I'm sorry if I'm asking about something that is already
addressed in your book; I don't have a copy.


> > Why do you need so many rules?
>
> I didn't expect so many rules myself at the beginning. I add new rules
> only when the existing ones are not enough for a situation. It will be
> great if someone can find a simpler design.


I feel that some of the complexity comes from the modeling of
natural-language statements. Do you agree?


-- 
Vladimir Nesov                            mailto:[EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=56112168-b226f2
