Charles,

To be concrete, let me summarize the assumptions in your previous
comments, and briefly explain why they don't apply to NARS.

*. The meaning of "Fred" is an entity referred to by the term --- in
NARS, the meaning of a term is its relations with other terms
(according to the system's experience), not an outside entity.

*. The meaning of "human" and "animal" are sets of entities --- in
NARS, once again the meaning of these terms are determined by their
experienced relation with other terms, not sets of outside entities.

*. The "is a" relation (as in "Fred is a human") is represented as
membership relation in set theory --- in NARS, it is an "inheritance"
relation with experience-based truth value.

*. The truth value of a statement measures whether, or how much, the
statement matches the corresponding fact (you didn't say so
explicitly, but it is implied by your comments about INDUCTION and
ABDUCTION) --- in NARS, as you have read in my paper, the truth value
measures evidential support, that is, how much a statement matches
"what the system knows", not "the world as it is".

Now let's look at Edward's example of induction: from "Fred is a human"
and "Fred is an animal", deriving "A human is an animal" and "An
animal is a human" (truth values omitted). You said

> Actually, you know less than you have implied.
> You know that there exists an entity referred to as Fred, and that this
> entity is a member of both the set human and the set animal.  You aren't
> justified in concluding that any other member of the set human is also a
> member of the set animal.  And conversely.

which is correct deduction according to a model-theoretic
interpretation of the statements. However, under the
experience-grounded semantics, the NARS conclusions don't state that
the two sets "human" and "animal", as we know them, include each
other --- that cannot be derived, even in a probabilistic sense.
Instead, they state that the two concepts, "human" and "animal", as
the system knows them, can substitute for each other, in a certain way
and to a certain extent. An intelligent system will use this kind of
inference to predict the future (for example, to expect that the next
time "human" is used as a predicate term, it can be replaced by
"animal"), so as to go beyond the scope of binary deduction. Such
predictions can turn out to be wrong, but I believe this is how
adaptation/intelligence works.
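
To make this concrete, here is a rough sketch (in Python; it is not the
actual NARS code, and the evidence-counting numbers are only
illustrative) of how such an induction can be carried out under
experience-grounded semantics: the shared instance "Fred" is counted as
one piece of evidence for an inheritance statement between the two
predicate terms, in both directions.

    # A minimal sketch, not actual NARS code: truth values as evidential support.
    # K is the evidential-horizon constant; using 1 here is only an assumption.
    K = 1

    def truth_from_evidence(positive, total):
        # frequency: proportion of positive evidence among all evidence
        # confidence: how much the current evidence weighs against future evidence
        return positive / total, total / (total + K)

    # Experience: "Fred is a human" and "Fred is an animal",
    # both taken as fully supported inheritance statements.
    experience = [("Fred", "human"), ("Fred", "animal")]

    # Induction: two statements sharing a subject provide one piece of positive
    # evidence for an inheritance between their predicates, in both directions.
    def induce(first, second):
        (s1, p1), (s2, p2) = first, second
        if s1 == s2:
            yield (p1, p2), truth_from_evidence(1, 1)   # "human --> animal"
            yield (p2, p1), truth_from_evidence(1, 1)   # "animal --> human"

    for (subj, pred), (f, c) in induce(*experience):
        print(f"{subj} --> {pred}  <frequency={f:.2f}, confidence={c:.2f}>")

Both conclusions come out with frequency 1.0 but confidence only 0.5
(with K = 1): one shared instance is weak evidence, open to revision by
later experience. The point is not the particular numbers, but that the
conclusion is graded by the evidence supporting it, rather than being
judged true or false against an outside model.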

For now I won't comment on the other issues in your message below
--- there are too many of them. Instead, I hope to make myself clear
on the basic topics first.

Pei

On 10/8/07, Charles D Hixson <[EMAIL PROTECTED]> wrote:
> OK.  I've read the paper, and don't see where I've made any errors.  It
> looks to me as if NARS can be modeled by a prototype-based language with
> operators for "is an ancestor of" and "is a descendant of".  I do have
> trouble with the language terms that you use, though admittedly they
> appear to be standard for logicians (to the extent that I'm familiar
> with their dialect).  That might well not be a good implementation, but
> it appears to be a reasonable model.
>
> To me a model can well be dynamic and experience-based.  In fact I
> wouldn't consider a model very intelligent if it didn't either adapt
> itself to experience, or it weren't embedded in a matrix which
> adapted it to experiences.  (This doesn't seem to be quite the same
> meaning that you use for model.  Your separation of the rules of
> inference, the rational faculty, and the model as a fixed and unchanging
> condition doesn't match my use of the term.  I might pull out the "rules
> of inference" as separate pieces and stick them into a datafile, but
> datafiles can be changed, if anything, more readily than programs...and
> programs are readily changeable.  To me it appears clear that much of
> the language would need to be interpretive rather than compiled.  One
> should pre-compile what one can for the sake of efficiency, but with the
> knowledge that this sacrifices flexibility for speed.)
>
> I still find that I am forced to interpret the inheritance relationship
> as an "is a child of" relationship.  And I find the idea of continually
> calculating the powerset of inheritance relationships unappealing.
> There may not be a better way, but if there isn't, then AGI can't move
> forward without vastly more powerful machines.  Probably, however, the
> calculations could be shortcut by increasing the local storage a bit.
> If each "node" maintained a list of parents and children, and a count of
> descendants and ancestors, it might suffice.  This would increase storage
> requirements, but drastically cut calculation and still enable the
> calculation of confidence.  Updating the counts could be saved for
> dreamtime.  This would imply that during the early part of learning,
> sleep would be a frequent necessity...but it should become less
> necessary as the ratio of extant knowledge to new knowledge learned
> increased.  (Note that in this case the amount of new knowledge would be
> a measured quantity, not an arbitrary constant.)
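>
> A rough sketch of such a node (Python; the names are purely illustrative
> and this is not a proposed implementation): parents and children are
> stored directly, while ancestor/descendant counts are cached and only
> refreshed in a batch pass during "dreamtime".
>
>     # Purely illustrative sketch of a node with cached counts.
>     class Node:
>         def __init__(self, name):
>             self.name = name
>             self.parents = set()
>             self.children = set()
>             self.ancestor_count = 0     # cached, possibly stale
>             self.descendant_count = 0   # cached, possibly stale
>
>     def link(child, parent):
>         # Record the inheritance ("is a child of") relation in both nodes.
>         child.parents.add(parent)
>         parent.children.add(child)
>
>     def collect(node, step):
>         # Transitive closure of parents (or children), depending on `step`.
>         seen, frontier = set(), list(step(node))
>         while frontier:
>             n = frontier.pop()
>             if n not in seen:
>                 seen.add(n)
>                 frontier.extend(step(n))
>         return seen
>
>     def dreamtime_update(nodes):
>         # Refresh the cached counts in one batch pass ("dreamtime"),
>         # instead of recomputing them after every new link.
>         for node in nodes:
>             node.ancestor_count = len(collect(node, lambda n: n.parents))
>             node.descendant_count = len(collect(node, lambda n: n.children))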
>
> I do feel that the limited sensory modality of the environment (i.e.,
> reading the keyboard) makes AGI unlikely to be feasible.  It seems to me
> that one of the necessary components of true intelligence is integrating
> multi-modal sensory experience.  This doesn't necessarily mean vision
> and touch, but SOMETHING.  As such I can see NARS (or some similar
> system) as a component of an AGI, but not as a core component (if such
> exists).  OTOH, it might develop into something that would exhibit
> consciousness.  But note that consciousness appears to be primarily an
> evaluative function rather than a decision making component.  It logs
> and evaluates decisions that have been made, and maintains a delusion
> that it made them, but they are actually made by other processes, whose
> nature is less obvious.  (It may not actually evaluate them, but I
> haven't heard of any evidence to justify denying that, and it's
> certainly a good delusion.  Still, were I to wager, I'd wager that it
> was basically a logging function, and that the evaluations were also
> made by other processes.)  Consciousness appears to have developed to
> handle those functions that required serialization...and when language
> came along, it appeared in consciousness, because the limited bandwidth
> available necessitated serial conversion.
>
>
> Pei Wang wrote:
> > Charles,
> >
> > I fully understand your response --- it is typical when people
> > interpret NARS according to their ideas about how a formal logic
> > should be understood.
> >
> > But NARS is VERY different. Especially, it uses a special semantics,
> > which defines "truth" and "meaning" in a way that is fundamentally
> > different from model-theoretic semantics (which is implicitly assumed
> > in your comments everywhere), and I believe is closer to how "truth"
> > and "meaning" are treated in natural languages (so you may end up like
> > it).
> >
> > As Mark suggested, you may want to do some reading first (such as
> > http://nars.wang.googlepages.com/wang.semantics.pdf), and after that
> > the discussion will be much more fruitful and efficient. I'm sorry
> > that I don't have a shorter explanation of the related issues.
> >
> > Pei
> >
> > On 10/8/07, Charles D Hixson <[EMAIL PROTECTED]> wrote:
> >
> >> Pei Wang wrote:
> >>
> >>> Charles,
> >>>
> >>> What you said is correct for most formal logics formulating binary
> >>> deduction, using model-theoretic semantics. However, Edward was
> >>> talking about the categorical logic of NARS, though he put the
> >>> statements in English, and omitted the truth values, which may have
> >>> caused some misunderstanding.
> >>>
> >>> Pei
> >>>
> >>> On 10/7/07, Charles D Hixson <[EMAIL PROTECTED]> wrote:
> >>>
> >>>
> >>>> Edward W. Porter wrote:
> >>>>
> >>>>
> >>>>> So is the following understanding correct?
> >>>>>
> >>>>>             If you have two statements
> >>>>>
> >>>>>                         Fred is a human
> >>>>>                         Fred is an animal
> >>>>>
> >>>>>             And assuming you know nothing more about any of the three
> >>>>>             terms in both these statements, then each of the following
> >>>>>             would be an appropriate induction
> >>>>>
> >>>>>                         A human is an animal
> >>>>>                         An animal is a human
> >>>>>                         A human and an animal are similar
> >>>>>
> >>>>>             It would only then be from further information that you
> >>>>>             would find the first of these two inductions has a larger
> >>>>>             truth value than the second and that the third probably
> >>>>>             has a larger truth value than the second.
> >>>>>
> >>>>> Edward W. Porter
> >>>>> Porter & Associates
> >>>>> 24 String Bridge S12
> >>>>> Exeter, NH 03833
> >>>>> (617) 494-1722
> >>>>> Fax (617) 494-1822
> >>>>> [EMAIL PROTECTED]
> >>>>>
> >>>>>
> >>>>>
> >>>> Actually, you know less than you have implied.
> >>>> You know that there exists an entity referred to as Fred, and that this
> >>>> entity is a member of both the set human and the set animal.  You aren't
> >>>> justified in concluding that any other member of the set human is also a
> >>>> member of the set animal.  And conversely.  And the only argument for
> >>>> similarity is that the intersection isn't empty.
> >>>>
> >>>> E.g.:
> >>>> Fred is a possessor of purple hair.   (He dyed his hair)
> >>>> Fred is a possessor of jellyfish DNA. (He was a subject in a molecular
> >>>> biology experiment.  His skin would glow green under proper stimulation.)
> >>>>
> >>>> Now admittedly these sentences would usually be said in a different form
> >>>> (i.e., "Fred has purple hair"), but they are reasonable translations of
> >>>> an equivalent sentence ("Fred is a member of the set of people with
> >>>> purple hair").
> >>>>
> >>>> You REALLY can't do good reasoning using formal logic in natural
> >>>> language...at least in English.  That's why the invention of symbolic
> >>>> logic was so important.
> >>>>
> >>>> If you want to use the old form of syllogism, then at least one of the
> >>>> sentences needs to have either an existential or universal quantifier.
> >>>> Otherwise it isn't a syllogism, but just a pair of statements.  And all
> >>>> that you can conclude from them is that they have been asserted.  (If
> >>>> they're directly contradictory, then you may question the reliability of
> >>>> the asserter...but that's tricky, as often things that appear to be
> >>>> contradictions actually aren't.)
> >>>>
> >>>> Of course, what this really means is that logic is unsuited for
> >>>> conversation... but it also implies that you shouldn't program your
> >>>> rule-sets in natural language.  You'll almost certainly either get them
> >>>> wrong or be ambiguous.  (Ambiguity is more common, but it's not
> >>>> exclusive of wrong.)
> >>>>
> >>>>
> >> Well, truth values would allow one to assign probabilities to the
> >> various statements (i.e., the proffered values plus some uncertainty),
> >> but he specifically said we didn't know anything else about the terms,
> >> so I don't see how one can go any further.  If you don't know what a
> >> human is, then knowing that Fred is one doesn't tell you anything about
> >> his other characteristics.
> >>
> >> So when you have two statements about Fred, you "know" the two
> >> statements, but you don't know anything about the relationship between
> >> them except that their intersection is non-empty.  Since it was
> >> specified that we didn't know anything about them, Fred could be a line,
> >> and human could be vertical lines and animal could be named entities.
> >>
> >> For fancier forms of logic (induction, deduction, etc.) you need to have
> >> more information.  Most forms require that there be at least a partial
> >> ordering available, if not several.  Many modes of reasoning require
> >> that a complete ordering be available.  (It doesn't need to be an
> >> ordering that guarantees that every iteration will end up with a member
> >> of the set...consider the problem of stepping through a hash table...you
> >> can do it, but you'll get lots of empty cells, and you can't predict the
> >> order.  What you can predict is complete coverage.  This is an
> >> importantly useful characteristic.  It lets you check "for all"
> >> assertions.)
> >>
> >> I'll admit I haven't read your papers on NARS, but I don't see how that
> >> could obviate these "primitive" characteristics.  You can't do induction
> >> without an ordering.   Deduction doesn't require an ordering, but it
> >> requires rules of inference.  Simple assertions don't require rules of
> >> inference, but do require assertion...which generally means a verb
> >> (possibly understood).  This is why "if x then y" is often translated
> >> into English as "x implies y", but a better translation might be "x
> >> implies y, but I'm not asserting x".
> >>
> >> P.S.:
> >>
> >> ABDUCTION INFERENCE RULE:
> >>      Given S --> M and P --> M, this implies S --> P to some degree
> >> I.e., two children of the same parent can be expected to have
> >> similarities (in the context of inheritance...they will at least be
> >> similar to the extent that they inherited the same characteristics).
> >>
> >> INDUCTION INFERENCE RULE:
> >>      Given M --> S and M --> P, this implies S --> P to some degree
> >> I.e., two parents of the same child can be expected to have similarities
> >> (in the context of inheritance).   This one seems dubious, but to the
> >> extent that it's true then one should also expect "P-->S to some degree".
> >> If I look at parents and their children, this seems reasonable...though
> >> the "to some degree" is quite unpredictable.  OTOH, if I look at object
> >> classes, it seems to fail completely.  It's quite surprising to find
> >> that induction appears to be less certain than abduction.  Either I'm not
> >> properly understanding what is meant (Well, I did mention that I hadn't
> >> read the original papers), or perhaps this needs a bit more thought.  It
> >> seems very sensitive to context.  I also note that I can't relate this
> >> definition easily to the meaning of induction used in the phrase in
> >> "mathematical induction".  Or to electrical induction.
> >>
> >> OTOH, it's certainly true that if two parents are related through a
> >> child, one can expect, at minimum, for them to be members of closely
> >> related species. ... I feel uncomfortable with calling that piece of
> >> reasoning induction, however.  Model-consistency seems a better phrase.
> >> (I.e., I have a model of the world, and in that model only closely
> >> related species can engender offspring.  N.B.:  I am aware of model
> >> violations, where, e.g., microbes can cause insects or mammals to
> >> engender offspring...so I have a more detailed model to account for
> >> that.)  This is clearly a much more complex process than your proposed
> >> simple rule...but I'm not certain that "induction" is an appropriate
> >> term.  I have a model for how other forms of induction work, and this
> >> doesn't appear to fit into it.  (OTOH, it's a rather loose model, and if
> >> this usage became well-established, it would probably adjust.  But the
> >> adjustment would, for at least a while, feel unnatural.)
> >>
> >> Still, utility rules.  If this rule is useful, then it's a valid rule.
> >> I may be unhappy with the name that it was given, and may feel that it
> >> appears unduly context sensitive, but I'm trying to apply it in the more
> >> general space of reasoning, rather than within the context of your
> >> proposed system.  And it's quite plausible that as program objects
> >> become more complex, it will be more difficult to perform multiple
> >> inheritance between distantly related objects.  (One might consider why
> >> so many computer languages have opted for single inheritance with
> >> interfaces.  It might be a consequence of this rule [which I still don't
> >> want to call induction].)
> >>

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=51294188-3b9e57
