In NARS, the Deduction/Induction/Abduction trio has (at least) three
different-though-isomorphic forms, one on inheritance, one on
implication, and one mixed.

For people who don't have access to the book, see
http://nars.wang.googlepages.com/wang.abduction.pdf , though the
symbols used in that paper are slightly different from the current
form.

Pei

On 10/9/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
>
>
>
>
> Mark,
>
>
>
> The basic inference rules in NARS that would support an implication of the
> form S is a child of P are of the form:
>
>
>
> DEDUCTION INFERENCE RULE:
>      Given S --> M and M--> P, this implies S --> P
>
> ABDUCTION INFERENCE RULE:
>      Given S --> M and P --> M, this implies S --> P to some degree
>
> INDUCTION INFERENCE RULE:
>      Given M --> S and M --> P, this implies S --> P to some degree
>
>
>
> where "-->" is the inheritance relations.
>
>
>
> Your arguments are of the very different form:
>
> Given P and Q, this implies Q --> P and P --> Q
>
>
>
> And
>
>
>
> Given S and R, this implies S --> R and R --> S
>
>
>
> In the argument regarding drinking and being an adult, you do not
> appear to use any of these NARS inference rules to show that P inherits from
> Q or vice versa (unless, perhaps, one assumes multiple other NARS sentences
> or terms that might help the inference along, such as an uber-category like
> the "category of all categories," from which one could use the abduction
> rule to imply both of the inheritances mentioned, though one would assume
> the system would have learned over time that this was such a weak source of
> implication as to be normally useless).
>
>
>
> But in that example, just from common sense reasoning, including knowledge
> of the relevant subject matter, (absent any knowledge of NARS) it appears
> reasonable to imply P from Q and Q from P.  So if NARS did the same it would
> be behaving in a common sense way.  Loops in transitivity might be really
> ugly, but it seems any human-level AGI has to have the same ability to deal
> with them as human common sense.
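One reassuring property here: because confidence is multiplicative under NAL-style deduction, inference loops damp themselves out rather than amplifying. A toy sketch, assuming the NAL deduction truth function (the 0.9 values are invented):

```python
def deduction(f1, c1, f2, c2):
    # NAL strong deduction, formulas assumed from Wang's publications
    return f1 * f2, f1 * f2 * c1 * c2

# Follow a transitive loop A --> B --> C --> A --> ... : every pass
# multiplies confidence down, so a loop cannot pump up its own evidence.
f, c = 0.9, 0.9
for _ in range(4):
    f, c = deduction(f, c, 0.9, 0.9)
# after four steps, confidence has collapsed well below its starting value
```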
>
>
>
> To be honest, I do not yet understand how implication is derived from the
> inheritance relations in NARS.  Assuming truth values of one for the child
> and child/parent inheritance statement, I would guess a child implies its
> parent with a truth value of one.  I would assume a parent with a truth
> value of one implies a given child with a lesser value that decreases the
> more often the parent is mapped against other children.
>
>
>
> The argument claiming NARS says that R ("most ravens are black") is both the
> parent and child of S ("this raven is white") (and vice versa), similarly
> does not appear to be derivable from only the statements given using the
> NARS inference rules.
>
>
>
> Nor does my common sense reasoning help me understand why "most ravens are
> black" is both the parent and child of "this raven is white."  (All though
> my common sense does tell me that "this raven is black" would provide common
> sense inductive evidence for "most ravens are black" and that "this raven"
> that is black would be a child of the category of "most ravens" that are
> black.)
>
>
>
> But I do understand that each of these two statements would tend to have
> probabilistic effects on the other, as you suggested, assuming that the
> fact that a raven is black has implications for whether or not it is white.  But
> such two way probabilistic relationships are at the core of Bayesian
> inference, so there is no reason why they should not be part of an AGI.
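The two-way probabilistic relationship can be made concrete with Bayes' rule. All the numbers below are invented purely for illustration:

```python
# Invented numbers illustrating the two-way effect between
# R = "most ravens are black" and S = "this raven is white".
p_r = 0.9                # prior belief in R
p_s_given_r = 0.02       # a white raven is very unlikely if R holds
p_s_given_not_r = 0.2    # somewhat less unlikely otherwise

# total probability of the observation, then Bayes' rule
p_s = p_s_given_r * p_r + p_s_given_not_r * (1 - p_r)
p_r_given_s = p_s_given_r * p_r / p_s

# seeing a white raven lowers belief in R, as suggested above
assert p_r_given_s < p_r
```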
>
> Edward W. Porter
> Porter & Associates
> 24 String Bridge S12
> Exeter, NH 03833
> (617) 494-1722
> Fax (617) 494-1822
> [EMAIL PROTECTED]
>
>
>
>
> -----Original Message-----
> From: Mark Waser [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, October 09, 2007 2:28 PM
> To: [email protected]
> Subject: Re: [agi] Do the inference rules of categorical logic make sense?
>
>
>
> Most of the discussion I read in Pei's article related to inheritance
> relations between terms, that operated as subject and predicates in
> sentences that are inheritance statements, rather than between entire
> statements, unless the statement was a subject or a predicate of a higher
> order inheritance statement.  So what you are referring to appears to be
> beyond what I have read.
>
> Label the statement "I am allowed to drink alcohol" as P and the statement
> "I am an adult" as Q.  P implies Q and Q implies P (assume that age 21
> equals adult) --OR-- P is the parent of Q and Q is the parent of P.
>
> Label the statement that "most ravens are black" as R and the statement that
> "this raven is white" as S.  R affects the probability of S and, to a lesser
> extent, S affects the probability of R (both in a negative direction) --OR--
> R is the parent of S and S is the parent of R (although, realistically, the
> probability change is so miniscule that you really could argue that this
> isn't true).
>
> NARS's inheritance is the "inheritance" of influence on the probability
> values.
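That "inheritance of influence" shows up concretely in the NARS revision rule, which pools the evidence behind two judgments on the same statement. A sketch with assumed formulas (one reading of Wang's NAL; the numbers are invented):

```python
K = 1.0  # evidential horizon, assumed as in Wang's NAL

def revision(f1, c1, f2, c2):
    """Pool the evidence behind two judgments on the same statement."""
    w1 = K * c1 / (1 - c1)          # confidence -> amount of evidence
    w2 = K * c2 / (1 - c2)
    w = w1 + w2
    return (w1 * f1 + w2 * f2) / w, w / (w + K)

# "ravens are black" held at <0.9, 0.9>, then one white raven <0.0, 0.5>:
f, c = revision(0.9, 0.9, 0.0, 0.5)
# frequency dips only slightly (0.9 -> 0.81) while confidence grows,
# matching the point that the probability change is minuscule
```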
>
> ----- Original Message -----
>
> From: Edward W. Porter
> To: [email protected]
> Sent: Tuesday, October 09, 2007 1:12 PM
> Subject: RE: [agi] Do the inference rules of categorical logic make sense?
>
>
> Mark,
>
> Thank you for your reply.  I just ate a lunch with too much fat (luckily
> largely olive oil) in it so, my brain is a little sleepy.  If it is not too
> much trouble could you please map out the inheritance relationships from
> which one derives how "I am allowed to drink alcohol" is both a parent and
> the child of "I am an adult."  And could you please do the same with how
> "most ravens are balck" is both parent and child of "this raven is white."
>
> Most of the discussion I read in Pei's article related to inheritance
> relations between terms, that operated as subject and predicates in
> sentences that are inheritance statements, rather than between entire
> statements, unless the statement was a subject or a predicate of a higher
> order inheritance statement.  So what you are referring to appears to be
> beyond what I have read.
>
> Edward W. Porter
> Porter & Associates
> 24 String Bridge S12
> Exeter, NH 03833
> (617) 494-1722
> Fax (617) 494-1822
> [EMAIL PROTECTED]
>
>
>
>
> -----Original Message-----
> From: Mark Waser [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, October 09, 2007 12:47 PM
> To: [email protected]
> Subject: Re: [agi] Do the inference rules of categorical logic make sense?
>
>
> Thus, as I understand it, one can view all inheritance statements as
> indicating the evidence that one instance or category belongs to, and thus
> is "a child of" another category, which includes, and thus can be viewed as
> "a parent" of the other.
>
> Yes, that is inheritance as Pei uses it.  But are you comfortable with the
> fact that "I am allowed to drink alcohol" is normally both the parent and
> the child of "I am an adult " (and vice versa)?  How about the fact that
> "most ravens are black" is both the parent and child of "this raven is
> white" (and vice versa)?
>
> Since inheritance relations are transitive, the resulting hierarchy of
> categories involves nodes that can be considered ancestors (i.e., parents,
> parents of parents, etc.) of others and nodes that can be viewed as
> descendents (children, children of children, etc.) of others.
>
> And how often do you really want to do this with concepts like the above --
> or when the evidence is substantially less than unity?
>
> And loops and transitivity are really ugly . . . .
>
> NARS really isn't your father's inheritance.
>
>
> ----- Original Message -----
> From: Edward W. Porter
> To: [email protected]
> Sent: Tuesday, October 09, 2007 12:24 PM
> Subject: RE: [agi] Do the inference rules of categorical logic make sense?
>
>
>
> RE: (1) THE VALUE OF "CHILD OF" AND "PARENT OF" RELATIONS  &  (2) DISCUSSION
> OF POSSIBLE VALUE IN DISTINGUISHING BETWEEN GENERALIZATIONAL AND
> COMPOSITIONAL INHERITANCE HIERARCHIES.
>
> Re Mark Waser's 10/9/2007 9:46 AM post: Perhaps Mark understands something I
> don't.
>
> I think relations that can be viewed as "child of" and "parent of" in a
> hierarchy of categories are extremely important (for reasons set forth in
> more detail below) and it is not clear to me that Pei meant something other
> than this.
>
> If Mark or anyone else has reason to believe that "what [Pei] means is quite
> different" than such "child of" and "parent of" relations, I would
> appreciate being illuminated by what that different meaning is.
>
>
>
> My understanding of NARS is that it is concerned with inheritance relations,
> which as I understand it, indicate the truth value of the assumption that
> one category falls within another category, where category is broadly
> defined to include not only what we normally think of as categories, but
> also relationships, slots in relationships, and categories defined by sets
> of one or more properties, attributes, elements, relationships, or slots in
> relationships.  Thus, as I understand it, one can view all inheritance
> statements as indicating the evidence that one instance or category belongs
> to, and thus is "a child of" another category, which includes, and thus can
> be viewed as "a parent" of the other.  Since inheritance relations are
> transitive, the resulting hierarchy of categories involves nodes that can be
> considered ancestors (i.e., parents, parents of parents, etc.) of others and
> nodes that can be viewed as descendents (children, children of children,
> etc.) of others.
>
> I tend to think of similarity as a sibling relationship under a shared
> hidden parent category -- based on similar aspects of the siblings'
> extensions and/or intensions.
>
> In much of my own thinking I have thought of such categorization relations
> as is generalization, in which the parent is the genus, and the child is the
> species.  Generalization is important for many reasons.  First, perception
> is trying to figure out into which category or generalization of things,
> actions, or situations various parts of a current set of sensory information
> might fit.  Second, generalization is important because it is necessary for
> implication.  All those Bayesian probabilities we are used to thinking about
> such as P(A|B,C), are totally useless unless we have some way of knowing the
> probability the situation being considered contains a B or C.  To do that
> you have to have categories that help you determine the extent to which a B
> or a C is present.  To understand the implication of P(A|B,C) you have to
> have some meaning for the category A.  Generalization is important for
> behavior because one uses generalization learned from past experiences to
> develop plans for how to achieve goals, and because most action schemas are
> usually generalizations that have to be instantiated in a context-specific
> way.
>
> One of the key problems in AI has been non-literal matching.  That is why
> representation schemes that have a flexibility something like that of NARS
> are necessary for any intelligence capable of operating well in anything
> other than limited domains.  That is why so-called "invariant" or
> "hierarchical memory" representations are so valuable.  This is indicated in
> writings of Jeff Hawkins, Thomas Serre ("Learning a Dictionary of
> Shape-Components in Visual Cortex: Comparison with Neurons, Humans and
> Machines", by Thomas Serre, the google-able article I have cited so many
> times), and many others.  Such hierarchical representations achieve their
> flexibility through a composition/generalization hierarchy which presumably
> maps easily into NARS.
>
> Another key problem in AI is context sensitivity.  A hierarchical
> representation scheme that is capable of computing measures of similarity,
> fit, and implications throughout multiple levels in such a hierarchical
> representation scheme of multiple aspects of a situation in real time can be
> capable of sophisticated real time context sensitivity.  In fact, the
> ability to perform relatively extensive real-time matching and implication
> across multiple levels of compositional and generalization hierarchies has
> been a key feature of the types of systems I have been thinking of for
> years.
>
> That is one of the major reasons why I have argued for "BREAKING THE SMALL
> HARDWARE MINDSET."
>
> I understand NARS's inheritance (or categorization) as being equivalent to
> both of what I have considered two of the major dimensions in an AGI's self
> organizing memory: (1) generalization/similarity and (2) composition.  I
> was, however, aware that going down in the compositional (comp) hierarchy
> can be viewed as going up in the generalization (gen) hierarchy, since the set of things
> having one or more properties or elements of a composition can be viewed as
> a generalization of that composition (i.e., the generalization covering the
> category of things having that one or more properties or elements).
>
> Although I understand there is an important equivalence between down in the
> comp hierarchy and up in the gen hierarchy, and that the two could be
> viewed as one hierarchy, I have preferred to think of them as different
> hierarchies, because the type of gens one gets by going up in the gen
> hierarchy tend to be different than the type of gens one gets by going down
> in the comp hierarchy.
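The duality described above (down in comp equals up in gen) can be shown with a toy structure; all the terms and names here are invented for illustration:

```python
# Toy illustration (all terms invented) of the comp/gen duality:
# each part reached by going *down* the comp hierarchy induces a
# category "things having that part", which sits *above* the whole
# in the gen hierarchy.
comp_parts = {"bird": {"wing", "beak"}}   # comp hierarchy: whole -> parts

def induced_generalizations(whole):
    # the gen-hierarchy parents induced by the whole's parts
    return {"things-with-" + part for part in comp_parts[whole]}

gens = induced_generalizations("bird")
# "bird" then inherits from each induced category:
#   bird --> things-with-wing, bird --> things-with-beak
```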
>
> Each possible set in the powerset (the set of all subsets) of elements
> (eles), relationships (rels), attributes (atts) and contextual patterns
> (contextual pats) could be considered as possible generalizations.  I have
> assumed, as does Goertzel's Novamente, that there is a competitive ecosystem
> for representational resources, in which only the fittest pats and gens --
> as determined by some measure of usefulness to the system -- survive.  There
> are several major uses of gens, such as aiding in perception, providing
> inheritance of significant implication, providing appropriate level of
> representation for learning, and providing invariant representation in
> higher level comps.  Although temporary gens will be generated at a
> relatively high frequency, somewhat like the inductive implications in NARS,
> the number of gens that survive and get incorporated into a lot of comps and
> episodic reps will be an infinitesimal fraction of the powerset of eles,
> rels, atts, and contextual features stored in the system.  Pats in the up
> direction in the Gen hierarchy will tend to be ones that have been selected
> for their usefulness as generalizations.  They will often have a reasonable
> number of features that correspond to that of their species node, but with
> some of them more broadly defined.  The gens found by going down in the comp
> hierarchy are ones that have been selected for their representational value
> in a comp, and many of them would not normally be that valuable as what we
> normally think of as generalizations.
>
> In the type of system I have been thinking of I have assumed there will be
> substantially less multiple inheritance in the up direction in the gen
> hierarchy than in the down direction in the comp hierarchy (in which there
> would be potential inheritance from every ele, rel, att, and contextual
> feature of a comp's descendant nodes at multiple levels in the comp
> hierarchy below it).  Thus, for spreading activation control purposes, I
> think it is valuable to distinguish between generalization and compositional
> hierarchies, although I understand they have an important equivalence that
> should not be ignored.
>
> I wonder if NARS makes such a distinction.
>
> These are only initial thoughts.  I hope to become part of a team that gets
> an early world-knowledge computing AGI up and running.  Perhaps when I do,
> feedback from reality will change my mind.
>
> I would welcome comments, not only from Mark, but also from other readers.
>
>
> Edward W. Porter
> Porter & Associates
> 24 String Bridge S12
> Exeter, NH 03833
> (617) 494-1722
> Fax (617) 494-1822
> [EMAIL PROTECTED]
>
>
>
> -----Original Message-----
> From: Mark Waser [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, October 09, 2007 9:46 AM
> To: [email protected]
> Subject: Re: [agi] Do the inference rules of categorical logic make sense?
>
>
> >    I don't believe that this is the case at all.  NARS correctly
> > handles
> > cases where entities co-occur or where one entity implies another only due
> > to other entities/factors.  "Is an ancestor of" and "is a descendant of"
> > has nothing to do with this.
>
> Ack!  Let me rephrase.  Despite the fact that Pei always uses the words of
> inheritance (and is technically correct), what he means is quite different
> from what most people assume that he means.  You are stuck on the "common"
> meanings of the terms  "is an ancestor of" and "is a descendant of" and it's
> impeding your understanding.
>
>
> -----
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
