Charles D. Hixson’s post of 10/8/2007 5:50 PM, was quite impressive as a
first reaction upon reading about NARS.

After I first read Pei Wang’s “A Logic of Categorization”, it took me
quite a while to know what I thought of it.  It was not until I got
answers to some of my basic questions from Pei through postings under
the current thread title that I was able to start to understand it
reasonably well.  Since then I have come to see that it is quite similar
to some of my own previous thinking, and that, if it were used in a
certain way, it would seem to have tremendous potential.

But I still have some questions about it, such as:  (PEI, IF YOU ARE
READING THIS, I WOULD BE INTERESTED IN HEARING YOUR ANSWERS)

--(1) How are episodes represented in NARS?
--(2) How are complex patterns, and sets of patterns with many
interrelated elements, represented in NARS?  (I.e., how would NARS
represent an auto mechanic’s understanding of automobiles?  Would it be
in terms of many thousands of sentences containing relational
inheritance statements such as those shown on page 197 of “A Logic of
Categorization”?)
--(3) How are time and temporal patterns represented?
--(4) How are specific mappings between the elements of a pattern and what
they map to represented in NARS?
--(5) How does NARS learn behaviors?
--(6) Finally, this is a much larger question.  Is it really optimal to
limit your representational scheme to a language in which all sentences
are based on the inheritance relation?
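
To make question (2) concrete, here is a rough sketch in Python of how a
piece of a mechanic’s knowledge might reduce to relational inheritance
statements.  The names and data structures are my own invention for
illustration; they are loosely modeled on the idea in “A Logic of
Categorization” of expressing a relation by letting the subject of an
inheritance statement be a product (ordered tuple) of terms, and are not
actual NARS syntax.

```python
# Hypothetical sketch of relational inheritance statements; not real
# Narsese.  An inheritance statement "S --> P" says S is a special case
# of P.  A relation among several terms is expressed by making the
# subject a product (an ordered tuple) of terms.

from typing import NamedTuple, Tuple

class Inheritance(NamedTuple):
    subject: Tuple[str, ...]   # a product of terms (length 1 = simple term)
    predicate: str

# A fragment of a mechanic's knowledge, stated as inheritance sentences:
knowledge = [
    Inheritance(("carburetor",), "engine-component"),
    # "(carburetor, engine) --> part-of": a carburetor is part of an engine
    Inheritance(("carburetor", "engine"), "part-of"),
    Inheritance(("worn-spark-plug", "misfire"), "cause-of"),
]

def specializations(predicate: str):
    """All subjects known to fall under a given predicate/category."""
    return [k.subject for k in knowledge if k.predicate == predicate]

print(specializations("part-of"))   # [('carburetor', 'engine')]
```

If this reading is right, a full model of automobiles would indeed be
many thousands of such sentences, which is part of what prompts the
question.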

With regard to Question (6):

Categorization is essential.  I don’t question that.  I believe the
pattern is the essential source of intelligence: it is essential to
implication and to reasoning from experience.  NARS’s categorization
relates to patterns and the relationships between patterns.  If patterns
are represented in a generalization hierarchy (where a property or set
of properties can be viewed as a generalization), then a higher-level
pattern (i.e., a category) can stand in for different species of itself
in the different contexts where those species are appropriate, thus
helping to solve two of the major problems in AI: non-literal matching
and context appropriateness.
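
To illustrate what I mean, here is a minimal sketch of a generalization
hierarchy in which a category matches any of its specializations
(non-literal matching) and resolves to a different species depending on
context (context appropriateness).  This is my own toy construction, not
anything taken from NARS; all the names are invented.

```python
# Toy generalization hierarchy (my own construction, not NARS).
# Each species maps to its parent category.
hierarchy = {
    "sedan": "car",
    "pickup-truck": "car",
    "car": "vehicle",
}

# Which species of "car" is appropriate in which context (invented data).
context_preference = {
    "family-errand": "sedan",
    "hauling-lumber": "pickup-truck",
}

def generalizes(species: str, category: str) -> bool:
    """Non-literal matching: does `species` fall under `category`
    anywhere up the hierarchy?"""
    while species in hierarchy:
        species = hierarchy[species]
        if species == category:
            return True
    return False

def instantiate(category: str, context: str) -> str:
    """Context appropriateness: pick the species of a category that fits
    the current context, falling back to the category itself."""
    choice = context_preference.get(context, category)
    return choice if generalizes(choice, category) else category

print(generalizes("sedan", "vehicle"))       # True
print(instantiate("car", "hauling-lumber"))  # pickup-truck
```

The point of the sketch is only that one structure serves both jobs: the
hierarchy that licenses a non-literal match is the same one consulted to
pick the context-appropriate species.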

All this is well and good.  But, without having had a chance to fully
consider the subject, it seems to me that there might be other aspects
of reality and representation that -- even if they might all be
reducible to representation in terms of categorization -- could perhaps
be more easily thought of by us poor humans in terms of concepts other
than categorization.

For example, Novamente bases its inference and much of its learning on
PTL, Probabilistic Term Logic, which is based on inheritance relations,
much as is NARS.  But both of Ben’s articles on Novamente spend a lot of
time describing things in terms like “hypergraph”, “maps”, “attractors”,
“logical unification”, “PredicateNodes”, “genetic programming”, and
“associative links”.  Yes, perhaps all these things could be thought of
as categories, inheritance statements, and things derived from them of
the type described in your paper “A Logic of Categorization”, and such
thoughts might provide valuable insights, but is that the most efficient
way for us mortals to think of them and for a machine to represent them?

I would be interested in hearing your answer to all these questions.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email