Jim, 

 

Thanks for your questions.  

 

Ben Goertzel is coming out with a book on Novamente soon and I assume it
will have a lot of good things to say on the topics you have mentioned.  

 

Below are some of my comments.

 

Ed Porter

 

====JIM BROMER WROTE=======>

Can you describe some of the kinds of systems that you think would be
necessary for complex inference problems?  Do you feel that all AGI problems
(other than those technical problems that would be common to a variety of
complicated programs that use large databases) are essentially inference
problems?  Is your use of the term inference here intended to be inclusive
of the various kinds of problems that would have to be dealt with, or are you
referring to a class of problems which are inferential in the more
restricted sense of the term?  (I feel that the two senses of the term are
both legitimate; I am just a little curious about what it was that you were
saying.)



====ED PORTER========>

I think complex inference involves inferring from remembered instances or
learned patterns of temporal correlations, including those where the things
inferred occurred before, after, and/or simultaneously with an activation
from which inference is to flow.  The events involved in such correlations
include not only sensory patterns but also emotional (i.e., value),
remembered, and/or imagined mental occurrences.  I think complex inference
needs to be able to flow up and down compositional and generalization
hierarchies.  It needs to be sensitive to current context and to prior
relevant memories.  Activations from prior activations should continue to
reside, in some form, at many nodes or node elements for various lengths of
time to provide a rich representation of context.

 

The degree to which activation is spread at each hop as a result of a given
spreading activation could be a function not only of the original energy
allocated to the origin of that spreading activation, but also of the
probability and importance of a given node from which the next hop is being
considered, both a priori and given the current context of previous and
other current activation.  It should also be a function of the probability
and importance, both a priori and given the current context, of each link
from the current node for which a determination is to be made whether or
not to activate that link.  The spreading activation should also be
controlled by some sort of global gain control, computational resource
market, or other type of competitive measure used to help focus the
spreading activation on better-scoring paths.
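As a rough illustration of the kind of spreading-activation scoring described above, here is a minimal Python sketch.  All names, parameter values, and the graph representation are my own illustrative choices, not taken from any particular system: at each hop, energy is decayed and weighted by link strength and target-node importance, and only the best-scoring links out of each node are followed, a crude stand-in for a competitive gain-control or resource-market mechanism.

```python
import heapq

def spread_activation(graph, importance, seeds, decay=0.5, top_k=3, min_energy=0.05):
    """Toy spreading activation: each node passes a decayed share of its
    energy along only its top_k strongest links (a crude competitive focus).

    graph:      {node: {neighbor: link_strength}}, strengths in [0, 1]
    importance: {node: a-priori importance in [0, 1]}
    seeds:      {node: initial energy allocated to the origin of the spread}
    """
    activation = dict(seeds)
    frontier = [(-e, n) for n, e in seeds.items()]
    heapq.heapify(frontier)
    while frontier:
        neg_e, node = heapq.heappop(frontier)
        energy = -neg_e
        if energy < min_energy:
            continue
        # Keep only the top-k links by strength: the competitive focusing step.
        links = sorted(graph.get(node, {}).items(), key=lambda kv: -kv[1])[:top_k]
        for nbr, strength in links:
            # Energy passed depends on decay, link strength, and node importance.
            passed = energy * decay * strength * importance.get(nbr, 1.0)
            if passed >= min_energy:
                activation[nbr] = activation.get(nbr, 0.0) + passed
                heapq.heappush(frontier, (-passed, nbr))
    return activation
```

Because each hop multiplies energy by at most `decay`, activation dies out geometrically and the spread terminates even on cyclic graphs; a real system would presumably add context-sensitive probabilities rather than the fixed a-priori weights used here.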

 

As in Shruti, AGI inferencing needs to be able to mix both forward and
backward chaining, and to mix inferencing up and down compositional and
generalization hierarchies.  AGIs also need to learn over time which
inferencing patterns are most successful for which types of problems, and
learn to tune the parameters when applying one or more sets of inferencing
patterns to a given problem, based not only on experience learned from past
performances of the inferencing task, but also on feedback during a given
execution of such a task.

 

Clearly something akin to a goal system is needed, and clearly something is
needed to focus attention on the patterns that currently appear most
relevant to current goals, sub-goals, or other things of importance.

 

Inferencing is clearly one of the major things AGIs have to do.  Pattern
recognition can be viewed as a form of inferencing.  Even motor behavior can
be viewed as a type of inference.  For years there have been real-world
control systems that have used if-then inference rules to control mechanical
outputs.

 

I don't know what you mean by the broad and narrow meanings of inferencing.
To me, inferencing means implying or concluding that one set of
representations is appropriate from another.  That's pretty broad.

 

I haven't thought about it enough to know if I would go so far as to say all
AGI problems are essentially inference problems, but inference is clearly one
of the major things AGI is about.


====JIM BROMER WROTE=======>

I only glanced at a couple of papers about SHRUTI, and I may be looking at a
different paper than the one you were talking about, but from the website it
looks like you were talking about a connectionist model.  Do you think a
connectionist model (probabilistic or not) is necessary for AGI?  In other
words, I think a lot of us agree that some kind of complex (or complicated)
system of interrelated data is necessary for AGI, and this does correspond to
a network of some kind, but such networks are not necessarily connectionist.

====ED PORTER========>

I don't know the exact definition of connectionist.  In its stricter sense, I
think it tends to refer to systems where a high percentage of the knowledge
has been learned automatically and is represented in automatically learned
weights and/or automatically learned graph nodes or connections, and there
are no human-defined symbols.

 

I think something like this is necessary for AGI, because it is important
for AGIs to be able to learn a broad range of different things by
themselves, and in particular because the amount of knowledge necessary to
become a human-level AGI is so huge that it would be very difficult to have
humans program it all, particularly since we don't even begin to understand
all the types of things an AGI has to know.

 

But like Ben, I see no need to make AGIs pure connectionist systems, no
need to ban all symbolic representation, and no need to totally segregate
the connectionist from the symbolic parts of an AGI system.  I think there
is much to be gained by mixing more traditional types of computing with
connectionist computing.

 

 

====JIM BROMER WROTE=======>

What were you thinking of when you talked about multi-level compositional
hierarchies that you suggested were necessary for general reasoning?

 

====ED PORTER========>

Compositional hierarchies are a very commonly used representation in all
fields of human knowledge.  It is my recollection, for example, that
articulated models used in animation, such as one corresponding roughly to
the human skeleton, are represented, and their motion is computed, as
hierarchical compositional models.  It is my understanding that many models
of bottom-up and top-down pattern recognition use hierarchical compositional
models.  For example, in speech recognition one has a compositional
hierarchy including (A) acoustic frame models (e.g., a set of 16 to 64
parameters derived by digital signal processing to represent the audio
signal over a time window such as 10 milliseconds), (B) phoneme-in-context
models providing a probabilistic representation of the sequence of three
frame models that best represents a first speech phoneme occurring in a given
context, where it is preceded by a second phoneme and followed by a third, (C)
word models, and (D) models of two- and three-word correlations.  The Serre
paper I cited at
http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf
shows a compositional hierarchy of visual models used for visual
recognition, which has the added feature of including generalization as well
as composition in its hierarchy.
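The bottom-up use of such a hierarchy can be sketched in a few lines of Python.  This is a toy example with hypothetical symbol names (frames "f1"-"f9", phonemes, one word): a real recognizer would match probabilistically over competing hypotheses, whereas this sketch composes exact matches greedily, level by level.

```python
def recognize(hierarchy, level_order, observations):
    """Bottom-up pass over a toy compositional hierarchy.

    hierarchy:    {level: {pattern_name: tuple of lower-level names}}
    level_order:  levels from lowest to highest
    observations: list of lowest-level symbol names
    Returns the sequence recognized at each level.
    """
    sequence = list(observations)
    recognized = {level_order[0]: list(sequence)}
    for level in level_order[1:]:
        patterns = hierarchy[level]
        out, i = [], 0
        while i < len(sequence):
            # Try to compose a higher-level pattern starting at position i.
            for name, parts in patterns.items():
                if tuple(sequence[i:i + len(parts)]) == parts:
                    out.append(name)
                    i += len(parts)
                    break
            else:
                out.append(sequence[i])  # no composition found; pass through
                i += 1
        recognized[level] = out
        sequence = out
    return recognized
```

For instance, nine acoustic frames compose into the phonemes "k", "ae", "t", which in turn compose into the word "cat"; adding generalization, as in the Serre hierarchy, would let one pattern name stand for a set of interchangeable lower-level alternatives.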

 


====JIM BROMER WROTE=======>

If I understood what you were saying, you do not think that activation
synchrony is enough to create insightful binding, given the complexities that
are necessary for higher-level (or more sophisticated) reasoning.  On the
other hand, you did seem to suggest that temporal synchrony spread across a
rhythmic flux of relational knowledge might be useful for detecting some
significant aspects during learning.  What do you think?



====ED PORTER========>

I think synchrony can be used to explain a lot, but many believe synchrony
in the human brain has limits as to how many different bindings can be
represented at once.  With computers one could use numbers to represent the
equivalent of synchronous pulses, and, if one is willing to pay the price in
terms of the extra memory, processing, and bandwidth required, one could do
much more explicit binding over a relatively broad region of an AGI brain
than most currently believe can be performed in the human brain.


 

(However, it is at least possible that subtle forms of synchrony may exist
that would allow much more simultaneous synchrony in the brain than is
commonly discussed.)

 

The type of implicit binding described in the Poggio paper which I cited
when I started this thread claims that under certain situations one can avoid
the need for explicit binding, enabling at least the brain to do massively
parallel matching and implication that requires bindings in parallel,
without the limitation of using only about 40 separate bindings per second in
any one brain region, which many think is the limit for explicit binding in
human neural hardware.

 

I have not seen any discussion of, nor have I figured out, how the type of
implicit binding the Poggio paper describes can be applied to solve much of
the type of semantic reasoning that I am most interested in, because the
semantic space is so high-dimensional and irregular that the number of
models required to provide the necessary level of implicit binding through
model specificity would seem too large to be practical, even in brain-level
hardware.  But I think implicit binding can probably be used to
substantially reduce the amount of explicit binding that is required, even
in semantic reasoning.  Nevertheless, I think a lot of explicit binding will
also be required for high-level semantic reasoning.

 

One of the most interesting things about the Shruti papers is that they show
that a surprising amount of computation can be done with a relatively small
number of binding phases.

 

If anybody has some good ideas about how to increase the amount of binding
that can be done implicitly in semantic reasoning in a reasonably efficient
manner, I would be interested in hearing them.

 

====JIM BROMER WROTE=======>

I guess what I am getting at is that I would like you to make some
speculations about the kinds of systems that could work on complicated
reasoning problems.  How would you go about solving the binding problem that
you have been talking about?  (I haven't read the paper that I think you
were referring to, and I only glanced at one paper on SHRUTI, but I am
pretty sure that I got enough of what was being discussed to talk about it.)

 

====ED PORTER========>

I assume a Novamente system would be able to do most of the types of
implication I am interested in, or at least be modified to do so.  From my
reading six or more months ago of Novamente literature, I forget how Ben
handled binding, but I am sure he has some way, either implicit or explicit,
because Ben is a smart guy and binding, either implicit or explicit, is
necessary for any generalized complicated pattern matching capability.  

 

My Novamente-like approach uses binding numbers to produce something
equivalent to synchrony for conveying explicit binding information.  This
allows a form of graph matching to take place.
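A minimal sketch of the idea, assuming nothing about Novamente's actual implementation and using class and method names of my own invention: integer binding IDs stand in for synchronous pulses, and two nodes count as bound together whenever they share an ID, the way two neural assemblies firing in phase would be under temporal synchrony.

```python
from itertools import count

class BindingSpace:
    """Toy use of integer binding IDs in place of synchronous pulses.

    Unlike phase-coded synchrony, which is thought to support only a small
    number of distinct phases at once, integers allow arbitrarily many
    concurrent bindings, at a cost in memory and bookkeeping.
    """
    def __init__(self):
        self._ids = count(1)   # fresh binding ID per new binding
        self.tags = {}         # node -> set of binding IDs it carries

    def new_binding(self, *nodes):
        """Bind the given nodes together under a fresh ID; return the ID."""
        bid = next(self._ids)
        for node in nodes:
            self.tags.setdefault(node, set()).add(bid)
        return bid

    def bound_together(self, a, b):
        """True if a and b share at least one binding ID."""
        return bool(self.tags.get(a, set()) & self.tags.get(b, set()))
```

Graph matching then reduces to checking that the binding IDs carried by a candidate pattern's nodes line up with those of the stored pattern, rather than comparing firing phases.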

 

I envision a system where, over the duration of short-term memory (~100
seconds), there could be, say, a million different explicit bindings, with
roughly 100 billion remnants of the spreading activation from these million
or so explicit bindings remaining in short-term memory at any given time.
This allows a substantial amount of complex semantic matching to proceed in
parallel within a rich contextual representation.

 

Even though I have some hacks to speed the communication and matching of
binding information (such as graph matching), it is still expensive, and
therefore I am interested in techniques that reduce the need for it.

 

For example, one could use traditional bottom-up pattern matching, without
any binding, to activate a set of best-scoring patterns, and then have
top-down processes from the more activated patterns test whether the
required binding exists for each of the patterns most activated by the
bottom-up pass.  In a recent phone conversation when I described this hack
to Dave Hart, he named it "binding on demand."  Such binding on demand would
tend to substantially limit the need for explicit binding to cases where
there is already good reason to believe a pattern including such binding
might be matched, and it would limit the spreading of such binding
information to the limited implication paths along which it has been
requested.
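A toy sketch of that two-stage scheme (all names, scores, and the threshold are illustrative, not from any published system): cheap bottom-up scores gate which candidate patterns ever receive the expensive top-down binding check.

```python
def binding_on_demand(patterns, bottom_up_scores, bindings, threshold=0.5):
    """Toy "binding on demand": run cheap bottom-up scoring first, then pay
    the binding cost only for patterns that already scored well.

    patterns:         {name: list of (role, filler) pairs the pattern requires}
    bottom_up_scores: {name: binding-free match score in [0, 1]}
    bindings:         set of (role, filler) pairs currently explicitly bound
    """
    confirmed = []
    for name, required in patterns.items():
        if bottom_up_scores.get(name, 0.0) < threshold:
            continue  # weak candidate: never request its bindings
        # Top-down step: verify every required binding actually exists.
        if all(pair in bindings for pair in required):
            confirmed.append(name)
    return confirmed
```

Note that a pattern with satisfied bindings but a poor bottom-up score is never examined at all, which is where the savings over eager, everywhere-at-once explicit binding would come from.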

 

 

-----Original Message-----
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Saturday, July 12, 2008 7:21 PM
To: [email protected]
Subject: RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

 

Ed Porter said:

It should be noted that Shruti uses a mix of forward chaining and backward
chaining, with an architecture for controlling when and how each is used.
...

My understanding is that forward reasoning is reasoning from conditions to
consequences, and backward reasoning is the opposite.  But I think what is a
condition and what is a consequence is not always clear, since one can apply
if-A-then-B rules to situations where A occurs before B, B occurs before A,
or A and B occur at the same time.  Thus I think the notion of what is
forward and backward chaining might be somewhat arbitrary, and could be
better clarified if it were based on temporal relationships.  I see no
reason that Shruti's "?" activation should not be spread across all those
temporal relationships, and be distinguished from Shruti's "+" and "-"
probabilistic activation by having not a probability but just a temporary
attentional characteristic.  Additional inference control mechanisms could
then be added to control which directions in time to reason in under
different circumstances, if activation pruning were necessary.

Furthermore, Shruti does not use multi-level compositional hierarchies for
many of its patterns, and it only uses generalization hierarchies for slot
fillers, not for patterns.  Thus it does not have many of the general
reasoning capabilities that are necessary for NL understanding....  Much of
the spreading activation in a more general-purpose AGI would be up and down
compositional and generalization hierarchies, which is not necessarily
forward or backward chaining, but which is important in NL understanding.
So I agree that simple forward and backward chaining are not enough to solve
general inference problems of any considerable complexity.


Jim Bromer

 


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com
