Richard,  

I think Wikipedia's definition of forward chaining (copied below) agrees
with my stated understanding of what forward chaining means, i.e.,
reasoning from the "if" (the conditions) to the "then" (the consequences)
in if-then statements.

So, once again there is an indication you have unfairly criticized the
statements of another.

Ed Porter

==========Wikipedia defines forward chaining as: ==============

Forward chaining is one of the two main methods of reasoning when using
inference rules (in artificial intelligence). The other is backward
chaining.

Forward chaining starts with the available data and uses inference rules
to extract more data (from an end user, for example) until a goal is
reached. An inference engine using forward chaining searches the inference
rules until it finds one whose antecedent (If clause) is known to be true.
When such a rule is found, the engine can conclude, or infer, the
consequent (Then clause), adding the new information to its data.

Inference engines will often cycle through this process until a goal is
reached.

For example, suppose that the goal is to conclude the color of my pet Fritz,
given that he croaks and eats flies, and that the rule base contains the
following four rules:

If X croaks and eats flies - Then X is a frog 
If X chirps and sings - Then X is a canary 
If X is a frog - Then X is green 
If X is a canary - Then X is yellow 

This rule base would be searched and the first rule would be selected,
because its antecedent (If Fritz croaks and eats flies) matches our data.
Now the consequent (Then Fritz is a frog) is added to the data. The rule
base is again searched and this time the third rule is selected, because
its antecedent (If Fritz is a frog) matches the data that was just
inferred. Now the new consequent (Then Fritz is green) is added to our
data. Nothing more can be inferred from this information, but we have now
accomplished our goal of determining the color of Fritz.
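
For concreteness, here is a minimal Python sketch of that forward-chaining
loop on the four rules above (the rule and fact representation is my own
toy illustration, not code from any particular inference engine):

# Forward chaining: fire any rule whose antecedents are all known facts,
# adding its consequent, until a full pass adds nothing new.
rules = [
    ({"croaks", "eats flies"}, "is a frog"),
    ({"chirps", "sings"}, "is a canary"),
    ({"is a frog"}, "is green"),
    ({"is a canary"}, "is yellow"),
]
facts = {"croaks", "eats flies"}  # what we know about Fritz

changed = True
while changed:
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)  # infer the Then clause
            changed = True

print(facts)
# {'croaks', 'eats flies', 'is a frog', 'is green'} -- Fritz is green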

Because the data determines which rules are selected and used, this method
is called data-driven, in contrast to goal-driven backward chaining
inference. The forward chaining approach is often employed by expert
systems, such as CLIPS.

One of the advantages of forward-chaining over backward-chaining is that the
reception of new data can trigger new inferences, which makes the engine
better suited to dynamic situations in which conditions are likely to
change.
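
For contrast, a backward chainer would start from the goal (say, "Fritz is
green") and work back through rule consequents toward known facts. A
minimal sketch in the same toy representation (again my own illustration,
not code from any particular engine):

# Backward chaining: to prove a goal, find rules that conclude it and
# recursively try to prove their antecedents, bottoming out in known facts.
rules = [
    ({"croaks", "eats flies"}, "is a frog"),
    ({"chirps", "sings"}, "is a canary"),
    ({"is a frog"}, "is green"),
    ({"is a canary"}, "is yellow"),
]
facts = {"croaks", "eats flies"}  # what we know about Fritz

def prove(goal, seen=frozenset()):
    if goal in facts:
        return True
    if goal in seen:  # guard against circular rule chains
        return False
    return any(all(prove(a, seen | {goal}) for a in antecedents)
               for antecedents, consequent in rules
               if consequent == goal)

print(prove("is green"))   # True: green <- frog <- croaks and eats flies
print(prove("is yellow"))  # False: nothing supports "is a canary"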


-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Saturday, July 12, 2008 7:42 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

Jim Bromer wrote:
> Ed Porter said:
> 
> It should be noted that Shruti uses a mix of forward chaining and
> backward chaining, with an architecture for controlling when and how
> each is used.
> ...
> 
> My understanding is that forward reasoning is reasoning from conditions
> to consequences, and backward reasoning is the opposite. But I think
> what is a condition and what is a consequence is not always clear,
> since one can use if-A-then-B rules in situations where A occurs before
> B, B occurs before A, or A and B occur at the same time. Thus I think
> the notion of what is forward and what is backward chaining might be
> somewhat arbitrary, and could be better clarified if it were based on
> temporal relationships. I see no reason that Shruti's "?" activation
> should not be spread across all those temporal relationships, and be
> distinguished from Shruti's "+" and "-" probabilistic activation by not
> having a probability, but just a temporary attentional characteristic.
> Additional inference control mechanisms could then be added to control
> which directions in time to reason in under different circumstances, if
> activation pruning were necessary.
> 

This is not correct.

Forward chaining is when the inference engine starts with some facts and 
then uses its knowledge base to explore what consequences can be derived 
from those facts.  Going in this direction the inference engine does not 
know where it will end up.

Backward chaining is when a hypothetical conclusion is given, and the 
engine tries to see what possible deductions might lead to this 
conclusion.  In general, the candidates generated in this first pass are 
not themselves directly known to be true (their antecedents are not 
facts in the knowledge base), so the engine has to repeat the procedure 
to see what possible deductions might lead to the candidates being true. 
The process is repeated until it bottoms out in known facts that are 
definitely true or false, or until the knowledge base is exhausted, or 
until the end of the universe, or until the engine imposes a cutoff 
(this is one of the most common results).

The two procedures are quite fundamentally different.


Richard Loosemore





> Furthermore, Shruti does not use multi-level compositional hierarchies
> for many of its patterns, and it only uses generalizational hierarchies
> for slot fillers, not for patterns. Thus, it does not have many of the
> general reasoning capabilities that are necessary for NL
> understanding.... Much of the spreading activation in a more general
> purpose AGI would be up and down compositional and generalizational
> hierarchies, which is not necessarily forward or backward chaining, but
> which is important in NL understanding. So I agree that simple forward
> and backward chaining are not enough to solve general inference
> problems of any considerable complexity.
> 
> -----------------------------------
> Can you describe some of the kinds of systems that you think would be 
> necessary for complex inference problems?  Do you feel that all AGI 
> problems (other than those technical problems that would be common to a 
> variety of complicated programs that use large data bases) are 
> essentially inference problems?  Is your use of the term inference here 
> intended to be inclusive of the various kinds of problems that would 
> have to be dealt with or are you referring to a class of problems which 
> are inferential in the more restricted sense of the term?  (I feel that 
> the two senses of the term are both legitimate; I am just a little 
> curious about what you were saying.)
> 
> I only glanced at a couple of papers about SHRUTI, and I may be looking 
> at a different paper than you were talking about, but looking at the 
> website it looks like you were talking about a connectionist model.  Do 
> you think a connectionist model (probabilistic or not) is necessary for 
> AGI?  In other words, I think a lot of us agree that some kind of 
> complex (or complicated) system of interrelated data is necessary for 
> AGI and this does correspond to a network of some kind, but these are 
> not necessarily connectionist.
> 
> What were you thinking of when you talked about multi-level 
> compositional hierarchies that you suggested were necessary for general 
> reasoning?
> 
> If I understood what you were saying, you do not think that activation 
> synchrony is enough to create insightful binding given the complexities 
> that are necessary for higher level (or more sophisticated) reasoning. 
> On the other hand you did seem to suggest that temporal synchrony spread 
> across a rhythmic flux of relational knowledge might be useful for 
> detecting some significant aspects during learning.  What do you think?
> 
> I guess what I am getting at is I would like you to make some 
> speculations about the kinds of systems that could work with complicated 
> reasoning problems.  How would you go about solving the binding problem 
> that you have been talking about?  (I haven't read the paper that I 
> think you were referring to and I only glanced at one paper on SHRUTI 
> but I am pretty sure that I got enough of what was being discussed to 
> talk about it.)
> 
> Jim Bromer
> 


