For what it is worth, I agree with Richard Loosemore in that your
first description was a bit ambiguous, and it sounded like you were
saying that backward chaining would add facts to the knowledge base,
which would be wrong. But you've cleared up the ambiguity.

On Wed, Jul 16, 2008 at 5:02 AM, Brad Paulsen <[EMAIL PROTECTED]> wrote:
>
>
> Richard Loosemore wrote:
>>
>> Brad Paulsen wrote:
>>>
>>> I've been following this thread pretty much since the beginning.  I hope
>>> I didn't miss anything subtle.  You'll let me know if I have, I'm sure. ;-)
>>>
>>> It appears the need for temporal dependencies or different levels of
>>> reasoning has been conflated with the terms "forward-chaining" (FWC) and
>>> "backward-chaining" (BWC), which are typically used to describe different
>>> rule base evaluation algorithms used by expert systems.
>>>
>>> The terms "forward-chaining" and "backward-chaining" when used to refer
>>> to reasoning strategies have absolutely nothing to do with temporal
>>> dependencies or levels of reasoning.  These two terms refer simply, and
>>> only, to the algorithms used to evaluate "if/then" rules in a rule base
>>> (RB).  In the FWC algorithm, the "if" part is evaluated and, if TRUE, the
>>> "then" part is added to the FWC engine's output.  In the BWC algorithm, the
>>> "then" part is evaluated and, if TRUE, the "if" part is added to the BWC
>>> engine's output.  It is rare, but some systems use both FWC and BWC.
>>>
>>> That's it.  Period.  No other denotations or connotations apply.
>>
>> Whooaa there.  Something not right here.
>>
>> Backward chaining is about starting with a goal statement that you would
>> like to prove, but at the beginning it is just a hypothesis.  In BWC you go
>> about proving the statement by trying to find facts that might support it.
>>  You would not start from the statement and then add knowledge to your
>> knowledgebase that is consistent with it.
>>
>
> Richard,
>
> I really don't know where you got the idea my descriptions or algorithm
> added "...knowledge to your (the "royal" you, I presume) knowledgebase...".
> Maybe you misunderstood my use of the term "output."  Another (perhaps
> better) word for output would be "result" or "action."  I've also heard
> FWC/BWC engine output referred to as the "blackboard."
>
> By definition, an expert system rule base contains the sum total of the
> knowledge of one or more human experts in a particular domain at a given point in
> time.  When you use it, that's what you expect to get.  You don't expect the
> system to modify the rule base at runtime.  If everything you need isn't in
> the rule base, you need to talk to the knowledge engineer. I don't know of
> any expert system that adds rules to its rule base (i.e., becomes "more
> expert") at runtime.  I'm not necessarily saying that this couldn't be done,
> but I've never seen it.
>
> I have more to say about your counterexample below, but I don't want
> this thread to devolve into a critique of 1980's classic AI models.
>
> The main reason I posted to this thread was that I was seeing
> inaccurate conclusions being drawn from a misunderstanding
> of how the terms "backward" and "forward" chaining relate to temporal
> dependencies and hierarchical logic constructs.  There is no relation.
> Using forward chaining has nothing to do with "forward in time" or
> "down a level in the hierarchy."  Nor does backward chaining have
> anything to do with "backward in time" or "up a level in the hierarchy."
> These terms describe particular search algorithms used in expert system
> engines (since, at least, the mid-1980s).  Definitions vary in emphasis,
> such as the three someone posted to this thread, but they all refer to
> the same critters.
>
> If one wishes to express temporal dependencies or hierarchical levels of
> logic in these types of systems, one needs to encode these in the rules.
> I believe I even gave an example of a rule base containing temporal and
> hierarchical-conditioned rules.
>
>> So for example, if your goal is to prove that Socrates is mortal, then
>> your above description of BWC would cause the following to occur:
>>
>> 1) Does any rule allow us to conclude that x is/is not mortal?
>>
>> 2) Answer: yes, the following rules allow us to do that:
>>
>> "If x is a plant, then x is mortal"
>> "If x is a rock, then x is not mortal"
>> "If x is a robot, then x is not mortal"
>> "If x lives in a post-singularity era, then x is not mortal"
>> "If x is a slug, then x is mortal"
>> "If x is a japanese beetle, then x is mortal"
>> "If x is a side of beef, then x is mortal"
>> "If x is a screwdriver, then x is not mortal"
>> "If x is a god, then x is not mortal"
>> "If x is a living creature, then x is mortal"
>> "If x is a goat, then x is mortal"
>> "If x is a parrot in a Dead Parrot Sketch, then x is mortal"
>>
>> 3) Ask the knowledge base if Socrates is a plant, if Socrates is a rock,
>> etc etc ..... working through the above list.
>>
>> 4) [According to your version of BWC, if I understand you aright] Okay, if
>> we cannot find any facts in the KB that say that Socrates is known to be one
>> of these things, then add the first of these to the KB:
>>
>> "Socrates is a plant"
>>
>> [This is the bit that I question:  we don't do the opposite of forward
>> chaining at this step].
>>
>> 5) Now repeat to find all rules that allow us to conclude that "x is a
>> plant".  For this set of "... then x is a plant" rules, go back and repeat
>> the loop from step 2 onwards.  Then if this does not work, ....
>>
>>
>> Well, you can imagine the rest of the story: keep iterating until you can
>> prove or disprove that Socrates is mortal.
>>
>> I cannot seem to reconcile this with your statement above that backward
>> chaining simply involves the opposite of forward chaining, namely adding
>> antecedents to the KB and working backwards.
>>
>
> Since your counterexample is based on a misunderstanding of my BWC
> pseudo-code, it cannot serve as a critique of any merit.
>
> Using your example rule base, with a BWC engine, there is no way to
> determine if Socrates is mortal.  One could use that RB to determine
> if a plant, a rock, a robot or any other qualifier you have for 'x'
> in the "if" part of each rule is or is not mortal.  But never for
> Socrates.  There is no rule (direct or indirect) in your RB that
> contains any information about Socrates.  As a result, any expert
> system using that rule base could only respond with the dreaded
> "I don't know."  I say "dreaded" because, after all, it's supposed
> to be an EXPERT system, right?
>
> Your counterexample rule base could be used with an expert system
> about mortality but, even then, it would be incomplete.  In practice,
> it would be reduced to two rules, one with "mortal" in its consequent
> and one with "not mortal" in its consequent.  The listing of the
> various objects that are either mortal or not mortal would be placed
> in their respective rules' antecedents using an OR relational operator.
> The whole idea of an expert system is to write rules that describe how
> objects are related to each other in a specific domain of knowledge,
> not to enumerate individual examples of something.  For that, you can
> use any relational database.  So, a rule base for use with FWC or BWC
> engines would never be accepted if written the way your counterexample
> is written.
>
> Here is an RB written by someone who is a (somewhat mentally disturbed)
> expert in Socrates to be used by someone who isn't an expert but wants
> to verify if Socrates has the property (i.e., symptom, problem) "mortal":
>
>        #1 if x is a man then x is mortal
>        #2 if x is a philosopher then x is impecunious
>        #3 if x is Socrates then x is a man
>        #4 if x is Socrates then x is a philosopher
>
> Of course, this example RB would never pass KE muster either.  Rules #1
> and #2 have nothing directly to do with Socrates.  But, a rule base about
> disease would probably contain many of these "indirect" rules (which is
> why such a rule base would be useful for diagnosing disease given
> symptoms).  An expert on Socrates would, presumably, have entered the
> rule, "if x is Socrates then x is mortal" (i.e., Socrates is mortal),
> directly and have been done with it.  But, it has the advantage over your
> counterexample RB of at least containing the term "Socrates." ;-)
>
> With that caveat in place, let's play computer!
>
> So, we fire up the old Socrates Expert System (which uses BWC) and input
> the query term "mortal" (or ask the question, "Is Socrates mortal?").
>
>   R1's consequent matches "mortal", so we output "man", replace the
>   query term "mortal" with "man" (the rule's antecedent) and mark R1 as
>   fired.  The active RB is, thus, reduced to:
>
>        #2 if x is a philosopher then x is impecunious
>        #3 if x is Socrates then x is a man
>        #4 if x is Socrates then x is a philosopher
>
>   We reset the eval loop and try again...
>
>   R3's consequent matches "man", so we output "Socrates", replace the
>   query term "man" with "Socrates" (the rule's antecedent) and mark
>   R3 as fired.  The active RB is, thus, reduced to:
>
>        #2 if x is a philosopher then x is impecunious
>        #4 if x is Socrates then x is a philosopher
>
>   We reset the eval loop and try again...
>
> No rule has "Socrates" in its consequent, so we stop.  Our accumulated
> output is "mortal, man, Socrates" or, in a more user-friendly form:
> "Socrates is mortal because Socrates is a man and all men are mortal."  Bingo!
> We also learn that Socrates is a "man" without having asked.  Here,
> that's not very useful but, in a BWC diagnostic system, this would
> be expected, useful information.
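The trace above can be checked mechanically.  Here is a minimal Python sketch of the same backward pass over the four rules (the string-equality matching and helper name are illustrative simplifications, not a full expert-system engine):

```python
# Sketch of the BWC trace above: rules are (antecedent, consequent)
# pairs; we repeatedly match the query term against rule consequents
# and replace it with the matching rule's antecedent.

rules = [("man", "mortal"),              # 1 if x is a man then x is mortal
         ("philosopher", "impecunious"), # 2 if x is a philosopher then ...
         ("Socrates", "man"),            # 3 if x is Socrates then x is a man
         ("Socrates", "philosopher")]    # 4 if x is Socrates then ...

def backward_chain(rules, query):
    fired, output = set(), [query]
    while True:
        for i, (ante, cons) in enumerate(rules):
            if i not in fired and cons == query:
                output.append(ante)   # output the antecedent
                query = ante          # antecedent becomes the new query term
                fired.add(i)
                break                 # reset the eval loop and try again
        else:
            return output             # no consequent matches: done

print(backward_chain(rules, "mortal"))   # -> ['mortal', 'man', 'Socrates']
```

Rules #2 and #4 never fire, exactly as in the hand trace: nothing in the chain from "mortal" ever matches their consequents.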
>
> Cheers,
>
> Brad
>
>>
>>> To help remove any mystery that may still surround these concepts, here
>>> is an FWC algorithm in pseudo-code (WARNING: I'm glossing over quite a few
>>> details here – I'll be happy to answer questions on list or off):
>>>
>>>   0. set loop index to 0
>>>   1. got next rule?
>>>         no: goto 5
>>>   2. is rule FIRED?
>>>         yes: goto 1
>>>   3. is key equal to rule's antecedent?
>>>         yes: add consequent to output, mark rule as FIRED,
>>>              output is new key, goto 0
>>>   4. goto 1
>>>   5. more input data?
>>>         yes: input data is new key, goto 0
>>>   6. done.
>>>
>>> To turn this into a BWC algorithm, we need only modify Step #3 to read as
>>> follows:
>>>
>>>   3. is key equal to rule's consequent?
>>>         yes: add antecedent to output, mark rule as FIRED,
>>>         output is new key, goto 0
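As a sketch, the two algorithms differ only in which side of the rule the key is matched against.  In Python (simple string keys and the example rules are illustrative assumptions):

```python
def chain(rules, key, backward=False):
    # rules: list of (antecedent, consequent) pairs.
    # Step 3 of the pseudo-code: match the key against one side of the
    # rule and emit the other side; BWC just swaps which side is which.
    fired, output = set(), []
    while True:
        for i, (ante, cons) in enumerate(rules):
            if i in fired:
                continue                        # step 2: skip FIRED rules
            match, emit = (cons, ante) if backward else (ante, cons)
            if key == match:
                output.append(emit)             # add other side to output
                fired.add(i)                    # mark rule as FIRED
                key = emit                      # output is the new key
                break                           # goto 0: restart the scan
        else:
            return output                       # no rule matched: done

rules = [("wet", "slippery"), ("rain", "wet")]
print(chain(rules, "rain"))                     # -> ['wet', 'slippery']
print(chain(rules, "slippery", backward=True))  # -> ['wet', 'rain']
```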
>>>
>>> If you need to represent temporal dependencies in FWC/BWC systems, you
>>> have to express them using rules.  For example, if washer-a MUST be placed
>>> on bolt-b before nut-c can be screwed on, the rule base might look something
>>> like this:
>>>
>>>   1. if installed(washer-x) then install(nut-z)
>>>   2. if installed(bolt-y) then install(washer-x)
>>>   3. if notInstalled(bolt-y) then install(bolt-y)
>>>
>>> In this case, rule #1 won't get fired until rule #2 fires (nut-z can't
>>> get installed until washer-x has been installed).  Rule #2 won't get fired
>>> until rule #3 has fired (washer-x can't get installed until bolt-y has been
>>> installed). NUT-Z!  (Sorry, couldn't help it.)
>>>
>>> To kick things off, we pass in "bolt-y" as the initial key.  This
>>> triggers rule #3, which will trigger rule #2, which will trigger rule #1.
>>> These temporal dependencies result in the following assembly sequence:
>>> install bolt-y, then install washer-x, and, finally, install nut-z.
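The assembly sequence can be sketched as a tiny forward chainer with a working memory of facts.  (The assumption that firing install(X) asserts the fact installed(X) is mine, to bridge the predicate names; it is one simple way to realize the rule base above.)

```python
# Forward chaining over the washer/bolt/nut rules: a rule fires when its
# condition is in working memory; firing install(X) asserts installed(X),
# which can enable further rules.  Rule order in the list does not matter.

rules = [("installed(washer-x)",  "install(nut-z)"),
         ("installed(bolt-y)",    "install(washer-x)"),
         ("notInstalled(bolt-y)", "install(bolt-y)")]

def run(rules, facts):
    facts, actions, fired = set(facts), [], set()
    while True:
        for i, (cond, action) in enumerate(rules):
            if i not in fired and cond in facts:
                actions.append(action)
                fired.add(i)
                # the action's effect becomes a new fact
                facts.add(action.replace("install(", "installed("))
                break
        else:
            return actions

print(run(rules, ["notInstalled(bolt-y)"]))
# -> ['install(bolt-y)', 'install(washer-x)', 'install(nut-z)']
```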
>>>
>>> A similar thing can be done to implement rule hierarchies.
>>>
>>>   1. if levelIs(0) and installed(washer-x) then install(nut-z)
>>>   2. if levelIs(0) and installed(nut-z) then goLevel(1)
>>>   3. if levelIs(1) and notInstalled(gadget-xx) then install(gadget-xx)
>>>   4. if levelIs(0) and installed(bolt-y) then install(washer-x)
>>>   5. if levelIs(0) and notInstalled(bolt-y) then install(bolt-y)
>>>
>>> Here rule #2 won't fire until rule #1 has fired.  Rule #1 won't fire
>>> unless rule #4 has fired.  Rule #4 won't fire until rule #5 has fired.  And,
>>> finally, Rule #3 won't fire until Rule #2 has fired. So, level 0 could
>>> represent the reasoning required before level 1 rules (rule #3 here) will be
>>> of any use. (That's not the case here, of course, just stretching my humble
>>> example as far as I can.)
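The levelled rule base can be sketched the same way, with one extra wrinkle: these antecedents are conjunctions, so a rule fires only when all of its conditions are in working memory.  (The effects I give goLevel(1) and install(X) below are my own illustrative assumptions.)

```python
# Levelled forward chaining: antecedents are sets of conditions; a rule
# fires when its whole condition set is a subset of working memory.
rules = [({"levelIs(0)", "installed(washer-x)"},     "install(nut-z)"),
         ({"levelIs(0)", "installed(nut-z)"},        "goLevel(1)"),
         ({"levelIs(1)", "notInstalled(gadget-xx)"}, "install(gadget-xx)"),
         ({"levelIs(0)", "installed(bolt-y)"},       "install(washer-x)"),
         ({"levelIs(0)", "notInstalled(bolt-y)"},    "install(bolt-y)")]

def run(rules, facts):
    facts, fired, actions = set(facts), set(), []
    while True:
        for i, (conds, action) in enumerate(rules):
            if i not in fired and conds <= facts:
                actions.append(action)
                fired.add(i)
                if action.startswith("install("):
                    # assume install(X) asserts the fact installed(X)
                    facts.add(action.replace("install(", "installed("))
                elif action == "goLevel(1)":
                    facts.discard("levelIs(0)")   # leave level 0
                    facts.add("levelIs(1)")       # enter level 1
                break
        else:
            return actions

print(run(rules, {"levelIs(0)", "notInstalled(bolt-y)",
                  "notInstalled(gadget-xx)"}))
# Rules fire in the order #5, #4, #1, #2, #3, as described above.
```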
>>>
>>> Note, again, that the temporal and level references in the rules are NOT
>>> used by the chaining engine itself.  They probably will be used by the
>>> part of the program that does something with the engine's output (the
>>> install(), goLevel(), etc. functions).  And, again, the results should
>>> be completely unaffected by the order in which the RB rules are
>>> evaluated or fired.
>>>
>>> I hope this helps.
>>>
>>> Cheers,
>>>
>>> Brad
>>>
>>> Richard Loosemore wrote:
>>>>
>>>> Mike Tintner wrote:
>>>>>
>>>>> A tangential comment here.  Looking at this and other related threads I
>>>>> can't help thinking: jeez, here are you guys still endlessly arguing about
>>>>> the simplest of syllogisms, seemingly unable to progress beyond them.
>>>>> (Don't you ever have that feeling?)  My impression is that the fault lies
>>>>> with logic itself - as soon as you start to apply logic to the real world,
>>>>> even only tangentially with talk of "forward" and "backward" or "temporal"
>>>>> considerations, you fall into a quagmire of ambiguity, and no one is
>>>>> really sure what they are talking about.  Even the simplest "if p then q"
>>>>> logical proposition is actually infinitely ambiguous.  No?  (Is there a
>>>>> Gödel's Theorem of logic?)
>>>>
>>>> Well, now you have me in a cleft stick, methinks.
>>>>
>>>> I *hate* logic as a way to understand cognition, because I think it is a
>>>> derivative process within a high-functional AGI system, not a foundation
>>>> process that sits underneath everything else.
>>>>
>>>> But, on the other hand, I do understand how it works, and it seems a
>>>> shame for someone to trample on the concept of forward and backward 
>>>> chaining
>>>> when these are really quite clear and simple processes (at least
>>>> conceptually).
>>>>
>>>> You are right that logic is as clear as mud outside the pristine
>>>> conceptual palace within which it was conceived, but if you're gonna hang
>>>> out inside the palace it is a bit of a shame to question its elegance...
>>>>
>>>>
>>>>
>>>> Richard Loosemore
>>>>
>>>>
>>>>
>>>> -------------------------------------------
>>>> agi
>>>> Archives: https://www.listbox.com/member/archive/303/=now
>>>> RSS Feed: https://www.listbox.com/member/archive/rss/303/
>>>> Modify Your Subscription: https://www.listbox.com/member/?&;
>>>> Powered by Listbox: http://www.listbox.com
>>>>
>>>
>>>
>>
>>
>>
>
>

