Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-17 Thread Brad Paulsen

Mike,

If memory serves, this thread started out as a discussion about binding in an 
AGI context.  At some point, the terms “forward-chaining” and 
“backward-chaining” were brought up and then got used in a weird way (I 
thought) as the discussion turned to temporal dependencies and hierarchical 
logic constructs.  When it appeared no one else was going to clear up the 
ambiguities, I threw in my two cents.


I made a spectacularly good living in the late 1980s building expert system 
engines and knowledge engineering front-ends, so I think I know a thing or two 
about that narrow AI technology.  Funny thing, though: at that time, the trade 
press was saying expert systems were no longer “real AI.”  They worked so well 
at what they did that the mystery wore off.  Ah, the price of success in AI. ;-)


What makes the algorithms used in expert system engines less than suitable for 
AGI is their static (snapshot) nature and crispness.  AGI really needs some 
form of dynamic programming, probabilistic (or fuzzy) rules (such as those built 
using Bayes nets or hidden Markov models), and runtime feedback.
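
To make that last point a bit more concrete, here is a toy sketch (entirely 
illustrative; the rule and predicate names are invented and no real engine is 
implied): a crisp rule simply fires or doesn't, while a probabilistic rule 
carries a confidence that runtime feedback can revise via Bayes' rule.

    # Toy contrast: a crisp rule vs. a probabilistic rule whose confidence
    # is revised by runtime feedback (all names are illustrative).
    def bayes_update(prior, p_obs_if_reliable, p_obs_if_unreliable):
        """P(rule is reliable | observation), by Bayes' rule."""
        evidence = (p_obs_if_reliable * prior
                    + p_obs_if_unreliable * (1.0 - prior))
        return p_obs_if_reliable * prior / evidence

    crisp_rule = ("is_bird(x)", "can_fly(x)")   # fires iff the "if" part is TRUE

    prob_rule = {"if": "is_bird(x)", "then": "can_fly(x)", "confidence": 0.9}

    # Runtime feedback: we observe a bird that cannot fly (say, a penguin),
    # and assume such evidence is 5x likelier if the rule is unreliable.
    prob_rule["confidence"] = bayes_update(prob_rule["confidence"], 0.1, 0.5)
    print(round(prob_rule["confidence"], 3))    # 0.643 -- the rule softened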


Thanks for the kind words.

Cheers,

Brad

Mike Tintner wrote:
Brad: By definition, an expert system rule base contains the total sum of the
knowledge of a human expert(s) in a particular domain at a given point in
time.  When you use it, that's what you expect to get.  You don't expect the
system to modify the rule base at runtime.  If everything you need isn't in
the rule base, you need to talk to the knowledge engineer. I don't know of
any expert system that adds rules to its rule base (i.e., becomes “more
expert”) at runtime.  I'm not saying necessarily that this couldn't be done,
but I've never seen it.

In which case - (thanks BTW for a v. helpful post) - are we talking 
entirely here about narrow AI? Sorry if I've missed this, but has anyone 
been discussing how to provide a flexible, evolving set of rules for 
behaviour? That's the crux of AGI, isn't it? Something at least as 
flexible as a country's Constitution and Body of Laws. What ideas are 
on offer here?






Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-16 Thread Brad Paulsen



Richard Loosemore wrote:

Brad Paulsen wrote:
I've been following this thread pretty much since the beginning.  I 
hope I didn't miss anything subtle.  You'll let me know if I have, I'm 
sure. ;=)


It appears the need for temporal dependencies or different levels of 
reasoning has been conflated with the terms “forward-chaining” (FWC) 
and “backward-chaining” (BWC), which are typically used to describe 
different rule base evaluation algorithms used by expert systems.


The terms “forward-chaining” and “backward-chaining” when used to 
refer to reasoning strategies have absolutely nothing to do with 
temporal dependencies or levels of reasoning.  These two terms refer 
simply, and only, to the algorithms used to evaluate “if/then” rules 
in a rule base (RB).  In the FWC algorithm, the “if” part is evaluated 
and, if TRUE, the “then” part is added to the FWC engine's output.  In 
the BWC algorithm, the “then” part is evaluated and, if TRUE, the “if” 
part is added to the BWC engine's output.  It is rare, but some 
systems use both FWC and BWC.


That's it.  Period.  No other denotations or connotations apply.


Whooaa there.  Something not right here.

Backward chaining is about starting with a goal statement that you would 
like to prove, but at the beginning it is just a hypothesis.  In BWC you 
go about proving the statement by trying to find facts that might 
support it.  You would not start from the statement and then add 
knowledge to your knowledgebase that is consistent with it.




Richard,

I really don't know where you got the idea my descriptions or algorithm
added “...knowledge to your (the “royal” you, I presume) knowledgebase...”.
Maybe you misunderstood my use of the term “output.”  Another (perhaps
better) word for output would be “result” or “action.”  I've also heard
FWC/BWC engine output referred to as the “blackboard.”

By definition, an expert system rule base contains the total sum of the
knowledge of a human expert(s) in a particular domain at a given point in
time.  When you use it, that's what you expect to get.  You don't expect the
system to modify the rule base at runtime.  If everything you need isn't in
the rule base, you need to talk to the knowledge engineer. I don't know of
any expert system that adds rules to its rule base (i.e., becomes “more
expert”) at runtime.  I'm not saying necessarily that this couldn't be done,
but I've never seen it.

I have more to say about your counterexample below, but I don't want
this thread to devolve into a critique of 1980s classic AI models.

The main reason I posted to this thread was that I was seeing
inaccurate conclusions being drawn based on a lack of understanding
of how the terms “backward” and “forward” chaining relate to temporal
dependencies and hierarchical logic constructs.  There is no relation.
Using forward chaining has nothing to do with “forward in time” or
“down a level in the hierarchy.”  Nor does backward chaining have
anything to do with “backward in time” or “up a level in the hierarchy.”
These terms describe particular search algorithms used in expert system
engines (since, at least, the mid-1980s).  Definitions vary in emphasis,
such as the three someone posted to this thread, but they all refer to
the same critters.

If one wishes to express temporal dependencies or hierarchical levels of
logic in these types of systems, one needs to encode these in the rules.
I believe I even gave an example of a rule base containing temporal and
hierarchical-conditioned rules.

So for example, if your goal is to prove that Socrates is mortal, then 
your above description of BWC would cause the following to occur:


1) Does any rule allow us to conclude that x is/is not mortal?

2) Answer: yes, the following rules allow us to do that:

If x is a plant, then x is mortal
If x is a rock, then x is not mortal
If x is a robot, then x is not mortal
If x lives in a post-singularity era, then x is not mortal
If x is a slug, then x is mortal
If x is a Japanese beetle, then x is mortal
If x is a side of beef, then x is mortal
If x is a screwdriver, then x is not mortal
If x is a god, then x is not mortal
If x is a living creature, then x is mortal
If x is a goat, then x is mortal
If x is a parrot in a Dead Parrot Sketch, then x is mortal

3) Ask the knowledge base if Socrates is a plant, if Socrates is a rock, 
etc., etc., working through the above list.


3) [According to your version of BWC, if I understand you aright] Okay, 
if we cannot find any facts in the KB that say that Socrates is known to 
be one of these things, then add the first of these to the KB:


Socrates is a plant

[This is the bit that I question:  we don't do the opposite of forward 
chaining at this step].


4) Now repeat to find all rules that allow us to conclude that x is a 
plant.  For this set of “... then x is a plant” rules, go back and 
repeat the loop from step 2 onwards.  Then if this does not work, ...



Well, you can imagine the rest of the story: keep iterating until you 
can prove or disprove that Socrates is mortal.

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-16 Thread Abram Demski
For what it is worth, I agree with Richard Loosemore in that your
first description was a bit ambiguous, and it sounded like you were
saying that backward chaining would add facts to the knowledge base,
which would be wrong. But you've cleared up the ambiguity.

On Wed, Jul 16, 2008 at 5:02 AM, Brad Paulsen [EMAIL PROTECTED] wrote:


 Richard Loosemore wrote:

 Brad Paulsen wrote:

 I've been following this thread pretty much since the beginning.  I hope
 I didn't miss anything subtle.  You'll let me know if I have, I'm sure. ;=)

 It appears the need for temporal dependencies or different levels of
 reasoning has been conflated with the terms “forward-chaining” (FWC) and
 “backward-chaining” (BWC), which are typically used to describe different
 rule base evaluation algorithms used by expert systems.

 The terms “forward-chaining” and “backward-chaining” when used to refer
 to reasoning strategies have absolutely nothing to do with temporal
 dependencies or levels of reasoning.  These two terms refer simply, and
 only, to the algorithms used to evaluate “if/then” rules in a rule base
 (RB).  In the FWC algorithm, the “if” part is evaluated and, if TRUE, the
 “then” part is added to the FWC engine's output.  In the BWC algorithm, the
 “then” part is evaluated and, if TRUE, the “if” part is added to the BWC
 engine's output.  It is rare, but some systems use both FWC and BWC.

 That's it.  Period.  No other denotations or connotations apply.

 Whooaa there.  Something not right here.

 Backward chaining is about starting with a goal statement that you would
 like to prove, but at the beginning it is just a hypothesis.  In BWC you go
 about proving the statement by trying to find facts that might support it.
  You would not start from the statement and then add knowledge to your
 knowledgebase that is consistent with it.


 Richard,

 I really don't know where you got the idea my descriptions or algorithm
 added “...knowledge to your (the “royal” you, I presume) knowledgebase...”.
 Maybe you misunderstood my use of the term “output.”  Another (perhaps
 better) word for output would be “result” or “action.”  I've also heard
 FWC/BWC engine output referred to as the “blackboard.”

 By definition, an expert system rule base contains the total sum of the
 knowledge of a human expert(s) in a particular domain at a given point in
 time.  When you use it, that's what you expect to get.  You don't expect the
 system to modify the rule base at runtime.  If everything you need isn't in
 the rule base, you need to talk to the knowledge engineer. I don't know of
 any expert system that adds rules to its rule base (i.e., becomes “more
 expert”) at runtime.  I'm not saying necessarily that this couldn't be done,
 but I've never seen it.

 I have more to say about your counterexample below, but I don't want
 this thread to devolve into a critique of 1980s classic AI models.

 The main reason I posted to this thread was that I was seeing
 inaccurate conclusions being drawn based on a lack of understanding
 of how the terms “backward” and “forward” chaining relate to temporal
 dependencies and hierarchical logic constructs.  There is no relation.
 Using forward chaining has nothing to do with “forward in time” or
 “down a level in the hierarchy.”  Nor does backward chaining have
 anything to do with “backward in time” or “up a level in the hierarchy.”
 These terms describe particular search algorithms used in expert system
 engines (since, at least, the mid-1980s).  Definitions vary in emphasis,
 such as the three someone posted to this thread, but they all refer to
 the same critters.

 If one wishes to express temporal dependencies or hierarchical levels of
 logic in these types of systems, one needs to encode these in the rules.
 I believe I even gave an example of a rule base containing temporal and
 hierarchical-conditioned rules.

 So for example, if your goal is to prove that Socrates is mortal, then
 your above description of BWC would cause the following to occur:

 1) Does any rule allow us to conclude that x is/is not mortal?

 2) Answer: yes, the following rules allow us to do that:

 If x is a plant, then x is mortal
 If x is a rock, then x is not mortal
 If x is a robot, then x is not mortal
 If x lives in a post-singularity era, then x is not mortal
 If x is a slug, then x is mortal
 If x is a Japanese beetle, then x is mortal
 If x is a side of beef, then x is mortal
 If x is a screwdriver, then x is not mortal
 If x is a god, then x is not mortal
 If x is a living creature, then x is mortal
 If x is a goat, then x is mortal
 If x is a parrot in a Dead Parrot Sketch, then x is mortal

 3) Ask the knowledge base if Socrates is a plant, if Socrates is a rock,
 etc., etc., working through the above list.

 3) [According to your version of BWC, if I understand you aright] Okay, if
 we cannot find any facts in the KB that say that Socrates is known to be one
 of these things, then add the first of these to the KB:

 Socrates is a plant

 [This is the bit that I question: we don't do the opposite of forward
 chaining at this step].

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-16 Thread Abram Demski
The way I see it, on the expert systems front, Bayesian networks
replaced the algorithms currently being discussed. These are more
flexible, since they are probabilistic, and also have associated
learning algorithms. For nonprobabilistic systems, the resolution
algorithm is more generally applicable (it deals with any logical
statement it is given, rather than only with if-then rules).
Resolution subsumes both forward and backward chaining; to forward
chain, we simply resolve statements in the database; to backward
chain, we add the negation of the query to the database and try to
derive a contradiction by resolving statements (thus proving the query
statement by reductio ad absurdum).
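
To make the refutation idea concrete, here is a toy propositional resolution 
prover (a from-scratch sketch written for this thread, not SOAR or any 
production system; the clause names are invented for illustration):

    # Literals are strings; "~p" is the negation of "p".
    # A clause is a frozenset of literals, read as a disjunction.
    def negate(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolvents(c1, c2):
        """All clauses obtainable by resolving c1 against c2."""
        return [(c1 - {lit}) | (c2 - {negate(lit)})
                for lit in c1 if negate(lit) in c2]

    def refutable(clauses):
        """True iff resolution derives the empty clause (a contradiction)."""
        clauses = set(clauses)
        while True:
            new = set()
            for a in clauses:
                for b in clauses:
                    for r in resolvents(a, b):
                        if not r:
                            return True      # empty clause: contradiction
                        new.add(frozenset(r))
            if new <= clauses:
                return False                 # quiescence: no contradiction
            clauses |= new

    # KB, propositionalized: "man" and "if man then mortal" = {~man, mortal}.
    kb = [frozenset(["man"]), frozenset(["~man", "mortal"])]
    # Backward chaining, resolution-style: add the NEGATED query, then refute.
    print(refutable(kb + [frozenset(["~mortal"])]))   # True: "mortal" follows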

The most AGI-oriented remnant of the rule-based system period is SOAR
(http://sitemaker.umich.edu/soar). It does add new rules to its
system, but they are summaries of old rules (to speed later
inference). SOAR recently added reinforcement learning capability, but
it doesn't use it to generate new rules, as far as I know.

On Wed, Jul 16, 2008 at 7:16 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 Brad: By definition, an expert system rule base contains the total sum of
 the
 knowledge of a human expert(s) in a particular domain at a given point in
 time.  When you use it, that's what you expect to get.  You don't expect the
 system to modify the rule base at runtime.  If everything you need isn't in
 the rule base, you need to talk to the knowledge engineer. I don't know of
 any expert system that adds rules to its rule base (i.e., becomes more
 expert) at runtime.  I'm not saying necessarily that this couldn't be done,
 but I've never seen it.

 In which case - (thanks BTW for a v. helpful post) - are we talking entirely
 here about narrow AI? Sorry if I've missed this, but has anyone been
 discussing how to provide a flexible, evolving set of rules for behaviour?
 That's the crux of AGI, isn't it? Something at least as flexible as a
 country's Constitution and Body of Laws. What ideas are on offer here?





Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-16 Thread Richard Loosemore

Abram Demski wrote:

For what it is worth, I agree with Richard Loosemore in that your
first description was a bit ambiguous, and it sounded like you were
saying that backward chaining would add facts to the knowledge base,
which would be wrong. But you've cleared up the ambiguity.


I concur:  I was simply trying to clear up an ambiguity in the phrasing.



Richard Loosemore








On Wed, Jul 16, 2008 at 5:02 AM, Brad Paulsen [EMAIL PROTECTED] wrote:


Richard Loosemore wrote:

Brad Paulsen wrote:

I've been following this thread pretty much since the beginning.  I hope
I didn't miss anything subtle.  You'll let me know if I have, I'm sure. ;=)

It appears the need for temporal dependencies or different levels of
reasoning has been conflated with the terms “forward-chaining” (FWC) and
“backward-chaining” (BWC), which are typically used to describe different
rule base evaluation algorithms used by expert systems.

The terms “forward-chaining” and “backward-chaining” when used to refer
to reasoning strategies have absolutely nothing to do with temporal
dependencies or levels of reasoning.  These two terms refer simply, and
only, to the algorithms used to evaluate “if/then” rules in a rule base
(RB).  In the FWC algorithm, the “if” part is evaluated and, if TRUE, the
“then” part is added to the FWC engine's output.  In the BWC algorithm, the
“then” part is evaluated and, if TRUE, the “if” part is added to the BWC
engine's output.  It is rare, but some systems use both FWC and BWC.

Whooaa there.  Something not right here.

Backward chaining is about starting with a goal statement that you would
like to prove, but at the beginning it is just a hypothesis.  In BWC you go
about proving the statement by trying to find facts that might support it.
 You would not start from the statement and then add knowledge to your
knowledgebase that is consistent with it.


Richard,

I really don't know where you got the idea my descriptions or algorithm
added “...knowledge to your (the “royal” you, I presume) knowledgebase...”.
Maybe you misunderstood my use of the term “output.”  Another (perhaps
better) word for output would be “result” or “action.”  I've also heard
FWC/BWC engine output referred to as the “blackboard.”

By definition, an expert system rule base contains the total sum of the
knowledge of a human expert(s) in a particular domain at a given point in
time.  When you use it, that's what you expect to get.  You don't expect the
system to modify the rule base at runtime.  If everything you need isn't in
the rule base, you need to talk to the knowledge engineer. I don't know of
any expert system that adds rules to its rule base (i.e., becomes “more
expert”) at runtime.  I'm not saying necessarily that this couldn't be done,
but I've never seen it.

I have more to say about your counterexample below, but I don't want
this thread to devolve into a critique of 1980s classic AI models.

The main reason I posted to this thread was that I was seeing
inaccurate conclusions being drawn based on a lack of understanding
of how the terms “backward” and “forward” chaining relate to temporal
dependencies and hierarchical logic constructs.  There is no relation.
Using forward chaining has nothing to do with “forward in time” or
“down a level in the hierarchy.”  Nor does backward chaining have
anything to do with “backward in time” or “up a level in the hierarchy.”
These terms describe particular search algorithms used in expert system
engines (since, at least, the mid-1980s).  Definitions vary in emphasis,
such as the three someone posted to this thread, but they all refer to
the same critters.

If one wishes to express temporal dependencies or hierarchical levels of
logic in these types of systems, one needs to encode these in the rules.
I believe I even gave an example of a rule base containing temporal and
hierarchical-conditioned rules.


So for example, if your goal is to prove that Socrates is mortal, then
your above description of BWC would cause the following to occur:

1) Does any rule allow us to conclude that x is/is not mortal?

2) Answer: yes, the following rules allow us to do that:

If x is a plant, then x is mortal
If x is a rock, then x is not mortal
If x is a robot, then x is not mortal
If x lives in a post-singularity era, then x is not mortal
If x is a slug, then x is mortal
If x is a Japanese beetle, then x is mortal
If x is a side of beef, then x is mortal
If x is a screwdriver, then x is not mortal
If x is a god, then x is not mortal
If x is a living creature, then x is mortal
If x is a goat, then x is mortal
If x is a parrot in a Dead Parrot Sketch, then x is mortal

3) Ask the knowledge base if Socrates is a plant, if Socrates is a rock,
etc., etc., working through the above list.

3) [According to your version of BWC, if I understand you aright] Okay, if
we cannot find any facts in the KB that say that Socrates is known to be one
of these things, then add the first of these to the KB:

Socrates is a plant

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Brad Paulsen
I've been following this thread pretty much since the beginning.  I hope I 
didn't miss anything subtle.  You'll let me know if I have, I'm sure. ;=)


It appears the need for temporal dependencies or different levels of reasoning 
has been conflated with the terms “forward-chaining” (FWC) and 
“backward-chaining” (BWC), which are typically used to describe different rule 
base evaluation algorithms used by expert systems.


The terms “forward-chaining” and “backward-chaining” when used to refer to 
reasoning strategies have absolutely nothing to do with temporal dependencies or 
levels of reasoning.  These two terms refer simply, and only, to the algorithms 
used to evaluate “if/then” rules in a rule base (RB).  In the FWC algorithm, the 
“if” part is evaluated and, if TRUE, the “then” part is added to the FWC 
engine's output.  In the BWC algorithm, the “then” part is evaluated and, if 
TRUE, the “if” part is added to the BWC engine's output.  It is rare, but some 
systems use both FWC and BWC.


That's it.  Period.  No other denotations or connotations apply.

To help remove any mystery that may still surround these concepts, here is an 
FWC algorithm in pseudo-code (WARNING: I'm glossing over quite a few details 
here – I'll be happy to answer questions on list or off):


   0. set loop index to 0
   1. got next rule?
      no: goto 5
   2. is rule FIRED?
      yes: goto 1
   3. is key equal to rule's antecedent?
      yes: add consequent to output, mark rule as FIRED,
           output is new key, goto 0
   4. goto 1
   5. more input data?
      yes: input data is new key, goto 0
   6. done.

To turn this into a BWC algorithm, we need only modify Step #3 to read as 
follows:

   3. is key equal to rule's consequent?
      yes: add antecedent to output, mark rule as FIRED,
           output is new key, goto 0

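For anyone who would rather run the above than trace it by hand, here is one 
way the pseudo-code might be rendered in Python (my own transcription, so 
treat it as a sketch: the "goto 0" becomes a rescan of the rule base, and the 
key is matched against the whole antecedent/consequent string):

    # Rules are (antecedent, consequent) pairs.  The ONLY difference
    # between FWC and BWC is which side of a rule the key is matched on.
    def chain(rules, inputs, backward=False):
        output, fired = [], set()
        pending = list(inputs)
        while pending:                       # step 5: more input data?
            key = pending.pop(0)
            rescan = True
            while rescan:                    # "goto 0": rescan the rule base
                rescan = False
                for i, (ante, cons) in enumerate(rules):
                    if i in fired:           # step 2: skip FIRED rules
                        continue
                    match, result = (cons, ante) if backward else (ante, cons)
                    if key == match:         # step 3: the key matches the rule
                        output.append(result)
                        fired.add(i)
                        key = result         # output is the new key
                        rescan = True
                        break
        return output
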
If you need to represent temporal dependencies in FWC/BWC systems, you have to 
express them using rules.  For example, if washer-x MUST be placed on bolt-y 
before nut-z can be screwed on, the rule base might look something like this:


   1. if installed(washer-x) then install(nut-z)
   2. if installed(bolt-y) then install(washer-x)
   3. if notInstalled(bolt-y) then install(bolt-y)

In this case, rule #1 won't get fired until rule #2 fires (nut-z can't get 
installed until washer-x has been installed).  Rule #2 won't get fired until 
rule #3 has fired (washer-x can't get installed until bolt-y has been 
installed). NUT-Z!  (Sorry, couldn't help it.)


To kick things off, we pass in “bolt-y” as the initial key.  This triggers rule 
#3, which will trigger rule #2, which will trigger rule #1. These temporal 
dependencies result in the following assembly sequence: install bolt-y, then 
install washer-x, and, finally, install nut-z.
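
Run through the Python sketch above, that rule base yields the same order, 
with one liberty flagged: the surrounding program's job of performing each 
install(x) action and then asserting the fact installed(x) is faked here by 
a string rewrite.

    rules = [("installed(washer-x)",  "install(nut-z)"),
             ("installed(bolt-y)",    "install(washer-x)"),
             ("notInstalled(bolt-y)", "install(bolt-y)")]

    key, plan = "notInstalled(bolt-y)", []
    while True:
        actions = chain(rules, [key])
        if not actions:
            break
        plan += actions
        # pretend the program performed the action and asserted the fact
        key = actions[-1].replace("install(", "installed(")
    print(plan)  # ['install(bolt-y)', 'install(washer-x)', 'install(nut-z)']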


A similar thing can be done to implement rule hierarchies.

   1. if levelIs(0) and installed(washer-x) then install(nut-z)
   2. if levelIs(0) and installed(nut-z) then goLevel(1)
   3. if levelIs(1) and notInstalled(gadget-xx) then install(gadget-xx)
   4. if levelIs(0) and installed(bolt-y) then install(washer-x)
   5. if levelIs(0) and notInstalled(bolt-y) then install(bolt-y)

Here rule #2 won't fire until rule #1 has fired.  Rule #1 won't fire unless rule 
#4 has fired.  Rule #4 won't fire until rule #5 has fired.  And, finally, Rule 
#3 won't fire until Rule #2 has fired. So, level 0 could represent the reasoning 
required before level 1 rules (rule #3 here) will be of any use. (That's not the 
case here, of course, just stretching my humble example as far as I can.)


Note, again, that the temporal and level references in the rules are NOT used by 
the chaining engine itself.  They probably will be used by the part of the 
program that does something with the engine's output (the install(), goLevel(), 
etc. functions).  And, again, the results should be completely unaffected by the 
order in which the RB rules are evaluated or fired.


I hope this helps.

Cheers,

Brad

Richard Loosemore wrote:

Mike Tintner wrote:
A tangential comment here. Looking at this and other related threads I 
can't help thinking: jeez, here are you guys still endlessly arguing 
about the simplest of syllogisms, seemingly unable to progress beyond 
them. (Don't you ever have that feeling?) My impression is that the 
fault lies with logic itself - as soon as you start to apply logic to 
the real world, even only tangentially with talk of forward and 
backward or temporal considerations, you fall into a quagmire of 
ambiguity, and no one is really sure what they are talking about. Even 
the simplest “if p then q” logical proposition is actually infinitely 
ambiguous. No?  (Is there a Gödel's Theorem of logic?)


Well, now you have me in a cleft stick, methinks.

I *hate* logic as a way to understand cognition, because I think it is a 
derivative process within a high-functional AGI system, not a foundation 
process that sits underneath everything else.


But, on the other hand, I do understand how it 

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Lukasz Stafiniak
On Tue, Jul 15, 2008 at 8:01 AM, Brad Paulsen [EMAIL PROTECTED] wrote:

 The terms “forward-chaining” and “backward-chaining” when used to refer to
 reasoning strategies have absolutely nothing to do with temporal
 dependencies or levels of reasoning.  These two terms refer simply, and
 only, to the algorithms used to evaluate “if/then” rules in a rule base
 (RB).  In the FWC algorithm, the “if” part is evaluated and, if TRUE, the
 “then” part is added to the FWC engine's output.  In the BWC algorithm, the
 “then” part is evaluated and, if TRUE, the “if” part is added to the BWC
 engine's output.  It is rare, but some systems use both FWC and BWC.

 That's it.  Period.  No other denotations or connotations apply.

Curiously, the definition given by Abram Demski is the only one I've
been aware of until yesterday (I believe it's the one used among
theorem-proving people). Let's see what googling says on forward
chaining:

1. (Wikipedia)

2. http://www.amzi.com/ExpertSystemsInProlog/05forward.htm
A large number of expert systems require the use of forward chaining,
or data driven inference. [...]
Data driven expert systems are different from the goal driven, or
backward chaining systems seen in the previous chapters.
The goal driven approach is practical when there are a reasonable
number of possible final answers, as in the case of a diagnostic or
identification system. The system methodically tries to prove or
disprove each possible answer, gathering the needed information as it
goes.
The data driven approach is practical when combinatorial explosion
creates a seemingly infinite number of possible right answers, such as
possible configurations of a machine.

3. http://ai.eecs.umich.edu/cogarch0/common/prop/chain.html
Forward-chaining implies that upon assertion of new knowledge, all
relevant inductive and deductive rules are fired exhaustively,
effectively making all knowledge about the current state explicit
within the state. Forward chaining may be regarded as progress from a
known state (the original knowledge) towards a goal state(s).
Backward-chaining by an architecture means that no rules are fired
upon assertion of new knowledge. When an unknown predicate about a
known piece of knowledge is detected in an operator's condition list,
all rules relevant to the knowledge in question are fired until the
question is answered or until quiescence. Thus, backward chaining
systems normally work from a goal state back to the original state.

4. http://www.ontotext.com/inference/reasoning_strategies.html
* Forward-chaining: to start from the known facts and to perform
the inference in an inductive fashion. This kind of reasoning can have
diverse objectives, for instance: to compute the inferred closure; to
answer a particular query; to infer a particular sort of knowledge
(e.g. the class taxonomy); etc.
* Backward-chaining: to start from a particular fact or from a
query and by means of using deductive reasoning to try to verify that
fact or to obtain all possible results of the query. Typically, the
reasoner decomposes the fact into simpler facts that can be found in
the knowledge base or transforms it into alternative facts that can be
proven applying further recursive transformations. 




RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Jim Bromer
Ed Porter said:
“You imply you have been able to accomplish a somewhat
similar implicit representation of bindings in a much higher
dimensional and presumably large semantic space.  Unfortunately I was
unable to understand from your description how you claimed to have
accomplished this.”

-----
I never implied that I have been able to accomplish a somewhat similar implicit 
representation of bindings in a much higher dimension and presumably large 
semantic space.

I clearly stated:

“I have often talked about the
use of multi-level complex methods and I see some similarity between the
ideas that they discussed and my ideas.”
-and,
“The complex groupings of
objects that I have in mind would have been derived using different
methods of analysis and combination and when a group of them is called
from an input analysis their use should tend to narrow the objects that
might be expected given the detection by the feature detectors.
Although I haven't expressed myself very clearly, this is very similar
to what Riesenhuber and Poggio were suggesting that their methods would
be capable of. So, yes, I think some similar methods can be used in NLP.”

I clearly used the expression “in mind” just to avoid the kind of 
misunderstanding that you made. I never made the exaggerated claim that I had 
accomplished it.


The difference between having an idea in mind and having claimed to have 
accomplished a goal, which the majority of participants in the group would 
acknowledge is elusive, should be obvious and easy to understand.


I am not claiming that I have a method that would work in all semantic space.  
I would be happy to claim that I do have a theory which I believe should show 
some limited extensibility in semantic space that goes beyond other current 
theories.  However, I will not know for sure until I test it and right now that 
looks like it would be years off.


I would be happy to continue the dialog if it can be conducted in a less
confrontational and more genial manner than it has been during
the past week.

Jim Bromer




Jim,
 
In
the Riesenhuber and Poggio paper the bindings that were handled
implicitly involved spatial relationships, such as an observed roughly
horizontal line substantially touching an observed roughly vertical
line at their respective ends, even though there might be other
horizontal and vertical lines not having this relationship in the input
pixel space.  It achieves such implicit bindings by having enough
separate models to be able to detect, by direct mapping, such a
touching relationship between horizontal and vertical lines at each
of many different locations in the visual input space.
 
But
the Poggio paper deals with a relatively small number of relationships
in a relatively small (160x160) low dimensional (2d) space using 23
million models.  You imply you have been able to accomplish a somewhat
similar implicit representation of bindings in a much higher
dimensional and presumably large semantic space.  Unfortunately I was
unable to understand from your description how you claimed to have
accomplished this.
 
Could you please clarify your description with regard to this point.
 
Ed Porter
 
-Original Message-
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 14, 2008 1:38 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE 
BINDING PROBLEM?
 
I started reading a Riesenhuber and Poggio paper and there are some
similarities to ideas that I have considered, although my ideas were
explicitly developed about computer programs that would use symbolic
information and are not neural theories.  It is interesting that
Riesenhuber and Poggio argued that the binding problem seems to be a
problem for only some models of object recognition.  In other words,
it seems that they are claiming that the problem disappears with their
model of neural cognition! 

The study of feature detectors in
cats' eyes is old news and I did incorporate that information into the
development of my own theories.

I have often talked about the
use of multi-level complex methods and I see some similarity between the
ideas that they discussed and my ideas.  In my model an input would be
scanned for different features using different kinds of analysis on the
input.  So then a configuration of simple features would be derived
from the scan and these could be associated with a number of complex
groups of objects that have been previously associated with the
features.  Because the complex groups of objects are complexes (in the
general sense), and would be learned by previous experience, they are
not insipidly modeled on one standard model. These complex objects are
complex in that they are not all cut from one standard.  The older
implementations that used operations that were taken from set theory on
groups were set on object models that were very old-world and were not
derived from learning.  For example they were non-experiential. (I

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Richard Loosemore

Brad Paulsen wrote:
I've been following this thread pretty much since the beginning.  I hope 
I didn't miss anything subtle.  You'll let me know if I have, I'm sure. ;=)


It appears the need for temporal dependencies or different levels of 
reasoning has been conflated with the terms “forward-chaining” (FWC) and 
“backward-chaining” (BWC), which are typically used to describe 
different rule base evaluation algorithms used by expert systems.


The terms “forward-chaining” and “backward-chaining” when used to refer 
to reasoning strategies have absolutely nothing to do with temporal 
dependencies or levels of reasoning.  These two terms refer simply, and 
only, to the algorithms used to evaluate “if/then” rules in a rule base 
(RB).  In the FWC algorithm, the “if” part is evaluated and, if TRUE, 
the “then” part is added to the FWC engine's output.  In the BWC 
algorithm, the “then” part is evaluated and, if TRUE, the “if” part is 
added to the BWC engine's output.  It is rare, but some systems use both 
FWC and BWC.


That's it.  Period.  No other denotations or connotations apply.


Whooaa there.  Something not right here.

Backward chaining is about starting with a goal statement that you would 
like to prove, but at the beginning it is just a hypothesis.  In BWC you 
go about proving the statement by trying to find facts that might 
support it.  You would not start from the statement and then add 
knowledge to your knowledgebase that is consistent with it.


So for example, if your goal is to prove that Socrates is mortal, then 
your above description of BWC would cause the following to occur:


1) Does any rule allow us to conclude that x is/is not mortal?

2) Answer: yes, the following rules allow us to do that:

If x is a plant, then x is mortal
If x is a rock, then x is not mortal
If x is a robot, then x is not mortal
If x lives in a post-singularity era, then x is not mortal
If x is a slug, then x is mortal
If x is a Japanese beetle, then x is mortal
If x is a side of beef, then x is mortal
If x is a screwdriver, then x is not mortal
If x is a god, then x is not mortal
If x is a living creature, then x is mortal
If x is a goat, then x is mortal
If x is a parrot in a Dead Parrot Sketch, then x is mortal

3) Ask the knowledge base if Socrates is a plant, if Socrates is a rock, 
etc., etc., working through the above list.


3) [According to your version of BWC, if I understand you aright] Okay, 
if we cannot find any facts in the KB that say that Socrates is known to 
be one of these things, then add the first of these to the KB:


Socrates is a plant

[This is the bit that I question:  we don't do the opposite of forward 
chaining at this step].


4) Now repeat to find all rules that allow us to conclude that x is a 
plant.  For this set of “... then x is a plant” rules, go back and 
repeat the loop from step 2 onwards.  Then if this does not work, ...



Well, you can imagine the rest of the story: keep iterating until you 
can prove or disprove that Socrates is mortal.


I cannot seem to reconcile this with your statement above that backward 
chaining simply involves the opposite of forward chaining, namely adding 
antecedents to the KB and working backwards.
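
For concreteness, the goal-driven procedure sketched in the numbered steps 
above looks something like this in Python (a toy propositional rendering 
with an invented fact base; the point is that nothing ever gets ADDED to 
the KB):

    RULES = [("living_creature", "mortal"),   # if x is a living creature ...
             ("goat", "mortal"),              # if x is a goat ...
             ("goat", "living_creature")]
    FACTS = {"goat"}                          # the KB: Socrates is a goat

    def prove(goal, depth=0):
        if goal in FACTS:                     # a ground fact settles it
            return True
        if depth > 10:                        # crude guard against rule loops
            return False
        # Steps 1-2: which rules allow us to conclude the goal?  Step 4:
        # recurse, treating each such rule's "if" part as a new subgoal.
        return any(prove(ante, depth + 1)
                   for ante, cons in RULES if cons == goal)

    print(prove("mortal"))                    # True, and the KB is unchanged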







To help remove any mystery that may still surround these concepts, here 
is an FWC algorithm in pseudo-code (WARNING: I'm glossing over quite a 
few details here – I'll be happy to answer questions on list or off):


   0. set loop index to 0
   1. got next rule?
      no: goto 5
   2. is rule FIRED?
      yes: goto 1
   3. is key equal to rule's antecedent?
      yes: add consequent to output, mark rule as FIRED,
           output is new key, goto 0
   4. goto 1
   5. more input data?
      yes: input data is new key, goto 0
   6. done.

To turn this into a BWC algorithm, we need only modify Step #3 to read 
as follows:

   3. is key equal to rule's consequent?
      yes: add antecedent to output, mark rule as FIRED,
           output is new key, goto 0

If you need to represent temporal dependencies in FWC/BWC systems, you 
have to express them using rules.  For example, if washer-x MUST be 
placed on bolt-y before nut-z can be screwed on, the rule base might 
look something like this:


   1. if installed(washer-x) then install(nut-z)
   2. if installed(bolt-y) then install(washer-x)
   3. if notInstalled(bolt-y) then install(bolt-y)

In this case, rule #1 won't get fired until rule #2 fires (nut-z can't 
get installed until washer-x has been installed).  Rule #2 won't get 
fired until rule #3 has fired (washer-x can't get installed until bolt-y 
has been installed). NUT-Z!  (Sorry, couldn't help it.)


To kick things off, we pass in “bolt-y” as the initial key.  This 
triggers rule #3, which will trigger rule #2, which will trigger rule 
#1. These temporal dependencies result in the following assembly 
sequence: install bolt-y, then install washer-x, and, finally, install 
nut-z.


A similar thing can be done to implement rule hierarchies.

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Ed Porter
Lukasz,

Your post below was great.

Your clippings from Google confirm much of the understanding that Abram
Demski was helping me reach yesterday.

In one of his posts Abram was discussing my prior statement that top-down
activation could be either forward or backward chaining.  He said “If the
network is passing down an expectation based on other data, informing the
lower network of what to expect, then this is forward chaining. But if the
signal is not an expectation, but more like a query ‘pay attention to data
that might conform/contradict this hypothesis, and notify me ASAP’ then it
is backwards chaining. And it seems realistic that it can be both of these.”

I am interpreting this quoted statement as implying the purpose of backward
chaining is to search for forward chaining paths that either confirm or
contradict a pattern of interest or that provide a path or plan to a desired
goal.  In this view the backward part of backward chaining provides no
changes in probability, only changes in attention, and it is only the
forward chaining that is found by such backward chaining that changes
probabilities.

Am I correct in this interpretation of what Abram said, and is that
interpretation included in what your Google clippings indicate is the
generally understood meaning of the term “backward chaining”?

Ed Porter

P.S. I would appreciate answers from Abram or anyone else on this list who
understands the question and has some knowledge on the subject.

-----Original Message-----
From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 15, 2008 3:05 AM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

On Tue, Jul 15, 2008 at 8:01 AM, Brad Paulsen [EMAIL PROTECTED] wrote:

 The terms “forward-chaining” and “backward-chaining” when used to refer to
 reasoning strategies have absolutely nothing to do with temporal
 dependencies or levels of reasoning.  These two terms refer simply, and
 only, to the algorithms used to evaluate “if/then” rules in a rule base
 (RB).  In the FWC algorithm, the “if” part is evaluated and, if TRUE, the
 “then” part is added to the FWC engine's output.  In the BWC algorithm, the
 “then” part is evaluated and, if TRUE, the “if” part is added to the BWC
 engine's output.  It is rare, but some systems use both FWC and BWC.

 That's it.  Period.  No other denotations or connotations apply.

Curiously, the definition given by Abram Demski is the only one I've
been aware of until yesterday (I believe it's the one used among
theorem-proving people). Let's see what googling says on forward
chaining:

1. (Wikipedia)

2. http://www.amzi.com/ExpertSystemsInProlog/05forward.htm
A large number of expert systems require the use of forward chaining,
or data driven inference. [...]
Data driven expert systems are different from the goal driven, or
backward chaining systems seen in the previous chapters.
The goal driven approach is practical when there are a reasonable
number of possible final answers, as in the case of a diagnostic or
identification system. The system methodically tries to prove or
disprove each possible answer, gathering the needed information as it
goes.
The data driven approach is practical when combinatorial explosion
creates a seemingly infinite number of possible right answers, such as
possible configurations of a machine.

3. http://ai.eecs.umich.edu/cogarch0/common/prop/chain.html
Forward-chaining implies that upon assertion of new knowledge, all
relevant inductive and deductive rules are fired exhaustively,
effectively making all knowledge about the current state explicit
within the state. Forward chaining may be regarded as progress from a
known state (the original knowledge) towards a goal state(s).
Backward-chaining by an architecture means that no rules are fired
upon assertion of new knowledge. When an unknown predicate about a
known piece of knowledge is detected in an operator's condition list,
all rules relevant to the knowledge in question are fired until the
question is answered or until quiescence. Thus, backward chaining
systems normally work from a goal state back to the original state.

4. http://www.ontotext.com/inference/reasoning_strategies.html
* Forward-chaining: to start from the known facts and to perform
the inference in an inductive fashion. This kind of reasoning can have
diverse objectives, for instance: to compute the inferred closure; to
answer a particular query; to infer a particular sort of knowledge
(e.g. the class taxonomy); etc.
* Backward-chaining: to start from a particular fact or from a
query and by means of using deductive reasoning to try to verify that
fact or to obtain all possible results of the query. Typically, the
reasoner decomposes the fact into simpler facts that can be found in
the knowledge base or transforms it into alternative facts that can be
proven applying further recursive transformations.

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Ed Porter
Jim, Sorry.  Obviously I did not understand you. Ed Porter

-----Original Message-----
From: Jim Bromer [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 15, 2008 9:33 AM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

Ed Porter said:
“You imply you have been able to accomplish a somewhat similar implicit
representation of bindings in a much higher dimensional and presumably large
semantic space.  Unfortunately I was unable to understand from your
description how you claimed to have accomplished this.”

-----
I never implied that I have been able to accomplish a somewhat similar
implicit representation of bindings in a much higher dimension and
presumably large semantic space.

I clearly stated:
“I have often talked about the use of multi-level complex methods and I see
some similarity between the ideas that they discussed and my ideas.”
-and,
“The complex groupings of objects that I have in mind would have been
derived using different methods of analysis and combination and when a group
of them is called from an input analysis their use should tend to narrow the
objects that might be expected given the detection by the feature detectors.
Although I haven't expressed myself very clearly, this is very similar to
what Riesenhuber and Poggio were suggesting that their methods would be
capable of. So, yes, I think some similar methods can be used in NLP.”

I clearly used the expression “in mind” just to avoid the kind of
misunderstanding that you made. I never made the exaggerated claim that I
had accomplished it.

The difference between having an idea in mind and having claimed to have
accomplished a goal, which the majority of participants in the group would
acknowledge is elusive, should be obvious and easy to understand.

I am not claiming that I have a method that would work in all semantic
space.  I would be happy to claim that I do have a theory which I believe
should show some limited extensibility in semantic space that goes beyond
other current theories.  However, I will not know for sure until I test it
and right now that looks like it would be years off.

I would be happy to continue the dialog if it can be conducted in a less
confrontational and more genial manner than it has been during the past
week.

Jim Bromer

Jim,

In the Riesenhuber and Poggio paper the bindings that were handled implicitly
involved spatial relationships, such as an observed roughly horizontal line
substantially touching an observed roughly vertical line at their respective
ends, even though there might be other horizontal and vertical lines not
having this relationship in the input pixel space.  It achieves such
implicit bindings by having enough separate models to be able to detect, by
direct mapping, such a touching relationship between horizontal and
vertical lines at each of many different locations in the visual input
space.

But the Poggio paper deals with a relatively small number of relationships
in a relatively small (160x160) low dimensional (2d) space using 23 million
models.  You imply you have been able to accomplish a somewhat similar
implicit representation of bindings in a much higher dimensional and
presumably large semantic space.  Unfortunately I was unable to understand
from your description how you claimed to have accomplished this.

Could you please clarify your description with regard to this point.

Ed Porter

-----Original Message-----
From: Jim Bromer [mailto:[EMAIL PROTECTED]
Sent: Monday, July 14, 2008 1:38 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

I started reading a Riesenhuber and Poggio paper and there are some
similarities to ideas that I have considered although my ideas were
explicitly developed about computer programs that would use symbolic
information and are not neural theories.  It is interesting that Riesenhuber
and Poggio argued that the binding problem seems to be a problem for only
some models of object recognition.  In other words, it seems that they are
claiming that the problem disappears with their model of neural cognition! 

The study of feature detectors in cats' eyes is old news and I did
incorporate that information into the development of my own theories.

I have often talked about the use of multi-level complex methods and I see
some similarity between the ideas that they discussed and my ideas.  In my
model an input would be scanned for different features using different kinds
of analysis on the input.  So then a configuration of simple features would
be derived from the scan and these could be associated with a number of
complex groups of objects that have been previously associated with the
features.  Because the complex groups of objects are complexes (in the
general sense), and would be learned by previous experience, they are not
insipidly modeled on one standard model.

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Abram Demski
“Am I correct in this interpretation of what Abram said, and is that
interpretation included in what your Google clippings indicate is the
generally understood meaning of the term backward chaining?”
-- Ed Porter

It sounds to me like you are interpreting me correctly.

One important note. Lukasz quoted one source that claimed that forward
chaining can help to cut down the combinatorial explosion arising from
the huge search space in backwards-chaining. This is true in some
situations, but the opposite can also be the case; backwards-chaining
can help to focus inferences when it would be impossible to deduce
every fact that would follow by forward-chaining. It depends on the
forward and backwards branching factors. If every fact fires an
average of five rules forwards, but three backwards, then
backwards-chaining will be less expensive; 5^n vs 3^n, where n is the
length of the actual deductive chain being searched for. Simultaneous
backwards/forwards chaining that meets in the middle can be even less
expensive; with a branching factor of 2 in both directions, the search
time goes down from 2^n for forward or backward chaining to 2^(n/2 +
1).
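
Worked out numerically, under the branching factors assumed above:

    n = 10                            # length of the deductive chain
    print(5 ** n, 3 ** n)             # forward 9765625 vs. backward 59049
    # Bidirectional search, branching factor 2 both ways: two half-depth
    # searches instead of one full-depth search.
    print(2 ** n, 2 ** (n // 2 + 1))  # 1024 vs. 64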

On the other hand, what we want the system to do makes a big
difference. If we really do have a single goal-sentence we want to
prove or disprove, the above arguments hold. But if we want to deduce
all consequences of our current knowledge, we should use forward
chaining regardless of branching factors and so on.

Most of this stuff should be in any intro AI textbook.

--Abram

On Tue, Jul 15, 2008 at 11:08 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Lukasz,

 Your post below was great.

 Your clippings from Google confirm much of the understanding that Abram
 Demski was helping me reach yesterday.

 In one of his posts Abram was discussing my prior statement that top-down
 activation could be either forward or backward chaining.  He said “If the
 network is passing down an expectation based on other data, informing the
 lower network of what to expect, then this is forward chaining. But if the
 signal is not an expectation, but more like a query ‘pay attention to data
 that might conform/contradict this hypothesis, and notify me ASAP’ then it
 is backwards chaining. And it seems realistic that it can be both of these.”

 I am interpreting this quoted statement as implying the purpose of backward
 chaining is to search for forward chaining paths that either confirm or
 contradict a pattern of interest or that provide a path or plan to a desired
 goal.  In this view the backward part of backward chaining provides no
 changes in probability, only changes in attention, and it is only the
 forward chaining that is found by such backward chaining that changes
 probabilities.

 Am I correct in this interpretation of what Abram said, and is that
 interpretation included in what your Google clippings indicate is the
 generally understood meaning of the term “backward chaining”?

 Ed Porter

 P.S. I would appreciate answers from Abram or anyone else on this list who
 understands the question and has some knowledge on the subject.

 -----Original Message-----
 From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, July 15, 2008 3:05 AM
 To: agi@v2.listbox.com
 Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
 BINDING PROBLEM?

 On Tue, Jul 15, 2008 at 8:01 AM, Brad Paulsen [EMAIL PROTECTED]
 wrote:

 The terms “forward-chaining” and “backward-chaining” when used to refer to
 reasoning strategies have absolutely nothing to do with temporal
 dependencies or levels of reasoning.  These two terms refer simply, and
 only, to the algorithms used to evaluate “if/then” rules in a rule base
 (RB).  In the FWC algorithm, the “if” part is evaluated and, if TRUE, the
 “then” part is added to the FWC engine's output.  In the BWC algorithm, the
 “then” part is evaluated and, if TRUE, the “if” part is added to the BWC
 engine's output.  It is rare, but some systems use both FWC and BWC.

 That's it.  Period.  No other denotations or connotations apply.

 Curiously, the definition given by Abram Demski is the only one I've
 been aware of until yesterday (I believe it's the one used among
 theorem-proving people). Let's see what googling says on forward
 chaining:

 1. (Wikipedia)

 2. http://www.amzi.com/ExpertSystemsInProlog/05forward.htm
 A large number of expert systems require the use of forward chaining,
 or data driven inference. [...]
 Data driven expert systems are different from the goal driven, or
 backward chaining systems seen in the previous chapters.
 The goal driven approach is practical when there are a reasonable
 number of possible final answers, as in the case of a diagnostic or
 identification system. The system methodically tries to prove or
 disprove each possible answer, gathering the needed information as it
 goes.
 The data driven approach is practical when combinatorial explosion
 creates a seemingly infinite number of possible right answers, such as
 possible configurations of a machine.

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Ed Porter
Abram, 

Thanks for the info.  The concept that the only purpose of backward
chaining is to find appropriate forward chaining paths is an important
clarification of my understanding.

Ed Porter

-Original Message-
From: Abram Demski [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 15, 2008 11:38 AM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

“Am I correct in this interpretation of what Abram said, and is that
interpretation included in what your Google clippings indicate is the
generally understood meaning of the term backward chaining?”
-- Ed Porter

It sounds to me like you are interpreting me correctly.

One important note. Lukasz quoted one source that claimed that forward
chaining can help to cut down the combinatorial explosion arising from
the huge search space in backwards-chaining. This is true in some
situations, but the opposite can also be the case; backwards-chaining
can help to focus inferences when it would be impossible to deduce
every fact that would follow by forward-chaining. It depends on the
forward and backwards branching factors. If every fact fires an
average of five rules forwards, but three backwards, then
backwards-chaining will be less expensive; 5^n vs 3^n, where n is the
length of the actual deductive chain being searched for. Simultaneous
backwards/forwards chaining that meets in the middle can be even less
expensive; with a branching factor of 2 in both directions, the search
time goes down from 2^n for forward or backward chaining to 2^(n/2 +
1).

On the other hand, what we want the system to do makes a big
difference. If we really do have a single goal-sentence we want to
prove or disprove, the above arguments hold. But if we want to deduce
all consequences of our current knowledge, we should use forward
chaining regardless of branching factors and so on.

Most of this stuff should be in any intro AI textbook.

--Abram

On Tue, Jul 15, 2008 at 11:08 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Lukasz,



 Your post below was great.



 Your clippings from Google confirm much of the understanding that Abram
 Demski was helping me reach yesterday.



 In one of his posts Abram was discussing my prior statement that top-down
 activation could be either forward or backward chaining.  He said, "If the
 network is passing down an expectation based on other data, informing the
 lower network of what to expect, then this is forward chaining. But if the
 signal is not an expectation, but more like a query 'pay attention to data
 that might conform/contradict this hypothesis, and notify me ASAP' then it
 is backwards chaining. And it seems realistic that it can be both of
 these."



 I am interpreting this quoted statement as implying the purpose of
backward
 chaining is to search for forward chaining paths that either confirm or
 contradict a pattern of interest or that provide a path or plan to a
desired
 goal.  In this view the backward part of backward chaining provides no
 changes in probability, only changes in attention, and it is only the
 forward chaining that is found by such backward chaining that changes
 probabilities.
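
 A toy sketch of this reading (the structure and names are my own
 invention, not anyone's stated design): the backward pass only flags nodes
 to watch; probabilities move only when forward evidence arrives.

attention = set()
belief = {"goal": 0.5, "cue": 0.5}
rules = [("cue", "goal")]          # "cue" supports "goal"

def backward(goal):
    # walk from the goal to its possible supports; mark them to watch
    for support, conclusion in rules:
        if conclusion == goal:
            attention.add(support)

def forward(observation):
    # evidence arrives: update the observed node and what it implies
    belief[observation] = 1.0
    for support, conclusion in rules:
        if support == observation:
            belief[conclusion] = max(belief[conclusion], 0.9)  # toy update

backward("goal")                   # now watching "cue"; beliefs unchanged
if "cue" in attention:
    forward("cue")                 # only now does a probability change
print(attention, belief)           # {'cue'} {'goal': 0.9, 'cue': 1.0}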



 Am I correct in this interpretation of what Abram said, and is that
 interpretation included in what your Google clippings indicate is the
 generally understood meaning of the term backward chaining?



 Ed Porter



 P.S. I would appreciate answers from Abram or anyone else on this list who
 understands the question and has some knowledge of the subject.



 -Original Message-
 From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, July 15, 2008 3:05 AM
 To: agi@v2.listbox.com
 Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY
THE
 BINDING PROBLEM?



 On Tue, Jul 15, 2008 at 8:01 AM, Brad Paulsen [EMAIL PROTECTED] wrote:
 [...]

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Mike Archbold

 4. http://www.ontotext.com/inference/reasoning_strategies.html
 * Forward-chaining: to start from the known facts and to perform
 the inference in an inductive fashion. This kind of reasoning can have
 diverse objectives, for instance: to compute the inferred closure; to
 answer a particular query; to infer a particular sort of knowledge
 (e.g. the class taxonomy); etc.
 * Backward-chaining: to start from a particular fact or from a
 query and by means of using deductive reasoning to try to verify that
 fact or to obtain all possible results of the query. Typically, the
 reasoner decomposes the fact into simpler facts that can be found in
 the knowledge base or transforms it into alternative facts that can be
 proven applying further recursive transformations. 



A system like CLIPS is forward chaining, but there is no induction going
on.  Whether fwd- or bkw-chaining, it is deduction as far as I've ever
heard.  With induction we are implying repeated observations that lead
to some new knowledge (i.e., some new rule in this case).  That was my
understanding anyway, and I'm no PhD scientist.
Mike Archbold





RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Ed Porter
Richard,

You just keep digging yourself in deeper.

Look at the original email in which you said This is not correct.  The
only quoted text that precedes it is quoted from me.  So why are you saying
Jim's statement was a misunderstanding?

Furthermore, I think your criticisms of my statements are generally
unfounded.  

My choice of the word "reasoning" was not "not correct," as you imply, since
the Wikipedia definition says "Forward chaining is one of the two main
methods of REASONING when using inference rules." (Emphasis added.)

My statement made it clear I was describing the forward direction as being
from the if clause to the then clause, which matches the Wikipedia
definition, so what is "not correct" about that?

In addition, you said my statement that in the absence of a temporal
criterion the notion of what is forward and backward chaining might be
somewhat arbitrary was a "completely incorrect conclusion."

Offensively strong language, considering it is unfounded. 

It is unfounded because in the absence of a temporal distinction, many
if-then rules, particularly if they are probabilistic, can be viewed in a
two-way form, with a probabilistic inference going both ways.  In this case it
becomes unclear which side is the if clause, and which the then clause,
and, thus, unclear which way is forward and which backward by the definition
contained in Wikipedia --- unless there is a temporal criterion.  This issue
becomes even more problematic when dealing with patterns based on temporal
simultaneity, as in much of object recognition, in which even a temporal
distinction does not distinguish between what should be considered the if
clause and what should be considered the then clause. 

Enough of arguing about arguing.  You can have the last say if you want.  I
want to spend what time I have to spend on this list conversing with people
who are more concerned about truth than trying to sound like they know more
than others, particularly when they don't.  

Anyone who reads this thread will know who was being honest and reasonable
and who was not.

Ed Porter 

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Sunday, July 13, 2008 7:52 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

Ed Porter wrote:
 Richard,  
 
 I think Wikipedia's definition of forward chaining (copied below) agrees
 with my stated understanding as to what forward chaining means, i.e.,
 reasoning from the if (i.e., conditions) to the then (i.e.,
 consequences) in if-then statements.  
 
 So, once again there is an indication you have unfairly criticized the
 statements of another.

But  ... nothing in what I said contradicted the wikipedia 
definition of forward chaining.

Jim's statement was a misunderstanding of the meaning of forward and 
backward chaining because he oversimplified the two (forward reasoning 
is reasoning from conditions to consequences, and backward reasoning is 
the opposite ... this is kind of true, if you stretch the word 
reasoning a little, but it misses the point), and then he went from 
this oversimplification to come to a completely incorrect conclusion 
(...Thus I think the notion of what is forward and backward chaining 
might be somewhat arbitrary...).

This last conclusion was sufficiently inaccurate that I decided to point 
that out.  It was not a criticism, just a clarification;  a pointer in 
the right direction.


Richard Loosemore






 Ed Porter
 
 ==Wikipedia defines forward chaining as: ==
 
 Forward chaining is one of the two main methods of reasoning when using
 inference rules (in artificial intelligence). The other is backward
 chaining.
 
 Forward chaining starts with the available data and uses inference rules
to
 extract more data (from an end user for example) until an optimal goal is
 reached. An inference engine using forward chaining searches the inference
 rules until it finds one where the antecedent (If clause) is known to be
 true. When found it can conclude, or infer, the consequent (Then clause),
 resulting in the addition of new information to its data.
 
 Inference engines will often cycle through this process until an optimal
 goal is reached.
 
 For example, suppose that the goal is to conclude the color of my pet
Fritz,
 given that he croaks and eats flies, and that the rule base contains the
 following four rules:
 
 If X croaks and eats flies - Then X is a frog 
 If X chirps and sings - Then X is a canary 
 If X is a frog - Then X is green 
 If X is a canary - Then X is yellow 
 
 This rule base would be searched and the first rule would be selected,
 because its antecedent (If Fritz croaks and eats flies) matches our data.
 Now the consequents (Then X is a frog) is added to the data. The rule base
 is again searched and this time the third rule is selected, because its
 antecedent (If Fritz is a frog) matches our data that was just confirmed.
 Now the 
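
The Fritz rules translate directly into a runnable trace; a minimal sketch
in Python (the encoding is mine, not the article's):

rules = [
    ({"croaks", "eats flies"}, "is a frog"),
    ({"chirps", "sings"}, "is a canary"),
    ({"is a frog"}, "is green"),
    ({"is a canary"}, "is yellow"),
]
facts = {"croaks", "eats flies"}       # what we know about Fritz

fired = True
while fired:
    fired = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            print("fired ->", consequent)   # "is a frog", then "is green"
            facts.add(consequent)
            fired = True

print(facts & {"is green", "is yellow"})    # {'is green'} -- Fritz's color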

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Mark Waser
 Anyone who reads this thread will know who was being honest and reasonable
 and who was not.

The question is not "honest and reasonable" but "factually correct" . . . .

The following statement of yours

 In this case it becomes unclear which side is the if clause, and which
the then clause, and, thus, unclear which way is forward and which
backward by the definition contained in Wikipedia --- unless there is a
temporal criterion.

is simply incorrect.  Temporal criteria are *NOT* necessarily relevant to
forward and backward chaining.

As far as I can tell, Richard is trying to gently correct you and you are 
both incorrect and unwilling to even attempt to interpret his words in the 
way he meant (i.e. an honest and reasonable fashion).


- Original Message - 
From: Ed Porter [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, July 14, 2008 8:58 AM
Subject: **SPAM** RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND 
BY THE BINDING PROBLEM?

[...]

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Ed Porter
Mark,

Since your attack on my statement below is based on nothing but conclusory
statements and contains neither reasoning nor evidence to support them, there
is little in your below email to respond to other than your personal spleen.


You have said my statement which your email quotes is simply incorrect
without giving any justification.  

Your statement that Temporal criteria are *NOT* relevant to forward and
backward chaining is itself a conclusory statement.  

Furthermore, this statement about temporal criteria not being relevant is
more incorrect than correct.  If an if-then rule describes a situation where
one thing causes another, or comes before it in time, the thing that comes
first is more commonly the if clause (although one can write the rule in the
reverse order).  The if clause is commonly called a condition, and the then
clause is sometimes called the consequence, implying a causal or temporal
relationship.  The notion of reasoning backward from a goal (backward
chaining) normally involves the notion of reasoning back in imagined time
from a desired goal state.  So often TEMPORAL CRITERIA *ARE* RELEVANT TO
WHICH DIRECTION IS FORWARD CHAINING AND WHICH IS BACKWARD.

Even if one were to make a reach, and try to justify your statement that
Temporal criteria are *NOT* relevant to forward and backward chaining as
being more than just conclusory by suggesting it was an implicit reference
to statements --- like those contained in Richard's prior statements in this
thread or the Wikipedia quote in one of the posts below --- that the
definition of forward and backward chaining depended on whether the
reasoning was from if clause to then clause, or the reverse --- that would
still not correct the groundlessness of your criticism.  

This is because the rule that forward chaining is from if clause to then
clause and the reverse for backward chaining has no applicability to
situations where the implication goes both ways and there is no clear
indication of which pattern should be the if clause and which should be the
then clause --- which is precisely the situation I was describing in the
quote from me you unfairly criticized.  

Neither Richard's prior statement in this thread nor the Wikipedia
definition below defines which direction is forward and which is backward in
many such situations.

In my quote which you attacked I was discussing exactly those situations in
which it was not clear which part of an inference pattern should be considered
the if clause and which the then clause.  So it appears your criticism either
totally missed, or for other reasons failed to deal with, the issue I was
discussing.

Mark, in general I do not read your posts because, among other things, like
your email below, they are generally poorly reasoned and seem more
concerned with issues of ego and personality than with learning and teaching
truthful information or insights.  I skip many of Richard's for the same
reason, but I do read some of Richard's because despite all his pompous BS
he does occasionally say something quite thoughtful and worthwhile.

If you care about improving your reputation on this list, it would make you
seem more like someone who cared about truth and reason, and less like
someone who cared more about petty squabbles and personal ego, if you gave
reasons for your criticisms, and if you took the time to ensure your
criticism actually addressed what you are criticizing.

In your post immediately below you did neither.

Ed Porter

-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 14, 2008 9:19 AM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

[...]

- Original Message - 
From: Ed Porter [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, July 14, 2008 8:58 AM
Subject: **SPAM** RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND

BY THE BINDING PROBLEM?


Richard,

You just keep digging yourself in deeper.

Look at the original email in which you said This is not correct.  The
only quoted text that precedes it is quoted from me.  So why are you saying
Jim's statement was a misunderstanding?

Furthermore, I think your criticisms of my statements are 

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Mark Waser

Ed,

   Take the statements

   IF it croaks, THEN it is a frog.
   IF it is a frog, THEN it is green.

   Given an additional statement that it croaks, forward-chaining says that 
it is green.  There is nothing temporal involved.

   - OR -
   Given an additional statement that it is green, backward-chaining says 
that it MAY croak.  Again, nothing temporal involved.
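
   A sketch of the asymmetry in Python (encoding mine): forward chaining 
derives the fact; backward chaining only finds a chain that would explain it.

rules = [("croaks", "frog"), ("frog", "green")]

def chase(start, forward=True):
    # forward: follow rules if->then; backward: follow them then->if
    chain = [start]
    grew = True
    while grew:
        grew = False
        for a, b in rules:
            src, dst = (a, b) if forward else (b, a)
            if src == chain[-1]:
                chain.append(dst)
                grew = True
    return chain

print(chase("croaks", forward=True))  # ['croaks', 'frog', 'green']: it IS green
print(chase("green", forward=False))  # ['green', 'frog', 'croaks']: it MAY croak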


   How do you see temporal criteria as being related to my example?

   Mark

- Original Message - 
From: Ed Porter [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, July 14, 2008 10:40 AM
Subject: **SPAM** RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND 
BY THE BINDING PROBLEM?

[...]

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Abram Demski
It is true that Mark Waser did not provide much justification, but I
think he is right. The if-then rules involved in forward/backward
chaining do not need to be causal, or temporal. A mutual implication
is still treated differently by forward chaining and backward
chaining, so it does not cause ambiguity. For example, if we have An
alarm sounds if and only if there is a fire, then a forward-chaining
algorithm would (1) conclude that there is an alarm sounding if it
learned that there was a fire, and (2) conclude that  there was a fire
if it learned that there was an alarm. A backwards-chainer would use
the rule differently, so that (1) it might look for a fire if it was
trying to determine if an alarm was sounding, and (2) it might look
for an alarm if it wanted to know about a fire. Even though the
implication goes in both directions, the meaning of forward chaining
and of backward chaining are quite different.
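
A compact sketch of the alarm/fire example in Python (encoding mine),
showing the same biconditional used two different ways:

bicond = [("fire", "alarm"), ("alarm", "fire")]   # if and only if

def forward_update(known, new_fact):
    # forward use: new data immediately yields the paired fact
    known.add(new_fact)
    for a, b in bicond:
        if a in known:
            known.add(b)
    return known

def subgoals(query):
    # backward use: a query generates subgoals to go look for
    return [a for a, b in bicond if b == query]

print(forward_update(set(), "fire"))   # {'fire', 'alarm'}
print(subgoals("alarm"))               # ['fire'] -- go check for a fire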

On Mon, Jul 14, 2008 at 10:40 AM, Ed Porter [EMAIL PROTECTED] wrote:
 [...]

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Richard Loosemore

Ed Porter wrote:

Richard,

You just keep digging yourself in deeper.

Look at the original email in which you said This is not correct.  The
only quoted text that precedes it is quoted from me.  So why are you saying
Jim's statement was a misunderstanding?


Okay, looks like some confusion here:  the structure of Jim's message 
was such that I thought the relevant comment came from him.  Turns out 
he was just quoting you.  That's fine (sorry Jim):  it just means that 
you made the misleading statement.



Furthermore, I think your criticisms of my statements are generally
unfounded.  


My choice of the word reasoning was not not correct, as you imply, since
the Wikipedia definition says Forward chaining is one of the two main
methods of REASONING when using inference rules. (Emphasis added.)


That is fair enough.  I think it is a matter of taste, to some extent, 
but I will take the rap for going against the Wikipedia gospel.




My statement made it clear I was describing the forward direction as being
from the if clause to the then clause, which matches the Wikipedia
definition, so what is not correct about that.


I did not say that this part of the text was incorrect.



In addition, you said my statement that in the absence of a temporal
criteria the notion of what is forward and backward chaining might be
somewhat arbitrary  was a completely incorrect conclusion.

Offensively strong language, considering it is unfounded. 


Or, if it should turn out that it was well-founded, it would have been 
quite polite and matter-of-fact to say "completely incorrect".





It is unfounded because in the absence of a temporal distinction, many
if-then rules, particularly if they are probabilistic, can be viewed in a
two-way form, with a probabilistic inference going both ways.  In this case it
becomes unclear which side is the if clause, and which the then clause,
and, thus, unclear which way is forward and which backward by the definition
contained in Wikipedia --- unless there is a temporal criterion.  This issue
becomes even more problematic when dealing with patterns based on temporal
simultaneity, as in much of object recognition, in which even a temporal
distinction does not distinguish between what should be considered the if
clause and what should be considered the then clause. 


Here is an example of backward chaining:

Start with a question:  Is it true that Socrates is mortal?

Start by looking for any knowledge that allows us to conclude that 
anything is or is not mortal.  We search the KB and come up with these 
candidates:


If x is a plant, then x is mortal
If x is a rock, then x is not mortal
If x is a robot, then x is not mortal
If x lives in a post-singularity era, then x is not mortal
If x is a slug, then x is mortal
If x is a japanese beetle, then x is mortal
If x is a side of beef, then x is mortal
If x is a screwdriver, then x is not mortal
If x is a god, then x is not mortal
If x is a living creature, then x is mortal
If x is a goat, then x is mortal
If x is a parrot in a Dead Parrot Sketch, then x is mortal

Now, before we go on to look at the second stage of this backward 
chaining example, could you perhaps explain to me how the absence of a 
temporal distinction applies or does not apply to any of these?  I do 
not believe that it is possible to reverse any of these rules, temporal 
distinctions or any other distinctions notwithstanding: you cannot say "if x is 
mortal, then x is a plant", nor "if x is not mortal, then x lives in a 
post-singularity era", etc etc etc.


In the process of backward chaining, the next step is to see if the 
antecedents of any of these might allow us to connect up with Socrates 
in some way, so we start with the first one, "If x is a plant", and try 
to find out if anything allows us to conclude that Socrates is or is not 
a plant.  A search of the KB turns up these statements:


If x contains chlorophyll, then x is a plant
If x is a dandelion, then x is a plant
.. and on and on and on.

A couple of years later, after going several levels deep in its search, 
the system finally digs deep enough in its knowledge base to come up 
with the following chain of inference:


Socrates contains blood
If x contains blood, then x will bleed when pricked
If x bleeds when pricked, then x is a man
If x is a man, then x owns footwear
If x owns footwear, then x is a living creature
If x is a living creature, then x is mortal


And now, FINALLY, the backward chaining mechanism will be able to 
conclude that "Socrates is mortal".
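
A minimal sketch of that goal-directed search in Python (rule set
abbreviated to the successful chain plus one distractor; encoding mine):

kb_facts = {"Socrates contains blood"}
kb_rules = [
    ("is a plant", "is mortal"),             # a dead end for Socrates
    ("contains blood", "bleeds when pricked"),
    ("bleeds when pricked", "is a man"),
    ("is a man", "owns footwear"),
    ("owns footwear", "is a living creature"),
    ("is a living creature", "is mortal"),
]

def prove(subject, goal):
    # backward chain: reduce the goal to subgoals until a fact is hit
    # (assumes an acyclic rule base)
    if subject + " " + goal in kb_facts:
        return True
    return any(prove(subject, antecedent)
               for antecedent, consequent in kb_rules
               if consequent == goal)

print(prove("Socrates", "is mortal"))   # True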


Please, Ed, could you explain to me how this typical case of backward 
chaining could be reversed so that it becomes just a variation on 
forward chaining?


The two mechanisms simply have different properties.  If you were to try 
to prove Socrates is Mortal by forward chaining, what would you do? 
Start from a random point in your KB and start proving facts in random 
order?  How would you use the reversibility of the rules, which you 
claim to exist, to set up a 

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Ed Porter


Response to Abram Demski message of Monday, July 14, 2008 10:59 AM


Abram  It is true that Mark Waser did not provide much justification, but
I think he is right. The if-then rules involved in forward/backward
chaining do not need to be causal, or temporal. 

[Ed Porter]  I never said they did in the sense I think you are
talking about.  In fact, I gave the example of inference rules being applied
to co-occurring patterns, in which there would be no before or after
relationship.  However, it should be noted that almost all patterns have a
temporal component of one sort or another, including the patterns from which
inferences are made.

Abram  A mutual implication
is still treated differently by forward chaining and backward
chaining, so it does not cause ambiguity. For example, if we have An
alarm sounds if and only if there is a fire, then a forward-chaining
algorithm would (1) conclude that there is an alarm sounding if it
learned that there was a fire, and (2) conclude that  there was a fire
if it learned that there was an alarm. A backwards-chainer would use
the rule differently, so that (1) it might look for a fire if it was
trying to determine if an alarm was sounding, and (2) it might look
for an alarm if it wanted to know about a fire. Even though the
implication goes in both directions, the meaning of forward chaining
and of backward chaining are quite different.

[Ed Porter]  I am not totally clear on what you are proposing as the
distinction between forward and backward chaining.  

Am I correct that you are implying the distinction is independent of
direction, but instead is something like this: forward chaining infers from
information you have to implications you don't yet have, and backward
chaining infers from patterns you are interested in to ones that might
either imply or negate them, or which they themselves might imply or negate?


If not, please give me a more exact definition of what you view as the
distinction between forward and backward chaining.

If so, this is very different from the notion that forward chaining is from
if clauses to then clauses, and backward chaining is the reverse, as implied
by the quote from Wikipedia I copied a few posts ago in this thread.  

What evidence do you have that your interpretation is the accepted
definition, or is there no single widely agreed-upon definition?

Ed Porter

On Mon, Jul 14, 2008 at 10:40 AM, Ed Porter [EMAIL PROTECTED] wrote:
 [...]

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Jim Bromer
I started reading a Riesenhuber and Poggio paper and there are some 
similarities to ideas that I have considered although my ideas were explicitly 
developed about computer programs that would use symbolic information and are 
not neural theories.  It is interesting that Riesenhuber and Poggio argued that 
the binding problem seems to be a problem for only some models of object 
recognition.  In other words, it seems that they are claiming that the problem 
disappears with their model of neural cognition! 

The study of feature detectors in cats eyes is old news and I did incorporate 
that information into the development of my own theories.

I have often talked about the use of multi-level complex methods and I see some 
similarity between the ideas that they discussed and my own.  In my model an input 
would be scanned for different features using different kinds of analysis on 
the input.  So then a configuration of simple features would be derived from 
the scan and these could be associated with a number of complex groups of 
objects that have been previously associated with the features.  Because the 
complex groups of objects are complexes (in the general sense), and would be 
learned by previous experience, they are not insipidly modeled on one standard 
model. These complex objects are complex in that they are not all cut from one 
standard.  The older implementations that used operations that were taken from 
set theory on groups were set on object models that were very old-world and 
were not derived from learning.  For example they were non-experiential.  (I 
cannot remember the term that I am looking for, but experiential is the 
anthropomorphic term.)  All of the 
groupings in old models that looked for intersections were of a few predefined 
kinds, and most significantly they did not recognize that ideologically 
incommensurable references could affect meaning (or effect) even if the 
references were strongly associated and functionally related.  The complex 
groupings of objects that I have in mind would have been derived using 
different methods of analysis and combination and when a group of them is 
called from an input analysis their use should tend to narrow the objects that 
might be expected given the detection by the feature detectors. Although I 
haven't expressed myself very clearly, this is very similar to what Riesenhuber 
and Poggio were suggesting that their methods would be capable of. So, yes, I 
think some similar methods can be used in NLP.

However, my model also includes the recognition that comparing apples and 
oranges is not always straightforward.  This gives you an idea of what I mean 
by ideologically incommensurable associations. If I were to give some examples, 
a reasonable person might simply assume that the problems illustrated by the 
examples could easily be resolved with more information, and that is true.  But 
the point that I am making is that this view of ideologically incommensurable 
references can be helpful in the analysis of the kinds of problems that can be 
expected from more ambitious AI models.

Jim Bromer



  




RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Ed Porter
Mark,

Still fails to deal with what I was discussing.  I will leave it up to you
to figure out why.

Ed Porter

-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 14, 2008 10:54 AM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

[...]

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Ed Porter

With regard to your comments below, I don't think you have to be too
imaginative to think of how the direction of forward or backward chaining
across at least certain sets of rules could be reversed.  Abram Demski's
recent post gave an example of how both what he considers forward and
backward chaining can be performed in both directions across an inference
pattern.  

Plus, it should be noted that I never said all relationships involve a
before and after type of relationship.  In fact, I specifically said some
relationships involve simultaneity.  I do however think temporal
relationships are an important thing to keep track of in inference, because
they are such an important part of reality, and predicting what is likely
to come next is such an important part of such reasoning, and reasoning
backward in imagined time from goals is such an important part of planning.

BY READING BOTH THE WIKIPEDIA ENTRIES FOR FORWARD AND BACKWARD CHAINING AND
READING ABRAM DEMSKI'S LAST POST, IT SEEMS THAT ONE OF THE DISTINCTIONS
COMMONLY GIVEN BETWEEN FORWARD AND BACKWARD CHAINING, IS WHETHER ONE IS
REASONING FROM DATA (IN THE CASE OF FORWARD CHAINING) OR FROM GOALS OR
HYPOTHESES (IN THE CASE OF BACKWARD CHAINING).   

According to this definition the distinction between forward and backward
chaining is not about the direction the inference travels through an inference
network --- because as Abram showed, each can travel in both directions --- but
rather the purpose for which the inference is being performed.  According to
this definition, both bottom-up and top-down inference could each in certain
cases be considered both forward and backward chaining.  

This definition is probably more meaningful in an AGI context than having
the direction depend on which is the if clause and which is the then clause,
because in an AGI many of the rules would have been learned automatically
from correlations and there is often no reason to decide which of the
patterns that implies the other is the if clause pattern and which is the
then clause pattern.

But this definition of the distinction as depending on whether one is
reasoning from data on one hand or goals and hypotheses on the other, is
confused by the fact that both Wikipedia articles imply forward chaining
is from if clause to then clause, and the reverse for backward chaining.  

It is also confused by the fact that in AGIs the distinctions between data,
evidence, probability, attention, and hypothesis are not always clear.  For
example, bottom-up feed-forward inference from sensory input is often
considered to create perception hypotheses up the perception pathway, and
implication could be considered to be proceeding in a forward chaining way
from each such hypothesis.

For example, evidence may be derived from sensation, memory, cognition or
other means that a certain high-level pattern should exist in roughly a
certain time and place, and the top-down level's implication of what one
should expect to see could be considered forward chaining, but it could also
be considered backward chaining.

So I still find even this definition of forward and backward chaining would
be less than totally clear when applied in many possible situations in an
AGI.  But many definitions that are used every day are less than totally
clear.

Richard, you said below "If I had a penny for every time you have accused me
of being wrong, when later discussion showed that I was quite correct, I'd
have enough money to build an AGI tomorrow."

Yea, Richard, an AGI about as powerful as the typical Phantom Decoder
Ring you are likely to be able to purchase for one or two cents from the
back of a comic book.

If however the same rule were applied to me, I would be able to buy an AGI
as powerful as a Phantom Decoder Ring worth at least a buck.

Ed Porter

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 14, 2008 11:54 AM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

[...]

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Abram Demski
Ed Porter wrote:

I am I correct that you are implying the distinction is independent
of direction, but instead is something like this: forward chaining
infers from information you have to implications you don't yet have,
and backward chaining infers from patterns you are interested in to
ones that might either imply or negate them, or which they themselves
might imply or negate.

BY READING BOTH THE WIKIPEDIA ENTRIES FOR FORWARD AND BACKWARD
CHAINING AND READING ABRAM DEMSKI'S LAST POST, IT SEEMS THAT ONE OF
THE DISTINCTIONS COMMONLY GIVEN BETWEEN FORWARD AND BACKWARD CHAINING,
IS WHETHER ONE IS REASONING FROM DATA (IN THE CASE OF FORWARD
CHAINING) OR FROM GOALS OR HYPOTHESES (IN THE CASE OF BACKWARD
CHAINING).

As I understand it, this is the proper definition. The reason it is
typically stated in terms of direction of inference over if/then
statements is because that is how it is implemented in rule-based
systems. However, reasoning from goals vs reasoning from data is the
more general definition.

Ed Porter also wrote:

For example, evidence may be derived from sensation, memory,
cognition or other means that a certain high-level pattern should
exist in roughly a certain time and place, and the top-down level's
implication of what one should expect to see could be considered
forward chaining, but it could also be considered backward chaining.

Perhaps there is some real ambiguity here, arising from the
probabilistic setting. If the network is passing down an expectation
based on other data, informing the lower network of what to expect,
then this is forward chaining. But if the signal is not an
expectation, but more like a query pay attention to data that might
conform/contradict this hypothesis, and notify me ASAP then it is
backwards chaining. And it seems realistic that it can be both of
these.


On Mon, Jul 14, 2008 at 1:43 PM, Ed Porter [EMAIL PROTECTED] wrote:

 With regard to your comments below, I don't think you have to be too
 imaginative to think of how the direction of forward or backward chaining
 across at least certain sets of rules could be reversed.  Abram Demski's
 recent post gave an example of how both what he considers forward and
 backward chaining can be performed in both directions across an inference
 pattern.

 Plus, it should be noted that I never said all relationships involve a
 before and after type of relationship.  In fact, I specifically said some
 relationships involve simultaneity.  I do however think temporal
 relationships are an important think to keep track of in inference, because
 they are such and important part of reality, and predicting what is likely
 to come next is such an important part of such reasoning, and reasoning
 backward in imagined time from goals is such an important part of planning.

 BY READING BOTH THE WIKIPEDIA ENTRIES FOR FORWARD AND BACKWARD CHAINING AND
 READING ABRAM DEMSKI'S LAST POST, IT SEEMS THAT ONE OF THE DISTINCTIONS
 COMMONLY GIVEN BETWEEN FORWARD AND BACKWARD CHAINING, IS WHETHER ONE IS
 REASONING FROM DATA (IN THE CASE OF FORWARD CHAINING) OR FROM GOALS OR
 HYPOTHESES (IN THE CASE OF BACKWARD CHAINING).

 According to this definition the distinction between forward and backward
 chaining is not about the direction the inference travels through an inference
 network --- because, as Abram shows, each can travel in both directions --- but
 rather about the purpose for which the inference is being performed.  According to
 this definition, both bottom up and top down inference could each in certain
 cases be considered both forward and backward chaining.

 This definition is probably more meaningful in an AGI context than having
 the direction depend on which is the if clause and which is the then clause,
 because in an AGI many of the rules would have been learned automatically
 from correlations, and there is often no reason to decide which of two
 mutually implying patterns is the if clause pattern and which is the
 then clause pattern.

 But this definition of the distinction as depending on whether one is
 reasoning from data on one hand or goal and hypotheses on the other, is
 confused by the fact that both Wikipedia articles imply forward chaining
 is from if clause to then clause, and the reverse for backward chaining.

 It is also confused by the fact that in AGIs the distinctions between data,
 evidence, probability, attention, and hypothesis are not always clear.  For
 example, bottom-up feed forward inference from sensory input is often
 considered to create perception hypotheses up the perception pathway, and
 implication could be considered to be proceeding in a forward chaining way
 from each such hypothesis.

 For example, evidence may be derived from sensation, memory, cognition or
 other means that a certain high level pattern should exist in roughly a
 certain time and place, and the top-down level's implication of what it
 should expect to see could be considered forward chaining, but it could also
 be considered backward chaining.

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Mike Tintner
A tangential comment here. Looking at this and other related threads I can't 
help thinking: jeez, here are you guys still endlessly arguing about the 
simplest of syllogisms, seemingly unable to progress beyond them. (Don't you 
ever have that feeling?) My impression is that the fault lies with logic 
itself - as soon as you start to apply logic to the real world, even only 
tangentially with talk of "forward" and "backward" or temporal 
considerations, you fall into a quagmire of ambiguity, and no one is really 
sure what they are talking about. Even the simplest "if p then q" logical 
proposition is actually infinitely ambiguous. No?  (Is there a Gödel's 
Theorem of logic?) 







RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Mark Waser
Still fails to deal with what I was discussing.  I will leave it up to you
to figure out why.


Last refuge when you realize you're wrong, huh?

I ask a *very* clear question in an attempt to move forward (i.e., "How do you 
see temporal criteria as being related to my example?") and I get this "You 
have to guess what I'm thinking" answer.


How can you justify ranting on and on about Richard not being honest and 
reasonable when you won't even answer a simple, clear question?




- Original Message - 
From: Ed Porter [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, July 14, 2008 1:43 PM
Subject: **SPAM** RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND 
BY THE BINDING PROBLEM?



Mark,

Still fails to deal with what I was discussing.  I will leave it up to you
to figure out why.

Ed Porter

-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Monday, July 14, 2008 10:54 AM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

Ed,

   Take the statements

   IF it croaks, THEN it is a frog.
   IF it is a frog, THEN it is green.

   Given an additional statement that it croaks, forward-chaining says that
it is green.  There is nothing temporal involved.
   - OR -
   Given an additional statement that it is green, backward-chaining says
that it MAY croak.  Again, nothing temporal involved.
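
   For what it's worth, the example is small enough to run. A minimal Python
sketch (toy rule base only; backward chaining here in the standard
prove-the-goal sense) -- and note that no timestamps appear anywhere:

    # Mark's two rules as (IF-parts, THEN-part) pairs.
    RULES = [({"croaks"}, "is a frog"), ({"is a frog"}, "is green")]

    def forward(facts):
        # Data-driven: keep firing rules whose IF parts are all known facts.
        facts, changed = set(facts), True
        while changed:
            changed = False
            for ifs, then in RULES:
                if ifs <= facts and then not in facts:
                    facts.add(then)
                    changed = True
        return facts

    def backward(goal, facts):
        # Goal-driven: a goal holds if it is a known fact, or if some rule
        # concludes it and all of that rule's IF parts hold recursively.
        # (Assumes an acyclic rule base; cycles would need a visited set.)
        return goal in facts or any(
            then == goal and all(backward(i, facts) for i in ifs)
            for ifs, then in RULES)

    print(forward({"croaks"}))               # {'croaks', 'is a frog', 'is green'}
    print(backward("is green", {"croaks"}))  # True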

   How do you see temporal criteria as being related to my example?

   Mark

- Original Message - 
From: Ed Porter [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, July 14, 2008 10:40 AM
Subject: **SPAM** RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND

BY THE BINDING PROBLEM?


Mark,

Since your attack on my statement below is based on nothing but conclusory
statements and contains neither reasoning nor evidence to support them, there
is little in your below email to respond to other than your personal spleen.


You have said my statement which your email quotes is "simply incorrect"
without giving any justification.

Your statement that "Temporal criteria are *NOT* relevant to forward and
backward chaining" is itself a conclusory statement.

Furthermore this statement about temporal criteria not being relevant is
more incorrect than correct.  If an if-then rule describes a situation where
one thing causes another, or comes before it in time, the thing that comes
first is more commonly the if clause (although one can write the rule in the
reverse order).  The if clause is commonly called a condition, and the then
clause is sometimes called the consequence, implying a causal or temporal
relationship.  The notion of reasoning backward from a goal being backward
chaining normally involves the notion of reasoning back in imagined time
from a desired goal state.  So often TEMPORAL CRITERIA *ARE* RELEVANT TO
WHICH DIRECTION IS FORWARD CHAINING AND WHICH IS BACKWARD.

Even if one were to make a reach, and try to justify your statement that
"Temporal criteria are *NOT* relevant to forward and backward chaining" as
being more than just conclusory by suggesting it was an implicit reference
to statements --- like those contained in Richard's prior statements in this
thread or the Wikipedia quote in one of the posts below --- that the
definition of forward and backward chaining depended on whether the
reasoning was from if clause to then clause, or the reverse --- that would
still not correct the groundlessness of your criticism.

This is because the rule that forward chaining is from if clause to then
clause and the reverse for backward chaining has no applicability to
situations where the implication goes both ways and there is no clear
indication of which pattern should be the if clause and which should be the
then clause --- which is precisely the situation I was describing in the
quote from me you unfairly criticized.

Neither Richard's prior statement in this thread nor the Wikipedia
definition below define which direction is forward and which is backward in
many such situations.

In my quote which you attacked I was discussing exactly this situation, where
it was not clear which part of an inference pattern should be considered the
if clause and which the then clause.  So it appears your criticism either
totally missed, or for other reasons, failed to deal with the issue I was
discussing.

Mark, in general I do not read your posts because, among other things, like
your email below, they are generally poorly reasoned and seem more
concerned with issues of ego and personality than with learning and teaching
truthful information or insights.  I skip many of Richard's for the same
reason, but I do read some of Richard's because despite all his pompous BS
he does occasionally say something quite thoughtful and worthwhile.

If you care about improving your reputation on this list, it would make you
seem more like someone who cared about truth and reason, and less like
someone who cared more about petty 

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Ed Porter
Abram Demski wrote below: "If the network is passing down an expectation
based on other data, informing the lower network of what to expect, then
this is forward chaining. But if the signal is not an expectation, but more
like a query 'pay attention to data that might conform/contradict this
hypothesis, and notify me ASAP,' then it is backwards chaining. And it seems
realistic that it can be both of these."

This is interesting.  The type of activation you claim would be backward
chaining in the above quote corresponds to the "?" activation described in
Shastri's Shruti (which I have cited earlier in this thread).  In Shruti
any node that receives "?" activation spreads similar activation to other
nodes that might supply feedback to it providing evidence of
an increase or decrease in the probability of the asking node.  But receiving
"?" activation by itself does not change a node's probability at all.
Interestingly, increasing or decreasing a node's activation tends to spread
"?" activation seeking feedback on whether the increase or decrease in
probability is supported or contradicted by other information in the
network.
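
A cartoon of that idea in Python (illustrative only -- not Shastri's actual
algorithm): "?" activation spreads to potential feedback sources without
touching any node's probability.

    from collections import deque

    class Node:
        def __init__(self, name, prob=0.5):
            self.name, self.prob = name, prob
            self.feedback_sources = []  # nodes that might supply evidence
            self.queried = False        # has "?" activation reached this node?

    def spread_query(start):
        # Breadth-first spread of "?" activation; note prob is never modified.
        frontier = deque([start])
        start.queried = True
        while frontier:
            node = frontier.popleft()
            for nb in node.feedback_sources:
                if not nb.queried:
                    nb.queried = True
                    frontier.append(nb)

    a, b, c = Node("asking-node"), Node("evidence-1"), Node("evidence-2")
    a.feedback_sources, b.feedback_sources = [b], [c]
    spread_query(a)
    print([n.name for n in (a, b, c) if n.queried], a.prob)  # all queried; prob unchanged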

Ed Porter 

-Original Message-
From: Abram Demski [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 14, 2008 2:29 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

Ed Porter wrote:

Am I correct that you are implying the distinction is independent
of direction, but instead is something like this: forward chaining
infers from information you have to implications you don't yet have,
and backward chaining infers from patterns you are interested in to
ones that might either imply or negate them, or which they themselves
might imply or negate.

BY READING BOTH THE WIKIPEDIA ENTRIES FOR FORWARD AND BACKWARD
CHAINING AND READING ABRAM DEMSKI'S LAST POST, IT SEEMS THAT ONE OF
THE DISTINCTIONS COMMONLY GIVEN BETWEEN FORWARD AND BACKWARD CHAINING,
IS WHETHER ONE IS REASONING FROM DATA (IN THE CASE OF FORWARD
CHAINING) OR FROM GOALS OR HYPOTHESES (IN THE CASE OF BACKWARD
CHAINING).

As I understand it, this is the proper definition. The reason it is
typically stated in terms of direction of inference over if/then
statements is because that is how it is implemented in rule-based
systems. However, reasoning from goals vs reasoning from data is the
more general definition.

Ed Porter also wrote:

For example, evidence may be derived from sensation, memory,
cognition or other means that a certain high level pattern should
exist in roughly a certain time and place, and the top-down level's
implication of what it should expect to see could be considered
forward chaining, but it could also be considered backward chaining.

Perhaps there is some real ambiguity here, arising from the
probabilistic setting. If the network is passing down an expectation
based on other data, informing the lower network of what to expect,
then this is forward chaining. But if the signal is not an
expectation, but more like a query "pay attention to data that might
conform/contradict this hypothesis, and notify me ASAP," then it is
backwards chaining. And it seems realistic that it can be both of
these.


On Mon, Jul 14, 2008 at 1:43 PM, Ed Porter [EMAIL PROTECTED] wrote:

 With regard to your comments below, I don't think you have to be too
 imaginative to think of how the direction of forward or backward chaining
 across at least certain sets of rules could be reversed.  Abram Demski's
 recent post gave an example of how both what he considers forward and
 backward chaining can be performed in both directions across an inference
 pattern.

 Plus, it should be noted that I never said all relationships involve a
 before and after type of relationship.  In fact, I specifically said some
 relationships involve simultaneity.  I do however think temporal
 relationships are an important thing to keep track of in inference, because
 they are such an important part of reality, and predicting what is likely
 to come next is such an important part of such reasoning, and reasoning
 backward in imagined time from goals is such an important part of planning.

 BY READING BOTH THE WIKIPEDIA ENTRIES FOR FORWARD AND BACKWARD CHAINING AND
 READING ABRAM DEMSKI'S LAST POST, IT SEEMS THAT ONE OF THE DISTINCTIONS
 COMMONLY GIVEN BETWEEN FORWARD AND BACKWARD CHAINING, IS WHETHER ONE IS
 REASONING FROM DATA (IN THE CASE OF FORWARD CHAINING) OR FROM GOALS OR
 HYPOTHESES (IN THE CASE OF BACKWARD CHAINING).

 According to this definition the distinction between forward and backward
 chaining is not about the direction the inference travels through an inference
 network --- because, as Abram shows, each can travel in both directions --- but
 rather about the purpose for which the inference is being performed.  According
 to this definition, both bottom up and top down inference could each in certain
 cases be considered both forward and backward chaining.

 This definition is probably more 

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Ed Porter
Jim,

 

In the Riesenhuber and Poggio paper the bindings that were handled implicitly
involved spatial relationships, such as an observed roughly horizontal line
substantially touching an observed roughly vertical line at their respective
ends, even though there might be other horizontal and vertical lines not
having this relationship in the input pixel space.  It achieves such
implicit bindings by having enough separate models to be able to detect, by
direct mapping, such a touching relationship between a horizontal and a
vertical line at each of many different locations in the visual input
space.
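
A toy Python sketch of that "one detector per location" idea (illustrative
only, nothing like the scale of the real Riesenhuber and Poggio model): the
horizontal/vertical touching relationship is never bound explicitly; it is
simply detected wherever a dedicated local detector fires.

    import numpy as np

    def junction_detectors(img):
        # One tiny model per location: fires where a horizontal pixel pair
        # and a vertical pixel pair share a corner (an "L" junction).
        H, W = img.shape
        hits = []
        for y in range(H - 1):
            for x in range(W - 1):
                horizontal = img[y, x] and img[y, x + 1]
                vertical = img[y, x] and img[y + 1, x]
                if horizontal and vertical:
                    hits.append((y, x))
        return hits

    img = np.zeros((5, 5), dtype=bool)
    img[2, 2:5] = True   # a horizontal line
    img[2:5, 2] = True   # a vertical line touching it at one end
    print(junction_detectors(img))   # [(2, 2)] -- the junction's location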

 

But the Poggio paper deals with a relatively small number of relationships
in a relatively small (160x160), low-dimensional (2D) space using 23 million
models.  You imply you have been able to accomplish a somewhat similar
implicit representation of bindings in a much higher dimensional and
presumably large semantic space.  Unfortunately I was unable to understand
from your description how you claimed to have accomplished this.

 

Could you please clarify your description with regard to this point.

 

Ed Porter 

 

-Original Message-
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 14, 2008 1:38 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

 

I started reading a Riesenhuber and Poggio paper and there are some
similarities to ideas that I have considered although my ideas were
explicitly developed about computer programs that would use symbolic
information and are not neural theories.  It is interesting that Riesenhuber
and Poggio argued that the binding problem seems to be a problem for only
some models of object recognition.  In other words, it seems that they are
claiming that the problem disappears with their model of neural cognition! 

The study of feature detectors in cats' eyes is old news and I did
incorporate that information into the development of my own theories.

I have often talked about the use of multi-level complex methods and I see
some similarity between the ideas that they discussed and my own.  In my model
an input would be scanned for different features using different kinds of
analysis on the input.  So then a configuration of simple features would be
derived from the scan and these could be associated with a number of complex
groups of objects that have been previously associated with the features.
Because the complex groups of objects are complexes (in the general sense),
and would be learned by previous experience, they are not insipidly modeled
on one standard model. These complex objects are complex in that they are
not all cut from one standard.  The older implementations that used
operations that were taken from set theory on groups were set on object
models that were very old-world and were not derived from learning.  For
example they were non-experiential. (I cannot remember the term that I am
looking for but experiential is the anthropomorphic term).  All of the
groupings in old models that looked for intersections were of a few
predefined kinds, and most significantly they did not recognize that
ideologically incommensurable references could affect meaning (or effect)
even if the references were strongly associated and functionally related.
The complex groupings of objects that I have in mind would have been derived
using different methods of analysis and combination and when a group of them
is called from an input analysis their use should tend to narrow the objects
that might be expected given the detection by the feature detectors.
Although I haven't expressed myself very clearly, this is very similar to
what Riesenhuber and Poggio were suggesting that their methods would be
capable of.  So, yes, I think some similar methods can be used in NLP.

However, my model also includes the recognition that comparing apples and
oranges is not always straightforward.  This gives you an idea of what I
mean by ideologically incommensurable associations. If I were to give some
examples, a reasonable person might simply assume that the problems
illustrated by the examples could easily be resolved with more information,
and that is true.  But the point that I am making is that this view of
ideologically incommensurable references can be helpful in the analysis of
the kinds of problems that can be expected from more ambitious AI models.

Jim Bromer

 


 






Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Richard Loosemore

Mike Tintner wrote:
A tangential comment here. Looking at this and other related threads I 
can't help thinking: jeez, here are you guys still endlessly arguing 
about the simplest of syllogisms, seemingly unable to progress beyond 
them. (Don't you ever have that feeling?) My impression is that the 
fault lies with logic itself - as soon as you start to apply logic to 
the real world, even only tangentially with talk of "forward" and 
"backward" or temporal considerations, you fall into a quagmire of 
ambiguity, and no one is really sure what they are talking about. Even 
the simplest "if p then q" logical proposition is actually infinitely 
ambiguous. No?  (Is there a Gödel's Theorem of logic?)


Well, now you have me in a cleft stick, methinks.

I *hate* logic as a way to understand cognition, because I think it is a 
derivative process within a high-functional AGI system, not a foundation 
process that sits underneath everything else.


But, on the other hand, I do understand how it works, and it seems a 
shame for someone to trample on the concept of forward and backward 
chaining when these are really quite clear and simple processes (at 
least conceptually).


You are right that logic is as clear as mud outside the pristine 
conceptual palace within which it was conceived, but if you're gonna 
hang out inside the palace it is a bit of a shame to question its 
elegance...




Richard Loosemore





Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-14 Thread Mike Tintner
I'm not questioning logic's elegance, merely its relevance - the intention 
is at some point to apply it to the real world in your various systems, no? 
Yet there seems to be such a lot of argument and confusion about the most 
basic of terms, when you begin to do that. That elegance seems to come at a 
big price.


RL:Mike Tintner wrote:
A tangential comment here. Looking at this and other related threads I 
can't help thinking: jeez, here are you guys still endlessly arguing 
about the simplest of syllogisms, seemingly unable to progress beyond 
them. (Don't you ever have that feeling?) My impression is that the fault 
lies with logic itself - as soon as you start to apply logic to the real 
world, even only tangentially with talk of "forward" and "backward" or 
temporal considerations, you fall into a quagmire of ambiguity, and no 
one is really sure what they are talking about. Even the simplest "if p 
then q" logical proposition is actually infinitely ambiguous. No?  (Is 
there a Gödel's Theorem of logic?)


Well, now you have me in a cleft stick, methinks.

I *hate* logic as a way to understand cognition, because I think it is a 
derivative process within a high-functional AGI system, not a foundation 
process that sits underneath everything else.


But, on the other hand, I do understand how it works, and it seems a shame 
for someone to trample on the concept of forward and backward chaining 
when these are really quite clear and simple processes (at least 
conceptually).


You are right that logic is as clear as mud outside the pristine 
conceptual palace within which it was conceived, but if you're gonna hang 
out inside the palace it is a bit of a shame to question its elegance...









RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-13 Thread Ed Porter
Jim, 

 

Thanks for your questions.  

 

Ben Goertzel is coming out with a book on Novamente soon and I assume it
will have a lot of good things to say on the topics you have mentioned.  

 

Below are some of my comments 

 

Ed Porter

 

JIM BROMER WROTE===

Can you describe some of the kinds of systems that you think would be
necessary for complex inference problems?  Do you feel that all AGI problems
(other than those technical problems that would be common to a variety of
complicated programs that use large data bases) are essentially inference
problems?  Is your use of the term inference here intended to be inclusive
of the various kinds of problems that would have to be dealt with or are you
referring to a class of problems which are inferential in the more
restricted sense of the term?  (I feel that the two senses of the term are
both legitimate, I am just a little curious about what it was that you were
saying.)



ED PORTER

I think complex inference involves inferring from remembered instances or
learned patterns of temporal correlations, including those where the things
inferred occurred before, after, and/or simultaneously with an activation
from which inference is to flow.  The events involved in such correlation
included not only sensory patterns but also emotional (i.e., value),
remembered, and/or imagined mental occurrences. I think complex inference
needs to be able to flow up and down compositional and generalization
hierarchies.  It needs to be sensitive to current context, and to prior
relevant memories.   Activations from prior activations should continue to
reside, in some form, at many nodes or node elements for various lengths of
time to provide a rich representation of context.  

 

The degree to which activation is spread at each hop as a result of a given
spreading activation could be a function --- not only of the original energy
allocated to the origin of that spreading activation --- but also of the
probability and importance of a given node from which the next hop is being
considered, both a priori and given the current context of previous and
other current activation.  It should also be a function of the probability
and importance, both a priori and given the current context, of each link
from the current node with regard to which a determination is to be made
whether or not to activate such a link.   Also the spreading activation
should be controlled by some sort of measure of global gain control,
computational resource market, or other type of competitive measures used to
help focus the spreading activation on better scoring paths. 
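
As a rough Python sketch of that kind of control (all names and numbers
illustrative, not a proposal for the actual mechanism): the activation
passed at each hop scales with the link weight and the receiving node's
importance, and spreading stops below a threshold.

    def spread(graph, importance, source, energy, threshold=0.01):
        # graph[node] -> list of (neighbor, link_weight); assumes a toy
        # acyclic graph so the spread is guaranteed to terminate.
        activation = {source: energy}
        frontier = [(source, energy)]
        while frontier:
            node, e = frontier.pop()
            for neighbor, weight in graph.get(node, []):
                delta = e * weight * importance.get(neighbor, 1.0)
                if delta > threshold:
                    activation[neighbor] = activation.get(neighbor, 0.0) + delta
                    frontier.append((neighbor, delta))
        return activation

    graph = {"see-stripes": [("tiger", 0.6), ("zebra", 0.5)],
             "tiger": [("danger", 0.9)]}
    importance = {"danger": 1.5}   # a contextually important node
    print(spread(graph, importance, "see-stripes", 1.0))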

 

As in Shruti, AGI inferencing needs to be able to mix both forward and
backward chaining, and mix inferencing up and down compositional and
generalization hierarchies.  Also AGIs need to learn over time which
inferencing patterns are most successful for what types of problems, and
learn to tune the parameters when applying one or more sets of inferencing
patterns to a given problem, based not only on experience learned from past
performances of the inferencing task, but also from feedback during a given
execution of such a task. 

 

Clearly something akin to a goal system is needed, and clearly something is
needed to focus attention on the patterns that currently appear most
relevant to current goals, sub-goals, or other things of importance.

 

Inferencing is clearly one of the major things AGIs have to do.  Pattern
recognition can be viewed as a form of inferencing.  Even motor behavior can
be viewed as a type of inference.  For years there have been real world
control systems that have used if-then inference rules to control mechanical
outputs.

 

I don't know what you mean by the broad and narrow meaning of inferencing.
To me inferencing means implying or concluding one set of representations is
appropriate from another.  That's pretty broad.

 

I haven't thought about it enough to know if I would go so far as to say all
AGI is essentially inference problems, but clearly it is one of the major
things AGI is about.


JIM BROMER WROTE===

I only glanced at a couple of papers about SHRUTI, and I may be looking at a
different paper than you were talking about, but looking at the website it
looks like you were talking about a connectionist model.  Do you think a
connectionist model (probabilistic or not) is necessary for AGI?  In other
words, I think a lot of us agree that some kind of complex (or complicated)
system of interrelated data is necessary for AGI and this does correspond to
a network of some kind, but these are not necessarily connectionist.

ED PORTER

I don't know the exact definition of connectionist.  In its more strict
sense I think it tends to refer to systems where a high percentage of the
knowledge has been learned automatically and is represented in automatically
learned weights and/or automatically learned graph nodes or connections, and
there are no human defined 

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-13 Thread Ed Porter
Richard,  

I think Wikipedia's definition of forward chaining (copied below) agrees
with my stated understanding as to what forward chaining means, i.e.,
reasoning from the if (i.e., conditions) to the then (i.e.,
consequences) in if-then statements.  

So, once again there is an indication you have unfairly criticized the
statements of another.

Ed Porter

==Wikipedia defines forward chaining as: ==

Forward chaining is one of the two main methods of reasoning when using
inference rules (in artificial intelligence). The other is backward
chaining.

Forward chaining starts with the available data and uses inference rules to
extract more data (from an end user for example) until an optimal goal is
reached. An inference engine using forward chaining searches the inference
rules until it finds one where the antecedent (If clause) is known to be
true. When found it can conclude, or infer, the consequent (Then clause),
resulting in the addition of new information to its data.

Inference engines will often cycle through this process until an optimal
goal is reached.

For example, suppose that the goal is to conclude the color of my pet Fritz,
given that he croaks and eats flies, and that the rule base contains the
following four rules:

If X croaks and eats flies - Then X is a frog 
If X chirps and sings - Then X is a canary 
If X is a frog - Then X is green 
If X is a canary - Then X is yellow 

This rule base would be searched and the first rule would be selected,
because its antecedent (If Fritz croaks and eats flies) matches our data.
Now the consequent (Then X is a frog) is added to the data. The rule base
is again searched and this time the third rule is selected, because its
antecedent (If Fritz is a frog) matches our data that was just confirmed.
Now the new consequent (Then Fritz is green) is added to our data. Nothing
more can be inferred from this information, but we have now accomplished our
goal of determining the color of Fritz.

Because the data determines which rules are selected and used, this method
is called data-driven, in contrast to goal-driven backward chaining
inference. The forward chaining approach is often employed by expert
systems, such as CLIPS.

One of the advantages of forward-chaining over backward-chaining is that the
reception of new data can trigger new inferences, which makes the engine
better suited to dynamic situations in which conditions are likely to
change.
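
The cycle described above fits in a few lines of Python (a direct sketch of
the quoted algorithm, illustrative only; run here to closure rather than
stopping at a goal):

    RULES = [
        ({"croaks", "eats flies"}, "is a frog"),
        ({"chirps", "sings"}, "is a canary"),
        ({"is a frog"}, "is green"),
        ({"is a canary"}, "is yellow"),
    ]

    def forward_chain(facts):
        # Search the rules for ones whose antecedent (If clause) is known,
        # add their consequents (Then clauses) to the data, and repeat.
        facts = set(facts)
        while True:
            fired = [t for ifs, t in RULES if ifs <= facts and t not in facts]
            if not fired:
                return facts   # nothing more can be inferred
            facts.update(fired)

    print(forward_chain({"croaks", "eats flies"}))
    # {'croaks', 'eats flies', 'is a frog', 'is green'} -- Fritz is green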


-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Saturday, July 12, 2008 7:42 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

Jim Bromer wrote:
 Ed Porter said:
 
 It should be noted that Shruti uses a mix of forward chaining and backward
 chaining, with an architecture for controlling when and how each is used.
 ...
 
 My understanding is that forward reasoning is reasoning from conditions to
 consequences, and backward reasoning is the opposite. But I think what is a
 condition and what is a consequence is not always clear, since one can use
 if A then B rules to apply to situations where A occurs before B, B occurs
 before A, and A and B occur at the same time. Thus I think the notion of
 what is forward and backward chaining might be somewhat arbitrary, and could
 be better clarified if it were based on temporal relationships. I see no
 reason that Shruti's "?" activation should not be spread across all
 those temporal relationships, and be distinguished from Shruti's "+" and
 "-" probabilistic activation by not having a probability, but just a
 temporary attentional characteristic. Additional inference control mechanisms
 could then be added to control which directions in time to reason with in
 different circumstances, if activation pruning was necessary.
 

This is not correct.

Forward chaining is when the inference engine starts with some facts and 
then uses its knowledge base to explore what consequences can be derived 
from those facts.  Going in this direction the inference engine does not 
know where it will end up.

Backward chaining is when a hypothetical conclusion is given, and the 
engine tries to see what possible deductions might lead to this 
conclusion.  In general, the candidates generated in this first pass are 
not themselves directly known to be true (their antecedents are not 
facts in the knowledge base), so the engine has to repeat the procedure 
to see what possible deductions might lead to the candidates being true. 
  The process is repeated until it bottoms out in known facts that are 
definitely true or false, or until the knowledge base is exhausted, or 
until the end of the universe, or until the engine imposes a cutoff 
(this is one of the most common results).

The two procedures are quite fundamentally different.
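
A minimal Python sketch of that backward procedure (toy rule base,
illustrative only), including the depth cutoff mentioned above:

    RULES = [({"croaks", "eats flies"}, "is a frog"),
             ({"is a frog"}, "is green")]

    def backward_chain(goal, facts, depth=10):
        # Bottom out in known facts...
        if goal in facts:
            return True
        # ...or give up at the cutoff, one of the most common results.
        if depth == 0:
            return False
        # Otherwise find rules that could conclude the goal, and recurse
        # on their antecedents to see if they can be established.
        return any(then == goal and
                   all(backward_chain(a, facts, depth - 1) for a in ifs)
                   for ifs, then in RULES)

    print(backward_chain("is green", {"croaks", "eats flies"}))  # True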


Richard Loosemore





 Furthermore, Shruti does not use multi-level compositional hierarchies for
 many of its patterns, and it only uses 

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-13 Thread Ed Porter
Jim,

 

In my prior posts I have listed some of the limitations of Shruti.  The
lack of generalized generalizational and compositional hierarchies directly
relates to the problem of learning generalized rules from experience in
complex environments, where the surface representations of many high level
concepts are virtually never the same.
This relates to your issue about failing to model the complexity of
antecedents.

 

But as the Serre paper I have cited multiple times in this thread shows,
the type of gen/comp hierarchies needed is very complex.  His system
models a 160x160 pixel greyscale image patch with 23 million models, probably
each having something like 256 inputs, for a total of about 6 billion links,
and this is just to do very quick, feedforward, I-think-I-saw-a-lion
uncertain recognition for 1000 objects.  So for a Shruti-like system to capture
all the complexities involved in human level perception or semantic
reasoning would require much more in the way of computer resources than
Shastri had.
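
(Sanity-checking the arithmetic: 23 million models x roughly 256 inputs each
is about 5.9 billion, i.e., the "about 6 billion links" above.)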

 

So although Shruti's system is clearly very limited, it is amazing how much
it does considering how simple it is.

 

But the problem is not just complexity.  As I said, Shruti has some severe
architectural limitations.  But again, it was smart for Shastri to get his
simplified system up and running first before he made all the architectural
fixes required to make it more capable of more generalized implication and
learning.

 

I have actually spent some time thinking about how to generalize Shruti.
If those ideas, or their equivalent, are not in Ben's new Novamente book I may
take the trouble to write them up, but I am expecting a lot from Ben's new book.

 

I did not understand your last sentence.

 

Ed Porter

 

-Original Message-
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Sunday, July 13, 2008 3:47 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

 

I have read about half of Shastri's 1999 paper "Advances in Shruti - A
neurally motivated model of relational knowledge representation and rapid
inference using temporal synchrony" and I see that he is describing a
method of encoding general information and then using it to do a certain
kind of reasoning which is usually called inferential, although he seems to
have a novel way to do this using what he calls neural circuits. And he
does seem to touch on the multiple level issues that I am interested in.
The problem is that these kinds of systems, regardless of how interesting
they are, are not able to achieve extensibility because they do not truly
describe how the complexities of the antecedents would have themselves been
achieved (learned) using the methodology described. The unspoken assumption
behind these kinds of studies always seems to be that the one or two systems
of reasoning used in the method should be sufficient to explain how learning
takes place, but the failure to achieve intelligent-like behavior (as is
seen in higher intelligence) gives us a lot of evidence that there must be
more to it.

But, the real problem is just complexity (or complicatedity for Richard's
sake) isn't it?  Doesn't that seem like it is the real problem?  If the
program had the ability to try enough possibilities wouldn't it be likely to
learn after a while?  Well another part of the problem is that it would have
to get a lot of detailed information about how good its efforts were, and
this information would have to be pretty specific using the methods that are
common to most current thinking about AI.  So there seem to be two different
kinds of problems.  But the thing is, I think they are both complexity (or
complicatedity) problems.  Get a working solution for one, and maybe you'd
have a working solution for the other.

I think a working solution is possible, once you get beyond the simplistic
perception of seeing everything as if they were ideologically commensurate
just because you have the belief that you can understand them.
Jim Bromer

 


 






Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-13 Thread Richard Loosemore

Ed Porter wrote:
Richard,  


I think Wikipedia's definition of forward chaining (copied below) agrees
with my stated understanding as to what forward chaining means, i.e.,
reasoning from the if (i.e., conditions) to the then (i.e.,
consequences) in if-then statements.  


So, once again there is an indication you have unfairly criticized the
statements of another.


But  ... nothing in what I said contradicted the wikipedia 
definition of forward chaining.


Jim's statement was a misunderstanding of the meaning of forward and 
backward chaining because he oversimplified the two ("forward reasoning 
is reasoning from conditions to consequences, and backward reasoning is 
the opposite" ... this is kind of true, if you stretch the word 
"reasoning" a little, but it misses the point), and then he went from 
this oversimplification to come to a completely incorrect conclusion 
(...Thus I think the notion of what is forward and backward chaining 
might be somewhat arbitrary...).


This last conclusion was sufficiently inaccurate that I decided to point 
that out.  It was not a criticism, just a clarification;  a pointer in 
the right direction.



Richard Loosemore







Ed Porter

==Wikipedia defines forward chaining as: ==

Forward chaining is one of the two main methods of reasoning when using
inference rules (in artificial intelligence). The other is backward
chaining.

Forward chaining starts with the available data and uses inference rules to
extract more data (from an end user for example) until an optimal goal is
reached. An inference engine using forward chaining searches the inference
rules until it finds one where the antecedent (If clause) is known to be
true. When found it can conclude, or infer, the consequent (Then clause),
resulting in the addition of new information to its data.

Inference engines will often cycle through this process until an optimal
goal is reached.

For example, suppose that the goal is to conclude the color of my pet Fritz,
given that he croaks and eats flies, and that the rule base contains the
following four rules:

If X croaks and eats flies - Then X is a frog 
If X chirps and sings - Then X is a canary 
If X is a frog - Then X is green 
If X is a canary - Then X is yellow 


This rule base would be searched and the first rule would be selected,
because its antecedent (If Fritz croaks and eats flies) matches our data.
Now the consequent (Then X is a frog) is added to the data. The rule base
is again searched and this time the third rule is selected, because its
antecedent (If Fritz is a frog) matches our data that was just confirmed.
Now the new consequent (Then Fritz is green) is added to our data. Nothing
more can be inferred from this information, but we have now accomplished our
goal of determining the color of Fritz.

Because the data determines which rules are selected and used, this method
is called data-driven, in contrast to goal-driven backward chaining
inference. The forward chaining approach is often employed by expert
systems, such as CLIPS.

One of the advantages of forward-chaining over backward-chaining is that the
reception of new data can trigger new inferences, which makes the engine
better suited to dynamic situations in which conditions are likely to
change.


-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Saturday, July 12, 2008 7:42 PM

To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

Jim Bromer wrote:

Ed Porter said:

It should be noted that Shruti uses a mix of forward chaining and backward
chaining, with an architecture for controlling when and how each is used.
...

My understanding is that forward reasoning is reasoning from conditions to
consequences, and backward reasoning is the opposite. But I think what is a
condition and what is a consequence is not always clear, since one can use
if A then B rules to apply to situations where A occurs before B, B occurs
before A, and A and B occur at the same time. Thus I think the notion of
what is forward and backward chaining might be somewhat arbitrary, and could
be better clarified if it were based on temporal relationships. I see no
reason that Shruti's "?" activation should not be spread across all
those temporal relationships, and be distinguished from Shruti's "+" and
"-" probabilistic activation by not having a probability, but just a
temporary attentional characteristic. Additional inference control mechanisms
could then be added to control which directions in time to reason with in
different circumstances, if activation pruning was necessary.



This is not correct.

Forward chaining is when the inference engine starts with some facts and 
then uses its knowledge base to explore what consequences can be derived 
from those facts.  Going in this direction the inference engine does not 
know where it will end up.


Backward chaining is when a hypothetical 

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-12 Thread Jim Bromer
Ed Porter said:

It should be noted that Shruti uses a mix of forward chaining and backward
chaining, with an architecture for controlling when and how each is used.
... 

My understanding is that forward reasoning is reasoning from conditions to
consequences, and backward reasoning is the opposite.  But I think what is a
condition and what is a consequence is not always clear, since one can use
if A then B rules to apply to situations where A occurs before B, B occurs
before A, and A and B occur at the same time.  Thus I think the notion of
what is forward and backward chaining might be somewhat arbitrary, and could
be better clarified if it were based on temporal relationships.  I see no
reason that Shruti's "?" activation should not be spread across all
those temporal relationships, and be distinguished from Shruti's "+" and
"-" probabilistic activation by not having a probability, but just a
temporary attentional characteristic. Additional inference control mechanisms
could then be added to control which directions in time to reason with in
different circumstances, if activation pruning was necessary.

Furthermore, Shruti does not use multi-level compositional hierarchies for
many of its patterns, and it only uses generalizational hierarchies for slot
fillers, not for patterns.  Thus, it does not have many of the general reasoning
capabilities that are necessary for NL understanding.  Much of the spreading
activation in a more general purpose AGI would be up and down compositional
and generalizational hierarchies, which is not necessarily forward or
backward chaining, but which is important in NL understanding.  So I agree
that simple forward and backward chaining are not enough to solve general
inference problems of any considerable complexity.

---
Can you describe some of the kinds of systems that you think would be necessary 
for complex inference problems?  Do you feel that all AGI problems (other than 
those technical problems that would be common to a variety of complicated 
programs that use large data bases) are essentially inference problems?  Is 
your use of the term inference here intended to be inclusive of the various 
kinds of problems that would have to be dealt with or are you referring to a 
class of problems which are inferential in the more restricted sense of the 
term?  (I feel that the two senses of the term are both legitimate, I am just a 
little curious about what it was that you were saying.)

I only glanced at a couple of papers about SHRUTI, and I may be looking at a 
different paper than you were talking about, but looking at the website it 
looks like you were talking about a connectionist model.  Do you think a 
connectionist model (probabilistic or not) is necessary for AGI?  In other 
words, I think a lot of us agree that some kind of complex (or complicated) 
system of interrelated data is necessary for AGI and this does correspond to a 
network of some kind, but these are not necessarily connectionist.

What were you thinking of when you talked about multi-level compositional 
hierarchies that you suggested were necessary for general reasoning?

If I understood what you were saying, you do not think that activation 
synchrony is enough to create insightful binding given the complexities that 
are necessary for higher level (or more sophisticated) reasoning. On the other 
hand you did seem to suggest that temporal synchrony spread across a rhythmic 
flux of relational knowledge might be useful for detecting some significant 
aspects during learning.  What do you think?

I guess what I am getting at is I would like you to make some speculations 
about the kinds of systems that could work with complicated reasoning problems. 
 How would you go about solving the binding problem that you have been talking 
about?  (I haven't read the paper that I think you were referring to and I only 
glanced at one paper on SHRUTI but I am pretty sure that I got enough of what 
was being discussed to talk about it.)

Jim Bromer



  




Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-11 Thread Jim Bromer
 #ED PORTERS CURRENT RESPONSE 
 Forward and backward chaining are not hacks.  They have been two of the most
 commonly and often successfully used techniques in AI search for at least 30
 years.  They are not some sort of wave of the hand.  They are much more
 concretely grounded in successful AI experience than many of your much more
 ethereal, and very arguably hand waving, statements about how many of the
 difficult problems in AI are to be cured by some as yet unclearly defined
 emergence from complexity.

Richard Loosemore's response:
Oh dear:  yet again I have to turn a blind eye to the ad hominem insults.
--

There were no ad hominem insults in Ed's response. His comment about Richard's 
ethereal hand waving was clearly and unmistakably within the boundaries that 
Richard has set in his own criticisms again and again.  And Ed specified the 
target of the criticism when he spoke of the difficult problems in AI 
...[which]... are to be cured by some as yet unclearly defined emergence from 
complexity.  All Richard had to do was to answer the question, and instead he 
ran for cover behind this bogus charge of being the victim of an ad hominem 
insult.

If upon reflection, Richard sincerely believes that Ed's comment was an ad 
hominem insult, then we can take this comment as a basis for detecting the true 
motivation behind those comments of Richard which are so similar in form.

For example, Richard said,  Understanding that they only have the status of 
hacks is a very  important sign of maturity as an AI researcher.  There is a 
very deep truth buried in that fact.

While I have some partial agreement with Richard's side on this one particular 
statement, I can only conclude that by using Richard's own measure of ad 
hominem insults that Richard must have intended this remark to have that kind 
of effect.  Similarly, I feel comfortable with the conclusion that every time 
Richard uses his hand waving argument, there is a good chance that he is 
just using it as an all-purpose ad hominem insult.

It is too bad that Richard cannot discuss his complexity theory without running 
from the fact that his solution to the problem is based on his non-explanation 
that, 
...in this emergent (or, to be precise, complex system) answer to 
the question, there is no guarantee that binding will happen.  The 
binding problem in effect disappears - it does not need to be explicitly 
solved because it simply never arises.  There is no specific mechanism 
designed to construct bindings (although there are lots of small 
mechanisms that enforce constraints), there is only a general style of 
computation, which is the relaxation-of-constraints style.

From reading Richard's postings I think that Richard does not believe there is 
a problem because the nature of complexity itself will solve the problem - 
once someone is lucky enough to find the right combination of initial rules.

For those who believe that problems are solved through study and 
experimentation, Richard has no response to the most difficult problems in 
contemporary AI research except to cry foul.  He does not even consider such 
questions to be valid.

Jim Bromer



  




Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-11 Thread Richard Loosemore

Jim Bromer wrote:

  #ED PORTERS CURRENT RESPONSE 
  Forward and backward chaining are not hacks. They have been two of the most
  commonly and often successfully used techniques in AI search for at least 30
  years. They are not some sort of wave of the hand. They are much more
  concretely grounded in successful AI experience than many of your much more
  ethereal, and very arguably hand waving, statements about how many of the
  difficult problems in AI are to be cured by some as yet unclearly defined
  emergence from complexity.

Richard Loosemore's response:
Oh dear: yet again I have to turn a blind eye to the ad hominem insults.
--

There were no ad hominem insults in Ed's response. His comment about 
Richard's ethereal hand waving was clearly and unmistakably within the 
boundaries that Richard has set in his own criticisms again and again.  
And Ed specified the target of the criticism when he spoke of the 
difficult problems in AI ...[which]... are to be cured by some as yet 
unclearly defined emergence from complexity.  All Richard had to do was 
to answer the question, and instead he ran for cover behind this bogus 
charge of being the victim of an ad hominem insult.


Jim,

Take a more careful look, if you please.

Ed and I were talking about a particular *topic*, but then in the middle 
of the discussion about that topic, he suddenly declared that the 
techniques in question were much more concretely grounded in successful 
AI experience than many of your much more ethereal, and very arguably 
hand waving, statements about how many of the difficult problems in 
AI are to be cured by some as yet unclearly defined emergence from 
complexity.   Instead of trying to make statements about the topic, he 
tries to denigrate some proposals that I have made.  Whether my 
proposals are or are not worthy of such criticism, that has nothing to 
do with the topic that was under discussion.  He just took a moment out 
to make a quick insult.


To make matters worse, what he actually says about my proposals is also 
a pretty bad misrepresentation of what I have said.  My central claim is 
that there is a problem at the heart of the current AI methodology.  I 
have said that there is a sickness there.  I have also given an outline 
of a possible cure - but I have been quite clear to everyone that this 
is just an outline of the cure, nothing more.  Now, do you really think 
that a physician should be criticised for IDENTIFYING a malady, because 
he did not, in the same breath, also propose a CURE for the malady?


Finally, you yourself say that I ran for cover behind this bogus
charge of being the victim of an ad hominem insult  but I did 
nothing of the sort.  I went on to ignore the insult, giving as full a 
reply to his point as I would have done if the insult had not been there.


As I said, I turned a blind eye to it, albeit after pointing it out.

Tut tut.



If upon reflection, Richard sincerely believes that Ed's comment was an 
ad hominem insult, then we can take this comment as a basis for 
detecting the true motivation behind those comments of Richard which are 
so similar in form.


For example, Richard said,  Understanding that they only have the 
status of hacks is a very  important sign of maturity as an AI 
researcher. There is a very deep truth buried in that fact.


While I have some partial agreement with Richard's side on this one 
particular statement, I can only conclude that by using Richard's own 
measure of ad hominem insults that Richard must have intended this 
remark to have that kind of effect.  Similarly, I feel comfortable with 
the conclusion that every time Richard uses his hand waving argument, 
there is a good chance that he is just using it as an all-purpose ad 
hominem insult.


Excuse me?  Ad hominem means that the remarks were designed to win an 
argument by insulting the other person.  Ed is not an AI researcher, he 
admits, himself, that he has only an outsider's perspective on this 
field, that he is learning.  I was mostly directing that comment at 
people who claim to be far more experienced than he.



It is too bad that Richard cannot discuss his complexity theory without 
running from the fact that his solution to the problem is based on his 
non-explanation that,

...in this emergent (or, to be precise, complex system) answer to
the question, there is no guarantee that binding will happen. The
binding problem in effect disappears - it does not need to be explicitly
solved because it simply never arises. There is no specific mechanism
designed to construct bindings (although there are lots of small
mechanisms that enforce constraints), there is only a general style of
computation, which is the relaxation-of-constraints style.

 From reading Richard's postings I think that Richard does not believe 
there is a problem because the nature of complexity itself will solve 
the problem - once someone 

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-11 Thread Richard Loosemore

Ed Porter wrote:

## RICHARD LOOSEMORE LAST EMAIL #
My preliminary response to your suggestion that other Shastri papers 
do
describe ways to make binding happen correctly is as follows:  anyone 
can suggest ways that *might* cause correct binding to occur - anyone 
can wave their hands, write a program, and then say backward chaining 
- but there is a world of difference between suggesting mechanisms that 
*might* do it, and showing that those mechanisms actually do cause 
correct bindings to be established in practice.


What happens in practice is that the proposed mechanisms work for (a)
toy cases for which they were specifically designed to work, and/or (b) 
a limited number of the more difficult cases, and that what we also find 
is that they (c) tend to screw up in all kinds of interesting ways when 
the going gets tough.  At the end of the day, these proposals don't 
solve the binding problem, they just work some of the time, with no 
clear reason given why they should work all of the time.  They are, in a 
word, hacks.


Understanding that they only have the status of hacks is a very
important sign of maturity as an AI researcher.  There is a very deep 
truth buried in that fact.


#ED PORTERS CURRENT RESPONSE 
Forward and backward chaining are not hacks.  They have been two of the 
most commonly and successfully used techniques in AI search for at 
least 30 years.  They are not some sort of wave of the hand.  They 
are much more concretely grounded in successful AI experience than 
many of your much more ethereal, and very arguably hand-waving, 
statements about how many of the difficult problems in AI are to be 
cured by some as-yet-unclearly-defined emergence from complexity.
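
For readers who have not used these techniques, here is a minimal sketch of
both algorithms over a toy rule base in Python (the rules are invented for
illustration; real engines add conflict resolution, Rete-style matching,
certainty factors, and so on):

# Each rule: (set of antecedent facts, consequent fact).
RULES = [
    ({"man"}, "mortal"),               # if man then mortal
    ({"mortal", "famous"}, "quoted"),  # if mortal and famous then quoted
]

def forward_chain(facts):
    """Data-driven: fire every rule whose antecedents hold, to a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in RULES:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

def backward_chain(goal, facts, depth=10):
    """Goal-driven: recursively reduce the goal to known facts."""
    if goal in facts:
        return True
    if depth == 0:
        return False
    return any(consequent == goal and
               all(backward_chain(a, facts, depth - 1) for a in antecedents)
               for antecedents, consequent in RULES)

print(forward_chain({"man", "famous"}))             # adds 'mortal', 'quoted'
print(backward_chain("quoted", {"man", "famous"}))  # True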



## RICHARD LOOSEMORE LAST EMAIL #

Oh dear:  yet again I have to turn a blind eye to the ad hominem insults.


#ED PORTERS CURRENT RESPONSE 

Why is it that your criticism of my ideas as being dead ends, hand
waving, and hacks is not an ad hominem insult, but my use of similar or
even less critical language against you is? 


Because:

(a) These are not your ideas I am criticising, they are the proposals of 
many other people.


(b) Because even if they were your ideas, I would be criticising them by 
giving exact and detailed reasons for the criticisms.


(c) Because your reference to my other proposals has no relevance to the 
topic under discussion:  you suddenly, out of the blue, tried to defend 
your argument by heaping abuse on some OTHER proposals of mine, which 
had nothing to do with the argument itself.



When someone tries to win an argument about a topic by attacking the 
credibility of the person with whom they are arguing, that is standardly 
taken as an ad hominem.


## RICHARD LOOSEMORE LAST EMAIL #
Binding is finding out if two referents are the same thing, or if they 
are closely linked by some intermediary.  Here is an example sentence 
that needs to have its referents untangled:


"At one time, one of them might have said to one of his own that one and 
one and one is one, but one does not know whether, on the one hand, this 
is one of those platitudes that those folks are so fond of, or if, on 
the other, the One is literally meant to be more than one."


...backward chaining is not being used to proceed to the solution of the
binding problem in this case, it is being used as a mechanism that might, if
you are lucky, work.

A mechanism that might, if you are lucky, work is a hack.  


#ED PORTERS CURRENT RESPONSE 

This NL understanding problem is much more complex than the type of predicate
logic that Shruti dealt with.  


Your original claim was that Shruti had no way to learn what should talk to
what for purposes of determining binding in the class of problems it dealt
with.


That is completely incorrect.  I made no such claim.

I have been arguing that it is not good enough to solve the binding 
problem unless you can solve it in the general case.  If all you can 
do is show that some mechanism works for a restricted (toy) set of 
cases, you have not solved the problem, but just hacked something to 
work in one set of circumstances.


I have further argued that it is not good enough to say that all you 
need to solve the general problem is more of whatever you used to 
solve the easier case.  In general, this is not true, it is simply 
speculation.


I also pointed out that people claim to solve the binding problem when 
in fact they solve something different - a lesser problem of (so to 
speak) making the actual phone connection after you have discovered who 
you need to call.




I think anyone who read the Shastri paper I gave a link to and who
read my prior discussion in this thread, and who can think in AI terms
could, contrary to your implication, figure out how to make a Shruti-like
system determine which things should talk to which for purposes of binding
in the class 

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-10 Thread Ed Porter
## RICHARD LOOSEMORE WROTE #
Now I must repeat what I said before about some (perhaps many?) claimed 
solutions to the binding problem:  these claimed solutions often 
establish the *mechanism* by which a connection could be established IF 
THE TWO ITEMS WANT TO TALK TO EACH OTHER.  In other words, what these 
people (e.g. Shastri and Ajjanagadde) do is propose a two-step 
solution:  (1) the two instances magically decide that they need to get 
hooked up, and (2) then, some mechanism must allow these two to make 
contact and set up a line to one another.  Think of it this way:  (1) 
You decide that at this moment that you need to call Britney Spears, and 
(2) You need some mechanism whereby you can actually establish a phone 
connection that goes from your place to Britney's place.

The crazy part of this solution to the binding problem is that people 
often make the quiet and invisible assumption that (1) is dealt with 
(the two items KNOW that they need to talk), and then they go on to work 
out a fabulously powerful way (e.g. using neural synchronisation) to get 
part (2) to happen.  The reason this is crazy is that the first part IS 
the binding problem, not the second part!  The second phase (the 
practical aspects of making the phone call get through) is just boring 
machinery.  By the time the two parties have decided that they need to 
hook up, the show is already over... the binding problem has been 
solved.  But if you look at papers describing these so-called solutions 
to the binding problem you will find that the first part is never talked 
about.

At least, that was true of the S & A paper, and at least some of the 
papers that followed it, so I gave up following that thread in utter 
disgust. 

 MY REPLY 
[Your description of Shastri's work is inaccurate --- at least from his
papers I have read, which include among others,  Advances in Shruti -- A
neurally motivated model of relational knowledge representation and rapid
inference using temporal synchrony Applied Intelligence, 11: 79-108, 1999 (
http://www.icsi.berkeley.edu/~shastri/psfiles/shruti_adv_98.ps ); and
Massively parallel knowledge representation and reasoning: Taking a cue
from the Brain, by Shastri and Mani.

It is obvious from reading Shastri that his notion of what should talk to
what (i.e., what should be searched by spreading activation) is determined by a
form of forward and/or backward chaining, which can automatically be learned
from temporal associations between pattern activations, and the bindings
involved can be learned by the occurrences of the same one or more pattern
element instances as a part or as an attribute in one or more of those
temporally related patterns.

Shruti's representational scheme has limitations that make it ill-suited
for use as the general representation scheme in an AGI (problems which I
think can be fixed with a more generalized architecture), but the particular
problem you are accusing his system of here --- i.e., that it provides no
guidance as to what should be searched for when to answer a given query ---
is not in fact a problem (other than the issue of possible exponential
explosion of the search tree, which is discussed in my answers below)]
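
For readers who have not looked at the Shruti papers, the temporal-synchrony
idea can be caricatured in a few lines of Python: role and filler nodes that
fire in the same phase of a gamma-like cycle are treated as bound.  This is
a cartoon of the mechanism only, with hand-assigned phases, not Shastri's
actual model:

# "give(John, Mary, book)": each role fires in its own phase slot of the
# cycle, and each filler fires in the phase of the role it is bound to.
role_phase   = {"giver": 0, "recipient": 1, "object": 2}
filler_phase = {"John": 0, "Mary": 1, "book": 2}

def bindings(role_phase, filler_phase):
    """Read a binding off wherever a role and a filler share a phase."""
    return {role: [f for f, p in filler_phase.items() if p == phase]
            for role, phase in role_phase.items()}

print(bindings(role_phase, filler_phase))
# {'giver': ['John'], 'recipient': ['Mary'], 'object': ['book']}

Note that this implements only what Richard calls part (2): the phase
assignments are given by hand, and how the right nodes come to share a phase
is precisely the step under dispute.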

## RICHARD LOOSEMORE WROTE #
It is very important to break through this confusion and find out 
exactly why the two relevant entities would decide to talk to each 
other.  Solving any other aspect of the problem is not of any value.

Now, going back to your question about how it would happen:  if you look 
for a deterministic solution to the problem, I am not sure you can come 
up with a general answer.  Whereas there is a nice, obvious solution to 
the question Is Socrates mortal? given the facts Socrates is a man 
and All men are mortal, it is not at all clear how to do more complex 
forms of binding without simply doing massive searches.

 MY REPLY 
[You often do have to do massive searches -- it is precisely because the
the human brain can do such massive searches (averaging roughly 3 to 300
trillion/second in the cortex alone)  that lets us so often come up with the
appropriate memory or reason at the appropriate time.  But the massive
searches in a large Shruti-like or Novamente-like system are not
totally-blind searches --- instead they are often massive search guided by
forward and/or backward chaining -- by previously learned and/or recently
activated probabilities and importances --- by relative scores of various
search threads or pattern activations --- by inference patterns that may
have proven successful in previous similar searches --- by similar episodic
memories --- and by interaction with the current context as represented
by the other current activations] 
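
A minimal sketch of such guided (as opposed to blind) spreading activation,
with invented nodes, link strengths, and threshold, purely to illustrate the
style described in the bracketed reply above:

import heapq

# Directed links with learned strengths (probabilities/importances).
LINKS = {
    "query":    [("fact_A", 0.9), ("fact_B", 0.2)],
    "fact_A":   [("memory_1", 0.8), ("memory_2", 0.3)],
    "fact_B":   [("memory_3", 0.7)],
    "memory_1": [("answer", 0.9)],
}

def spread(source, threshold=0.2):
    """Best-first spreading activation; weak activations are pruned."""
    act = {source: 1.0}
    frontier = [(-1.0, source)]          # max-heap via negated activation
    while frontier:
        neg_a, node = heapq.heappop(frontier)
        for nbr, strength in LINKS.get(node, []):
            a = -neg_a * strength        # activation decays along the link
            if a > threshold and a > act.get(nbr, 0.0):
                act[nbr] = a
                heapq.heappush(frontier, (-a, nbr))
    return act

print(spread("query"))
# The weak 'fact_B' thread is pruned at the threshold, while the strong
# thread spreads on to 'answer': massive search, but not blind search.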

## RICHARD LOOSEMORE WROTE #
Or rather, it is not clear how you can *guarantee* the finding of a
solution. 

 MY REPLY 
 [in such massive searching, humans often miss the most 

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-10 Thread Ed Porter
=FROM ED'S ORIGINAL POST=

it is precisely because the human brain can do such massive searches
(averaging roughly 3 to 300 trillion/second in the cortex alone)  that lets
us so often come up with the appropriate memory or reason at the appropriate
time.  

 

== MIKE'S RESPONSE=

Do you think the brain works by massive search in dealing with problems?
Chess - a top master may consider consciously v. roughly 150 moves in a
minute. Do you think his unconscious brain is considering a lot more? How
many, roughly in what time?

 

===ED PORTER =

Deep Blue showed the power of massive search in chess.

 

In the human brain, we are not capable of massive exact searches,
particularly ones involving complex, rapid, accurate sequential processing,
but we are capable of massive, less accurate parallel search.  If a human
considers 150 moves a minute, that is a period with roughly 2000 gamma
waves, during each of which there can be a massive, separately encoded
spreading activation.  And many forms of spreading activation may be
independent, to various degrees, from such gamma waves.  

 

I think the consideration of each move probably involves massive searches in
memory for patterns related to that move in the current context, and
multiple massive searches involving multiple levels of implication from such
patterns.

 

If you think about how implications spread, you realize that it is easy to
have millions or billions of potential activations in just 3 or 4 inferencing
steps, without some sort of filtering process. 
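
The fan-out arithmetic behind that claim is easy to check (the branching
factor here is a hypothetical round number, not a measured cortical figure):

# With branching factor b, unfiltered spreading activation reaches roughly
# b**d nodes after d inferencing steps.
b = 1000
for d in (2, 3, 4):
    print(d, "steps:", b ** d)   # 10**6, 10**9, 10**12 -- hence the filtering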

 

You also have to realize that your consciousness is only aware of the small
portion of the activations in your brain which win the competitive process
for the attention necessary to make your consciousness aware of them.

 

== MIKE'S RESPONSE=

Name 10 famous Frenchmen.  How many Frenchmen, roughly, do you think your
brain is checking out, and how fast, as you deal with that?

 

===ED PORTER =

This task might not require as broad a search as some, because it only
requires relatively few indices and the number of objects indexed by some
of those indices is rather small, but I would still assume it involves
millions of activations.

 

I think the brain's indexing is often much less simple and clear than that
used in most simple database programs.  You may well not have a clearly
defined index ("is a French person"), and even if you do, many people whom you
know of who are French may not be clearly labeled under it.  Instead you
probably have experiences of people in many different contexts which might
indicate they are French.  

 

Also I think our brains often can most quickly recover things that are
indexed by multiple indices.  If you recently spent a year working in France
for a French company with many French co-workers, you would probably be able
to rattle off names of Frenchmen much more quickly because you could just
think of all the people you have spent hundreds of hours with within the
offices of your French employer, and there you would have many indices
coding for the desired quality, making the appropriate answer pop out above
the noise much more boldly.
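
A toy sketch of that multiple-index effect (all names and cue sets are
invented): a memory reachable through several simultaneously active cues
accumulates more activation and pops out above the noise:

MEMORIES = {
    "Pierre": {"coworker", "France-office", "speaks-French"},
    "Marie":  {"coworker", "France-office"},
    "TriviaFact": {"speaks-French"},     # weakly indexed: one cue only
}

def recall(active_cues):
    """Score each memory by how many active cues index it."""
    scores = {m: len(cues & active_cues) for m, cues in MEMORIES.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(recall({"coworker", "France-office", "speaks-French"}))
# Pierre (3 converging cues) outranks Marie (2) and TriviaFact (1).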

 

== MIKE'S RESPONSE=

Do you dispute Hawkins' one hundred step rule? He argues that the brain
can recognize a face in 1/2 sec. - which can involve information traversing
a chain of at most 100 neurons in that time. And the largest conceivable
parallel computer can't do anything useful in one hundred steps, no matter
how large or how fast. [See On Intelligence pp 66-7] This rule would
presumably severely limit the number  of associations that can be made with
any idea in a given time, or no?
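
For reference, the arithmetic behind the hundred-step rule is just
recognition time divided by the per-step latency of a neuron (the 5 ms
figure is an order-of-magnitude assumption):

recognition_time_s = 0.5    # face recognized in about half a second
per_step_s = 0.005          # ~5 ms for a neuron to integrate and fire
print(recognition_time_s / per_step_s)   # ~100 strictly sequential steps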

 

===ED PORTER =

The Poggio/Serre work that I have cited so many times before (including my
post that started this thread) provides a working computer model for the
very type of fast feed-forward object recognition that is done very rapidly
by the brain.  I think it was modeling the type of recognition the brain can
do in about 150ms, which lets you think you saw an alligator, a lion, a dog,
a fish, etc.  In that system a 160x160 pixel input patch required 23 million
models, each with many inputs and outputs.  If I remember correctly the lower
level models had 16x16 pixel receptive fields, which is 256 inputs for each
such model, so presumably many tens of millions of node-to-node
communications would be involved in the spreading activation of each such
rapid recognition.  And the 160x160 grayscale input space is much smaller
than the input space of the human visual field, which is probably a roughly
300x300 foveated field with red, green, blue, grayscale, and stereo vision
--- a whole separate set of models for motion perception --- and an ability
to recognize many more than the, I think, roughly 1000K objects the
Poggio/Serre system could recognize.  To top this all off, if a person is
scanning a changing scene, this process of many millions of activations
could be 

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-10 Thread Richard Loosemore

Ed Porter wrote:

## RICHARD LOOSEMORE WROTE #
Now I must repeat what I said before about some (perhaps many?) claimed 
solutions to the binding problem:  these claimed solutions often 
establish the *mechanism* by which a connection could be established IF 
THE TWO ITEMS WANT TO TALK TO EACH OTHER.  In other words, what these 
people (e.g. Shastri and Ajjannagadde) do is propose a two step 
solution:  (1) the two instances magically decide that they need to get 
hooked up, and (2) then, some mechanism must allow these two to make 
contact and set up a line to one another.  Think of it this way:  (1) 
You decide that at this moment that you need to call Britney Spears, and 
(2) You need some mechanism whereby you can actually establish a phone 
connection that goes from your place to Britney's place.


The crazy part of this solution to the binding problem is that people 
often make the quiet and invisible assumption that (1) is dealt with 
(the two items KNOW that they need to talk), and then they go on to work 
out a fabulously powerful way (e.g. using neural synchronisation) to get 
part (2) to happen.  The reason this is crazy is that the first part IS 
the binding problem, not the second part!  The second phase (the 
practical aspects of making the phone call get through) is just boring 
machinery.  By the time the two parties have decided that they need to 
hook up, the show is already over... the binding problem has been 
solved.  But if you look at papers describing these so-called solutions 
to the binding problem you will find that the first part is never talked 
about.


At least, that was true of the S & A paper, and at least some of the 
papers that followed it, so I gave up following that thread in utter 
disgust. 


 MY REPLY 
[Your description of Shastri's work is inaccurate --- at least from his
papers I have read, which include among others,  Advances in Shruti -- A
neurally motivated model of relational knowledge representation and rapid
inference using temporal synchrony Applied Intelligence, 11: 79-108, 1999 (
http://www.icsi.berkeley.edu/~shastri/psfiles/shruti_adv_98.ps ); and
Massively parallel knowledge representation and reasoning: Taking a cue
from the Brain, by Shastri and Mani.

It is obvious from reading Shastri that his notion of what should talk to
what (i.e., what should be searched by spreading activation) is determined by a
form of forward and/or backward chaining, which can automatically be learned
from temporal associations between pattern activations, and the bindings
involved can be learned by the occurrences of the same one or more pattern
element instances as a part or as an attribute in one or more of those
temporally related patterns.

Shruti's representational scheme has limitations that make it ill-suited
for use as the general representation scheme in an AGI (problems which I
think can be fixed with a more generalized architecture), but the particular
problem you are accusing his system of here --- i.e., that it provides no
guidance as to what should be searched for when to answer a given query ---
is not in fact a problem (other than the issue of possible exponential
explosion of the search tree, which is discussed in my answers below)]




## RICHARD LOOSEMORE WROTE #
It is very important to break through this confusion and find out 
exactly why the two relevant entities would decide to talk to each 
other.  Solving any other aspect of the problem is not of any value.


Now, going back to your question about how it would happen:  if you look 
for a deterministic solution to the problem, I am not sure you can come 
up with a general answer.  Whereas there is a nice, obvious solution to 
the question Is Socrates mortal? given the facts Socrates is a man 
and All men are mortal, it is not at all clear how to do more complex 
forms of binding without simply doing massive searches.


 MY REPLY 
[You often do have to do massive searches -- it is precisely because the
the human brain can do such massive searches (averaging roughly 3 to 300
trillion/second in the cortex alone)  that lets us so often come up with the
appropriate memory or reason at the appropriate time.  But the massive
searches in a large Shruti-like or Novamente-like system are not
totally-blind searches --- instead they are often massive search guided by
forward and/or backward chaining -- by previously learned and/or recently
activated probabilities and importances --- by relative scores of various
search threads or pattern activations --- by inference patterns that may
have proven successful in previous similar searches --- by similar episodic
memories --- and by interaction with the current context as represented
by the other current activations] 


Well, I will hold my fire until I get to your comments below, but I must 
insist that what I said was accurate:  his first major paper on this 
topic was a sleight of hand that avoided the 

Re: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-03 Thread Richard Loosemore

Ed Porter wrote:

WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

 

Here is an important practical, conceptual problem I am having trouble 
with.


 

In an article entitled “Are Cortical Models Really Bound by the ‘Binding 
Problem’?” Tomaso Poggio’s group at MIT takes the position that there 
is no need for special mechanisms to deal with the famous “binding 
problem” --- at least in certain contexts, such as 150 msec feed forward 
visual object recognition.  This article implies that a properly 
designed hierarchy of patterns that has both compositional and 
max-pooling layers (I call them “gen/comp hierarchies”) automatically 
handles the problem of what sub-elements are connected with which 
others, preventing the need for techniques like synchrony to handle this 
problem.


 

Poggio’s group has achieved impressive results without the need for 
special mechanisms to deal with binding in this type of visual 
recognition, as is indicated by the two papers below by Serre (the later 
of which summarizes much of what is in the first, which is an excellent, 
detailed PhD thesis.) 

 

The two  works by Geoffrey Hinton cited below are descriptions of 
Hinton’s hierarchical feed-forward neural net recognition system (which, 
when run backwards, generates patterns similar to those it has been 
trained on).  These two works by Hinton show impressive results in 
handwritten digit recognition without any explicit mechanism for 
binding.  In particular, watch the portion of the Hinton YouTube video 
starting at 21:35 - 26:39 where Hinton shows his system alternating 
between recognizing a pattern and then generating a similar pattern 
stochastically from the higher level activations that have resulted from 
the previous recognition.  See how amazingly well his system seems to 
capture the many varied forms in which the various parts and sub-shapes 
of numerical handwritten digits are related.


 

So my question is this: HOW BROADLY DOES THE IMPLICATION THAT THE 
BINDING PROBLEM CAN BE AUTOMATICALLY HANDLED BY A GEN/COMP HIERARCHY OR 
A HINTON-LIKE HIERARCHY APPLY TO THE MANY TYPES OF PROBLEMS A BRAIN 
LEVEL ARTIFICIAL GENERAL INTELLIGENCE WOULD BE EXPECTED TO HANDLE?  In 
particular HOW APPLICABLE IS IT TO SEMANTIC PATTERN RECOGNITION AND 
GENERATION --- WITH ITS COMPLEX AND HIGHLY VARIED RELATIONS --- SUCH AS 
IS COMMONLY INVOLVED IN HUMAN LEVEL NATURAL LANGUAGE UNDERSTANDING AND 
GENERATION?


The answer lies in the confusion over what the binding problem 
actually is.  There are many studies out there that misunderstand the 
problem in such a substantial way that their conclusions are 
meaningless.  I refer, for example, to the seminal paper by Shastri and 
Ajjanagadde, which I remember discussing with a colleague (Janet Vousden) 
back in the early 90s.  We both went into that paper in great depth, and 
independently came to the conclusion that S & A had their causality so 
completely screwed up that the paper said nothing at all:  they claimed 
to be able to explain binding by showing that synchronized firing could 
make it happen, but they completely failed to show how the RELEVANT 
neurons would become synchronized.


Distressingly, the Shastri and Ajjanagadde paper then went on to become, 
as I say, seminal, and there has been a lot of research on something 
that these people call the binding problem, but which seems (from my 
limited coverage of that area) to be about getting various things to 
connect using synchronized signals, but without any explanation of how 
the things that are semantically required to connect actually connect.


So, to be able to answer your question, you have to be able to 
disentangle that entire mess and become clear what is the real binding 
problem, what is the fake binding problem, and whether the new idea 
makes any difference to one or other of these.


In my opinion, it sounds like Poggio is correct in making the claim that 
he does, but that Janet Vousden and I already understood that general 
point back in 1994, just by using general principles.  And, most 
probably, the solution Poggio refers to DOES apply as well to what you 
are calling the semantic level.


 

The paper “Are Cortical Models Really Bound by the ‘Binding Problem’?”, 
suggests in the first full paragraph on its second page that gen/comp 
hierarchies avoid the “binding problem” by


 

“coding an object through a set of intermediate features made up of 
local arrangements of simpler features [that] sufficiently constrain the 
representation to uniquely code complex objects without retaining global 
positional information.”


This is exactly the position that I took a couple of decades ago.  You 
will recall that I am always talking about doing this with CONSTRAINTS, 
and using those constraints at many different levels of the hierarchy.


 


For example, in the context of speech recognition,

 

...rather than using individual letters to code words, letter pairs or 

Re: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-03 Thread Abram Demski
In general I agree with Richard Loosemore's reply.

Also, I think that it is not surprising that the approaches referred
to (gen/comp hierarchies, Hinton's hierarchies, hierarchical-temporal
memory, and many similar approaches) become too large if we try to use
them for more than the first few levels of perception. The reason is
not because recursive composition becomes insufficient, but rather
because these systems do not take full advantage of it: they typically
cannot model arbitrary context-free patterns, much less
context-sensitive and beyond. Their computational power is low, so to
compensate the model size becomes large. (It's like trying to
approximate a Turing machine with a finite-state machine: more and
more states are needed, and although the approximation gets better, it
is never enough.)
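
A concrete miniature of this point, using the classic context-free pattern
a^n b^n (invented code illustrating the size argument, not any of the cited
systems):

def fsm_up_to(k):
    """A finite-state-style lookup accepting a^n b^n only for n <= k.
    Its size grows with the bound -- the approximation is never enough."""
    return {("a" * n + "b" * n) for n in range(1, k + 1)}

def counter_machine(s):
    """One counter handles every n with no growth in model size."""
    count, seen_b = 0, False
    for ch in s:
        if ch == "a":
            if seen_b:
                return False
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:
                return False
        else:
            return False
    return count == 0 and seen_b

table = fsm_up_to(3)   # three hard-coded cases; "aaaabbbb" falls outside
print("aaabbb" in table, "aaaabbbb" in table)                  # True False
print(counter_machine("aaabbb"), counter_machine("aaaabbbb"))  # True True

The fixed table must grow with every new n it is asked to cover, while the
counter machine's model size stays constant: computational power substitutes
for model size.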

On Thu, Jul 3, 2008 at 1:41 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Ed Porter wrote:

 WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?


 Here is an important practical, conceptual problem I am having trouble
 with.


 In an article entitled Are Cortical Models Really Bound by the 'Binding
 Problem'?  Tomaso Poggio's group at MIT takes the position that there is no
 need for special mechanisms to deal with the famous binding problem --- at
 least in certain contexts, such as 150 msec feed forward visual object
 recognition.  This article implies that a properly designed hierarchy of
 patterns that has both compositional and max-pooling layers (I call them
 gen/comp hierarchies) automatically handles the problem of what
 sub-elements are connected with which others, preventing the need for
 techniques like synchrony to handle this problem.


 Poggio's group has achieved impressive results without the need for
 special mechanisms to deal with binding in this type of visual recognition,
 as is indicated by the two papers below by Serre (the later of which
 summarizes much of what is in the first, which is an excellent, detailed PhD
 thesis.)

 The two  works by Geoffrey Hinton cited below are descriptions of Hinton's
 hierarchical feed-forward neural net recognition system (which, when run
 backwards, generates patterns similar to those it has been trained on).
  These two works by Hinton show impressive results in handwritten digit
 recognition without any explicit mechanism for binding.  In particular,
 watch the portion of the Hinton YouTube video starting at 21:35 - 26:39
 where Hinton shows his system alternating between recognizing a pattern and
 then generating a similar pattern stochastically from the higher level
 activations that have resulted from the previous recognition.  See how
 amazingly well his system seems to capture the many varied forms in which
 the various parts and sub-shapes of numerical handwritten digits are
 related.


 So my question is this: HOW BROADLY DOES THE IMPLICATION THAT THE BINDING
 PROBLEM CAN BE AUTOMATICALLY HANDLED BY A GEN/COMP HIERARCHY OR A
 HINTON-LIKE HIERARCHY APPLY TO THE MANY TYPES OF PROBLEMS A BRAIN LEVEL
 ARTIFICIAL GENERAL INTELLIGENCE WOULD BE EXPECTED TO HANDLE?  In particular
 HOW APPLICABLE IS IT TO SEMANTIC PATTERN RECOGNITION AND GENERATION --- WITH
 ITS COMPLEX AND HIGHLY VARIED RELATIONS --- SUCH AS IS COMMONLY INVOLVED IN
 HUMAN LEVEL NATURAL LANGUAGE UNDERSTANDING AND GENERATION?

 The answer lies in the confusion over what the binding problem actually
 is.  There are many studies out there that misunderstand the problem in such
 a substantial way that their conclusions are meaningless.  I refer, for
 example, to the seminal paper by Shastri and Ajjanagadde, which I remember
 discussing with a colleague (Janet Vousden) back in the early 90s.  We both
 went into that paper in great depth, and independently came to the conclusion
 that S & A had their causality so completely screwed up that the paper said
 nothing at all:  they claimed to be able to explain binding by showing that
 synchronized firing could make it happen, but they completely failed to show
 how the RELEVANT neurons would become synchronized.

 Distressingly, the Shastri and Ajjanagadde paper then went on to become, as I
 say, seminal, and there has been a lot of research on something that these
 people call the binding problem, but which seems (from my limited coverage
 of that area) to be about getting various things to connect using
 synchronized signals, but without any explanation of how the things that
 are semantically required to connect actually connect.

 So, to be able to answer your question, you have to be able to disentangle
 that entire mess and become clear what is the real binding problem, what is
 the fake binding problem, and whether the new idea makes any difference to
 one or other of these.

 In my opinion, it sounds like Poggio is correct in making the claim that he
 does, but that Janet Vousden and I already understood that general point
 back in 1994, just by using general principles.  And, most probably, the
 solution Poggio 

[agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-02 Thread Ed Porter
WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

 

Here is an important practical, conceptual problem I am having trouble with.


 

In an article entitled "Are Cortical Models Really Bound by the 'Binding
Problem'?" Tomaso Poggio's group at MIT takes the position that there is no
need for special mechanisms to deal with the famous binding problem --- at
least in certain contexts, such as 150 msec feed forward visual object
recognition.  This article implies that a properly designed hierarchy of
patterns that has both compositional and max-pooling layers (I call them
gen/comp hierarchies) automatically handles the problem of what
sub-elements are connected with which others, preventing the need for
techniques like synchrony to handle this problem.

 

Poggio's group has achieved impressive results without the need for special
mechanisms to deal with binding in this type of visual recognition, as is
indicated by the two papers below by Serre (the later of which summarizes
much of what is in the first, which is an excellent, detailed PhD thesis.)  

 

The two  works by Geoffrey Hinton cited below are descriptions of Hinton's
hierarchical feed-forward neural net recognition system (which, when run
backwards, generates patterns similar to those it has been trained on).
These two works by Hinton show impressive results in handwritten digit
recognition without any explicit mechanism for binding.  In particular,
watch the portion of the Hinton YouTube video starting at 21:35 - 26:39
where Hinton shows his system alternating between recognizing a pattern and
then generating a similar pattern stochastically from the higher level
activations that have resulted from the previous recognition.  See how
amazingly well his system seems to capture the many varied forms in which
the various parts and sub-shapes of numerical handwritten digits are
related.

 

So my question is this: HOW BROADLY DOES THE IMPLICATION THAT THE BINDING
PROBLEM CAN BE AUTOMATICALLY HANDLED BY A GEN/COMP HIERARCHY OR A
HINTON-LIKE HIERARCHY APPLY TO THE MANY TYPES OF PROBLEMS A BRAIN LEVEL
ARTIFICIAL GENERAL INTELLIGENCE WOULD BE EXPECTED TO HANDLE?  In particular
HOW APPLICABLE IS IT TO SEMANTIC PATTERN RECOGNITION AND GENERATION --- WITH
ITS COMPLEX AND HIGHLY VARIED RELATIONS --- SUCH AS IS COMMONLY INVOLVED IN
HUMAN LEVEL NATURAL LANGUAGE UNDERSTANDING AND GENERATION?

 

The paper "Are Cortical Models Really Bound by the 'Binding Problem'?"
suggests in the first full paragraph on its second page that gen/comp
hierarchies avoid the "binding problem" by 

 

"coding an object through a set of intermediate features made up of local
arrangements of simpler features [that] sufficiently constrain the
representation to uniquely code complex objects without retaining global
positional information."

 

For example, in the context of speech recognition,

 

"...rather than using individual letters to code words, letter pairs or
higher-order combinations of letters can be used --- i.e., although the word
'tomaso' might be confused with the word 'somato' if both were coded by the
sets of letters they are made up of, this ambiguity is resolved if both are
represented through letter pairs."
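
The paper's example is easy to verify directly (a quick check written for
this post, not code from the paper):

def letters(word):
    return set(word)

def bigrams(word):
    return {word[i:i + 2] for i in range(len(word) - 1)}

print(letters("tomaso") == letters("somato"))  # True  -- letter sets collide
print(bigrams("tomaso") == bigrams("somato"))  # False -- pair sets differ
print(bigrams("tomaso") ^ bigrams("somato"))   # {'as', 'at'}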

 

The issue then becomes, WHAT SUB-SETS OF THE TYPES OF PROBLEMS THE HUMAN
BRAIN HAS TO PERFORM CAN BE PERFORMED IN A MANNER THAT AVOIDS THE BINDING
PROBLEM JUST BY USING A GEN/COMP HIERARCHY WITH SUCH A SET OF SIMPLER
FEATURES [THAT] SUFFICIENTLY CONSTRAIN THE REPRESENTATION TO UNIQUELY CODE
THE TYPE OF PATTERNS SUCH TASKS REQUIRE? 

 

There is substantial evidence that the brain does require synchrony for some
of its tasks --- as has been indicated by the work of people like Wolf
Singer --- suggesting that binding may well be a problem that cannot be
handled alone by the specificity of the brain's gen/comp hierarchies for all
mental tasks.

 

The table at the top of page 75 of Serre's impressive PhD thesis suggests
that his system --- which performs very quick feedforward object recognition
roughly as well as a human --- has an input of 160 x 160 pixels, and
requires 23 million pattern models.  Such a large number of patterns helps
provide the "simpler features [that] sufficiently constrain the
representation to uniquely code complex objects without retaining global
positional information."
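
The scale of that constraint supply is worth a back-of-envelope check
(figures taken from the paragraph above):

inputs = 160 * 160     # 25,600 input pixels
models = 23_000_000    # pattern models in the hierarchy
print(inputs, models // inputs)
# 25600 898 -- roughly 900 models per input pixel: a massively over-complete
# feature code, which is what supplies the binding-avoiding constraints.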

 

But, it should be noted --- as is recognized in Serre's paper --- that the
very rapid 150 msec feed forward recognition described in that paper is far
from all of human vision.  Such rapid recognition --- although surprisingly
accurate given how fast it is --- is normally supplemented by more top down
vision processes to confirm its best guesses.  For example, if a human is
shown a photograph of a face, his eyes will normally saccade over it, with
multiple fixation points, often on key features such as eyes, nose, corners
of mouth, points on the outline of the face, all indicating the recognition
of the face is normally much more than one rapid feed forward