Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Brad Paulsen
I've been following this thread pretty much since the beginning.  I hope I 
didn't miss anything subtle.  You'll let me know if I have, I'm sure. ;=)


It appears the need for temporal dependencies or different levels of reasoning 
has been conflated with the terms forward-chaining (FWC) and 
backward-chaining (BWC), which are typically used to describe different rule 
base evaluation algorithms used by expert systems.


The terms “forward-chaining” and “backward-chaining” when used to refer to 
reasoning strategies have absolutely nothing to do with temporal dependencies or 
levels of reasoning.  These two terms refer simply, and only, to the algorithms 
used to evaluate “if/then” rules in a rule base (RB).  In the FWC algorithm, the 
“if” part is evaluated and, if TRUE, the “then” part is added to the FWC 
engine's output.  In the BWC algorithm, the “then” part is evaluated and, if 
TRUE, the “if” part is added to the BWC engine's output.  It is rare, but some 
systems use both FWC and BWC.


That's it.  Period.  No other denotations or connotations apply.

To help remove any mystery that may still surround these concepts, here is an 
FWC algorithm in pseudo-code (WARNING: I'm glossing over quite a few details 
here – I'll be happy to answer questions on list or off):


   0. set loop index to 0
   1. got next rule?
        no: goto 5
   2. is rule FIRED?
        yes: goto 1
   3. is key equal to rule's antecedent?
        yes: add consequent to output, mark rule as FIRED,
             output is new key, goto 0
   4. goto 1
   5. more input data?
        yes: input data is new key, goto 0
   6. done.

To turn this into a BWC algorithm, we need only modify Step #3 to read as 
follows:

   3. is key equal to rule's consequent?
        yes: add antecedent to output, mark rule as FIRED,
             output is new key, goto 0
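
If it helps to see that as something executable, here is a rough Python 
rendering of the pseudo-code above (a sketch only, assuming rules are plain 
(antecedent, consequent) string pairs matched by exact string equality; it 
glosses over the same details as above):

    # Sketch of the FWC/BWC loop above.  Rules are (antecedent, consequent)
    # string pairs; "backward" flips which side is matched and which is emitted.
    def chain(rules, inputs, backward=False):
        output = []
        fired = [False] * len(rules)
        pending = list(inputs)
        key = pending.pop(0) if pending else None      # step 0: first key
        while key is not None:
            for i, (antecedent, consequent) in enumerate(rules):
                if fired[i]:                           # step 2: skip FIRED rules
                    continue
                # Step 3: FWC matches the key against the antecedent and emits
                # the consequent; BWC matches the consequent and emits the
                # antecedent.
                match, emit = ((consequent, antecedent) if backward
                               else (antecedent, consequent))
                if key == match:
                    output.append(emit)                # add to output
                    fired[i] = True                    # mark rule as FIRED
                    key = emit                         # output is new key, goto 0
                    break
            else:                                      # step 5: no rule matched
                key = pending.pop(0) if pending else None
        return output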

If you need to represent temporal dependencies in FWC/BWC systems, you have to 
express them using rules.  For example, if washer-x MUST be placed on bolt-y 
before nut-z can be screwed on, the rule base might look something like this:


   1. if installed(washer-x) then install(nut-z)
   2. if installed(bolt-y) then install(washer-x)
   3. if notInstalled(bolt-y) then install(bolt-y)

In this case, rule #1 won't get fired until rule #2 fires (nut-z can't get 
installed until washer-x has been installed).  Rule #2 won't get fired until 
rule #3 has fired (washer-x can't get installed until bolt-y has been 
installed). NUT-Z!  (Sorry, couldn't help it.)


To kick things off, we pass in “bolt-y” as the initial key.  This triggers rule 
#3, which will trigger rule #2, which will trigger rule #1. These temporal 
dependencies result in the following assembly sequence: install bolt-y, then 
install washer-x, and, finally, install nut-z.
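
Encoded for the toy matcher above (I've made each consequent literally equal 
the next rule's antecedent, since the sketch compares plain strings), the run 
reproduces that sequence:

    rules = [
        ("installed(washer-x)", "installed(nut-z)"),     # rule 1
        ("installed(bolt-y)",   "installed(washer-x)"),  # rule 2
        ("bolt-y",              "installed(bolt-y)"),    # rule 3
    ]
    print(chain(rules, ["bolt-y"]))
    # ['installed(bolt-y)', 'installed(washer-x)', 'installed(nut-z)']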


A similar thing can be done to implement rule hierarchies.

   1. if levelIs(0) and installed(washer-x) then install(nut-z)
   2. if levelIs(0) and installed(nut-z) then goLevel(1)
   3. if levelIs(1) and notInstalled(gadget-xx) then install(gadget-xx)
   4. if levelIs(0) and installed(bolt-y) then install(washer-x)
   5. if levelIs(0) and notInstalled(bolt-y) then install(bolt-y)

Here rule #2 won't fire until rule #1 has fired.  Rule #1 won't fire unless rule 
#4 has fired.  Rule #4 won't fire until rule #5 has fired.  And, finally, Rule 
#3 won't fire until Rule #2 has fired. So, level 0 could represent the reasoning 
required before level 1 rules (rule #3 here) will be of any use. (That's not the 
case here, of course, just stretching my humble example as far as I can.)


Note, again, that the temporal and level references in the rules are NOT used by 
the chaining engine itself.  They probably will be used by the part of the 
program that does something with the engine's output (the install(), goLevel(), 
etc. functions).  And, again, the results should be completely unaffected by the 
order in which the RB rules are evaluated or fired.


I hope this helps.

Cheers,

Brad

Richard Loosemore wrote:

Mike Tintner wrote:
A tangential comment here. Looking at this and other related threads I 
can't help thinking: jeez, here are you guys still endlessly arguing 
about the simplest of syllogisms, seemingly unable to progress beyond 
them. (Don't you ever have that feeling?) My impression is that the 
fault lies with logic itself - as soon as you start to apply logic to 
the real world, even only tangentially with talk of forward and 
backward or temporal considerations, you fall into a quagmire of 
ambiguity, and no one is really sure what they are talking about. Even 
the simplest if p then q logical proposition is actually infinitely 
ambiguous. No?  (Is there a Gödel's Theorem of logic?)


Well, now you have me in a cleft stick, methinks.

I *hate* logic as a way to understand cognition, because I think it is a 
derivative process within a high-functional AGI system, not a foundation 
process that sits underneath everything else.


But, on the other hand, I do understand how it [...]

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Lukasz Stafiniak
On Tue, Jul 15, 2008 at 8:01 AM, Brad Paulsen [EMAIL PROTECTED] wrote:

 The terms forward-chaining and backward-chaining when used to refer to
 reasoning strategies have absolutely nothing to do with temporal
 dependencies or levels of reasoning.  These two terms refer simply, and
 only, to the algorithms used to evaluate if/then rules in a rule base
 (RB).  In the FWC algorithm, the if part is evaluated and, if TRUE, the
 then part is added to the FWC engine's output.  In the BWC algorithm, the
 then part is evaluated and, if TRUE, the if part is added to the BWC
 engine's output.  It is rare, but some systems use both FWC and BWC.

 That's it.  Period.  No other denotations or connotations apply.

Curiously, the definition put forward by Abram Demski is the only one I had
been aware of until yesterday (I believe it's the one used among
theorem-proving people). Let's see what googling says about forward
chaining:

1. (Wikipedia)

2. http://www.amzi.com/ExpertSystemsInProlog/05forward.htm
A large number of expert systems require the use of forward chaining,
or data driven inference. [...]
Data driven expert systems are different from the goal driven, or
backward chaining systems seen in the previous chapters.
The goal driven approach is practical when there are a reasonable
number of possible final answers, as in the case of a diagnostic or
identification system. The system methodically tries to prove or
disprove each possible answer, gathering the needed information as it
goes.
The data driven approach is practical when combinatorial explosion
creates a seemingly infinite number of possible right answers, such as
possible configurations of a machine.

3. http://ai.eecs.umich.edu/cogarch0/common/prop/chain.html
Forward-chaining implies that upon assertion of new knowledge, all
relevant inductive and deductive rules are fired exhaustively,
effectively making all knowledge about the current state explicit
within the state. Forward chaining may be regarded as progress from a
known state (the original knowledge) towards a goal state(s).
Backward-chaining by an architecture means that no rules are fired
upon assertion of new knowledge. When an unknown predicate about a
known piece of knowledge is detected in an operator's condition list,
all rules relevant to the knowledge in question are fired until the
question is answered or until quiescence. Thus, backward chaining
systems normally work from a goal state back to the original state.

4. http://www.ontotext.com/inference/reasoning_strategies.html
* Forward-chaining: to start from the known facts and to perform
the inference in an inductive fashion. This kind of reasoning can have
diverse objectives, for instance: to compute the inferred closure; to
answer a particular query; to infer a particular sort of knowledge
(e.g. the class taxonomy); etc.
* Backward-chaining: to start from a particular fact or from a
query and by means of using deductive reasoning to try to verify that
fact or to obtain all possible results of the query. Typically, the
reasoner decomposes the fact into simpler facts that can be found in
the knowledge base or transforms it into alternative facts that can be
proven applying further recursive transformations. 




RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Jim Bromer
Ed Porter said:
You imply you have been able to accomplish a somewhat
similar implicit representation of bindings in a much higher
dimensional and presumably large semantic space.  Unfortunately I was
unable to understand from your description how you claimed to have
accomplished this. 
 
-
I never implied that I have been able to accomplish a somewhat similar implicit 
representation of bindings in a much higher dimension and presumably large 
semantic space.

I clearly stated:

I have often talked about the
use of multi-level complex methods and I see some similarity between the
ideas that they discussed and my own.
-and,
The complex groupings of
objects that I have in mind would have been derived using different
methods of analysis and combination, and when a group of them is called
from an input analysis, their use should tend to narrow the objects that
might be expected given the detection by the feature detectors.
Although I haven't expressed myself very clearly, this is very similar
to what Riesenhuber and Poggio were suggesting that their methods would
be capable of. So, yes, I think some similar methods can be used in NLP.

I clearly used the expression "in mind" just to avoid the kind of 
misunderstanding that you made.  I never made the exaggerated claim that I had 
accomplished it.


The difference between having an idea in mind and claiming to have accomplished 
a goal (one which the majority of participants in the group would acknowledge is 
elusive) should be obvious and easy to understand.


I am not claiming that I have a method that would work in all semantic spaces.  
I would be happy to claim that I do have a theory which I believe should show 
some limited extensibility in semantic space beyond other current theories.  
However, I will not know for sure until I test it, and right now that looks to 
be years off.


I would be happy to continue the dialog if it can be conducted in a less
confrontational and more genial manner than it has been during
the past week.

Jim Bromer




Jim,

In the Riesenhuber and Poggio paper the bindings that were handled
implicitly involved spatial relationships, such as an observed roughly
horizontal line substantially touching an observed roughly vertical
line at their respective ends, even though there might be other
horizontal and vertical lines not having this relationship in the input
pixel space.  It achieves such implicit bindings by having enough
separate models to be able to detect, by direct mapping, such a
touching relationship between a horizontal and a vertical line at each
of many different locations in the visual input space.

But the Poggio paper deals with a relatively small number of relationships
in a relatively small (160x160), low-dimensional (2D) space using 23
million models.  You imply you have been able to accomplish a somewhat
similar implicit representation of bindings in a much higher
dimensional and presumably large semantic space.  Unfortunately I was
unable to understand from your description how you claimed to have
accomplished this.

Could you please clarify your description with regard to this point?

Ed Porter
 
-Original Message-
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 14, 2008 1:38 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE 
BINDING PROBLEM?
 
I started reading a Riesenhuber and Poggio paper and there are some
similarities to ideas that I have considered, although my ideas were
explicitly developed about computer programs that would use symbolic
information and are not neural theories.  It is interesting that
Riesenhuber and Poggio argued that the binding problem seems to be a
problem for only some models of object recognition.  In other words,
it seems that they are claiming that the problem disappears with their
model of neural cognition! 

The study of feature detectors in cats' eyes is old news and I did
incorporate that information into the development of my own theories.

I have often talked about the use of multi-level complex methods and I
see some similarity between the ideas that they discussed and my own.
In my model an input would be scanned for different features using
different kinds of analysis on the input.  A configuration of simple
features would then be derived from the scan, and these could be
associated with a number of complex groups of objects that have been
previously associated with the features.  Because the complex groups of
objects are complexes (in the general sense), and would be learned by
previous experience, they are not insipidly modeled on one standard
model. These complex objects are complex in that they are not all cut
from one standard.  The older implementations that used operations
taken from set theory on groups were built on object models that were
very old-world and not derived from learning; for example, they were
non-experiential. (I [...]

Re: Location of goal/purpose was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-15 Thread William Pearson
2008/7/14 Terren Suydam [EMAIL PROTECTED]:

 Will,

 --- On Fri, 7/11/08, William Pearson [EMAIL PROTECTED] wrote:
 Purpose and goal are not intrinsic to systems.

 I agree this is true with designed systems.

And I would also say of evolved systems. My fingers' purpose could
equally well be said to be for picking ticks out of the hair of my kin
or for touch typing. E.g. why do I keep my fingernails short? So that
they do not impede my typing. The purpose of gut bacteria is to help
me digest my food. The purpose of part of my brain is to do
differentiation of functions, because I have [...]

 The designed system is ultimately an extension of the designer's mind, 
 wherein lies the purpose.

Oddly enough, that is what I want the system to be. Rather, an extension
of my brain.

Of course, as you note, the system in question can serve multiple purposes, 
each of which lies in the mind of some other observer. The same is true of 
your system, even though its behavior may evolve. Your button is what tethers 
its purpose to your mind.


 On the other hand, we can create simulations in which purpose is truly 
 emergent. To support emergence our design must support large-scale, (global) 
 interactions of locally specified entities. Conway's Game of Life is an 
 example of such a system - what is its purpose?

To provide an interesting system for researchers to research cellular
automata? ;) I think I can see your point. It has no practical purpose
as such, just a research purpose.

It certainly wasn't specified.

And neither am I specifying the purpose of mine! I'm quite happy to
hook up the button to something I press when I feel like it. I could
decide the purpose of the system was to learn and be good at
backgammon one day, in which case my presses would reflect that, or I
could decide the purpose of the system was to search the web.

If you want a good analogy for how emergent I want the system to be,
imagine someone came along to one of your life simulations and
interfered with the simulation to give some more food to some of the
entities that he liked the look of. This wouldn't be anything so crude
as specifying the fitness or artificial breeding, but it would tilt
the scales in favour of entities that he liked, all else being equal.
Would this invalidate the whole simulation because he interfered and
brought some of his purpose into it? If so, I don't see why.

 The simplest answer is probably that it has none. But what if our design of 
 the local level was a little more interesting, such that at the global level, 
 we would eventually see self-sustaining entities that reproduced, competed 
 for resources, evolved, etc, and became more complex over a large number of 
 iterations?

Then the system itself still wouldn't have a practical purpose. For a
system Y to have a purpose, you have to be able to say part X is like
it is for Y to perform its function. Internal state corresponding to
the entities might be said to have purpose, but not the system as a
whole.

 Whether that's possible is another matter, but assuming for the moment it 
 was, the purpose of that system could be defined in roughly the same way as 
 trying to define the purpose of life itself.

We have to be careful here.  What meaning of the word life are you using?

1) The biosphere + evolution
2) An individual's existence.

The first has no purpose. You can never look at the biosphere and
figure out what bits are for what in the grander scheme of things, or
ask yourself what mutations are likely to be thrown up to better
achieve its goal. That we have some self-regulation on the Gaian scale
is purely anthropic; biospheres without it would likely have driven
themselves to a state unable to support life. An individual entity
has a purpose, though. So to that extent the purposeless can create
the purposeful.

 So unless you believe that life was designed by God (in which case the 
 purpose of life would lie in the mind of God), the purpose of the system is 
 indeed intrinsic to the system itself.


I think I would still say it didn't have a purpose, if I get your meaning right.

   Will




Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Richard Loosemore

Brad Paulsen wrote:
I've been following this thread pretty much since the beginning.  I hope 
I didn't miss anything subtle.  You'll let me know if I have, I'm sure. ;=)


It appears the need for temporal dependencies or different levels of 
reasoning has been conflated with the terms forward-chaining (FWC) and 
backward-chaining (BWC), which are typically used to describe 
different rule base evaluation algorithms used by expert systems.


The terms “forward-chaining” and “backward-chaining” when used to refer 
to reasoning strategies have absolutely nothing to do with temporal 
dependencies or levels of reasoning.  These two terms refer simply, and 
only, to the algorithms used to evaluate “if/then” rules in a rule base 
(RB).  In the FWC algorithm, the “if” part is evaluated and, if TRUE, 
the “then” part is added to the FWC engine's output.  In the BWC 
algorithm, the “then” part is evaluated and, if TRUE, the “if” part is 
added to the BWC engine's output.  It is rare, but some systems use both 
FWC and BWC.


That's it.  Period.  No other denotations or connotations apply.


Whooaa there.  Something not right here.

Backward chaining is about starting with a goal statement that you would 
like to prove, but at the beginning it is just a hypothesis.  In BWC you 
go about proving the statement by trying to find facts that might 
support it.  You would not start from the statement and then add 
knowledge to your knowledgebase that is consistent with it.


So for example, if your goal is to prove that Socrates is mortal, then 
your above description of BWC would cause the following to occur:


1) Does any rule allow us to conclude that x is/is not mortal?

2) Answer: yes, the following rules allow us to do that:

If x is a plant, then x is mortal
If x is a rock, then x is not mortal
If x is a robot, then x is not mortal
If x lives in a post-singularity era, then x is not mortal
If x is a slug, then x is mortal
If x is a japanese beetle, then x is mortal
If x is a side of beef, then x is mortal
If x is a screwdriver, then x is not mortal
If x is a god, then x is not mortal
If x is a living creature, then x is mortal
If x is a goat, then x is mortal
If x is a parrot in a Dead Parrot Sketch, then x is mortal

3) Ask the knowledge base if Socrates is a plant, if Socrates is a rock, 
etc., working through the above list.


3) [According to your version of BWC, if I understand you aright] Okay, 
if we cannot find any facts in the KB that say that Socrates is known to 
be one of these things, then add the first of these to the KB:


Socrates is a plant

[This is the bit that I question:  we don't do the opposite of forward 
chaining at this step].


4) Now repeat to find all rules that allow us to conclude that x is a 
plant.  For this set of "... then x is a plant" rules, go back and 
repeat the loop from step 2 onwards.  Then if this does not work, ...



Well, you can imagine the rest of the story: keep iterating until you 
can prove or disprove that Socrates is mortal.


I cannot seem to reconcile this with your statement above that backward 
chaining simply involves the opposite of forward chaining, namely adding 
antecedents to the KB and working backwards.
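
In code, the goal-driven reading looks more like the following minimal sketch 
(rule and fact names are mine, purely illustrative): the chainer tries to 
*prove* a goal against the KB, recursing on antecedents, and never asserts 
anything new.

    # Goal-driven backward chaining: prove `goal` from `facts` using `rules`
    # as (antecedent, consequent) pairs; nothing is ever added to the KB.
    def backward_chain(goal, rules, facts, depth=0, max_depth=10):
        if goal in facts:
            return True
        if depth >= max_depth:                  # guard against rule cycles
            return False
        # Find rules whose consequent matches the goal and try to prove
        # their antecedents as subgoals.
        for antecedent, consequent in rules:
            if consequent == goal and backward_chain(antecedent, rules,
                                                     facts, depth + 1):
                return True
        return False

    rules = [("is_a_plant(socrates)", "is_mortal(socrates)"),
             ("is_a_living_creature(socrates)", "is_mortal(socrates)"),
             ("is_a_man(socrates)", "is_a_living_creature(socrates)")]
    facts = {"is_a_man(socrates)"}
    print(backward_chain("is_mortal(socrates)", rules, facts))   # True

The failed subgoal "Socrates is a plant" is simply abandoned along the way; it 
never gets asserted into the KB.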







[...]

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Ed Porter
Lukasz,

 

Your post below was great.

 

Your clippings from Google confirm much of the understanding that Abram
Demski was helping me reach yesterday.

 

In one of his posts Abram was discussing my prior statement that top-down
activation could be either forward or backward chaining.  He said: "If the
network is passing down an expectation based on other data, informing the
lower network of what to expect, then this is forward chaining. But if the
signal is not an expectation, but more like a query 'pay attention to data
that might conform/contradict this hypothesis, and notify me ASAP', then it
is backwards chaining. And it seems realistic that it can be both of these."

 

I am interpreting this quoted statement as implying that the purpose of backward
chaining is to search for forward chaining paths that either confirm or
contradict a pattern of interest, or that provide a path or plan to a desired
goal.  In this view the backward part of backward chaining provides no
changes in probability, only changes in attention; it is only the
forward chaining found by such backward chaining that changes
probabilities.

 

Am I correct in this interpretation of what Abram said, and is that
interpretation included in what your Google clippings indicate is the
generally understood meaning of the term "backward chaining"?

 

Ed Porter

 

P.S. I would appreciate answers from Abram or anyone else on this list who
understands the question and has some knowledge of the subject.

 

-Original Message-
From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 15, 2008 3:05 AM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

 

[...]

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Ed Porter
Jim, Sorry.  Obviously I did not understand you. Ed Porter

 

-Original Message-
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 15, 2008 9:33 AM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

 

[...]

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Abram Demski
Am I correct in this interpretation of what Abram said, and is that
interpretation included in what your Google clippings indicate is the
generally understood meaning of the term "backward chaining"?
Ed Porter

It sounds to me like you are interpreting me correctly.

One important note. Lukasz quoted one source that claimed that forward
chaining can help to cut down the combinatorial explosion arising from
the huge search space in backwards-chaining. This is true in some
situations, but the opposite can also be the case; backwards-chaining
can help to focus inferences when it would be impossible to deduce
every fact that would follow by forward-chaining. It depends on the
forward and backwards branching factors. If every fact fires an
average of five rules forwards, but three backwards, then
backwards-chaining will be less expensive; 5^n vs 3^n, where n is the
length of the actual deductive chain being searched for. Simultaneous
backwards/forwards chaining that meets in the middle can be even less
expensive; with a branching factor of 2 in both directions, the search
time goes down from 2^n for forward or backward chaining to 2^(n/2 +
1).
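
For concreteness, here is the arithmetic with those branching factors and a
chain of length n = 10 (the numbers are hypothetical, just to show the growth
rates):

    n = 10
    print(5 ** n)             # 9765625: forward at branching factor 5
    print(3 ** n)             # 59049: backward at branching factor 3 is cheaper
    print(2 ** n)             # 1024: one direction at branching factor 2
    print(2 * 2 ** (n // 2))  # 64: meet in the middle, 2 * 2^(n/2) = 2^(n/2 + 1)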

On the other hand, what we want the system to do makes a big
difference. If we really do have a single goal-sentence we want to
prove or disprove, the above arguments hold. But if we want to deduce
all consequences of our current knowledge, we should use forward
chaining regardless of branching factors and so on.
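
Deducing all consequences is just forward chaining run to a fixpoint; a
minimal sketch, again assuming rules as (antecedent, consequent) pairs rather
than any particular engine's API:

    # Forward chaining to closure: keep applying rules until nothing new can
    # be added (the "inferred closure" mentioned in one of the clippings).
    def closure(rules, facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedent, consequent in rules:
                if antecedent in facts and consequent not in facts:
                    facts.add(consequent)
                    changed = True
        return facts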

Most of this stuff should be in any intro AI textbook.

--Abram

On Tue, Jul 15, 2008 at 11:08 AM, Ed Porter [EMAIL PROTECTED] wrote:
[...]

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Ed Porter
Abram, 

Thanks for the info.  The concept that the only purpose of backward
chaining is to find appropriate forward chaining paths is an important
clarification of my understanding.

Ed Porter

-Original Message-
From: Abram Demski [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 15, 2008 11:38 AM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?

[...]

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Mike Archbold

 4. http://www.ontotext.com/inference/reasoning_strategies.html
 * Forward-chaining: to start from the known facts and to perform
 the inference in an inductive fashion. This kind of reasoning can have
 diverse objectives, for instance: to compute the inferred closure; to
 answer a particular query; to infer a particular sort of knowledge
 (e.g. the class taxonomy); etc.
 * Backward-chaining: to start from a particular fact or from a
 query and by means of using deductive reasoning to try to verify that
 fact or to obtain all possible results of the query. Typically, the
 reasoner decomposes the fact into simpler facts that can be found in
 the knowledge base or transforms it into alternative facts that can be
 proven applying further recursive transformations. 



A system like CLIPS is forward chaining, but there is no induction going
on.  Whether forward- or backward-chaining, it is deduction as far as I've
ever heard.  With induction we are implying repeated observations that lead
to some new knowledge (i.e., some new rule in this case).  That was my
understanding anyway, and I'm no PhD scientist.
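
To make the distinction concrete, here is a toy contrast in Python (names are
purely illustrative):

    # Deduction (what a forward chainer like CLIPS does): apply an existing
    # rule to known facts to derive new facts.
    rule = ("man", "mortal")                   # if x is a man then x is mortal
    facts = {("socrates", "man")}
    derived = {(x, rule[1]) for (x, kind) in facts if kind == rule[0]}
    print(derived)                             # {('socrates', 'mortal')}

    # Induction: generalize a NEW rule from repeated observations.
    observations = [("socrates", "man", "mortal"),
                    ("plato", "man", "mortal"),
                    ("aristotle", "man", "mortal")]
    if all(kind == "man" and fate == "mortal" for _, kind, fate in observations):
        learned_rule = ("man", "mortal")       # new knowledge: a rule, not a fact
        print(learned_rule)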
Mike Archbold







