Ed Porter wrote:
###### RICHARD LOOSEMORE LAST EMAIL #############>>
My preliminary response to your suggestion that other Shastri papers do describe ways to make binding happen correctly is as follows: anyone can suggest ways that *might* cause correct binding to occur - anyone can wave their hands, write a program, and then say "backward chaining" - but there is a world of difference between suggesting mechanisms that *might* do it, and showing that those mechanisms actually do cause correct bindings to be established in practice.

What happens in practice is that the proposed mechanisms work for (a)
toy cases for which they were specifically designed to work, and/or (b) a limited number of the more difficult cases, and that what we also find is that they (c) tend to screw up in all kinds of interesting ways when the going gets tough. At the end of the day, these proposals don't solve the binding problem, they just work some of the time, with no clear reason given why they should work all of the time. They are, in a word, hacks.

Understanding that they only have the status of hacks is a very
important sign of maturity as an AI researcher. There is a very deep truth buried in that fact.

#####ED PORTERS CURRENT RESPONSE ########>
Forward and backward chaining are not hacks. They have been two of the most common and often successful techniques in AI search for at least 30 years. They are not some sort of wave of the hand. They are much more concretely grounded in successful AI experience than many of your much more ethereal, and very arguably hand-waving, statements that many of the difficult problems in AI are to be cured by some as-yet unclearly defined emergence from complexity.
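
To pin the terms down, here is a minimal, purely illustrative sketch of forward versus backward chaining over if-then rules. The rules, facts, and function names below are invented for this sketch; they are not taken from Shruti or any other system discussed in this thread.

    # Minimal illustration of forward vs. backward chaining over if-then rules.
    # The rules and facts are toy examples invented for this sketch.
    RULES = [
        (("rain",), "wet_ground"),         # if rain then wet_ground
        (("wet_ground",), "slippery"),     # if wet_ground then slippery
        (("slippery", "running"), "fall"), # if slippery and running then fall
    ]

    def forward_chain(facts):
        """Data-driven: repeatedly fire any rule whose conditions are all satisfied."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, consequence in RULES:
                if consequence not in facts and all(c in facts for c in conditions):
                    facts.add(consequence)
                    changed = True
        return facts

    def backward_chain(goal, facts, depth=10):
        """Goal-driven: work back from a goal to the facts that would establish it."""
        if goal in facts:
            return True
        if depth == 0:
            return False
        return any(consequence == goal and
                   all(backward_chain(c, facts, depth - 1) for c in conditions)
                   for conditions, consequence in RULES)

    print(forward_chain({"rain", "running"}))           # adds wet_ground, slippery, fall
    print(backward_chain("fall", {"rain", "running"}))  # True

Forward chaining runs from the data toward whatever follows; backward chaining runs from a query back toward the data that could support it.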

###### RICHARD LOOSEMORE LAST EMAIL #############>>
Oh dear:  yet again I have to turn a blind eye to the ad hominem insults.

#####ED PORTERS CURRENT RESPONSE ########>
Why is it that your criticism of my ideas as being "dead ends", "hand waving", and "hacks" is not an ad hominem insult, but my use of similar or even less critical language against you is?

Because:

(a) These are not your ideas I am criticising, they are the proposals of many other people.

(b) Because even if they were your ideas, I would be criticising them by giving exact and detailed reasons for the criticisms.

(c) Because your reference to my other proposals has no relevance to the topic under discussion: you suddenly, out of the blue, tried to defend your argument by heaping abuse on some OTHER proposals of mine, which had nothing to do with the argument itself.


When someone tries to win an argument about a topic by attacking the credibility of the person with whom they are arguing, that is standardly taken as an ad hominem.







###### RICHARD LOOSEMORE LAST EMAIL #############>>
Binding is finding out if two referents are the same thing, or if they are closely linked by some intermediary. Here is an example sentence that needs to have its referents untangled:

"At one time, one of them might have said to one of his own that one and one and one is one, but one does not know whether, on the one hand, this is one of those platitudes that those folks are so fond of, or if, on the other, the One is literally meant to be more than one."

...backward chaining is not being used to proceed to the solution of the binding problem in this case, it is being used as a mechanism that might, if you are lucky, work.

A mechanism that might, if you are lucky, work is a hack ....
#####ED PORTERS CURRENT RESPONSE ########>
This NL understanding problem is much more complex than the type of predicate logic that Shruti dealt with.
Your original claim was that Shruti had no way to learn what should talk to what for purposes of determining binding in the class of problems it dealt with.

That is completely incorrect.  I made no such claim.

I have been arguing that it is not good enough to solve the binding problem unless you can "solve" it in the general case. If all you can do is show that some mechanism works for a restricted ("toy") set of cases, you have not solved the problem, but just hacked something to work in one set of circumstances.

I have further argued that it is not good enough to say that all you need to solve the general problem is "more" of whatever you used to solve the easier case. In general, this is not true, it is simply speculation.

I also pointed out that people claim to "solve" the binding problem when in fact they solve something different - a lesser problem of (so to speak) making the actual phone connection after you have discovered who you need to call.


I think anyone who read the Shastri paper I gave a link to, who read my prior discussion in this thread, and who can think in AI terms could, contrary to your implication, figure out how to make a Shruti-like system determine which things should talk to which for purposes of binding in the class of problems it deals with.  The only problem is that in really large knowledge bases, additional inference control mechanisms would be required to prune the spreading activation to keep it within a realistic budget.  But your original objection was not about how well Shruti would scale, but about how it could learn to work at all, and that objection I think I have shown to be inaccurate.

I am sorry, but you have not shown that. Perhaps you could give me a hint of where that happened?

Solving the above problem is not about just doing the mechanics of the inference process, it is about finding a way to model the situation in order to map out the kinds of logical statements that might be relevant.

For example, should the system begin by loading all of the statements in its knowledge base that use the word [or numeral signified by] "one"? How is it going to restrict the set of statements to look at?

The fact is that these choices about how to control the inference process ARE the things which determine how well the inference process works. Are you familiar with this idea?


So you are doing something you have done many times before in discussions with me: when I show that a statement you made is ungrounded, you pretend your original statement was other than what it was.

Nevertheless, your example of the sentence with 12 "one"s is interesting.

It should be noted that Shruti uses a mix of forward chaining and backward chaining, with an architecture for controlling when and how each is used.  It is more subtle than you seem willing to acknowledge.

You are kidding, of course. I am quite well aware of the mixtures of forward and backward chaining that occur in practice (I had to write systems like that as student exercises 20 years ago, so the practice is a familiar one).

That makes no difference to the argument I set out.



Richard Loosemore


My understanding is that forward reasoning is reasoning from conditions to consequences, and backward reasoning is the opposite.  But I think what is a condition and what is a consequence is not always clear, since one can use if-A-then-B rules in situations where A occurs before B, B occurs before A, or A and B occur at the same time.  Thus I think the notion of what counts as forward and backward chaining is somewhat arbitrary, and could be better clarified if it were based on temporal relationships.  I see no reason why Shruti's "?" activation should not be spread across all those temporal relationships, and be distinguished from Shruti's "+" and "-" probabilistic activation by having no probability, just a temporary attentional characteristic. Additional inference control mechanisms could then be added to control which direction in time to reason in under different circumstances, if activation pruning were necessary.

Furthermore, Shruti does not use multi-level compositional hierarchies for many of its patterns, and it only uses generalizational hierarchies for slot fillers, not for patterns.  Thus it lacks many of the general reasoning capabilities that are necessary for NL understanding, and could not be expected to deal with your sentence with 12 "one"s.  Much of the spreading activation in a more general-purpose AGI would run up and down compositional and generalizational hierarchies, which is not necessarily forward or backward chaining, but which is important in NL understanding.  So I agree that simple forward and backward chaining are not enough to solve general inference problems of any considerable complexity.
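
As a rough illustrative sketch of what I mean by activation spreading up and down such hierarchies (the nodes, links, and weights below are invented; this is not Shruti's or Novamente's actual representation):

    # Toy spreading activation over part-of and is-a links.
    # Nodes, links, and weights are invented for illustration.
    from collections import defaultdict

    LINKS = {  # node -> [(neighbour, weight), ...]
        "dog":    [("animal", 0.8), ("tail", 0.6), ("leg", 0.6)],
        "cat":    [("animal", 0.8), ("tail", 0.6)],
        "animal": [("living_thing", 0.8)],
    }

    # Build an undirected view so activation can flow both up and down the hierarchy.
    NEIGHBOURS = defaultdict(list)
    for src, outs in LINKS.items():
        for dst, w in outs:
            NEIGHBOURS[src].append((dst, w))
            NEIGHBOURS[dst].append((src, w))

    def spread(sources, steps=2, threshold=0.1):
        """Propagate activation outward from the source nodes for a fixed number of steps."""
        act = {s: 1.0 for s in sources}
        for _ in range(steps):
            new = defaultdict(float, act)
            for node, a in act.items():
                for nb, w in NEIGHBOURS[node]:
                    new[nb] = max(new[nb], a * w)   # simple max-combination rule
            act = {n: a for n, a in new.items() if a >= threshold}
        return dict(sorted(act.items(), key=lambda kv: -kv[1]))

    # Activating "dog" reaches animal, tail, leg, living_thing, and (weakly) cat.
    print(spread({"dog"}))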

Ed Porter

-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]]
Sent: Thursday, July 10, 2008 8:13 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

Ed Porter wrote:
###### RICHARD LOOSEMORE LAST EMAIL #############>>
My preliminary response to your suggestion that other Shastri papers do describe ways to make binding happen correctly is as follows: anyone can suggest ways that *might* cause correct binding to occur - anyone can wave their hands, write a program, and then say "backward chaining" - but there is a world of difference between suggesting mechanisms that *might* do it, and showing that those mechanisms actually do cause correct bindings to be established in practice.

What happens in practice is that the proposed mechanisms work for (a) toy cases for which they were specifically designed to work, and/or (b) a limited number of the more difficult cases, and that what we also find is that they (c) tend to screw up in all kinds of interesting ways when the going gets tough. At the end of the day, these proposals don't solve the binding problem, they just work some of the time, with no clear reason given why they should work all of the time. They are, in a word, hacks.

Understanding that they only have the status of hacks is a very important sign of maturity as an AI researcher. There is a very deep truth buried in that fact.

#####ED PORTERS CURRENT RESPONSE ########>
Forward and backward chaining are not hacks. They have been two of the most common and often successful techniques in AI search for at least 30 years. They are not some sort of wave of the hand. They are much more concretely grounded in successful AI experience than many of your much more ethereal, and very arguably hand-waving, statements that many of the difficult problems in AI are to be cured by some as-yet unclearly defined emergence from complexity.

Oh dear:  yet again I have to turn a blind eye to the ad hominem insults.

You are using terms like "forward and backward chaining" without understanding exactly what they mean, and what role they play in their context, and what their limitations are.

These mechanisms work great, but only in carefully circumscribed settings! It is precisely my point that when it comes to establishing a correct binding between two referents, it means absolutely nothing to say that you are going to invoke mechanisms such as backward chaining.

Let me illustrate with an example.

Binding is finding out if two referents are the same thing, or if they are closely linked by some intermediary. Here is an example sentence that needs to have its referents untangled:

"At one time, one of them might have said to one of his own that one and one and one is one, but one does not know whether, on the one hand, this is one of those platitudes that those folks are so fond of, or if, on the other, the One is literally meant to be more than one."

A backward chaining search to resolve the referents of "one" in this sentence would very likely not return a result before it would have to be cut off due to time constraints .... but if that happens, then the result of using backward chaining is that the sentence cannot be understood by the system. Which means that backward chaining is not being used to proceed to the solution of the binding problem in this case, it is being used as a mechanism that might, if you are lucky, work.

A mechanism that might, if you are lucky, work is a hack .... a heuristic if you want to use polite language. It is handwaving to say that backward chaining can resolve the binding problem, because when faced with difficult cases like this one, the mechanism goes belly up. Some thing that works some of the time, but gives garbage in other cases, and with no clear distinction between the cases, is a hack.
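
To make the cutoff point concrete: a budget-limited backward chainer cannot tell the difference between "false" and "I ran out of time", which is exactly the failure mode I am describing. The rules and budget below are invented for illustration.

    # Hypothetical budget-limited backward chainer: when the budget runs out,
    # the answer is None ("unknown"), not False.  Toy rules, invented here.
    def backward_chain(goal, facts, rules, budget):
        """Return (answer, budget_left); answer is True, False, or None if cut off."""
        if budget <= 0:
            return None, 0
        budget -= 1
        if goal in facts:
            return True, budget
        for conditions, consequence in rules:
            if consequence != goal:
                continue
            satisfied = True
            for c in conditions:
                answer, budget = backward_chain(c, facts, rules, budget)
                if answer is None:
                    return None, budget        # cut off mid-proof
                if not answer:
                    satisfied = False
                    break
            if satisfied:
                return True, budget
        return False, budget

    rules = [(("b",), "a"), (("c",), "b"), (("d",), "c"), ((), "d")]
    print(backward_chain("a", set(), rules, budget=100))  # (True, 96): proof found
    print(backward_chain("a", set(), rules, budget=3))    # (None, 0): cut off, so "not understood"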



Learning patterns of temporal activation and using them to guide forward and backward chaining are not hacks either.  They have been used for years also.

The operation of binding in the type of Shruti-like system I have described has been demonstrated in systems with over a hundred thousand nodes, and worked well.
Expanding a Shruti-like or Novamente-like system to a much larger size would call for more sophisticated inference control.  This is a challenge that has not been properly experimented with, but for which there are many promising paths.

Nevertheless, the type of inference systems I have suggested are much more concrete and easy to understand, and have more support from success in similar systems that have been built, than your unexplained notion that binding --- even when it cannot be handled implicitly by the use of multiple models, as described in the Poggio paper about which I started this thread --- can be handled by some unexplained constraint and emergence from complexity.

Sigh.



Richard Loosemore




###### RICHARD LOOSEMORE EMAIL BEFORE LAST #############>>
Or rather, it is not clear how you can *guarantee* the finding of a
solution.
#### ED PORTERS LAST EMAIL ############>>
[In such massive searching, humans often miss the most appropriate implication, particularly if it involves a multi-stage inference such as that shown in the "John fell in the Hallway" example I previously sent you.]
###### RICHARD LOOSEMORE EMAIL BEFORE LAST #############>>
Basically, I think the best you can do is to use various heuristics to shorten the computational problem of proving that the two books can relate. For example, the system can learn the general rule "If you receive a gift of X, then you subsequently own X", and then it can work backwards from all facts that allow you to conclude ownership, to see if one fulfills the requirement.
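
As a toy rendering of that heuristic (the fact base and predicate names below are invented for illustration, nothing more):

    # One backward step: to answer "does Mary own a book?", look for any stored
    # fact that is sufficient to conclude ownership.  Facts/predicates invented.
    FACTS = {("received_gift", "Mary", "book"),
             ("borrowed", "John", "pen")}

    SUFFICIENT_FOR_OWNS = ["received_gift", "bought", "made"]

    def prove_owns(person, thing):
        """Return the supporting fact that grounds ownership, or None."""
        for predicate in SUFFICIENT_FOR_OWNS:
            if (predicate, person, thing) in FACTS:
                return (predicate, person, thing)
        return None

    print(prove_owns("Mary", "book"))   # ('received_gift', 'Mary', 'book')
    print(prove_owns("John", "book"))   # None: no fact supports it
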
#### ED PORTERS LAST EMAIL ############>>
[I think human-level thinking will require tens to thousands of billions of activations a second, so that in a Shruti-like scheme each of the roughly 32 gamma-wave frequency phases would be allowed 100 million to 100 billion activations, allowing a large amount of searching.  But this searching is not totally blind.  Instead it would use something like the parallel terraced scan discussed in Hofstadter's description of his Copycat system, which would be guided by the search-guiding factors I have listed above.]

###### RICHARD LOOSEMORE LAST EMAIL #############>>
We are converging on the same conclusion here. But I do not think that the Shruti system is necessarily the best way to go. I think, in fact, that if it involves billions of activations, it is the wrong approach.

#####ED PORTERS CURRENT RESPONSE ########>
Good, I am pleased we share some agreement. I have agreed that the Shruti system has certain severe limitations.
I think that to do deep implication, very large numbers of activations would be required in any system, because of the very large search space that is involved.  It is clear that constraints could narrow that space, but communicating constraints itself takes massive activation, so I disagree with the notion that it is wrong to think brain-like systems will involve billions of activations a second.  Neurological evidence indicates the brain sends trillions of messages a second --- perhaps hundreds of trillions of activations a second --- so I don't think it is a dead end to think in terms of AGIs that send billions of messages a second.

###### RICHARD LOOSEMORE EMAIL BEFORE LAST #############>>
This is not so very different from finding a good heuristic, but the philosophy is very very different. If the system is continually building models of the world, using constraints among as many aspects of those models as possible, and applying as much pragmatic, real-world general knowledge as it can, then I believe that such a system would quickly home in on a model in which the question "Does Mary own a book?" was sitting alongside a model describing a recent fact such as "Mary got a book as a gift", and the two would gel.
#### ED PORTERS LAST EMAIL ############>>
[Yes, constraints help.  As I have said for decades, "More is more in AI", because the more relevant experiential representation you have about a given problem or ones similar to it, the more constraints you have to help guide you on the more appropriate search and inference paths.  If this is what you mean by emergent, yes, I agree emergence is helpful.]
###### RICHARD LOOSEMORE LAST EMAIL #############>>
No! You don't add constraints to a system that is basically driven by search and inference paths, because those latter mechanisms are a sorry dead end that will prevent your system from being able to exploit the fullest forms of the constraint-relaxation idea. The idea is to allow constraints to be the dominant force in the system, and to stop kidding ourselves that the "search and inference" mechanisms are where the action happens. This switch in perspective - this rearrangement of the horse and the cart - is at the core of what is meant by calling the system "complex".

#####ED PORTERS CURRENT RESPONSE ########>
I don't understand what you are saying.  Having the inhibitions of constraints spread through a system is very similar to having activation spread through a system.  They are similar forms of activation, only one is positive and one is negative.
Of course, constraint and activation can be much more complex than just a feed-forward process.  It can be a top-down/lateral/bottom-up process.  It can involve finding not only patterns that best match a given input, but also ones whose implications match the current context.  It can involve projecting instantiations of patterns that match bottom-up and contextual features.  There also can be market-based competition for computational resources, such as spreading activation.
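
A toy constraint-relaxation network (units and weights invented here) shows what I mean: positive and negative constraints are just two signs of the same spreading influence, and the network settles toward a consistent interpretation.

    # Competing interpretations linked by excitatory (+) and inhibitory (-)
    # constraints settle toward a consistent state.  Units/weights invented.
    UNITS = ["bank_river", "bank_money", "fish", "loan"]
    W = {  # symmetric constraint weights
        ("bank_river", "fish"):       +0.5,
        ("bank_money", "loan"):       +0.5,
        ("bank_river", "bank_money"): -0.8,   # the two senses exclude each other
    }

    def weight(a, b):
        return W.get((a, b)) or W.get((b, a)) or 0.0

    def relax(evidence, steps=30, rate=0.2):
        """Repeatedly nudge each unit toward the net constraint plus external evidence."""
        act = {u: evidence.get(u, 0.0) for u in UNITS}
        for _ in range(steps):
            for u in UNITS:
                net = sum(weight(u, v) * act[v] for v in UNITS if v != u)
                act[u] = min(1.0, max(-1.0, act[u] + rate * (net + evidence.get(u, 0.0))))
        return act

    # Context mentions "fish": the river sense of "bank" wins, the money sense is suppressed.
    print(relax({"fish": 1.0}))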

I have no reason to believe the methods I have suggested are a dead end, and I have never seen you give a convincing reason why they are.
Ben has said that when he gets back from Alaska he will try to refute many of your claims that RL complexity prevents the engineering of AGI systems because of the local-global disconnect.  I look forward to reading it.

#### ED PORTERS LAST EMAIL ############>>
But this is not necessarily the mysterious and ill defined RL complexity,
###### RICHARD LOOSEMORE LAST EMAIL #############>>
Ahhh.... please don't take gratuitous pot shots: you do not understand what I mean by "complexity".
..

And this is where you demonstrate that you do not. You are using the word "complexity" as if it just meant "complicated", and this is a misunderstanding.

#####ED PORTERS CURRENT RESPONSE ########>
Correct, I do not understand what you mean by RL complexity, even though I have read multiple articles by you on the subject.
One thing you discuss that I do understand is the concept of the local-global disconnect, although I tend to think of it as a matter of degree rather than a sharp dichotomy between regimes where you have it and those where you don't.
Certainly one can build systems where the relationship between local behavior and higher-level behavior is so convoluted that humans cannot design higher-level behaviors from lower-level ones, but it is not at all clear that this applies to all extremely complicated systems, nor that it will apply to all human-level AGIs.
I do not think the local-global disconnect is going to be that big a problem for many useful and powerful AGIs that can be built.  Such systems will be complex, and the models for thinking about them may need to change as you deal with higher levels of organization in such systems, but I think one level can be reasonably engineered from the one below.

But I could be wrong; perhaps the local-global dichotomy is much more important in AI than I think.  Time will tell.

###### RICHARD LOOSEMORE LAST EMAIL #############>>
Binding happens because of a complex emergent consequence of the mechanisms that you are calling inference control (but which are more general and powerful than that .... see previous comment). Anyone who thinks that there is a binding problem that can be solved in some other (more deterministic way) is putting the cart before the horse.

#####ED PORTERS CURRENT RESPONSE ########>
Talk about hand waving: "Binding happens because of a complex emergent consequence of ..."

Binding IS part of inference control, but it is a special aspect of it, because it requires that decisions made at a higher level of composition or generalization take into account, either implicitly or explicitly, that certain relationships exist between elements matched at lower levels.  Where this binding information can be handled implicitly, it can be handled by normal inferencing mechanisms, but it often requires a much larger number of models to ensure that the proper binding required for the matching of a higher-level pattern has occurred.

In a prior response in this thread, you yourself said that in complex spaces, such as those used in certain semantic reasoning, the number of models required to do the necessary binding for some types of reasoning would be too large to be practical.  But I have not received any description from you as to how this would be performed without explicit binding encoding, such as by synchrony, even though you have implied that by constraint and emergence it can be handled without such explicit mechanisms.

You have provided no sound reason for believing your method of dealing with the binding problem is any more promising than the combination of the type of implicit binding Poggio's paper has shown can be provided by using many models to encode bindings implicitly, and --- where such implicit binding is not practical --- the use of explicit representations of binding, such as Shruti-like synchrony or a numerical representation of binding information in spreading activation.
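
For concreteness, here is a toy sketch of what I mean by an explicit, synchrony-style encoding of binding. The phase-slot representation below is invented for illustration and is only loosely in the spirit of Shruti; it is not Shastri's actual encoding.

    # Binding by temporal synchrony, toy version: each entity fires in its own
    # phase slot, and a role node carries the phase of the entity bound to it,
    # so bindings can be read off by matching phases.  Representation invented.
    ENTITY_PHASES = {"John": 0, "Mary": 1, "book": 2}   # entity -> phase slot

    # Role nodes for the assertion give(John, Mary, book):
    ROLE_PHASES = {"give.giver": 0, "give.recipient": 1, "give.object": 2}

    def read_bindings(role_phases, entity_phases):
        """Recover role -> entity bindings by grouping nodes firing in the same phase."""
        entity_by_phase = {phase: name for name, phase in entity_phases.items()}
        return {role: entity_by_phase[phase] for role, phase in role_phases.items()}

    print(read_bindings(ROLE_PHASES, ENTITY_PHASES))
    # {'give.giver': 'John', 'give.recipient': 'Mary', 'give.object': 'book'}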


-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]]
Sent: Thursday, July 10, 2008 3:00 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

Ed Porter wrote:
###### RICHARD LOOSEMORE WROTE #############>>
Now I must repeat what I said before about some (perhaps many?) claimed solutions to the binding problem: these claimed solutions often establish the *mechanism* by which a connection could be established IF THE TWO ITEMS WANT TO TALK TO EACH OTHER. In other words, what these people (e.g. Shastri and Ajjannagadde) do is propose a two step solution: (1) the two instances magically decide that they need to get hooked up, and (2) then, some mechanism must allow these two to make contact and set up a line to one another. Think of it this way: (1) You decide that at this moment that you need to call Britney Spears, and (2) You need some mechanism whereby you can actually establish a phone connection that goes from your place to Britney's place.

The crazy part of this "solution" to the binding problem is that people often make the quiet and invisible assumption that (1) is dealt with (the two items KNOW that they need to talk), and then they go on to work out a fabulously powerful way (e.g. using neural synchronisation) to get part (2) to happen. The reason this is crazy is that the first part IS the binding problem, not the second part! The second phase (the practical aspects of making the phone call get through) is just boring machinery. By the time the two parties have decided that they need to hook up, the show is already over... the binding problem has been solved. But if you look at papers describing these so-called solutions to the binding problem you will find that the first part is never talked about.

At least, that was true of the S & A paper, and at least some of the papers that followed it, so I gave up following that thread in utter disgust.
#### MY REPLY ############>>
[Your description of Shastri's work is inaccurate --- at least from his papers I have read, which include, among others, "Advances in Shruti -- A neurally motivated model of relational knowledge representation and rapid inference using temporal synchrony", Applied Intelligence, 11: 79-108, 1999 ( http://www.icsi.berkeley.edu/~shastri/psfiles/shruti_adv_98.ps ); and "Massively parallel knowledge representation and reasoning: Taking a cue from the Brain", by Shastri and Mani.

It is obvious from reading Shastri that his notion of what should talk to what (i.e., what should be searched by spreading activation) is determined by a form of forward and/or backward chaining, which can automatically be learned from temporal associations between pattern activations, and the bindings involved can be learned from the occurrences of the same one or more pattern element instances as a part or as an attribute in one or more of those temporally related patterns.

Shruti's representational scheme has limitations that make it ill-suited for use as the general representation scheme in an AGI (problems which I think can be fixed with a more generalized architecture), but the particular problem you are accusing his system of here --- i.e., that it provides no guidance as to what should be searched, and when, to answer a given query --- is not in fact a problem (other than the issue of possible exponential explosion of the search tree, which is discussed in my answers below).]
###### RICHARD LOOSEMORE WROTE #############>>
It is very important to break through this confusion and find out exactly why the two relevant entities would decide to talk to each other. Solving any other aspect of the problem is not of any value.

Now, going back to your question about how it would happen: if you look for a deterministic solution to the problem, I am not sure you can come up with a general answer. Whereas there is a nice, obvious solution to the question "Is Socrates mortal?" given the facts "Socrates is a man" and "All men are mortal", it is not at all clear how to do more complex forms of binding without simply doing massive searches.
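
For that easy case the mechanics really are trivial; a toy sketch (the representation below is invented for illustration):

    # One-step backward inference: "Is Socrates mortal?" from man(Socrates)
    # and the rule man(X) -> mortal(X).  Representation invented.
    facts = {("man", "Socrates")}
    rules = [("man", "mortal")]   # all men are mortal

    def holds(predicate, individual):
        if (predicate, individual) in facts:
            return True
        # one backward step: find a rule whose conclusion matches the query
        return any(conclusion == predicate and holds(condition, individual)
                   for condition, conclusion in rules)

    print(holds("mortal", "Socrates"))   # True, via man(Socrates) and man -> mortal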

#### MY REPLY ############>>
[You often do have to do massive searches --- it is precisely the human brain's ability to do such massive searches (averaging roughly 3 to 300 trillion/second in the cortex alone) that lets us so often come up with the appropriate memory or reason at the appropriate time.  But the massive searches in a large Shruti-like or Novamente-like system are not totally blind searches --- instead they are often massive searches guided by forward and/or backward chaining --- by previously learned and/or recently activated probabilities and importances --- by relative scores of various search threads or pattern activations --- by inference patterns that may have proven successful in previous similar searches --- by similar episodic memories --- and by interaction with the current context as represented by the other current activations]
Well, I will hold my fire until I get to your comments below, but I must insist that what I said was accurate: his first major paper on this topic was a sleight of hand that avoided the crucial problem of ensuring that the relevant things wanted to contact each other.

My preliminary response to your suggestion that other Shastri papers do describe ways to make binding happen correctly is as follows: anyone can suggest ways that *might* cause correct binding to occur - anyone can wave their hands, write a program, and then say "backward chaining" - but there is a world of difference between suggesting mechanisms that *might* do it, and showing that those mechanisms actually do cause correct bindings to be established in practice.

What happens in practice is that the proposed mechanisms work for (a) toy cases for which they were specifically designed to work, and/or (b) a limited number of the more difficult cases, and that what we also find is that they (c) tend to screw up in all kinds of interesting ways when the going gets tough. At the end of the day, these proposals don't solve the binding problem, they just work some of the time, with no clear reason given why they should work all of the time. They are, in a word, hacks.

Understanding that they only have the status of hacks is a very important sign of maturity as an AI researcher. There is a very deep truth buried in that fact.



###### RICHARD LOOSEMORE WROTE #############>>
Or rather, it is not clear how you can *guarantee* the finding of a
solution.
#### MY REPLY ############>>
[In such massive searching, humans often miss the most appropriate implication, particularly if it involves a multi-stage inference such as that shown in the "John fell in the Hallway" example I previously sent you.]
###### RICHARD LOOSEMORE WROTE #############>>
Basically, I think the best you can do is to use various heuristics to shorten the computational problem of proving that the two books can relate. For example, the system can learn the general rule "If you receive a gift of X, then you subsequently own X", and then it can work backwards from all facts that allow you to conclude ownership, to see if one fulfills the requirement.
#### MY REPLY ############>>
[I think human-level thinking will require tens to thousands of billions of activations a second, so that in a Shruti-like scheme each of the roughly 32 gamma-wave frequency phases would be allowed 100 million to 100 billion activations, allowing a large amount of searching.  But this searching is not totally blind.  Instead it would use something like the parallel terraced scan discussed in Hofstadter's description of his Copycat system, which would be guided by the search-guiding factors I have listed above.]

We are converging on the same conclusion here. But I do not think that the Shruti system is necessarily the best way to go. I think, in fact, that if it involves billions of activations, it is the wrong approach.



###### RICHARD LOOSEMORE WROTE #############>>

You then also have to deal with problems such as receiving a gift and then giving it away or losing it. The question is, do you search through all of those subjunctive worlds?

#### MY REPLY ############>>
[Again, you do often need to use massive search, but it has to be somewhat focused to have a good probability of producing a good solution because the possible search space is usually way too large to be totally searched]

###### RICHARD LOOSEMORE WROTE #############>>
A nightmare, in general, to do an exhaustive search.

So if a deterministic answer is not the way to go, what about the alternative, which I have called the "emergent" answer?

This is not so very different from finding a good heuristic, but the philosophy is very very different. If the system is continually building models of the world, using constraints among as many aspects of those models as possible, and applying as much pragmatic, real-world general knowledge as it can, then I believe that such a system would quickly home in on a model in which the question "Does Mary own a book?" was sitting alongside a model describing a recent fact such as "Mary got a book as a gift", and the two would gel.
#### MY REPLY ############>>
[Yes, constraints help.  As I have said for decades, "More is more in AI", because the more relevant experiential representation you have about a given problem or ones similar to it, the more constraints you have to help guide you on the more appropriate search and inference paths.  If this is what you mean by emergent, yes, I agree emergence is helpful.]
No! You don't add constraints to a system that is basically driven by search and inference paths, because those latter mechanisms are a sorry dead end that will prevent your system from being able to exploit the fullest forms of the constraint-relaxation idea. The idea is to allow constraints to be the dominant force in the system, and to stop kidding ourselves that the "search and inference" mechanisms are where the action happens. This switch in perspective - this rearrangement of the horse and the cart - is at the core of what is meant by calling the system "complex".

But this is not necessarily the mysterious and ill defined RL complexity,
Ahhh.... please don't take gratuitous pot shots: you do not understand what I mean by "complexity".

but rather just the complexity of many different contextual and experiential factors interacting.
And this is where you demonstrate that you do not. You are using the word "complexity" as if it just meant "complicated", and this is a misunderstanding.



As I said above, the inferencing would not be a totally blind massive search --- instead it would often be guided by factors like those I listed above.]

###### RICHARD LOOSEMORE WROTE #############>>
How *exactly* would they gel? Now that is where the philosophical difference comes in. I cannot give any reason why the two models will find each other and make a strong mutual fit! I do not claim to be able to prove that the binding will take place. Instead, what I claim is that as a matter of pure, empirical fact, a system with a rich enough set of contextual facts, and with a rich enough model-building mechanism, will simply tend to build models in which (most of the time) the bindings will get sorted out.
#### MY REPLY ############>>
[Almost all of your discussion above has been about inference control and not really binding per se.  Of course, binding, whether handled implicitly or explicitly, is a part of inference, so your comments are not irrelevant --- but they do not specifically address the particular problems associated with binding.]
Arrggghhh!  By saying this, you show that you have missed my core point.

Binding happens because of a complex emergent consequence of the mechanisms that you are calling inference control (but which are more general and powerful than that .... see previous comment). Anyone who thinks that there is a binding problem that can be solved in some other (more deterministic way) is putting the cart before the horse.


Richard Loosemore

