Ed Porter wrote:
###### RICHARD LOOSEMORE WROTE #############>>
Now I must repeat what I said before about some (perhaps many?) claimed solutions to the binding problem: these claimed solutions often establish the *mechanism* by which a connection could be established IF THE TWO ITEMS WANT TO TALK TO EACH OTHER. In other words, what these people (e.g. Shastri and Ajjanagadde) do is propose a two-step solution: (1) the two instances magically decide that they need to get hooked up, and (2) then, some mechanism must allow these two to make contact and set up a line to one another. Think of it this way: (1) You decide at this moment that you need to call Britney Spears, and (2) You need some mechanism whereby you can actually establish a phone connection that goes from your place to Britney's place.

The crazy part of this "solution" to the binding problem is that people often make the quiet and invisible assumption that (1) is dealt with (the two items KNOW that they need to talk), and then they go on to work out a fabulously powerful way (e.g. using neural synchronisation) to get part (2) to happen. The reason this is crazy is that the first part IS the binding problem, not the second part! The second phase (the practical aspects of making the phone call get through) is just boring machinery. By the time the two parties have decided that they need to hook up, the show is already over... the binding problem has been solved. But if you look at papers describing these so-called solutions to the binding problem you will find that the first part is never talked about.

At least, that was true of the S & A paper, and at least some of the papers that followed it, so I gave up following that thread in utter disgust.
#### MY REPLY ############>>
[Your description of Shastri's work is inaccurate --- at least judging from the
papers of his I have read, which include, among others, "Advances in Shruti -- A
neurally motivated model of relational knowledge representation and rapid
inference using temporal synchrony", Applied Intelligence, 11: 79-108, 1999
( http://www.icsi.berkeley.edu/~shastri/psfiles/shruti_adv_98.ps ); and
"Massively parallel knowledge representation and reasoning: Taking a cue
from the Brain", by Shastri and Mani.

It is obvious from reading Shastri that his notion of what should talk to
what (i.e., what should be searched by spreading activation) is determined by a
form of forward and/or backward chaining, which can be learned automatically
from temporal associations between pattern activations; and the bindings
involved can be learned from occurrences of the same pattern-element
instance(s) as a part of, or an attribute in, one or more of those
temporally related patterns.

Shruti's representational scheme has limitations that make it ill-suited
for use as the general representation scheme in an AGI (problems which I
think can be fixed with a more generalized architecture), but the particular
problem you are accusing his system of here --- i.e., that it provides no
guidance as to what should be searched, and when, to answer a given query ---
is not in fact a problem (other than the issue of possible exponential
explosion of the search tree, which is discussed in my answers below).]
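
As a very rough, illustrative sketch of what binding by temporal synchrony amounts to (an invented toy, not Shruti's actual encoding; the entities, roles, and phase count below are made up for the example): each active entity in effect owns a phase slot within a gamma-like cycle, and a role node is treated as bound to whichever entity fires in its phase.

    # Toy illustration of binding by temporal synchrony (not Shruti's actual code).
    # Each active entity is assigned a phase slot within a gamma-like cycle; a role
    # node is taken to be bound to whichever entity fires in the same phase.

    PHASES_PER_CYCLE = 8  # assumption: a small, fixed number of phase slots

    def assign_phases(entities):
        """Give each active entity its own phase slot (first come, first served)."""
        if len(entities) > PHASES_PER_CYCLE:
            raise ValueError("more active entities than available phase slots")
        return {entity: phase for phase, entity in enumerate(entities)}

    def bind_roles(role_fillers, entity_phase):
        """A role fires in the phase of its filler; the shared phase *is* the binding."""
        return {role: entity_phase[filler] for role, filler in role_fillers.items()}

    def read_out_bindings(role_phase, entity_phase):
        """Recover which entity each role is bound to by matching phases."""
        phase_to_entity = {p: e for e, p in entity_phase.items()}
        return {role: phase_to_entity[p] for role, p in role_phase.items()}

    # "John gave Mary a book": giver/recipient/object roles bound by shared phases.
    entity_phase = assign_phases(["John", "Mary", "book1"])
    role_phase = bind_roles({"giver": "John", "recipient": "Mary", "object": "book1"},
                            entity_phase)
    print(read_out_bindings(role_phase, entity_phase))
    # {'giver': 'John', 'recipient': 'Mary', 'object': 'book1'}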

###### RICHARD LOOSEMORE WROTE #############>>
It is very important to break through this confusion and find out exactly why the two relevant entities would decide to talk to each other. Solving any other aspect of the problem is not of any value.

Now, going back to your question about how it would happen: if you look for a deterministic solution to the problem, I am not sure you can come up with a general answer. Whereas there is a nice, obvious solution to the question "Is Socrates mortal?" given the facts "Socrates is a man" and "All men are mortal", it is not at all clear how to do more complex forms of binding without simply doing massive searches.

#### MY REPLY ############>>
[You often do have to do massive searches -- it is precisely because the
human brain can do such massive searches (averaging roughly 3 to 300
trillion activations per second in the cortex alone) that we so often come up
with the appropriate memory or reason at the appropriate time.  But the massive
searches in a large Shruti-like or Novamente-like system are not
totally blind searches --- instead they are often massive searches guided by
forward and/or backward chaining --- by previously learned and/or recently
activated probabilities and importances --- by relative scores of various
search threads or pattern activations --- by inference patterns that may
have proven successful in previous similar searches --- by similar episodic
memories --- and by interaction with the current context as represented
by the other current activations.]
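
For concreteness, here is one minimal sketch of what "massive but guided" search could look like: a best-first expansion over a fact graph in which every link carries a learned weight, so that the strongest accumulated activation is always pursued first. The knowledge base, the weights, and the fact names are invented for the illustration; nothing here is Shruti's or Novamente's actual machinery.

    import heapq

    # Toy, invented knowledge graph: each link carries a learned "importance" in (0, 1].
    # Guided search expands the currently strongest activation first instead of
    # flooding the whole graph blindly.
    LINKS = {
        "Mary": [("received_gift(Mary, book1)", 0.9), ("likes(Mary, jazz)", 0.2)],
        "received_gift(Mary, book1)": [("owns(Mary, book1)", 0.8)],
        "likes(Mary, jazz)": [("attends(Mary, concerts)", 0.3)],
    }

    def guided_search(start, goal, max_expansions=1000):
        """Best-first search: the strongest accumulated activation is expanded first."""
        frontier = [(-1.0, start, [start])]          # heapq is a min-heap, so negate
        seen = set()
        while frontier and max_expansions > 0:
            neg_score, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, -neg_score
            if node in seen:
                continue
            seen.add(node)
            max_expansions -= 1
            for nxt, weight in LINKS.get(node, []):
                heapq.heappush(frontier, (neg_score * weight, nxt, path + [nxt]))
        return None, 0.0

    print(guided_search("Mary", "owns(Mary, book1)"))
    # (['Mary', 'received_gift(Mary, book1)', 'owns(Mary, book1)'], ~0.72)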

Well, I will hold my fire until I get to your comments below, but I must insist that what I said was accurate: his first major paper on this topic was a sleight of hand that avoided the crucial problem of ensuring that the relevant things wanted to contact each other.

My preliminary response to your suggestion that other Shastri papers do describe ways to make binding happen correctly is as follows: anyone can suggest ways that *might* cause correct binding to occur - anyone can wave their hands, write a program, and then say "backward chaining" - but there is a world of difference between suggesting mechanisms that *might* do it, and showing that those mechanisms actually do cause correct bindings to be established in practice.

What happens in practice is that the proposed mechanisms work for (a) toy cases for which they were specifically designed to work, and/or (b) a limited number of the more difficult cases, and what we also find is that (c) they tend to screw up in all kinds of interesting ways when the going gets tough. At the end of the day, these proposals don't solve the binding problem; they just work some of the time, with no clear reason given why they should work all of the time. They are, in a word, hacks.

Understanding that they only have the status of hacks is a very important sign of maturity as an AI researcher. There is a very deep truth buried in that fact.




###### RICHARD LOOSEMORE WROTE #############>>
Or rather, it is not clear how you can *guarantee* the finding of a
solution.
#### MY REPLY ############>>
[In such massive searching, humans often miss the most appropriate
implication, particularly if it involves a multi-stage inference such as
that shown in the "John fell in the Hallway" example I previously sent you.]

###### RICHARD LOOSEMORE WROTE #############>>
Basically, I think the best you can do is to use various heuristics to shorten the computational problem of proving that the two books can relate. For example, the system can learn the general rule "If you receive a gift of X, then you subsequently own X", and then it can work backwards from all facts that allow you to conclude ownership, to see if one fulfills the requirement.
#### MY REPLY ############>>
[I think human-level thinking will require tens to thousands of billions of
activations a second, so that in a Shruti-like scheme each of the roughly
32 gamma-wave frequency phases would be allowed 100 million to 100 billion
activations, allowing a large amount of searching.  But this searching is
not totally blind.  Instead it would use something like the parallel
terraced scan discussed in Hofstadter's description of his Copycat system,
which would be guided by the search-guiding factors I have listed above.]
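
As a rough, much-simplified sketch of the parallel-terraced-scan idea (in the spirit of Copycat, but not Copycat's actual code; the thread names, scores, and the notion of a "probe" are invented for the illustration): many candidate lines of search are explored at once, and each round allots more work to the lines whose current promise looks highest.

    import random

    # Toy sketch of a parallel terraced scan: many candidate "threads" are explored
    # at once, and each round allots more probes to the threads whose current
    # promise looks highest.

    def terraced_scan(threads, probe, rounds=10, probes_per_round=100):
        """threads: dict of name -> promise score in (0, 1].
        probe(name) deepens one thread and returns an updated promise estimate,
        or None when that thread has found a complete answer."""
        for _ in range(rounds):
            total = sum(threads.values())
            for name in list(threads):
                # Resources are allotted roughly in proportion to current promise.
                share = max(1, int(probes_per_round * threads[name] / total))
                for _ in range(share):
                    result = probe(name)
                    if result is None:           # this thread found a complete answer
                        return name
                    threads[name] = result       # revise its promise estimate
        return max(threads, key=threads.get)     # best partial answer so far

    # Example: three competing inference threads; the most promising one usually
    # gets enough probes to finish first.
    def fake_probe(name, _state={"gift-path": 0.5, "loan-path": 0.4, "theft-path": 0.2}):
        _state[name] = min(1.0, _state[name] + random.uniform(0.0, 0.05))
        return None if _state[name] >= 1.0 else _state[name]

    print(terraced_scan({"gift-path": 0.5, "loan-path": 0.4, "theft-path": 0.2}, fake_probe))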

We are converging on the same conclusion here. But I do not think that the Shruti system is necessarily the best way to go. I think, in fact, that if it involves billions of activations, it is the wrong approach.



###### RICHARD LOOSEMORE WROTE #############>>

You then also have to deal with problems such as receiving a gift and then giving it away or losing it. The question is, do you search through all of those subjunctive worlds?

#### MY REPLY ############>>
[Again, you do often need to use massive search, but it has to be somewhat
focused to have a good probability of producing a good solution, because the
possible search space is usually far too large to be searched exhaustively.]
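
To make the "work backwards from everything that lets you conclude ownership" heuristic concrete, here is a minimal backward-chaining sketch. The rule encoding and the facts are invented for the example, and it deliberately ignores the gave-it-away / lost-it complications raised above.

    # Minimal backward chaining over "ways to conclude ownership" (toy encoding).
    # Rules are (conclusion, [premises]); variables are strings starting with '?'.
    RULES = [
        (("owns", "?p", "?x"), [("received_gift", "?p", "?x")]),
        (("owns", "?p", "?x"), [("bought", "?p", "?x")]),
    ]
    FACTS = {("received_gift", "Mary", "book1")}

    def unify(pattern, fact, bindings):
        """Match a pattern against a ground fact, extending bindings or failing."""
        if len(pattern) != len(fact):
            return None
        bindings = dict(bindings)
        for p, f in zip(pattern, fact):
            if p.startswith("?"):
                if bindings.get(p, f) != f:
                    return None
                bindings[p] = f
            elif p != f:
                return None
        return bindings

    def substitute(pattern, bindings):
        return tuple(bindings.get(term, term) for term in pattern)

    def prove(goal, bindings=None, depth=5):
        """Backward chaining: check the facts first, then any rule whose conclusion matches."""
        goal = substitute(goal, bindings or {})
        if goal in FACTS:
            return True
        if depth == 0:
            return False
        for conclusion, premises in RULES:
            b = unify(conclusion, goal, {})
            if b is not None and all(prove(p, b, depth - 1) for p in premises):
                return True
        return False

    print(prove(("owns", "Mary", "book1")))   # True: the gift rule fires
    print(prove(("owns", "Mary", "book2")))   # False: nothing supports it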

###### RICHARD LOOSEMORE WROTE #############>>
A nightmare, in general, to do an exhaustive search.

So if a deterministic answer is not the way to go, what about the alternative, which I have called the "emergent" answer?

This is not so very different from finding a good heuristic, but the philosophy is very very different. If the system is continually building models of the world, using constraints among as many aspects of those models as possible, and applying as much pragmatic, real-world general knowledge as it can, then I believe that such a system would quickly home in on a model in which the question "Does Mary own a book?" was sitting alongside a model describing a recent fact such as "Mary got a book as a gift", and the two would gel.
#### MY REPLY ############>>
[Yes, constraints help.  As I have said for decades, "More is more in AI",
because the more relevant experiential representation you have about a given
problem, or ones similar to it, the more constraints you have to help guide
you toward the more appropriate search and inference paths.  If this is what
you mean by emergent, yes, I agree emergence is helpful.

No! You don't add constraints to a system that is basically driven by search and inference paths, because those latter mechanisms are a sorry dead end that will prevent your system from being able to exploit the fullest forms of the constraint-relaxation idea. The idea is to allow constraints to be the dominant force in the system, and to stop kidding ourselves that the "search and inference" mechanisms are where the action happens. This switch in perspective - this rearrangement of the horse and the cart - is at the core of what is meant by calling the system "complex".


But this is not necessarily the mysterious and ill-defined RL complexity,

Ahhh.... please don't take gratuitous pot shots: you do not understand what I mean by "complexity".

but rather just the complexity of many different contextual and experiential
factors interacting.

And this is where you demonstrate that you do not. You are using the word "complexity" as if it just meant "complicated", and this is a misunderstanding.



As I said above, the inferencing would not be a totally blind massive search
--- instead it would often be guided by factors like those I listed above.]
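
As a toy illustration of the "let constraints do the work" idea (this is nobody's actual architecture; the hypotheses and weights below are invented for the example): a relaxation network lets mutually consistent hypotheses pump each other up and incompatible ones suppress each other until the ensemble settles, with no central search loop deciding what should gel with what.

    # Toy constraint-relaxation network.  Hypotheses that support one another share a
    # positive weight, incompatible ones a negative weight; repeated local updates let
    # a mutually consistent set of hypotheses win, without a central search loop.
    WEIGHTS = {
        ("Mary_received_book_as_gift", "Mary_owns_a_book"):       0.8,
        ("Mary_owns_a_book", "Mary_gave_the_book_away"):         -0.6,
        ("Mary_received_book_as_gift", "Mary_never_had_a_book"): -0.9,
    }

    def weight(a, b):
        return WEIGHTS.get((a, b)) or WEIGHTS.get((b, a)) or 0.0

    def relax(activation, steps=50, rate=0.2):
        """Nudge each hypothesis toward the net support it receives from the others."""
        for _ in range(steps):
            new = {}
            for h, a in activation.items():
                support = sum(weight(h, other) * oa
                              for other, oa in activation.items() if other != h)
                new[h] = min(1.0, max(0.0, a + rate * support))
            activation = new
        return activation

    # Start the observed fact at full strength and let the rest settle.
    start = {"Mary_received_book_as_gift": 1.0,
             "Mary_owns_a_book": 0.5,
             "Mary_gave_the_book_away": 0.5,
             "Mary_never_had_a_book": 0.5}
    print(relax(start))
    # "Mary_owns_a_book" settles near 1.0; the incompatible hypotheses are driven toward 0.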

###### RICHARD LOOSEMORE WROTE #############>>
How *exactly* would they gel? Now that is where the philosophical difference comes in. I cannot give any reason why the two models will find each other and make a strong mutual fit! I do not claim to be able to prove that the binding will take place. Instead, what I claim is that as a matter of pure, empirical fact, a system with a rich enough set of contextual facts, and with a rich enough model-building mechanism, will simply tend to build models in which (most of the time) the bindings will get sorted out.
#### MY REPLY ############>>
[Almost all of your discussion above has been about inference control and
not really binding per se.  Of course, binding, whether handled implicitly
or explicitly, is a part of inference, so your comments are not irrelevant ---
but they do not specifically address the particular problems associated with
binding.]

Arrggghhh!  By saying this, you show that you have missed my core point.

Binding happens as a complex, emergent consequence of the mechanisms that you are calling inference control (but which are more general and powerful than that .... see previous comment). Anyone who thinks that there is a binding problem that can be solved in some other (more deterministic) way is putting the cart before the horse.


Richard Loosemore

