###### RICHARD LOOSEMORE WROTE #############>> Now I must repeat what I said before about some (perhaps many?) claimed solutions to the binding problem: these claimed solutions often establish the *mechanism* by which a connection could be established IF THE TWO ITEMS WANT TO TALK TO EACH OTHER. In other words, what these people (e.g. Shastri and Ajjanagadde) do is propose a two-step solution: (1) the two instances magically decide that they need to get hooked up, and (2) then, some mechanism must allow these two to make contact and set up a line to one another. Think of it this way: (1) You decide at this moment that you need to call Britney Spears, and (2) You need some mechanism whereby you can actually establish a phone connection that goes from your place to Britney's place.
The crazy part of this "solution" to the binding problem is that people often make the quiet and invisible assumption that (1) is dealt with (the two items KNOW that they need to talk), and then they go on to work out a fabulously powerful way (e.g. using neural synchronisation) to get part (2) to happen. The reason this is crazy is that the first part IS the binding problem, not the second part! The second phase (the practical aspects of making the phone call get through) is just boring machinery. By the time the two parties have decided that they need to hook up, the show is already over... the binding problem has been solved. But if you look at papers describing these so-called solutions to the binding problem you will find that the first part is never talked about. At least, that was true of the S & A paper, and at least some of the papers that followed it, so I gave up following that thread in utter disgust.

#### MY REPLY ############>> [Your description of Shastri's work is inaccurate --- at least judging from the papers of his I have read, which include, among others, "Advances in Shruti -- A neurally motivated model of relational knowledge representation and rapid inference using temporal synchrony", Applied Intelligence, 11: 79-108, 1999 ( http://www.icsi.berkeley.edu/~shastri/psfiles/shruti_adv_98.ps ), and "Massively parallel knowledge representation and reasoning: Taking a cue from the Brain", by Shastri and Mani. It is obvious from reading Shastri that his notion of what should talk to what (i.e., what should be searched by spreading activation) is determined by a form of forward and/or backward chaining, which can be learned automatically from temporal associations between pattern activations. The bindings involved can likewise be learned from occurrences of the same pattern element instances, as parts or as attributes, in one or more of those temporally related patterns.
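Shastri's temporal-synchrony binding can be pictured with a minimal sketch. This is a toy of my own construction, not Shruti's actual machinery: I assume a small fixed number of phase slots per gamma cycle, and all the role and entity names are illustrative. The key idea it shows is that a role and an entity are "bound" simply by firing in the same phase.

```python
# Toy sketch of dynamic binding via temporal synchrony, in the spirit of
# Shruti. The phase-slot count and all names are illustrative assumptions.

NUM_PHASES = 8  # distinct phase slots assumed available within one gamma cycle

def assign_phases(entities):
    """Give each currently active entity its own firing phase."""
    if len(entities) > NUM_PHASES:
        raise ValueError("more simultaneous bindings than phase slots")
    return {entity: phase for phase, entity in enumerate(entities)}

def bind(role_phases, role, entity, entity_phases):
    """Bind a role to an entity: the role node fires in the entity's phase."""
    role_phases[role] = entity_phases[entity]

def bound_entity(role_phases, role, entity_phases):
    """Recover a binding: the entity whose phase matches the role's phase."""
    for entity, phase in entity_phases.items():
        if phase == role_phases.get(role):
            return entity
    return None

# Represent "John gave Mary book17" as three role-entity bindings.
entity_phases = assign_phases(["John", "Mary", "book17"])
role_phases = {}
bind(role_phases, "give.giver", "John", entity_phases)
bind(role_phases, "give.recipient", "Mary", entity_phases)
bind(role_phases, "give.object", "book17", entity_phases)

print(bound_entity(role_phases, "give.recipient", entity_phases))  # Mary
```

Note that this sketch only covers the mechanics of holding a binding once it is wanted; the question of which bindings to attempt is what the chaining and learning machinery described above is for.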
Shruti's representational scheme has limitations that make it ill-suited for use as the general representation scheme in an AGI (problems which I think can be fixed with a more generalized architecture), but the particular problem you are accusing his system of here --- i.e., that it provides no guidance as to what should be searched for, and when, to answer a given query --- is not in fact a problem (other than the issue of possible exponential explosion of the search tree, which is discussed in my answers below)]

###### RICHARD LOOSEMORE WROTE #############>> It is very important to break through this confusion and find out exactly why the two relevant entities would decide to talk to each other. Solving any other aspect of the problem is not of any value. Now, going back to your question about how it would happen: if you look for a deterministic solution to the problem, I am not sure you can come up with a general answer. Whereas there is a nice, obvious solution to the question "Is Socrates mortal?" given the facts "Socrates is a man" and "All men are mortal", it is not at all clear how to do more complex forms of binding without simply doing massive searches.

#### MY REPLY ############>> [You often do have to do massive searches -- it is precisely because the human brain can do such massive searches (averaging roughly 3 to 300 trillion activations/second in the cortex alone) that we can so often come up with the appropriate memory or reason at the appropriate time.
But the massive searches in a large Shruti-like or Novamente-like system are not totally blind searches --- instead they are often massive searches guided by forward and/or backward chaining, by previously learned and/or recently activated probabilities and importances, by relative scores of various search threads or pattern activations, by inference patterns that have proven successful in previous similar searches, by similar episodic memories, and by interaction with the current context as represented by the other current activations.]

###### RICHARD LOOSEMORE WROTE #############>> Or rather, it is not clear how you can *guarantee* the finding of a solution.

#### MY REPLY ############>> [In such massive searching, humans often miss the most appropriate implication, particularly if it involves a multi-stage inference such as that shown in the "John fell in the hallway" example I previously sent you.]

###### RICHARD LOOSEMORE WROTE #############>> Basically, I think the best you can do is to use various heuristics to shorten the computational problem of proving that the two books can relate. For example, the system can learn the general rule "If you receive a gift of X, then you subsequently own X", and then it can work backwards from all facts that allow you to conclude ownership, to see if one fulfills the requirement.

#### MY REPLY ############>> [I think human-level thinking will require tens to thousands of billions of activations a second, so that in a Shruti-like scheme each of the roughly 32 gamma-wave frequency phases would be allowed 100 million to 100 billion activations, allowing a large amount of searching. But this searching is not totally blind. Instead it would use something like the parallel terraced scan discussed in Hofstadter's description of his Copycat system, which would be guided by the search-guiding factors I have listed above.]
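The backward chaining mentioned above -- working back from the ownership goal through a learned rule like "a gift of X implies ownership of X" -- might be sketched minimally as follows. The tuple-based fact format and all names are my own illustrative assumptions, not Shruti's or Novamente's actual representation:

```python
# Minimal backward-chaining sketch of the "gift implies ownership" rule.
# Facts are stored as plain tuples; this format is an illustrative assumption.

facts = {("give", "John", "Mary", "book17")}  # John gave Mary book17

def prove_own(person, facts):
    """Work backwards from the goal own(person, ?x) through the learned rule
    give(giver, recipient, x) => own(recipient, x)."""
    for fact in facts:
        if fact[0] == "give" and fact[2] == person:
            return fact[3]  # an object given to the person, hence owned
    return None  # no supporting fact found in this (tiny) search space

print(prove_own("Mary", facts))  # book17, so "Does Mary own a book?" -> yes
```

In a realistic system the loop over facts would be replaced by spreading activation over a huge network, with the guiding factors listed above deciding which chains get pursued.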
###### RICHARD LOOSEMORE WROTE #############>> You then also have to deal with problems such as receiving a gift and then giving it away or losing it. The question is, do you search through all of those subjunctive worlds?

#### MY REPLY ############>> [Again, you do often need to use massive search, but it has to be somewhat focused to have a good probability of producing a good solution, because the possible search space is usually far too large to be searched exhaustively.]

###### RICHARD LOOSEMORE WROTE #############>> A nightmare, in general, to do an exhaustive search. So if a deterministic answer is not the way to go, what about the alternative, which I have called the "emergent" answer? This is not so very different from finding a good heuristic, but the philosophy is very very different. If the system is continually building models of the world, using constraints among as many aspects of those models as possible, and applying as much pragmatic, real-world general knowledge as it can, then I believe that such a system would quickly home in on a model in which the question "Does Mary own a book?" was sitting alongside a model describing a recent fact such as "Mary got a book as a gift", and the two would gel.

#### MY REPLY ############>> [Yes, constraints help. As I have said for decades, "More is more in AI": the more relevant experiential representation you have about a given problem or ones similar to it, the more constraints you have to help guide you along the more appropriate search and inference paths. If this is what you mean by emergent, then yes, I agree emergence is helpful. But this is not necessarily the mysterious and ill-defined RL complexity, but rather just the complexity of many different contextual and experiential factors interacting. As I said above, the inferencing would not be a totally blind massive search --- instead it would often be guided by factors like those I listed above.]
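As a toy illustration of such a focused-but-massive search (assuming nothing about Copycat's or Novamente's real control code), consider a scored best-first expansion under a fixed activation budget, where the score function stands in for the guiding factors -- probabilities, importances, context fit -- listed above:

```python
import heapq

# Toy "focused massive search": candidate states carry heuristic scores and
# the most promising are expanded first, under a fixed activation budget.
# This illustrates the general control style only; the score function is a
# stand-in for the guiding factors discussed in the surrounding text.

def focused_search(start, neighbors, score, is_goal, budget=1000):
    """Best-first search: expand the highest-scoring known state at each step."""
    frontier = [(-score(start), start)]  # negate: heapq pops smallest first
    seen = {start}
    while frontier and budget > 0:
        _, state = heapq.heappop(frontier)
        budget -= 1
        if is_goal(state):
            return state
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-score(nxt), nxt))
    return None  # budget exhausted: the appropriate inference is simply missed

# Example: reach 24 from 1 via +1 / *2 steps, scoring states by closeness.
result = focused_search(
    start=1,
    neighbors=lambda n: [n + 1, n * 2],
    score=lambda n: -abs(24 - n),
    is_goal=lambda n: n == 24,
)
print(result)  # 24
```

The budget parameter is the interesting part: when it runs out, the search fails quietly, which is exactly the kind of missed inference that human reasoning also exhibits.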
###### RICHARD LOOSEMORE WROTE #############>> How *exactly* would they gel? Now that is where the philosophical difference comes in. I cannot give any reason why the two models will find each other and make a strong mutual fit! I do not claim to be able to prove that the binding will take place. Instead, what I claim is that as a matter of pure, empirical fact, a system with a rich enough set of contextual facts, and with a rich enough model-building mechanism, will simply tend to build models in which (most of the time) the bindings will get sorted out.

#### MY REPLY ############>> [Almost all of your discussion above has been about inference control and not really about binding per se. Of course, binding, whether handled implicitly or explicitly, is a part of inference, so your comments are not irrelevant --- but they do not specifically address the particular problems associated with binding.]

###### RICHARD LOOSEMORE WROTE #############>> There is a bit more to the story than that, but you have to understand that in this "emergent" (or, to be precise, "complex system") answer to the question, there is no guarantee that binding will happen. The binding problem in effect disappears - it does not need to be explicitly solved because it simply never arises. There is no specific mechanism designed to construct bindings (although there are lots of small mechanisms that enforce constraints), there is only a general style of computation, which is the relaxation-of-constraints style.
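For concreteness, the relaxation-of-constraints style might be caricatured as follows (the candidates, weights, and update rule are all illustrative assumptions, not anyone's proposed mechanism): support for candidate bindings is repeatedly nudged by small soft constraints, and the binding "emerges" when one candidate dominates, with no mechanism that explicitly decides to construct it.

```python
# Toy relaxation-of-constraints sketch: no mechanism explicitly constructs the
# binding between "a book" in the question and a known book. Candidate
# referents start with neutral support, and small soft constraints repeatedly
# nudge that support until one candidate dominates. All weights are made up.

support = {"book17": 0.5, "book03": 0.5}  # candidate referents, neutral start

constraints = [
    ("book17", +0.20),  # book17 was recently given to Mary (context fit)
    ("book17", +0.10),  # book17's representation is still highly active
    ("book03", -0.15),  # book03 belongs to an unrelated episode
]

for _ in range(10):  # relax: apply every soft constraint, clamping to [0, 1]
    for candidate, weight in constraints:
        support[candidate] = min(1.0, max(0.0, support[candidate] + 0.5 * weight))

winner = max(support, key=support.get)
print(winner)  # book17 wins; the binding has "emerged"
```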
#### MY REPLY ############>> [Again, your answer is really addressed more to inferencing in general and does not address the issue of binding specifically.]

###### RICHARD LOOSEMORE WROTE #############>> Overall, then, I believe that any attempt to find a guaranteed solution, or an explicit mechanism, that causes bindings to be established is actually a folly: guarantees are not possible, and in practice the people who offer this style of explanation never do supply the guarantees anyway, but just solve peripheral problems.

#### MY REPLY ############>> [The possible search spaces involved in many inferencing problems have many more possible states than there are particles in the observable universe --- so, of course, inference searches in such large spaces are not going to be guaranteed to always come up with the best answer. But human reasoning is full of errors and failures to make appropriate inferences, so we should expect human-level AGIs to make somewhat similar mistakes.]

###### RICHARD LOOSEMORE WROTE #############>> That is my view of the binding problem. It is a variant of the general idea that things happen because of complexity (although that is putting it so crudely as to almost confuse the issue).

-----Original Message----- From: Richard Loosemore [mailto:[EMAIL PROTECTED] Sent: Wednesday, July 09, 2008 12:02 PM To: [email protected] Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

Ed, I only have time to look at one small part of your post today...

Ed Porter wrote: > The "Does Mary own a book?" example, once the own relationship is > activated with Mary in the owner slot and "a book" in the owned-object > slot, spreads "?" activation, which asks if there are any related > relationships, instances, or generalizations that support the > statement that Mary owns a book.
The activation causes instances of the > "give" relationship in which Mary was a recipient and a book was the > thing given to be activated, since if Mary was given a book that would > indicate she owned a book. Such an instance is found, tending to > confirm that Mary does own a book, called book-17 in the example, which > was given to her by John. > > The "John fell in the hallway" example, when told that (1) "John fell in > the hallway", (2) "Tom had cleaned it", and (3) "He was hurt", > automatically implies that it was John who was hurt, that the floor > in the hallway was probably wet after Tom cleaned it, and that John > slipped and fell when walking in the wet hallway. > > Tell me how you could perform the type of implication and cognition > shown in these two Shruti examples without some form of binding? > > I for one cannot figure out how to do this with anything like Poggio's > type of binding that would fit into a human brain.

Okay, so the question is what happens if the system is asked "Does Mary own a book?", given that the system does in fact know, as a result of some previous situation, that Mary received a gift which was a book. How does the system achieve the "binding" that links the books referred to in the two situations, so that the question can be answered? This is what would be called a "binding problem". First, you have to notice that there are two types of answer to this question. One is (speaking very loosely) "deterministic" and one is (even more loosely) "emergent". The deterministic answer would find some kind of mechanism that obviously or clearly results in a connection being established between the two book instances - the book given as a gift, and the hypothetical book mentioned in the question about whether she owns a book. A deterministic answer would *convince* us that the two instances must become connected, as a result of the semantic (or other) properties of the two pieces of knowledge.
Richard Loosemore

------------------------------------------- agi Archives: https://www.listbox.com/member/archive/303/=now RSS Feed: https://www.listbox.com/member/archive/rss/303/ Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&id_secret=106510220-47b225 Powered by Listbox: http://www.listbox.com
