We don't know how the human mind works, so to conclude that it works like some kind of weighted net is not valid. Someone once said to me that Neural Nets were the only systems capable of learning. They can learn, but other methods can learn as well. You haven't seen them yet because they do not work any better than connectionist methods. If you are advertising direct representational methods and everything gets smushed up just like it does with connectionist methods, the demo is not going to be very convincing.
The argument is not that the human mind only manipulates words and images vs. some connectionist method, but that a system which uses a symbolic IO system should be easier to work with than a system which does not. If connectionist methods worked as well as you seem to think they do, I wouldn't be arguing with you.

On Fri, Apr 5, 2013 at 6:42 AM, Andrew G. Babian <[email protected]> wrote:

> Well, if you had to look up Rodney Brooks in Wikipedia, then the ideas won't seem as familiar as if you had seen him from the beginning. One of the main ideas is robots with no representations at all. They had bug bots and flocking stuff, and surprisingly effective behaviors. Certainly they haven't gotten to human level, but that would be a long path anyway in that approach, since it's a hand-coding system. I use that as just another example of a system with some success not based on explicit representation.
>
> But connectionism not getting human-level results? They aren't full systems, but some of the isolated recognizers with deep learning are close to the best things we have for some problems. Their learning is far better than anything the symbolic systems can even dream of. It turns out that support vector machines are a superior technology to neural nets, though, so we actually have a strong possibility that we can do even better than the brain using computer technology than all the brain/mind-copy people who are trying to exactly copy that architecture. Again, the system is so huge that it's not a simple problem that can be done with the modest effort of most projects.
>
> So then the objection comes to: they haven't done it yet. I'm going to say that the effort just hasn't been invested. One of the biggest things is that there is funding competition, and those kinds of approaches are not as obvious. The representational systems, just to call them something, are just basically more popular, and for good reason.
> Handling straight text or something similar is natural and obvious for computer people. And you can get a lot of results that we can use fairly well. All that statistical NLP? Google? It can be hard to argue with success like that. But then it becomes like chess. You solve the problems with number crunching and programming solutions worked out in advance instead of any kind of general intelligence mechanisms. Anyway, I wouldn't fault them for not yet finishing a whole system.
>
> I guess my biggest thing is just how we don't really appreciate even our own inner workings, because consciously, we kind of think we only have these words and images floating around, so we think intelligence is just manipulating these words and images. I find that really very silly.
>
> andi
>
> On Apr 4, 2013, at 10:41 AM, Jim Bromer <[email protected]> wrote:
>
> Andrew G. Babian <[email protected]> wrote:
> People don't talk about neural networks having "representations". You guys ever heard of Rodney Brooks? It seems like at the linguistic level we have representations, but look closely: no, we don't have retrieval, we have dynamic recreations.
> -------------
>
> This doesn't make any sense. If you look closely, we don't have retrieval, we just have insight popping into consciousness. There are times when I can't figure something out and I will "think" real hard without actually thinking of anything, and there are occasions when I do this and a reasonable solution will pop up. I presume that when I go through this there is some kind of search for a way to proceed; whether it is symbolic or sub-symbolic, it is not conscious other than to feel like I am "thinking real hard". That doesn't mean that the components for the potential to create (or recreate) the insight do not exist in the mind. (I also sometimes use the phrase "I am thinking real hard" in a different way.)
> I looked Rodney Brooks up in Wikipedia, and he introduced a robot that used subsumption architecture. Wikipedia does not describe subsumption architecture as if it were non-representational or sub-symbolic.
>
> In the 70s there was some hope for an AI methodology that was analogous to a hologram. I think that Hubert Dreyfus made some kind of reference to a holographic memory or something like that. Wikipedia just says that Dreyfus was vindicated when connectionist methods were rediscovered. However, my recollection was that Dreyfus was talking about something more robust than connectionist methods. I think he was talking about a still undiscovered, mysterious method that would reveal human-like intelligence, in which understanding almost automatically emerged from any conceptual vantage point, like the image of a large hologram. (Although I think he was opposed to the Platonic (Aristotelian) method (of logic) as a model of thought.) But notice that, 30 years and counting later, connectionist methods have not achieved anything like human-level intelligence. So if you were to take on an early Dreyfus-like skepticism, you would have to agree with the conclusion that connectionism just did not cut it either.
>
> And notice that Steve has been talking about turning a representational model into a mathematical model that would not be human-readable. If we figured out how to get a representational model working, we could begin exploring non-readable variations which would implement features that we wanted. This could turn out to be theoretically important.
>
> I could go on about this if you wanted me to.
>
> Jim Bromer
>
> On Wed, Apr 3, 2013 at 9:53 PM, Andrew G. Babian <[email protected]> wrote:
>
>> Thanks Aaron!
>> Computer people always want data structures and representation because it would make the problem straightforward, but there is no reason that there have to be any.
>> People don't talk about neural networks having "representations". You guys ever heard of Rodney Brooks? It seems like at the linguistic level we have representations, but look closely: no, we don't have retrieval, we have dynamic recreations.
>>
>> As for arguing why propositions are not sufficient, how about all of non-explicit learning, like motor learning for example. Now, it's tricky, because representational systems can be made informationally complete, so you might get the idea that you could get such a system to eventually work. But we have many years of constantly working at it. Brittleness is one of the big ones, but there are plenty more. One particular problem I am focusing on is the difficulty of incremental learning for them. You have to attach a completely different mechanism (like weights) onto the proposition to deal with that, so as a simple counterexample, you might see that propositional representations are insufficient.
>>
>> I apologize for not having a proper positive proposal, as that is one of the stated reasons for posting to this forum, but that's such a harder problem. There is still room for criticism of existing approaches. So let me throw in that some of the dynamic network architectures do seem to be promising. I would want them to address the question of epistemic emotional judgements more directly, but I would say they are capable of it. I haven't looked into them enough, I guess. Ben's system (sorry, I have already forgotten what he's calling it, and I think I even downloaded bits of it) and maybe Stan Franklin's LIDA. Personally, as I have said, I'm not happy when there is a representational level in there anywhere. I'd want to stay closer to pure sensori-motor data, with an openness to deep machine learning for features if you just have to have something like representations. OK, there. I tried!
>> Andi
>>
>> On Apr 3, 2013, at 12:02 PM, Aaron Hosford <[email protected]> wrote:
>>
>> PM said:
>>
>>> One suggestion is that you compile language into a "database of facts" using a propositional representation.
>>> In addition, you convert all sensory input to the AGI into the same propositional representation.
>>> Then you do inferencing within and generate behaviors from the aforesaid representation.
>>
>> If by "propositional representation" you mean a logical statement with a Boolean true/false value, this will not be sufficient. The reason is, "facts" are never certain, and you never know in advance which ones will later prove wrong. Facts have associated confidence levels, based on supporting and conflicting evidence. Boolean truth values are an idealization of this, throwing away the ongoing accumulation of evidence and giving us only whether a particular proposition is currently accepted as reliable or not. The failure to recognize this has held back many seemingly promising AI projects in the past.
>>
>> Rich said:
>>
>>> So, what the heck can we compile NL into that would support prospective AGI operation?
>>
>> This is what I've been describing to you. Semantic networks, properly structured, are up to the task. Any proposition from PM's "propositional representation" can be represented in a semantic network. The advantage that a semantic network then conveys is that the relationships between elements contained within a proposition can themselves be given confidence levels; the analysis of evidential support is no longer limited only to the proposition as a whole. For example, suppose I am looking at the proposition *ate(Billy, Nicky's_Popsicle)*. In a standard propositional representation like this, I can't analyze where the proposition is wrong; I just have to accept that it is either right or wrong as a whole.
>> If I use a semantic network-style representation, *Billy <--SUBJECT-- ate --OBJECT--> Nicky's_Popsicle*, I now have two separate locations where I can attribute the failure of the proposition as a whole to be true: the *SUBJECT* and *OBJECT* links. Propositions come so close to doing this, but fail when we attempt to attribute failure to a particular substructure, because they aren't generalized enough to permit full analysis of the relationships of substructures to the parent structure.
>>
>> Andi said:
>>
>>> I would go with Todor on this one. More specifically, it's very clear to me that language cannot be the bottom or basis of representation. A language system has to be a piece on top of the basic system. It may be the most important piece to us, because for interaction with us, and the ability to use our body of written knowledge and contribute to it, a system will need to use language. But that need in no way implies that you could ever get any intelligent behavior if you just start at the level of language. There are plenty of reasons to think otherwise.
>>
>> The problem Steve and I both agree needs to be solved is: what, inside the mind, represents the *meanings* of natural language, and how do we go about designing an analogous structure programmatically? When you say someone understands a sentence, what happens in that person's mind? Is there not some sort of internal structure to which that sentence gets mapped through the act of understanding? In most AI/AGI projects to date, there have been three basic approaches: (1) use it directly in text form, (2) pull out what you need and stash it in "frames", (3) convert it to a parse tree. I think each of these is inadequate to the task.
>> I think there is a more comprehensive data structure used in the human mind to represent what a sentence actually means, and this is the data structure, the lingua franca of the mind, on which the mind operates in the act of thinking. What would that data structure look like, were we to reverse engineer it to work on a computer? Language is useful toward accomplishing this task, not because it is already in the proper form, but because its structure necessarily closely mirrors that form, due to its purpose of communicating knowledge in that form from one mind to another. Once we have a proper understanding of how meaning is represented in the mind, it should be possible to begin mapping sensory information to that format, just as can be done with natural language.
>>
>> On Wed, Apr 3, 2013 at 10:48 AM, Piaget Modeler <[email protected]> wrote:
>>
>>> Steve Richfield: "So, what the heck can we compile NL into that would support prospective AGI operation?"
>>>
>>> One suggestion is that you compile language into a "database of facts" using a propositional representation.
>>> In addition, you convert all sensory input to the AGI into the same propositional representation.
>>> Then you do inferencing within and generate behaviors from the aforesaid representation.
>>> ~PM

-------------------------------------------
AGI Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
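[Editorial note: Aaron's per-link-confidence idea from the thread above can be sketched in a few lines of code. This is a hypothetical illustration only; the `Relation`/`Link` class names, the min-over-links aggregate, and the numeric confidences are my own assumptions, not from any project mentioned in the thread.]

```python
class Link:
    """A labeled, weighted edge from a relation to a node (e.g. --SUBJECT--> Billy)."""
    def __init__(self, role, target, confidence):
        self.role = role                # e.g. "SUBJECT" or "OBJECT"
        self.target = target            # e.g. "Billy"
        self.confidence = confidence    # evidence-based value in [0, 1]

class Relation:
    """A relation node such as 'ate', holding per-link confidences."""
    def __init__(self, name):
        self.name = name
        self.links = {}

    def attach(self, role, target, confidence):
        self.links[role] = Link(role, target, confidence)

    def overall_confidence(self):
        # Crude aggregate: the proposition as a whole is only as
        # credible as its weakest link.
        return min(link.confidence for link in self.links.values())

# ate(Billy, Nicky's_Popsicle) as a semantic network:
ate = Relation("ate")
ate.attach("SUBJECT", "Billy", confidence=0.9)             # strong evidence Billy ate something
ate.attach("OBJECT", "Nicky's_Popsicle", confidence=0.3)   # weak evidence it was Nicky's popsicle

# Unlike a Boolean proposition, doubt can now be localized to a substructure:
weakest = min(ate.links.values(), key=lambda link: link.confidence)
print(weakest.role)               # -> OBJECT
print(ate.overall_confidence())   # -> 0.3
```

In a flat propositional representation the whole fact would carry one truth value; here the OBJECT link alone absorbs the conflicting evidence, which is exactly the advantage Aaron claims for semantic networks.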
