What I found most interesting about the paper is that it was only able to achieve the linguistic capabilities of a 4-year-old. I wonder if this is because of the neural-level dendritic pruning and rewiring that the brain undergoes around that age, after which language capabilities are less reliant on mimesis and more reliant on high-level cognitive processes like syntactic patterns.
It seems that the neural net model may be perpetually limited if the human linguistic system isn't following the patterns recognized by the system.

-- Justin E. Lane

From: Jim Bromer
Sent: Wednesday, November 25, 2015 12:01 PM
To: AGI

Thank you for getting me to look at the paper, Mike. There are some interesting things in it.

I apologize for starting out with this, but: "What we try to emphasize in our work is that the decision processes operated by the central executive are not rule-based process, they are statistical decision processes." Oh, OK. That will do it, all right(?). It is, in my opinion, a naïve emphasis of an issue that is not based on a deeper understanding of the problem. A little like MT's inability to grasp that a computer program (that is pre-programmed) might be programmed to learn new things, which could then be used in a process, as a kind of program, to notice and learn things that the programmer did not specifically incorporate into the program. So the author is taking a narrow technical definition and using it as if it were a broad distinguishing characteristic of all possible AI. For example, the author does not realize that rule-based programming could hypothetically acquire the ability to make simple statistical assessments, and that statistical associations of knowledge are not enough to explain higher-level conceptual integration. (When I talk about conceptual integration I am not limiting myself to definitions that can be attributed to 'authoritative' references by published authors, which can then be used to make narrow technical definitions. I am not saying that conceptual integration has an unbounded definition, just that there is more to the concept than Berkeley blending.) And statistical methods are, of course, rules.

Now I started thinking: hey, maybe I was misinterpreting the author's intent in emphasizing that point. I started feeling a little sheepish, and I started wondering if I was getting paranoid about it or something.
But then, glancing at the same paragraph in the paper, I found this startling comment: "If the central executive was not a statistical tool, the system would not be able to generalize." Well, I guess that might be implicitly right, but, uh, no. Come on. You guys can understand what I am saying: you don't need a statistical tool to generalize. We do want the ability to learn over many trials, or based on many variations, and this would take something that has, at the very least, some of the characteristics of a statistical method, whether done explicitly or implicitly. But remember, his point was that a rule-based system would not be able to do the trick. A rule-based system (using a few minor variations of the traditional model) can learn to generalize.

My comment has something to do with your comment that it can learn to count by first realizing that there is a need to count. (I could not find the exact example that you mentioned, but they did talk about counting and suggested that it was learned implicitly.) The use of syntactic snippets as a method to implicitly teach a computer some simple mechanisms of language (to generalize) is not new, but it seems that by using the neural net as the executive function they were able to get their system a little further.

But there was a reason they used discrete words and word-parts, and things like stacks and other discrete computational methods, to represent phonological memory, attention, and episodic memory in a hybrid: it made an infeasible experiment feasible. There is something special in using discrete methods. They are more efficient than neural nets. And to reverse an old argument, a discrete method can do anything that a neural network can do. (This is especially obvious since neural networks are run on discrete systems.) What is being lost by methods like this is the conquest of the problems of complexity.
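To make the claim concrete, here is a toy sketch, entirely my own invention and nothing from the paper: a purely rule-based learner that induces suffix-rewrite rules from example word pairs and then applies them to words it has never seen. The majority-vote tally is "statistical" in the loose sense, yet every step is an explicit, deterministic rule, so it also illustrates my point that statistical methods are themselves rules.

```python
from collections import Counter, defaultdict

def shared_prefix_len(a, b):
    """Length of the longest common prefix of two strings."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return i

def induce_rules(examples):
    """Tally the suffix rewrite seen in each (base, inflected) pair,
    keyed on the base word's final letter; keep the majority rewrite."""
    tallies = defaultdict(Counter)
    for base, inflected in examples:
        i = shared_prefix_len(base, inflected)
        tallies[base[-1]][(base[i:], inflected[i:])] += 1
    return {k: c.most_common(1)[0][0] for k, c in tallies.items()}

def apply_rules(rules, word, default=("", "s")):
    """Apply the learned rewrite for the word's final letter (or a default)."""
    old, new = rules.get(word[-1], default)
    stem = word[: -len(old)] if old else word
    return stem + new

rules = induce_rules([("dog", "dogs"), ("cat", "cats"),
                      ("city", "cities"), ("baby", "babies")])
print(apply_rules(rules, "pony"))  # -> ponies, a rule generalized to an unseen word
```

The system never "memorized" ponies; it generalized the y -> ies rewrite learned from city and baby, using nothing but discrete rules.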
Jim Bromer

On Tue, Nov 24, 2015 at 5:22 PM, Mike Archbold <[email protected]> wrote:
> I think there are a lot of interesting things about this paper. Read
> this little abstract from the "discussion" which follows some
> examples. They are referring to the examples:
>
> -----------------------
> the first example involves counting skills, ability to compare small
> numbers, ability to associate the words "your friend" to a known person,
> ability to retrieve information about her age from the LTM, ability to
> use personal pronouns. The system is able to learn how to answer this
> question through a rewarding procedure, and to generalize the acquired
> knowledge to similar questions involving different people with
> different ages. In the second example ("how many games did you play?")
> the system is able to retrieve the three games from LTM and to count
> them. It is important to point out that our model does not include a
> specialized structure for counting, or a specialized structure for
> number comparison, or a specialized structure for mapping names into
> personal pronouns...
>
> All its abilities arise from a relatively small set of mental actions
> that are compatible with psychological findings.
> -----------------------
>
> What I find interesting about this is that they didn't just teach the
> system to count, which I suppose is not that momentous in this age.
> Their system FIRST figured out there was such a need to learn how to
> count, THEN it learned how to count. So it seems to have developed a
> skill based upon need.
>
> On 11/15/15, Jim Bromer <[email protected]> wrote:
>> I feel that a symbolic approach would be easier to start with and it
>> could be feasible with better insight and some stronger methods. I
>> do, however, also feel that (what I think is) a gated recurrent
>> artificial neural network with n-space mapping (or bus-state mapping)
>> could be made to work, but this would in essence be very similar to a
>> hybrid approach.
>>
>> Jim Bromer
>>
>> On Sat, Nov 14, 2015 at 9:57 PM, Ben Goertzel <[email protected]> wrote:
>>
>>> The paper is here... http://arxiv.org/abs/1506.03229
>>>
>>> Sensationalist media article here:
>>> http://www.iflscience.com/technology/scientists-create-artificial-system-capable-learning-human-language
>>>
>>> This is from Angelo Cangelosi (among others), who works with the iCub
>>> robot and gave a keynote at AGI-12 at Oxford...
>>>
>>> It's very good stuff, but unlike what that news article says, this is
>>> not the first time automated response-generation has been done w/
>>> neural nets... I recall a paper by some Russian dude giving similar
>>> results in the "Artificial Brains" special issue of Neurocomputing
>>> that Hugo DeGaris and I co-edited some years ago...
>>>
>>> What distinguishes this work is more the sophistication of the
>>> underlying cognitive architecture... maybe it works better than prior
>>> NNs trained for dialogue-response or maybe it doesn't; careful
>>> comparison isn't given (understandably -- there is no standard test
>>> corpus for this stuff, and prior researchers mostly didn't open their
>>> code)... But the cognitive architecture is very carefully constructed
>>> in a psychologically realistic way; combined with the interesting
>>> practical results, this is pretty nifty...
>>>
>>> The training method is interesting: incrementally feeding the system
>>> facts of increasing complexity, interacting with it along the way, and
>>> letting it build up its knowledge bit by bit. A couple weeks ago I
>>> talked to a Russian company at RobotWorld in Korea who was training a
>>> Russian NLP dialogue system in a similar way... (again with those
>>> Russians!!)
>>>
>>> Note that with this method, the system can respond to questions
>>> involving the word "dad" without really knowing what a "dad" is (e.g.
>>> without knowing that a dad is a human or is older than a child, etc.).
This is just
>>> fine, and people can do this too. But we should avoid assuming that
>>> just because it gives responses that, if heard from a human, would
>>> result from a certain sort of understanding, the system is
>>> demonstrating that same sort of understanding. This system is building
>>> up question-response patterns from the data fed into it, and then
>>> performing some generalization. The AI question is whether the kind of
>>> generalization it is performing is really the right kind to support
>>> generally intelligent cognition.
>>>
>>> My thought is that the kind of processing their network is doing
>>> actually plays only a minor supporting role in human question-answering
>>> and dialogue behavior. They are using a somewhat realistic cognitive
>>> architecture for reactive processing, and a somewhat realistic neural
>>> learning mechanism -- but the way the learning mechanism is used within
>>> the architecture for processing language is not very much like the way
>>> the brain processes language. The consequence of this difference is
>>> that their system is not really forming the kinds of abstractions that
>>> a human mind (even a child's mind) automatically forms when processing
>>> this kind of linguistic information... The result of this is that the
>>> kinds of question-answering, question-asking, concept formation, etc.
>>> their system can do will not actually resemble those of a human child,
>>> even though their system's answer-generation process may, under certain
>>> restrictions, give results resembling those you get from a human
>>> child...
>>>
>>> These observations do not really contradict anything they say in the
>>> paper, at least upon my quick read...
>>>
>>> An interesting step, anyway...
>>>
>>> --
>>> Ben Goertzel, PhD
>>> http://goertzel.org
>>>
>>> "The reasonable man adapts himself to the world: the unreasonable one
>>> persists in trying to adapt the world to himself.
Therefore all progress
>>> depends on the unreasonable man." -- George Bernard Shaw
>>>
>>> *AGI* | Archives <https://www.listbox.com/member/archive/303/=now> |
>>> Modify Your Subscription <https://www.listbox.com/member/?&> |
>>> Powered by Listbox <http://www.listbox.com>
