On Sun, 02 Nov 2008 11:30:37 -0800, Christopher D. Green wrote:

Mike Palij wrote:
>> It is possible that neural networks were too mathematical for most
>> psychologists but, if so, avoiding this area would have prevented them
>> from realizing that neural network models are inadequate in capturing
>> meaningful aspects of cognitive processing.

>You may be right that parallel distributed connectionist models (a less
>tendentious name for "neural networks")
"Tendentious"? Not to argue the point, but considering that one can trace these models back to McCulloch & Pitts (1943) and other models of the nervous system (and even earlier connectionist conceptions), couldn't one claim that "neural network" has priority in describing these types of models?

>were mathematically too difficult for most psychologists,
>but that only goes to show just how mathematically inept
>most psychologists are. There is nothing in connectionist
>models that can't be understood after a single year of
>basic calculus.

Actually, wouldn't it be true that one could understand connectionist models after a review of matrix algebra? Correct me if I'm wrong, but isn't the calculus used mainly for finding a minimum or maximum solution for the weight matrix?

>(How does a "mere historian" like me know? I programmed
>a bunch of them from scratch for my doctoral dissertation. Most people
>these days don't have to actually program them (any more than they have
>to write their own stats programs). There are a bunch of software
>packages that reduce the whole process to pointing and clicking.)

I assume you've heard that portable fMRI machines will be available in the near future. I imagine that neuroimaging may become as common later this century as the use of the PC became in the late 20th century.

>On the other side, there is a fair bit of mathematics implicit in the
>software that "massages" brain scan data, turning it into interpretable
>images, so I would think that anyone who could handle that software
>would be able to handle connectionist software.
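Incidentally, for anyone who wants to see how little math is actually involved: here is a minimal toy sketch of a one-layer linear associator trained with the delta rule (my own illustration, not any published model; the data and learning rate are made up). The forward pass is pure matrix algebra, and the calculus only enters as the gradient used to minimize squared error on the weights:

```python
# One-layer linear associator trained with the delta rule.
# Forward pass: y = w . x  (matrix algebra -- a dot product).
# Learning: w <- w + lr * (t - y) * x, i.e., gradient descent on the
# squared error E = 0.5 * (t - y)^2 -- the only calculus involved.

def train(patterns, lr=0.1, epochs=200):
    """patterns: list of (input_vector, target) pairs."""
    n = len(patterns[0][0])
    w = [0.0] * n                                             # weights start at zero
    for _ in range(epochs):
        for x, t in patterns:
            y = sum(wi * xi for wi, xi in zip(w, x))          # dot product
            err = t - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]  # delta rule
    return w

# Learn the mapping x -> 2*x1 + 3*x2 from three consistent examples.
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], 3.0), ([1.0, 1.0], 5.0)]
w = train(data)
print([round(wi, 2) for wi in w])   # converges to roughly [2.0, 3.0]
```

The point being: the update rule looks forbidding when derived, but running it is just repeated multiply-and-add on a weight vector (or matrix, for multiple outputs).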
I'm not sure whether you're referring to my comments about Donders' method or not, but here is one place where the presentation of Donders' subtractive method is rather uncritical:
http://www.ru.nl/neuroimaging/general/biography_fc_donders/

The problems with Donders' subtractive method (i.e., its assumptions of strictly serial stages of processing vs. parallel processing, of one stage completing before another begins vs. overlapping stages, and of an added stage increasing RT by a constant amount vs. a variable amount that overlaps with other stages) may or may not undermine neuroimaging analysis, but I have reservations about the general analytic approach. "Neurometrics" was an earlier attempt to measure brain activity which, I believe, ran into significant problems of analysis.

>Annette seems to imply that connectionism went away, but that certainly
>isn't my impression. One may not find it much in traditional psychology
>departments, but that's only because most of the interesting
>computational stuff is going on in cognitive science and artificial
>intelligence departments. (Although I was trained in a psychology
>department, I was more interested in jobs in cognitive science programs
>when I graduated. That seemed to be where the most interesting cognitive
>work -- whether psychological, linguistic, AI-ish, or philosophical --
>was happening.) Mike is correct that issues of "embodiment" have come to
>play a greater and greater role in computational cognitive science. As
>Rodney Brooks (I think) once put it, the best model of the world is the
>world itself (i.e., why make up a detailed cognitive model of the world
>when one can simply scan the world itself for the information one
>requires?). Brooks overstates the case, of course, but he was able to
>cut through a whole whack of pseudo-problems with this "Occamish"
>attitude.
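Going back to Donders for a moment, the fragility of the subtractive method is easy to show with a toy example (all the stage durations below are invented, purely for illustration): subtraction recovers the inserted stage's duration only under "pure insertion", i.e., strictly serial, non-overlapping stages.

```python
# Donders' subtractive method: estimate a stage's duration by subtracting
# the RT of a task lacking the stage from the RT of a task containing it.
# This assumes "pure insertion": strictly serial stages, each completing
# before the next begins, with insertion leaving the other stages unchanged.
# (Hypothetical stage durations, in ms, for illustration only.)

perceive, decide, respond = 150, 100, 120

rt_simple = perceive + respond            # detection task: no decision stage
rt_choice = perceive + decide + respond   # choice task: decision inserted

print(rt_choice - rt_simple)              # 100 -- recovers the decision stage

# But if processing cascades and the decision overlaps the response
# stage by, say, 60 ms, the same subtraction misestimates the stage:
rt_choice_overlap = perceive + decide + respond - 60
print(rt_choice_overlap - rt_simple)      # 40 -- not the true 100
```

The same subtraction logic underlies a lot of neuroimaging contrasts (task minus baseline), which is why the serial-stage assumptions matter there too.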
>There is a whole stream of robotics now that jettisons complex
>mental models in favor of a few simple rules of basic bodily interaction
>(and often gets better results).

However, as Ekbia makes clear in his chapter 8, "Cog: Neorobotic AI", Brooks's simplified research strategy worked well when he was concerned with creating insect robots with "insect-level intelligence". Brooks apparently wanted to take an incremental approach to modeling intelligence, working up from simpler "organism-robots" to "human robots" as represented by the neorobot Cog. Unfortunately, jumping from "insect robots" to "human robots" may have been a bad strategy because it again raises issues that could be avoided with the simpler insect robots, such as the need for internal representations (Ekbia points out that Brooks has been inconsistent in his position on the role of representation in neorobots like Cog; see p. 264 in Ekbia). Brooks appears to have come back to the problems that have plagued traditional AI researchers.

>In response to the more general question -- what ever happened to
>traditional "infer-from-behavior" cognitive psychology? -- I think the
>answer is that it was overtaken by a bunch of new technologies
>(neuroscientific, computational, and whathaveyou) that rendered it a
>fairly primitive methodology to continue using on a widespread basis.

Perhaps another way of putting this is that radical behaviorism, in the form that Skinner advocated, essentially made the organism a "black box", and the less said about the contents of the box, the better: better to observe the regularities between the input to the black box and its output. The development of information theory (i.e., the mathematical theory of communication) helped to transform the black box into a "white box", that is, a theoretical model of the steps or stages that intervene between input and output. More complex and sophisticated formulations of these stages (as Chris says: neuroscientific, computational, etc.)
can be thought of as constituting what cognitive psychology has been about for the past 50 or so years. Recent work appears to extend the focus from the black/white box to the context in which it exists and how it interacts with that context. It might be just me, but it seems that J.J. Gibson may be becoming increasingly relevant to cognitive psychology. Aren't Gibsonians fond of saying "it's not what's inside your head that's important, it's what your head is inside of"? Or something like that.

-Mike Palij
New York University
[EMAIL PROTECTED]

---
To make changes to your subscription contact:

Bill Southerly ([EMAIL PROTECTED])
