Ben wrote:

|This paper indicates that Jeff Hawkins' neuroscience theory is gradually
|converging on ideas more similar to those in Novamente, via the common
|language of probability theory.
|
|http://www.stanford.edu/~dil/RNI/DilJeffTechReport.pdf
The paper is very interesting, but their model may be too complicated to analyse. I'm still beginning to explore the simple feedforward-with-feedback approach for static object recognition (see the first sketch at the end of this message). Does anyone know of similar projects, and whether they succeeded or failed? It seems that I should not be the only one exploring it...

|Of course there are some oversimplifications that need to be relaxed in
|further research (Markovicity, for one; and the absence of heterarchical
|connections, for another). But the basic approach seems to make sense to
|me.

Heterarchical connections will be an important element in all hierarchical models, but we have yet to make the basic model work first.

|As I've said before (see my essay on "Hebbian Logic"), I believe that
|conditional-probability-based inference on the neural-cluster level follows
|in a pretty direct way from Hebbian learning on the neuronal level --- and
|there is a long, mostly not yet understood story in the way neural-cluster
|properties tune the parameters of neuron-level Hebbian learning to make this
|happen. But I agree with Jeff and Dileep that one can study the conditional
|probability dynamics in a neural context without getting down to the
|Hebbian-learning level of granularity.

Your Hebbian Logic network seems to be an associative one (not pyramidal or hierarchical). The dynamics of such networks are probably more complicated than those of feedforward ones, and you may have problems dealing with and organizing a large number of nodes. (The second sketch at the end of this message illustrates the Hebbian-to-conditional-probability idea.)

|This requires learning of what Gerald Edelman calls "neural maps." It seems
|to me that learning nontrivial maps of this sort requires, to use
|mathematical vocabulary, the construction of moderately complex predicates
|involving both perception and action variables. It is for the formation of
|these predicates that Edelman proposed the "neural darwinist"
|quasi-evolutionary-programming neural learning mechanism. These are
|probabilistic predicates that involve (among others) the same probabilistic
|variables that are isolated in their paper. But I'll be curious what
|learning mechanism they will propose when their research gets to the
|learning of nontrivial perception-action maps (let alone cognition!). Simple
|manipulations of conditional probabilities won't do the trick anymore. There
|seems to be nothing in Jeff Hawkins' recent book addressing this problem.
|Perhaps they'll rediscover their own version of Edelmanian evolutionary
|learning, or invent something else analogous....

My library doesn't have Jeff's book yet. As I remember, Edelman's idea is also associative in nature (his neurons are not organized hierarchically). His approach seems more akin to evolving "primordial" networks than cortex-like areas (see the third sketch at the end of this message).

YKY
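First sketch: a minimal feedforward-with-feedback recognition loop in Python. Everything here (the layer sizes, the tanh/softmax choices, the damping scheme, reusing the feedforward weights for top-down prediction) is an illustrative assumption, not something taken from the Hawkins/Dileep paper:

import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 64, 16, 4                   # input patch, hidden features, object classes
W1 = rng.normal(scale=0.1, size=(N_HID, N_IN))   # bottom-up weights, input -> hidden
W2 = rng.normal(scale=0.1, size=(N_OUT, N_HID))  # bottom-up weights, hidden -> classes

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def recognize(x, n_iters=10, damping=0.5):
    """Alternate bottom-up and top-down passes until the hidden state settles."""
    h = np.tanh(W1 @ x)                          # initial bottom-up estimate
    for _ in range(n_iters):
        y = softmax(W2 @ h)                      # bottom-up: class beliefs from hidden state
        h_top = np.tanh(W2.T @ y)                # top-down: what those beliefs predict for h
        h = (1 - damping) * np.tanh(W1 @ x) + damping * h_top  # blend evidence with prediction
    return softmax(W2 @ h)

x = rng.random(N_IN)                             # a stand-in "static image" patch
print(recognize(x))                              # class probabilities after settling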
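Second sketch: the point that a normalized Hebbian weight behaves like a conditional probability. If a synapse simply counts co-activations, and we divide by how often the presynaptic cluster fired, the weight estimates P(B active | A active). The binary-unit setup and the counting rule are assumptions for illustration:

import numpy as np

rng = np.random.default_rng(1)

# Correlated binary activity for two "neural clusters" A and B:
# B fires with probability 0.8 when A fires, 0.1 otherwise.
a = (rng.random(10_000) < 0.3).astype(int)
b = np.where(a == 1, rng.random(a.size) < 0.8, rng.random(a.size) < 0.1).astype(int)

co_count = 0     # Hebbian trace: increments when A and B fire together
pre_count = 0    # how often the presynaptic cluster A fired at all
for a_t, b_t in zip(a, b):
    pre_count += a_t
    co_count += a_t * b_t          # "fire together, wire together"

w = co_count / pre_count           # normalized Hebbian weight
print(f"normalized Hebbian weight: {w:.3f} (generative P(B|A) was 0.8)")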
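Third sketch: a toy "neural darwinist" loop that evolves a perception-action predicate by mutation and selection. The predicate encoding (a thresholded linear rule over binary perception and action variables) and the mutate-select scheme are my own illustrative assumptions, not Edelman's actual mechanism:

import numpy as np

rng = np.random.default_rng(2)

N_VARS = 6                                 # say, 4 perception variables + 2 action variables
POP, GENS = 30, 40

def target_predicate(x):
    """Hidden ground truth the population must rediscover."""
    return bool(x[0] and x[3] or x[5])     # mixes perception (x0, x3) with action (x5)

samples = rng.integers(0, 2, size=(200, N_VARS))
labels = np.array([target_predicate(s) for s in samples])

def fitness(w):
    """Fraction of samples on which the thresholded linear predicate agrees."""
    preds = (samples @ w[:-1]) > w[-1]
    return (preds == labels).mean()

pop = rng.normal(size=(POP, N_VARS + 1))   # each individual: weights plus a threshold
for _ in range(GENS):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]                    # keep the fitter half
    children = parents + rng.normal(scale=0.3, size=parents.shape)   # mutate copies
    pop = np.vstack([parents, children])

print("best fitness:", max(fitness(w) for w in pop))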
