Hi all,

This paper suggests that Jeff Hawkins' neuroscience theory is gradually
converging on ideas similar to those in Novamente, via the common language of
probability theory.

http://www.stanford.edu/~dil/RNI/DilJeffTechReport.pdf

Of course there are some oversimplifications that need to be relaxed in
further research (the Markov assumption, for one; and the absence of
heterarchical connections, for another). But the basic approach seems to make
sense to me.

As I've said before (see my essay on "Hebbian Logic"), I believe that
conditional-probability-based inference on the neural-cluster level follows
fairly directly from Hebbian learning on the neuronal level --- and there is a
long, mostly not-yet-understood story about how neural-cluster properties tune
the parameters of neuron-level Hebbian learning to make this happen. But I
agree with Jeff and Dileep that one can study the conditional probability
dynamics in a neural context without getting down to the Hebbian-learning
level of granularity.
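The core of that claim can be illustrated with a toy simulation (my own
sketch, not taken from the paper or from the Hebbian Logic essay; the
probabilities and variable names are invented for illustration): a Hebbian
weight that accumulates co-activations of two binary "clusters" A and B,
normalized by how often A fired, converges on the conditional probability
P(B=1 | A=1).

```python
import random

random.seed(0)

# Invented ground-truth statistics for the toy world:
p_a = 0.4           # P(A fires)
p_b_given_a = 0.8   # P(B fires | A fires)
p_b_given_not_a = 0.1

w = 0.0        # Hebbian "weight": accumulated co-activation of A and B
a_count = 0    # how often the presynaptic cluster A fired (normalizer)

for _ in range(100_000):
    a = 1 if random.random() < p_a else 0
    b_prob = p_b_given_a if a else p_b_given_not_a
    b = 1 if random.random() < b_prob else 0
    # Hebbian update: strengthen only when pre- and post-units co-fire
    w += a * b
    a_count += a

estimate = w / a_count  # normalized Hebbian weight
print(round(estimate, 2))  # close to 0.8, the true P(B=1 | A=1)
```

The point is only that co-occurrence counting plus normalization is already a
conditional-probability estimator; the hard (and open) part is how real
neural-cluster dynamics implement the normalization.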

Where things will get interesting is when they try to extend the framework
described in the paper to model a system that:

* perceives visually in the manner the paper describes
* responds via actuators to the visual stimuli it perceives, in a way that
requires it to do some object recognition on those stimuli

This requires learning of what Gerald Edelman calls "neural maps." It seems
to me that learning nontrivial maps of this sort requires, to use
mathematical vocabulary, the construction of moderately complex predicates
involving both perception and action variables. It is for the formation of
these predicates that Edelman proposed the "neural darwinist"
quasi-evolutionary-programming neural learning mechanism. These are
probabilistic predicates that involve (among others) the same probabilistic
variables that are isolated in their paper. But I'll be curious what learning
mechanism they propose when their research gets to the learning of nontrivial
perception-action maps (let alone cognition!). Simple manipulations of
conditional probabilities won't do the trick anymore. There seems to be
nothing in Jeff Hawkins' recent book addressing this problem. Perhaps they'll
rediscover some version of Edelmanian evolutionary learning, or invent
something else analogous....
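To make the "quasi-evolutionary-programming" idea concrete, here is a
deliberately crude sketch (entirely my own illustration, not Edelman's actual
mechanism; the hidden reward rule and all names are invented): an evolutionary
search over conjunctive predicates that couple binary perception and action
variables, selected for how well they predict reward.

```python
import random

random.seed(1)

N_PERCEPT, N_ACT = 4, 3  # binary perception and action variables

def world(percept, action):
    # Invented hidden rule: reward iff percept[1] AND action[0]
    return 1 if (percept[1] and action[0]) else 0

def sample_episode():
    percept = [random.randint(0, 1) for _ in range(N_PERCEPT)]
    action = [random.randint(0, 1) for _ in range(N_ACT)]
    return percept, action, world(percept, action)

DATA = [sample_episode() for _ in range(300)]

# A predicate is a conjunction of required-true literals: ('p', i) or ('a', j).
def evaluate(pred, percept, action):
    return all((percept[i] if kind == 'p' else action[i]) for kind, i in pred)

def fitness(pred):
    # Fraction of episodes where the predicate's truth value matches reward
    return sum(evaluate(pred, p, a) == r for p, a, r in DATA) / len(DATA)

def random_literal():
    if random.random() < 0.5:
        return ('p', random.randrange(N_PERCEPT))
    return ('a', random.randrange(N_ACT))

def mutate(pred):
    pred = set(pred)
    if pred and random.random() < 0.5:
        pred.discard(random.choice(sorted(pred)))  # drop a literal
    else:
        pred.add(random_literal())                 # add a literal
    return frozenset(pred)

# Evolve: keep the fittest predicates, refill with their mutants
pop = [frozenset({random_literal()}) for _ in range(30)]
for gen in range(40):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(pop, key=fitness)
print(sorted(best), round(fitness(best), 2))
```

Even this stripped-down version shows why plain conditional-probability
bookkeeping is not enough here: the search is over the *structure* of
perception-action predicates, not just over the strengths attached to fixed
variables.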

Anyway, it is nice to see some convergence between neuroscience ideas and AI
ideas, with probability theory as the unifying language ;-)

-- Ben G

