Vladimir Nesov wrote:
> Here's an impressive movie:
> http://video.google.com/videoplay?docid=-2874207418572601262
> Henry Markram, EPFL/BlueBrain: The Emergence of Intelligence in the
> Neocortical Microcircuit

Good link. Thanks, Vladimir.
A mini-review:
1) A positive comment: that is a *huge* amount of work they are doing.
2) Even when he gets to the stage of using the SGI machine to visualize
the firing pattern in a column, he confesses that he is not sure what
the visualization is for ("maybe just for fun," he says). In the same
way that he is not sure what good it does to see the pretty patterns, I
also wonder what good it does to simply know how every neuron is firing:
will he ever really deduce the *function* of the column from such
low-level circuit information?
3) I wonder about the accuracy: he is a little vague at times about how
much of the model is an exact reproduction of the circuit and how much
is statistical extrapolation. So, for example, if I knew the complete
circuit for a CPU chip, except that 5% of all the connections in my
model circuit were a statistical guess, would my copy of the circuit
actually work? Would it work well enough for me to deduce the function
of the CPU? I doubt it.
4) His attempt to shift the emphasis from spikes to dendritic activity
is important, I think. I am not sure what that idea will lead to, but I
have a feeling it could be useful.
5) I was very interested to hear that he looked at the connections in a
real brain circuit, then went back four hours later and discovered that
the connections were all different. (AND then when he tried to publish
this in Science they were not interested!)
6) My biggest gripe: towards the end he starts talking about "injecting
intelligence into the circuit" and representing patterns, or
representing the world as analog copies of 3D objects or spaces. This
is where I want to throw my hands up in horror and ask him to stop:
typically for a neuroscientist, he embellishes some really good science
with a sudden burst of utterly naive, useless speculation about
"cognition". Everything he said at that point was meaningless.
But of course he HAD to say something like that, because this was a
Conference on Cognitive Computing!

Except that it wasn't cognitive computing: the "cognitive" bit of this
talk was a piece of silly speculation that spoilt some otherwise
interesting experimental neuroscience.
So: it confirms my standard perception of neuroscience: interesting
stuff, right up to the point where the two C words (Cognition and
Consciousness) suddenly make an appearance. After that, it's a complete
waste of time.
Richard Loosemore
-----
This list is sponsored by AGIRI: http://www.agiri.org/email