Hi guys,
thanks for the insightful discussion.
I totally agree with cogmission when he says:
To me they are naively stabbing in the dark, with no theoretical
framework with which to define capacity or storage. They also mention
the 10 to 20% probability of activation of any neuron; and attribute
this to a functional "unreliability" which is now "remedied" by their
discovery of the existence of 26 discrete synaptic sizes which now
(they think) hold information that is independent of dendritic
activation. This seems to me to be conjecture with no correlation to
an overall theory.
The original paper
(http://elifesciences.org/content/elife/4/e10778.full.pdf) may give you
less sensationalistic info. Also, the authors are expressly referring to
the hippocampus (which is, simplifying a lot and among other things, the
"hard drive"[1] of the brain), while the HTM model was instead created
trying to follow the structure of our neocortex (which includes the
structure of pyramidal cells, found, to be fair, in the hippocampus too).
In the hippocampus this seems very important (the hippocampus is
strictly linked with the amygdala/limbic system), also because different
synaptic strengths mean a hierarchy in memory: it is well studied how
the emotion a piece of information generates in us affects how well we
retain it.
I totally disagree with this instead:
I couldn't tell you whether their discovery of 26 discrete synaptic
sizes in the hippocampus is "useful" and is an important distinction
which could add utility to HTM theory or not - that would be left up
to the neuroscientifically inclined here; but I really doubt it is
even all that useful? There is a lot of biological detail which is
overtly and purposefully left out of HTM theory; either because it is
only a necessity in an organic context, or it really doesn't convey
any significant information within the translation to computer
software. This to me would be one of those "details".
For two reasons. One is almost philosophical and thus not that useful:
nature is lazy, and the 26 possible synaptic sizes (at least for the
hippocampus, and IF that is confirmed; I doubt there are such precise
numbers) would be very expensive to maintain on a phylogenetic scale.
The second is just about fine modulation and the complexity of the
signal (adding 26+ possible "states" per synapse adds an enormous
number of different "combinations").
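To put rough numbers on that second point (a back-of-envelope sketch, not a claim about real synapses, and assuming, unrealistically, that each size is equally likely and perfectly readable): a synapse with N distinguishable sizes could in principle carry log2(N) bits, and the number of joint configurations of k such synapses grows as N**k:

```python
import math

# Idealized information content per synapse with N distinguishable,
# equally likely, perfectly readable sizes (a toy assumption).
def bits_per_synapse(n_states):
    return math.log2(n_states)

print(bits_per_synapse(2))    # binary synapse: 1.0 bit
print(bits_per_synapse(26))   # 26-state synapse: ~4.7 bits

# For k synapses the state space is N**k, so graded sizes multiply
# the number of distinguishable configurations enormously:
k = 10
print(26**k / 2**k)  # 13**10, roughly 1.4e11 times more configurations
```

Of course real synapses are noisy, so the usable capacity is far lower; the point is only how fast the combinatorics grow.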
This actually could help with understanding "mood", which is something
very complex in my opinion[2] and requires a level of abstraction only
higher mammals can make use of. I'll try to give an example (sorry, I'm
not a native English speaker): let's imagine two sentences that I say
to a girl on the street right after approaching her[3].
1) Hey, may I offer you a drink?
2) Hey, may I offer you a drink?
I imagine the Sparse Distributed Representations (SDRs) of these two
sentences are the same (there is also no previous context, because I've
just met the girl). So, a system based on the Cortical.io method (if
I'm not wrong, which could easily be the case) would probably
understand the question literally: am I able, or allowed, to offer her
a drink? The answer would then be something like "Yes (it is within
your possibilities / you are allowed to do that by law)".
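To illustrate the point (with a toy hash-based encoder of my own invention, NOT Cortical.io's actual retina algorithm): any deterministic, context-free encoder must map identical sentences to identical SDRs, so whatever distinguishes the two readings has to come from somewhere outside the text:

```python
import hashlib

# Toy word-level SDR encoder: each word deterministically activates a
# few bits out of a large sparse bit space. This is an illustration
# only, not how Cortical.io actually builds its semantic fingerprints.
N_BITS = 2048
BITS_PER_WORD = 8

def encode(sentence):
    active = set()
    for word in sentence.lower().split():
        h = hashlib.sha256(word.encode()).digest()
        for i in range(BITS_PER_WORD):
            # take 2-byte chunks of the hash as bit indices
            active.add(int.from_bytes(h[i * 2:i * 2 + 2], "big") % N_BITS)
    return frozenset(active)

s1 = encode("Hey, may I offer you a drink?")
s2 = encode("Hey, may I offer you a drink?")
print(s1 == s2)  # True: identical text -> identical SDR
```

Since the two fingerprints are bit-for-bit identical, the difference in meaning can only be carried by context (or, in the brain, by neuromodulatory state).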
But, with her mammalian brain, she looks at me. She thinks I'm cute,
hopefully. Her pupils dilate under the effect of dopamine released in
the amygdala. Maybe her heart rate rises a bit, she starts to sweat
profusely, her adrenal glands release cortisol, etc. (you know, all the
sympathetic nervous system effects). These newly released chemicals
make certain areas easier to depolarize (see EPSP, etc.) and allow her
to understand not just the basic question but also the context I'm
trying to create and the implications behind it (which is literally:
"do you think we could be a couple/mates in the near future?"). What
matters to me, though, is the effect of the catecholamines in her
neocortex. Notoriously, that gets inhibited (see IPSP, etc.) during the
"falling in love" experience (:D). That is why she replies "Of course!".
Just kidding, but... I think that the diameter/size of the synapses,
along with other factors (receptors, general inhibition/excitation,
etc., which I summarized in the first part of this post ages ago:
http://lists.numenta.org/pipermail/nupic_lists.numenta.org/2015-December/012441.html),
will be essential to grasp the fine possibilities of the human brain.
Cheers!
Raf
[1] : Personally I think that in a distant future, when computational
power is very cheap, our computers will use a file system that has a
lot more to do with neural networks than with raw bit storage. The
reason? A ridiculous amount of data can be stored in very little space,
with other nice perks (fast indexing, some level of abstraction, etc.).
As a pure fun thing to do, in the past I intentionally overfitted an
MLP to recreate, pixel by pixel, a 3.5 MB photo. After this
overfitting, the whole net was less than 60-something kB and it could
easily "recall" the whole pic (and I could probably have shrunk it even
more). That is the advantage of having lots of "nodes" (in this case,
neurons in a hidden layer) in combinatorial math (which is really what
happens in the hippocampus too): the more neurons/nodes, the easier it
gets to overfit/remember info... and in much less "physical" space!
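For the curious, here is a minimal sketch of that experiment (a toy reconstruction in plain NumPy, not my original code; a tiny synthetic 16x16 "image" stands in for the 3.5 MB photo): a small MLP is trained by gradient descent to map (x, y) pixel coordinates to pixel values, so the weights end up memorizing the image:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 16
image = rng.random((H, W))  # synthetic stand-in for a real photo

# Inputs: normalized (x, y) coordinates; targets: pixel values.
xs, ys = np.meshgrid(np.arange(W) / W, np.arange(H) / H)
X = np.stack([xs.ravel(), ys.ravel()], axis=1)  # (256, 2)
y = image.ravel()[:, None]                      # (256, 1)

# One hidden layer; more hidden units make memorization easier.
HIDDEN = 64
W1 = rng.normal(0, 0.3, (2, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, 1)); b2 = np.zeros(1)

def forward():
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward()
mse_before = float(((out0 - y) ** 2).mean())

lr = 0.01
for _ in range(3000):
    h, out = forward()
    err = out - y  # gradient of 0.5 * MSE w.r.t. the output
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)  # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, out1 = forward()
mse_after = float(((out1 - y) ** 2).mean())
print(f"MSE: {mse_before:.4f} -> {mse_after:.4f}")
```

The "storage" here is just the weight matrices, which for a big enough image can be far smaller than the raw pixel data, exactly the trade the footnote describes.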
[2] : In fact, people with certain brain development abnormalities,
such as autism spectrum disorder, may struggle with this (although this
is still an open field of research, so nothing in this sentence should
be taken as fact). The same goes, in a certain way, for metaphors: the
sentence "This packet is a bomb!" can, with no context, mean two
different things ("Wow, what a cool package!" or "Hey! This is a real
exploding bomb!") while having the same SDR.
[3] : Just for clarification: it's just an example :)
On 24/01/2016 19:03, cogmission (David Ray) wrote:
Hi Laurent,
Thank you for sharing this link. I am not a neuroscientist, but here's
my take on the article. First, the Salk scientists construe the
variation of discrete states and sizes of synapses in the hippocampus
to represent "bits" of information, which is pure speculation. Because
they are not working with any theoretical framework which defines the
way information is represented by neural structures - they conclude
that every single discrete variation represents a bit of information;
then they extrapolate across the total number of estimated synapses to
derive their figures.
To me they are naively stabbing in the dark, with no theoretical
framework with which to define capacity or storage. They also mention
the 10 to 20% probability of activation of any neuron; and attribute
this to a functional "unreliability" which is now "remedied" by their
discovery of the existence of 26 discrete synaptic sizes which now
(they think) hold information that is independent of dendritic
activation. This seems to me to be conjecture with no correlation to
an overall theory.
In my estimation, and according to what I know about HTM theory, this
just simply is not how neural connectivity conveys information, and
just simply is not how the brain works. They ignore the significance
of sparsity and SDRs (sparse distributed representations); attributing
it to "unreliability" which to me underlines the fact that they aren't
really working with an understanding with which to attribute their
findings as a contribution to an overall theory.
I couldn't tell you whether their discovery of 26 discrete synaptic
sizes in the hippocampus is "useful" and is an important distinction
which could add utility to HTM theory or not - that would be left up
to the neuroscientifically inclined here; but I really doubt it is
even all that useful? There is a lot of biological detail which is
overtly and purposefully left out of HTM theory; either because it is
only a necessity in an organic context, or it really doesn't convey
any significant information within the translation to computer
software. This to me would be one of those "details".
Anyway, thank you again for sharing this link and contributing new
information to the group; it helps our community to thrive that we
have people committed to making sure we stay relevant and current.
Let's see what others think about this particular article? :-)
Cheers,
David
On Sun, Jan 24, 2016 at 1:48 AM, Laurent Julliard
<[email protected]> wrote:
Guys,
I came across this article
(http://www.kurzweilai.net/memory-capacity-of-brain-is-10-times-more-than-previously-thought)
and I was wondering if what they discovered about synapse behavior
could either improve in any way the current model of synapses in
HTM and/or confirm the way synapses are potentiated today through
the management of their permanence value?
--
Laurent Julliard
Twitter @lrjay
--
With kind regards,
David Ray
Java Solutions Architect
Cortical.io <http://cortical.io/>
Sponsor of: HTM.java <https://github.com/numenta/htm.java>
[email protected]
http://cortical.io
--
Raf
www.madraf.com/algotrading
reply to: [email protected]
skype: algotrading_madraf