Great breakdown, Fergal Byrne :)

Adaptive encoder -> Let's discuss it when we start the implementation (or,
if someone is already working on it, could they explain which of the
brain's scenarios were considered for it?)
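To make the question concrete, here is a minimal sketch of what an "adaptive" scalar encoder could look like (class and parameter names here are illustrative, not NuPIC's actual API): it widens its value range as new data arrives and maps each value to a fixed number of active bits inside the output.

```python
# Illustrative sketch of an adaptive scalar encoder (NOT NuPIC's actual
# implementation): the encoder tracks the min/max of the values seen so
# far and maps each value to w contiguous active bits in an n-bit output.

class AdaptiveScalarEncoderSketch:
    def __init__(self, n=100, w=11):
        self.n = n      # total number of output bits
        self.w = w      # number of active (1) bits per encoding
        self.lo = None  # smallest value seen so far
        self.hi = None  # largest value seen so far

    def encode(self, value):
        # Adapt the range so it covers every value seen so far.
        self.lo = value if self.lo is None else min(self.lo, value)
        self.hi = value if self.hi is None else max(self.hi, value)
        span = (self.hi - self.lo) or 1.0
        # Position of the active window, clamped inside the output.
        start = int((value - self.lo) / span * (self.n - self.w))
        bits = [0] * self.n
        for i in range(start, start + self.w):
            bits[i] = 1
        return bits
```

Similar values map to overlapping windows of active bits, which is the property downstream spatial pooling relies on; the open question above is what the brain-side analogue of the range adaptation would be.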

Thanks
Ramesh Ganesan.



On Wed, Jul 31, 2013 at 8:42 PM, Fergal Byrne
<[email protected]>wrote:

>
>  Hi Ramesh,
>
>  That's extremely interesting, thanks for sharing it. Just a couple of
> comments having admittedly only skimmed the article (please have plenty of
> salt to hand, I'm just trying to provoke a discussion!):
>
>  1. This proposed algorithm is reminiscent of the CLA in that it
> incorporates spatial pattern recognition, learning of sequences of such
> patterns, inhibition to create sparseness, data-driven activation of
> columns, and so on. There are elements corresponding to the SP and TP here,
> albeit mediated via thalamic connections and the division of labour across
> several layers simultaneously, rather than what is used in the CLA
> (implementation) model.
>
>  2. The CLA achieves all of these things using only one layer, albeit by
> bolting on non-cortical algorithmic equipment such as the SP, TP and
> classifier (and indeed I intuit there are ways to "convert" these to more
> cortex-like structures).
>
>  3. A reconciliation of the two theories could involve designating much
> non-cortical CLA functionality (SP, TP, classifier, anomaly detection,
> and my favourite, adaptive sensory encoding) as thalamus-operated, with some
> of the functions currently done in single-layer CLA by "asking it different
> questions" (i.e. changing how many steps ahead it predicts) being replaced by
> multi-layer CLAs, each splitting the work as described, with the thalamus
> used to co-ordinate them.
>
>  4. The two theories are compatible in the sense that in CLA, inhibition,
> sequence selection and prediction are executed using inter-column, in-layer
> direct connections, and single-cell selection for predictive activation
> (again based on direct in-layer activations), whereas thalamic circuitry
> replaces or augments both of these in the paper.
>
>  5. Perhaps the CLA is a simplification which attempts to computationally
> incorporate the thalamic circuits and the function of multiple layers.  The
> necessity for all the "lookup tables" attached to the cells is probably
> evidence that we're not doing a couple of things "naturally" and have
> artefacts which take their place.
>
>  6. The "adaptive encoder" we discussed several weeks ago, which would
> involve a two-way forward and "lookback" circuit instead of the lookup
> tables, could be a model which is similar in many ways to this thalamic
> circuit functionality.
>
>  7. This theory also appears to neatly address several of our other
> conundrums about topography, local/global inhibition, predictive distance
> and so on. Perhaps the thalamus is the seat of these algorithmic choices.
>
>  8. Looking at the anatomy and reading the description (on the Wikipedia
> page), it appears that the thalamus serves as the subcortical Internet, as
> well as the subcortical I/O layer for sensory-motor interfaces.
>
>  9. As Jeff repeats every time, this stuff is hard. Perhaps I simply
> haven't spent the time, but it looks to me that the actual learning
> algorithm described in this paper is even harder to understand and far more
> difficult to believe than the CLA is...
>
>  Regards
>
>  Fergal Byrne
>
>
>
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>
>
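As a footnote to points 1 and 4 above: the inter-column inhibition that creates sparseness is often modelled as a k-winners-take-all step. A minimal illustrative sketch (not the CLA implementation, which adds boosting, local neighbourhoods, and so on):

```python
# Minimal k-winners-take-all sketch of inter-column inhibition
# (illustrative only): given each column's input overlap score, keep the
# k best-matching columns active and suppress the rest, which yields the
# sparse activation the quoted points describe.

def inhibit(overlaps, k):
    """Return the indices of the k columns with the highest overlap."""
    ranked = sorted(range(len(overlaps)),
                    key=lambda i: overlaps[i], reverse=True)
    return sorted(ranked[:k])
```

For example, `inhibit([3, 9, 1, 7, 5], 2)` activates only columns 1 and 3. The question raised in point 4 is whether this selection is done by direct in-layer connections (as in CLA) or routed through thalamic circuitry.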
