Sorry, error in the principles of HTM: should include Sequence Memory as a
key principle. Also, that should read "10-15 years"!

On Fri, Jan 9, 2015 at 1:27 PM, Fergal Byrne <[email protected]>
wrote:

> Hi Dinesh,
>
> HTM refers to the general theory developed by Jeff Hawkins and Numenta
> over the past 10-15 years. You can think of HTM as the general "big idea" of
> how we believe the neocortex works. The key aspects of HTM are Jeff's core
> principles, which refer to hierarchy, sparse distributed representations,
> sequence memory, online learning from streaming data, a uniform algorithm,
> combination of sensory and motor function everywhere, and attention. While the theory will
> accumulate detail (for example what the roles of the layers inside a region
> might be doing), it grows outwards stably from this kernel.
>
> Officially, CLA refers to the particular detailed algorithmic design for a
> single layer of neurons, which is outlined in the 2011 White Paper and
> (partially) implemented by NuPIC. Jeff Hawkins and Numenta have indicated
> that they wish to "freeze" this meaning of CLA and use a different name for
> new versions of their detailed algorithmic designs.
>
> The rest of us have become accustomed to using "CLA" to refer to an
> algorithmic design which is close to Numenta's, but might differ in some
> minor or major aspects. The key features of CLA, which generalise across
> most of our models, are:
>
> - Neurons arranged in columns ("mini-columns" in neocortex) which share
> feedforward inputs and have similar feedforward responses.
> - Sparsity imposed by a columnar inhibition algorithm.
> - Feedforward inputs appear on proximal dendrites (to a column in official
> CLA, also to cells in some models).
> - Neurons in a layer have axons connected to distal dendrites in the same
> layer, allowing for prediction.
> - Proximal dendrites perform some version of linear summing.
> - Distal dendrite segments act independently as coincidence detectors.
> - Layers can learn first-order transitions between feedforward patterns,
> and also higher-order sequences using choices of active cells in an active
> column.
> - Columns which correctly predict their activity have one cell active,
> otherwise several cells activate (burst).
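> The features above can be sketched in a few lines of Python. This is a toy
> illustration only, not NuPIC's API: the array sizes, names, and the
> top-k inhibition rule are my own assumptions. It shows proximal dendrites
> doing a linear sum of feedforward input, columnar inhibition enforcing
> sparsity, and correctly predicted columns activating a single cell while
> unpredicted columns burst.

```python
import numpy as np

rng = np.random.default_rng(0)

n_cols, n_inputs, cells_per_col = 50, 100, 4
active_frac = 0.1  # fraction of columns left active by inhibition

# Proximal dendrites: one binary weight vector per column, shared by
# all cells in that column (similar feedforward responses).
proximal = rng.random((n_cols, n_inputs)) < 0.3

def layer_step(ff_input, predicted_cells):
    """One feedforward step of a CLA-style layer (toy sketch).

    ff_input:        binary vector of length n_inputs.
    predicted_cells: boolean (n_cols, cells_per_col) array, as would be
                     produced by distal segments acting as coincidence
                     detectors on the previous step's activity.
    """
    # 1. Proximal dendrites perform a linear sum of feedforward input.
    overlap = proximal @ ff_input
    # 2. Columnar inhibition: only the top-k columns stay active (sparsity).
    k = int(active_frac * n_cols)
    active_cols = np.argsort(overlap)[-k:]
    # 3. A correctly predicted column activates only its predicted cell(s);
    #    an unpredicted column bursts (all of its cells become active).
    active_cells = np.zeros((n_cols, cells_per_col), dtype=bool)
    for c in active_cols:
        if predicted_cells[c].any():
            active_cells[c] = predicted_cells[c]
        else:
            active_cells[c] = True  # burst
    return active_cols, active_cells
```

> The choice of active cells in step 3 is what lets a layer represent
> higher-order sequences: the same column can stand for the same feedforward
> pattern in different temporal contexts, depending on which cell is active.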
>
> HTM is quite general, so many detailed theories and designs can be claimed
> to correspond to HTM; it's much easier to quantify how well a design
> matches up with CLA proper.
>
> We tend to use CLA when referring to processes in some detail (at the
> layer, column, neuron, dendrite, synapse levels), and HTM when talking
> about how things work at the layer, region and brain levels. We'll also be
> seen using "HTM" when we propose ideas which supersede or contradict
> assumptions underlying Numenta's "official" CLA design.
>
> The other thing to bear in mind is that CLA is an internal name (within
> the community) which has no general currency in either neuroscience or
> AI/ML, while HTM is well-known (at least by name) to researchers in both
> fields.
>
> Regards,
>
> Fergal Byrne
>
>
> On Fri, Jan 9, 2015 at 12:48 PM, David Ragazzi <[email protected]>
> wrote:
>
>> Dear Dinesh,
>>
>> > 1. What is the difference between CLA and HTM? 2. Is CLA a
>> generalization of HTM, as the CLA (the algorithms based on cortex) name
>> suggests? Explain if wrong.
>>
>> CLA => Cortical Learning **ALGORITHMS**
>> HTM => Hierarchical Temporal **MEMORY** (the theory)
>>
>> As the names suggest, CLA tries to simulate what HTM states about how the
>> cortex could work. Sometimes we wrongly use the HTM acronym to refer to
>> CLA, but the distinction is clear: one is the theory, the other is the
>> algorithmic model of it. Just remember that not all features described by
>> HTM are implemented in CLA (yet).
>>
>> David
>>
>> On 9 January 2015 at 10:08, Dinesh Deshmukh <[email protected]> wrote:
>>
>>> Hi
>>>
>>> 1. What is the difference between CLA and HTM?
>>> 2. Is CLA a generalization of HTM, as the CLA (the algorithms based on
>>> cortex) name suggests? Explain if wrong.
>>>
>>> Thank you.
>>>
>>>
>>
>>
>> --
>> David Ragazzi
>> MSc in Software Engineering (University of Liverpool)
>> OS Community Committer at Numenta.org
>> --
>> "I think James Connolly, the Irish revolutionary, is right when he says that
>> the only prophets are those who make their future. So we're not
>> anticipating, we're working for it."
>>
>
>
>
> --
>
> Fergal Byrne, Brenter IT
>
> http://inbits.com - Better Living through Thoughtful Technology
> http://ie.linkedin.com/in/fergbyrne/ - https://github.com/fergalbyrne
>
> Founder of Clortex: HTM in Clojure -
> https://github.com/nupic-community/clortex
>
> Author, Real Machine Intelligence with Clortex and NuPIC
> Read for free or buy the book at https://leanpub.com/realsmartmachines
>
> Speaking on Clortex and HTM/CLA at euroClojure Krakow, June 2014:
> http://euroclojure.com/2014/
> and at LambdaJam Chicago, July 2014: http://www.lambdajam.com
>
> e:[email protected] t:+353 83 4214179
> Join the quest for Machine Intelligence at http://numenta.org
> Formerly of Adnet [email protected] http://www.adnet.ie
>



