Hi Traun,

I think they have a lot of similarities. Some of the differences I am aware
of:

- HTM representations are binary, not analog

- There is no explicit minimization of a global reconstruction error in
HTMs, such as the L1-penalized objective used in sparse autoencoders.

- The HTM learning algorithm has a very close mapping to Hebbian learning
and the way inhibition occurs in the cortex.

- HTMs can operate in a continuous learning setting where the whole
system keeps learning online. I don't know if this can be done with a deep
auto-encoder setup.

- HTMs rely on the "union" property for some of their key functions. This
might require the binary nature of HTM SDRs. I don't know whether this
property has been proven or even discussed for analog sparse
representations. (Maybe someone else can comment on that.)
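For anyone unfamiliar with the union property: a set of binary SDRs can be
OR-ed together into a single vector, and membership of any stored SDR can
then be checked with negligible false-positive probability as long as the
vectors stay sparse. Here's a minimal NumPy sketch, with illustrative
(not canonical) parameters n=2048 bits and w=40 active bits:

```python
import numpy as np

rng = np.random.default_rng(0)

n, w = 2048, 40  # SDR width and number of active bits (illustrative values)

def random_sdr():
    """A random binary SDR with exactly w active bits out of n."""
    sdr = np.zeros(n, dtype=bool)
    sdr[rng.choice(n, size=w, replace=False)] = True
    return sdr

stored = [random_sdr() for _ in range(10)]
union = np.logical_or.reduce(stored)  # OR all stored SDRs into one vector

# Every stored SDR is fully contained in the union.
assert all(np.all(union[s]) for s in stored)

# A fresh, unrelated SDR overlaps the union only by chance:
# the union has at most 10 * w = 400 active bits (~20% of n), so a
# random 40-bit SDR matches far fewer than all 40 of its bits.
other = random_sdr()
overlap = np.count_nonzero(other & union)
assert overlap < w
print(f"unrelated SDR overlaps union on {overlap} of {w} bits")
```

The point is that this trick depends on binary vectors and a bitwise OR;
it's not obvious what the analog-valued analogue would be for a sparse
autoencoder's activations.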

--Subutai

On Sat, Sep 27, 2014 at 10:50 AM, Traun Leyden <[email protected]>
wrote:

>
> I was reading Andrew Ng's CS294A Lecture notes on Sparse Autoencoders (
> link <http://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf>), and
> came across this line:
>
> We would like to constrain the neurons to be inactive most of the
> time.
>
>
> and it struck me as being identical to the approach in the CLA with sparse
> distributed representations.
>
> I googled it and couldn't find any mention of Sparse Autoencoders in the
> Nupic docs, so I thought I'd mention it in case it was news to anyone.  I
> remember seeing a wiki page trying to document the relationship of Nupic
> with "conventional" approaches to machine learning, so maybe this
> similarity is worth a mention there.
>
