I was reading Andrew Ng's CS294A Lecture notes on Sparse Autoencoders (link
<http://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf>), and came
across this line:

We would like to constrain the neurons to be inactive most of the
time.


and it struck me as being identical to the approach in the CLA with sparse
distributed representations.
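For anyone curious about the mechanics: in those notes the constraint is enforced with a KL-divergence penalty between a small target activation rho and each hidden unit's average activation over the training set. A rough NumPy sketch of that penalty (function and variable names are mine, not from the notes or NuPIC):

```python
import numpy as np

def kl_sparsity_penalty(activations, rho=0.05):
    """Sparsity penalty of the form sum_j KL(rho || rho_hat_j), where
    rho_hat_j is the mean (sigmoid) activation of hidden unit j over
    the batch. Low penalty = units are inactive most of the time."""
    rho_hat = activations.mean(axis=0)          # average activation per hidden unit
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)  # avoid log(0)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

# A batch whose units fire rarely incurs almost no penalty;
# a dense batch is penalized heavily.
sparse_batch = np.full((100, 10), 0.05)
dense_batch = np.full((100, 10), 0.5)
assert kl_sparsity_penalty(sparse_batch) < kl_sparsity_penalty(dense_batch)
```

The analogy to the CLA is that both push toward representations where only a small fraction of units are active at once, though the autoencoder does it via a soft penalty term rather than explicit k-winner selection.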

I googled it and couldn't find any mention of sparse autoencoders in the
NuPIC docs, so I thought I'd mention it in case it was news to anyone.  I
remember seeing a wiki page that tries to document the relationship of
NuPIC to "conventional" machine learning approaches, so maybe this
similarity is worth a mention there.
