Hi everybody,
Does anyone know whether a sparse autoencoder with a sigmoid activation function 
and fewer hidden units than input units works as a feature extractor or as PCA?
I know that if the number of hidden units is smaller than the number of input 
units and the activation function is linear, then the autoencoder works as PCA 
unless we put restrictions on it (source 2). So I wondered whether making the 
autoencoder "sparse" and using a sigmoid activation function is enough to turn 
it into a feature extractor (a building block for deep learning) when it has 
fewer hidden units than input units. My goal is to stack such autoencoders into 
a deep architecture, with each one having fewer hidden units than inputs. 
Source 1, however, claims that even a single sigmoid hidden layer still makes 
the autoencoder behave like PCA.
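
To make the setup concrete, here is a minimal NumPy sketch of what I mean by a 
sparse autoencoder: one sigmoid hidden layer with fewer units than inputs, 
trained to reconstruct its input with a KL-divergence sparsity penalty on the 
mean hidden activation. The sizes and the rho / beta / learning-rate values are 
just placeholders I picked for illustration, not anything from the two sources:

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 64))            # toy data: 200 samples, 64 input units
n_hidden = 16                        # fewer hidden units than inputs
rho, beta, lr = 0.05, 3.0, 0.1       # target sparsity, penalty weight, step size

W1 = rng.normal(0.0, 0.1, (64, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, 64)); b2 = np.zeros(64)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    H = sigmoid(X @ W1 + b1)         # sigmoid hidden activations
    Xhat = sigmoid(H @ W2 + b2)      # reconstruction of the input
    rho_hat = H.mean(axis=0)         # average activation of each hidden unit

    # backprop for squared reconstruction error + beta * KL(rho || rho_hat)
    d_out = (Xhat - X) * Xhat * (1 - Xhat)
    kl_grad = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
    d_hid = (d_out @ W2.T + kl_grad) * H * (1 - H)

    W2 -= lr * (H.T @ d_out) / len(X);  b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_hid) / len(X);  b1 -= lr * d_hid.mean(axis=0)

# sigmoid(X @ W1 + b1) would then give the features fed to the next
# autoencoder when stacking.

My question is whether the sparsity penalty and the sigmoid nonlinearity in a 
setup like this really change what the hidden layer learns, or whether it still 
ends up spanning the same subspace as PCA.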

These two sources give different answers:

http://en.wikipedia.org/wiki/Autoencoder
http://deeplearning.net/tutorial/dA.html
 


Thanks, 
Arezou

