Hi Scikit,

I'm planning to push a Sparse Autoencoder implementation, but since the scikit-learn contribution page says, "To avoid duplicating work, it is highly advised that you contact the developers on the mailing list before starting work on a non-trivial feature", I decided to ask first whether pushing a Sparse Autoencoder to GitHub would be beneficial. Here is a sneak preview of the main methods:
1) fit(X)
        Trains the weights to minimize the cost function.
2) transform(X)
        Applies a forward pass with the trained weights to produce the hidden features.

As you know, a Sparse Autoencoder (SAE) applies backpropagation through one hidden layer, with the target values set equal to the inputs. The trained weights can serve as a starting point for an MLP to improve prediction, or the hidden-layer activations can be used as new features. Furthermore, SAE is a building block for constructing deep networks.
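To make the fit/transform interface concrete, here is a hypothetical NumPy sketch of what such an estimator might look like. The class name, hyperparameters, and the KL-divergence sparsity penalty follow the UFLDL tutorial's formulation; none of this is the actual submitted code, just an illustration of the idea:

```python
import numpy as np


class SparseAutoencoder:
    """Sketch: one sigmoid hidden layer, targets set equal to the inputs.

    Sparsity is encouraged with a KL-divergence penalty pulling the mean
    hidden activation toward a small target value rho (UFLDL-style).
    """

    def __init__(self, n_hidden=16, lr=0.5, n_iter=200,
                 sparsity=0.05, beta=0.1, seed=0):
        self.n_hidden = n_hidden
        self.lr = lr                 # gradient-descent step size
        self.n_iter = n_iter         # number of full-batch updates
        self.sparsity = sparsity     # target mean activation rho
        self.beta = beta             # weight of the sparsity penalty
        self.rng = np.random.default_rng(seed)

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fit(self, X):
        """Train the weights to minimize reconstruction + sparsity cost."""
        n_samples, n_features = X.shape
        # Small random initialization of encoder (W1) and decoder (W2).
        self.W1 = self.rng.normal(0.0, 0.1, (n_features, self.n_hidden))
        self.b1 = np.zeros(self.n_hidden)
        self.W2 = self.rng.normal(0.0, 0.1, (self.n_hidden, n_features))
        self.b2 = np.zeros(n_features)
        rho = self.sparsity
        for _ in range(self.n_iter):
            # Forward pass: hidden features H, reconstruction Xhat.
            H = self._sigmoid(X @ self.W1 + self.b1)
            Xhat = self._sigmoid(H @ self.W2 + self.b2)
            # Backprop of squared error through the sigmoid output layer.
            d_out = (Xhat - X) * Xhat * (1 - Xhat)
            # Sparsity term: gradient of the KL penalty w.r.t. activations.
            rho_hat = H.mean(axis=0)
            kl_grad = self.beta * (-rho / rho_hat
                                   + (1 - rho) / (1 - rho_hat))
            d_hid = (d_out @ self.W2.T + kl_grad) * H * (1 - H)
            # Full-batch gradient-descent updates.
            self.W2 -= self.lr * (H.T @ d_out) / n_samples
            self.b2 -= self.lr * d_out.mean(axis=0)
            self.W1 -= self.lr * (X.T @ d_hid) / n_samples
            self.b1 -= self.lr * d_hid.mean(axis=0)
        return self

    def transform(self, X):
        """Forward pass with the trained weights -> hidden features."""
        return self._sigmoid(X @ self.W1 + self.b1)
```

Usage would mirror other scikit-learn transformers: `H = SparseAutoencoder().fit(X).transform(X)`, after which `H` can feed a downstream classifier or initialize an MLP.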

So, would this add value to scikit-learn? :)

If you find a flaw, please don't hesitate to criticize.

Thank you!


SAE Reference: http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial



_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
