On Thu, Nov 3, 2011 at 6:28 AM, David Warde-Farley
<[email protected]> wrote:

> I wonder how this compares to learning a linear tied-weights autoencoder
> with SGD and then just orthogonalizing the weight vectors (I suppose you'd
> also need to do one run with a single "neuron" in order to orient the basis
> with respect to the first p.c.).

I was thinking of something similar: just a least-squares objective
minimized by SGD. It would be nice to compare it with RandomizedPCA,
both in terms of training time and of performance on the final
supervised objective.
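
For concreteness, here is a rough sketch of what I have in mind
(pure NumPy; the function name, learning rate, and epoch count are
just placeholders):

import numpy as np

def linear_tied_ae_sgd(X, n_components, lr=0.01, n_epochs=10, seed=0):
    """SGD on the squared reconstruction error of a linear
    tied-weights autoencoder: encode h = W x, decode W.T h."""
    rng = np.random.RandomState(seed)
    n_samples, n_features = X.shape
    W = 0.01 * rng.randn(n_components, n_features)
    for _ in range(n_epochs):
        for i in rng.permutation(n_samples):
            x = X[i]
            h = np.dot(W, x)        # code
            r = x - np.dot(W.T, h)  # reconstruction residual
            # gradient of ||x - W.T W x||^2 w.r.t. W is
            # -2 (h r^T + (W r) x^T), so step the other way
            W += 2 * lr * (np.outer(h, r) + np.outer(np.dot(W, r), x))
    return W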

By the way, how would you go about the orthogonalization? Gram–Schmidt?
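
(Naively, I would picture something like the sketch below: modified
Gram–Schmidt applied to the rows of the learned W. I suppose
np.linalg.qr(W.T) would be the numerically safer way to get the same
orthonormal basis.)

import numpy as np

def gram_schmidt_rows(W):
    """Orthonormalize the rows of W (modified Gram-Schmidt)."""
    W = W.astype(float).copy()
    for i in range(W.shape[0]):
        for j in range(i):
            # subtract the projection of row i onto row j
            W[i] -= np.dot(W[j], W[i]) * W[j]
        W[i] /= np.linalg.norm(W[i])
    return W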

Mathieu
