To clarify, it is *not* the case that `x.dot(spca.components_.T)` is equivalent to `spca.transform(x)`. The latter solves a least-squares problem for the codes rather than projecting onto the components.
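For concreteness, here is a minimal sketch (toy random data; the names `x_proj`, `x_codes`, and `x_reconstruction` are just illustrative) contrasting the naive projection with what `transform` returns, and showing the reconstruction discussed below:

import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.RandomState(0)
x = rng.randn(50, 10)                 # toy data, only for illustration

spca = SparsePCA(n_components=3, alpha=1).fit(x)

# Naive "projection" through the transposed components; not equivalent
# to transform() because spca.components_ is not orthogonal.
x_proj = x.dot(spca.components_.T)

# What transform() actually returns: (approximately) the minimizer of
# ||x - codes * components_||, obtained via a regularized solve.
x_codes = spca.transform(x)

print(np.allclose(x_proj, x_codes))   # generally False for SparsePCA

# Low-rank reconstruction of x from the codes:
x_reconstruction = x_codes.dot(spca.components_)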
Best,
Vlad

On Fri, Oct 17, 2014 at 12:03 PM, Vlad Niculae <zephy...@gmail.com> wrote:
> Hi Luca
>
>> x_3_dimensional = x.dot(spca.components_.T)  # this is equivalent to
>> spca.transform(x)
>
> This part is specific to PCA. In general, the transform part of such a
> decomposition is `X * components ^ -1`. In PCA, because `components`
> is orthogonal, `components ^ -1` is `components.T`. The relationship
> between SparsePCA and PCA omits the orthogonality constraints, but
> adds sparsity constraints instead. The connection between the two is
> given (only) by the reconstruction objective, which for both is
> `min ||X - x_transf * components||`.
>
> What you get when you call `spca.transform` is essentially solving for
> x_transf in `min ||X - x_transf * components||` subject to the
> constraints. So the reconstruction is still `x_transf * components`.
>
> Hope this helps,
> Vlad
>
> On Fri, Oct 17, 2014 at 11:23 AM, Luca Puggini <lucapug...@gmail.com> wrote:
>>
>> Hi Vlad, thanks for the answer.
>>
>> I was thinking about that and I am not 100% sure that this is right.
>>
>> If we consider SPCA to work as PCA, then we do:
>>
>> x_3_dimensional = x.dot(spca.components_.T)  # this is equivalent to
>> spca.transform(x)
>>
>> and so the reconstruction of x is
>> x_reconstruction = x_3_dimensional.dot(spca.components_)
>>
>> This works for PCA because the components are orthogonal:
>> components_.dot(components_.T) = Id
>>
>> In SPCA this does not hold any more.
>>
>> So I am not sure that the reconstruction can be obtained in this way.
>>
>> What do you think about that?
>>
>> Thanks,
>> Luca
>>
>>
>> Hi Luca,
>>
>> The other part of the decomposition that you're missing is available
>> in `spca.components_` and has shape `(n_components, n_features)`. The
>> approximation of X is therefore `np.dot(x_3_dimensional, spca.components_)`.
>>
>> Best,
>> Vlad
>>
>> On Thu, Oct 16, 2014 at 6:07 PM, Luca Puggini <lucapuggio@...> wrote:
>>> Hi,
>>> is there any way to reconstruct the data after SparsePCA?
>>>
>>> If I do
>>>
>>> spca = SparsePCA(alpha=1, n_components=3).fit(x)
>>> x_3_dimensional = spca.transform(x)
>>>
>>> how can I get the best lower-rank approximation of x after the
>>> SparsePCA decomposition?
>>>
>>> Thanks,
>>> Luca