Thanks a lot, great explanation.
You have saved me a lot of work!
To clarify, it is *not* the case that `x.dot(spca.components_.T)` is
equivalent to `spca.transform(x)`. The latter performs a solve.
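A minimal sketch of the difference, assuming scikit-learn's SparsePCA API (the synthetic data and parameters below are only illustrative; sklearn's transform performs a regularized solve, so the plain least-squares solve here only approximates it):

import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.RandomState(0)
x = rng.randn(100, 3).dot(rng.randn(3, 10)) + 0.1 * rng.randn(100, 10)
spca = SparsePCA(n_components=3, alpha=1, random_state=0).fit(x)

projected = x.dot(spca.components_.T)   # naive projection onto the components
transformed = spca.transform(x)         # what sklearn actually returns

# solve codes.dot(spca.components_) ~= x in the least-squares sense,
# i.e. an (unregularized) version of the solve that transform performs
codes = np.linalg.lstsq(spca.components_.T, x.T, rcond=None)[0].T

print(np.allclose(projected, transformed))   # typically False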
Best,
Vlad
On Fri, Oct 17, 2014 at 12:03 PM, Vlad Niculae wrote:
> Hi Luca
>
>> x_3_dimensional = x.dot(spca.components_.T) # this is equivalent to
>> spca.transform(x)
Hi Luca
> x_3_dimensional = x.dot(spca.components_.T) # this is equivalent to
> spca.transform(x)
This part is specific to PCA. In general, the transform part of such a
decomposition is `X * components ^ -1`. In PCA, because `components`
is orthogonal, `components ^ -1` is `components.T`. [...]
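A short check of the PCA case, as a sketch using scikit-learn's PCA (note that PCA also centers the data, so the mean has to be subtracted for the equality to hold exactly):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
x = rng.randn(100, 10)
pca = PCA(n_components=3).fit(x)

# the rows of components_ are orthonormal, so the pseudo-inverse of the
# components matrix is simply its transpose
print(np.allclose(pca.components_.dot(pca.components_.T), np.eye(3)))          # True

# hence transform reduces to a projection of the centered data
print(np.allclose((x - pca.mean_).dot(pca.components_.T), pca.transform(x)))   # True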
Hi Vlad, thanks for the answer.
I was thinking about that and I am not 100% sure that this is right.
If we consider SPCA to work as PCA then we do:
x_3_dimensional = x.dot(spca.components_.T) # this is equivalent to
spca.transform(x)
and so the reconstruction of x is x_reconstruction =
x_3_dimensional.dot(spca.components_)
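A quick way to see where this reasoning breaks down, sketched on synthetic data with the same parameters used elsewhere in this thread: the sparse components are generally not orthonormal, so `spca.components_.T` does not invert `spca.components_` and the projection above is not what `spca.transform` computes.

import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.RandomState(0)
x = rng.randn(100, 3).dot(rng.randn(3, 10)) + 0.1 * rng.randn(100, 10)
spca = SparsePCA(n_components=3, alpha=1, random_state=0).fit(x)

gram = spca.components_.dot(spca.components_.T)
print(np.allclose(gram, np.eye(3)))   # typically False: sparse components are not orthonormal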
Hi Luca,
The other part of the decomposition that you're missing is available
in `spca.components_` and has shape `(n_components, n_features)`. The
approximation of X is therefore `np.dot(x_3_dimensional,
spca.components_)`.
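A minimal end-to-end sketch of this reconstruction (variable names follow the messages; the synthetic data and parameters are only illustrative):

import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.RandomState(0)
x = rng.randn(100, 3).dot(rng.randn(3, 10)) + 0.1 * rng.randn(100, 10)

spca = SparsePCA(n_components=3, alpha=1, random_state=0).fit(x)
x_3_dimensional = spca.transform(x)                    # shape (n_samples, n_components)
x_approx = np.dot(x_3_dimensional, spca.components_)   # shape (n_samples, n_features)

print(x_3_dimensional.shape, x_approx.shape)
print(np.mean((x - x_approx) ** 2))   # reconstruction error of the low-rank approximation
# note: depending on the scikit-learn version, SparsePCA may also center the data
# during fit (a mean_ attribute); if so, the mean would be added back here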
Best,
Vlad
On Thu, Oct 16, 2014 at 6:07 PM, Luca Puggini wrote:
> Hi,
Hi,
is there any way to reconstruct the data after SparsePCA?
If I do
spca = SparsePCA(alpha=1, n_components=3).fit(x)
x_3_dimensional = spca.transform(x)
How can I get the best lower rank approximation of x after SparsePCA
decomposition?
Thanks,
Luca