Forwarding your question to the mailing-list.
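For reference, the behaviour being asked about can be reproduced with a
short sketch (assuming a recent scikit-learn; `alpha` below is the ridge
penalty of the learned pre-image map, and note there is no separate kernel
parameter for that regression -- the same `kernel` is reused, which is the
point raised in the question):

```python
# Sketch: KernelPCA with a learned inverse transform (pre-image map).
# Assumes scikit-learn is installed; parameter names per recent versions.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.RandomState(0)
X = rng.rand(100, 5)

# fit_inverse_transform=True learns the pre-image map by kernel ridge
# regression, reusing the same `kernel` given for kernel PCA itself.
kpca = KernelPCA(n_components=3, kernel='rbf', gamma=1.0,
                 fit_inverse_transform=True, alpha=0.1)
Z = kpca.fit_transform(X)        # project into kernel PCA space
X_back = kpca.inverse_transform(Z)  # approximate pre-images
print(X_back.shape)  # (100, 5)
```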

On Thu, Jul 14, 2016 at 10:33 PM, Christos Lataniotis <
[email protected]> wrote:

> Dear Mathieu Blondel,
>
> I am a PhD student working on some machine-learning aspects related to
> dimensionality reduction. One of the methods of interest to me is
> kernel PCA, so I tested the implementation offered by scikit-learn,
> which I think is the most complete of those I could find on the web.
>
> I would like to ask for some clarification regarding the way you
> implemented the inverse transform, i.e. solving the pre-image problem.
>
> Although the paper by Bakir et al., 2004 is cited, I think there is some
> difference between your implementation and the methodology discussed in
> that paper. Bakir suggests 'learning' the pre-image map by solving a
> kernel ridge regression problem with some kernel function, say l, that
> is different from the kernel function, say k, used in kernel PCA.
> However, going through the source code of your implementation, it seems
> that the kernel functions l and k coincide. Is that correct? If so, is
> there some justification (e.g. empirical) for making such an assumption?
> I am asking because, as far as I have read in the literature, the choice
> of the kernel function l is still an open question, so I would expect it
> to be a parameter the user can select in addition to the kernel function
> for kernel PCA.
>
> Thank you for your time in advance.
>
> Best Regards,
> Christos
>
>
> --
> Christos Lataniotis
> Institute of Structural Engineering
> Chair of Risk, Safety and Uncertainty Quantification ETH Zürich - HIL E
> 35.1
> Wolfgang-Pauli-Str. 15
> CH-8093 Zürich, Switzerland
> Tel: +41 44 633 06 70
> E-Mail: [email protected]
>
>
_______________________________________________
scikit-learn mailing list
[email protected]
https://mail.python.org/mailman/listinfo/scikit-learn
