If I'm not mistaken, Gaussian Processes are expensive for large n_samples,
not for large n_features. The reason is that the kernel matrix (called the
covariance matrix in the GP literature) needs to be inverted, which costs
O(n_samples^3) with a Cholesky decomposition. That said, kernel methods
like SVMs or Gaussian Processes are usually not used much with
high-dimensional data. Kernels are useful to implicitly project
low-dimensional data into higher (even infinite) dimensional spaces. If your
data is already high-dimensional, there's little to gain from using
kernels. A good example is text classification, where everyone uses
linear kernels.
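To make the cost argument concrete, here is a minimal numpy/scipy sketch (my own illustration, not scikit-learn code) of GP training: the kernel matrix is n_samples x n_samples regardless of n_features, and the Cholesky solve on it is the O(n_samples^3) step.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n_samples, n_features = 200, 5  # cost scales with n_samples, not n_features
X = rng.standard_normal((n_samples, n_features))
y = rng.standard_normal(n_samples)

# RBF kernel / covariance matrix: K[i, j] = exp(-||x_i - x_j||^2 / 2)
# Note K is (n_samples, n_samples); n_features only enters the distances.
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-0.5 * sq_dists)

# Add a small noise/jitter term on the diagonal for numerical stability,
# then solve K @ alpha = y via Cholesky: the O(n_samples^3) bottleneck.
K[np.diag_indices_from(K)] += 1e-6
alpha = cho_solve(cho_factor(K, lower=True), y)

# alpha gives the GP posterior mean weights: mean(x*) = k(x*, X) @ alpha
print(np.allclose(K @ alpha, y))  # → True
```

Doubling n_samples here roughly octuples the factorization work, while growing n_features only makes the pairwise distances slightly more expensive.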

HTH,
Mathieu
_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general