Hi Andy,

Thanks for the example.  I actually started experimenting with defining my
own Python function kernel, which caches its results so that it is fast
once it has already been called once with the same input.  (Useful since I
am training multiple classifiers on the same data and comparing different
parameters.)
I noticed that at test time, the kernel gets called with the test data and
ALL of the training data, as you mentioned.
That could make things a lot slower than they need to be, right?  I thought
one of the main advantages of SVMs is the sparse representation of the
training set that they derive (the support vectors) - and this is
apparently being lost.

Cheers,
Matt

On 9 November 2011 10:25, Andreas Müller <[email protected]> wrote:

> Hi Matt.
> Did you figure it out yet?
>
> Here is an example:
> https://gist.github.com/1351047
> It seems that at the moment, you have to use the whole training set to
> generate the kernel at test time.
> Not sure why, maybe for ease of use.
>
> Can anyone comment on that?
>
> Cheers,
> Andy
>
>
>
> ------------------------------------------------------------------------------
> RSA(R) Conference 2012
> Save $700 by Nov 18
> Register now
> http://p.sf.net/sfu/rsa-sfdev2dev1
> _______________________________________________
> Scikit-learn-general mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
>
