On 11/10/2011 12:27 PM, Mathieu Blondel wrote:
>
> How many training instances do you have?
>
In the particular example I was thinking of, I had <3k training
instances, I think.

> On Nov 10, 2011 7:21 PM, "Andreas Müller" <[email protected]
> <mailto:[email protected]>> wrote:
>
>     On 11/10/2011 12:18 AM, Gael Varoquaux wrote:
>     > On Wed, Nov 09, 2011 at 11:00:34PM +0100, Andreas Mueller wrote:
>     >> As in the other thread, usually one has to scan for parameters
>     any way.
>     >> Computing every value just once and then storing it seems ok to
>     me. For
>     >> example, for the chi2 kernel, there is very efficient code
>     available by
>     >> Christoph Lampert using SSE2 instructions. I used precomputed
>     kernel
>     >> matrices for multi instance kernels. I could easily implement
>     them on
>     >> the GPU using batches and then store them once and for all. If I
>     had to
>     >> do memory transfers for every single example that I need the kernel
>     >> for, it would be very slow.
>     >> Maybe these are special use cases but I think they are valid ones.
>     > They are, but the question is: can they be answered in a toolkit
>     meant to
>     > be used from Python, where there is a large function-call
>     overhead? I
>     > don't know the answer to this question, to be fair, I am just
>     raising it.
>     Maybe I wasn't clear in making my point: I was trying to say
>     that computing the whole Gram matrix worked just fine for me.
>
>     I think the large function call overhead makes other solutions
>     impractical.
>
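For concreteness, here is a minimal sketch of the workflow being discussed: compute the chi2 Gram matrix once (scikit-learn ships `sklearn.metrics.pairwise.chi2_kernel`), then hand it to `SVC(kernel="precomputed")`. The data here is random and only for illustration; a fast SSE2 or GPU kernel computation would simply replace the `chi2_kernel` calls.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import chi2_kernel

rng = np.random.RandomState(0)
X_train = rng.rand(100, 20)        # chi2 expects non-negative features
y_train = rng.randint(0, 2, 100)
X_test = rng.rand(10, 20)

# Compute the Gram matrices once and reuse them for any parameter scan.
K_train = chi2_kernel(X_train, X_train)   # shape (n_train, n_train)
K_test = chi2_kernel(X_test, X_train)     # shape (n_test, n_train)

clf = SVC(kernel="precomputed")
clf.fit(K_train, y_train)                 # fit on the precomputed kernel
pred = clf.predict(K_test)                # predict from test-vs-train kernel
```

Because `K_train` is computed once up front, a grid search over `C` (or other SVM parameters) can reuse the same matrix without re-evaluating the kernel, which is the point being made about per-example function-call overhead.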

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
