Hi Olivier,

On Fri, Sep 28, 2012 at 2:28 PM, Olivier Grisel <[email protected]> wrote:

> What about the memory usage? Do you need to precompute the kernel
> matrix in advance or do you use some LRU cache for columns as in
> libsvm?
>

Unlike libsvm, I do precompute the kernel matrix in advance.
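
For reference, this is the kind of precomputation I mean; a minimal sketch with scikit-learn's rbf_kernel (the toy data and gamma below are just placeholders, not my actual setup):

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    # Stand-in data; in practice X is the training set.
    X = np.random.RandomState(0).randn(100, 5)

    # Full n_samples x n_samples Gram matrix, held entirely in memory,
    # so the memory footprint grows quadratically with n_samples.
    K = rbf_kernel(X, gamma=0.1)

So memory-wise it is quadratic in n_samples, with no LRU cache involved.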

> Is it the same scalability w.r.t. n_samples as libsvm?
>

I've compared the OpenOpt implementation against libsvm for epsilon-insensitive
SVR with an RBF kernel, making sure both optimize exactly the same objective
function.

On the same problem I tested 100 and 1000 samples, and in both cases the
elapsed time was about 4x that of libsvm (the ratio stays the same when
varying C/alpha).

Please note that for my problem the number of support vectors of the
best-performing cross-validated model is very high (not a sparse solution).
That may play a role.

I tried a couple of optimization algorithms, but ralg is the one that gave the
most consistent performance in terms of both speed and convergence success.

The really interesting part is that it's very easy to experiment with
different loss and penalization functions (Elastic-net, anyone?), and the
optimization algorithm seems very robust to everything you throw at it ...
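
To give an idea, here is a minimal sketch of how such a custom objective can
be wired up, assuming OpenOpt's NLP problem class and the ralg solver; the
particular loss/penalty combination, the elastic-net weights and the toy data
below are illustrative choices, not my actual code:

    import numpy as np
    from openopt import NLP
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.RandomState(0)
    X, y = rng.randn(100, 5), rng.randn(100)
    K = rbf_kernel(X, gamma=0.1)        # precomputed kernel matrix

    eps, C, lam, l1_ratio = 0.1, 1.0, 1e-3, 0.5

    def objective(w):
        # w = [alpha_1, ..., alpha_n, b]: kernel expansion coefficients + bias
        alpha, b = w[:-1], w[-1]
        residual = y - (K.dot(alpha) + b)
        # epsilon-insensitive loss ...
        loss = C * np.maximum(np.abs(residual) - eps, 0.0).sum()
        # ... plus an elastic-net penalty on the coefficients
        penalty = lam * (l1_ratio * np.abs(alpha).sum()
                         + 0.5 * (1.0 - l1_ratio) * np.dot(alpha, alpha))
        return loss + penalty

    w0 = np.zeros(K.shape[0] + 1)
    p = NLP(objective, w0)              # no analytic gradient supplied
    r = p.solve('ralg')                 # ralg handles the non-smooth terms
    alpha_hat, b_hat = r.xf[:-1], r.xf[-1]

Swapping in a different loss (Huber, squared, ...) or penalty is just an edit
to objective(), which is the whole appeal.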

On the other hand, I would not advise looking at the OpenOpt implementation
itself ... you have to take it as a black-magic black box ...

I'll post a gist ASAP if anyone wants to play with it ...

Paolo