If you can afford to store the entire kernel matrix in memory, "training
support vector machines in the primal" [*] seems like the way to go to me.
It's really easy to implement in Python + NumPy (whereas OpenOPT cannot be
added as a dependency to scikit-learn). It's restricted to the squared
hinge loss (what Lin et al. call L2-SVM) but can easily be extended to the
squared epsilon-insensitive loss (L2-SVR). The basic idea is to repeatedly
solve a ridge regression on the current set of support vectors.
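For concreteness, here is a rough NumPy sketch of that iteration (the
function name, the "lam" regularization parameter and the stopping rule
are my own choices, not taken from the paper):

import numpy as np

def primal_l2svm(K, y, lam=1.0, max_iter=50):
    # Train a kernel L2-SVM in the primal via Newton steps. With the
    # squared hinge loss, each step reduces to a ridge regression on
    # the current support vectors:
    #     (K[sv, sv] + lam * I) beta[sv] = y[sv],  beta = 0 elsewhere.
    # K: precomputed (n, n) kernel matrix, y: labels in {-1, +1}.
    n = K.shape[0]
    beta = np.zeros(n)
    sv = np.arange(n)  # with beta = 0, every point is inside the margin
    for _ in range(max_iter):
        if len(sv) == 0:
            break
        K_sv = K[np.ix_(sv, sv)]
        beta = np.zeros(n)
        beta[sv] = np.linalg.solve(K_sv + lam * np.eye(len(sv)), y[sv])
        # New support vectors: the points with nonzero loss, y_i f(x_i) < 1.
        new_sv = np.flatnonzero(y * np.dot(K, beta) < 1)
        if np.array_equal(new_sv, sv):
            break  # the support set is stable, so we have converged
        sv = new_sv
    return beta

The decision function on a new point x is then sum_i beta[i] * k(x_i, x),
i.e. sign(np.dot(K, beta)) on the training points.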

Mathieu


[*] http://www.kyb.mpg.de/publications/attachments/primal_[0].pdf