Hi Mathieu,
On Fri, Sep 28, 2012 at 3:16 PM, Mathieu Blondel <[email protected]> wrote:
> If you can afford to store the entire kernel matrix in memory, "training
> support vector machines in the primal" [*] seems like the way to go for me.
My OpenOpt experimentation was motivated by exactly that paper.
The reason I tried OpenOpt is that it supports automatic differentiation via the
FuncDesigner module. This means that the gradient vector and the Hessian matrix
are computed automatically by OpenOpt, with no need for manual derivation.
In a couple of hours I could test the L2 loss (easy to double-check) and the
Huber loss (the one I was interested in).
What I was not expecting is that ralg (the optimization algorithm I used) is
also very robust for loss functions that are not C1-smooth (epsilon-insensitive).
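For concreteness, here is a rough plain NumPy/SciPy sketch of the primal
formulation I mean, with the L2 (squared hinge) loss. It is not my OpenOpt code:
the hand-written gradient below is exactly the part that FuncDesigner's autodiff
spares you, and L-BFGS-B stands in for ralg here.

import numpy as np
from scipy.optimize import minimize

def fit_primal_svm(K, y, lam=1e-3):
    """Primal kernel SVM: minimize lam * b'Kb + sum_i max(0, 1 - y_i (Kb)_i)^2."""
    n = K.shape[0]

    def obj(beta):
        f = K.dot(beta)                      # decision values f = K beta
        m = np.maximum(0.0, 1.0 - y * f)     # hinge margins
        value = lam * beta.dot(f) + np.sum(m ** 2)
        grad = 2.0 * lam * f - 2.0 * K.dot(y * m)
        return value, grad

    return minimize(obj, np.zeros(n), jac=True, method="L-BFGS-B").x

# toy usage with a linear kernel (the full kernel matrix is kept in memory)
rng = np.random.RandomState(0)
X = rng.randn(40, 2)
y = np.where(X[:, 0] > 0, 1.0, -1.0)
K = X.dot(X.T)
beta = fit_primal_svm(K, y)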
Paolo
BTW: want to experiment with non-negative constraints? ralg supports them...
it's just two lines of code away (see the SciPy analogue below)...
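I am not showing the ralg API here, but in the SciPy sketch above the
equivalent non-negativity constraint really is about two lines: box bounds
passed to the solver.

# inside fit_primal_svm, constraining beta >= 0 is one extra argument:
bounds = [(0.0, None)] * n                   # beta_i >= 0 for all i
return minimize(obj, np.zeros(n), jac=True,
                method="L-BFGS-B", bounds=bounds).x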