On Fri, Sep 28, 2012 at 3:48 PM, Mathieu Blondel <[email protected]> wrote:

> # If you do subgradient descent, you can use non-smooth losses. In the
> paper I mentioned, the author is using Newton's method, which is why he's
> using differentiable losses.
>

Exactly. In fact, ralg supports non-smooth functions [1] via (automatically
calculated) subgradient information, and I found it extremely robust and fast.
At least for my problems...

[1] http://en.wikipedia.org/wiki/Naum_Z._Shor#r-algorithm
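For anyone curious what "subgradient descent on a non-smooth loss" looks like in practice, here is a minimal sketch (plain projected-free subgradient descent with a diminishing step size, not the r-algorithm / ralg itself) on an L1-regularized least-squares objective. The function name, step-size schedule, and toy data are purely illustrative:

```python
import numpy as np

def subgradient_descent(X, y, alpha=0.1, n_iter=500):
    """Plain subgradient descent on 0.5/n * ||Xw - y||^2 + alpha * ||w||_1."""
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    for t in range(1, n_iter + 1):
        # A subgradient of the objective: the smooth part has a gradient,
        # and np.sign(w) picks one element of the subdifferential of ||w||_1
        # (it returns 0 at w_j == 0, which is a valid choice).
        g = np.dot(X.T, np.dot(X, w) - y) / n_samples + alpha * np.sign(w)
        w -= (1.0 / t) * g  # diminishing step size 1/t
    return w

# Toy usage: recover a sparse weight vector from noisy linear measurements.
rng = np.random.RandomState(0)
X = rng.randn(100, 10)
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.0, 0.5]
y = np.dot(X, w_true) + 0.01 * rng.randn(100)
print(subgradient_descent(X, y))
```

Methods like ralg are of course far more sophisticated (space dilation, automatic subgradient handling), but the point is the same: no differentiability of the loss is required, unlike Newton's method.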