Hi Dmitrey,
On Fri, Sep 28, 2012 at 8:36 PM, Dmitrey <[email protected]> wrote:
>
> > For small non linear problems having an exact SVM/SVR solver
> > (not approximated) is very useful IMHO.
>
> I'm not sure what this means: "For small non linear problems having
> an exact SVM/SVR solver (not approximated) is very useful IMHO".
>
>
ralg cannot search solution with required tolerance, and thus is approximate
> solver (maybe in ML "exact/approximate" have a certain meaning? I'm not
> aware
> though). That "ftol" in the code is only a stopping criterion.
>
When I said "approximate" I was thinking of Stochastic Gradient Descent,
where you don't even have a stopping criterion, or LA-SVM, where you
use heuristics to make the problem "smaller" ...
In general, in machine learning you are not interested in the tightest
tolerance on the optimal solution; a "good enough" solution is usually
sufficient. I'd even argue that a solution with low sensitivity to the
solution vector can lead to better generalization performance than the
exact optimum.
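To make that concrete, here is a minimal sketch in the Pegasos style
(not scikit-learn's actual SGDClassifier code): plain SGD on the
regularized hinge loss, run for a fixed epoch budget with no stopping
criterion at all. The toy data, step size, and regularization value are
just illustrative assumptions:

    import numpy as np

    rng = np.random.RandomState(0)

    # Toy data: 200 points in 2-D, labels in {-1, +1} from a linear rule.
    X = rng.randn(200, 2)
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

    lam = 0.01       # L2 regularization strength (assumed value)
    n_epochs = 10    # fixed budget: no tolerance check, we just stop here
    w = np.zeros(X.shape[1])
    t = 0            # global step counter

    for epoch in range(n_epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)   # Pegasos-style decaying step size
            # Subgradient step on lam/2 * ||w||^2 + max(0, 1 - y_i <w, x_i>)
            if y[i] * (X[i] @ w) < 1.0:
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1.0 - eta * lam) * w

    print("training accuracy after the fixed budget: %.3f"
          % np.mean(np.sign(X @ w) == y))

Whether w is within any given tolerance of the true SVM optimum is never
checked; in practice the resulting classifier is usually "good enough".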
> FYI in 2012, 41 years after the initial ralg article in 1971,
> N.G. Zhurbenko, co-author of the r-algorithm
> (http://openopt.org/NikolayZhurbenko), seems to have invented a major
> enhancement to the r-algorithm, but I have no possibility to code it
> into my implementation of the solver (http://openopt.org/ralg) right
> now; maybe it will be done several months from now.
>
I'm looking forward to the new enhancement. Do you have any links
about it?
Paolo