On Thu, Sep 27, 2012 at 2:57 PM, Joseph Turian <[email protected]> wrote:

> Isn't gradient boosting a form of coordinate descent?
>

It's coordinate descent with greedy selection of the coordinates and early
stopping when n_estimators is reached.
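To make the analogy concrete, here is a minimal sketch (my own illustration, not from scikit-learn's implementation) of squared-loss boosting as greedy coordinate descent: each round greedily picks the base learner (a "coordinate" — here just a raw feature, for simplicity) most correlated with the negative gradient, takes a least-squares step along it, and stops after n_estimators rounds:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=200)

n_estimators = 20
pred = np.zeros_like(y)
for _ in range(n_estimators):       # early stopping at n_estimators
    residual = y - pred             # negative gradient of squared loss
    # Greedy coordinate selection: feature most correlated with residual
    j = int(np.argmax(np.abs(X.T @ residual)))
    # Line search: least-squares step along the chosen coordinate
    step = (X[:, j] @ residual) / (X[:, j] @ X[:, j])
    pred += step * X[:, j]
```

With trees as the base learners instead of raw features, the greedy selection happens implicitly when each tree is fit to the residuals.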


>
> (Also, I believe that GB in sklearn is unregularized in its current
> implementation?)
>
>
It doesn't have an explicit regularization term, but the learning rate
parameter (shrinkage) can be used to avoid taking overly large steps:
http://scikit-learn.org/stable/auto_examples/ensemble/plot_gradient_boosting_regularization.html#example-ensemble-plot-gradient-boosting-regularization-py
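For example (a minimal sketch using the scikit-learn estimator discussed above; the synthetic dataset and parameter values are my own illustration), shrinking learning_rate makes each boosting step smaller, which usually needs to be balanced by a larger n_estimators:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=10.0,
                       random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compare an unshrunk model (learning_rate=1.0) with a shrunk one (0.1),
# holding the number of boosting rounds fixed.
for lr in (1.0, 0.1):
    model = GradientBoostingRegressor(n_estimators=100, learning_rate=lr,
                                      random_state=0)
    model.fit(X_train, y_train)
    print("learning_rate=%.1f  test R^2=%.3f"
          % (lr, model.score(X_test, y_test)))
```

The linked example in the docs does the same comparison on deviance curves for classification.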

Mathieu
_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
