>> (Also, I believe that GB in sklearn is unregularized in its current
>> implementation?)
>>
>
> It doesn't have a regularization term but the learning rate parameter can be
> used to avoid taking overly big steps:
> http://scikit-learn.org/stable/auto_examples/ensemble/plot_gradient_boosting_regularization.html#example-ensemble-plot-gradient-boosting-regularization-py

Yeah.
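(For anyone skimming the thread: the shrinkage mechanism that the linked example
illustrates can be sketched in a few lines of plain Python. This is a toy
stand-in, not sklearn's actual implementation — the stump fitter and the data
are made up for illustration. Each round fits a stump to the current residuals
and adds only a `learning_rate` fraction of it, so the model takes many small
steps instead of a few large ones.)

```python
# Toy least-squares gradient boosting with shrinkage (learning rate).
# Not sklearn's implementation -- a minimal sketch for illustration only.

def fit_stump(x, residuals):
    """Fit the best single-split regression stump on 1-D inputs."""
    best = None
    for threshold in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= threshold]
        right = [r for xi, r in zip(x, residuals) if xi > threshold]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, threshold, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def boost(x, y, n_rounds=50, learning_rate=0.1):
    """Each stump's contribution is damped by learning_rate (shrinkage)."""
    pred = [sum(y) / len(y)] * len(y)          # start from the mean
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residuals)
        pred = [pi + learning_rate * stump(xi) for xi, pi in zip(x, pred)]
    return pred

# toy step-function data
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0, 0.1, 0.2, 2.0, 2.1, 2.2]
pred = boost(x, y)
```

In sklearn itself the same knob is the `learning_rate` parameter of
`GradientBoostingRegressor` / `GradientBoostingClassifier`, usually traded off
against `n_estimators` as in the linked example.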

In my thesis, I boosted l1-regularized trees and decayed the
regularization parameter each time the boosting converged.
I might dig into the code and see if it would be easy to add this to sklearn.
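(The thesis procedure isn't spelled out in this thread, so the following is
only a guessed sketch of the idea: boost stumps whose leaf values are shrunk
by an l1 proximal step (soft-thresholding), and halve the penalty whenever the
loss stops improving at the current penalty level. The stump fitter, the decay
factor of 0.5, and the convergence tolerance are all my assumptions, not the
thesis's actual choices.)

```python
# Hypothetical sketch: boosting with l1-shrunk leaves and a penalty that
# decays each time the fit converges. Details are assumptions, not the
# thesis's actual algorithm.
import math

def soft_threshold(v, lam):
    """l1 proximal step: shrink a leaf value toward zero by lam."""
    return math.copysign(max(abs(v) - lam, 0.0), v)

def fit_stump(x, r):
    """Best single-split stump (threshold, left mean, right mean) on 1-D x."""
    best = None
    for t in sorted(set(x))[:-1]:              # skip max(x): right side empty
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((ri - lm) ** 2 for ri in left)
               + sum((ri - rm) ** 2 for ri in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    return best[1], best[2], best[3]

def boost_l1_decay(x, y, lam=1.0, n_rounds=100, tol=1e-3):
    pred = [0.0] * len(y)
    prev_loss = float("inf")
    for _ in range(n_rounds):
        r = [yi - pi for yi, pi in zip(y, pred)]
        t, lm, rm = fit_stump(x, r)
        lm, rm = soft_threshold(lm, lam), soft_threshold(rm, lam)
        pred = [pi + (lm if xi <= t else rm) for xi, pi in zip(x, pred)]
        loss = sum((yi - pi) ** 2 for yi, pi in zip(y, pred)) / len(y)
        if prev_loss - loss < tol:             # converged at this penalty
            lam *= 0.5                         # decay the regularization
        prev_loss = loss
    return pred

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0, 0.1, 0.2, 2.0, 2.1, 2.2]
pred = boost_l1_decay(x, y)
```

Early rounds take heavily-shrunk (even zero) steps; each plateau halves the
penalty, letting later rounds fit progressively finer structure.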

Best,
   Joseph

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
