2011/12/6 James Bergstra <[email protected]>:
> On Fri, Dec 2, 2011 at 12:54 PM, Peter Prettenhofer
> <[email protected]> wrote:
>> [...]
>>
>
> How does the current tree implementation support boosting? I don't see
> anything in the code about weighted samples.
>
> - James

You're right - we don't support sample weights at the moment, but one
could use sampling with replacement to implement, e.g., AdaBoost.
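To sketch the idea: instead of passing weights to the tree, draw a bootstrap sample in which each example is picked with probability proportional to its boosting weight, then fit the unweighted tree on that sample. A minimal NumPy sketch (the helper name `weighted_resample` is my own, not anything in scikit-learn):

```python
import numpy as np

def weighted_resample(X, y, sample_weight, seed=None):
    """Draw a bootstrap sample of (X, y) where each row is selected
    with probability proportional to its sample weight.

    Fitting an unweighted learner on this sample approximates
    fitting a weighted learner on the original data.
    """
    rng = np.random.default_rng(seed)
    p = np.asarray(sample_weight, dtype=float)
    p = p / p.sum()  # normalize weights to a probability distribution
    idx = rng.choice(len(y), size=len(y), replace=True, p=p)
    return X[idx], y[idx]
```

The resampled set would then be fed to the existing (unweighted) tree fitter at each boosting round.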

Gradient boosting [1], on the other hand, does not need sample weights:
it fits a series of regression trees, each on the residuals of its
predecessors. You can think of gradient boosting as a generalization
of boosting (forward stage-wise additive modelling) to arbitrary loss
functions (e.g. with the exponential loss you recover AdaBoost).
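The residual-fitting loop is easy to sketch. For the squared-error loss the negative gradient is exactly the residual y - F(x), so each stage fits a small regressor to the current residuals and adds a shrunken copy of it to the ensemble. A self-contained toy version with one-feature regression stumps as the base learners (all names here are illustrative, not scikit-learn API):

```python
import numpy as np

def fit_stump(x, residual):
    """Fit the best single-split regression stump on a 1-D feature."""
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue  # split must leave both sides non-empty
        pred = np.where(x <= t, left.mean(), right.mean())
        err = ((residual - pred) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1:]  # (threshold, left value, right value)

def gradient_boost(x, y, n_stages=50, learning_rate=0.1):
    """Least-squares gradient boosting: each stage fits the residuals."""
    F = np.full(len(y), y.mean())      # stage 0: constant prediction
    stumps = []
    for _ in range(n_stages):
        resid = y - F                  # negative gradient of the L2 loss
        t, lv, rv = fit_stump(x, resid)
        F += learning_rate * np.where(x <= t, lv, rv)
        stumps.append((t, lv, rv))
    return F, stumps
```

Swapping the residual computation for the negative gradient of some other differentiable loss gives the general algorithm.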

[1] http://en.wikipedia.org/wiki/Gradient_boosting

best,
 Peter



>
> ------------------------------------------------------------------------------
> Cloud Services Checklist: Pricing and Packaging Optimization
> This white paper is intended to serve as a reference, checklist and point of
> discussion for anyone considering optimizing the pricing and packaging model
> of a cloud services business. Read Now!
> http://www.accelacomm.com/jaw/sfnl/114/51491232/
> _______________________________________________
> Scikit-learn-general mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/scikit-learn-general



-- 
Peter Prettenhofer

