Hi,
Which paper or book is the implementation of
`gradient_boost.py:BinomialDeviance` based on?

I recently read Friedman's paper "Greedy Function Approximation: A
Gradient Boosting Machine." I believe that L2_TreeBoost in the paper
should be equivalent to BinomialDeviance in scikit-learn, yet their
implementations differ. For example:

+ negative_gradient:
   - in scikit-learn: \tilde{y} = y - expit(pred.ravel())
                                = y - \frac{1}{1 + \exp(-F)},  with y \in \{0, 1\}
   - in the paper:    \tilde{y} = \frac{2 y}{1 + \exp(2 y F)},  with y \in \{-1, +1\}
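For what it's worth, I tried a small numeric check (my own sketch, not
scikit-learn code) suggesting the two formulas describe the same gradient
under different conventions: if the paper's labels are y \in {-1, +1} and its
F is half the log-odds, the paper's gradient comes out as exactly twice
scikit-learn's (the chain-rule factor from reparametrizing F):

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

# scikit-learn convention (as I understand it): labels y in {0, 1},
# F is the log-odds
def sk_negative_gradient(y01, F):
    return y01 - expit(F)

# Friedman's L2_TreeBoost: labels y in {-1, +1}, F_paper = log-odds / 2
def paper_negative_gradient(ypm, F_paper):
    return 2.0 * ypm / (1.0 + np.exp(2.0 * ypm * F_paper))

rng = np.random.default_rng(0)
y01 = rng.integers(0, 2, size=10).astype(float)
F = rng.normal(size=10)          # log-odds scale

ypm = 2.0 * y01 - 1.0            # map {0, 1} -> {-1, +1}
g_sk = sk_negative_gradient(y01, F)
g_paper = paper_negative_gradient(ypm, F / 2.0)

# the two gradients agree up to the chain-rule factor of 2
print(np.allclose(g_paper, 2.0 * g_sk))
```

So, if I am reading it right, the discrepancy is only a label encoding and a
constant rescaling of F, but I would appreciate confirmation.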

Can anyone help me?
Thanks.
_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn
