On 25 March 2012 17:09, Peter Prettenhofer
<[email protected]> wrote:
> 2012/3/25 Olivier Grisel <[email protected]>:
>> On 25 March 2012 12:44, Peter Prettenhofer
>> <[email protected]> wrote:
>>> Olivier,
>>>
>>> In my experience GBRT usually requires more base learners than random
>>> forests to reach the same level of accuracy; I rarely use fewer than
>>> 100. Regarding the poor performance of GBRT on the Olivetti dataset:
>>> multi-class GBRT fits ``k`` trees at each stage, so with
>>> ``n_estimators`` stages you grow ``k * n_estimators`` trees in total
>>> (4000 trees is quite a lot :-) ). Personally, I haven't used
>>> multi-class GBRT much (partly because GBM does not support it). I know
>>> that the learning-to-rank folks use multi-class GBRT for ordinally
>>> scaled output values (e.g. "not-relevant", "relevant", "highly
>>> relevant"), but those usually involve fewer than 5 classes.
>>
>> Interesting. I think these kinds of practical considerations should be
>> added to the docs.
>
> Absolutely - I'll add them immediately.

Great, thanks. Please use `n_classes` instead of `k` in the docstrings
or narrative doc.
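
For the narrative doc, a minimal sketch along these lines could
illustrate the total tree count (assuming the Olivetti loader and the
`estimators_` attribute, which holds one fitted tree per class per
stage):

    from sklearn.datasets import fetch_olivetti_faces
    from sklearn.ensemble import GradientBoostingClassifier

    # Olivetti faces: 40 classes, so each boosting stage grows 40 trees
    faces = fetch_olivetti_faces()
    clf = GradientBoostingClassifier(n_estimators=100)
    clf.fit(faces.data, faces.target)

    # estimators_ has shape (n_estimators, n_classes):
    # 100 stages * 40 classes = 4000 trees in total
    print(clf.estimators_.shape)  # (100, 40)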

And thanks for the other comments and references.

-- 
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
