What is your dataset like? How are you building the individual classifiers that
you are ensembling with AdaBoost? A common use case would be boosted decision
stumps (one-level decision trees).

http://en.wikipedia.org/wiki/Decision_stump

http://lyonesse.stanford.edu/~langley/papers/stump.ml92.pdf
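As a concrete starting point, here is a minimal sketch of boosting stumps with scikit-learn; the dataset is synthetic and the parameter values are placeholders, not recommendations. Note that AdaBoostClassifier's default base learner is already a depth-1 decision tree, i.e. a stump:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# The default base learner is a depth-1 decision tree (a stump),
# so this boosts 100 stumps.
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```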

So with decision stumps (or other weak base learners) and/or a small learning
rate, you would, in general, need more estimators (relatively speaking).
Whether your dataset has 10 features or 100 (or more... or less) matters, as
does the depth of each tree (assuming you're boosting decision trees). Boosting
is an iterative process, so for the best results you'd like as many trees as
you can get combined with a small-ish learning rate, with the limiting factor
(as always) being your computational and time budget.
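In practice you'd typically tune the two parameters jointly with cross-validation. A minimal sketch, assuming a synthetic dataset and a hypothetical starting grid (many trees for small learning rates, fewer for larger ones; widen or shrink it to fit your time budget):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Hypothetical grid: these ranges are a common starting point,
# not a prescription.
param_grid = {
    "n_estimators": [50, 200, 400],
    "learning_rate": [0.01, 0.1, 1.0],
}

search = GridSearchCV(AdaBoostClassifier(random_state=0), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Because n_estimators and learning_rate trade off against each other, the best cell is often at the "many trees, small rate" corner, which is why the computational budget ends up being the real constraint.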

My 2 cents. :D

-Jason

From: Pagliari, Roberto [mailto:rpagli...@appcomsci.com]
Sent: Friday, April 10, 2015 1:18 PM
To: scikit-learn-general@lists.sourceforge.net
Subject: [Scikit-learn-general] adaboost parameters

When using AdaBoost, what is a range of values of n_estimators and
learning_rate that makes sense to optimize over?

Thank you,