You may want to use the `max_features` parameter.

Indeed:

"1.11.2.3. Parameters -- The main parameters to adjust when using these
methods is n_estimators and max_features. The former is the number of trees
in the forest. The larger the better, but also the longer it will take to
compute. In addition, note that results will stop getting significantly
better beyond a critical number of trees. *The latter is the size of the
random subsets of features to consider when splitting a node.*"
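To make the two sources of randomness concrete, here is a minimal sketch (assuming scikit-learn is installed; the dataset is synthetic and the parameter values are illustrative, not recommendations):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic toy data: 200 rows, 10 features.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

clf = RandomForestClassifier(
    n_estimators=100,
    bootstrap=True,   # row randomness: each tree is fit on a bootstrap sample
    max_features=3,   # column randomness: 3 features considered at each split
    random_state=0,
)
clf.fit(X, y)
print(clf.score(X, y))
```

So a standard random forest already does both: `bootstrap` controls the random sampling of training examples, and `max_features` controls the random sampling of features at each split.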


Best regards,
Nicolas


2016-09-13 10:15 GMT+02:00 斌洪 <[email protected]>:

> I have read the Guide of sklearn's RandomForest :
>
> """
> In random forests (see RandomForestClassifier
> <http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier>
> and RandomForestRegressor
> <http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html#sklearn.ensemble.RandomForestRegressor>
> classes), each tree in the ensemble is built from a sample drawn with
> replacement (i.e., a bootstrap sample) from the training set.
> """
>
> But I expected RandomForest to work like this:
> """
> features ("attributes", "predictors", "independent variables") are
> randomly sampled
> """
>
> Does RandomForest randomly sample the training examples or the features?
> Where can I find a version of RandomForest that randomly samples features?
>
> thx.
>
> _______________________________________________
> scikit-learn mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/scikit-learn
>