Hello everyone,

I'm having a bit of trouble with the parameters I've got from
GridSearchCV.


For example:

If I use the parameters I got from GridSearchCV, for example for
RF or k-NN, and I test the model on the training set, I get an AUC
value of about 1.00 or 0.99 every time.
The dataset has 1200 samples.

Does that mean I can't use the parameters I got from
GridSearchCV? Because it happened in practically every case. I've already
tried nested CV to compare the algorithms.
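For reference, the nested-CV setup I mean looks roughly like this (the parameter grid and data here are simplified placeholders, not my actual setup):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic stand-in for the 1200-sample dataset.
X, y = make_classification(n_samples=1200, n_features=20, random_state=1)

# Inner loop: hyperparameter search (hypothetical small grid).
param_grid = {'min_samples_leaf': [1, 2, 5]}
inner = GridSearchCV(
    RandomForestClassifier(n_estimators=50, random_state=1),
    param_grid, cv=3, scoring='roc_auc')

# Outer loop: unbiased estimate of the tuned model's performance.
scores = cross_val_score(inner, X, y, cv=5, scoring='roc_auc')
print(scores.mean())
```

The outer cross_val_score never lets the grid search see its held-out fold, so its mean is the number to compare algorithms on.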


Example for RF with the values I got from GridSearchCV (10-fold):

RandomForestClassifier(n_estimators=200, oob_score=True, max_features=None,
                       random_state=1, min_samples_leaf=2,
                       class_weight='balanced_subsample')


Then I just call clf.predict(X_train) and evaluate it against y_train.

The AUC from clf.predict(X_test) is about 0.73, so there is a
big difference between the train and test sets.
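The evaluation described above can be sketched as follows (on synthetic placeholder data; note that AUC is usually computed from predict_proba scores rather than hard predict() labels):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 1200-sample dataset described above.
X, y = make_classification(n_samples=1200, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1)

# The parameters from GridSearchCV quoted above.
clf = RandomForestClassifier(n_estimators=200, oob_score=True,
                             max_features=None, random_state=1,
                             min_samples_leaf=2,
                             class_weight='balanced_subsample')
clf.fit(X_train, y_train)

# Probabilities, not labels, for a proper ROC AUC.
auc_train = roc_auc_score(y_train, clf.predict_proba(X_train)[:, 1])
auc_test = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(auc_train, auc_test)
```

A near-perfect training AUC is expected here: the forest has memorized the training data it was fit on, so only the test (or cross-validated) AUC is meaningful.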

Best,
_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
