Hey Guys,
I am using a random forest classifier to perform binary classification on my
dataset. I wanted a confidence value for both classes for each sample, so I
used the "predict_proba" method to predict the class probabilities for my X
samples.
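For reference, here is a minimal standalone sketch of how I am reading
predict_proba's output (the toy data and random_state below are made up
purely for illustration):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy data purely for illustration.
X_toy = np.array([[0.0], [0.2], [0.8], [1.0]])
y_toy = np.array([0, 0, 1, 1])

clf = RandomForestClassifier(n_estimators=40, random_state=0)
clf.fit(X_toy, y_toy)

proba = clf.predict_proba(X_toy)
print(clf.classes_)  # column order of predict_proba, e.g. [0 1]
print(proba)         # shape (n_samples, n_classes); each row sums to 1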
I saw 2-3 strange observations in my samples, such as the two below:
S.No.  Y_true  Y_predicted_forest  Class_0_prob  Class_1_prob
1      1       0                   0.28          0.72
2      0       1                   0.56          0.44
Here, based on the class probabilities, the predicted class should be the one
with the higher probability (which would also match Y_true in both rows).
Instead, the forest predicted the other class despite its lower probability.
Can anyone please explain this strange observation: when the predicted
probability of class 0 is higher than that of class 1, the output is still
class 1, and vice versa?
For further details, here is the relevant chunk of my code:
from sklearn.ensemble import RandomForestClassifier

# For the random forest
clf = RandomForestClassifier(n_estimators=40)
scores = clf.fit(X_train, y_train).score(X_test, y_test)
y_pred = clf.predict(X_test)

# Get the probability of each class (note that this refits the forest):
y_score = clf.fit(X_train, y_train).predict_proba(X_test)

# The per-class probabilities are then:
class_0_prob = y_score[:, 0]  # probabilities for class 0
class_1_prob = y_score[:, 1]  # probabilities for class 1
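To sanity-check my understanding, I also ran a self-contained version on
synthetic data (make_classification and the random_state values are just for
illustration), expecting predict to agree with the argmax of predict_proba
when the same fitted forest is used for both calls:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=40, random_state=0)
clf.fit(X_train, y_train)

# Use the SAME fitted forest for both calls.
y_pred = clf.predict(X_test)
y_score = clf.predict_proba(X_test)

# predict should pick the class with the highest probability:
assert np.array_equal(y_pred, clf.classes_[np.argmax(y_score, axis=1)])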
Thanks!
Shalu