Hello everyone,

I wrote a function to calculate the sensitivity, specificity, balanced
accuracy, and accuracy from a confusion matrix.


Now I have a problem: I'm getting different values when I compare mine
with those from the metrics.classification_report function.
The general problem is that the sensitivity I compute shows up in the
classification report as the precision value. I compute every sensitivity
with the one-vs-all approach, so e.g. class 1 == true, and classes 2, 3,
4, 5 are the rest (not true).

I did this only to get the specificity, and to check whether I had
computed everything correctly.
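
Here is a minimal sketch of the one-vs-all computation as I understand it
(not my exact code; the helper name one_vs_all_metrics and the
sklearn.metrics.confusion_matrix usage are just for illustration):

import numpy as np
from sklearn.metrics import confusion_matrix

def one_vs_all_metrics(y_true, y_pred, positive_class):
    # Collapse to binary: positive_class vs. all remaining classes.
    t = np.asarray(y_true) == positive_class
    p = np.asarray(y_pred) == positive_class
    tn, fp, fn, tp = confusion_matrix(t, p, labels=[False, True]).ravel()
    sensitivity = tp / float(tp + fn)   # recall: normalized over the true labels (row sum)
    specificity = tn / float(tn + fp)
    balanced_accuracy = (sensitivity + specificity) / 2.0
    # Note: tp / float(tp + fp) would be precision -- normalizing over
    # the predictions (column sum) instead of the true labels (row sum)
    # would make the "sensitivity" match the precision column below.
    return sensitivity, specificity, balanced_accuracy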



----------- ensemble -----------

             precision    recall  f1-score   support

        1.0     *0.56*      0.68      0.61       129
        2.0     *0.28*      0.15      0.20        78
        3.0     *0.45*      0.47      0.46       116
        4.0     *0.29*      0.05      0.09        40
        5.0     *0.44*      0.66      0.53        70

avg / total       0.43      0.47      0.43       433


Class: 1
 sensitivity: *0.556962025316*
 specificity: 0.850909090909
 balanced accuracy: 0.703935558113

Class: 2
 sensitivity: *0.279069767442*
 specificity: 0.830769230769
 balanced accuracy: 0.554919499106

Class: 3
 sensitivity: *0.446280991736*
 specificity: 0.801282051282
 balanced accuracy: 0.623781521509

Class: 4
 sensitivity: *0.285714285714*
 specificity: 0.910798122066
 balanced accuracy: 0.59825620389

Class: 5
 sensitivity: *0.442307692308*
 specificity: 0.927051671733
 balanced accuracy: 0.68467968202
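
For comparison, the per-class precision and recall from the report can
be reproduced directly (again only a sketch; y_true and y_pred stand for
my label arrays):

from sklearn.metrics import precision_recall_fscore_support

# recall here is the per-class sensitivity. That my values match the
# precision array instead suggests my normalization runs over the
# predicted labels rather than the true ones.
precision, recall, f1, support = precision_recall_fscore_support(
    y_true, y_pred, labels=[1.0, 2.0, 3.0, 4.0, 5.0])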