Hello all,

I'm looking at the confusion matrix and performance measures (precision,
recall, F-measure, etc.) produced by scikit-learn.

It seems that scikit-learn calculates these measures for each outcome class
and then combines them into some kind of average.

I would really like to see these measures presented in the traditional(?)
sense, where sensitivity is TP / (TP + FN), pooled over all classes, and NOT
per class.

If I take scikit-learn's predictions and calculate sensitivity as above, my
results don't match what scikit-learn reports. :(
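To make the mismatch concrete, here is a minimal sketch (plain Python, made-up labels) contrasting per-class recall with the pooled figure I'm after. If I understand the docs correctly, scikit-learn's `recall_score` with `average='micro'` should correspond to the pooled calculation, but I may be wrong about that:

```python
# Made-up illustration data: three classes, a few misclassifications.
y_true = [0, 0, 1, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 2]

classes = sorted(set(y_true))

# Per-class recall: TP_c / (TP_c + FN_c), computed separately for each class c.
per_class = {}
for c in classes:
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
    per_class[c] = tp / (tp + fn)

# Pooled ("micro") recall: sum TP over all classes / sum (TP + FN) over all
# classes. For multiclass data this is just overall accuracy of the labels.
tp_total = sum(1 for t, p in zip(y_true, y_pred) if t == p)
pooled = tp_total / len(y_true)

print(per_class)  # per-class recalls differ from...
print(pooled)     # ...the single pooled sensitivity figure
```

Averaging the per-class recalls (a "macro" average) gives a different number than the pooled calculation, which is presumably why my hand-computed figures don't match the report.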

How can I switch to seeing overall performance measures rather than per-class
ones? And also, how can I obtain 95% confidence intervals for each of these
measures?

-- 
Best Regards,
Suranga
_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn