"Traditional" sensitivity is defined for binary classification only.

Maybe micro-average is what you're looking for, but in the multiclass case
without anything more specified, you'll merely be calculating accuracy.
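A minimal sketch of that point, using made-up toy labels: with `average='micro'`, scikit-learn's multiclass recall (and precision, and F1) collapses to plain accuracy.

```python
# Toy multiclass labels (made up for illustration): micro-averaged
# recall counts total TP / (TP + FN) pooled over all classes, which
# for single-label multiclass is exactly the fraction correct.
from sklearn.metrics import accuracy_score, recall_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]

acc = accuracy_score(y_true, y_pred)
micro_recall = recall_score(y_true, y_pred, average='micro')
# acc == micro_recall (both 4/6 on these labels)
```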

Perhaps quantiles of the scores returned by permutation_test_score will
give you the CIs you seek.
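A sketch of that idea, with a synthetic dataset and classifier chosen purely for illustration: `permutation_test_score` returns the cross-validated score, the scores under label permutation, and a p-value, and you can take quantiles of the permutation scores.

```python
# Sketch: quantiles of the permutation-score distribution from
# permutation_test_score. Dataset and estimator here are assumptions,
# not from the original thread.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score

X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression(max_iter=1000)

# score: cross-validated accuracy on the true labels
# perm_scores: accuracies obtained after permuting y (chance level)
score, perm_scores, pvalue = permutation_test_score(
    clf, X, y, scoring='accuracy', n_permutations=100, random_state=0)

# 2.5% / 97.5% quantiles of the permutation scores
lo, hi = np.percentile(perm_scores, [2.5, 97.5])
```

Note that the permutation scores describe the distribution under shuffled labels, so these quantiles bracket chance-level performance rather than the true score itself.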

On 24 April 2017 at 01:50, Suranga Kasthurirathne <suranga...@gmail.com>
wrote:

>
> Hello all,
>
> I'm looking at the confusion matrix and performance measures (precision,
> recall, F-measure, etc.) produced by scikit.
>
> It seems that scikit calculates these measures for each outcome class, and
> then combines them into some sort of average.
>
> I would really like to see these measures presented in the traditional(?)
> context, where sensitivity is TP / (TP + FN) (and is combined overall, NOT
> per class!)
>
> If I were to take scikit's predictions and calculate sensitivity using the
> above, my results won't match up with what scikit says :(
>
> How can I switch to seeing overall performance measures, rather than
> per-class ones? And also, how may I obtain 95% confidence intervals for
> each of these measures?
>
> --
> Best Regards,
> Suranga
>
> _______________________________________________
> scikit-learn mailing list
> scikit-learn@python.org
> https://mail.python.org/mailman/listinfo/scikit-learn
>
>
