Sorry for the late email,

just wanted to say thank you for the great description!



On 17 June 2015 at 18:29, Joel Nothman <joel.noth...@gmail.com> wrote:

> Scikit-learn has had a default of a weighted (micro-)average. This is a
> bit non-standard, so from now on users are expected to specify the average
> when using precision/recall/fscore. Once
> https://github.com/scikit-learn/scikit-learn/pull/4622 is merged,
> classification_report will show all the common averages.
>
> I might also note that for multiclass problems with all classes included,
> micro precision == recall == fscore == accuracy. In the development
> version, it is now possible to specify that not all classes should be
> included in micro-averages, so micro average is now more useful for
> multiclass evaluation...
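>
> A minimal sketch of how the explicit averages look in code (function names
> are from sklearn.metrics; restricting which classes enter the micro-average
> via labels assumes the development version mentioned above):
>
> from sklearn.metrics import precision_recall_fscore_support
>
> y_true = [1, 1, 2, 2, 3, 3]
> y_pred = [1, 2, 2, 2, 3, 1]
>
> # ask for a specific average instead of relying on a default
> p, r, f, _ = precision_recall_fscore_support(y_true, y_pred, average='macro')
> p, r, f, _ = precision_recall_fscore_support(y_true, y_pred, average='micro')
>
> # micro-average restricted to a subset of classes (e.g. leaving out class 3)
> p, r, f, _ = precision_recall_fscore_support(y_true, y_pred,
>                                              labels=[1, 2], average='micro')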
>
> On 18 June 2015 at 01:42, Sebastian Raschka <se.rasc...@gmail.com> wrote:
>
>> About the average: the two common averaging schemes are "micro" and "macro"
>> (I think "macro" is typically the default in scikit-learn) -- you calculated
>> the macro average in your example.
>>
>> To further explain the difference between macro and micro averaging, let's
>> consider a simple 2-class scenario and calculate the precision:
>>
>> a) macro-average precision:
>> (PRE1 + PRE2) / 2
>>
>> b) micro-average precision:
>> (TP1 + TP2) / (TP1 + TP2 + FP1 + FP2)
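>>
>> A quick worked example with made-up counts:
>>
>> # class 1: TP1 = 2, FP1 = 0 -> PRE1 = 1.0
>> # class 2: TP2 = 2, FP2 = 2 -> PRE2 = 0.5
>> macro = (1.0 + 0.5) / 2                  # 0.75
>> micro = (2 + 2) / float(2 + 2 + 0 + 2)   # ~0.67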
>>
>> Hope that helps.
>>
>> Best,
>> Sebastian
>>
>>
>> > On Jun 17, 2015, at 10:49 AM, Herbert Schulz <hrbrt....@gmail.com>
>> wrote:
>> >
>> > OK, I think I have it -- thanks everyone for the help!
>> >
>> > But there is another problem.
>> >
>> > How are you calculating the avg / total row?
>> >
>> > example:
>> >
>> > ----------- k-NN -----------
>> >
>> >              precision    recall  f1-score   support
>> >
>> >         1.0       0.50      0.43      0.46       129
>> >         2.0       0.31      0.40      0.35        88
>> >         3.0       0.45      0.36      0.40       107
>> >         4.0       0.06      0.03      0.04        33
>> >         5.0       0.42      0.58      0.49        76
>> >
>> > avg / total       0.40      0.40      0.40       433
>> >
>> > so: (0.50 + 0.31 + 0.45 + 0.06 + 0.42) / 5 = 0.348 ~ 0.35, like I calculated
>> > it in my avg part. Are you using some weights?
>> >
>> > Class: 1
>> >  sensitivity: 0.43
>> >  specificity: 0.81
>> >  balanced accuracy: 0.62
>> >  precision: 0.50
>> > .
>> > .
>> > .
>> > .
>> >
>> > Class: 5
>> >  sensitivity: 0.58
>> >  specificity: 0.83
>> >  balanced accuracy: 0.70
>> >  precision: 0.42
>> >
>> > avg total:
>> >  sensitivity: 0.36
>> >  specificity: 0.85
>> >  avg balanced accuracy: 0.60
>> >  avg precision: 0.35
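>> >
>> > (It looks like the avg / total row is weighted by support rather than being
>> > an unweighted mean -- a quick check with the numbers from the k-NN report
>> > above:)
>> >
>> > prec    = [0.50, 0.31, 0.45, 0.06, 0.42]
>> > support = [129, 88, 107, 33, 76]
>> > sum(p * s for p, s in zip(prec, support)) / float(sum(support))  # ~0.40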
>> >
>> > On 17 June 2015 at 16:06, Herbert Schulz <hrbrt....@gmail.com> wrote:
>> > I actually computed it like this; maybe I did something wrong in my
>> > TP, FP, FN, TN calculation?
>> >
>> >
>> > c1, c2, c3, c4, c5 = [1,0,0,0,0], [2,0,0,0,0], [3,0,0,0,0], [4,0,0,0,0], [5,0,0,0,0]
>> > alle = [c1, c2, c3, c4, c5]
>> >
>> > # as I mentioned, 1 vs. all, so the first element in each list is just the
>> > # class: [1,0,0,0,0] == class 1, then in the order TP, FP, FN, TN
>> > # maybe here is something wrong:
>> >
>> > for i in alle:
>> >     pred = predicted
>> >
>> >     for k in range(len(predicted)):
>> >
>> >         if float(i[0]) == y_test[k]:
>> >             if float(i[0]) == pred[k]:
>> >                 i[1] += 1
>> >             else:
>> >                 i[2] += 1
>> >
>> >         elif pred[k] == float(i[0]):
>> >             i[3] += 1
>> >         elif pred[k] != float(i[0]) and y_test[k] != float(i[0]):
>> >             i[4] += 1
>> >
>> > # specs looks like this: [1, 71, 51, 103, 208]
>> >
>> > sens = specs[1] / float(specs[1] + specs[3])
>> >
>> >
>> >
>> >
>> > If I'm calculating
>> >
>> > sens = specs[1] / float(specs[1] + specs[2])
>> >
>> > I also get the recall shown in the report.
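>> >
>> > (Tracing the branches above: i[2] is incremented when the true label is the
>> > class but the prediction is not, i.e. a false negative, and i[3] when the
>> > prediction is the class but the true label is not, i.e. a false positive.
>> > So the list actually ends up as [class, TP, FN, FP, TN], with FN and FP
>> > swapped relative to the comment, which would explain both observations:)
>> >
>> > # assuming specs == [class, TP, FN, FP, TN]:
>> > specs[1] / float(specs[1] + specs[3])  # TP / (TP + FP) -> precision
>> > specs[1] / float(specs[1] + specs[2])  # TP / (TP + FN) -> recall/sensitivity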
>> >
>> > On 17 June 2015 at 15:42, Andreas Mueller <t3k...@gmail.com> wrote:
>> > Sensitivity is recall:
>> > https://en.wikipedia.org/wiki/Sensitivity_and_specificity
>> >
>> > Recall is TP / (TP + FN) and precision is TP / (TP + FP).
>> >
>> > What did you compute?
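>> >
>> > (For reference, a minimal way to get both per class straight from the
>> > confusion matrix -- assuming numpy, sklearn.metrics, and the y_test /
>> > predicted arrays from your script:)
>> >
>> > import numpy as np
>> > from sklearn.metrics import confusion_matrix
>> >
>> > cm = confusion_matrix(y_test, predicted)  # rows = true, cols = predicted
>> > tp = np.diag(cm).astype(float)
>> > recall    = tp / cm.sum(axis=1)           # TP / (TP + FN), per class
>> > precision = tp / cm.sum(axis=0)           # TP / (TP + FP), per class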
>> >
>> >
>> > On 06/17/2015 09:32 AM, Herbert Schulz wrote:
>> >> Yeah, I know, that's why I'm asking. I thought precision is not the same
>> >> as recall/sensitivity.
>> >>
>> >> recall == sensitivity!?
>> >>
>> >> But in this report, the precision column is my calculated sensitivity -- or
>> >> is the precision in this case the sensitivity?
>> >>
>> >> On 17 June 2015 at 15:29, Andreas Mueller <t3k...@gmail.com> wrote:
>> >> Yeah, that is the rounding from using %.2f in the classification report.
>> >>
>> >>
>> >> On 06/17/2015 09:20 AM, Joel Nothman wrote:
>> >>> To me, those numbers appear identical at 2 decimal places.
>> >>>
>> >>> On 17 June 2015 at 23:04, Herbert Schulz <hrbrt....@gmail.com> wrote:
>> >>> Hello everyone,
>> >>>
>> >>> I wrote a function to calculate the sensitivity, specificity, balanced
>> >>> accuracy and accuracy from a confusion matrix.
>> >>>
>> >>>
>> >>> Now I have a problem: I'm getting different values when I compare my
>> >>> values with those from the metrics.classification_report function.
>> >>> The general problem is that my computed sensitivity shows up in the
>> >>> classification report as the precision value. I'm computing every
>> >>> sensitivity with the one-vs-all approach, so e.g. class 1 == true and
>> >>> classes 2, 3, 4, 5 are the rest (not true).
>> >>>
>> >>> I did this only to get the specificity and to check whether I computed
>> >>> everything right.
>> >>>
>> >>>
>> >>>
>> >>> ----------- ensemble -----------
>> >>>
>> >>>              precision    recall  f1-score   support
>> >>>
>> >>>         1.0       0.56      0.68      0.61       129
>> >>>         2.0       0.28      0.15      0.20        78
>> >>>         3.0       0.45      0.47      0.46       116
>> >>>         4.0       0.29      0.05      0.09        40
>> >>>         5.0       0.44      0.66      0.53        70
>> >>>
>> >>> avg / total       0.43      0.47      0.43       433
>> >>>
>> >>>
>> >>> Class: 1
>> >>>  sensitivity: 0.556962025316
>> >>>  specificity: 0.850909090909
>> >>>  balanced accuracy: 0.703935558113
>> >>>
>> >>> Class: 2
>> >>>  sensitivity: 0.279069767442
>> >>>  specificity: 0.830769230769
>> >>>  balanced accuracy: 0.554919499106
>> >>>
>> >>> Class: 3
>> >>>  sensitivity: 0.446280991736
>> >>>  specificity: 0.801282051282
>> >>>  balanced accuracy: 0.623781521509
>> >>>
>> >>> Class: 4
>> >>>  sensitivity: 0.285714285714
>> >>>  specificity: 0.910798122066
>> >>>  balanced accuracy: 0.59825620389
>> >>>
>> >>> Class: 5
>> >>>  sensitivity: 0.442307692308
>> >>>  specificity: 0.927051671733
>> >>>  balanced accuracy: 0.68467968202
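>> >>>
>> >>> (For completeness, one way to get these one-vs-rest numbers directly from
>> >>> the confusion matrix -- a minimal sketch assuming numpy, sklearn.metrics,
>> >>> and the y_test / predicted arrays from the script:)
>> >>>
>> >>> import numpy as np
>> >>> from sklearn.metrics import confusion_matrix
>> >>>
>> >>> cm = confusion_matrix(y_test, predicted).astype(float)
>> >>> tp = np.diag(cm)
>> >>> fn = cm.sum(axis=1) - tp
>> >>> fp = cm.sum(axis=0) - tp
>> >>> tn = cm.sum() - (tp + fn + fp)
>> >>>
>> >>> sensitivity = tp / (tp + fn)                        # recall, per class
>> >>> specificity = tn / (tn + fp)                        # per class
>> >>> balanced_accuracy = (sensitivity + specificity) / 2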
>> >>>