Thanks! Could you also please add an item to study and contrast the convergence of
classical MLE / EM GMM with Bayesian GMM? I would like to check that
they effectively converge to the same solution when the number of
samples grows. It would also be interesting to study their respective
behaviors when facing
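Something like this sketch could be a starting point (it uses the current
GaussianMixture / BayesianGaussianMixture estimators from sklearn.mixture;
the data, component count and sample sizes are all made up):

import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.RandomState(0)
for n in (100, 1000, 10000):
    # Two well-separated 1-D Gaussian components.
    X = np.concatenate([rng.normal(-2, 1, n),
                        rng.normal(3, 1, n)]).reshape(-1, 1)
    em = GaussianMixture(n_components=2, random_state=0).fit(X)
    bayes = BayesianGaussianMixture(n_components=2, random_state=0).fit(X)
    # The fitted component means should agree as n grows.
    print(n, np.sort(em.means_.ravel()), np.sort(bayes.means_.ravel()))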
--
Olivier
To me, those numbers appear identical at 2 decimal places.
On 17 June 2015 at 23:04, Herbert Schulz hrbrt@gmail.com wrote:
Hello everyone,
I wrote a function to calculate the sensitivity, specificity, balanced
accuracy and accuracy from a confusion matrix.
Now I have a problem: I'm
Yeah, that is the rounding from using %.2f in the classification report.
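Concretely, two scores that differ by almost 0.01 can print identically at
two decimals; a quick illustration (made-up numbers):

print("%.2f" % 0.4551)  # 0.46
print("%.2f" % 0.4649)  # 0.46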
On 06/17/2015 09:20 AM, Joel Nothman wrote:
To me, those numbers appear identical at 2 decimal places.
On 17 June 2015 at 23:04, Herbert Schulz hrbrt@gmail.com wrote:
Hello everyone,
Yeah, I know, that's why I'm asking. I thought precision is not the same as
recall/sensitivity.
recall == sensitivity!?
But in this matrix, the precision equals my calculated sensitivity, so is the
precision in this case the sensitivity?
On 17 June 2015 at 15:29, Andreas Mueller t3k...@gmail.com wrote:
Hm, the sensitivity (TP/[TP+FN]) should be equal to the recall, not the
precision. Maybe it would help if you could print the confusion matrices for
a simpler binary case to track what's going on here.
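For instance, something like this (a minimal sketch with made-up labels;
confusion_matrix, recall_score and precision_score are the usual
sklearn.metrics functions):

import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 1])

# For binary labels {0, 1}, confusion_matrix returns [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity TP/(TP+FN):", tp / (tp + fn))           # 0.75
print("recall_score:", recall_score(y_true, y_pred))       # 0.75
print("precision TP/(TP+FP):", tp / (tp + fp))             # 0.6
print("precision_score:", precision_score(y_true, y_pred)) # 0.6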
On Jun 17, 2015, at 9:32 AM, Herbert Schulz hrbrt@gmail.com wrote:
Yeah, I know, that's why
Hello everyone,
I wrote a function to calculate the sensitivity, specificity, balanced
accuracy and accuracy from a confusion matrix.
Now I have a problem: I'm getting different values when I'm comparing my
values with those from the metrics.classification_report function.
The general problem
Sensitivity is recall:
https://en.wikipedia.org/wiki/Sensitivity_and_specificity
Recall is TP / (TP + FN) and precision is TP / (TP + FP).
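For example (numbers made up just for illustration): with TP = 40, FP = 10
and FN = 20, precision = 40 / 50 = 0.80 while recall = 40 / 60 ≈ 0.67, so the
two can easily differ.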
What did you compute?
On 06/17/2015 09:32 AM, Herbert Schulz wrote:
Yeah, I know, that's why I'm asking. I thought precision is not the same
as
I actually computed it like this, maybe I did something wrong in my
TP, FP, FN, TN calculation?

c1, c2, c3, c4, c5 = [1, 0, 0, 0, 0], [2, 0, 0, 0, 0], [3, 0, 0, 0, 0], [4, 0, 0, 0, 0], [5, 0, 0, 0, 0]
alle = [c1, c2, c3, c4, c5]
# as I mentioned, 1 vs all, so the first element in the array is just the class
# [1, 0, 0, 0, 0] ==
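Since the snippet above is cut off, here is instead a sketch (not Herbert's
code; labels made up) of one way to get per-class TP/FP/FN/TN one-vs-all from
a multiclass confusion matrix:

import numpy as np
from sklearn.metrics import confusion_matrix

y_true = [1, 2, 3, 4, 5, 1, 2, 3, 1, 5]
y_pred = [1, 2, 4, 4, 5, 2, 2, 3, 1, 1]
cm = confusion_matrix(y_true, y_pred)

tp = np.diag(cm)                # correct predictions per class
fp = cm.sum(axis=0) - tp        # predicted as this class, actually another
fn = cm.sum(axis=1) - tp        # this class, predicted as another
tn = cm.sum() - (tp + fp + fn)  # everything else
sensitivity = tp / (tp + fn)    # == per-class recall
specificity = tn / (tn + fp)
balanced_accuracy = (sensitivity + specificity) / 2
print(sensitivity, specificity, balanced_accuracy)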
OK, I think I have it, thanks everyone for the help!
But there is another problem.
How are you calculating the avg?
example:
--- k-NN ---
             precision    recall  f1-score   support

        1.0       0.50      0.43      0.46       129
        2.0       0.31
Scikit-learn has had a default of a weighted (micro-)average. This is a bit
non-standard, so from now on users are expected to specify the average when
using precision/recall/fscore. Once
https://github.com/scikit-learn/scikit-learn/pull/4622 is merged,
classification_report will show all the common
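In the meantime, passing the average explicitly looks something like this (a
small sketch with made-up labels):

from sklearn.metrics import precision_recall_fscore_support

y_true = [1, 1, 2, 2, 3]
y_pred = [1, 2, 2, 2, 3]

# With `average` set, the support entry of the returned tuple is None.
for avg in ("micro", "macro", "weighted"):
    p, r, f, _ = precision_recall_fscore_support(y_true, y_pred, average=avg)
    print(avg, p, r, f)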
About the average: The two common scenarios are micro and macro average (I
think macro is typically the default in scikit-learn) -- you calculated the
macro average in your example.
To further explain the difference between macro and micro, let's consider a
simple 2-class scenario and calculate
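To sketch the kind of 2-class calculation this sets up (imbalanced labels
made up for illustration, not numbers from the thread):

from sklearn.metrics import precision_score

y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 1, 1, 1, 0]

# macro: unweighted mean of the per-class precisions (4/5 and 1/3 here)
print("macro:", precision_score(y_true, y_pred, average="macro"))  # ~0.57
# micro: pool TP and FP over both classes before dividing (5/8 here)
print("micro:", precision_score(y_true, y_pred, average="micro"))  # 0.625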