Re: [Scikit-learn-general] Micro and Macro F-measure for text classification

2015-04-13 Thread Andreas Mueller
Yeah, that would probably be best :) On 04/11/2015 07:04 AM, Joel Nothman wrote: Or report macro and micro in classification_report. Micro is equivalent to accuracy for multiclass without #4287. On 10 April 2015 at 01:00, Andreas Mueller

Re: [Scikit-learn-general] Micro and Macro F-measure for text classification

2015-04-11 Thread Joel Nothman
Or report macro and micro in classification_report. Micro is equivalent to accuracy for multiclass without #4287. On 10 April 2015 at 01:00, Andreas Mueller wrote: > Hi Jack. > You mean in the classification report? > That gives micro-average
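
A minimal sketch of Joel's point that micro-averaged F1 coincides with plain accuracy for multiclass problems when every label is included; the y_true / y_pred arrays below are made-up toy data, not from the thread:

# Micro-averaged F1 equals accuracy for multiclass with all labels included.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 2, 2, 2, 1, 0, 1, 1]

print(accuracy_score(y_true, y_pred))             # 0.75
print(f1_score(y_true, y_pred, average='micro'))  # 0.75, identical to accuracy
print(f1_score(y_true, y_pred, average='macro'))  # generally a different value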

Re: [Scikit-learn-general] Micro and Macro F-measure for text classification

2015-04-09 Thread Andreas Mueller
Hi Jack. You mean in the classification report? That gives the micro-average, from looking at the code: https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/classification.py#L1265 If you use the f1_score function instead, you can specify the averaging scheme: http://scikit-learn.org/st
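
A small sketch of the two options Andreas mentions: classification_report for the per-class breakdown, and f1_score with an explicit average parameter; the labels here are toy data, not taken from the linked example:

# classification_report prints per-class precision/recall/F1 plus averages;
# f1_score lets you choose the averaging scheme explicitly.
from sklearn.metrics import classification_report, f1_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 1, 2, 1, 2]

print(classification_report(y_true, y_pred))

for avg in ('micro', 'macro', 'weighted'):
    print(avg, f1_score(y_true, y_pred, average=avg))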

[Scikit-learn-general] Micro and Macro F-measure for text classification

2015-04-09 Thread Jack Alan
Hi folks, I wonder, for the classification of text documents example available at: http://scikit-learn.org/stable/auto_examples/text/mlcomp_sparse_document_classification.html#example-text-mlcomp-sparse-document-classification-py what sort of F-measure has been used? Is it micro or macro? And how to change it?
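
A rough sketch of how both F-measures could be computed for a text classification setup similar to the linked example. The actual example uses the MLComp 20news data; here fetch_20newsgroups, TfidfVectorizer and MultinomialNB are assumed stand-ins for illustration, not the example's exact pipeline:

# Train a simple TF-IDF + Naive Bayes text classifier, then report both
# micro- and macro-averaged F1 on the held-out test split.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

categories = ['alt.atheism', 'sci.space', 'rec.autos']
train = fetch_20newsgroups(subset='train', categories=categories)
test = fetch_20newsgroups(subset='test', categories=categories)

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(train.data, train.target)
y_pred = clf.predict(test.data)

print('micro F1:', f1_score(test.target, y_pred, average='micro'))
print('macro F1:', f1_score(test.target, y_pred, average='macro'))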