2013/1/14 Andreas Mueller <[email protected]>:
> Hey everybody.
> I've been playing around with the classification score.
> I want something that is called "average precision over classes" in the
> computer vision literature (the standard measure on the MSRC task).
> I'm not entirely sure what that is.
> I thought it would be recall_score. It turns out that recall_score (with
> the default average='weighted') is always the same as accuracy_score on
> my problem.
> Is that expected / normal?
I am not sure. The weighted recall averaging seemed intuitive to me when I wrote it, as a way to handle the multiclass case when the classes are imbalanced, but apparently it is much more common to use either the micro or macro averaging method.

Note that if you do micro-averaging, then f1 == recall == precision:
http://metaoptimize.com/qa/questions/8284/does-precision-equal-to-recall-for-micro-averaging

--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
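[Editor's note] A quick sketch of the two observations above, assuming scikit-learn's `sklearn.metrics` API. With average='weighted', each class's recall is weighted by its support, so the weighted sum is just the overall fraction of correct predictions, i.e. accuracy — which would explain what Andreas is seeing. The toy labels below are made up for illustration.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

# Hypothetical toy multiclass problem with 3 imbalanced classes.
rng = np.random.RandomState(0)
y_true = rng.choice([0, 1, 2], size=200, p=[0.6, 0.3, 0.1])
y_pred = rng.choice([0, 1, 2], size=200, p=[0.6, 0.3, 0.1])

# Weighted recall = sum_c (support_c / N) * recall_c
#                 = (total correct) / N = accuracy.
assert np.isclose(recall_score(y_true, y_pred, average='weighted'),
                  accuracy_score(y_true, y_pred))

# Micro-averaging pools TP/FP/FN over all classes before computing
# the scores, so precision, recall and f1 all coincide.
p = precision_score(y_true, y_pred, average='micro')
r = recall_score(y_true, y_pred, average='micro')
f = f1_score(y_true, y_pred, average='micro')
assert np.isclose(p, r) and np.isclose(r, f)

# Macro-averaging instead gives every class equal weight,
# regardless of its support.
macro_recall = recall_score(y_true, y_pred, average='macro')
```

The macro-averaged recall is the closest match to "average precision/recall over classes" as used in the computer-vision literature, since each class contributes equally.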
