I've not seen this metric used (references?). Am I right in thinking that,
if each per-class accuracy is computed one-vs-rest (so true negatives count
towards it), this is identical to plain accuracy in the binary case? And if
I predict all elements to be the majority class, then adding more minority
classes to the problem increases my score, because each extra minority class
contributes a near-perfect one-vs-rest accuracy of its own. I'm not sure
what this metric is getting at.
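
A quick sketch of what I mean (macro_accuracy below is a hypothetical
helper, assuming the one-vs-rest reading where true negatives count towards
each class's accuracy):

    import numpy as np
    from sklearn.metrics import accuracy_score

    def macro_accuracy(y_true, y_pred):
        # Mean of one-vs-rest accuracies (true negatives included)
        # over the classes present in y_true.
        classes = np.unique(y_true)
        return np.mean([accuracy_score(y_true == c, y_pred == c)
                        for c in classes])

    # Binary case: identical to plain accuracy.
    y_true = np.array([0, 0, 0, 1, 1])
    y_pred = np.array([0, 0, 1, 1, 0])
    print(accuracy_score(y_true, y_pred))  # 0.6
    print(macro_accuracy(y_true, y_pred))  # 0.6

    # Predict everything as the majority class ...
    y_true = np.array([0] * 90 + [1] * 10)
    y_pred = np.zeros_like(y_true)
    print(macro_accuracy(y_true, y_pred))  # 0.9

    # ... then split the minority into two classes: the score goes up.
    y_true = np.array([0] * 90 + [1] * 5 + [2] * 5)
    print(macro_accuracy(y_true, y_pred))  # ~0.933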

On 8 March 2016 at 11:57, Sebastian Raschka <se.rasc...@gmail.com> wrote:

> Hi,
>
> I was just wondering why there’s no support for the average per-class
> accuracy in the scorer functions (if I am not overlooking something).
> E.g., we have 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', but I
> didn't see an 'accuracy_macro', i.e.,
> (acc.class_1 + acc.class_2 + … + acc.class_n) / n
>
> Would you discourage its use (in favor of other metrics for
> imbalanced-class problems), or has it simply not been implemented yet?
>
> Best,
> Sebastian
>
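
Or, in case you instead mean the accuracy restricted to each class's own
samples: that per-class accuracy is just the recall for that class, so its
unweighted mean is already available as macro-averaged recall. A minimal
sketch under that reading:

    import numpy as np
    from sklearn.metrics import recall_score

    y_true = np.array([0, 0, 0, 1, 1, 2])
    y_pred = np.array([0, 0, 1, 1, 1, 0])

    # Accuracy on each class's own samples == recall for that class.
    per_class = [np.mean(y_pred[y_true == c] == c)
                 for c in np.unique(y_true)]
    print(np.mean(per_class))                             # (2/3 + 2/2 + 0/1) / 3
    print(recall_score(y_true, y_pred, average='macro'))  # same value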