Hi,

The best way to get this fixed is to send us a PR updating this file:

https://github.com/scikit-learn/scikit-learn/blob/master/doc/modules/model_evaluation.rst

Thanks for your help.

Alex

On Sun, Aug 30, 2020 at 7:38 AM 최우정 <cwjbria...@gmail.com> wrote:
>
> To whom it may concern,
>
> I am a regular user of scikit-learn, and I am grateful to the project for 
> providing such a useful library. I am writing to report what appear to be 
> some errors in the documentation.
>
> While studying AUC, I believe I found mistakes in the API documentation. 
> In the One-vs-one Algorithm section of 
> https://scikit-learn.org/stable/modules/model_evaluation.html#roc-metrics, 
> the multiclass macro AUC metric is attributed to the reference [HT2001], 
> but the macro OvO AUC in the documentation differs from the one in the 
> reference by a factor of two. Furthermore, below the macro AUC, the 
> documentation gives a weighted AUC that it attributes to [FC2009], yet no 
> such metric appears in that reference; the closest one there is AU1P. 
> After reviewing the code that implements roc_auc_score, I noticed that it 
> differs from both the expression in the documentation and the one in the 
> reference. For reference, my scikit-learn version is 0.23.1. I hope this 
> can be fixed. Thank you for reading my email.
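
To make the comparison concrete, here is a minimal sketch (on made-up data) of
the [HT2001] pairwise macro AUC next to roc_auc_score. The helper hand_till_m
below is illustrative, not part of scikit-learn; if the implementation follows
[HT2001], the two printed values should coincide.

from itertools import combinations

import numpy as np
from sklearn.metrics import roc_auc_score

# Made-up data: 3 classes, random "probability" scores (illustrative only).
rng = np.random.RandomState(0)
n_samples, n_classes = 300, 3
y = rng.randint(0, n_classes, size=n_samples)
scores = rng.rand(n_samples, n_classes)
scores /= scores.sum(axis=1, keepdims=True)  # each row sums to 1

def hand_till_m(y, scores):
    # [HT2001] define M = 2 / (c * (c - 1)) * sum over unordered class
    # pairs (j, k) of A(j, k), where A(j, k) = (A(j|k) + A(k|j)) / 2 and
    # A(j|k) is the AUC separating class j from class k using the score
    # column for class j. Averaging over the c * (c - 1) / 2 unordered
    # pairs below is the same quantity.
    classes = np.unique(y)
    pair_aucs = []
    for j, k in combinations(classes, 2):
        mask = np.isin(y, [j, k])
        a_j_given_k = roc_auc_score(y[mask] == j, scores[mask, j])
        a_k_given_j = roc_auc_score(y[mask] == k, scores[mask, k])
        pair_aucs.append((a_j_given_k + a_k_given_j) / 2)
    return float(np.mean(pair_aucs))

print("Hand & Till M:      ", hand_till_m(y, scores))
print("roc_auc_score (ovo):", roc_auc_score(y, scores, multi_class="ovo",
                                            average="macro"))

On purely random scores like these, both numbers should land near 0.5.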