Hi Stelios.
I haven't heard of that, but I'm no expert on the subject. How does that compare to Cohen's kappa, which I think is the standard for two-rater agreement?
Can you give references for its use and why it is chosen?
And why would you use it as a metric for a regression model instead of R^2?
Why do you say it is robust?

Best,
Andy

On 09/04/2015 08:15 AM, Stylianos Kampakis wrote:
Hello everyone,

I was thinking of adding the concordance correlation coefficient as a metric for regression models, and I wanted to ask first whether you think this is a good idea.

The concordance correlation coefficient is a measure of inter-rater agreement. I stumbled upon it about a year ago, and I realized it is really useful for evaluating regression models.

In short, the RMSE can be heavily affected by outliers. The correlation can be a good alternative in this case. However, the correlation ignores systematic bias in the predictions, so it can still be misleading in some cases. The concordance correlation coefficient measures how closely the predicted and the true values fall on the 45-degree line through the origin.

https://en.wikipedia.org/wiki/Concordance_correlation_coefficient
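To make this concrete, here is a quick sketch of how it could be computed with NumPy (the function name is just for illustration, not an existing scikit-learn API; it follows the definition on the Wikipedia page above):

import numpy as np

def concordance_correlation_coefficient(y_true, y_pred):
    # Lin's CCC: 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_diff = y_true.mean() - y_pred.mean()
    # population (ddof=0) variances and covariance, as in Lin's definition
    covariance = np.mean((y_true - y_true.mean()) * (y_pred - y_pred.mean()))
    return 2 * covariance / (y_true.var() + y_pred.var() + mean_diff ** 2)

# A systematically biased predictor: Pearson correlation is a perfect 1.0,
# but the CCC penalizes the constant offset.
y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = y_true + 2.0
print(np.corrcoef(y_true, y_pred)[0, 1])                    # 1.0
print(concordance_correlation_coefficient(y_true, y_pred))  # 0.5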

Regards,
Stelios

