Hi Andy,

For a linear regression model, R^2 is indeed a good choice of metric,
since it can be interpreted both as the proportion of variance explained
and as the square of the correlation coefficient between the true and the
predicted values. However, its interpretation for other models is less
clear. In generalized linear models, for example, the standard practice is
to compare models by their deviance (or by an information criterion such
as the AIC or BIC), and to report some pseudo-R^2 measure for the fit.
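
For instance, here is a minimal sketch with statsmodels (my choice of
library purely for illustration; any GLM implementation that exposes the
deviance and AIC would do):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.RandomState(0)
    X = sm.add_constant(rng.uniform(size=(100, 2)))  # intercept + 2 features
    y = rng.poisson(lam=np.exp(X @ np.array([0.5, 1.0, -0.5])))

    result = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print(result.deviance)  # deviance of the fitted model
    print(result.aic)       # Akaike information criterion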

The interpretation of R^2 is less useful for machine learning models;
Weka, for example, omits it altogether for regression models. A simple
alternative is to use the correlation between the true and the predicted
values.

The problem with the correlation coefficient, however, is that it ignores
any systematic bias in the predictions. For example, given a vector of
true values *x* and a vector of predictions *y* = *x* + 10, the
correlation will be 1, but the concordance correlation coefficient will
not be 1.
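
To make this concrete, here is a minimal sketch with numpy; the
concordance_correlation_coefficient helper below is my own implementation
of Lin's definition, not an existing scikit-learn function:

    import numpy as np

    def concordance_correlation_coefficient(y_true, y_pred):
        # Lin's CCC: 2*cov / (var_true + var_pred + (mean_true - mean_pred)^2)
        mean_t, mean_p = np.mean(y_true), np.mean(y_pred)
        var_t, var_p = np.var(y_true), np.var(y_pred)
        cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
        return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

    x = np.arange(100.0)
    y = x + 10  # perfectly correlated with x, but systematically biased

    print(np.corrcoef(x, y)[0, 1])                    # 1.0
    print(concordance_correlation_coefficient(x, y))  # ~0.94: the bias is penalized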

The kappa statistic may indeed be the most popular measure of inter-rater
agreement, but it applies only to classification, not to continuous
predictions.
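
For reference, and assuming a scikit-learn version that ships
cohen_kappa_score in sklearn.metrics, kappa operates on discrete labels:

    from sklearn.metrics import cohen_kappa_score

    rater_a = [0, 1, 1, 2, 2, 0]
    rater_b = [0, 1, 2, 2, 2, 0]
    print(cohen_kappa_score(rater_a, rater_b))  # chance-corrected agreement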

The concordance correlation coefficient is indeed not very popular, but
in my experience this is an omission in most packages: alongside other
measures, it can provide a more complete picture of a regressor's
performance.

Best regards,
Stelios

2015-09-05 20:37 GMT+01:00 Andy <t3k...@gmail.com>:

> Hi Stelios.
> I haven't heard of that, but I'm no expert on the subject. How does that
> compare to Cohen's kappa, which I think is the standard for two rater
> agreement?
> Can you give references of its use and why it is chosen?
> And why would you use it as a metric for a regression model instead of R^2?
> Why do you say it is robust?
>
> Best,
> Andy
>
>
> On 09/04/2015 08:15 AM, Stylianos Kampakis wrote:
>
> Hello everyone,
>
> I was thinking of adding the concordance correlation coefficient as a metric
> for regression models and I wanted to ask first whether you think this is a
> good idea.
>
> The concordance correlation coefficient is a measure of inter-rater
> agreement. I stumbled upon it about a year ago, and I realized it is
> really useful for evaluating regression models.
>
> In short, the RMSE is strongly affected by outliers. The correlation can be a
> good alternative in this case. However, the correlation ignores systematic
> bias in the predictions. So, it can still be misleading in some cases. The
> concordance correlation coefficient measures how closely the predicted and
> the true values fall on the 45 degree line through the origin.
>
> https://en.wikipedia.org/wiki/Concordance_correlation_coefficient
>
> Regards,
> Stelios
_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
