"Rajarshi Guha" <[EMAIL PROTECTED]> wrote:
"I have some neural network models which make predictions for a
dataset. When comparing various models we evaluate their effectiveness
by looking at the RMS error and the value of R^2 between the predicted
and actual values.

However, I seem to have read somewhere that R^2 is not always a 'good
indicator' - in that a data set can be randomly generated yet show a
good R^2. Is this true? And if so, does anybody know how I can
reference this (paper/book)?"


"CybercafeUser" <[EMAIL PROTECTED]> responded:
"Yes it is.
Look at almost all econometrics books"


This is a classic case of "argumentum ad verecundiam".  Whether
R-squared is a "good indicator" of fit or not depends on the context
of the problem; no single measure is "best" for all purposes.
R-squared is a measure of *linear* relation between variables.
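The questioner's worry is easy to demonstrate: R-squared on the
training data can be driven high by pure chance when the number of
fitted parameters is large relative to the number of observations.  A
minimal sketch (my own illustration, not from any of the references
below) fits ordinary least squares to completely random data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure noise: 20 observations, 15 random predictors, random response.
# None of the predictors has any real relation to y.
n, p = 20, 15
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Ordinary least squares with an intercept column.
Xc = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)

# R^2 = 1 - SS_residual / SS_total, computed on the fitted data.
resid = y - Xc @ beta
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(f"R^2 on pure noise: {r2:.3f}")  # typically well above 0.5 here
```

With p predictors and n observations of noise, the expected in-sample
R-squared is roughly p/(n-1), so "good-looking" values are routine
when the model is flexible and the sample is small.  This is exactly
why training-set R-squared alone says little about predictive value
for a neural network.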

Some interesting commentary may be found at:
  http://www.statisticalengineering.com/r-squared.htm
  
http://faculty.mville.edu/derrellr/metrics%20notes/ap%20metrics%20f%2003%20notes%20chp%202.pdf

-Will Dwinnell
http://will.dwinnell.com
=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
.                  http://jse.stat.ncsu.edu/                    .
=================================================================
