Hello,
  I was wondering whether anybody would be able to help with this query.


I have some neural network models which make predictions for a dataset. When
comparing various models we evaluate their effectiveness by looking at the RMS
error and the value of R^2 between the predicted and actual values.

However, I seem to have read somewhere that R^2 is not always a 'good
indicator' - in that a randomly generated data set can still show a good
R^2. Is this true? And if so, does anybody know of a reference for this
(paper/book)?
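[Editorial note: the effect asked about is easy to demonstrate by simulation. The sketch below (not part of the original post; the sample size and threshold are illustrative choices) draws pairs of completely independent random vectors and checks how often they show a high R^2 purely by chance - with small samples, it happens a non-trivial fraction of the time.]

```python
# Sketch: independent (random) x and y can still yield a high R^2 by chance,
# especially with few data points. Sample size and cutoff here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_points = 5        # a small dataset, chosen for illustration
n_trials = 10_000

high = 0
for _ in range(n_trials):
    x = rng.standard_normal(n_points)
    y = rng.standard_normal(n_points)   # independent of x by construction
    r = np.corrcoef(x, y)[0, 1]         # Pearson correlation
    if r ** 2 > 0.8:                    # "good-looking" R^2 threshold
        high += 1

print(f"{high / n_trials:.1%} of random {n_points}-point datasets had R^2 > 0.8")
```

With only 5 points per dataset, a few percent of purely random datasets exceed R^2 = 0.8, which illustrates why R^2 alone can mislead on small samples.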

Thanks,
Rajarshi
=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
.                  http://jse.stat.ncsu.edu/                    .
=================================================================
