I think it's important to say why you're unhappy with your current measures?
Are they not capturing aspects of the data you understand?
I typically use several residual measures in conjunction; each has its
benefits and drawbacks. I just throw them all in a table.
There are many ways to measure prediction quality, and what you choose
depends on the data and your goals. A common measure for a
quantitative response is mean squared error (i.e. 1/n * sum((observed
- predicted)^2)), which incorporates both bias and variance. Common
terms for what you are looking for include out-of-sample error, test
error, or generalization error.
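The out-of-sample MSE described above can be sketched in R as follows. This is a minimal illustration, not code from the thread: the data frames `A` (training) and `B` (test) and the simulated linear relationship are made up for the example.

```r
## Sketch: mean squared error on an independent test set.
## A = training data, B = test data (simulated here for illustration).
set.seed(1)
A <- data.frame(x = runif(100))
A$Y <- 2 * A$x + rnorm(100, sd = 0.1)
B <- data.frame(x = runif(50))
B$Y <- 2 * B$x + rnorm(50, sd = 0.1)

M <- glm(Y ~ x, data = A)          # train the model on A
pred <- predict(M, newdata = B)    # predict on the separate set B
mse <- mean((B$Y - pred)^2)        # MSE = (1/n) * sum((observed - predicted)^2)
mse
```

The same pattern works for any model with a `predict` method (gam, rpart, etc.): fit on A, call `predict` with `newdata = B`, then compare predictions to the observed response in B.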
Dear R-friends,
How do you test the goodness of prediction of a model, when you predict on a
set of data DIFFERENT from the training set?
Let me explain: you train your model M (e.g. glm, gam, regression tree, brt)
on a set of data A with a response variable Y. You then predict the value of
Y on a different set of data B.