Hello

I am trying to fit an LDA model and an RDA model to the same data, which
has two classes. The problem is that the training error of the LDA model
and the training error of the RDA model with alpha=0 are not the same. My
understanding is that they should be identical. Am I wrong? Can someone
explain what the reason for this difference could be?

Here is my code:

LDA model:
===========
# x is a data frame containing the response and the predictors
library(MASS)
tmp = lda(response ~ ., data=x)
tmp.hat = predict(tmp)
tab = table(x$response, tmp.hat$class)
lda.training.err = 1 - sum(tab[row(tab) == col(tab)]) / sum(tab)
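
For reference, here is a self-contained version of the LDA step on a
built-in two-class data set (iris with one species dropped). This is only
a stand-in for my real data, so that the question is reproducible:

library(MASS)                      # provides lda()
dat = droplevels(subset(iris, Species != "setosa"))
fit.lda = lda(Species ~ ., data=dat)
pred.lda = predict(fit.lda)$class
tab = table(dat$Species, pred.lda)
1 - sum(diag(tab)) / sum(tab)      # resubstitution (training) error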

RDA model:
===========
# x is converted into a matrix without the response variable
# and then transposed (features in rows, samples in columns);
# y holds the corresponding class labels
library(rda)
tmp = rda(x, y, alpha=0, delta=0)
rda.training.err = tmp$error / dim(x)[2]

# The training error reported by rda.cv() is also different
# from the training errors given by lda() and rda()
tmp.cv = rda.cv(tmp, x=x, y=y, nfold=10)
tmp.cv$err / dim(x)[2] / 10
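
And the corresponding RDA step on the same stand-in data. I assume here the
rda package of Guo, Hastie and Tibshirani, which expects a
features-by-samples matrix and class labels coded 1, 2, ...; dividing by the
number of samples (and by 10 for the CV error) is my own interpretation of
the error counts:

library(rda)                       # provides rda() and rda.cv()
dat = droplevels(subset(iris, Species != "setosa"))
xm = t(as.matrix(dat[, 1:4]))      # features in rows, samples in columns
yv = as.numeric(dat$Species)       # class labels 1 and 2
fit.rda = rda(xm, yv, alpha=0, delta=0)
fit.rda$error / ncol(xm)           # training error rate
cv = rda.cv(fit.rda, x=xm, y=yv, nfold=10)
cv$err / ncol(xm) / 10             # CV error rate (my interpretation)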


Thanks a lot!
