My experience is that nnet needs a lot of tuning, not only in terms of
the number of layers, but also in terms of the other parameters. My
first results with nnet, where I kept most of the default parameter
values, were very bad, as bad as you say. (But as Brian Ripley already
wrote, it's not
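The tuning being described, searching over hidden-layer size and weight decay instead of accepting the defaults, can be sketched roughly as follows. This is Python with scikit-learn's MLPClassifier standing in for R's nnet; the dataset, grid values, and pipeline are illustrative assumptions, not from the thread.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# Defaults are rarely adequate for a neural net: search over the
# hidden-layer size and the regularization strength (alpha here,
# roughly the role of decay in nnet) instead of accepting them.
grid = {
    "mlpclassifier__hidden_layer_sizes": [(2,), (5,), (10,)],
    "mlpclassifier__alpha": [1e-4, 1e-2, 1e-1],
}
pipe = make_pipeline(StandardScaler(),
                     MLPClassifier(max_iter=2000, random_state=0))
search = GridSearchCV(pipe, grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Scaling the inputs before fitting matters as much as the grid itself; unscaled inputs are a common reason a net looks far worse than a tree.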
On Sun, 14 Mar 2004, Albedo wrote:
> The only thing that I could have done wrong with nnet (that I can
> think of) is not enough neurons in the hidden layer, but then again
> this is actually limited by my computer memory.
Perhaps you had too many, not too few? Perhaps you didn't choose the
we
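The "too many, not too few" point can be made concrete with a back-of-the-envelope weight count: for a single-hidden-layer net the number of weights grows quickly with the hidden-layer size, which is what drives both overfitting and memory use. The input/output dimensions below are illustrative assumptions.

```python
def n_weights(p, h, k):
    """Weight count for a single-hidden-layer network:
    (p+1)*h input-to-hidden weights (including biases) plus
    (h+1)*k hidden-to-output weights (including biases)."""
    return (p + 1) * h + (h + 1) * k

# Hypothetical problem: 20 inputs, 3 classes, growing hidden layer.
for h in (5, 50, 500):
    print(h, n_weights(p=20, h=h, k=3))
```

With hundreds of hidden units the weight count runs into the tens of thousands, and a quasi-Newton optimizer's working memory grows roughly with the square of that, so hitting a memory limit is itself a sign the net may already be larger than the data supports.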
The only thing that I could have done wrong with nnet (that I can
think of) is not enough neurons in the hidden layer, but then again
this is actually limited by my computer memory. However, I did
estimate the error a little bit differently - I have enough data for a
test set, which I used for classifica
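Estimating the misclassification error on a held-out test set, as described above, can be sketched like this; the dataset and model are illustrative stand-ins, not the poster's actual setup.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Hold out 30% of the data; the model never sees it during fitting.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
err = 1 - clf.score(X_te, y_te)  # misclassification rate on unseen data
print(round(err, 3))
```

A single held-out split is a valid estimate when the test set is large enough, which is the condition the poster says holds; with less data, cross-validation (as the reply below this message suggests) gives a lower-variance estimate.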
I think that you are using nnet incorrectly. I have compared several
classifiers (including the ones that you mention in your e-mail) on
the same dataset and I have never found more than a 20% difference in
the misclassification error. Of course, I estimated the
misclassification error by cross-validation.
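The comparison described, estimating the misclassification error of a net and a tree on the same data by cross-validation, can be sketched as follows. scikit-learn stands in for R's nnet and rpart, and the dataset and hyperparameters are illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Same data, same 5-fold splits, two classifiers.
models = {
    "net": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(10,),
                                       alpha=1e-2, max_iter=2000,
                                       random_state=0)),
    "tree": DecisionTreeClassifier(random_state=0),
}
errors = {name: 1 - cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
print(errors)
```

On a dataset like this, a minimally tuned net and a tree land within a few percentage points of each other, which is the point being made: a gap much larger than that usually signals a usage problem rather than a genuine difference between the methods.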
I was wondering if anybody has ever tried to compare the
classification accuracy of nnet to other models (rpart, tree,
bagging). From what I know, there is no reason to expect a significant
difference in classification accuracy between these models, yet in my
particular case I get about 10% error rate