Dear all,
Could someone suggest strategies for detecting fitting problems in
neural network estimation?

I'm using the nnet package to fit standardized simulated data (several
thousand fits are required).
The estimation is generally fine, but occasionally (about 1-3 in every 1000
fits) I find excessively large final weights in the neural network
(and hence output saturation). In my specific application this is not a
real problem, and I can simply check whether the fitted values
are constant (which is what I've observed in those bad fits), but I'm
wondering whether there are better
strategies for flagging a fitted model as "possibly wrong".
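To make the question concrete, here is a minimal sketch of the kind of post-fit check I have in mind, based on the two symptoms described above (very large weights and near-constant fitted values). The function name and the thresholds `wt_limit` and `sd_tol` are illustrative assumptions, not anything from nnet itself:

```r
# Hypothetical helper: flag a fit as "possibly wrong" when the final
# weights are very large (a saturation symptom) or the fitted values
# are essentially constant (a degenerate output). Thresholds are
# illustrative and would need tuning for a given application.
flag_bad_fit <- function(wts, fitted, wt_limit = 10, sd_tol = 1e-6) {
  big_weights  <- max(abs(wts)) > wt_limit   # saturation symptom
  constant_fit <- sd(fitted) < sd_tol        # near-constant output
  big_weights || constant_fit
}

# Usage on a fitted model, assuming something like fit <- nnet(...):
#   flag_bad_fit(fit$wts, fitted(fit))
```

This only codifies the ad-hoc check; a principled convergence diagnostic would obviously be preferable.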

For example, is there a way to check whether convergence was reached during
optimization of the error criterion?

Thanks to all,
Antonio, Fabio Di Narzo.


______________________________________________
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html