Hi,
I used the nnet R package to classify my data with a neural network:
nnet(input_matrix, obs_vect, size=h, linout=FALSE, entropy=TRUE)
I first fed nnet my "raw" (unscaled) input data.
I then tried the normalized input data (z-scores, i.e. each column
centered to mean = 0 and scaled to sd = 1) and found networks with a
slightly smaller cross-entropy error.
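
For concreteness, here is a minimal sketch of the two fits I compared
(the simulated data, the seed, and the hidden-layer size h = 3 are
placeholders, not my real data):

library(nnet)

set.seed(1)
input_matrix <- matrix(rnorm(200 * 5, mean = 50, sd = 10), ncol = 5)
obs_vect     <- rbinom(200, 1, 0.5)  # 0/1 targets, as entropy = TRUE expects
h <- 3

## fit on the raw inputs
fit_raw <- nnet(input_matrix, obs_vect, size = h,
                linout = FALSE, entropy = TRUE)

## fit on z-scored inputs (each column: mean 0, sd 1)
fit_std <- nnet(scale(input_matrix), obs_vect, size = h,
                linout = FALSE, entropy = TRUE)

## $value is the final fitting criterion (here the cross-entropy,
## plus any weight-decay term)
c(raw = fit_raw$value, standardized = fit_std$value)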
My question is:
Is it *wrong* to feed nnet the raw input data directly?

I found at
http://www.faqs.org/faqs/ai-faq/neural-nets/part2/section-16.html
that it depends on the training (minimization) algorithm:
- "Steepest descent is very sensitive to scaling.
- Quasi-Newton and conjugate gradient methods... therefore are scale 
sensitive. However,... are less scale sensitive than pure gradient
descent.
- Newton-Raphson and Gauss-Newton, if implemented correctly, are
theoretically invariant under scale changes..."

I know that nnet fits via a quasi-Newton method (BFGS), so it makes
sense that I found a small improvement using the normalized data.
Can someone confirm whether this is really the case?
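
In case it is relevant: when I standardize, I keep the training-set
means and standard deviations and reuse them for new data at
prediction time, along these lines (continuing the sketch above;
new_matrix stands in for unseen inputs):

std_train <- scale(input_matrix)
ctr <- attr(std_train, "scaled:center")
sds <- attr(std_train, "scaled:scale")

new_matrix <- matrix(rnorm(20 * 5, mean = 50, sd = 10), ncol = 5)
new_std    <- scale(new_matrix, center = ctr, scale = sds)
pred       <- predict(fit_std, new_std)  # raw network outputs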

Thank you very much!

