On Sat, 13 Sep 2003, Ovid wrote:
> Hi all,
>
> I'm working with AI::NeuralNet::Mesh and I've seen a few areas where it can
> be improved slightly. Mainly, I can make it run clean under warnings and,
> according to some initial benchmarks, I can give it a nice performance boost
> with a few tweaks. However, I backed out my changes to be able to build a
> more comprehensive test suite to ensure that I don't break anything. This
> raises a question for me.
>
> After training the neural network, assuming that I am using the same
> training data every time (in the same order), are the results deterministic
> across operating systems, CPUs, Perl versions, etc.? From reading through
> the code, I don't see anything that would cause problems here, but I'm not
> sure.
>
> If the results *are* deterministic then I can go ahead and build the test
> suite and send this back to the author. Otherwise, I can only build the
> tests for myself, but I'd prefer to be able to let others take advantage of
> my work.
Some speedups would certainly be welcome. I don't know the module code, but a
few general considerations bear on perfect repeatability across OSes and CPUs:
some OS/CPU combinations compute with doubles and some with long doubles, and
the special function implementations (for the logistic and tanh activation
functions) vary from one C library to another.

Theoretically, the error landscape of an NN optimization is full of local
minima, which implies the existence of separatrices that can magnify even
small numerical discrepancies. I have no idea, however, whether this is a
practical problem for NN testing; if one tested just a single batch learning
step, I can't see how the divergence would grow large.
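One way around the precision and libm issues, if the divergence really does
stay small, would be for the tests to compare against recorded reference
outputs within a tolerance rather than exactly. A rough sketch of what I have
in mind (untested; the reference value, the tolerance, and the is_near helper
are all invented for illustration, and the actual construction/training calls
would follow the AI::NeuralNet::Mesh docs):

  use strict;
  use warnings;
  use Test::More tests => 1;

  # Treat two floats as equal if they agree to within $eps, so that
  # double vs. long double arithmetic and libm differences across
  # platforms don't cause spurious test failures.
  sub is_near {
      my ($got, $expected, $eps, $name) = @_;
      ok(abs($got - $expected) <= $eps, $name)
          or diag("got $got, expected $expected, eps $eps");
  }

  # Reference output recorded from one training run on one platform
  # (the number here is made up):
  my $reference = 0.731;

  # ... construct and train the mesh on the fixed data set, then:
  # my $out = $net->run(\@inputs);
  my $out = [0.7310002];    # stand-in for the network's output

  is_near($out->[0], $reference, 1e-6, 'output stable within tolerance');

If the results turn out to be bit-for-bit identical everywhere, the tolerance
can simply be tightened toward zero.

--
Mark Kvale, neurobiophysicist
http://www.keck.ucsf.edu/~kvale/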