In essence, *all* the torch code <https://github.com/torch/nn> is matrix/tensor operations. The only exception is the function StochasticGradient:train, which performs the training iteration, and it amounts to a vanishingly small fraction of the code base.
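To see what "the training iteration is nothing but matrix operations" means, here is a minimal sketch in numpy (not torch's actual Lua code). The network shape, the tanh nonlinearity, the mean-squared-error loss, and the learning rate are all illustrative choices of mine, but every step, forward pass, loss, gradients, and update, really is just array arithmetic:

```python
import numpy as np

# One training iteration for a tiny one-hidden-layer network, written
# entirely as matrix/tensor operations. This mirrors in spirit what
# StochasticGradient:train does in torch/nn; the shapes and hyper-
# parameters here are illustrative, not torch's.

rng = np.random.default_rng(0)

# The entire "knowledge" of the network: two weight matrices.
W1 = rng.standard_normal((3, 4)) * 0.1   # input (3) -> hidden (4)
W2 = rng.standard_normal((4, 1)) * 0.1   # hidden (4) -> output (1)

def train_step(x, y, W1, W2, lr=0.1):
    # Forward pass: matrix multiplies plus an elementwise nonlinearity.
    h = np.tanh(x @ W1)                   # hidden activations
    y_hat = h @ W2                        # predictions
    # Mean-squared-error loss (a "Criterion" in torch/nn terms).
    loss = float(np.mean((y_hat - y) ** 2))
    # Backward pass: the partial derivatives, again pure matrix algebra.
    d_out = 2 * (y_hat - y) / y.size
    dW2 = h.T @ d_out
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ d_h
    # Gradient-descent update: subtract scaled gradient arrays.
    return W1 - lr * dW1, W2 - lr * dW2, loss

x = rng.standard_normal((8, 3))           # a mini-batch of 8 inputs
y = rng.standard_normal((8, 1))           # target values
_, _, loss0 = train_step(x, y, W1, W2)    # loss before any training
for _ in range(200):
    W1, W2, loss = train_step(x, y, W1, W2)
print(loss < loss0)                       # training reduced the loss
```

Note that the statistics and partial derivatives haven't vanished; they are simply expressed as operations on arrays.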
In retrospect, this makes sense. The result of training is nothing but an array of numbers, so everything else, including the loss functions (Criteria), must also be expressed in that form. Still, it was a big, big shock; I'll never forget it.

I'm trying to think what I was expecting. First, I was expecting some form of statistics and partial derivatives. They are there, I think, but expressed in matrix/tensor form. Second, I'm so used to thinking in terms of algorithms. To see a supremely important "algorithm" that is nothing but math operations was quite a shock. It shows the latent power of mathematics.

Evolution almost certainly acts on similar arrays of numbers, namely the weights attached to synapses. Just as evolution is "merely" a process, without any understanding, direction or purpose, the deep learning algorithm has no understanding, direction or purpose. *Processes are not the kinds of things that can have understanding, direction or purpose!* On the other hand, just as we can speak of selection pressure when talking about evolution, the *results* of deep learning are agents that appear to understand games (and other things) very deeply indeed.

Two questions come to mind. First, how is it that arrays can be trained so effectively? Second, how can those arrays then drive actions, say the playing of Atari games with super-human skill? I'll be investigating these questions in my spare time.

Edward

--
You received this message because you are subscribed to the Google Groups "leo-editor" group.
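P.S. To make the second question concrete: once trained, the arrays drive actions through nothing more than a forward pass. A minimal sketch, in which the observation vector, the four-action game, and the weights (random stand-ins for trained ones) are all hypothetical:

```python
import numpy as np

# How an array of numbers "plays" a game: the observation is multiplied
# through the weight matrices, and the action with the largest output
# score is chosen. The weights below are random stand-ins; in a trained
# agent they would be the result of many gradient-descent iterations.

rng = np.random.default_rng(1)

ACTIONS = ["noop", "left", "right", "fire"]   # hypothetical action set

W1 = rng.standard_normal((16, 32)) * 0.1      # observation (16) -> hidden (32)
W2 = rng.standard_normal((32, 4)) * 0.1       # hidden (32) -> one score per action

def choose_action(observation):
    # Pure matrix math: no rules, no search, no explicit "understanding".
    h = np.tanh(observation @ W1)
    scores = h @ W2
    return ACTIONS[int(np.argmax(scores))]

obs = rng.standard_normal(16)                 # a stand-in for a game frame
print(choose_action(obs))                     # prints one of the four actions
```

The process that picks the action understands nothing, yet, repeated frame after frame with well-trained weights, it can look like superhuman play.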
