Hi all. I'm posting this question to the list, since it might be related to the
MLP being developed.

I found that two versions of the error (delta) term for the output layer of an
MLP are used in the literature:


   1. \delta_o = (y - a) f'(z)
      http://ufldl.stanford.edu/wiki/index.php/Backpropagation_Algorithm
   2. \delta_o = (y - a)
      http://www.idsia.ch/NNcourse/backprop.html

Given that both references use the same sigmoid activation function and loss
function, how can the error term be different? Note also that the two error
terms will ultimately propagate different errors back to the hidden layers.
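For concreteness, here is a small NumPy sketch (my own illustration, not taken
from either reference) comparing the two delta terms for a sigmoid output unit
on toy values; it shows that they differ exactly by the factor f'(z) = a(1 - a):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy values: pre-activations z, activations a, targets y (made up for illustration).
z = np.array([0.5, -1.0, 2.0])
a = sigmoid(z)
y = np.array([1.0, 0.0, 1.0])

# Version 1 (UFLDL): delta = (y - a) * f'(z), where f'(z) = a * (1 - a) for the sigmoid.
delta_v1 = (y - a) * a * (1.0 - a)

# Version 2 (IDSIA): delta = (y - a).
delta_v2 = y - a

# The two versions differ by the multiplicative factor a * (1 - a),
# so they propagate different errors back to the hidden layers.
print(delta_v1)
print(delta_v2)
```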

-- 
Best Wishes
--------------------------------------------
Meng Xinfan(蒙新泛)
Institute of Computational Linguistics
Department of Computer Science & Technology
School of Electronic Engineering & Computer Science
Peking University
Beijing, 100871
China
_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
