On Wed, Jun 06, 2012 at 04:38:16PM +0800, xinfan meng wrote:
> Hi, all. I post this question to the list, since it might be related to the
> MLP being developed.
> 
> I found two versions of the error function for output layer of MLP are used
> in the literature.
> 
> 
>    1. \delta_o = (y-a) f'(z)
>    http://ufldl.stanford.edu/wiki/index.php/Backpropagation_Algorithm
>    2. \delta_o = (y-a)  http://www.idsia.ch/NNcourse/backprop.html
> 
> Given that they both use the same sigmoid activation function and loss
> function, how can the error terms be different? Also note that the two
> error terms will ultimately propagate different errors back into the
> hidden layers.

If the output layer has no nonlinearity, then "f(z)" is the identity function
and f'(z) is just 1.

If you have a nonlinearity, you need to backpropagate through it, which is
where the f'(z) comes from.
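To make the two cases concrete, here is a small sketch (names and values are illustrative, not from either tutorial) computing the output delta for a single unit under squared error, once with a linear output and once with a sigmoid output:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical pre-activation and target for one output unit.
z = np.array([0.5])
y = np.array([1.0])

# Case 1: linear (identity) output, so f'(z) = 1 and delta = (y - a).
a_linear = z
delta_linear = y - a_linear

# Case 2: sigmoid output, so f'(z) = a * (1 - a) and
# delta = (y - a) * f'(z) -- the extra factor from backpropagating
# through the nonlinearity.
a_sigmoid = sigmoid(z)
delta_sigmoid = (y - a_sigmoid) * a_sigmoid * (1.0 - a_sigmoid)

print(delta_linear, delta_sigmoid)
```

The sigmoid delta is the linear delta scaled by f'(z), which is what version 1 above adds over version 2.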

Note that in both those examples, they are using squared error, which is only
really appropriate for real-valued targets. Cross-entropy is much more
appropriate for classification with softmax outputs. You can derive other
cross-entropy-based error functions if you're predicting a collection of
binary targets.
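Incidentally, with softmax outputs and cross-entropy loss the f'(z) factor cancels and the output delta again reduces to (y - a), in the same sign convention as above. A quick numerical check of that cancellation (all names here are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def cross_entropy(z, y):
    return -np.sum(y * np.log(softmax(z)))

z = np.array([0.2, -1.0, 0.7])   # arbitrary pre-activations
y = np.array([0.0, 0.0, 1.0])    # one-hot target

# Claimed closed form: delta = (y - a), no f'(z) factor.
analytic = y - softmax(z)

# Numerical gradient of the loss w.r.t. z via central differences;
# the delta should equal its negative.
eps = 1e-6
numeric = np.array([
    (cross_entropy(z + eps * np.eye(3)[i], y)
     - cross_entropy(z - eps * np.eye(3)[i], y)) / (2 * eps)
    for i in range(3)
])

print(np.allclose(analytic, -numeric, atol=1e-5))
```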

David

_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general