Thank you. I see the differences now. Your explanation should be put into
the MLP docs :-)
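[Editor's note: David's actual explanation is not preserved in this archive. The standard account of the two conventions can be sketched as follows; this summary is mine, so treat it as a reconstruction, not a quote. Version 1 comes from differentiating the squared error through a general output activation f (some sources, like the linked UFLDL page, write \delta with an explicit minus sign; others fold the sign into the update rule). Version 2 is what remains when the loss is the cross-entropy paired with a sigmoid output, where the f'(z) factor cancels; this is the form Bishop derives in PRML.]

```latex
% Version 1: squared error, general output activation a = f(z)
\begin{align*}
E &= \tfrac{1}{2}(y - a)^2, \qquad a = f(z) \\
\delta_o &= \frac{\partial E}{\partial z} = -(y - a)\, f'(z)
\end{align*}

% Version 2: cross-entropy with a sigmoid output, a = \sigma(z);
% the \sigma'(z) = a(1-a) factor cancels.
\begin{align*}
E &= -\bigl[\, y \log a + (1 - y) \log(1 - a) \,\bigr] \\
\delta_o &= \frac{\partial E}{\partial a}\,\sigma'(z)
          = \frac{a - y}{a(1 - a)}\, a(1 - a) = a - y
\end{align*}
```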
On Thu, Jun 7, 2012 at 2:27 AM, David Warde-Farley <
warde...@iro.umontreal.ca> wrote:
On Wed, Jun 06, 2012 at 04:38:16PM +0800, xinfan meng wrote:
> Hi, all. I'm posting this question to the list, since it might be related
> to the MLP being developed.
>
> I found that two versions of the error term for the output layer of an MLP
> are used in the literature.
>
>  1. \delta_o = (y-a) f'(z)
Yes, I think your explanation is correct. Thanks.
Those notation differences really confuse me, given that an MLP is much
more complex than a Perceptron. :-(
On Wed, Jun 6, 2012 at 8:59 PM, David Marek wrote:
On Wed, Jun 6, 2012 at 1:50 PM, xinfan meng wrote:
>
> I think these two delta_o have the same meaning. If you have "Pattern
> Recognition and Machine Learning" by Bishop, you can find that Bishop uses
> exactly the second formula in the backpropagation algorithm. I suspect
> these two formulae lead to the same update iterations, but
Thanks for your reply.
I think these two delta_o have the same meaning. If you have "Pattern
Recognition and Machine Learning" by Bishop, you can find that Bishop uses
exactly the second formula in the backpropagation algorithm. I suspect
these two formulae lead to the same update iterations, but
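[Editor's note: whether the two deltas give the same updates can be checked numerically. The sketch below is my addition, not part of the original thread. It uses finite differences on a single sigmoid output layer to show that (a - y) f'(z) (version 1, up to sign convention) is the gradient of the squared error, while a - y (version 2) is the gradient of the cross-entropy, so the two coincide only when the f'(z) factor cancels.]

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A single sigmoid output layer: pre-activation z, output a, target y.
z = rng.normal(size=5)
y = rng.uniform(size=5)
a = sigmoid(z)

def num_grad(loss, z, eps=1e-6):
    # Central finite differences of loss w.r.t. each z[i].
    g = np.zeros_like(z)
    for i in range(len(z)):
        zp, zm = z.copy(), z.copy()
        zp[i] += eps
        zm[i] -= eps
        g[i] = (loss(zp) - loss(zm)) / (2 * eps)
    return g

# Squared error E = 0.5 * sum((a - y)^2)  =>  dE/dz = (a - y) * sigmoid'(z)
sq = lambda z: 0.5 * np.sum((sigmoid(z) - y) ** 2)
delta1 = (a - y) * a * (1 - a)   # version 1, with the f'(z) = a(1-a) factor
assert np.allclose(num_grad(sq, z), delta1)

# Cross-entropy E = -sum(y*log a + (1-y)*log(1-a))  =>  dE/dz = a - y
ce = lambda z: -np.sum(y * np.log(sigmoid(z)) + (1 - y) * np.log(1 - sigmoid(z)))
delta2 = a - y                   # version 2, the f'(z) factor has cancelled
assert np.allclose(num_grad(ce, z), delta2)

print("both deltas match their own loss gradients")
```

So each formula is the exact gradient for its own loss; in general they differ by the factor f'(z), which is why the updates only agree when the loss and output activation are the matching (canonical) pair.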
Hi
On Wed, Jun 6, 2012 at 10:38 AM, xinfan meng wrote:
> Hi, all. I'm posting this question to the list, since it might be related
> to the MLP being developed.
>
> I found that two versions of the error term for the output layer of an MLP
> are used in the literature.
>
>  1. \delta_o = (y-a) f'(z)
>
Hi, all. I'm posting this question to the list, since it might be related to
the MLP being developed.
I found that two versions of the error term for the output layer of an MLP
are used in the literature.
 1. \delta_o = (y-a) f'(z)
http://ufldl.stanford.edu/wiki/index.php/Backpropagation_Algorithm
2.