I think this idea would be something like:
y = [1, 2, 3, 0]
y_current_avgpool = (1 + 2 + 3 + 0) / 4
y_new_avgpool = (1 + 2 + 3) / 3
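A quick plain-Python check of the arithmetic above — the usual average divides by the window size, while the proposed one divides by the number of activated (non-zero) entries:

```python
# Average over all elements vs. average over only the non-zero entries.
y = [1, 2, 3, 0]

current_avgpool = sum(y) / len(y)              # (1 + 2 + 3 + 0) / 4 = 1.5
activated = [v for v in y if v > 0]
new_avgpool = sum(activated) / len(activated)  # (1 + 2 + 3) / 3 = 2.0

print(current_avgpool, new_avgpool)
```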
I'm not sure there is a simple way to do this currently. You could do
sum pooling first, then compute the divisors by counting the number of
non-zero elements in each pooling window, using
http://deeplearning.net/software/theano/library/tensor/nnet/neighbours.html#theano.tensor.nnet.neighbours.images2neibs
and T.switch.
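A minimal NumPy sketch of that idea, for illustration only — in Theano you would use images2neibs to extract the pooling windows and T.switch to guard the all-zero case; `masked_avg_pool` is a hypothetical helper name, not an existing API:

```python
import numpy as np

def masked_avg_pool(x, k):
    """Average-pool non-overlapping k x k windows over only non-zero entries.

    x: 2-D array whose side lengths are multiples of k.
    Windows that are entirely zero yield 0 (the role of the T.switch guard).
    """
    h, w = x.shape
    # Split x into non-overlapping k x k windows, one per row
    # (this is roughly what images2neibs does in Theano).
    windows = x.reshape(h // k, k, w // k, k).swapaxes(1, 2).reshape(-1, k * k)
    sums = windows.sum(axis=1)           # sum pooling
    counts = (windows > 0).sum(axis=1)   # divisors: number of activated pixels
    # Avoid division by zero for all-zero windows.
    out = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    return out.reshape(h // k, w // k)

x = np.array([[1., 2., 0., 0.],
              [3., 0., 0., 0.],
              [0., 5., 0., 7.],
              [0., 0., 0., 0.]])
print(masked_avg_pool(x, 2))  # top-left window: (1 + 2 + 3) / 3 = 2.0
```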
On Wednesday, August 9, 2017 at 11:36:29 AM UTC-7, nouiz wrote:
>
> I don't understand the problem with using normal operations. Can you share
> that code? I don't see more of a problem with that implementation than with
> a normal average pooling.
>
> On Tue., Aug. 8, 2017 at 07:36, Feras Almasri <fsal...@gmail.com>
> wrote:
>
>> I want to have a node that takes the average of only the activated points
>> in the last feature map. What I mean by activated points is any pixel
>> higher than zero. Instead of taking the global average of the full feature
>> map, I'd rather take it over only the activated pixels.
>> If I just do this with normal operations, then the gradient will be
>> discontinuous, in the sense that the locations for backpropagation are lost.
>>
>> Any hint or advice would be appreciated, thank you.
>>
>> --
>>
>> ---
>> You received this message because you are subscribed to the Google Groups
>> "theano-users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to theano-users...@googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>