Yes, most of the work in this tutorial treats transposed convolution as applying a convolution with padding; otherwise the result doesn't match the input size.
What I can't see in this tutorial is how to apply deconvolution to the output (say, one feature map from the first layer) to reproduce the input image at its original size, so as to figure out which pattern in the original image activated that feature map.
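For the single-feature-map case, here is a minimal NumPy sketch of what transposed convolution does (this is an illustration of the operation itself, not the Theano API; in Theano, conv2d_grad_wrt_inputs performs the equivalent computation):

```python
import numpy as np

def conv2d_transpose(fmap, kernel, stride=1):
    """Transposed convolution of a single 2D feature map.

    Each activation in `fmap` "stamps" a copy of `kernel`
    (scaled by that activation) into the output, which grows
    back to the size the 'valid' convolution came from.
    """
    fh, fw = fmap.shape
    kh, kw = kernel.shape
    out = np.zeros((stride * (fh - 1) + kh, stride * (fw - 1) + kw))
    for i in range(fh):
        for j in range(fw):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += fmap[i, j] * kernel
    return out

# A 3x3 feature map produced by a 'valid' 3x3 convolution of a 5x5 input
fmap = np.random.rand(3, 3)
kernel = np.random.rand(3, 3)
recon = conv2d_transpose(fmap, kernel)
print(recon.shape)  # (5, 5) -- back to the original input size
```

So to project one feature map back to input size, you stamp its kernel at every activation position; the output spatial size is `stride * (f - 1) + k`, which inverts the size arithmetic of a strided 'valid' convolution.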

Later on, I also don't see how to apply deconvolution to multiple channels (feature maps) to reproduce a single image.
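For the multi-channel case, a sketch under the standard convention (one kernel per (feature-map, output-channel) pair, with the per-map contributions summed into each output channel) could look like this; the shapes here are hypothetical, chosen so the output is a 3-channel image:

```python
import numpy as np

def conv2d_transpose_multi(fmaps, kernels):
    """Transposed convolution over multiple feature maps.

    fmaps:   (n_maps, fh, fw)        -- feature maps from one layer
    kernels: (n_maps, n_out, kh, kw) -- one kernel per (map, out-channel)
    Returns: (n_out, H, W)           -- e.g. n_out=3 for an RGB image

    Each feature map is stamped back through its own kernels, and the
    contributions of all maps are summed into the output channels.
    """
    n_maps, fh, fw = fmaps.shape
    _, n_out, kh, kw = kernels.shape
    out = np.zeros((n_out, fh + kh - 1, fw + kw - 1))
    for m in range(n_maps):
        for c in range(n_out):
            for i in range(fh):
                for j in range(fw):
                    out[c, i:i + kh, j:j + kw] += fmaps[m, i, j] * kernels[m, c]
    return out

# 8 feature maps of size 3x3, projected back to a 3-channel 5x5 image
fmaps = np.random.rand(8, 3, 3)
kernels = np.random.rand(8, 3, 3, 3)
img = conv2d_transpose_multi(fmaps, kernels)
print(img.shape)  # (3, 5, 5)
```

The summation over feature maps is what collapses many channels into a single image: each map contributes an input-sized stamp per output channel, and the stamps add up.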


On Wednesday, December 21, 2016 at 8:00:19 PM UTC+1, Pascal Lamblin wrote:
>
> I'm not sure I understand your question. 
> Have you read the convolution arithmetic tutorial [1]? It might help. 
>
> [1] http://theano.readthedocs.io/en/master/tutorial/conv_arithmetic.html 
>
> On Wed, Dec 21, 2016, Feras Almasri wrote: 
> > What I saw by using conv2d_grad_wrt_inputs is that it does 
> > deconvolution on feature maps using a 2D kernel. In the case where 
> > you have multiple channels, how is this applied? 
> > I did think of using the conv2d function with stride and padding, 
> > but then how do I match the assertion between the input channels and 
> > the filter size? If I choose one feature map to deconvolve, should 
> > the output be an RGB image? 
>
>
> -- 
> Pascal 
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
