Thank you, Pascal, I've found it very useful. If I may, I'd like to ask a 
couple of questions, since I'm still in need of some clarification.

1. In order to do a deconvolution or upsampling, I should use the 
conv2d_grad_wrt_inputs() function, correct?
2. Are a deconvolution and an up-convolution the same thing?
3. When I use conv2d_grad_wrt_inputs() I always get the following error:

*** TypeError: conv2d_grad_wrt_inputs() takes at least 3 arguments (5 given)

Reading the documentation for the function, it says that the first argument 
should be either a 4D tensor or the gradient.
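(For my own understanding of that "gradient" wording: as far as I can tell, 
the gradient of a strided convolution with respect to its input is the same 
as zero-stuffing the output by the stride and then doing a 'full' 
convolution with the flipped kernel, which is also what the second method in 
my original message below does. A plain numpy sketch in 1D, with made-up 
numbers and function names of my own, just to convince myself:)

```python
import numpy as np

def conv1d_valid(x, w, stride=1):
    # plain 'valid' cross-correlation with stride, i.e. conv2d in 1D
    return np.array([np.dot(x[i:i + len(w)], w)
                     for i in range(0, len(x) - len(w) + 1, stride)])

def conv1d_grad_wrt_inputs(dy, w, stride=1):
    # zero-stuff dy by the stride, pad for 'full' border mode, then
    # correlate with the flipped kernel -- the transposed convolution
    stuffed = np.zeros(stride * (len(dy) - 1) + 1)
    stuffed[::stride] = dy
    padded = np.pad(stuffed, len(w) - 1)
    return conv1d_valid(padded, w[::-1])

dy = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 1.0])
print(conv1d_grad_wrt_inputs(dy, w, stride=2))  # [1. 1. 2. 2. 3. 3.]
```

(So a length-3 output with stride 2 and a size-2 kernel maps back to a 
length-6 input, which matches the 2x upsampling I'm after.)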

What I am providing as the first argument is the output of a convolution. 
For instance, after n=5 convolutions, I now need to upscale using a 
deconvolution. Say h5 is the fifth layer in the network and it holds the 
output of a convolution. What I did was the following:

s1, s2 = 2, 2
p1, p2 = 1, 1
c1, c2, k1, k2 = W[10].shape  # W[10].shape is (1024, 1024, 2, 2)

deconv = conv2d_grad_wrt_inputs(h5, W[10], filter_shape=(c1, c2, k1, k2),
                                border_mode=(0, 0), subsample=(s1, s2))
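(Side note on the error, in case it helps: if I read the dev docs correctly, 
input_shape is a required third positional argument -- the shape of the 
tensor the deconvolution should produce -- and I'm only passing two 
positionals above, which I suspect is where "takes at least 3 arguments" 
comes from. The spatial size should follow the usual transposed-convolution 
arithmetic; a tiny sanity-check function, name and numbers mine:)

```python
def deconv_output_size(o, k, s, p=0):
    # spatial size that conv2d_grad_wrt_inputs should produce along one
    # dimension: the input size of a forward conv whose output size is o
    # (kernel k, stride s, zero padding p)
    return s * (o - 1) + k - 2 * p

# e.g. if h5 is 7x7, a 2x2 kernel with subsample (2, 2) maps back to 14x14:
print(deconv_output_size(7, 2, 2))  # 14
```

(So presumably the missing argument would be something like 
input_shape=(batch_size, c2, 14, 14) for a 7x7 h5 -- but I'm guessing at 
the exact semantics here.)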
 

4. Finally, in the paper I'm trying to implement, describing a U-Net-shaped 
network, they describe the steps for the upward direction as:
1. upsampling of the feature map
2. 2x2 convolution (up-convolution) -> halves the number of feature channels
3. concatenation with the corresponding cropped feature maps from the 
contracting path

Hence my question number 2 above. Shouldn't they be getting either the same 
number of channels or more? Or am I confusing it with the image HxW?
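(To make question 4 concrete, here is how I understand the shape flow of 
those three steps, sketched in numpy with sizes assumed from my 28x28 
example -- channels halve at the up-convolution and the concatenation 
restores them, while HxW doubles:)

```python
import numpy as np

x = np.zeros((1, 1024, 28, 28))               # feature map before upsampling
up = x.repeat(2, axis=2).repeat(2, axis=3)    # step 1: upsample -> (1, 1024, 56, 56)
upconv = np.zeros((1, 512, 56, 56))           # step 2: stand-in for the 2x2
                                              # up-convolution (halves channels)
skip = np.zeros((1, 512, 56, 56))             # cropped map from contracting path
cat = np.concatenate([upconv, skip], axis=1)  # step 3: concat -> 1024 channels again
print(up.shape, cat.shape)
```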

Thank you for your help!




On Thursday, September 1, 2016 at 11:27:54 PM UTC+1, Pascal Lamblin wrote:
>
> In case you have not read it already, the following tutorial is quite 
> useful for helping understand convolutions and their gradients: 
>
> http://deeplearning.net/software/theano_versions/dev/tutorial/conv_arithmetic.html
>  
>
> On Thu, Sep 01, 2016, [email protected] wrote: 
> > Hello everyone, 
> > 
> > I've been trying to implement a deconvolution in a convnet and I'm 
> > stuck because I'm mainly confused. I've been searching the forum and 
> > I've found different ways to implement it, and I'm not quite sure 
> > which one is correct or how to properly implement it. I was hoping 
> > that maybe you guys can shed some light on this issue. 
> > 
> > As I've mentioned, I have a convnet with 4*[conv->relu->pool] layers 
> > and the next step is to implement a deconvolution. 
> > So far I've found the following ways to implement it. I'll start with 
> > what I initially started to do but was confused by. (Also, the 
> > terminology is a bit confusing: is it a deconvolution or a transposed 
> > convolution?) 
> > 
> > This is the output from my 4th layer in the convnet. At this stage the 
> > input image is of size 28x28, giving out 1024 feature maps. 
> > h4 = relu(conv2d(h3_pool, W[5], mode='valid')) 
> > h4 = relu(conv2d(h4,      W[6], mode='valid')) 
> > h4_pool = pool_2d(h4, ds=(2, 2), st=(2, 2)) 
> > 
> > And now I want to implement an upscale/deconvolution where the result 
> > would be an image of dimension 56x56 providing 1024 feature maps. 
> > This is what I tried to do: 
> > upscale1 = T.extra_ops.repeat(h4, 2, axis=3) 
> > upscale1 = T.extra_ops.repeat(upscale1, 2, axis=2) 
> > concat1  = T.concatenate([upscale1, h4], axis=1) 
> > deconv1  = T.nnet.abstract_conv.AbstractConv2d_gradInputs(concat1, 
> >     kshp=(2, 2), border_mode='valid', subsample=(56, 56), 
> >     filter_flip=False) 
> > output1 = deconv1(filters, input, output_shape[2:]) 
> > 
> > In this case I don't know what to use for kshp: is it the weight 
> > matrix from h4 or h4_pool? 
> > Am I using subsample correctly? 
> > I am also totally lost on what I should use for the arguments of 
> > deconv1(): filters? input? output_shape? 
> > 
> > The other way I've seen people do this is the following: 
> > 
> > shp = h4.shape 
> > upsample = T.zeros((shp[0], shp[1], shp[2] * 2, shp[3] * 2), dtype=h4.dtype) 
> > upsample = T.set_subtensor(upsample[:, :, ::2, ::2], h4) 
> > upsampled_convolution = T.nnet.conv2d(upsample, 
> >     filters.dimshuffle(1, 0, 2, 3)[:, :, ::-1, ::-1], border_mode='full') 
> > f = theano.function([h4], upsampled_convolution) 
> > 
> > 
> > Another way is by using a dummy convolution and the gradient of it, if 
> > I'm not mistaken. To be honest, I don't really understand the method 
> > below. 
> > 
> > def deconv(X, w, subsample=(1, 1), border_mode=(0, 0), conv_mode='conv'): 
> >     """ 
> >     sets up dummy convolutional forward pass and uses its grad as deconv 
> >     currently only tested/working with same padding 
> >     """ 
> >     img = gpu_contiguous(X) 
> >     kerns = gpu_contiguous(w) 
> >     out = gpu_alloc_empty(img.shape[0], kerns.shape[1], 
> >                           img.shape[2] * subsample[0], 
> >                           img.shape[3] * subsample[1]) 
> >     desc = GpuDnnConvDesc(border_mode=border_mode, subsample=subsample, 
> >                           conv_mode=conv_mode)(out.shape, kerns.shape) 
> >     d_img = GpuDnnConvGradI()(kerns, img, out, desc) 
> >     return d_img 
> > 
> > Any help or explanation will be much appreciated. 
> > 
> > Thanks! 
> > 
> > 
>
>
> -- 
> Pascal 
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
