@Doug Thanks for the help, appreciate it!
I just realized my mistake was down to exhaustion and staring at the same 
piece of code for too long.
BTW, do you happen to know the difference between T.nnet.conv2d() and 
T.signal.conv2d()? The documentation doesn't give any examples illustrating 
the difference between the two.
From what I can tell, T.signal.conv2d() takes 2D or 3D inputs and kernels, 
but not 4D. How can one use it in a convnet, where the data is typically 
represented as 4D tensors?
Should you reshape them to 3D, and if so, what is the right way to do so?
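
To make the question concrete, this is the kind of reshape I have in mind 
(just a sketch of my guess, not something I know to be the intended usage 
of T.signal.conv2d):

import theano.tensor as T

x4 = T.tensor4('x')  # (batch, channels, rows, cols), as in a convnet
# collapse the batch and channel dimensions into a single leading dimension
x3 = x4.reshape((x4.shape[0] * x4.shape[1], x4.shape[2], x4.shape[3]))

Is something along those lines the intended way to use it, or should the 
channels be handled differently?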

Thanks!

On Monday, July 25, 2016 at 11:31:30 PM UTC+1, Doug wrote:
>
> Your interpretation of what should happen is correct; I can't replicate 
> that issue on my machine. See the code below, which runs to completion 
> here.
>
> import numpy as np
>
> import theano
> import theano.tensor as T
> import theano.tensor.signal.pool  # makes T.signal.pool accessible
>
> # symbolic 4D input (batch, channels, rows, cols) and 32 filters of size 1x1
> inp = T.tensor4()
> w1 = theano.shared(np.random.rand(32, 1, 1, 1).astype('float32'))
> h1 = T.nnet.conv2d(inp, w1)
> h1p = T.signal.pool.pool_2d(h1, ds=(2, 2), ignore_border=True)
>
> d = np.random.rand(64, 1, 28, 28).astype('float32')
>
> # the 1x1 'valid' convolution keeps the spatial size; only the channels change
> f = theano.function([inp], h1)
> assert f(d).shape == (64, 32, 28, 28)
>
> # 2x2 pooling halves the spatial size and leaves the feature maps untouched
> fp = theano.function([inp], h1p)
> assert fp(d).shape == (64, 32, 14, 14)
>
>
>
>
> On Monday, July 25, 2016 at 4:25:10 PM UTC-4, [email protected] 
> wrote:
>>
>> Thanks @Doug, 
>> You were right about the stride. I just found that pool_2d() is returning 
>> a tensor with a shape different from what I would expect, and I don't know 
>> how to interpret it. Is it my fault or Theano's?
>>
>> Using my previous example again:
>> h1 = conv2d(X, w1, image_shape=(64, 1, 28, 28), filter_shape=(32, 1, 1, 1))
>> print(h1.shape)
>> >>> (64, 32, 28, 28)
>>
>> h1p = pool_2d(h1, ds=(2,2), ignore_border=True)
>> print(h1p.shape)
>> >>> (64, 1, 14, 14)
>>
>> I should be getting a shape of (64, 32, 14, 14) for h1p, right? Not 
>> (64, 1, 14, 14). I don't understand why pool_2d() would be messing with 
>> the number of feature maps; it shouldn't be doing that, should it? Or am 
>> I wrong?
>>
>> Thank you for the helpful hint.
>>
>>
>>
>> On Monday, July 25, 2016 at 4:12:38 PM UTC+1, Doug wrote:
>>>
>>> I'd recommend compiling a function to see what the actual shape of h1p 
>>> is; that will help you understand whether the problem is on your end or 
>>> with Theano. In this specific case I think the issue is that you've 
>>> specified st=(1,1) for the pooling, so you aren't actually doing a 
>>> traditional 2x2 maxpool. You need to either set st=(2,2) or leave it 
>>> undefined, in which case it defaults to whatever ds is set to.
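>>>
>>> For example (a rough, untested sketch, assuming pool_2d here is 
>>> theano.tensor.signal.pool.pool_2d):
>>>
>>> # non-overlapping 2x2 max pooling: 28x28 -> 14x14
>>> h1p = pool_2d(h1, ds=(2, 2), st=(2, 2), ignore_border=True)
>>> # equivalent, since st defaults to ds when left unset
>>> h1p = pool_2d(h1, ds=(2, 2), ignore_border=True)
>>> # with st=(1, 1) the windows overlap, giving 28 - 2 + 1 = 27 rows/cols,
>>> # which is where the 27 in the error message comes from
>>> h1p_overlapping = pool_2d(h1, ds=(2, 2), st=(1, 1), ignore_border=True)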
>>>
>>> On Sunday, July 24, 2016 at 7:28:21 PM UTC-4, [email protected] 
>>> wrote:
>>>>
>>>> Hello dear community members,
>>>> I am running into some weird behavior of the conv2d() function in a 
>>>> convnet.
>>>>
>>>> I'll try to explain the situation with a simple example of a convnet 
>>>> with only two convolutions.
>>>> Imagine that I have the following filters with their corresponding 
>>>> shapes:
>>>> w1 = (32, 1, 1, 1)
>>>> w2 = (64, 32, 3, 3)
>>>>
>>>> Then my convnet would be something like the following:
>>>> h1 = conv2d(X, w1, image_shape=(64, 1, 28, 28), filter_shape=w1.shape)
>>>> h1p = pool_2d(h1, ds=(2,2), st=(1,1), ignore_border=True)
>>>> h2 = conv2d(h1p, w2, image_shape=(64, 32, 14, 14), filter_shape=w2.shape)
>>>> h2p = pool_2d(h2, ds=(2,2), st=(1,1), ignore_border=True)
>>>>
>>>> In this case I get an error complaining about the image shape given as 
>>>> input to the second convolution. It says it should be 27 instead of 14.
>>>> This is where things start to get a bit unclear for me. Looking at the 
>>>> conv2d() documentation, it says that if you use the 'valid' mode (the 
>>>> default in this case), then the output image shape of the convolution 
>>>> is computed as image_shape - filter_shape + 1.
>>>> If we apply that to our first input, image_shape=(64, 1, 28, 28), the 
>>>> first convolution gives the following spatial dimension:
>>>> new_image_height = 28 - 1 (filter_height) + 1 = 28 (unchanged)
>>>>
>>>> Now, if we downsample with a pooling size of (2, 2):
>>>>
>>>> final_new_image_height = 28 / 2 = 14
>>>>
>>>> which is exactly what I have put in my second convolution. So why is 
>>>> Theano complaining and asking for 27 instead for the image height and 
>>>> width? It seems like the pooling is either being skipped or never taken 
>>>> into account by Theano in this case. Why is that happening?
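>>>>
>>>> To spell out the arithmetic I have in mind (just restating my 
>>>> expectation above as code, nothing new):
>>>>
>>>> conv1_height = 28 - 1 + 1         # 'valid' conv: image - filter + 1 = 28
>>>> pool1_height = conv1_height // 2  # 2x2 pooling -> 14
>>>> # so I pass image_shape=(64, 32, 14, 14) to the second conv2d,
>>>> # yet Theano asks for 27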
>>>>
>>>> Can any Theano developers shed some light on this topic?
>>>>
>>>> Thanks!
>>>>
>>>>
>>>>
