I'll try it.
Many thanks!

On Thursday, October 13, 2016 at 9:33:17 PM UTC+2, nouiz wrote:
>
> Do you want 2d or 3d pooling? We merged (today, I think) a good interface 
> for 3d pooling: theano.tensor.signal.pool.pool_3d()
>
> That would be better than using the 2d pooling to mimic 3d pooling.
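>
> Something like the following should work. This is only a sketch, so check 
> the exact signature in your checkout, since the interface just landed:
>
> import theano
> import theano.tensor as T
> from theano.tensor.signal.pool import pool_3d
>
> # 5d input: (batch, channel, time, height, width)
> x = T.TensorType('float32', (False,) * 5)('x')
> # pool each (time, height, width) block of size (2, 2, 2), keeping the max
> y = pool_3d(x, ws=(2, 2, 2), ignore_border=True, mode='max')
> f = theano.function([x], y)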
>
> On Wed, Oct 12, 2016 at 8:12 AM, <luca.wag...@gmail.com> wrote:
>
>> Hi Pascal, 
>> The input dimension is 5. 
>> The problem is fixed: I now use the pool_2d code. 
>> Thanks
>>
>> On Tuesday, October 11, 2016 at 4:43:58 PM UTC+2, Pascal Lamblin wrote:
>>>
>>> Hi, 
>>>
>>> The code throwing the exception is: 
>>> > if x.type.ndim != 4: 
>>> >     raise TypeError() 
>>>
>>> What is the number of dimensions of 'input' in your case? 
>>>
>>> Usually, the `pool_2d` helper function takes care of reshaping the input 
>>> if necessary and of passing ws/ds correctly to the underlying Op. Is there 
>>> any particular reason you are calling Pool directly? 
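>>>
>>> For example, something like this (a sketch; depending on your version the 
>>> pooling-size parameter is named `ds` or `ws`): 
>>>
>>>     from theano.tensor.signal.pool import pool_2d 
>>>     # pool_2d reshapes an N-d input (N >= 2) to 4d internally, pools the 
>>>     # last two dimensions, and restores the original shape 
>>>     out = pool_2d(input, ws=(2, 2), ignore_border=True, mode='max') 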
>>>
>>> Also, please note that if you edit your post in the web interface, it 
>>> sends a new message to the list each time. 
>>>
>>> On Tue, Oct 11, 2016, luca.wag...@gmail.com wrote: 
>>> > 
>>> > Hi Pascal, 
>>> > in maxpool3d.py I tried 
>>> >     op = pool.Pool(ignore_border=False, mode='max', openmp=None)(input, ws=(ds[1], ds[2])) 
>>> > instead of 
>>> >     op = pool.Pool((ds[1], ds[2]), ignore_border) 
>>> > which worked in the previous Theano version. 
>>> > 
>>> > 
>>> > This is the output: 
>>> > 
>>> > Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
>>> > [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2 
>>> > Type "help", "copyright", "credits" or "license" for more information. 
>>> > Anaconda is brought to you by Continuum Analytics. 
>>> > Please check out: http://continuum.io/thanks and https://anaconda.org 
>>> > >>> runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py', wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core') 
>>> > Using gpu device 0: GeForce 840M (CNMeM is disabled, cuDNN 5103) 
>>> > Traceback (most recent call last): 
>>> >   File "<stdin>", line 1, in <module> 
>>> >   File "/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 714, in runfile 
>>> >     execfile(filename, namespace) 
>>> >   File "/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 81, in execfile 
>>> >     builtins.execfile(filename, *where) 
>>> >   File "/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py", line 32, in <module> 
>>> >     run_experiments() 
>>> >   File "/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py", line 25, in run_experiments 
>>> >     Learning_rate=0.001 
>>> >   File "mpr_convnet_class_t.py", line 176, in __init__ 
>>> >     self.pool_layer_dim)) 
>>> >   File "convnet3d.py", line 255, in __init__ 
>>> >     out = max_pool_3d(input, pool_shape) 
>>> >   File "maxpool3d.py", line 53, in max_pool_3d 
>>> >     op = Pool(ignore_border=False, mode='max', openmp=None)(input, ws=(ds[1], ds[2])) 
>>> >   File "/home/luca/data/Theano-master/theano/gof/op.py", line 602, in __call__ 
>>> >     node = self.make_node(*inputs, **kwargs) 
>>> >   File "/home/luca/data/Theano-master/theano/tensor/signal/pool.py", line 293, in make_node 
>>> >     raise TypeError() 
>>> > TypeError 
>>> > 
>>> > Many thanks 
>>> > Luca 
>>> > 
>>>
>>> > """ Max pooling spatio-temporal inputs for Theano """ 
>>> > 
>>> > from theano import tensor 
>>> > from theano.tensor.signal.downsample import DownsampleFactorMax 
>>> > 
>>> > 
>>> > #it was originally ignore_border=False and then corrected as suggested 
>>> by Pascal 
>>> > '''Pascal update on ignore_border'''   
>>> > def max_pool_3d(input, ds, ignore_border=True): 
>>> >     """ 
>>> >     Takes as input a N-D tensor, where N >= 3. It downscales the input 
>>> video by 
>>> >     the specified factor, by keeping only the maximum value of 
>>> non-overlapping 
>>> >     patches of size (ds[0],ds[1],ds[2]) (time, height, width) 
>>> > 
>>> >     :type input: N-D theano tensor of input images. 
>>> >     :param input: input images. Max pooling will be done over the 3 
>>> last dimensions. 
>>> >     :type ds: tuple of length 3 
>>> >     :param ds: factor by which to downscale. (2,2,2) will halve the 
>>> video in each dimension. 
>>> >     :param ignore_border: boolean value. When True, (5,5,5) input with 
>>> ds=(2,2,2) will generate a 
>>> >       (2,2,2) output. (3,3,3) otherwise. 
>>> >     """ 
>>> > 
>>> >     if input.ndim < 3: 
>>> >         raise NotImplementedError('max_pool_3d requires a dimension >= 
>>> 3') 
>>> > 
>>> >     # extract nr dimensions 
>>> >     vid_dim = input.ndim 
>>> >     # max pool in two different steps, so we can use the 2d 
>>> implementation of 
>>> >     # downsamplefactormax. First maxpool frames as usual. 
>>> >     # Then maxpool the time dimension. Shift the time dimension to the 
>>> third 
>>> >     # position, so rows and cols are in the back 
>>> > 
>>> >     # extract dimensions 
>>> >     frame_shape = input.shape[-2:] 
>>> >     
>>> >     # count the number of "leading" dimensions, store as dmatrix 
>>> >     # tensor.prod: product of every term in x along axis 
>>> >     batch_size = tensor.prod(input.shape[:-2]) 
>>> >     # Reshape x by right padding the shape with n_ones 1s. 
>>> >     batch_size = tensor.shape_padright(batch_size,1) 
>>> >     
>>> >     # store as 4D tensor with shape: (batch_size,1,height,width) 
>>> >     #tensor.cast     
>>> >     # Cast any tensor x to a Tensor of the same shape, but with a 
>>> different numerical type dtype. 
>>> >     new_shape = tensor.cast(tensor.join(0, batch_size, 
>>> >                                         tensor.as_tensor([1,]), 
>>> >                                         frame_shape), 'int32') 
>>> >     input_4D = tensor.reshape(input, new_shape, ndim=4) 
>>> > 
>>> >     # downsample mini-batch of videos in rows and cols 
>>> >     op = DownsampleFactorMax((ds[1],ds[2]), ignore_border) 
>>> >     
>>> >     output = op(input_4D) 
>>> >     # restore to original shape 
>>> >     outshape = tensor.join(0, input.shape[:-2], output.shape[-2:]) 
>>> >     out = tensor.reshape(output, outshape, ndim=input.ndim) 
>>> > 
>>> >     # now maxpool time 
>>> > 
>>> >     # output (time, rows, cols), reshape so that time is in the back 
>>> >     shufl = (list(range(vid_dim-3)) + 
>>> [vid_dim-2]+[vid_dim-1]+[vid_dim-3]) 
>>> >     input_time = out.dimshuffle(shufl) 
>>> >     # reset dimensions 
>>> >     vid_shape = input_time.shape[-2:] 
>>> >     
>>> >     # count the number of "leading" dimensions, store as dmatrix     
>>> >     batch_size = tensor.prod(input_time.shape[:-2]) 
>>> >     batch_size = tensor.shape_padright(batch_size,1) 
>>> >     
>>> >     # store as 4D tensor with shape: (batch_size,1,width,time) 
>>> >     new_shape = tensor.cast(tensor.join(0, batch_size, 
>>> >                                         tensor.as_tensor([1,]), 
>>> >                                         vid_shape), 'int32') 
>>> >     input_4D_time = tensor.reshape(input_time, new_shape, ndim=4) 
>>> >     # downsample mini-batch of videos in time 
>>> >     op = DownsampleFactorMax((1,ds[0]), ignore_border) 
>>> >     outtime = op(input_4D_time) 
>>> >     # output 
>>> >     # restore to original shape (xxx, rows, cols, time) 
>>> >     outshape = tensor.join(0, input_time.shape[:-2], 
>>> outtime.shape[-2:]) 
>>> >     shufl = (list(range(vid_dim-3)) + 
>>> [vid_dim-1]+[vid_dim-3]+[vid_dim-2]) 
>>> >     return tensor.reshape(outtime, outshape, 
>>> ndim=input.ndim).dimshuffle(shufl) 
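>>> > 
>>> > 
>>> > # Minimal usage sketch (hypothetical 5d shape), kept as a comment; it 
>>> > # assumes the deprecated DownsampleFactorMax import above still works 
>>> > # on your Theano version: 
>>> > # 
>>> > #     import numpy 
>>> > #     import theano 
>>> > #     import theano.tensor as T 
>>> > #     x = T.TensorType('float32', (False,) * 5)('x') 
>>> > #     f = theano.function([x], max_pool_3d(x, (2, 2, 2))) 
>>> > #     # with ignore_border=True, (4, 6, 6) pools down to (2, 3, 3) 
>>> > #     print(f(numpy.zeros((1, 1, 4, 6, 6), 'float32')).shape) 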
>>>
>>> -- 
>>> Pascal 
>>>
>
>

