On Thu, Oct 13, 2016, luca.wagner.0...@gmail.com wrote:
> Whether I use float16 or float32, theano.gpuarray.dnn.dnn_conv or 
> theano.tensor.nnet.conv3d2d.conv3d, 
> it works only for small images; if I increase the image size there are 
> memory problems.
> I did not have this issue when using float32 with the previous Theano 
> version and the same parameters and image size.
> 
> I try:
> 
> floatX = float16
> device = cuda
> 
>  #theano.gpuarray.dnn.dnn_conv               
>  out = dnn_conv(img= input, 
> If I increase the size of the images: pygpu.gpuarray.GpuArrayException: an 
> illegal memory access was encountered

This exception means memory was accessed outside of what is allowed, not
that too much memory was being used. It is possible, though, that it is
raised while trying to report another exception.

Can you use pdb to see what happens during "raise_with_op" in that call?

>   File "/home/luca/data/Theano-master/theano/gof/link.py", line 167, in 
> raise_with_op
>     "\nInputs values: %s" % scalar_values)
>   File "pygpu/gpuarray.pyx", line 1941, in pygpu.gpuarray.GpuArray.__repr__ 
> (pygpu/gpuarray.c:24742)
>   File 
> "/home/luca/anaconda2/lib/python2.7/site-packages/numpy/core/numeric.py", 
> line 482, in asarray
>     return array(a, dtype, copy=False, order=order)
>   File "pygpu/gpuarray.pyx", line 1572, in 
> pygpu.gpuarray.GpuArray.__array__ (pygpu/gpuarray.c:20224)
>   File "pygpu/gpuarray.pyx", line 1320, in pygpu.gpuarray.pygpu_as_ndarray 
> (pygpu/gpuarray.c:17346)
>   File "pygpu/gpuarray.pyx", line 347, in pygpu.gpuarray.array_read 
> (pygpu/gpuarray.c:6114)
> pygpu.gpuarray.GpuArrayException: an illegal memory access was encountered
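A minimal sketch of the pdb suggestion: `pdb.post_mortem` lets you inspect the frames alive at the moment the exception was raised. The `raise_with_op` below is only a stand-in for Theano's function of the same name, not its actual implementation:

```python
import sys
import traceback

def raise_with_op():
    # Stand-in for Theano's raise_with_op, which re-raises the original
    # error with extra information about the failing Op.
    raise RuntimeError("an illegal memory access was encountered")

try:
    raise_with_op()
except RuntimeError:
    tb = sys.exc_info()[2]
    # Interactively, drop into the debugger at the crash site:
    #   import pdb; pdb.post_mortem(tb)
    # Non-interactively, you can at least print the innermost frame:
    print(traceback.extract_tb(tb)[-1][2])  # -> raise_with_op
```

From the (Pdb) prompt, "bt" shows the stack and "up"/"down" move between frames, so you can look at the Op's inputs before the secondary `__repr__` failure hides them.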


> -----------------
> If I try float32 and theano.gpuarray.dnn.dnn_conv, I also have memory 
> problems: pygpu.gpuarray.GpuArrayException: out of memory

This seems to happen even before the function runs, while you are
transferring the shared parameters to the GPU. This is quite strange,
because it should have failed regardless of the Theano version, back-end,
or type of convolution.

> ------
> If I try 
> theano.tensor.nnet.conv3d2d.conv3d, 
> floatX = float32,
> device = gpu
> 
> I also have memory problems: MemoryError: ('Error allocating 14224896000 
> bytes of device memory (out of memory).', "you might consider using 
> 'theano.shared(..., borrow=True)'")

This is a 14 GB shared variable that you are trying to transfer to the GPU.
Is that what you expected?
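For reference, a quick check of the size reported in the MemoryError (the byte count comes from your traceback; 4 bytes per float32 element is the only assumption):

```python
# Sanity check on the allocation size from the MemoryError above.
nbytes = 14224896000
n_float32 = nbytes // 4          # 4 bytes per float32 element
print(n_float32)                 # 3556224000 elements
print(round(nbytes / 1e9, 1))    # ~14.2 GB
```

That is roughly 3.5 billion float32 values for a single weight tensor, which no current GPU will hold; the shapes going into that layer are worth double-checking.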

>   File "mpr_convnet_class.py", line 242, in __init__
>     b)
>   File "mlp.py", line 199, in __init__
>     borrow=True,  
>   File "mlp.py", line 138, in __init__
>     self.W = shared(value=W_val, borrow=borrow, name=layer_name+'_W')
>   File "/home/luca/data/Theano-master/theano/compile/sharedvalue.py", line 
> 247, in shared
>     allow_downcast=allow_downcast, **kwargs)
>   File "/home/luca/data/Theano-master/theano/sandbox/cuda/var.py", line 
> 242, in float32_shared_constructor
>     deviceval = type_support_filter(value, type.broadcastable, False, None)
> MemoryError: ('Error allocating 14224896000 bytes of device memory (out of 
> memory).', "you might consider using 'theano.shared(..., borrow=True)'")
> 
> ----------------
> If I try 
> theano.tensor.nnet.conv3d2d.conv3d, 
> floatX = float16,
> device = gpu
> 
> I get a TypeError.

That is expected: the old back-end (device=gpu) does not support float16.
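If float16 is the goal, the new gpuarray back-end has to be selected instead, e.g. with a .theanorc along these lines (assuming libgpuarray/pygpu are installed):

```
[global]
floatX = float16
device = cuda
```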

-- 
Pascal
