Re: [theano-users] Error using floatX = float16 to save memory

2016-10-14 Thread Frédéric Bastien
Update Theano to the dev version. We merged that this week:

>>> import theano
>>> theano.tensor.nnet.conv3d
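For reference, a minimal call through this interface might look like the
following sketch (variable names and shapes are illustrative, not from this
thread):

import theano
import theano.tensor as T
from theano.tensor.nnet import conv3d

# 5D tensors: (batch, channels, depth, rows, columns)
imgs = T.TensorType(theano.config.floatX, (False,) * 5)('imgs')
kerns = T.TensorType(theano.config.floatX, (False,) * 5)('kerns')

out = conv3d(imgs, kerns, border_mode='valid', subsample=(1, 1, 1))
f = theano.function([imgs, kerns], out)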



On Fri, Oct 14, 2016 at 4:18 AM,  wrote:

>
>
> On Thursday, October 13, 2016 at 5:42:30 PM UTC+2, nouiz wrote:
>>
>> For the memory error, the problem is that you try to allocate 14G for a
>> shared variable on a 12G GPU. This is probably not what you want to do.
>>
>> Use theano.tensor.nnet.conv3d now (not conv3d2d.conv3d() or dnn_conv3d).
>> But we need to fix the memory problem. conv3d2d.conv3d probably causes an
>> upcast to float32 in the computation. That would explain the last error.
>>
>
>
> Hi Fred,
> Looking in theano.tensor.nnet I don't see theano.tensor.nnet.conv3d: I
> find Conv3D.conv3D, conv3d2d.conv3d, or conv2d.
> Thanks
> Luca
>



Re: [theano-users] Error using floatX = float16 to save memory

2016-10-14 Thread Pascal Lamblin
On Fri, Oct 14, 2016, luca.wagner.0...@gmail.com wrote:
> Hi Pascal,
> I don't know how to see what happens during "raise_with_op" in that call.

So, when you are at the following point, are you still in the pdb shell
or did it crash the interpreter?

If you are still in pdb, you can call "up" until you reach the frame at
File "/home/luca/data/Theano-master/theano/gof/link.py", line 167, in
raise_with_op, and try to print "node" and "exc_info".

Otherwise, you can edit theano/gof/link.py and add "import pdb;
pdb.set_trace()" at the beginning of the definition of "raise_with_op"
and try to print those variables then.
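For instance, a minimal sketch of that edit (the real function may take
more parameters; only the added line matters):

# theano/gof/link.py -- sketch of the suggested debugging edit
def raise_with_op(node, thunk=None, exc_info=None, storage_map=None):
    import pdb; pdb.set_trace()  # break as soon as Theano reports an op error
    # ... original body unchanged; once stopped, inspect with:
    #   (Pdb) p node
    #   (Pdb) p exc_info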

> This is the output using pdb inside Spyder:
> /home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/mpr_convnet_class.py(331)__init__()
> -> a = train_set_x[minibatch_index:minibatch_index+batch_size]
> (Pdb) Traceback (most recent call last):
>   File "", line 1, in 
>   File 
> "/home/luca/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py",
>  
> line 888, in debugfile
> debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
>   File "/home/luca/anaconda2/lib/python2.7/bdb.py", line 400, in run
> exec cmd in globals, locals
>   File "", line 1, in 
>   File 
> "/home/luca/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py",
>  
> line 866, in runfile
> execfile(filename, namespace)
>   File 
> "/home/luca/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py",
>  
> line 94, in execfile
> builtins.execfile(filename, *where)
>   File 
> "/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/run_multi_conv.py",
>  
> line 42, in 
> run_experiments()
>   File 
> "/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/run_multi_conv.py",
>  
> line 33, in run_experiments
> Zoom = 0.0
>   File "mpr_convnet_class.py", line 333, in __init__
> training_cost_ij=train_model(a, b) 
>   File "/home/luca/data/Theano-master/theano/compile/function_module.py", 
> line 879, in __call__
> storage_map=getattr(self.fn, 'storage_map', None))
>   File "/home/luca/data/Theano-master/theano/gof/link.py", line 167, in 
> raise_with_op
> "\nInputs values: %s" % scalar_values)
>   File "pygpu/gpuarray.pyx", line 1941, in pygpu.gpuarray.GpuArray.__repr__ 
> (pygpu/gpuarray.c:24742)
>   File 
> "/home/luca/anaconda2/lib/python2.7/site-packages/numpy/core/numeric.py", 
> line 482, in asarray
> return array(a, dtype, copy=False, order=order)
>   File "pygpu/gpuarray.pyx", line 1572, in 
> pygpu.gpuarray.GpuArray.__array__ (pygpu/gpuarray.c:20224)
>   File "pygpu/gpuarray.pyx", line 1320, in pygpu.gpuarray.pygpu_as_ndarray 
> (pygpu/gpuarray.c:17346)
>   File "pygpu/gpuarray.pyx", line 347, in pygpu.gpuarray.array_read 
> (pygpu/gpuarray.c:6114)
> pygpu.gpuarray.GpuArrayException: an illegal memory access was encountered
> 


-- 
Pascal



Re: [theano-users] Error using floatX = float16 to save memory

2016-10-14 Thread luca . wagner . 0812
Hi Pascal,
I don't know how to see what happens during "raise_with_op" in that call.
 
This is the output using pdb inside Spyder:


Python 2.7.12 |Continuum Analytics, Inc.| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
debugfile('/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/run_multi_conv.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core')
> 
/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/run_multi_conv.py(1)()
-> import mpr_convnet_class as conv
(Pdb) continue
Mapped name None to device cuda: Tesla K40c
Using cuDNN version 5103 on context None
> 
/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/run_multi_conv.py(16)run_experiments()
-> conv.mpr_convnet(
(Pdb) continue
> 
/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/mpr_convnet_class.py(163)__init__()
-> dataset_x = np.asarray(dataset_x, dtype=floatX)
(Pdb) next
> 
/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/mpr_convnet_class.py(164)__init__()
-> dataset_y = np.asarray(dataset_y, dtype=np.int32)
(Pdb) continue
continue
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for MaxAndArgmax due to unsupported float16


start time:
14/10/2016
12:08:01


Image_dim_1: 90
Image_dim_2: 90
Image_dim_3: 90


training @ iter =  0
> 
/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/mpr_convnet_class.py(331)__init__()
-> a = train_set_x[minibatch_index:minibatch_index+batch_size]
(Pdb) Traceback (most recent call last):
  File "", line 1, in 
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py",
 
line 888, in debugfile
debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
  File "/home/luca/anaconda2/lib/python2.7/bdb.py", line 400, in run
exec cmd in globals, locals
  File "", line 1, in 
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py",
 
line 866, in runfile
execfile(filename, namespace)
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py",
 
line 94, in execfile
builtins.execfile(filename, *where)
  File 
"/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/run_multi_conv.py",
 
line 42, in 
run_experiments()
  File 
"/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/run_multi_conv.py",
 
line 33, in run_experiments
Zoom = 0.0
  File "mpr_convnet_class.py", line 333, in __init__
training_cost_ij=train_model(a, b) 
  File "/home/luca/data/Theano-master/theano/compile/function_module.py", 
line 879, in __call__
storage_map=getattr(self.fn, 'storage_map', None))
  File "/home/luca/data/Theano-master/theano/gof/link.py", line 167, in 
raise_with_op
"\nInputs values: %s" % scalar_values)
  File "pygpu/gpuarray.pyx", line 1941, in pygpu.gpuarray.GpuArray.__repr__ 
(pygpu/gpuarray.c:24742)
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/numpy/core/numeric.py", 
line 482, in asarray
return array(a, dtype, copy=False, order=order)
  File "pygpu/gpuarray.pyx", line 1572, in 
pygpu.gpuarray.GpuArray.__array__ (pygpu/gpuarray.c:20224)
  File "pygpu/gpuarray.pyx", line 1320, in pygpu.gpuarray.pygpu_as_ndarray 
(pygpu/gpuarray.c:17346)
  File "pygpu/gpuarray.pyx", line 347, in pygpu.gpuarray.array_read 
(pygpu/gpuarray.c:6114)
pygpu.gpuarray.GpuArrayException: an illegal memory access was encountered



Re: [theano-users] Error using floatX = float16 to save memory

2016-10-14 Thread luca . wagner . 0812


On Thursday, October 13, 2016 at 5:42:30 PM UTC+2, nouiz wrote:
>
> For the memory error, the problem is that you try to allocate 14G for a 
> shared variable on a 12G GPU. This is probably not what you want to do.
>
> Use theano.tensor.nnet.conv3d now (not conv3d2d.conv3d() or dnn_conv3d). 
> But we need to fix the memory problem. conv3d2d.conv3d probably causes an 
> upcast to float32 in the computation. That would explain the last error.
>
 

Hi Fred,
Looking in theano.tensor.nnet I don't see theano.tensor.nnet.conv3d: I find 
Conv3D.conv3D, conv3d2d.conv3d, or conv2d.
Thanks
Luca  

>
>



Re: [theano-users] Error using floatX = float16 to save memory

2016-10-13 Thread Pascal Lamblin
On Thu, Oct 13, 2016, luca.wagner.0...@gmail.com wrote:
> Whether I use float16 or float32, theano.gpuarray.dnn.dnn_conv or 
> theano.tensor.nnet.conv3d2d.conv3d, it works only for small images; if I 
> increase the image size there are memory problems. 
> I did not have this issue when I was using float32 with the previous 
> Theano version and the same parameters and image size.
> 
> I try:
> 
> floatX = float16
> device = cuda
> 
>  #theano.gpuarray.dnn.dnn_conv   
>  out = dnn_conv(img=input, 
> if I increase the size of the images: pygpu.gpuarray.GpuArrayException: an 
> illegal memory access was encountered

This is due to memory being accessed outside of what is allowed, not
because of too much memory being used. It may be that this happens when
trying to report another exception, though.

Can you use pdb and try to see what happens during "raise_with_op" in
that call?

>   File "/home/luca/data/Theano-master/theano/gof/link.py", line 167, in 
> raise_with_op
> "\nInputs values: %s" % scalar_values)
>   File "pygpu/gpuarray.pyx", line 1941, in pygpu.gpuarray.GpuArray.__repr__ 
> (pygpu/gpuarray.c:24742)
>   File 
> "/home/luca/anaconda2/lib/python2.7/site-packages/numpy/core/numeric.py", 
> line 482, in asarray
> return array(a, dtype, copy=False, order=order)
>   File "pygpu/gpuarray.pyx", line 1572, in 
> pygpu.gpuarray.GpuArray.__array__ (pygpu/gpuarray.c:20224)
>   File "pygpu/gpuarray.pyx", line 1320, in pygpu.gpuarray.pygpu_as_ndarray 
> (pygpu/gpuarray.c:17346)
>   File "pygpu/gpuarray.pyx", line 347, in pygpu.gpuarray.array_read 
> (pygpu/gpuarray.c:6114)
> pygpu.gpuarray.GpuArrayException: an illegal memory access was encountered


> -
> If I try float 32 and theano.gpuarray.dnn.dnn_conv  I also have memory 
> problems:  pygpu.gpuarray.GpuArrayException: out of memory

This seems to happen even before the function is running, when you are
transferring the shared parameters to the GPU. This is quite strange,
because it should have failed regardless of the Theano version, back-end
or type of convolution.

> --
> If I try 
> theano.tensor.nnet.conv3d2d.conv3d, 
> floatX = float32,
> device = gpu
> 
> I also have memory problems: MemoryError: ('Error allocating 14224896000 
> bytes of device memory (out of memory).', "you might consider using 
> 'theano.shared(..., borrow=True)'")

This is a 14GB shared variable that you are trying to transfer to GPU.
Is that what you expected?
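As a quick sanity check, the allocation quoted above works out as follows
(a back-of-the-envelope sketch, not code from this thread):

n_bytes = 14224896000          # from the MemoryError above
print(n_bytes / 4)             # ~3.56e9 float32 elements
print(n_bytes / 1024. ** 3)    # ~13.2 GiB, more than a 12 GB K40 can hold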

>   File "mpr_convnet_class.py", line 242, in __init__
> b)
>   File "mlp.py", line 199, in __init__
> borrow=True,  
>   File "mlp.py", line 138, in __init__
> self.W = shared(value=W_val, borrow=borrow, name=layer_name+'_W')
>   File "/home/luca/data/Theano-master/theano/compile/sharedvalue.py", line 
> 247, in shared
> allow_downcast=allow_downcast, **kwargs)
>   File "/home/luca/data/Theano-master/theano/sandbox/cuda/var.py", line 
> 242, in float32_shared_constructor
> deviceval = type_support_filter(value, type.broadcastable, False, None)
> MemoryError: ('Error allocating 14224896000 bytes of device memory (out of 
> memory).', "you might consider using 'theano.shared(..., borrow=True)'")
> 
> 
> If I try 
> theano.tensor.nnet.conv3d2d.conv3d, 
> floatX = float16,
> device = gpu
> 
> I have TypeError

That is normal: the old back-end (device=gpu) does not support float16.
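Float16 therefore needs the new gpuarray back-end, i.e. in .theanorc (the
same settings already quoted earlier in this thread):

[global]
floatX = float16
device = cuda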

-- 
Pascal



Re: [theano-users] Error using floatX = float16 to save memory

2016-10-13 Thread Frédéric Bastien
For the memory error, the problem is that you try to allocate 14G for a
shared variable on a 12G GPU. This is probably not what you want to do.

Use theano.tensor.nnet.conv3d now (not conv3d2d.conv3d() or dnn_conv3d).
But we need to fix the memory problem. conv3d2d.conv3d probably causes an
upcast to float32 in the computation. That would explain the last error.
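As a rough illustration of why such an upcast matters for memory (the shape
is illustrative, not taken from this thread):

import numpy as np

x16 = np.zeros((90, 90, 90, 64), dtype='float16')
print(x16.nbytes)                    # 93312000 bytes (~93 MB)
print(x16.astype('float32').nbytes)  # 186624000 bytes: the upcast doubles it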

On Thu, Oct 13, 2016 at 6:14 AM,  wrote:

> Hi,
> I'm doing tests on a Tesla K40 and Theano==0.9.0.dev3:
> whether I use float16 or float32, theano.gpuarray.dnn.dnn_conv or
> theano.tensor.nnet.conv3d2d.conv3d, it works only for small images; if I
> increase the image size there are memory problems.
> I did not have this issue when I was using float32 with the previous
> Theano version and the same parameters and image size.
>
> I try:
>
> floatX = float16
> device = cuda
>
>  # theano.gpuarray.dnn.dnn_conv
>  out = dnn_conv(img=input,
>                 kerns=self.W,
>                 border_mode='valid',
>                 subsample=(1, 1, 1),
>                 conv_mode='conv',
>                 direction_hint=None,
>                 workmem=None,
>                 algo=None,
>                 precision=None)
>
>
> This is the output for small 3d images, the convnet is working:
>
> Python 2.7.12 |Continuum Analytics, Inc.| (default, Jul  2 2016, 17:42:40)
> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> Anaconda is brought to you by Continuum Analytics.
> Please check out: http://continuum.io/thanks and https://anaconda.org
> >>> runfile('/home/luca/data/DeepLearningTutorials/Theano-
> 3D-Convnet-master/convnet3d/core/run_multi_conv.py',
> wdir='/home/luca/data/DeepLearningTutorials/Theano-
> 3D-Convnet-master/convnet3d/core')
> Mapped name None to device cuda: Tesla K40c
> Using cuDNN version 5103 on context None
> Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
> Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
> Disabling C code for MaxAndArgmax due to unsupported float16
>
>
> start time:
> 13/10/2016
> 11:43:00
>
>
> Image_dim_1: 30
> Image_dim_2: 30
> Image_dim_3: 30
>
>
> training @ iter =  0
> training @ iter =  400
> training cost 0.701
> epoch 1, training batch 574/574, validation error 48.04 %
> ---
>
>
> if I increase the size of the images: pygpu.gpuarray.GpuArrayException:
> an illegal memory access was encountered
>
> Python 2.7.12 |Continuum Analytics, Inc.| (default, Jul  2 2016, 17:42:40)
> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> Anaconda is brought to you by Continuum Analytics.
> Please check out: http://continuum.io/thanks and https://anaconda.org
> >>> runfile('/home/luca/data/DeepLearningTutorials/Theano-
> 3D-Convnet-master/convnet3d/core/run_multi_conv.py',
> wdir='/home/luca/data/DeepLearningTutorials/Theano-
> 3D-Convnet-master/convnet3d/core')
> Mapped name None to device cuda: Tesla K40c
> Using cuDNN version 5103 on context None
> Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
> Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
> Disabling C code for MaxAndArgmax due to unsupported float16
>
>
> start time:
> 13/10/2016
> 11:52:45
>
>
> Image_dim_1: 90
> Image_dim_2: 90
> Image_dim_3: 90
>
>
> training @ iter =  0
> Traceback (most recent call last):
>   File "", line 1, in 
>   File "/home/luca/anaconda2/lib/python2.7/site-packages/
> spyder/utils/site/sitecustomize.py", line 866, in runfile
> execfile(filename, namespace)
>   File "/home/luca/anaconda2/lib/python2.7/site-packages/
> spyder/utils/site/sitecustomize.py", line 94, in execfile
> builtins.execfile(filename, *where)
>   File "/home/luca/data/DeepLearningTutorials/Theano-
> 3D-Convnet-master/convnet3d/core/run_multi_conv.py", line 42, in 
> run_experiments()
>   File "/home/luca/data/DeepLearningTutorials/Theano-
> 3D-Convnet-master/convnet3d/core/run_multi_conv.py", line 33, in
> run_experiments
> Zoom = 0.0
>   File "mpr_convnet_class.py", line 322, in __init__
> training_cost_ij=train_model(a, b)
>   File "/home/luca/data/Theano-master/theano/compile/function_module.py",
> line 879, in __call__
> storage_map=getattr(self.fn, 'storage_map', None))
>   File "/home/luca/data/Theano-master/theano/gof/link.py", line 167, in
> raise_with_op
> "\nInputs values: %s" % scalar_values)
>   File "pygpu/gpuarray.pyx", line 1941, in pygpu.gpuarray.GpuArray.__repr__
> (pygpu/gpuarray.c:24742)
>   File 
> "/home/luca/anaconda2/lib/python2.7/site-packages/numpy/core/numeric.py",
> line 482, in asarray
> return array(a, dtype, copy=False, order=order)
>   File "pygpu/gpuarray.pyx", line 1572, in pygpu.gpuarray.GpuArray.__array__
> (pygpu/gpuarray.c:20224)
>   File "pygpu/gpuarray.pyx", line 1320, in pygpu.gpuarray.

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-13 Thread luca . wagner . 0812
Hi,
I'm doing tests on a Tesla K40 and Theano==0.9.0.dev3:
whether I use float16 or float32, theano.gpuarray.dnn.dnn_conv or
theano.tensor.nnet.conv3d2d.conv3d, it works only for small images; if I
increase the image size there are memory problems.
I did not have this issue when I was using float32 with the previous
Theano version and the same parameters and image size.

I try:

floatX = float16
device = cuda

# theano.gpuarray.dnn.dnn_conv
out = dnn_conv(img=input,
               kerns=self.W,
               border_mode='valid',
               subsample=(1, 1, 1),
               conv_mode='conv',
               direction_hint=None,
               workmem=None,
               algo=None,
               precision=None)


This is the output for small 3d images, the convnet is working:

Python 2.7.12 |Continuum Analytics, Inc.| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/run_multi_conv.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core')
Mapped name None to device cuda: Tesla K40c
Using cuDNN version 5103 on context None
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for MaxAndArgmax due to unsupported float16


start time:
13/10/2016
11:43:00


Image_dim_1: 30
Image_dim_2: 30
Image_dim_3: 30


training @ iter =  0
training @ iter =  400
training cost 0.701
epoch 1, training batch 574/574, validation error 48.04 %
---


if I increase the size of the images: pygpu.gpuarray.GpuArrayException: an 
illegal memory access was encountered

Python 2.7.12 |Continuum Analytics, Inc.| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/run_multi_conv.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core')
Mapped name None to device cuda: Tesla K40c
Using cuDNN version 5103 on context None
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for MaxAndArgmax due to unsupported float16


start time:
13/10/2016
11:52:45


Image_dim_1: 90
Image_dim_2: 90
Image_dim_3: 90


training @ iter =  0
Traceback (most recent call last):
  File "", line 1, in 
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py",
 
line 866, in runfile
execfile(filename, namespace)
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py",
 
line 94, in execfile
builtins.execfile(filename, *where)
  File 
"/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/run_multi_conv.py",
 
line 42, in 
run_experiments()
  File 
"/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/run_multi_conv.py",
 
line 33, in run_experiments
Zoom = 0.0
  File "mpr_convnet_class.py", line 322, in __init__
training_cost_ij=train_model(a, b) 
  File "/home/luca/data/Theano-master/theano/compile/function_module.py", 
line 879, in __call__
storage_map=getattr(self.fn, 'storage_map', None))
  File "/home/luca/data/Theano-master/theano/gof/link.py", line 167, in 
raise_with_op
"\nInputs values: %s" % scalar_values)
  File "pygpu/gpuarray.pyx", line 1941, in pygpu.gpuarray.GpuArray.__repr__ 
(pygpu/gpuarray.c:24742)
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/numpy/core/numeric.py", 
line 482, in asarray
return array(a, dtype, copy=False, order=order)
  File "pygpu/gpuarray.pyx", line 1572, in 
pygpu.gpuarray.GpuArray.__array__ (pygpu/gpuarray.c:20224)
  File "pygpu/gpuarray.pyx", line 1320, in pygpu.gpuarray.pygpu_as_ndarray 
(pygpu/gpuarray.c:17346)
  File "pygpu/gpuarray.pyx", line 347, in pygpu.gpuarray.array_read 
(pygpu/gpuarray.c:6114)
pygpu.gpuarray.GpuArrayException: an illegal memory access was encountered

-
If I try float 32 and theano.gpuarray.dnn.dnn_conv  I also have memory 
problems:  pygpu.gpuarray.GpuArrayException: out of memory

Python 2.7.12 |Continuum Analytics, Inc.| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you 

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-12 Thread Pascal Lamblin
Hi,

According to the message below, you are still using conv3d2d instead of
cuDNN for the convolutions (the DiagonalSubtensor / IncDiagonalSubtensor ops
in your log come from conv3d2d). Is that on purpose?

On Wed, Oct 12, 2016, luca.wagner.0...@gmail.com wrote:
> Hi Fred,
> now it works on a small test convnet without delay; tomorrow I'll try the 
> big one on the Tesla K40.
> Many thanks
> 
> I installed Theano==0.9.0.dev3 from gvtulder/Theano, forked from 
> Theano/Theano: https://github.com/Theano/Theano/pull/4862
> 
> .theanorc is:
> [global]
> floatX = float16
> device=cuda
> [cuda] 
> root = /usr/local/cuda-7.5
> 
> 
> 
> [nvcc]
> fastmath=True
> 
> optimizer = fast_compile
> 
> [dnn.conv]
> algo_fwd =  time_once
> algo_bwd_filter = time_once
> algo_bwd_data = time_once 
> 
> This is the output:
> Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> Anaconda is brought to you by Continuum Analytics.
> Please check out: http://continuum.io/thanks and https://anaconda.org
> >>> 
> runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
>  
> wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
> Mapped name None to device cuda: GeForce 840M
> Using cuDNN version 5103 on context None
> Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
> Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
> Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
> Disabling C code for IncDiagonalSubtensor due to unsupported float16
> Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
> Disabling C code for MaxAndArgmax due to unsupported float16
> 
> 
> start time:
> 12/10/2016
> 14:20:32
> 
> 
> training @ iter =  0
> training cost 0.69320
> epoch 1, training batch 316/316, validation error 39.286 %
> training @ iter =  400
> training cost 0.69127
> epoch 2, training batch 316/316, validation error 36.607 %
> training @ iter =  800
> training cost 0.68616
> epoch 3, training batch 316/316, validation error 33.929 %
> training @ iter =  1200
> training cost 0.68399
> epoch 4, training batch 316/316, validation error 33.036 %
> training cost 0.67953
> epoch 5, training batch 316/316, validation error 29.643 %
> 


-- 
Pascal



Re: [theano-users] Error using floatX = float16 to save memory

2016-10-12 Thread luca . wagner . 0812
Hi,
I started testing the convnet on the Tesla K40, but I see abnormal memory
use. Usually the convnet uses about 20 GB of RAM, but now I get
pygpu.gpuarray.GpuArrayException: out of memory, because it wants more than
78 GB of the available RAM.

Thanks
Luca





Re: [theano-users] Error using floatX = float16 to save memory

2016-10-12 Thread luca . wagner . 0812
Hi Fred,
now it works on a small test convnet without delay; tomorrow I'll try the 
big one on the Tesla K40.
Many thanks

I installed Theano==0.9.0.dev3 from gvtulder/Theano, forked from 
Theano/Theano: https://github.com/Theano/Theano/pull/4862

.theanorc is:
[global]
floatX = float16
device=cuda
[cuda] 
root = /usr/local/cuda-7.5



[nvcc]
fastmath=True

optimizer = fast_compile

[dnn.conv]
algo_fwd =  time_once
algo_bwd_filter = time_once
algo_bwd_data = time_once 

This is the output:
Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
Mapped name None to device cuda: GeForce 840M
Using cuDNN version 5103 on context None
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for IncDiagonalSubtensor due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for MaxAndArgmax due to unsupported float16


start time:
12/10/2016
14:20:32


training @ iter =  0
training cost 0.69320
epoch 1, training batch 316/316, validation error 39.286 %
training @ iter =  400
training cost 0.69127
epoch 2, training batch 316/316, validation error 36.607 %
training @ iter =  800
training cost 0.68616
epoch 3, training batch 316/316, validation error 33.929 %
training @ iter =  1200
training cost 0.68399
epoch 4, training batch 316/316, validation error 33.036 %
training cost 0.67953
epoch 5, training batch 316/316, validation error 29.643 %



Re: [theano-users] Error using floatX = float16 to save memory

2016-10-11 Thread luca . wagner . 0812
Hi Fred,
I cannot try the new conv interface, theano.tensor.nnet.conv3d(), with
float16 and device=cuda, because in the latest Theano version class Pool
was changed and so I cannot do maxpool3d, as I wrote to Pascal in another
thread.
Thanks

Luca


On Friday, October 7, 2016 at 5:38:14 PM UTC+2, nouiz wrote:
>
>
>
> On Fri, Oct 7, 2016 at 11:31 AM, Pascal Lamblin  > wrote:
>
>> On Fri, Oct 07, 2016, luca.wag...@gmail.com  wrote:
>> > Hi Fred,
>> > I did a test using:
>> >
>> > theano.tensor.nnet.conv3d2d import conv3d
>>
>> That's the old conv3d2d code, that should not be needed with cuDNN, and
>> that has some pieces that do not work in float16.
>> These are not the problems we should try to solve, we should focus on
>> what happens when using dnn_conv3d instead.
>>
>
> not dnn_conv3d, but the new conv interface: theano.tensor.nnet.conv3d(). 
> Use that one, with floatX=float16 and device=cuda. 
>  
>
>>
>> >
>> > this PR: https://github.com/Theano/Theano/pull/4862
>> >
>> > [global]
>> > floatX = float16
>> > device=cuda
>> > [cuda]
>> > root = /usr/local/cuda-7.5
>> >
>> > [nvcc]
>> > fastmath=True
>> >
>> > optimizer = fast_compile
>> >
>> > [dnn.conv]
>> > algo_fwd =  time_once
>> > algo_bwd_filter = time_once
>> > algo_bwd_data = time_once
>> >
>> > The output is much slower than using float32:
>> >
>> > Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 
>> 17:42:40)
>> > [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
>> > Type "help", "copyright", "credits" or "license" for more information.
>> > Anaconda is brought to you by Continuum Analytics.
>> > Please check out: http://continuum.io/thanks and https://anaconda.org
>> > >>>
>> > 
>> runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
>> > 
>> wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
>> > Mapped name None to device cuda: GeForce 840M
>> > WARNING (theano.gof.compilelock): Overriding existing lock by dead 
>> process
>> > '3119' (I am process '3598')
>> > Using cuDNN version 5103 on context None
>> > /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6:
>> > UserWarning: downsample module has been moved to the
>> > theano.tensor.signal.pool module.
>> >   "downsample module has been moved to the theano.tensor.signal.pool
>> > module.")
>> > Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
>> > Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
>> > Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
>> > Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
>> > Disabling C code for Alloc due to unsupported float16
>> > ERROR (theano.gof.opt): SeqOptimizer apply 
>> > > object at 0x7f3944076110>
>> > ERROR (theano.gof.opt): Traceback:
>> > ERROR (theano.gof.opt): Traceback (most recent call last):
>> >   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 235, in 
>> apply
>> > sub_prof = optimizer.optimize(fgraph)
>> >   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in
>> > optimize
>> > ret = self.apply(fgraph, *args, **kwargs)
>> >   File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 
>> 355, in
>> > apply
>> > node.outputs)
>> >   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 
>> 1874,
>> > in local_gpua_pool_dnn_alternative
>> > img, ws, stride, pad = inputs
>> > ValueError: need more than 1 value to unpack
>> >
>> > ERROR (theano.gof.opt): Optimization failure due to:
>> > local_gpua_pool_dnn_grad_stride
>> > ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True,
>> > st=(3, 3), padding=(0, 0), mode='max'}(sigmoid.0, Pool{ds=(3, 3),
>> > ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0, 
>> Reshape{4}.0)
>> > ERROR (theano.gof.opt): TRACEBACK:
>> > ERROR (theano.gof.opt): Traceback (most recent call last):
>> >   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in
>> > process_node
>> > replacements = lopt.transform(node)
>> >   File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 
>> 203, in
>> > local_opt
>> > new_op = maker(node.op, context_name, node.inputs, node.outputs)
>> >   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 
>> 1888,
>> > in local_gpua_pool_dnn_grad_stride
>> > inp, out, out_grad, ws, stride, pad = inputs
>> > ValueError: need more than 3 values to unpack
>> >
>> > ERROR (theano.gof.opt): Optimization failure due to:
>> > local_gpua_pool_dnn_grad_stride
>> > ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True,
>> > st=(3, 3), padding=(0, 0), mode='max'}(HostFromGpu(gpuarray).0, 
>> Pool{ds=(3,
>> > 3), ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0,
>> > Reshape{4}.0)
>> > ERROR (theano.gof.opt): TRACEBACK:
>> > ERROR (theano.gof.opt): Traceback (most recent call last):
>> >   File "/home/luca/data/Theano-m

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-11 Thread luca . wagner . 0812
Hi Fred,
I cannot try the new conv interface with float16 and device=cuda because the 
latest Theano version changed class Pool and so I cannot do maxpool3d, as I 
wrote to Pascal in another thread. Thanks

Luca



Re: [theano-users] Error using floatX = float16 to save memory

2016-10-11 Thread luca . wagner . 0812

Hi Fred,
I installed the Theano version that contains "Adding an AbstractConv3d 
interface #4862", https://github.com/Theano/Theano/pull/4862,

but now it doesn't work, because in the previous Theano version class Pool 
had these parameters: ds, ignore_border, st, padding, mode, openmp.
In the latest Theano version class Pool has no ds parameter: only 
ignore_border, mode, openmp.


In maxpool3d.py I was calling op = DownsampleFactorMax((ds[1], ds[2]), 
ignore_border), where DownsampleFactorMax = pool.Pool.

I tried Pool(mode=..., ...)(input, ws=ds) but it doesn't work.
How can I call Pool passing (ds[1], ds[2])?
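A possible shape for the updated call, assuming the window size moved from
the Pool constructor to the pool_2d front-end's ws argument (an untested
sketch; the exact keyword may differ between dev commits):

from theano.tensor.signal.pool import pool_2d

# rows/cols step, replacing DownsampleFactorMax((ds[1], ds[2]), ignore_border)
output = pool_2d(input_4D, ws=(ds[1], ds[2]), ignore_border=ignore_border)
# time step, replacing DownsampleFactorMax((1, ds[0]), ignore_border)
outtime = pool_2d(input_4D_time, ws=(1, ds[0]), ignore_border=ignore_border)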

Many Thanks
Luca

""" Max pooling spatio-temporal inputs for Theano """

from theano import tensor
from theano.tensor.signal.downsample import DownsampleFactorMax

#it was originally ignore_border=False and then corrected as suggested by Pascal
'''Pascal update on ignore_border'''   
def max_pool_3d(input, ds, ignore_border=True):
"""
Takes as input a N-D tensor, where N >= 3. It downscales the input video by
the specified factor, by keeping only the maximum value of non-overlapping
patches of size (ds[0],ds[1],ds[2]) (time, height, width)

:type input: N-D theano tensor of input images.
:param input: input images. Max pooling will be done over the 3 last dimensions.
:type ds: tuple of length 3
:param ds: factor by which to downscale. (2,2,2) will halve the video in each dimension.
:param ignore_border: boolean value. When True, (5,5,5) input with ds=(2,2,2) will generate a
  (2,2,2) output. (3,3,3) otherwise.
"""

if input.ndim < 3:
raise NotImplementedError('max_pool_3d requires a dimension >= 3')

# extract nr dimensions
vid_dim = input.ndim
# max pool in two different steps, so we can use the 2d implementation of 
# downsamplefactormax. First maxpool frames as usual. 
# Then maxpool the time dimension. Shift the time dimension to the third 
# position, so rows and cols are in the back

# extract dimensions
frame_shape = input.shape[-2:]

# count the number of "leading" dimensions, store as dmatrix
# tensor.prod: product of every term in x along axis
batch_size = tensor.prod(input.shape[:-2])
# Reshape x by right padding the shape with n_ones 1s. 
batch_size = tensor.shape_padright(batch_size,1)

# store as 4D tensor with shape: (batch_size,1,height,width)
#tensor.cast
# Cast any tensor x to a Tensor of the same shape, but with a different numerical type dtype.
new_shape = tensor.cast(tensor.join(0, batch_size,
tensor.as_tensor([1,]), 
frame_shape), 'int32')
input_4D = tensor.reshape(input, new_shape, ndim=4)

# downsample mini-batch of videos in rows and cols
op = DownsampleFactorMax((ds[1],ds[2]), ignore_border)
output = op(input_4D)
# restore to original shape
outshape = tensor.join(0, input.shape[:-2], output.shape[-2:])
out = tensor.reshape(output, outshape, ndim=input.ndim)

# now maxpool time

# output (time, rows, cols), reshape so that time is in the back
shufl = (list(range(vid_dim-3)) + [vid_dim-2]+[vid_dim-1]+[vid_dim-3])
input_time = out.dimshuffle(shufl)
# reset dimensions
vid_shape = input_time.shape[-2:]

# count the number of "leading" dimensions, store as dmatrix
batch_size = tensor.prod(input_time.shape[:-2])
batch_size = tensor.shape_padright(batch_size,1)

# store as 4D tensor with shape: (batch_size,1,width,time)
new_shape = tensor.cast(tensor.join(0, batch_size,
tensor.as_tensor([1,]), 
vid_shape), 'int32')
input_4D_time = tensor.reshape(input_time, new_shape, ndim=4)
# downsample mini-batch of videos in time
op = DownsampleFactorMax((1,ds[0]), ignore_border)
outtime = op(input_4D_time)
# output 
# restore to original shape (xxx, rows, cols, time)
outshape = tensor.join(0, input_time.shape[:-2], outtime.shape[-2:])
shufl = (list(range(vid_dim-3)) + [vid_dim-1]+[vid_dim-3]+[vid_dim-2])
return tensor.reshape(outtime, outshape, ndim=input.ndim).dimshuffle(shufl)


Re: [theano-users] Error using floatX = float16 to save memory

2016-10-10 Thread luca . wagner . 0812


On Friday, October 7, 2016 at 5:38:14 PM UTC+2, nouiz wrote:
>
>
>
> On Fri, Oct 7, 2016 at 11:31 AM, Pascal Lamblin  > wrote:
>
>> On Fri, Oct 07, 2016, luca.wag...@gmail.com  wrote:
>> > Hi Fred,
>> > I did a test using:
>> >
>> > theano.tensor.nnet.conv3d2d import conv3d
>>
>> That's the old conv3d2d code, that should not be needed with cuDNN, and
>> that has some pieces that do not work in float16.
>> These are not the problems we should try to solve, we should focus on
>> what happens when using dnn_conv3d instead.
>>
>
> not dnn_conv3d, but the new conv interface: theano.tensor.nnet.conv3d(). 
> Use that one, with floatX=float16 and device=cuda.
>

I don't find this new conv interface, theano.tensor.nnet.conv3d().
Thanks
Luca

>  
>
>>
>> >
>> > this PR: https://github.com/Theano/Theano/pull/4862
>> >
>> > [global]
>> > floatX = float16
>> > device=cuda
>> > [cuda]
>> > root = /usr/local/cuda-7.5
>> >
>> > [nvcc]
>> > fastmath=True
>> >
>> > optimizer = fast_compile
>> >
>> > [dnn.conv]
>> > algo_fwd =  time_once
>> > algo_bwd_filter = time_once
>> > algo_bwd_data = time_once
>> >
>> > The output is much slower than using float32:
>> >
>> > Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 
>> 17:42:40)
>> > [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
>> > Type "help", "copyright", "credits" or "license" for more information.
>> > Anaconda is brought to you by Continuum Analytics.
>> > Please check out: http://continuum.io/thanks and https://anaconda.org
>> > >>>
>> > 
>> runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
>> > 
>> wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
>> > Mapped name None to device cuda: GeForce 840M
>> > WARNING (theano.gof.compilelock): Overriding existing lock by dead 
>> process
>> > '3119' (I am process '3598')
>> > Using cuDNN version 5103 on context None
>> > /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6:
>> > UserWarning: downsample module has been moved to the
>> > theano.tensor.signal.pool module.
>> >   "downsample module has been moved to the theano.tensor.signal.pool
>> > module.")
>> > Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
>> > Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
>> > Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
>> > Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
>> > Disabling C code for Alloc due to unsupported float16
>> > ERROR (theano.gof.opt): SeqOptimizer apply 
>> > > object at 0x7f3944076110>
>> > ERROR (theano.gof.opt): Traceback:
>> > ERROR (theano.gof.opt): Traceback (most recent call last):
>> >   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 235, in 
>> apply
>> > sub_prof = optimizer.optimize(fgraph)
>> >   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in
>> > optimize
>> > ret = self.apply(fgraph, *args, **kwargs)
>> >   File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 
>> 355, in
>> > apply
>> > node.outputs)
>> >   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 
>> 1874,
>> > in local_gpua_pool_dnn_alternative
>> > img, ws, stride, pad = inputs
>> > ValueError: need more than 1 value to unpack
>> >
>> > ERROR (theano.gof.opt): Optimization failure due to:
>> > local_gpua_pool_dnn_grad_stride
>> > ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True,
>> > st=(3, 3), padding=(0, 0), mode='max'}(sigmoid.0, Pool{ds=(3, 3),
>> > ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0, 
>> Reshape{4}.0)
>> > ERROR (theano.gof.opt): TRACEBACK:
>> > ERROR (theano.gof.opt): Traceback (most recent call last):
>> >   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in
>> > process_node
>> > replacements = lopt.transform(node)
>> >   File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 
>> 203, in
>> > local_opt
>> > new_op = maker(node.op, context_name, node.inputs, node.outputs)
>> >   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 
>> 1888,
>> > in local_gpua_pool_dnn_grad_stride
>> > inp, out, out_grad, ws, stride, pad = inputs
>> > ValueError: need more than 3 values to unpack
>> >
>> > ERROR (theano.gof.opt): Optimization failure due to:
>> > local_gpua_pool_dnn_grad_stride
>> > ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True,
>> > st=(3, 3), padding=(0, 0), mode='max'}(HostFromGpu(gpuarray).0, 
>> Pool{ds=(3,
>> > 3), ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0,
>> > Reshape{4}.0)
>> > ERROR (theano.gof.opt): TRACEBACK:
>> > ERROR (theano.gof.opt): Traceback (most recent call last):
>> >   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in
>> > process_node
>> > replacements = lopt.transform(node)
>> >   File "/home/luca/data/Theano-master/theano/gpuarray/

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-10 Thread luca . wagner . 0812
Hi Fred,
first I created an h5py dataset, put the files into it, and then loaded the
data.

I don't use pickle but h5py to load the data:

import h5py
import numpy as np

dataset_list = []   # dataset names, collected by f.visit
dataset_x = []      # the arrays themselves
dataset_yy = []     # values taken from each dataset's attributes

with h5py.File(h5py_dataset, 'r') as f:
    f.visit(dataset_list.append)
    for j in range(len(dataset_list)):
        dataset_x.append(np.array(f.get(dataset_list[j])))
        dataset_attributes = f.get(dataset_list[j]).attrs.values()
        dataset_yy.append(dataset_attributes[2])
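One caveat with this pattern: attrs.values() depends on attribute ordering,
so indexing the third value is fragile. Reading the attribute by name would
be more robust, e.g. (hypothetical key name, adjust to the real one):

        dataset_yy.append(f.get(dataset_list[j]).attrs['label'])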




Re: [theano-users] Error using floatX = float16 to save memory

2016-10-07 Thread Frédéric Bastien
On Fri, Oct 7, 2016 at 11:31 AM, Pascal Lamblin wrote:

> On Fri, Oct 07, 2016, luca.wagner.0...@gmail.com wrote:
> > Hi Fred,
> > I did a test using:
> >
> > theano.tensor.nnet.conv3d2d import conv3d
>
> That's the old conv3d2d code, that should not be needed with cuDNN, and
> that has some pieces that do not work in float16.
> These are not the problems we should try to solve, we should focus on
> what happens when using dnn_conv3d instead.
>

not dnn_conv3d, but the new conv interface: theano.tensor.nnet.conv3d().
Use that one, with floatX=float16 and device=cuda.


>
> >
> > this PR: https://github.com/Theano/Theano/pull/4862
> >
> > [global]
> > floatX = float16
> > device=cuda
> > [cuda]
> > root = /usr/local/cuda-7.5
> >
> > [nvcc]
> > fastmath=True
> >
> > optimizer = fast_compile
> >
> > [dnn.conv]
> > algo_fwd =  time_once
> > algo_bwd_filter = time_once
> > algo_bwd_data = time_once
> >
> > The output is much slower than using float32:
> >
> > Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40)
> > [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
> > Type "help", "copyright", "credits" or "license" for more information.
> > Anaconda is brought to you by Continuum Analytics.
> > Please check out: http://continuum.io/thanks and https://anaconda.org
> > >>>
> > runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-
> ConvNet-master/convnet3d/core/run_multi_conv_t.py',
> > wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-
> ConvNet-master/convnet3d/core')
> > Mapped name None to device cuda: GeForce 840M
> > WARNING (theano.gof.compilelock): Overriding existing lock by dead
> process
> > '3119' (I am process '3598')
> > Using cuDNN version 5103 on context None
> > /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6:
> > UserWarning: downsample module has been moved to the
> > theano.tensor.signal.pool module.
> >   "downsample module has been moved to the theano.tensor.signal.pool
> > module.")
> > Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
> > Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
> > Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
> > Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
> > Disabling C code for Alloc due to unsupported float16
> > ERROR (theano.gof.opt): SeqOptimizer apply  U
> > object at 0x7f3944076110>
> > ERROR (theano.gof.opt): Traceback:
> > ERROR (theano.gof.opt): Traceback (most recent call last):
> >   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 235, in
> apply
> > sub_prof = optimizer.optimize(fgraph)
> >   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in
> > optimize
> > ret = self.apply(fgraph, *args, **kwargs)
> >   File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line
> 355, in
> > apply
> > node.outputs)
> >   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line
> 1874,
> > in local_gpua_pool_dnn_alternative
> > img, ws, stride, pad = inputs
> > ValueError: need more than 1 value to unpack
> >
> > ERROR (theano.gof.opt): Optimization failure due to:
> > local_gpua_pool_dnn_grad_stride
> > ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True,
> > st=(3, 3), padding=(0, 0), mode='max'}(sigmoid.0, Pool{ds=(3, 3),
> > ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0,
> Reshape{4}.0)
> > ERROR (theano.gof.opt): TRACEBACK:
> > ERROR (theano.gof.opt): Traceback (most recent call last):
> >   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in
> > process_node
> > replacements = lopt.transform(node)
> >   File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line
> 203, in
> > local_opt
> > new_op = maker(node.op, context_name, node.inputs, node.outputs)
> >   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line
> 1888,
> > in local_gpua_pool_dnn_grad_stride
> > inp, out, out_grad, ws, stride, pad = inputs
> > ValueError: need more than 3 values to unpack
> >
> > ERROR (theano.gof.opt): Optimization failure due to:
> > local_gpua_pool_dnn_grad_stride
> > ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True,
> > st=(3, 3), padding=(0, 0), mode='max'}(HostFromGpu(gpuarray).0,
> Pool{ds=(3,
> > 3), ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0,
> > Reshape{4}.0)
> > ERROR (theano.gof.opt): TRACEBACK:
> > ERROR (theano.gof.opt): Traceback (most recent call last):
> >   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in
> > process_node
> > replacements = lopt.transform(node)
> >   File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line
> 203, in
> > local_opt
> > new_op = maker(node.op, context_name, node.inputs, node.outputs)
> >   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line
> 1888,
> > in local_gpua_pool_dnn_grad_stride
> > inp, out, out_grad, ws, stride, pad = inputs
> > Valu

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-07 Thread Pascal Lamblin
On Fri, Oct 07, 2016, luca.wagner.0...@gmail.com wrote:
> Hi Fred,
> I did a test using:
> 
> theano.tensor.nnet.conv3d2d import conv3d

That's the old conv3d2d code, that should not be needed with cuDNN, and
that has some pieces that do not work in float16.
These are not the problems we should try to solve, we should focus on
what happens when using dnn_conv3d instead.

> 
> this PR: https://github.com/Theano/Theano/pull/4862
> 
> [global]
> floatX = float16
> device=cuda
> [cuda] 
> root = /usr/local/cuda-7.5
> 
> [nvcc]
> fastmath=True
> 
> optimizer = fast_compile
> 
> [dnn.conv]
> algo_fwd =  time_once
> algo_bwd_filter = time_once
> algo_bwd_data = time_once 
> 
> The output is much slower than using float32:
> 
> Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> Anaconda is brought to you by Continuum Analytics.
> Please check out: http://continuum.io/thanks and https://anaconda.org
> >>> 
> runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
>  
> wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
> Mapped name None to device cuda: GeForce 840M
> WARNING (theano.gof.compilelock): Overriding existing lock by dead process 
> '3119' (I am process '3598')
> Using cuDNN version 5103 on context None
> /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
> UserWarning: downsample module has been moved to the 
> theano.tensor.signal.pool module.
>   "downsample module has been moved to the theano.tensor.signal.pool 
> module.")
> Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
> Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
> Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
> Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
> Disabling C code for Alloc due to unsupported float16
> ERROR (theano.gof.opt): SeqOptimizer apply <... object at 0x7f3944076110>
> ERROR (theano.gof.opt): Traceback:
> ERROR (theano.gof.opt): Traceback (most recent call last):
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 235, in apply
> sub_prof = optimizer.optimize(fgraph)
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in 
> optimize
> ret = self.apply(fgraph, *args, **kwargs)
>   File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 355, in 
> apply
> node.outputs)
>   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1874, 
> in local_gpua_pool_dnn_alternative
> img, ws, stride, pad = inputs
> ValueError: need more than 1 value to unpack
> 
> ERROR (theano.gof.opt): Optimization failure due to: 
> local_gpua_pool_dnn_grad_stride
> ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True, 
> st=(3, 3), padding=(0, 0), mode='max'}(sigmoid.0, Pool{ds=(3, 3), 
> ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0, Reshape{4}.0)
> ERROR (theano.gof.opt): TRACEBACK:
> ERROR (theano.gof.opt): Traceback (most recent call last):
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in 
> process_node
> replacements = lopt.transform(node)
>   File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 203, in 
> local_opt
> new_op = maker(node.op, context_name, node.inputs, node.outputs)
>   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1888, 
> in local_gpua_pool_dnn_grad_stride
> inp, out, out_grad, ws, stride, pad = inputs
> ValueError: need more than 3 values to unpack
> 
> ERROR (theano.gof.opt): Optimization failure due to: 
> local_gpua_pool_dnn_grad_stride
> ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True, 
> st=(3, 3), padding=(0, 0), mode='max'}(HostFromGpu(gpuarray).0, Pool{ds=(3, 
> 3), ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0, 
> Reshape{4}.0)
> ERROR (theano.gof.opt): TRACEBACK:
> ERROR (theano.gof.opt): Traceback (most recent call last):
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in 
> process_node
> replacements = lopt.transform(node)
>   File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 203, in 
> local_opt
> new_op = maker(node.op, context_name, node.inputs, node.outputs)
>   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1888, 
> in local_gpua_pool_dnn_grad_stride
> inp, out, out_grad, ws, stride, pad = inputs
> ValueError: need more than 3 values to unpack
> 
> ERROR (theano.gof.opt): Optimization failure due to: 
> local_gpua_pool_dnn_alternative
> ERROR (theano.gof.opt): node: Pool{ds=(3, 3), ignore_border=True, st=(3, 
> 3), padding=(0, 0), mode='max'}(HostFromGpu(gpuarray).0)
> ERROR (theano.gof.opt): TRACEBACK:
> ERROR (theano.gof.opt): Traceback (most recent call last):
>   File "/home/luca/

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-07 Thread Frédéric Bastien
Great. Now it runs for you.

The slowdown is probably caused by the error you saw.

Do you load old pickled files? That could cause this type of problem. If
not, I'll need a way to reproduce this in order to fix it.

Fred
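
(For context: the unpacking errors above come from the pool local
optimizers, which expect the pooling parameters as node inputs. A purely
illustrative snippet of the mismatch, with made-up lists:)

# Purely illustrative: the rewritten optimizers unpack the pooling
# parameters from the node's inputs, so a Pool node that still carries
# them only as op attributes (e.g. restored from an old pickle) has too
# few inputs and the unpack raises.
inputs = ['img', 'ws', 'stride', 'pad']  # new-style Pool node inputs
img, ws, stride, pad = inputs            # fine

inputs = ['img']                         # old-style node: only the image
try:
    img, ws, stride, pad = inputs
except ValueError as e:
    print(e)  # "need more than 1 value to unpack" on Python 2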

On Fri, Oct 7, 2016 at 8:21 AM,  wrote:

> Hi Fred,
> I did a test using:
>
> from theano.tensor.nnet.conv3d2d import conv3d
>
> this PR: https://github.com/Theano/Theano/pull/4862
>
> [global]
> floatX = float16
> device=cuda
> [cuda]
> root = /usr/local/cuda-7.5
>
> [nvcc]
> fastmath=True
>
> optimizer = fast_compile
>
> [dnn.conv]
> algo_fwd =  time_once
> algo_bwd_filter = time_once
> algo_bwd_data = time_once
>
> The output is much slower than when using float32:
>
> Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40)
> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> Anaconda is brought to you by Continuum Analytics.
> Please check out: http://continuum.io/thanks and https://anaconda.org
> >>> runfile('/home/luca/data/DeepLearningTutorials/Theano-
> 3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
> wdir='/home/luca/data/DeepLearningTutorials/Theano-
> 3D-ConvNet-master/convnet3d/core')
> Mapped name None to device cuda: GeForce 840M
> WARNING (theano.gof.compilelock): Overriding existing lock by dead process
> '3119' (I am process '3598')
> Using cuDNN version 5103 on context None
> /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6:
> UserWarning: downsample module has been moved to the
> theano.tensor.signal.pool module.
>   "downsample module has been moved to the theano.tensor.signal.pool
> module.")
> Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
> Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
> Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
> Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
> Disabling C code for Alloc due to unsupported float16
> ERROR (theano.gof.opt): SeqOptimizer apply <... object at 0x7f3944076110>
> ERROR (theano.gof.opt): Traceback:
> ERROR (theano.gof.opt): Traceback (most recent call last):
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 235, in
> apply
> sub_prof = optimizer.optimize(fgraph)
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in
> optimize
> ret = self.apply(fgraph, *args, **kwargs)
>   File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 355,
> in apply
> node.outputs)
>   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1874,
> in local_gpua_pool_dnn_alternative
> img, ws, stride, pad = inputs
> ValueError: need more than 1 value to unpack
>
> ERROR (theano.gof.opt): Optimization failure due to:
> local_gpua_pool_dnn_grad_stride
> ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True,
> st=(3, 3), padding=(0, 0), mode='max'}(sigmoid.0, Pool{ds=(3, 3),
> ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0, Reshape{4}.0)
> ERROR (theano.gof.opt): TRACEBACK:
> ERROR (theano.gof.opt): Traceback (most recent call last):
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in
> process_node
> replacements = lopt.transform(node)
>   File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 203,
> in local_opt
> new_op = maker(node.op, context_name, node.inputs, node.outputs)
>   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1888,
> in local_gpua_pool_dnn_grad_stride
> inp, out, out_grad, ws, stride, pad = inputs
> ValueError: need more than 3 values to unpack
>
> ERROR (theano.gof.opt): Optimization failure due to:
> local_gpua_pool_dnn_grad_stride
> ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True,
> st=(3, 3), padding=(0, 0), mode='max'}(HostFromGpu(gpuarray).0,
> Pool{ds=(3, 3), ignore_border=True, st=(3, 3), padding=(0, 0),
> mode='max'}.0, Reshape{4}.0)
> ERROR (theano.gof.opt): TRACEBACK:
> ERROR (theano.gof.opt): Traceback (most recent call last):
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in
> process_node
> replacements = lopt.transform(node)
>   File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 203,
> in local_opt
> new_op = maker(node.op, context_name, node.inputs, node.outputs)
>   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1888,
> in local_gpua_pool_dnn_grad_stride
> inp, out, out_grad, ws, stride, pad = inputs
> ValueError: need more than 3 values to unpack
>
> ERROR (theano.gof.opt): Optimization failure due to: local_gpua_pool_dnn_
> alternative
> ERROR (theano.gof.opt): node: Pool{ds=(3, 3), ignore_border=True, st=(3,
> 3), padding=(0, 0), mode='max'}(HostFromGpu(gpuarray).0)
> ERROR (theano.gof.opt): TRACEBACK:
> ERROR (theano.gof.opt): Traceback (most recent call last):
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in
> process_node

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-07 Thread luca . wagner . 0812
Hi Fred,
I did a test using:

from theano.tensor.nnet.conv3d2d import conv3d

this PR: https://github.com/Theano/Theano/pull/4862

[global]
floatX = float16
device=cuda
[cuda] 
root = /usr/local/cuda-7.5

[nvcc]
fastmath=True

optimizer = fast_compile

[dnn.conv]
algo_fwd =  time_once
algo_bwd_filter = time_once
algo_bwd_data = time_once 

The output is much slower than when using float32:

Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
Mapped name None to device cuda: GeForce 840M
WARNING (theano.gof.compilelock): Overriding existing lock by dead process 
'3119' (I am process '3598')
Using cuDNN version 5103 on context None
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
ERROR (theano.gof.opt): SeqOptimizer apply <...>
ERROR (theano.gof.opt): Traceback:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 235, in apply
sub_prof = optimizer.optimize(fgraph)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in 
optimize
ret = self.apply(fgraph, *args, **kwargs)
  File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 355, in 
apply
node.outputs)
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1874, 
in local_gpua_pool_dnn_alternative
img, ws, stride, pad = inputs
ValueError: need more than 1 value to unpack

ERROR (theano.gof.opt): Optimization failure due to: 
local_gpua_pool_dnn_grad_stride
ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True, 
st=(3, 3), padding=(0, 0), mode='max'}(sigmoid.0, Pool{ds=(3, 3), 
ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0, Reshape{4}.0)
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in 
process_node
replacements = lopt.transform(node)
  File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 203, in 
local_opt
new_op = maker(node.op, context_name, node.inputs, node.outputs)
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1888, 
in local_gpua_pool_dnn_grad_stride
inp, out, out_grad, ws, stride, pad = inputs
ValueError: need more than 3 values to unpack

ERROR (theano.gof.opt): Optimization failure due to: 
local_gpua_pool_dnn_grad_stride
ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True, 
st=(3, 3), padding=(0, 0), mode='max'}(HostFromGpu(gpuarray).0, Pool{ds=(3, 
3), ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0, 
Reshape{4}.0)
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in 
process_node
replacements = lopt.transform(node)
  File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 203, in 
local_opt
new_op = maker(node.op, context_name, node.inputs, node.outputs)
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1888, 
in local_gpua_pool_dnn_grad_stride
inp, out, out_grad, ws, stride, pad = inputs
ValueError: need more than 3 values to unpack

ERROR (theano.gof.opt): Optimization failure due to: 
local_gpua_pool_dnn_alternative
ERROR (theano.gof.opt): node: Pool{ds=(3, 3), ignore_border=True, st=(3, 
3), padding=(0, 0), mode='max'}(HostFromGpu(gpuarray).0)
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in 
process_node
replacements = lopt.transform(node)
  File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 203, in 
local_opt
new_op = maker(node.op, context_name, node.inputs, node.outputs)
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1874, 
in local_gpua_pool_dnn_alternative
img, ws, stride, pad = inputs
ValueError: need more than 1 value to unpack

ERROR (theano.gof.opt): Optimization failure due to: 
local_gpua_pool_dnn_grad_stride

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-07 Thread luca . wagner . 0812
Hi Fred,
I did the test using:

theano.tensor.nnet.conv3d2d.conv3d

I updated the code with https://github.com/Theano/Theano/pull/4862 


.theanorc:
[global]
floatX = float16
device=cuda
[cuda] 
root = /usr/local/cuda-7.5
[nvcc]
fastmath=True

optimizer = fast_compile

[dnn.conv]
algo_fwd =  time_once
algo_bwd_filter = time_once
algo_bwd_data = time_once 

This is the output, much slower than when using float32:

Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
Mapped name None to device cuda: GeForce 840M
WARNING (theano.gof.compilelock): Overriding existing lock by dead process 
'3119' (I am process '3598')
Using cuDNN version 5103 on context None
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
ERROR (theano.gof.opt): SeqOptimizer apply <...>
ERROR (theano.gof.opt): Traceback:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 235, in apply
sub_prof = optimizer.optimize(fgraph)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in 
optimize
ret = self.apply(fgraph, *args, **kwargs)
  File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 355, in 
apply
node.outputs)
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1874, 
in local_gpua_pool_dnn_alternative
img, ws, stride, pad = inputs
ValueError: need more than 1 value to unpack

ERROR (theano.gof.opt): Optimization failure due to: 
local_gpua_pool_dnn_grad_stride
ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True, 
st=(3, 3), padding=(0, 0), mode='max'}(sigmoid.0, Pool{ds=(3, 3), 
ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0, Reshape{4}.0)
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in 
process_node
replacements = lopt.transform(node)
  File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 203, in 
local_opt
new_op = maker(node.op, context_name, node.inputs, node.outputs)
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1888, 
in local_gpua_pool_dnn_grad_stride
inp, out, out_grad, ws, stride, pad = inputs
ValueError: need more than 3 values to unpack

ERROR (theano.gof.opt): Optimization failure due to: 
local_gpua_pool_dnn_grad_stride
ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True, 
st=(3, 3), padding=(0, 0), mode='max'}(HostFromGpu(gpuarray).0, Pool{ds=(3, 
3), ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0, 
Reshape{4}.0)
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in 
process_node
replacements = lopt.transform(node)
  File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 203, in 
local_opt
new_op = maker(node.op, context_name, node.inputs, node.outputs)
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1888, 
in local_gpua_pool_dnn_grad_stride
inp, out, out_grad, ws, stride, pad = inputs
ValueError: need more than 3 values to unpack

ERROR (theano.gof.opt): Optimization failure due to: 
local_gpua_pool_dnn_alternative
ERROR (theano.gof.opt): node: Pool{ds=(3, 3), ignore_border=True, st=(3, 
3), padding=(0, 0), mode='max'}(HostFromGpu(gpuarray).0)
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in 
process_node
replacements = lopt.transform(node)
  File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 203, in 
local_opt
new_op = maker(node.op, context_name, node.inputs, node.outputs)
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1874, 
in local_gpua_pool_dnn_alternative

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-06 Thread Frédéric Bastien
For float16, always use
device=cuda

Not device=gpu. This could be your problem. Can you test that?

thanks

Fred
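
A quick way to confirm which back-end is active (a sketch; with
device=cuda the import banner reads "Mapped name None to device cuda:
...", while the old back-end prints "Using gpu device 0: ..."):

# A sketch: verify the new back-end and float16 are actually in use.
import theano
print(theano.config.device)  # expect 'cuda' or 'cuda0', not 'gpu'
print(theano.config.floatX)  # expect 'float16'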

On Tue, Oct 4, 2016 at 10:21 AM,  wrote:

> Hi Fred,
>  I tested the convnet using
>
>  floatX= float32,
> device=gpu
> theano.tensor.nnet.conv3d2d.conv3d
> updated theano/sandbox/cuda/blas.py  downloaded from
> https://github.com/Theano/Theano/pull/5050
> 
>
> The convnet converges:
>
> Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40)
> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> Anaconda is brought to you by Continuum Analytics.
> Please check out: http://continuum.io/thanks and https://anaconda.org
> >>> runfile('/home/luca/data/DeepLearningTutorials/Theano-
> 3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
> wdir='/home/luca/data/DeepLearningTutorials/Theano-
> 3D-ConvNet-master/convnet3d/core')
> Using gpu device 0: GeForce 840M (CNMeM is disabled, cuDNN 5103)
> /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6:
> UserWarning: downsample module has been moved to the
> theano.tensor.signal.pool module.
>   "downsample module has been moved to the theano.tensor.signal.pool
> module.")
>
>
> start time:
> 04/10/2016
> 16:18:13
>
>
> Images for training: 316
> Images for validation: 56
>
> training @ iter =  0
> training cost 0.69672
> epoch 1, training batch 316/316, validation error 37.500 %
> --
>
> If I make the same test using:
> floatX = float16
> device=gpu
> theano.tensor.nnet.conv3d2d.conv3d
> updated theano/sandbox/cuda/blas.py  downloaded from
> https://github.com/Theano/Theano/pull/5050
> 
>
> I have an error running the  convnet:
> Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40)
> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> Anaconda is brought to you by Continuum Analytics.
> Please check out: http://continuum.io/thanks and https://anaconda.org
> >>> runfile('/home/luca/data/DeepLearningTutorials/Theano-
> 3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
> wdir='/home/luca/data/DeepLearningTutorials/Theano-
> 3D-ConvNet-master/convnet3d/core')
> Using gpu device 0: GeForce 840M (CNMeM is disabled, cuDNN 5103)
> /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6:
> UserWarning: downsample module has been moved to the
> theano.tensor.signal.pool module.
>   "downsample module has been moved to the theano.tensor.signal.pool
> module.")
> Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
> Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
> Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
> Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
> Disabling C code for Alloc due to unsupported float16
> Disabling C code for Elemwise{abs_,no_inplace} due to unsupported float16
> Disabling C code for Sum{acc_dtype=float32} due to unsupported float16
> Disabling C code for mrg_uniform{TensorType(float16, matrix),inplace} due
> to unsupported float16
> Disabling C code for mrg_uniform{TensorType(float16, matrix),inplace} due
> to unsupported float16
> Disabling C code for Elemwise{Composite{(-Cast{float16}((i0 / i1)))}} due
> to unsupported float16
> Disabling C code for Elemwise{Composite{Cast{float16}(Cast{int64}(LT(i0,
> i1)))}}[(0, 0)] due to unsupported float16
> Disabling C code for Elemwise{Composite{Cast{float16}(Cast{int64}(LT(i0,
> i1)))}}[(0, 0)] due to unsupported float16
> Disabling C code for CorrMM{valid, (1, 1), (1, 1)} due to unsupported
> float16
> Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
> Disabling C code for Sum{axis=[3], acc_dtype=float32} due to unsupported
> float16
> Disabling C code for Elemwise{Add}[(0, 0)] due to unsupported float16
> Disabling C code for sigmoid due to unsupported float16
> Disabling C code for Pool{ds=(3, 3), ignore_border=True, st=(3, 3),
> padding=(0, 0), mode='max'} due to unsupported float16
> Disabling C code for Pool{ds=(1, 3), ignore_border=True, st=(1, 3),
> padding=(0, 0), mode='max'} due to unsupported float16
> Disabling C code for dot due to unsupported float16
> Disabling C code for Elemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0,
> 0)] due to unsupported float16
> Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
> Disabling C code for dot due to unsupported float16
> Disabling C code for CrossentropySoftmaxArgmax1HotWithBias due to
> unsupported float16
> Disabling C code for CrossentropySoftmax1HotWithBiasDx due to unsupported
> float16
> Disabling C code for Sum{acc_dtype=float32} due to unsupported float16

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-04 Thread luca . wagner . 0812
Hi Fred,
 I tested the convnet using

 floatX= float32,
device=gpu
theano.tensor.nnet.conv3d2d.conv3d
updated theano/sandbox/cuda/blas.py  downloaded from 
https://github.com/Theano/Theano/pull/5050 


The convnet converges:

Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
Using gpu device 0: GeForce 840M (CNMeM is disabled, cuDNN 5103)
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")


start time:
04/10/2016
16:18:13


Images for training: 316
Images for validation: 56

training @ iter =  0
training cost 0.69672
epoch 1, training batch 316/316, validation error 37.500 %
--

If I make the same test using:
floatX = float16
device=gpu
theano.tensor.nnet.conv3d2d.conv3d
updated theano/sandbox/cuda/blas.py  downloaded from 
https://github.com/Theano/Theano/pull/5050 


I have an error running the  convnet:
Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
Using gpu device 0: GeForce 840M (CNMeM is disabled, cuDNN 5103)
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for Elemwise{abs_,no_inplace} due to unsupported float16
Disabling C code for Sum{acc_dtype=float32} due to unsupported float16
Disabling C code for mrg_uniform{TensorType(float16, matrix),inplace} due 
to unsupported float16
Disabling C code for mrg_uniform{TensorType(float16, matrix),inplace} due 
to unsupported float16
Disabling C code for Elemwise{Composite{(-Cast{float16}((i0 / i1)))}} due 
to unsupported float16
Disabling C code for Elemwise{Composite{Cast{float16}(Cast{int64}(LT(i0, 
i1)))}}[(0, 0)] due to unsupported float16
Disabling C code for Elemwise{Composite{Cast{float16}(Cast{int64}(LT(i0, 
i1)))}}[(0, 0)] due to unsupported float16
Disabling C code for CorrMM{valid, (1, 1), (1, 1)} due to unsupported 
float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for Sum{axis=[3], acc_dtype=float32} due to unsupported 
float16
Disabling C code for Elemwise{Add}[(0, 0)] due to unsupported float16
Disabling C code for sigmoid due to unsupported float16
Disabling C code for Pool{ds=(3, 3), ignore_border=True, st=(3, 3), 
padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Pool{ds=(1, 3), ignore_border=True, st=(1, 3), 
padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for dot due to unsupported float16
Disabling C code for Elemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)] 
due to unsupported float16
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for dot due to unsupported float16
Disabling C code for CrossentropySoftmaxArgmax1HotWithBias due to 
unsupported float16
Disabling C code for CrossentropySoftmax1HotWithBiasDx due to unsupported 
float16
Disabling C code for Sum{acc_dtype=float32} due to unsupported float16
Disabling C code for Sum{axis=[0], acc_dtype=float32} due to unsupported 
float16
Disabling C code for dot due to unsupported float16
Disabling C code for dot due to unsupported float16
Disabling C code for Elemwise{Composite{(i0 - (i1 * i2))}}[(0, 0)] due to 
unsupported float16

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-04 Thread luca . wagner . 0812
thanks

On Monday, October 3, 2016 at 6:11:10 PM UTC+2, Pascal Lamblin wrote:
>
> On Mon, Oct 03, 2016, luca.wag...@gmail.com  wrote: 
> > floatX = float32 
> > device=cuda0 
> > dnn.conv.algo_fwd =  time_once 
> > dnn.conv.algo_bwd_filter = time_once 
> > dnn.conv.algo_bwd_data = time_once 
>
> In the .theanorc, you have to use sections, for instance: 
>
> [dnn.conv] 
> algo_fwd =  time_once 
> algo_bwd_filter = time_once 
> algo_bwd_data = time_once 
>
> > 
> > Using theano.gpuarray.dnn.dnn_conv  the output is: ValueError: 
> > ("convolution algo %s can't be used for 3d convolutions", ('small',)) 
> > Same output with float16. 
> > 
> > 
> > If I use  theano.sandbox.cuda.dnn.dnn_conv3d  with Theano flags 
> > floatX = float16 
> > device=cuda0 
> > dnn.conv.algo_fwd =  time_once 
> > dnn.conv.algo_bwd_filter = time_once 
> > dnn.conv.algo_bwd_data = time_once 
> > 
> > the output is: TypeError: CudaNdarrayType only supports dtype float32 
> for 
> > now. Tried using dtype float16 for variable None 
> > 
> > 
> > 
> > -- 
> > 
> > --- 
> > You received this message because you are subscribed to the Google 
> Groups "theano-users" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an email to theano-users...@googlegroups.com . 
> > For more options, visit https://groups.google.com/d/optout. 
>
>
> -- 
> Pascal 
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Error using floatX = float16 to save memory

2016-10-04 Thread luca . wagner . 0812
Hi Fred,
following your fix I did this test:

1) I installed the updated theano/gpuarray/dnn.py from 
https://github.com/Theano/Theano/pull/5050 


2) .theanorc is:

[global]
floatX = float16
device = cuda

[cuda] 
root = /usr/local/cuda-7.5

[nvcc]
fastmath=True

optimizer =fast_compile

[dnn.conv]
algo_fwd =  time_once
algo_bwd_filter = time_once
algo_bwd_data = time_once 

3) I use theano.gpuarray.dnn.dnn_conv

out = dnn_conv(img=input,
               kerns=self.W,
               border_mode='valid',
               subsample=(1, 1, 1),
               conv_mode='conv',
               direction_hint=None,
               workmem=None,
               algo=None,
               precision=None)

This is the output running a test convnet; it seems quite slow:

Python 2.7.12 |Anaconda 4.2.0 (64-bit)| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core/run_multi_conv.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-Convnet-master/convnet3d/core')
Mapped name None to device cuda: Tesla K40c
Using cuDNN version 5103 on context None
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
/home/luca/anaconda2/lib/python2.7/site-packages/scipy/ndimage/interpolation.py:568:
 
UserWarning: From scipy 0.13.0, the output shape of zoom() is calculated 
with round() instead of int() - for these inputs the size of the returned 
array has changed.
  "the returned array has changed.", UserWarning)
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
ERROR (theano.gof.opt): SeqOptimizer apply <...>
ERROR (theano.gof.opt): Traceback:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 235, in apply
sub_prof = optimizer.optimize(fgraph)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in 
optimize
ret = self.apply(fgraph, *args, **kwargs)
  File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 355, in 
apply
node.outputs)
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1874, 
in local_gpua_pool_dnn_alternative
img, ws, stride, pad = inputs
ValueError: need more than 1 value to unpack

ERROR (theano.gof.opt): Optimization failure due to: 
local_gpua_pool_dnn_grad_stride
ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True, 
st=(3, 3), padding=(0, 0), mode='max'}(sigmoid.0, Pool{ds=(3, 3), 
ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0, Reshape{4}.0)
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in 
process_node
replacements = lopt.transform(node)
  File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 203, in 
local_opt
new_op = maker(node.op, context_name, node.inputs, node.outputs)
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1888, 
in local_gpua_pool_dnn_grad_stride
inp, out, out_grad, ws, stride, pad = inputs
ValueError: need more than 3 values to unpack

ERROR (theano.gof.opt): Optimization failure due to: 
local_gpua_pool_dnn_grad_stride
ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), ignore_border=True, 
st=(3, 3), padding=(0, 0), mode='max'}(sigmoid.0, Pool{ds=(3, 3), 
ignore_border=True, st=(3, 3), padding=(0, 0), mode='max'}.0, Reshape{4}.0)
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in 
process_node
replacements = lopt.transform(node)
  File "/home/luca/data/Theano-master/theano/gpuarray/opt.py", line 203, in 
local_opt
new_op = maker(node.op, context_name, node.inputs, node.outputs)
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 1888, 
in local_gpua_pool_dnn_grad_stride
inp, out, out_grad, ws, stride, pad = inputs
ValueError: need more than 3 values to unpack

ERROR (theano.gof.opt): Optimization failure due to: 
local_gpua_pool_dnn_grad_stride
ERROR (theano.gof.opt): node: MaxPoolGrad{ds=(3, 3), 

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-03 Thread Frédéric Bastien
I have a fix in the new back-end about the error:

https://github.com/Theano/Theano/pull/5050

So you have a few ways to get it to work: use the config as Pascal wrote
(the easiest) or try this new PR.



On Mon, Oct 3, 2016 at 12:11 PM, Pascal Lamblin 
wrote:

> On Mon, Oct 03, 2016, luca.wagner.0...@gmail.com wrote:
> > floatX = float32
> > device=cuda0
> > dnn.conv.algo_fwd =  time_once
> > dnn.conv.algo_bwd_filter = time_once
> > dnn.conv.algo_bwd_data = time_once
>
> In the .theanorc, you have to use sections, for instance:
>
> [dnn.conv]
> algo_fwd =  time_once
> algo_bwd_filter = time_once
> algo_bwd_data = time_once
>
> >
> > Using theano.gpuarray.dnn.dnn_conv  the output is: ValueError:
> > ("convolution algo %s can't be used for 3d convolutions", ('small',))
> > Same output with float16.
> >
> >
> > If I use  theano.sandbox.cuda.dnn.dnn_conv3d  with Theano flags
> > floatX = float16
> > device=cuda0
> > dnn.conv.algo_fwd =  time_once
> > dnn.conv.algo_bwd_filter = time_once
> > dnn.conv.algo_bwd_data = time_once
> >
> > the output is: TypeError: CudaNdarrayType only supports dtype float32 for
> > now. Tried using dtype float16 for variable None
> >
> >
> >
> > --
> >
> > ---
> > You received this message because you are subscribed to the Google
> Groups "theano-users" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> an email to theano-users+unsubscr...@googlegroups.com.
> > For more options, visit https://groups.google.com/d/optout.
>
>
> --
> Pascal
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Error using floatX = float16 to save memory

2016-10-03 Thread Pascal Lamblin
On Mon, Oct 03, 2016, luca.wagner.0...@gmail.com wrote:
> floatX = float32
> device=cuda0
> dnn.conv.algo_fwd =  time_once 
> dnn.conv.algo_bwd_filter = time_once
> dnn.conv.algo_bwd_data = time_once

In the .theanorc, you have to use sections, for instance:

[dnn.conv]
algo_fwd =  time_once 
algo_bwd_filter = time_once
algo_bwd_data = time_once
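
Put together with the flags used elsewhere in this thread, a complete
.theanorc would look like this (a sketch; the float16/cuda values are
this thread's use case):

[global]
floatX = float16
device = cuda

[dnn.conv]
algo_fwd = time_once
algo_bwd_filter = time_once
algo_bwd_data = time_once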

> 
> Using theano.gpuarray.dnn.dnn_conv  the output is: ValueError: 
> ("convolution algo %s can't be used for 3d convolutions", ('small',))
> Same output with float16.
> 
> 
> If I use  theano.sandbox.cuda.dnn.dnn_conv3d  with Theano flags
> floatX = float16
> device=cuda0
> dnn.conv.algo_fwd =  time_once 
> dnn.conv.algo_bwd_filter = time_once
> dnn.conv.algo_bwd_data = time_once
> 
> the output is: TypeError: CudaNdarrayType only supports dtype float32 for 
> now. Tried using dtype float16 for variable None
> 
> 
> 
> -- 
> 
> --- 
> You received this message because you are subscribed to the Google Groups 
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.


-- 
Pascal

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Error using floatX = float16 to save memory

2016-10-03 Thread Frédéric Bastien
theano/sandbox/cuda is the old gpu back-end that works with device=gpu*.
theano/gpuarray is the new back-end with device=cuda*. Don't mix them;
it won't work.

Can you try this PR: https://github.com/Theano/Theano/pull/4862

and use theano.tensor.nnet.conv3d()?

This adds the same good user interface as conv2d, but for conv3d.
Hopefully, we will merge it this week.
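
A minimal sketch of that interface, assuming it mirrors conv2d but with
5d tensors (batch, channels, depth, rows, cols); names and shapes below
are hypothetical:

# A minimal sketch of theano.tensor.nnet.conv3d, assuming the PR's
# interface mirrors conv2d. Names and shapes are hypothetical.
import theano
import theano.tensor as T

tensor5 = T.TensorType(theano.config.floatX, (False,) * 5)
volumes = tensor5('volumes')  # (batch, channels, depth, rows, cols)
filters = tensor5('filters')  # (n_filters, channels, kd, kr, kc)

out = T.nnet.conv3d(volumes, filters, border_mode='valid',
                    subsample=(1, 1, 1))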

On Mon, Oct 3, 2016 at 4:25 AM,  wrote:

> Hi Pascal,
> thanks for your answer.
> In .theanorc I set Theano flag:
>
> floatX = float32
> device=cuda0
> dnn.conv.algo_fwd =  time_once
> dnn.conv.algo_bwd_filter = time_once
> dnn.conv.algo_bwd_data = time_once
>
> Using theano.gpuarray.dnn.dnn_conv  the output is: ValueError:
> ("convolution algo %s can't be used for 3d convolutions", ('small',))
> Same output with float16.
>
>
> If I use  theano.sandbox.cuda.dnn.dnn_conv3d  with Theano flags
> floatX = float16
> device=cuda0
> dnn.conv.algo_fwd =  time_once
> dnn.conv.algo_bwd_filter = time_once
> dnn.conv.algo_bwd_data = time_once
>
> the output is: TypeError: CudaNdarrayType only supports dtype float32 for
> now. Tried using dtype float16 for variable None
>
>
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Error using floatX = float16 to save memory

2016-10-03 Thread luca . wagner . 0812
Hi Pascal,
thanks for your answer.
In .theanorc I set Theano flag:

floatX = float32
device=cuda0
dnn.conv.algo_fwd =  time_once 
dnn.conv.algo_bwd_filter = time_once
dnn.conv.algo_bwd_data = time_once

Using theano.gpuarray.dnn.dnn_conv  the output is: ValueError: 
("convolution algo %s can't be used for 3d convolutions", ('small',))
Same output with float16.


If I use  theano.sandbox.cuda.dnn.dnn_conv3d  with Theano flags
floatX = float16
device=cuda0
dnn.conv.algo_fwd =  time_once 
dnn.conv.algo_bwd_filter = time_once
dnn.conv.algo_bwd_data = time_once

the output is: TypeError: CudaNdarrayType only supports dtype float32 for 
now. Tried using dtype float16 for variable None



-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Error using floatX = float16 to save memory

2016-09-30 Thread Pascal Lamblin
Forwarding the response to the ML:

The default algo selected by Theano is not correct for conv3d (it only
exists for 2D); this should be fixed.
In the mean time, try:
[dnn.conv]
algo_fwd = time_once
algo_bwd_filter = time_once
algo_bwd_data = time_once
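
The same settings can also be passed through the THEANO_FLAGS
environment variable; a sketch (the flags use the dotted config names
and must be set before theano is first imported):

# A sketch: the same cuDNN algo workaround via THEANO_FLAGS instead of
# .theanorc.
import os
os.environ['THEANO_FLAGS'] = ','.join([
    'dnn.conv.algo_fwd=time_once',
    'dnn.conv.algo_bwd_filter=time_once',
    'dnn.conv.algo_bwd_data=time_once',
])
import theano  # reads the flags at import time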

On Fri, Sep 30, 2016, luca.wagner.0...@gmail.com wrote:
> Hi Pascal,
> 
> I did the previous test using
> [global]
> floatX = float32
> device=gpu
> [cuda] 
> 
> Following your answer I did another test with
> floatX = float32
> device=cuda0
> 
> 
> but it doesn't work: ValueError: ("convolution algo %s can't be used for 3d 
> convolutions", ('small',))
> 
> Using
> floatX = float16
> device=cuda0
> I have the same error: ValueError: ("convolution algo %s can't be used for 
> 3d convolutions", ('small',))
>  
> 
> 
> This is the output:
> Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> Anaconda is brought to you by Continuum Analytics.
> Please check out: http://continuum.io/thanks and https://anaconda.org
> >>> runfile('/home/luca/data/
> DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
>  
> wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
> Mapped name None to device cuda0: GeForce 840M
> Using cuDNN version 5103 on context None
> /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
> UserWarning: downsample module has been moved to the 
> theano.tensor.signal.pool module.
>   "downsample module has been moved to the theano.tensor.signal.pool 
> module.")
> Traceback (most recent call last):
>   File "", line 1, in 
>   File 
> "/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py",
>  
> line 714, in runfile
> execfile(filename, namespace)
>   File 
> "/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py",
>  
> line 81, in execfile
> builtins.execfile(filename, *where)
>   File 
> "/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py",
>  
> line 32, in 
> run_experiments()
>   File 
> "/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py",
>  
> line 25, in run_experiments
> Learning_rate=0.001 
>   File "mpr_convnet_class_t.py", line 169, in __init__
> b )
>   File "cuddn_convnet3d.py", line 113, in __init__
> precision=None)   
>   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 958, in 
> dnn_conv
> return gpu_dnn_conv(algo=algo)(img, kerns, out, desc)
>   File "/home/luca/data/Theano-master/theano/gof/op.py", line 602, in 
> __call__
> node = self.make_node(*inputs, **kwargs)
>   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 513, in 
> make_node
> "3d convolutions", (self.algo,))
> ValueError: ("convolution algo %s can't be used for 3d convolutions", 
> ('small',))
> 
> -- 
> 
> --- 
> You received this message because you are subscribed to the Google Groups 
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.


-- 
Pascal

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Error using floatX = float16 to save memory

2016-09-30 Thread luca . wagner . 0812
Hi Pascal,

I did the previous test using
[global]
floatX = float32
device=gpu
[cuda] 

Following your answer I did another test with
floatX = float32
device=cuda0


but it doesn't work: ValueError: ("convolution algo %s can't be used for 3d 
convolutions", ('small',))

Using
floatX = float16
device=cuda0
I have the same error: ValueError: ("convolution algo %s can't be used for 
3d convolutions", ('small',))
 


This is the output:
Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> runfile('/home/luca/data/
DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
Mapped name None to device cuda0: GeForce 840M
Using cuDNN version 5103 on context None
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
Traceback (most recent call last):
  File "", line 1, in 
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py",
 
line 714, in runfile
execfile(filename, namespace)
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py",
 
line 81, in execfile
builtins.execfile(filename, *where)
  File 
"/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py",
 
line 32, in 
run_experiments()
  File 
"/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py",
 
line 25, in run_experiments
Learning_rate=0.001 
  File "mpr_convnet_class_t.py", line 169, in __init__
b )
  File "cuddn_convnet3d.py", line 113, in __init__
precision=None)   
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 958, in 
dnn_conv
return gpu_dnn_conv(algo=algo)(img, kerns, out, desc)
  File "/home/luca/data/Theano-master/theano/gof/op.py", line 602, in 
__call__
node = self.make_node(*inputs, **kwargs)
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 513, in 
make_node
"3d convolutions", (self.algo,))
ValueError: ("convolution algo %s can't be used for 3d convolutions", 
('small',))

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Error using floatX = float16 to save memory

2016-09-30 Thread luca . wagner . 0812
Hi Pascal,
I tried to use theano.gpuarray.dnn.dnn_conv but it doesn't work:
raise ValueError("Could not infer context from inputs")
ValueError: Could not infer context from inputs

from theano.gpuarray.dnn import dnn_conv

out = dnn_conv(img=input,
               kerns=self.W,
               border_mode='valid',
               subsample=(1, 1, 1),
               conv_mode='conv',
               direction_hint=None,
               workmem=None,
               algo=None,
               precision=None)


This is the output testing the small 3dconvnet:


Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> runfile('/run/media/luca/8C9A-AEF4/core/run_multi_conv_t.py', 
wdir='/run/media/luca/8C9A-AEF4/core')
Using gpu device 0: Tesla K40c (CNMeM is disabled, cuDNN 5103)
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
Traceback (most recent call last):
  File "", line 1, in 
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py",
 
line 714, in runfile
execfile(filename, namespace)
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py",
 
line 81, in execfile
builtins.execfile(filename, *where)
  File "/run/media/luca/8C9A-AEF4/core/run_multi_conv_t.py", line 32, in 

run_experiments()
  File "/run/media/luca/8C9A-AEF4/core/run_multi_conv_t.py", line 25, in 
run_experiments
Learning_rate=0.001,   
  File "mpr_convnet_class_t.py", line 159, in __init__
b )
  File "convnet3d.py", line 120, in __init__
precision=None) 
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 904, in 
dnn_conv
ctx_name = infer_context_name(img, kerns)
  File "/home/luca/data/Theano-master/theano/gpuarray/basic_ops.py", line 
121, in infer_context_name
raise ValueError("Could not infer context from inputs")
ValueError: Could not infer context from inputs
>>> 

Many thanks
Luca

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Error using floatX = float16 to save memory

2016-09-29 Thread Pascal Lamblin
Hi Luca,

There is no 3D-specific Op in theano.gpuarray; the same GpuDnnConv Op
that is used for the 2D case also handles 3D if the number of dimensions
of the input tensors requires it.

On Thu, Sep 29, 2016, luca.wagner.0...@gmail.com wrote:
> Fred,
> I started to look in cuddn:
> I don't find cudnn dnn_conv3d in gpuarray.dnn.dnn_conv3d; what I found is 
> class theano.sandbox.cuda.dnn.GpuDnnConv3d in 
> http://deeplearning.net/software/theano/library/sandbox/cuda/dnn.html
> 
> while in http://deeplearning.net/software/theano/library/gpuarray/dnn.html
> I don't see any 3dconv op.
> 
> Many thanks
> Luca
> 
> -- 
> 
> --- 
> You received this message because you are subscribed to the Google Groups 
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to theano-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.


-- 
Pascal

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Error using floatX = float16 to save memory

2016-09-29 Thread luca . wagner . 0812
Fred,
I used theano.sandbox.cuda.dnn.dnn_conv3d instead of
theano.tensor.nnet.conv3d2d.conv3d
It works with floatX=float32  and device=gpu but it doesn't work with 
floatX=float16  and device=cuda:
TypeError: CudaNdarrayType only supports dtype float32 for now. Tried using 
dtype float16 for variable None

I also tried to put precision='float16'  in dnn_conv3d but nothing changed.



out = dnn_conv3d(img=input,
                 kerns=self.W,
                 border_mode='valid',
                 subsample=(1, 1, 1),
                 conv_mode='conv',
                 direction_hint=None,
                 workmem=None,
                 algo=None,
                 precision=None)
This is the output testing the small 3dconvnet:

Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
Mapped name None to device cuda: GeForce 840M
Using cuDNN version 5103 on context None
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
Traceback (most recent call last):
  File "", line 1, in 
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py",
 
line 714, in runfile
execfile(filename, namespace)
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py",
 
line 81, in execfile
builtins.execfile(filename, *where)
  File 
"/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py",
 
line 33, in 
run_experiments()
  File 
"/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv_t.py",
 
line 26, in run_experiments
Zoom=0.5
  File "mpr_convnet_class_t.py", line 171, in __init__
b )
  File "cuddn_convnet3d.py", line 100, in __init__
precision=None
  File "/home/luca/data/Theano-master/theano/sandbox/cuda/dnn.py", line 
1283, in dnn_conv3d
img = gpu_contiguous(img)
  File "/home/luca/data/Theano-master/theano/gof/op.py", line 602, in 
__call__
node = self.make_node(*inputs, **kwargs)
  File "/home/luca/data/Theano-master/theano/sandbox/cuda/basic_ops.py", 
line 3963, in make_node
input = as_cuda_ndarray_variable(input)
  File "/home/luca/data/Theano-master/theano/sandbox/cuda/basic_ops.py", 
line 46, in as_cuda_ndarray_variable
return gpu_from_host(tensor_x)
  File "/home/luca/data/Theano-master/theano/gof/op.py", line 602, in 
__call__
node = self.make_node(*inputs, **kwargs)
  File "/home/luca/data/Theano-master/theano/sandbox/cuda/basic_ops.py", 
line 139, in make_node
dtype=x.dtype)()])
  File "/home/luca/data/Theano-master/theano/sandbox/cuda/type.py", line 
95, in __init__
(self.__class__.__name__, dtype, name))
TypeError: CudaNdarrayType only supports dtype float32 for now. Tried using 
dtype float16 for variable None
>>>


-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Error using floatX = float16 to save memory

2016-09-29 Thread luca . wagner . 0812
Sorry,
I found dnn_conv3d in theano.sandbox.cuda.dnn



thanks
Luca



-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Error using floatX = float16 to save memory

2016-09-29 Thread luca . wagner . 0812
Fred,
I started to look in cuddn:
I don't find a cudnn dnn_conv3d in gpuarray.dnn; what I found is the
class theano.sandbox.cuda.dnn.GpuDnnConv3d in
http://deeplearning.net/software/theano/library/sandbox/cuda/dnn.html

while in http://deeplearning.net/software/theano/library/gpuarray/dnn.html
I don't see any 3dconv op.

Many thanks
Luca

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Error using floatX = float16 to save memory

2016-09-23 Thread luca . wagner . 0812
Many thanks

On Thursday, September 22, 2016 at 11:52:59 PM UTC+2, Arnaud Bergeron wrote:
>
> Actually I believe that your code is using conv3d2d in order to get it 
> working with 3d convolutions on the GPU.  This is now directly supported by 
> cudnn if you use dnn_conv() with 5d objects from the new backend.
>
> There is some work around an abstract conv3d interface which will 
> hopefully be complete soon that you might be able to use.
>
> 2016-09-19 4:33 GMT-04:00 >:
>
>> Hi Fred,
>> I thank you very much for your help and I hope that  DiagonalSubtensor 
>> and  IncDiagonalSubtensor may be supported on GPU with float16 
>>
>> Many thanks
>> Luca
>>



Re: [theano-users] Error using floatX = float16 to save memory

2016-09-22 Thread Arnaud Bergeron
Actually I believe that your code is using conv3d2d in order to get it
working with 3d convolutions on the GPU.  This is now directly supported by
cudnn if you use dnn_conv() with 5d objects from the new backend.

There is some work around an abstract conv3d interface which will hopefully
be complete soon that you might be able to use.
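
A minimal sketch of what this describes, under the assumption that a dev
install exposes dnn_conv in theano.gpuarray.dnn and dispatches 5d
(batch, channels, depth, rows, cols) inputs to a cudnn 3d convolution:

import theano.tensor as T
from theano.gpuarray import dnn

vols = T.tensor5('vols')    # (batch, channels, depth, rows, cols)
kerns = T.tensor5('kerns')  # (n_filters, channels, kd, kr, kc)
out = dnn.dnn_conv(vols, kerns, border_mode='valid')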

2016-09-19 4:33 GMT-04:00 :

> Hi Fred,
> I thank you very much for your help and I hope that  DiagonalSubtensor
> and  IncDiagonalSubtensor may be supported on GPU with float16
>
> Many thanks
> Luca
>



Re: [theano-users] Error using floatX = float16 to save memory

2016-09-19 Thread luca . wagner . 0812
Hi Fred,
I thank you very much for your help and I hope that  DiagonalSubtensor and  
IncDiagonalSubtensor may be supported on GPU with float16 

Many thanks
Luca



Re: [theano-users] Error using floatX = float16 to save memory

2016-09-06 Thread luca . wagner . 0812
The error:

MemoryError: Error allocating 175446 bytes of device memory
(CNMEM_STATUS_OUT_OF_MEMORY).
Apply node that caused the error: IncDiagonalSubtensor

disappears if I remove:

[lib]
cnmem = 1

from .theanorc.
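
A possible middle ground, not something tested in this thread: cnmem also
accepts a fraction of GPU memory, so the pool can be kept smaller instead of
disabled entirely:

[lib]
# pre-allocate about half of GPU memory instead of (almost) all of it
cnmem = 0.45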







Re: [theano-users] Error using floatX = float16 to save memory

2016-08-31 Thread luca . wagner . 0812
Hi Pascal,
many thanks for your help.
I modified the code and replaced shared_randomstreams.RandomStreams with 
MRG_RandomStreams where it was used:

import theano
import theano.tensor as T
from theano.sandbox.rng_mrg import MRG_RandomStreams

def _dropout_from_layer(rng, layer, p):
    # p is the probability of dropping a unit

    # it was originally:
    # srng = theano.tensor.shared_randomstreams.RandomStreams(rng.randint(99))

    # changed to MRG_RandomStreams to reduce the slowdown
    srng = MRG_RandomStreams(rng.randint(99))

    mask = srng.binomial(n=1, p=1 - p, size=layer.shape)
    output = layer * T.cast(mask, theano.config.floatX)
    return output
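
For completeness, a hypothetical call of the helper above (rng is a numpy
RandomState; layer_output stands for any symbolic tensor and is not a name
from this thread):

import numpy as np

rng = np.random.RandomState(1234)
dropped = _dropout_from_layer(rng, layer_output, p=0.5)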



Re: [theano-users] Error using floatX = float16 to save memory

2016-08-29 Thread Pascal Lamblin
It is likely that some operations are still not yet supported on GPU
with float16. From your messages, I would guess at least the following
ones:
- DiagonalSubtensor
- IncDiagonalSubtensor

I thought that random sampling was supported, but I see
"RandomFunction{binomial}", which is surprising. Are you using
shared_randomstreams.RandomStreams, or MRG_RandomStreams?

On Mon, Aug 29, 2016, luca.wagner.0...@gmail.com wrote:
> 
> Fred, 
> I entered cnmem = 1 in .theanorc with float16, but no message is shown as 
> with float32 ("CNMeM is enabled with initial size: 95.0% of memory"), and 
> the speed has not improved.
> These are the outputs:
> 
> 
> USING FLOAT16
> 
> .theanorc:
> 
> [global]
> floatX = float16
> device = cuda
> 
> [lib]
> cnmem=1
> 
> [cuda] 
> root = /usr/local/cuda-7.5
> 
> 
> [nvcc]
> fastmath=True
> 
> optimizer = fast_compile
> 
> output:
> 
> Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> Anaconda is brought to you by Continuum Analytics.
> Please check out: http://continuum.io/thanks and https://anaconda.org
> >>> runfile('/run/media/luca/8C9A-AEF4/core/run_multi_conv.py', 
> wdir='/run/media/luca/8C9A-AEF4/core')
> Mapped name None to device cuda: Tesla K40c
> Using cuDNN version 5005 on context None
> /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
> UserWarning: downsample module has been moved to the 
> theano.tensor.signal.pool module.
>   "downsample module has been moved to the theano.tensor.signal.pool 
> module.")
> Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
> Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
> Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
> Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
> Disabling C code for Alloc due to unsupported float16
> Disabling C code for RandomFunction{binomial} due to unsupported float16
> Disabling C code for RandomFunction{binomial} due to unsupported float16
> Disabling C code for RandomFunction{binomial} due to unsupported float16
> Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
> Disabling C code for IncDiagonalSubtensor due to unsupported float16
> Disabling C code for RandomFunction{binomial} due to unsupported float16
> Disabling C code for RandomFunction{binomial} due to unsupported float16
> Disabling C code for RandomFunction{binomial} due to unsupported float16
> Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
> Disabling C code for MaxAndArgmax due to unsupported float16
> 
> 
> start time:
> 29/08/2016
> 11:30:44
> 
> 
> images for training: 574
> images for validation: 102
> epochs: 1000
> 
> 
> ... training neural network 33
> 
> 
> training @ iter =  0
> 
> 
> USING FLOAT32
> .theanorc:
> 
> [global]
> floatX = float32
> device = gpu
> 
> [lib]
> cnmem=1
> 
> [cuda] 
> root = /usr/local/cuda-7.5
> 
> 
> [nvcc]
> fastmath=True
> 
> optimizer = fast_compile
> 
> output:
> Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> Anaconda is brought to you by Continuum Analytics.
> Please check out: http://continuum.io/thanks and https://anaconda.org
> >>> runfile('/run/media/luca/8C9A-AEF4/core/run_multi_conv.py', 
> wdir='/run/media/luca/8C9A-AEF4/core')
> Using gpu device 0: Tesla K40c (CNMeM is enabled with initial size: 95.0% 
> of memory, cuDNN 5005)
> /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
> UserWarning: downsample module has been moved to the 
> theano.tensor.signal.pool module.
>   "downsample module has been moved to the theano.tensor.signal.pool 
> module.")
> 
> 
> start time:
> 29/08/2016
> 11:32:43
> 
> 
> images for training: 574
> images for validation: 102
> epochs: 1000
> 
> 
> ... training neural network 33
> 
> 
> training @ iter =  0
> 
> 
> 


-- 
Pascal



Re: [theano-users] Error using floatX = float16 to save memory

2016-08-29 Thread luca . wagner . 0812

Fred, 
I entered cnmem = 1 in .theanorc with float16, but no message is shown as 
with float32 ("CNMeM is enabled with initial size: 95.0% of memory"), and 
the speed has not improved.
These are the outputs:


USING FLOAT16

.theanorc:

[global]
floatX = float16
device = cuda

[lib]
cnmem=1

[cuda] 
root = /usr/local/cuda-7.5


[nvcc]
fastmath=True

optimizer = fast_compile

output:

Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> runfile('/run/media/luca/8C9A-AEF4/core/run_multi_conv.py', 
wdir='/run/media/luca/8C9A-AEF4/core')
Mapped name None to device cuda: Tesla K40c
Using cuDNN version 5005 on context None
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for RandomFunction{binomial} due to unsupported float16
Disabling C code for RandomFunction{binomial} due to unsupported float16
Disabling C code for RandomFunction{binomial} due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for IncDiagonalSubtensor due to unsupported float16
Disabling C code for RandomFunction{binomial} due to unsupported float16
Disabling C code for RandomFunction{binomial} due to unsupported float16
Disabling C code for RandomFunction{binomial} due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for MaxAndArgmax due to unsupported float16


start time:
29/08/2016
11:30:44


images for training: 574
images for validation: 102
epochs: 1000


... training neural network 33


training @ iter =  0


USING FLOAT32
.theanorc:

[global]
floatX = float32
device = gpu

[lib]
cnmem=1

[cuda] 
root = /usr/local/cuda-7.5


[nvcc]
fastmath=True

optimizer = fast_compile

output:
Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> runfile('/run/media/luca/8C9A-AEF4/core/run_multi_conv.py', 
wdir='/run/media/luca/8C9A-AEF4/core')
Using gpu device 0: Tesla K40c (CNMeM is enabled with initial size: 95.0% 
of memory, cuDNN 5005)
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")


start time:
29/08/2016
11:32:43


images for training: 574
images for validation: 102
epochs: 1000


... training neural network 33


training @ iter =  0





Re: [theano-users] Error using floatX = float16 to save memory

2016-08-29 Thread luca . wagner . 0812
Fred,
I started testing float16 on the Tesla K40: running the convnet 
with floatX=float16 is about three times slower than floatX=float32.

For example, one epoch using float32 needs 8 minutes, while using float16 
it needs about 25 minutes.
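
One way to see where that time goes (my suggestion, not something tested in
this thread) is Theano's profiler; the ops whose C code was disabled for
float16 should show up at the top of the per-op timings:

[global]
profile = True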

Many thanks.
Luca



Re: [theano-users] Error using floatX = float16 to save memory

2016-08-24 Thread Frédéric Bastien
thanks.

On Wed, Aug 24, 2016 at 5:16 AM,  wrote:

> Fred,
> many thanks for your help.
>
> I reinstalled and updated anaconda.
> I reinstalled  theano and gpuarray/pygpu:
> Theano==0.9.0.dev2
> pygpu==0.2.1
>
> Then I tested a small 3D convnet, first using:
> floatX = float32
> device=gpu
> and the neural network converges.
>
>
> Then I tested the convnet using:
> floatX = float16
> device=cuda
>
> and the neural network converges.
>
> Next step: I'll reinstall Theano and gpuarray on the Tesla K40 server and
> test the large convnet. I'll give another update in the next days.
>
> Luca
>
>
>
>



Re: [theano-users] Error using floatX = float16 to save memory

2016-08-24 Thread luca . wagner . 0812
Fred, 
many thanks for your help.

I reinstalled and updated anaconda.
I reinstalled  theano and gpuarray/pygpu:
Theano==0.9.0.dev2
pygpu==0.2.1

Then I tested a small 3D convnet, first using:
floatX = float32
device=gpu
and the neural network converges.


Then I tested the convnet using:
floatX = float16
device=cuda

and the neural network converges.

Next step: I'll reinstall Theano and gpuarray on the Tesla K40 server and test 
the large convnet. I'll give another update in the next days.

Luca






Re: [theano-users] Error using floatX = float16 to save memory

2016-08-15 Thread Frédéric Bastien
Last week we merged some changes in Theano that make the DLT MLP and convnet
train again in float16.

If you still have problems, tell us.

Fred

On Fri, Jul 29, 2016 at 6:12 AM,  wrote:

>
> I started to look at
> out = sigtools._convolve2d(in1, in2, 1, val, bval, fillvalue)
> in https://github.com/scipy/scipy/blob/master/scipy/signal/signaltools.py
>
>
> Maybe the float16 bug is there.
>



Re: [theano-users] Error using floatX = float16 to save memory

2016-07-29 Thread luca . wagner . 0812

I started to look at
out = sigtools._convolve2d(in1, in2, 1, val, bval, fillvalue)
in https://github.com/scipy/scipy/blob/master/scipy/signal/signaltools.py

Maybe the float16 bug is there.
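
A quick way to test that hypothesis in isolation (a sketch assuming Theano
falls back to this scipy path; the array shapes are illustrative):

import numpy as np
from scipy import signal

a = np.ones((4, 4), dtype=np.float16)
k = np.ones((2, 2), dtype=np.float16)
# expected, per the error quoted elsewhere in this thread:
# ValueError: convolve2d not available for this type.
signal.convolve2d(a, k, mode='valid')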



Re: [theano-users] Error using floatX = float16 to save memory

2016-07-29 Thread luca . wagner . 0812
Fred,
Maybe there is a problem with float16 convolution after all...

On Tuesday, July 26, 2016 at 11:58:40 AM UTC+2, luca.wag...@gmail.com wrote:
>
> Fred,
>
> I ran the convnet another time using NanGuardMode and did not get an illegal 
> memory access.
> The training cost starts at 0.69336 and doesn't change.
>
> This is the output:
>
> Mapped name None to device cuda: GeForce 840M
> Using cuDNN version 5005 on context None
> /home/luca/data/Theano-master/
> theano/tensor/signal/downsample.py:6: UserWarning: downsample module has 
> been moved to the theano.tensor.signal.pool module.
>   "downsample module has been moved to the theano.tensor.signal.pool 
> module.")
> Using gpu device 0: GeForce 840M (CNMeM is disabled, cuDNN 5005)
> Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
> Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
> Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
> Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
> Disabling C code for Alloc due to unsupported float16
> Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
> Disabling C code for Pool{ds=(2, 2), ignore_border=False, st=(2, 2), 
> padding=(0, 0), mode='max'} due to unsupported float16
> Disabling C code for Pool{ds=(1, 2), ignore_border=False, st=(1, 2), 
> padding=(0, 0), mode='max'} due to unsupported float16
> Disabling C code for MaxPoolGrad{ds=(1, 2), ignore_border=False, st=(1, 
> 2), padding=(0, 0), mode='max'} due to unsupported float16
> Disabling C code for MaxPoolGrad{ds=(2, 2), ignore_border=False, st=(2, 
> 2), padding=(0, 0), mode='max'} due to unsupported float16
> Disabling C code for IncDiagonalSubtensor due to unsupported float16
> Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
> Disabling C code for Pool{ds=(2, 2), ignore_border=False, st=(2, 2), 
> padding=(0, 0), mode='max'} due to unsupported float16
> Disabling C code for Pool{ds=(1, 2), ignore_border=False, st=(1, 2), 
> padding=(0, 0), mode='max'} due to unsupported float16
> Disabling C code for MaxAndArgmax due to unsupported float16
>
>
> start time:
> 26/07/2016
> 11:16:52
>
>
> images for training: 288
> images for validation: 50
> epochs: 300
>
>
> ... training neural network 25
>
>
> training @ iter =  0
> training @ iter =  200
>
>
> training cost 0.69336
> epoch 1, training batch 288/288,validation error 50.000 %
> training @ iter =  400
>
>
> training cost 0.69336
> epoch 2, training batch 288/288,validation error 50.000 %
> training @ iter =  600
> training @ iter =  800
>
>
> training cost 0.69336
> epoch 3, training batch 288/288,validation error 50.000 %
>
>



Re: [theano-users] Error using floatX = float16 to save memory

2016-07-26 Thread luca . wagner . 0812
Fred,

I ran the convnet another time using NanGuardMode and did not get an illegal 
memory access.
The training cost starts at 0.69336 and doesn't change.

This is the output:

Mapped name None to device cuda: GeForce 840M
Using cuDNN version 5005 on context None
/home/luca/data/Theano-master/
theano/tensor/signal/downsample.py:6: UserWarning: downsample module has 
been moved to the theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
Using gpu device 0: GeForce 840M (CNMeM is disabled, cuDNN 5005)
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for Pool{ds=(2, 2), ignore_border=False, st=(2, 2), 
padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Pool{ds=(1, 2), ignore_border=False, st=(1, 2), 
padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for MaxPoolGrad{ds=(1, 2), ignore_border=False, st=(1, 2), 
padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for MaxPoolGrad{ds=(2, 2), ignore_border=False, st=(2, 2), 
padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for IncDiagonalSubtensor due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for Pool{ds=(2, 2), ignore_border=False, st=(2, 2), 
padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for Pool{ds=(1, 2), ignore_border=False, st=(1, 2), 
padding=(0, 0), mode='max'} due to unsupported float16
Disabling C code for MaxAndArgmax due to unsupported float16


start time:
26/07/2016
11:16:52


images for training: 288
images for validation: 50
epochs: 300


... training neural network 25


training @ iter =  0
training @ iter =  200


training cost 0.69336
epoch 1, training batch 288/288,validation error 50.000 %
training @ iter =  400


training cost 0.69336
epoch 2, training batch 288/288,validation error 50.000 %
training @ iter =  600
training @ iter =  800


training cost 0.69336
epoch 3, training batch 288/288,validation error 50.000 %
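
For reference, a minimal sketch of compiling the training function under
NanGuardMode (x, y, cost and updates are the variables from this thread's
code; the mode arguments are standard Theano):

import theano
from theano.compile.nanguardmode import NanGuardMode

train_model = theano.function(
    [x, y], cost, updates=updates,
    mode=NanGuardMode(nan_is_error=True, inf_is_error=True, big_is_error=True))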



Re: [theano-users] Error using floatX = float16 to save memory

2016-07-26 Thread luca . wagner . 0812
Fred,
I checked the cost and regularization terms: if I don't add the 
regularization the cost doesn't become NaN, but the value still starts at 
training cost = 0.69336 and doesn't change at all during the iterations.
Using either ignore_border=False or ignore_border=True doesn't solve the bug.
This is my LogisticRegression class (using activation = T.nnet.sigmoid):

class LogisticRegression(object):
    """Logistic Regression layer: top layer, softmax layer, output layer."""

    def __init__(self, input, n_in, n_out, rng, layer_name, activation,
                 L1_reg, L2_reg, W, b, borrow=True):
        # (shared, zeros, ones, softmax, floatX, cPickle, _asarray, np and T
        # are assumed to be imported at module level)

        # if trained, reload the saved weights
        if W is not None:
            with open('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/params/layer6_w_0.pkl') as f:
                W = cPickle.load(f)
            self.W = shared(W, name=layer_name + "_W", borrow=borrow)
        elif activation == T.nnet.softplus:
            W_val = _asarray(rng.normal(loc=0, scale=0.01, size=(n_in, n_out)),
                             dtype=floatX)
            self.W = shared(W_val, name=layer_name + "_W", borrow=borrow)
        else:
            self.W = shared(zeros((n_in, n_out), dtype=floatX),
                            name=layer_name + "_W", borrow=True)

        # L1 norm; one regularization option is to enforce the L1 norm to
        # be small
        self.L1 = abs(self.W).sum()

        # square of the L2 norm; one regularization option is to enforce the
        # square of the L2 norm to be small
        self.L2_sqr = (self.W ** 2).sum()

        # bias vector: reload it if a saved bias was passed in
        if np.any(b):
            with open('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/params/layer6_b_0.pkl') as f:
                b = cPickle.load(f)
            self.b = shared(b, name=layer_name + "_b", borrow=borrow)
        elif activation == T.nnet.softplus:
            b_val = ones((n_out,), dtype=floatX)
            self.b = shared(value=b_val, borrow=True)
        else:
            self.b = shared(zeros((n_out,), dtype=floatX),
                            name=layer_name + "_b", borrow=True)

        self.L1_reg = L1_reg
        self.L2_reg = L2_reg

        # vector of prediction probabilities
        self.p_y_given_x = softmax(T.dot(input, self.W) + self.b)
        # prediction
        self.y_pred = T.argmax(self.p_y_given_x, axis=1)
        # parameters of the model
        self.params = [self.W, self.b]

        # keep track of the model input
        self.input = input

    def cost(self, y):
        """Regularized cost function."""
        regularization = self.L1_reg * self.L1 + self.L2_reg * self.L2_sqr
        return (-T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])
                + regularization)
        # unregularized variant:
        # return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])

    def errors(self, y):
        """Errors over the total number of examples (in the minibatch)."""
        return T.mean(T.neq(self.y_pred, y))

    def accuracy(self, y):
        """Accuracy over the total number of examples (in the minibatch)."""
        return T.mean(T.eq(self.y_pred, y))
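
A hypothetical instantiation of the class above (the sizes and regularization
weights are illustrative, not from this thread; W=None and b=None take the
zero-initialization path):

import numpy as np
import theano.tensor as T

rng = np.random.RandomState(1234)
x = T.matrix('x')
layer = LogisticRegression(input=x, n_in=512, n_out=2, rng=rng,
                           layer_name='layer6', activation=T.nnet.sigmoid,
                           L1_reg=1e-4, L2_reg=1e-4, W=None, b=None)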




I ran the code with mode=DebugMode.
.theanorc is:
[global]
floatX = float16
device=cuda
[cuda] 
root = /usr/local/cuda-7.5


[nvcc]
fastmath=True

optimizer = fast_compile

[DebugMode]
check_py=False



It raised this error: "ValueError: convolve2d not available for this type." 

This is the output:

Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 18:08:32) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
Mapped name None to device cuda: GeForce 840M
Using cuDNN version 5005 on context None
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for Pool{ds=(2, 2), ignore_border=False, st=(2, 2),

Re: [theano-users] Error using floatX = float16 to save memory

2016-07-25 Thread luca . wagner . 0812
Fred,
debugging the convnet, I don't see why training_cost_ij is nan after the 
first cycle, when it is 1.09.
Could you help me with this?

Many thanks

Luca



Re: [theano-users] Error using floatX = float16 to save memory

2016-07-22 Thread luca . wagner . 0812
training_cost_ij gives a value only for minibatch_index=0; after that it 
gives nan:

a = train_set_x[minibatch_index:minibatch_index + batch_size]
b = train_set_y[minibatch_index:minibatch_index + batch_size]
training_cost_ij = train_model(a, b)
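
One quick sanity check (my suggestion; train_set_x, train_set_y and
train_model are the thread's own variables) is to verify the minibatch itself
is finite before the update:

import numpy as np

a = train_set_x[minibatch_index:minibatch_index + batch_size]
b = train_set_y[minibatch_index:minibatch_index + batch_size]
assert np.all(np.isfinite(np.asarray(a, dtype='float64')))
training_cost_ij = train_model(a, b)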

Many thanks
Luca




Re: [theano-users] Error using floatX = float16 to save memory

2016-07-22 Thread luca . wagner . 0812
training_cost gives a value only for the first minibatch:

a = train_set_x[minibatch_index:minibatch_index + batch_size]
b = train_set_y[minibatch_index:minibatch_index + batch_size]
training_cost_ij = train_model(a, b)



Re: [theano-users] Error using floatX = float16 to save memory

2016-07-22 Thread luca . wagner . 0812
Many thanks Fred: I updated Theano.

I tested the same 3D convnet: it converges using float32 but not float16.
The test  uses conv3d to classify three different patches extracted from 3D 
objects in a dataset.
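
For context, a minimal sketch of the conv3d2d path this convnet uses (the
shapes are illustrative; conv3d2d expects (batch, time, channels, row, col)
signals and (n_filters, flt_time, channels, flt_row, flt_col) filters):

import theano
import theano.tensor as T
from theano.tensor.nnet.conv3d2d import conv3d

signals = T.tensor5('signals')
filters = T.tensor5('filters')
out = conv3d(signals, filters)
f = theano.function([signals, filters], out)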

Here are the outputs.

flags:
floatX = float32
device=gpu

output:
luca@cuda:~/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core$ 
python
Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 18:08:32) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import run_multi_conv
Mapped name None to device cuda: GeForce 840M
Using cuDNN version 5005 on context None
Using gpu device 0: GeForce 840M (CNMeM is disabled, cuDNN 5005)
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
>>> run_multi_conv.run_experiments()


start time:
22/07/2016
10:35:36


images for training: 178
images for validation: 24
epochs: 200


... training neural network 13


training @ iter =  0


training cost 1.09
epoch 1, training batch 178/178,validation error 71.67 %
training @ iter =  200


training cost 1.09
epoch 2, training batch 178/178,validation error 71.67 %
training @ iter =  400


training cost 1.09
epoch 3, training batch 178/178,validation error 71.67 %
training @ iter =  600


training cost 1.09
epoch 4, training batch 178/178,validation error 71.67 %
training @ iter =  800


training cost 1.08
epoch 5, training batch 178/178,validation error 71.25 %
training @ iter =  1000


training cost 1.08
epoch 6, training batch 178/178,validation error 70.28 %
training @ iter =  1200


training cost 1.08
epoch 7, training batch 178/178,validation error 68.69 %
training @ iter =  1400


training cost 1.08
epoch 8, training batch 178/178,validation error 65.89 %
training @ iter =  1600


training cost 1.07
epoch 9, training batch 178/178,validation error 63.24 %


training cost 1.07
epoch 10, training batch 178/178,validation error 60.54 %
training @ iter =  1800


training cost 1.06
epoch 11, training batch 178/178,validation error 57.35 %
training @ iter =  2000


training cost 1.06
epoch 12, training batch 178/178,validation error 53.89 %
training @ iter =  2200


training cost 1.05
epoch 13, training batch 178/178,validation error 50.77 %
training @ iter =  2400


training cost 1.04
epoch 14, training batch 178/178,validation error 47.65 %
training @ iter =  2600


training cost 1.04
epoch 15, training batch 178/178,validation error 44.89 %
training @ iter =  2800


training cost 1.03
epoch 16, training batch 178/178,validation error 42.34 %
training @ iter =  3000


training cost 1.01
epoch 17, training batch 178/178,validation error 40.12 %
training @ iter =  3200


training cost 1.00
epoch 18, training batch 178/178,validation error 37.92 %


training cost 0.99
epoch 19, training batch 178/178,validation error 35.96 %
training @ iter =  3400

--

flags:
floatX = float16
device=cuda


output:

Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 18:08:32) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
Mapped name None to device cuda: GeForce 840M
Using cuDNN version 5005 on context None
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for IncDiagonalSubtensor due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for MaxAndArgmax due to unsupported float16


start time:
22/07/2016
11:04:55


images for training: 178
images for validation: 24
epochs: 200


... training neural network 13


training @ iter =  0


training cost nan
epoch 1, training batch 178/178,validation error 67.50 %
training @ iter =  200


training cost nan
epoch 2, 

Re: [theano-users] Error using floatX = float16 to save memory

2016-07-21 Thread Frédéric Bastien
Thanks for the error report. It is fixed in this PR:

https://github.com/Theano/Theano/pull/4771

hopefully we can soon finish our jenkins installation to have PRs tested on
GPUs!

Fred

On Thu, Jul 21, 2016 at 9:02 AM,  wrote:

> After I reinstalled theano+gpuarray+pygpu,
> I'm still doing tests.
> Using flags:
> floatX = float32
> device=gpu
>
>
> error is:
>
> Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 18:08:32)
> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> Anaconda is brought to you by Continuum Analytics.
> Please check out: http://continuum.io/thanks and https://anaconda.org
> >>>
> runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py',
> wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
> Mapped name None to device cuda: GeForce 840M
> Using cuDNN version 5005 on context None
> Using gpu device 0: GeForce 840M (CNMeM is disabled, cuDNN 5005)
> /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6:
> UserWarning: downsample module has been moved to the
> theano.tensor.signal.pool module.
>   "downsample module has been moved to the theano.tensor.signal.pool
> module.")
> ERROR (theano.gof.opt): Optimization failure due to:
> LocalOptGroup(local_abstractconv_cudnn,local_conv_dnn,local_abstractconv_gemm,local_abstractconv_gradinputs_gemm,local_abstractconv_gradweight_gemm,local_conv_gemm)
> ERROR (theano.gof.opt): node: AbstractConv2d{border_mode='valid',
> subsample=(1, 1), filter_flip=True, imshp=(20, 1, 20, 20), kshp=(100, 1, 5,
> 5), filter_dilation=(1, 1)}(GpuFromHost.0, GpuReshape{4}.0)
> ERROR (theano.gof.opt): TRACEBACK:
> ERROR (theano.gof.opt): Traceback (most recent call last):
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in
> process_node
> replacements = lopt.transform(node)
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1265, in
> transform
> repl = opt.transform(node)
>   File "/home/luca/data/Theano-master/theano/sandbox/cuda/dnn.py", line
> 3149, in local_abstractconv_cudnn
> conv_mode=conv_mode)
>   File "/home/luca/data/Theano-master/theano/sandbox/cuda/dnn.py", line
> 1181, in dnn_conv
> conv_mode=conv_mode, precision=precision)(img.shape,
>   File "/home/luca/data/Theano-master/theano/sandbox/cuda/dnn.py", line
> 180, in __init__
> assert precision in ['float16', 'float32', 'float64']
> AssertionError
>
> Traceback (most recent call last):
>   File "", line 1, in 
>   File
> "/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py",
> line 714, in runfile
> execfile(filename, namespace)
>   File
> "/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py",
> line 81, in execfile
> builtins.execfile(filename, *where)
>   File
> "/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py",
> line 124, in 
> run_experiments()
>   File
> "/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py",
> line 83, in run_experiments
> Pretrained = False
>   File "mpr_convnet_class.py", line 291, in __init__
> train_model = theano.function([x,y],cost,
> updates=updates)
>   File "/home/luca/data/Theano-master/theano/compile/function.py", line
> 322, in function
> output_keys=output_keys)
>   File "/home/luca/data/Theano-master/theano/compile/pfunc.py", line 480,
> in pfunc
> output_keys=output_keys)
>   File "/home/luca/data/Theano-master/theano/compile/function_module.py",
> line 1783, in orig_function
> output_keys=output_keys).create(
>   File "/home/luca/data/Theano-master/theano/compile/function_module.py",
> line 1463, in __init__
> optimizer_profile = optimizer(fgraph)
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 102, in
> __call__
> return self.optimize(fgraph)
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in
> optimize
> ret = self.apply(fgraph, *args, **kwargs)
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 235, in
> apply
> sub_prof = optimizer.optimize(fgraph)
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in
> optimize
> ret = self.apply(fgraph, *args, **kwargs)
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 235, in
> apply
> sub_prof = optimizer.optimize(fgraph)
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in
> optimize
> ret = self.apply(fgraph, *args, **kwargs)
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 2257, in
> apply
> lopt_change = self.process_node(fgraph, node, lopt)
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1825, in
> process_node
> lopt, node)
>   File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1719, in
> wa

Re: [theano-users] Error using floatX = float16 to save memory

2016-07-21 Thread luca . wagner . 0812
After I reinstalled theano+gpuarray+pygpu,
I'm still doing tests.
Using flags:
floatX = float32
device=gpu

error is:

Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 18:08:32) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
Mapped name None to device cuda: GeForce 840M
Using cuDNN version 5005 on context None
Using gpu device 0: GeForce 840M (CNMeM is disabled, cuDNN 5005)
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
ERROR (theano.gof.opt): Optimization failure due to: 
LocalOptGroup(local_abstractconv_cudnn,local_conv_dnn,local_abstractconv_gemm,local_abstractconv_gradinputs_gemm,local_abstractconv_gradweight_gemm,local_conv_gemm)
ERROR (theano.gof.opt): node: AbstractConv2d{border_mode='valid', 
subsample=(1, 1), filter_flip=True, imshp=(20, 1, 20, 20), kshp=(100, 1, 5, 
5), filter_dilation=(1, 1)}(GpuFromHost.0, GpuReshape{4}.0)
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in 
process_node
replacements = lopt.transform(node)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1265, in 
transform
repl = opt.transform(node)
  File "/home/luca/data/Theano-master/theano/sandbox/cuda/dnn.py", line 
3149, in local_abstractconv_cudnn
conv_mode=conv_mode)
  File "/home/luca/data/Theano-master/theano/sandbox/cuda/dnn.py", line 
1181, in dnn_conv
conv_mode=conv_mode, precision=precision)(img.shape,
  File "/home/luca/data/Theano-master/theano/sandbox/cuda/dnn.py", line 
180, in __init__
assert precision in ['float16', 'float32', 'float64']
AssertionError

Traceback (most recent call last):
  File "", line 1, in 
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py",
 
line 714, in runfile
execfile(filename, namespace)
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py",
 
line 81, in execfile
builtins.execfile(filename, *where)
  File 
"/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py",
 
line 124, in 
run_experiments()
  File 
"/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py",
 
line 83, in run_experiments
Pretrained = False
  File "mpr_convnet_class.py", line 291, in __init__
train_model = theano.function([x,y],cost, 
updates=updates) 
  File "/home/luca/data/Theano-master/theano/compile/function.py", line 
322, in function
output_keys=output_keys)
  File "/home/luca/data/Theano-master/theano/compile/pfunc.py", line 480, 
in pfunc
output_keys=output_keys)
  File "/home/luca/data/Theano-master/theano/compile/function_module.py", 
line 1783, in orig_function
output_keys=output_keys).create(
  File "/home/luca/data/Theano-master/theano/compile/function_module.py", 
line 1463, in __init__
optimizer_profile = optimizer(fgraph)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 102, in 
__call__
return self.optimize(fgraph)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in 
optimize
ret = self.apply(fgraph, *args, **kwargs)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 235, in apply
sub_prof = optimizer.optimize(fgraph)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in 
optimize
ret = self.apply(fgraph, *args, **kwargs)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 235, in apply
sub_prof = optimizer.optimize(fgraph)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in 
optimize
ret = self.apply(fgraph, *args, **kwargs)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 2257, in 
apply
lopt_change = self.process_node(fgraph, node, lopt)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1825, in 
process_node
lopt, node)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1719, in 
warn_inplace
return NavigatorOptimizer.warn(exc, nav, repl_pairs, local_opt, node)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1705, in warn
raise exc
AssertionError
>>> 


Using flags:
floatX = float16
device=cuda

the convnet starts without errors:

luca@cuda:~/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core$ 
python
Python 2.7.11 |Anaco

Re: [theano-users] Error using floatX = float16 to save memory

2016-07-21 Thread luca . wagner . 0812
Frederic,
I had to reinstall the updated Theano version + gpuarray and pygpu.
If the flags are:
floatX = float16
device=cuda

the convnet starts without errors:

Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 18:08:32) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
Mapped name None to device cuda: GeForce 840M
Using cuDNN version 5005 on context None
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
Disabling C code for Alloc due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for IncDiagonalSubtensor due to unsupported float16
Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
Disabling C code for MaxAndArgmax due to unsupported float16


start time:
21/07/2016
14:32:07


images for training: 594
images for validation: 82
epochs: 200


... training neural network 13


training @ iter =  0
training @ iter =  200
training @ iter =  400


training cost 0.69336
epoch 1, training batch 594/594,validation error 45.122 %
training @ iter =  600
..


but if I put instead
floatX = float32
device=gpu

the error is:

Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 18:08:32) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> 
runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py',
 
wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
Mapped name None to device cuda: GeForce 840M
Using cuDNN version 5005 on context None
Using gpu device 0: GeForce 840M (CNMeM is disabled, cuDNN 5005)
/home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
UserWarning: downsample module has been moved to the 
theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool 
module.")
ERROR (theano.gof.opt): Optimization failure due to: 
LocalOptGroup(local_abstractconv_cudnn,local_conv_dnn,local_abstractconv_gemm,local_abstractconv_gradinputs_gemm,local_abstractconv_gradweight_gemm,local_conv_gemm)
ERROR (theano.gof.opt): node: AbstractConv2d{border_mode='valid', 
subsample=(1, 1), filter_flip=True, imshp=(20, 1, 20, 20), kshp=(100, 1, 5, 
5), filter_dilation=(1, 1)}(GpuFromHost.0, GpuReshape{4}.0)
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in 
process_node
replacements = lopt.transform(node)
  File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1265, in 
transform
repl = opt.transform(node)
  File "/home/luca/data/Theano-master/theano/sandbox/cuda/dnn.py", line 
3149, in local_abstractconv_cudnn
conv_mode=conv_mode)
  File "/home/luca/data/Theano-master/theano/sandbox/cuda/dnn.py", line 
1181, in dnn_conv
conv_mode=conv_mode, precision=precision)(img.shape,
  File "/home/luca/data/Theano-master/theano/sandbox/cuda/dnn.py", line 
180, in __init__
assert precision in ['float16', 'float32', 'float64']
AssertionError

Traceback (most recent call last):
  File "", line 1, in 
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py",
 
line 714, in runfile
execfile(filename, namespace)
  File 
"/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py",
 
line 81, in execfile
builtins.execfile(filename, *where)
  File 
"/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py",
 
line 124, in 
run_experiments()
  File 
"/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py",
 
line 83, in run_experiments
Pretrained = False
  File "mpr_convnet_class.py", line 291, in __init__
train_model = theano.function([x,y],cost, 
updates=updates) 
  File "/home/luca/data/Theano-master/theano/comp

Re: [theano-users] Error using floatX = float16 to save memory

2016-07-21 Thread luca . wagner . 0812
Frederic,
In ops.py I can't find shape_i_op.
thanks

On Thursday, July 21, 2016 at 11:50:51 AM UTC+2, luca.wag...@gmail.com 
wrote:
>
> Frederic,
> this is the feedback after the upgrades about float16.
>
> Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 18:08:32) 
> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> Anaconda is brought to you by Continuum Analytics.
> Please check out: http://continuum.io/thanks and https://anaconda.org
> >>> import run_multi_conv
> Traceback (most recent call last):
>   File "", line 1, in 
>   File "run_multi_conv.py", line 1, in 
> import mpr_convnet_class as conv
>   File "mpr_convnet_class.py", line 2, in 
> from convnet3d import ConvLayer, PoolLayer
>   File "convnet3d.py", line 3, in 
> from theano.tensor.nnet.conv3d2d import conv3d
>   File "/home/luca/data/Theano-master/theano/__init__.py", line 125, in 
> 
> import theano.gpuarray
>   File "/home/luca/data/Theano-master/theano/gpuarray/__init__.py", line 
> 31, in 
> from . import fft, dnn, opt, nerv, extra_ops
>   File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 17, in 
> 
> from theano.compile.ops import shape_i, shape_i_op
> ImportError: cannot import name shape_i_op
> >>> 
>
>
>
>
>
> On Thursday, July 21, 2016 at 11:15:06 AM UTC+2, luca.wag...@gmail.com 
> wrote:
>>
>> Frederic,
>> I'll do it and give you a feedback,
>> many thanks
>> Luca
>>
>> On Tuesday, July 19, 2016 at 10:09:21 PM UTC+2, nouiz wrote:
>>>
>>> We have a PR that upgrade some stuff about float16:
>>>
>>> https://github.com/Theano/Theano/pull/4764/files
>>>
>>> It probably fix your problem. Can you try it to confirm that you don't 
>>> have a different problem?
>>>
>>> thanks
>>>
>>> Frédéric
>>>
>>> On Fri, Jul 15, 2016 at 4:55 AM,  wrote:
>>>
 ok I try.
 thanks

 On Thursday, July 14, 2016 at 11:44:41 PM UTC+2, Arnaud Bergeron wrote:
>
> I can't reproduce your problem using a simple convolution in float16.
>
> Either this is because your code is doing something unexpected or 
> because the problem has been fixed in the development version.
>
> In any case the development version is a much better option for the 
> new backend and float16 so I encourage you to upgrade and try again: 
> http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions
> .
>
> 2016-07-14 4:22 GMT-04:00 :
>
>> Here is .theanorc:
>>
>> [global]
>> floatX = float16
>> device=cuda
>> [cuda] 
>> root = /usr/local/cuda-7.5
>>
>>
>> [nvcc]
>> fastmath=True
>>
>> optimizer = fast_compile
>>
>> On Thursday, July 14, 2016 at 10:19:56 AM UTC+2, 
>> luca.wag...@gmail.com wrote:
>>>
>>> Hi Arnaud,
>>> I put _f16_ok = True in dnn.py (attached).
>>>
>>> This is the error I received:
>>>
>>> Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 
>>> 18:08:32) 
>>> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
>>> Type "help", "copyright", "credits" or "license" for more 
>>> information.
>>> Anaconda is brought to you by Continuum Analytics.
>>> Please check out: http://continuum.io/thanks and 
>>> https://anaconda.org
>>> >>> import run_multi_conv
>>>
>>> Mapped name None to device cuda: GeForce 840M
>>> WARNING (theano.gof.compilelock): Overriding existing lock by dead 
>>> process '3202' (I am process '3351')
>>> Using cuDNN version 5005 on context None
>>> /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
>>> UserWarning: downsample module has been moved to the 
>>> theano.tensor.signal.pool module.
>>>   "downsample module has been moved to the theano.tensor.signal.pool 
>>> module.")
>>> >>> 
>>> >>> run_multi_conv.run_experiments()
>>> Disabling C code for Elemwise{mul,no_inplace} due to unsupported 
>>> float16
>>> Disabling C code for Elemwise{Cast{float32}} due to unsupported 
>>> float16
>>> Disabling C code for Elemwise{Cast{float16}} due to unsupported 
>>> float16
>>> Disabling C code for Elemwise{Cast{float16}} due to unsupported 
>>> float16
>>> Disabling C code for Alloc due to unsupported float16
>>> Disabling C code for Cast{float16} due to unsupported float16
>>> Disabling C code for Cast{float16} due to unsupported float16
>>> Disabling C code for Cast{float16} due to unsupported float16
>>> Disabling C code for Cast{float16} due to unsupported float16
>>> Disabling C code for RandomFunction{binomial} due to unsupported 
>>> float16
>>> Disabling C code for RandomFunction{binomial} due to unsupported 
>>> float16
>>> ===
>>> 1#include 
>>> 2#include 
>>> 3#include "theano_mod_helper.h"
>>> 4   

Re: [theano-users] Error using floatX = float16 to save memory

2016-07-21 Thread luca . wagner . 0812
Frederic,
this is the feedback after the upgrades about float16.

Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 18:08:32) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import run_multi_conv
Traceback (most recent call last):
  File "", line 1, in 
  File "run_multi_conv.py", line 1, in 
import mpr_convnet_class as conv
  File "mpr_convnet_class.py", line 2, in 
from convnet3d import ConvLayer, PoolLayer
  File "convnet3d.py", line 3, in 
from theano.tensor.nnet.conv3d2d import conv3d
  File "/home/luca/data/Theano-master/theano/__init__.py", line 125, in 

import theano.gpuarray
  File "/home/luca/data/Theano-master/theano/gpuarray/__init__.py", line 
31, in 
from . import fft, dnn, opt, nerv, extra_ops
  File "/home/luca/data/Theano-master/theano/gpuarray/dnn.py", line 17, in 

from theano.compile.ops import shape_i, shape_i_op
ImportError: cannot import name shape_i_op
>>> 





On Thursday, July 21, 2016 at 11:15:06 AM UTC+2, luca.wag...@gmail.com 
wrote:
>
> Frederic,
> I'll do it and give you a feedback,
> many thanks
> Luca
>
> On Tuesday, July 19, 2016 at 10:09:21 PM UTC+2, nouiz wrote:
>>
>> We have a PR that upgrade some stuff about float16:
>>
>> https://github.com/Theano/Theano/pull/4764/files
>>
>> It probably fix your problem. Can you try it to confirm that you don't 
>> have a different problem?
>>
>> thanks
>>
>> Frédéric
>>
>> On Fri, Jul 15, 2016 at 4:55 AM,  wrote:
>>
>>> ok I try.
>>> thanks
>>>
>>> On Thursday, July 14, 2016 at 11:44:41 PM UTC+2, Arnaud Bergeron wrote:

 I can't reproduce your problem using a simple convolution in float16.

 Either this is because your code is doing something unexpected or 
 because the problem has been fixed in the development version.

 In any case the development version is a much better option for the new 
 backend and float16 so I encourage you to upgrade and try again: 
 http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions
 .

 2016-07-14 4:22 GMT-04:00 :

> Here is .theanorc:
>
> [global]
> floatX = float16
> device=cuda
> [cuda] 
> root = /usr/local/cuda-7.5
>
>
> [nvcc]
> fastmath=True
>
> optimizer = fast_compile
>
> On Thursday, July 14, 2016 at 10:19:56 AM UTC+2, luca.wag...@gmail.com 
> wrote:
>>
>> Hi Arnaud,
>> I put _f16_ok = True in dnn.py ( attached).
>>
>> This is the error I received:
>>
>> Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 
>> 18:08:32) 
>> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
>> Type "help", "copyright", "credits" or "license" for more information.
>> Anaconda is brought to you by Continuum Analytics.
>> Please check out: http://continuum.io/thanks and https://anaconda.org
>> >>> import run_multi_conv
>>
>> Mapped name None to device cuda: GeForce 840M
>> WARNING (theano.gof.compilelock): Overriding existing lock by dead 
>> process '3202' (I am process '3351')
>> Using cuDNN version 5005 on context None
>> /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
>> UserWarning: downsample module has been moved to the 
>> theano.tensor.signal.pool module.
>>   "downsample module has been moved to the theano.tensor.signal.pool 
>> module.")
>> >>> 
>> >>> run_multi_conv.run_experiments()
>> Disabling C code for Elemwise{mul,no_inplace} due to unsupported 
>> float16
>> Disabling C code for Elemwise{Cast{float32}} due to unsupported 
>> float16
>> Disabling C code for Elemwise{Cast{float16}} due to unsupported 
>> float16
>> Disabling C code for Elemwise{Cast{float16}} due to unsupported 
>> float16
>> Disabling C code for Alloc due to unsupported float16
>> Disabling C code for Cast{float16} due to unsupported float16
>> Disabling C code for Cast{float16} due to unsupported float16
>> Disabling C code for Cast{float16} due to unsupported float16
>> Disabling C code for Cast{float16} due to unsupported float16
>> Disabling C code for RandomFunction{binomial} due to unsupported 
>> float16
>> Disabling C code for RandomFunction{binomial} due to unsupported 
>> float16
>> ===
>> 1#include 
>> 2#include 
>> 3#include "theano_mod_helper.h"
>> 4#include 
>> 5#include 
>> 6#include 
>> 7#include 
>> 8#include 
>> 9#include 
>> 00010#include 
>> 00011#include 
>> 00012#include 
>> 00013#include "cudnn.h"
>> 00014#include "cudnn_helper.h"
>> 00015#incl

Re: [theano-users] Error using floatX = float16 to save memory

2016-07-21 Thread luca . wagner . 0812
Frédéric,
I'll do it and give you feedback,
many thanks
Luca


Re: [theano-users] Error using floatX = float16 to save memory

2016-07-19 Thread Frédéric Bastien
We have a PR that upgrades some of the float16 support:

https://github.com/Theano/Theano/pull/4764/files

It probably fixes your problem. Can you try it to confirm that you don't have
a different problem?

thanks

Frédéric
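
(A minimal float16 smoke test, as a sketch: assuming the dev version with this
PR is installed, and .theanorc sets floatX=float16 and device=cuda, this
should compile and run cleanly.)

import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x', dtype='float16')  # force a float16 graph
f = theano.function([x], T.exp(x))  # a float16 elemwise through the new backend
print(f(np.ones((2, 2), dtype='float16')))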


Re: [theano-users] Error using floatX = float16 to save memory

2016-07-15 Thread luca . wagner . 0812
OK, I'll try.
Thanks


Re: [theano-users] Error using floatX = float16 to save memory

2016-07-14 Thread Arnaud Bergeron
I can't reproduce your problem using a simple convolution in float16.

Either this is because your code is doing something unexpected or because
the problem has been fixed in the development version.

In any case the development version is a much better option for the new
backend and float16, so I encourage you to upgrade and try again:
http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions
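
(For reference, a sketch of the kind of minimal float16 convolution test
described above; the shapes are illustrative, and it assumes floatX=float16
and device=cuda:)

import numpy as np
import theano
import theano.tensor as T
from theano.tensor.nnet import conv2d

imgs = T.tensor4('imgs', dtype='float16')    # (batch, channels, rows, cols)
kerns = T.tensor4('kerns', dtype='float16')  # (filters, channels, kr, kc)
f = theano.function([imgs, kerns], conv2d(imgs, kerns))
out = f(np.random.rand(1, 1, 8, 8).astype('float16'),
        np.random.rand(2, 1, 3, 3).astype('float16'))
print(out.shape)  # expect (1, 2, 6, 6)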


Re: [theano-users] Error using floatX = float16 to save memory

2016-07-14 Thread luca . wagner . 0812
Here is .theanorc:

[global]
floatX = float16
device=cuda
[cuda] 
root = /usr/local/cuda-7.5


[nvcc]
fastmath=True

optimizer = fast_compile
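
(A quick sanity check, as a sketch, to confirm Theano actually picked up this
configuration:)

import theano
print(theano.config.floatX)     # expect 'float16'
print(theano.config.device)     # expect 'cuda'
print(theano.config.optimizer)  # expect 'fast_compile'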

On Thursday, July 14, 2016 at 10:19:56 AM UTC+2, luca.wag...@gmail.com 
wrote:
>
> Hi Arnaud,
> I put _f16_ok = True in dnn.py (attached).
>
> This is the error I received:
>
> Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 18:08:32) 
> [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> Anaconda is brought to you by Continuum Analytics.
> Please check out: http://continuum.io/thanks and https://anaconda.org
> >>> import run_multi_conv
>
> Mapped name None to device cuda: GeForce 840M
> WARNING (theano.gof.compilelock): Overriding existing lock by dead process 
> '3202' (I am process '3351')
> Using cuDNN version 5005 on context None
> /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
> UserWarning: downsample module has been moved to the 
> theano.tensor.signal.pool module.
>   "downsample module has been moved to the theano.tensor.signal.pool 
> module.")
> >>> 
> >>> run_multi_conv.run_experiments()
> Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
> Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
> Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
> Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
> Disabling C code for Alloc due to unsupported float16
> Disabling C code for Cast{float16} due to unsupported float16
> Disabling C code for Cast{float16} due to unsupported float16
> Disabling C code for Cast{float16} due to unsupported float16
> Disabling C code for Cast{float16} due to unsupported float16
> Disabling C code for RandomFunction{binomial} due to unsupported float16
> Disabling C code for RandomFunction{binomial} due to unsupported float16
> ===
> 1#include 
> 2#include 
> 3#include "theano_mod_helper.h"
> 4#include 
> 5#include 
> 6#include 
> 7#include 
> 8#include 
> 9#include 
> 00010#include 
> 00011#include 
> 00012#include 
> 00013#include "cudnn.h"
> 00014#include "cudnn_helper.h"
> 00015#include "gpuarray_helper.h"
> 00016#include "gpuarray/types.h"
> 00017#include "gpuarray/array.h"
> 00018#include "gpuarray/util.h"
> 00019#include "gpuarray/ext_cuda.h"
> 00020#include "gpuarray_api.h"
> 00021#include "numpy_compat.h"
> 00022//
> 00023  Support Code
> 00024//
> 00025
> 00026
> 00027
> 00028static int
> 00029c_set_tensorNd(PyGpuArrayObject *var, cudnnTensorDescriptor_t 
> desc) {
> 00030  cudnnDataType_t dt;
> 00031  size_t ds;
> 00032  switch (var->ga.typecode) {
> 00033  case GA_FLOAT:
> 00034dt = CUDNN_DATA_FLOAT;
> 00035break;
> 00036  case GA_DOUBLE:
> 00037dt = CUDNN_DATA_DOUBLE;
> 00038break;
> 00039#if CUDNN_VERSION > 3000
> 00040  case GA_HALF:
> 00041dt = CUDNN_DATA_HALF;
> 00042break;
> 00043#endif
> 00044  default:
> 00045PyErr_SetString(PyExc_TypeError, "Non-float datatype in 
> c_set_tensorNd");
> 00046return -1;
> 00047  }
> 00048  ds = gpuarray_get_elsize(var->ga.typecode);
> 00049
> 00050  int strs[5], dims[5], default_stride = 1;
> 00051  unsigned int nd = PyGpuArray_NDIM(var);
> 00052
> 00053  if (nd > 5) {
> 00054PyErr_SetString(PyExc_TypeError, "Tensor of more than 5d");
> 00055return -1;
> 00056  }
> 00057
> 00058  for (unsigned int _i = nd; _i > 0; _i--) {
> 00059unsigned int i = _i - 1;
> 00060strs[i] = PyGpuArray_STRIDE(var, i) ?
> 00061  PyGpuArray_STRIDE(var, i)/ds : default_stride;
> 00062default_stride *= PyGpuArray_DIM(var, i);
> 00063dims[i] = PyGpuArray_DIM(var, i);
> 00064  }
> 00065
> 00066  cudnnStatus_t err = cudnnSetTensorNdDescriptor(desc, dt, nd, 
> dims, strs);
> 00067  if (err != CUDNN_STATUS_SUCCESS) {
> 00068PyErr_Format(PyExc_RuntimeError,
> 00069 "Could not set tensorNd descriptor: %s",
> 00070 cudnnGetErrorString(err));
> 00071return -1;
> 00072  }
> 00073  return 0;
> 00074}
> 00075
> 00076static int
> 00077c_set_filter(PyGpuArrayObject *var, cudnnFilterDescriptor_t desc) 
> {
> 00078  cudnnDataType_t dt;
> 00079  cudnnStatus_t err;
> 00080
> 00081  if (!GpuArray_IS_C_CONTIGUOUS(&var->ga)) {
> 00082PyErr_SetString(PyExc_ValueError,
> 00083"Only contiguous filters (kernels) are supported.");
> 00084return -1;
> 00085  }
> 00086  switch (var->ga.typecode) {
> 00087  case GA_FLOAT:
> 00088dt = CUDNN_DATA_FLOAT;
> 00089break;
> 00090   
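
(The listing above is the cuDNN support code Theano generates: c_set_tensorNd
fills a cudnnTensorDescriptor_t from a PyGpuArrayObject, deriving per-element
strides from the byte strides. The GA_HALF -> CUDNN_DATA_HALF branch, guarded
by CUDNN_VERSION > 3000, is the float16 path, so half-precision descriptors
need a sufficiently recent cuDNN.)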

Re: [theano-users] Error using floatX = float16 to save memory

2016-07-13 Thread luca . wagner . 0812
Sorry Arnaud,
but I don't understand what changes I need to make.
Do I have to modify dnn.py?
Many thanks.
Luca


Re: [theano-users] Error using floatX = float16 to save memory

2016-07-11 Thread Arnaud Bergeron
The problem is that GpuDnnConv is not tagged properly for float16 support, and
so its C code is disabled. It doesn't have a Python implementation, so it just
crashes.

Try this diff:

index 6e7bc11..28fd96b 100644
--- a/theano/gpuarray/dnn.py
+++ b/theano/gpuarray/dnn.py
@@ -405,7 +405,7 @@ class GpuDnnConv(DnnBase):
 Default is the value of :attr:`config.dnn.conv.algo_fwd`.

 """
-
+_f16_ok = True
 __props__ = ('algo', 'inplace')

 def __init__(self, algo=None, inplace=False):
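
(Context for the one-line fix: _f16_ok is a class attribute that Theano checks
before enabling an Op's C code on float16 inputs; ops that don't set it
trigger the "Disabling C code ... due to unsupported float16" messages seen in
the logs, and an op with no Python fallback, like GpuDnnConv, then crashes
outright. A hypothetical minimal Op showing the flag, as a sketch:)

import theano
import theano.tensor as T

class IdentityF16(theano.Op):
    # Hypothetical example op, not part of Theano.
    _f16_ok = True  # declare that this Op's C code (if any) handles float16
    __props__ = ()

    def make_node(self, x):
        x = T.as_tensor_variable(x)
        return theano.Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        # Python fallback; GpuDnnConv has no equivalent, hence the crash.
        output_storage[0][0] = inputs[0].copy()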


Re: [theano-users] Error using floatX = float16 to save memory

2016-07-11 Thread luca . wagner . 0812
Hi Pascal,
I tried what you suggested but nothing changed: it doesn't work with float16.
Did you ask Arnaud Bergeron?

Many Thanks
Luca



On Monday, July 4, 2016 at 10:19:49 AM UTC+2, luca.wag...@gmail.com wrote:
>
> Many thanks Pascal for your help.
>
>
> On Saturday, July 2, 2016 at 4:34:28 AM UTC+2, Pascal Lamblin wrote:
>
> Thanks, it helps with formatting :) 
>
> I am not sure what is happening with the Elemwise, Arnaud Bergeron would 
> be more qualified to answer, he should be back in a week or so. 
>
> It may be possible that the cuDNN convolutions or their gradients do
> not support float16 yet.
>
> Two other remarks though: 
>
> - Pooling and its gradient have limited GPU support when using 
> "ignore_border=False", which may explain why they are not transferred to 
> the GPU in your case 
>
> - The default random sampling functions (available from
> tensor.shared_randomstreams.RandomStreams) are executed in Python on
> CPU, using NumPy. You can try theano.sandbox.MRG_RandomStreams instead,
> which can actually sample on GPU (a sketch of both remarks follows below).
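
(A sketch of both remarks, with illustrative shapes, assuming the float16
.theanorc from earlier in the thread:)

import theano
import theano.tensor as T
from theano.tensor.signal.pool import pool_2d
from theano.sandbox.rng_mrg import MRG_RandomStreams

x = T.tensor4('x')  # dtype follows floatX
# ignore_border=True lets pooling and its gradient run on the GPU
pooled = pool_2d(x, ds=(4, 4), ignore_border=True, st=(4, 4), mode='max')

srng = MRG_RandomStreams(seed=1234)  # samples on the GPU, unlike RandomStreams
mask = srng.binomial(size=pooled.shape, p=0.5, dtype=theano.config.floatX)
f = theano.function([x], pooled * mask)
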
>
> On Fri, Jul 01, 2016, luca.wag...@gmail.com wrote: 
> > I attach the file with the result. 
> > Many Thanks 
> > Luca 
> > 
>
> > Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 
> 18:08:32) 
> > [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2 
> > Type "help", "copyright", "credits" or "license" for more information. 
> > Anaconda is brought to you by Continuum Analytics. 
> > Please check out: http://continuum.io/thanks and https://anaconda.org 
> > >>> import run_multi_conv 
> > Mapped name None to device cuda: GeForce 840M 
> > Using cuDNN version 5005 on context None 
> > /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: 
> UserWarning: downsample module has been moved to the 
> theano.tensor.signal.pool module. 
> >   "downsample module has been moved to the theano.tensor.signal.pool 
> module.") 
> > >>> run_multi_conv.run_experiments() 
> > Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16 
> > Disabling C code for Elemwise{Cast{float32}} due to unsupported float16 
> > Disabling C code for Elemwise{Cast{float16}} due to unsupported float16 
> > Disabling C code for Elemwise{Cast{float16}} due to unsupported float16 
> > Disabling C code for Alloc due to unsupported float16 
> > Disabling C code for Cast{float16} due to unsupported float16 
> > Disabling C code for Cast{float16} due to unsupported float16 
> > Disabling C code for Cast{float16} due to unsupported float16 
> > Disabling C code for Cast{float16} due to unsupported float16 
> > Disabling C code for RandomFunction{binomial} due to unsupported float16 
> > Disabling C code for RandomFunction{binomial} due to unsupported float16 
> > Disabling C code for GpuDnnConv{algo='small', inplace=True} due to 
> unsupported float16 
> > Disabling C code for DiagonalSubtensor{inplace} due to unsupported 
> float16 
> > Disabling C code for Pool{ds=(4, 4), ignore_border=False, st=(4, 4), 
> padding=(0, 0), mode='max'} due to unsupported float16 
> > Disabling C code for Pool{ds=(1, 4), ignore_border=False, st=(1, 4), 
> padding=(0, 0), mode='max'} due to unsupported float16 
> > Disabling C code for MaxPoolGrad{ds=(1, 4), ignore_border=False, st=(1, 
> 4), padding=(0, 0), mode='max'} due to unsupported float16 
> > Disabling C code for MaxPoolGrad{ds=(4, 4), ignore_border=False, st=(4, 
> 4), padding=(0, 0), mode='max'} due to unsupported float16 
> > Disabling C code for IncDiagonalSubtensor due to unsupported float16 
> > Disabling C code for GpuDnnConvGradW{algo='none', inplace=True} due to 
> unsupported float16 
> > HostFromGpu(gpuarray) [id A]  ''   150 
> >  |GpuElemwise{Composite{((-Cast{float16}(((-i0) / i1))) + (i2 * i3) + 
> (i2 * i4))}}[(0, 0)] [id B] (float16, ())> '' 
>   147 
> >|GpuCAReduceCuda{add} [id C] (float16, ())> ''   
> 135 
> >| |GpuCrossentropySoftmaxArgmax1HotWithBias.0 [id D] 
> (float16, (False,))> ''   133 
> >|   |Rebroadcast{0} [id E] (float16, (False, 
> False))> ''   132 
> >|   | |GpuGemm{inplace=True} [id F] (float16, 
> (True, False))> ''   130 
> >|   |   |GpuAllocEmpty{dtype='float16', context_name=None} [id G] 
> (float16, (True, False))> ''   20 
> >|   |   | |TensorConstant{1} [id H]  
> >|   |   | |Shape_i{1} [id I]  ''   2 
> >|   |   |   |DropoutLogisticRegression_W [id J] 
> (float16, (False, False))> 
> >|   |   |TensorConstant{1.0} [id K]  
> >|   |   |GpuElemwise{mul,no_inplace} [id L] 
> (float16, (False, False))> ''   129 
> >|   |   | |GpuElemwise{Cast{float16}}[] [id M] 
> (float16, (False, False))> ''   40 
> >|   |   | | |GpuFrom