Re: [theano-users] Error using floatX = float16 to save memory

2016-10-14 Thread luca . wagner . 0812
Hi Pascal, I don't know how to see what happens during "raise_with_op" in that call. This is the output using pdb inside spyder: Python 2.7.12 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:42:40) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2 Type "help", "copyright", "credits" or

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-13 Thread Pascal Lamblin
On Thu, Oct 13, 2016, luca.wagner.0...@gmail.com wrote: > whether I use float16 or float32, theano.gpuarray.dnn.dnn_conv or > theano.tensor.nnet.conv3d2d.conv3d > it works only for small images, but if I increase the image size there are > memory problems. > I did not have this issue

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-13 Thread Frédéric Bastien
For the memory error, the problem is that you are trying to allocate 14G for a shared variable on a 12G GPU. This is probably not what you want to do. Use theano.tensor.nnet.conv3d now (not conv3d2d.conv3d() or dnn_conv3d). But we need to fix the memory problem. conv3d2d.conv3d probably causes an upcast
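A minimal sketch of the switch Fred suggests here, using the newer theano.tensor.nnet.conv3d interface; the tensor shapes, filter sizes and variable names below are assumptions for illustration, not code from the thread:

    # Sketch only: conv3d expects a 5D input (batch, channels, depth, rows, cols)
    # and 5D filters (out_channels, in_channels, depth, rows, cols).
    import numpy
    import theano
    import theano.tensor as T
    from theano.tensor.nnet import conv3d  # abstract conv3d interface (PR #4862)

    x = T.TensorType(theano.config.floatX, (False,) * 5)('x')
    # Keep shared variables small enough to fit on the GPU (see the 14G vs 12G
    # point above); this filter shape is made up for the example.
    w = theano.shared(numpy.random.randn(8, 1, 3, 3, 3).astype(theano.config.floatX), name='w')
    y = conv3d(x, w, border_mode='valid', subsample=(1, 1, 1))
    f = theano.function([x], y)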

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-11 Thread luca . wagner . 0812
Hi Fred, I installed the Theano version that contains "Adding an AbstractConv3d interface #4862" https://github.com/Theano/Theano/pull/4862 but now it doesn't work

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-10 Thread luca . wagner . 0812
On Friday, October 7, 2016 at 5:38:14 PM UTC+2, nouiz wrote: > > > > On Fri, Oct 7, 2016 at 11:31 AM, Pascal Lamblin > wrote: > >> On Fri, Oct 07, 2016, luca.wag...@gmail.com wrote: >> > Hi Fred, >> > I did a test using: >> > >> > theano.tensor.nnet.conv3d2d import

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-10 Thread luca . wagner . 0812
Hi Fred, first I created an h5py dataset, put the files into it and then loaded the data. I don't use pickle but h5py to load the data: with h5py.File(h5py_dataset, 'r') as f: f.visit(dataset_list.append) for j in
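A hedged reconstruction of the loading pattern quoted above; the file name, the float32 cast and the Dataset filter are assumptions added for the example, not code from the post:

    # Collect every item name in the HDF5 file, then read the actual datasets.
    import h5py
    import numpy

    dataset_list = []
    with h5py.File('dataset.h5', 'r') as f:   # "h5py_dataset" in the original post
        f.visit(dataset_list.append)
        arrays = [numpy.asarray(f[name], dtype='float32')
                  for name in dataset_list
                  if isinstance(f[name], h5py.Dataset)]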

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-07 Thread Frédéric Bastien
On Fri, Oct 7, 2016 at 11:31 AM, Pascal Lamblin wrote: > On Fri, Oct 07, 2016, luca.wagner.0...@gmail.com wrote: > > Hi Fred, > > I did a test using: > > > > theano.tensor.nnet.conv3d2d import conv3d > > That's the old conv3d2d code, that should not be needed with

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-07 Thread Pascal Lamblin
On Fri, Oct 07, 2016, luca.wagner.0...@gmail.com wrote: > Hi Fred, > I did a test using: > > theano.tensor.nnet.conv3d2d import conv3d That's the old conv3d2d code, which should not be needed with cuDNN and which has some pieces that do not work in float16. These are not the problems we should

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-07 Thread luca . wagner . 0812
Hi Fred, I did a test using: from theano.tensor.nnet.conv3d2d import conv3d and this PR: https://github.com/Theano/Theano/pull/4862 [global] floatX = float16 device=cuda [cuda] root = /usr/local/cuda-7.5 [nvcc] fastmath=True optimizer = fast_compile [dnn.conv] algo_fwd = time_once algo_bwd_filter =
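The flags quoted in this message, laid out as they would appear in a .theanorc; the two values cut off by the digest (algo_bwd_filter and algo_bwd_data) are assumed to be time_once, matching Pascal's suggestion elsewhere in this thread:

    # Reassembled .theanorc; the algo_bwd_filter / algo_bwd_data values are assumed.
    [global]
    floatX = float16
    device = cuda

    [cuda]
    root = /usr/local/cuda-7.5

    [nvcc]
    fastmath = True
    optimizer = fast_compile

    [dnn.conv]
    algo_fwd = time_once
    algo_bwd_filter = time_once
    algo_bwd_data = time_once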

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-07 Thread luca . wagner . 0812
Hi Fred, I did the test using: theano.tensor.nnet.conv3d2d.conv3d I updated the code with https://github.com/Theano/Theano/pull/4862 .theanorc: [global] floatX =

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-06 Thread Frédéric Bastien
For float16, always use device=cuda, not device=gpu. This could be your problem. Can you test that? Thanks, Fred On Tue, Oct 4, 2016 at 10:21 AM, wrote: > Hi Fred, > I tested the convnet using > > floatX= float32, > device=gpu > theano.tensor.nnet.conv3d2d.conv3d
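A quick way to verify which flags Theano actually picked up when testing this; the check itself is an illustration added here, not something from the thread:

    # Print the effective flags: device should be 'cuda' (or 'cuda0'), not 'gpu',
    # and floatX should be 'float16'.
    import theano
    print(theano.config.device)
    print(theano.config.floatX)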

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-04 Thread luca . wagner . 0812
Hi Fred, I tested the convnet using floatX= float32, device=gpu theano.tensor.nnet.conv3d2d.conv3d updated theano/sandbox/cuda/blas.py downloaded from https://github.com/Theano/Theano/pull/5050

Re: [theano-users] Error using floatX = float16 to save memory

2016-09-30 Thread Pascal Lamblin
Forwarding the response to the ML: The default algo selected by Theano is not correct for conv3d (it only exists for 2D); this should be fixed. In the meantime, try: [dnn.conv] algo_fwd = time_once algo_bwd_filter = time_once algo_bwd_data = time_once On Fri, Sep 30, 2016,
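Pascal's workaround laid out as it would appear in the .theanorc, with the same values as quoted above:

    # Time the available cuDNN algorithms once instead of relying on the default,
    # which only exists for 2D convolutions.
    [dnn.conv]
    algo_fwd = time_once
    algo_bwd_filter = time_once
    algo_bwd_data = time_once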

Re: [theano-users] Error using floatX = float16 to save memory

2016-09-30 Thread luca . wagner . 0812
Hi Pascal, I did the previous test using [global] floatX = float32 device=gpu [cuda]. Following your answer I did another test with floatX = float32 and device=cuda0, but it doesn't work: ValueError: ("convolution algo %s can't be used for 3d convolutions", ('small',)) Using floatX = float16

Re: [theano-users] Error using floatX = float16 to save memory

2016-09-29 Thread luca . wagner . 0812
Fred, I used from theano.sandbox.cuda.dnn import dnn_conv3d instead of theano.tensor.nnet.conv3d2d.conv3d. It works with floatX=float32 and device=gpu, but it doesn't work with floatX=float16 and device=cuda: TypeError: CudaNdarrayType only supports dtype float32 for now. Tried using dtype float16

Re: [theano-users] Error using floatX = float16 to save memory

2016-09-29 Thread luca . wagner . 0812
Fred, I started to look into cuDNN: I can't find gpuarray.dnn.dnn_conv3d; what I found is class theano.sandbox.cuda.dnn.GpuDnnConv3d in http://deeplearning.net/software/theano/library/sandbox/cuda/dnn.html while in

Re: [theano-users] Error using floatX = float16 to save memory

2016-09-23 Thread luca . wagner . 0812
Many thanks On Thursday, September 22, 2016 at 11:52:59 PM UTC+2, Arnaud Bergeron wrote: > > Actually I believe that your code is using conv3d2d in order to get it > working with 3d convolutions on the GPU. This is now directly supported by > cuDNN if you use dnn_conv() with 5d objects from

Re: [theano-users] Error using floatX = float16 to save memory

2016-09-22 Thread Arnaud Bergeron
Actually I believe that your code is using conv3d2d in order to get it working with 3d convolutions on the GPU. This is now directly supported by cuDNN if you use dnn_conv() with 5d objects from the new backend. There is some work around an abstract conv3d interface which will hopefully be
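A hedged sketch of what Arnaud describes, calling the cuDNN 3d convolution directly on the new gpuarray backend; the function used here (theano.gpuarray.dnn.dnn_conv3d), the shapes and the filter sizes are assumptions, and it needs device=cuda plus a working cuDNN install:

    # Sketch only: 5D image and kernel tensors convolved directly through cuDNN.
    import numpy
    import theano
    import theano.tensor as T
    from theano.gpuarray.dnn import dnn_conv3d

    img = T.TensorType(theano.config.floatX, (False,) * 5)('img')
    kern = theano.shared(numpy.random.randn(4, 1, 3, 3, 3).astype(theano.config.floatX), name='kern')
    out = dnn_conv3d(img, kern, border_mode='valid')
    f = theano.function([img], out)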

Re: [theano-users] Error using floatX = float16 to save memory

2016-09-19 Thread luca . wagner . 0812
Hi Fred, I thank you very much for your help and I hope that DiagonalSubtensor and IncDiagonalSubtensor may be supported on GPU with float16. Many thanks, Luca

Re: [theano-users] Error using floatX = float16 to save memory

2016-08-29 Thread Pascal Lamblin
It is likely that some operations are still not supported on GPU with float16. From your messages, I would guess at least the following ones: - DiagonalSubtensor - IncDiagonalSubtensor I thought that random sampling was supported, but I see "RandomFunction{binomial}", which is surprising. Are
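One way to check which ops actually got a GPU/float16 implementation is to list the ops of the compiled function; anything not prefixed with Gpu ran on the CPU. This diagnostic is an illustration added here, not a suggestion from the thread:

    # Tiny self-contained example; replace the toy function with the real convnet.
    import theano
    import theano.tensor as T

    x = T.matrix('x')
    f = theano.function([x], T.nnet.sigmoid(x).sum())
    for node in f.maker.fgraph.toposort():
        print(node.op)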

Re: [theano-users] Error using floatX = float16 to save memory

2016-08-24 Thread Frédéric Bastien
thanks. On Wed, Aug 24, 2016 at 5:16 AM, wrote: > Fred, > many thanks for your help. > > I reinstalled and updated anaconda. > I reinstalled theano and gpuarray/pygpu: > Theano==0.9.0.dev2 > pygpu==0.2.1 > > Then I tested a small 3D convnet using first: > floatX =

Re: [theano-users] Error using floatX = float16 to save memory

2016-07-21 Thread luca . wagner . 0812
After I reinstalled theano+gpuarray+pygpu, I'm still doing tests. Using flags: floatX = float32 device=gpu error is: Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec 6 2015, 18:08:32) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2 Type "help", "copyright", "credits" or "license" for

Re: [theano-users] Error using floatX = float16 to save memory

2016-07-21 Thread luca . wagner . 0812
Frederic, I had to reinstall the updated Theano version + gpuarray and pygpu. If the flags are: floatX = float16 device=cuda the convnet starts without errors: Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec 6 2015, 18:08:32) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2 Type

Re: [theano-users] Error using floatX = float16 to save memory

2016-07-21 Thread luca . wagner . 0812
Frederic, In ops.py I can't find shape_i_op thanks On Thursday, July 21, 2016 at 11:50:51 AM UTC+2, luca.wag...@gmail.com wrote: > > Frederic, > this is the feedback afterl the upgrades about float 16. > > Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec 6 2015, 18:08:32) > [GCC 4.4.7

Re: [theano-users] Error using floatX = float16 to save memory

2016-07-21 Thread luca . wagner . 0812
Frederic, I'll do it and give you feedback, many thanks Luca On Tuesday, July 19, 2016 at 10:09:21 PM UTC+2, nouiz wrote: > > We have a PR that upgrades some stuff about float16: > > https://github.com/Theano/Theano/pull/4764/files > > It probably fixes your problem. Can you try it to confirm that

Re: [theano-users] Error using floatX = float16 to save memory

2016-07-19 Thread Frédéric Bastien
We have a PR that upgrades some stuff about float16: https://github.com/Theano/Theano/pull/4764/files It probably fixes your problem. Can you try it to confirm that you don't have a different problem? Thanks, Frédéric On Fri, Jul 15, 2016 at 4:55 AM, wrote: > ok I