Thanks for the error report. It is fixed in this PR: https://github.com/Theano/Theano/pull/4771
Hopefully we can soon finish our Jenkins installation so that PRs get tested on GPUs!

Fred

On Thu, Jul 21, 2016 at 9:02 AM, <[email protected]> wrote:

> After I reinstalled theano + gpuarray + pygpu, I'm still running tests.
>
> Using flags:
>
>     floatX = float32
>     device = gpu
>
> the error is:
>
>     Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec 6 2015, 18:08:32)
>     [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
>     Type "help", "copyright", "credits" or "license" for more information.
>     Anaconda is brought to you by Continuum Analytics.
>     Please check out: http://continuum.io/thanks and https://anaconda.org
>     >>> runfile('/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py', wdir='/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core')
>     Mapped name None to device cuda: GeForce 840M
>     Using cuDNN version 5005 on context None
>     Using gpu device 0: GeForce 840M (CNMeM is disabled, cuDNN 5005)
>     /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
>       "downsample module has been moved to the theano.tensor.signal.pool module.")
>     ERROR (theano.gof.opt): Optimization failure due to: LocalOptGroup(local_abstractconv_cudnn,local_conv_dnn,local_abstractconv_gemm,local_abstractconv_gradinputs_gemm,local_abstractconv_gradweight_gemm,local_conv_gemm)
>     ERROR (theano.gof.opt): node: AbstractConv2d{border_mode='valid', subsample=(1, 1), filter_flip=True, imshp=(20, 1, 20, 20), kshp=(100, 1, 5, 5), filter_dilation=(1, 1)}(GpuFromHost.0, GpuReshape{4}.0)
>     ERROR (theano.gof.opt): TRACEBACK:
>     ERROR (theano.gof.opt): Traceback (most recent call last):
>       File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1820, in process_node
>         replacements = lopt.transform(node)
>       File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1265, in transform
>         repl = opt.transform(node)
>       File "/home/luca/data/Theano-master/theano/sandbox/cuda/dnn.py", line 3149, in local_abstractconv_cudnn
>         conv_mode=conv_mode)
>       File "/home/luca/data/Theano-master/theano/sandbox/cuda/dnn.py", line 1181, in dnn_conv
>         conv_mode=conv_mode, precision=precision)(img.shape,
>       File "/home/luca/data/Theano-master/theano/sandbox/cuda/dnn.py", line 180, in __init__
>         assert precision in ['float16', 'float32', 'float64']
>     AssertionError
>
>     Traceback (most recent call last):
>       File "<stdin>", line 1, in <module>
>       File "/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 714, in runfile
>         execfile(filename, namespace)
>       File "/home/luca/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 81, in execfile
>         builtins.execfile(filename, *where)
>       File "/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py", line 124, in <module>
>         run_experiments()
>       File "/home/luca/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core/run_multi_conv.py", line 83, in run_experiments
>         Pretrained = False
>       File "mpr_convnet_class.py", line 291, in __init__
>         train_model = theano.function([x,y],cost, updates=updates)
>       File "/home/luca/data/Theano-master/theano/compile/function.py", line 322, in function
>         output_keys=output_keys)
>       File "/home/luca/data/Theano-master/theano/compile/pfunc.py", line 480, in pfunc
>         output_keys=output_keys)
>       File "/home/luca/data/Theano-master/theano/compile/function_module.py", line 1783, in orig_function
>         output_keys=output_keys).create(
>       File "/home/luca/data/Theano-master/theano/compile/function_module.py", line 1463, in __init__
>         optimizer_profile = optimizer(fgraph)
>       File "/home/luca/data/Theano-master/theano/gof/opt.py", line 102, in __call__
>         return self.optimize(fgraph)
>       File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in optimize
>         ret = self.apply(fgraph, *args, **kwargs)
>       File "/home/luca/data/Theano-master/theano/gof/opt.py", line 235, in apply
>         sub_prof = optimizer.optimize(fgraph)
>       File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in optimize
>         ret = self.apply(fgraph, *args, **kwargs)
>       File "/home/luca/data/Theano-master/theano/gof/opt.py", line 235, in apply
>         sub_prof = optimizer.optimize(fgraph)
>       File "/home/luca/data/Theano-master/theano/gof/opt.py", line 90, in optimize
>         ret = self.apply(fgraph, *args, **kwargs)
>       File "/home/luca/data/Theano-master/theano/gof/opt.py", line 2257, in apply
>         lopt_change = self.process_node(fgraph, node, lopt)
>       File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1825, in process_node
>         lopt, node)
>       File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1719, in warn_inplace
>         return NavigatorOptimizer.warn(exc, nav, repl_pairs, local_opt, node)
>       File "/home/luca/data/Theano-master/theano/gof/opt.py", line 1705, in warn
>         raise exc
>     AssertionError
>     >>>
>
> Using flags:
>
>     floatX = float16
>     device = cuda
>
> the convnet starts without errors:
>
>     luca@cuda:~/data/DeepLearningTutorials/Theano-3D-ConvNet-master/convnet3d/core$ python
>     Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec 6 2015, 18:08:32)
>     [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
>     Type "help", "copyright", "credits" or "license" for more information.
>     Anaconda is brought to you by Continuum Analytics.
>     Please check out: http://continuum.io/thanks and https://anaconda.org
>     >>> import run_multi_conv
>     Mapped name None to device cuda: GeForce 840M
>     Using cuDNN version 5005 on context None
>     /home/luca/data/Theano-master/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
>       "downsample module has been moved to the theano.tensor.signal.pool module.")
>     >>> run_multi_conv.run_experiments()
>     Disabling C code for Elemwise{mul,no_inplace} due to unsupported float16
>     Disabling C code for Elemwise{Cast{float32}} due to unsupported float16
>     Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
>     Disabling C code for Elemwise{Cast{float16}} due to unsupported float16
>     Disabling C code for Alloc due to unsupported float16
>     Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
>     Disabling C code for IncDiagonalSubtensor due to unsupported float16
>     Disabling C code for DiagonalSubtensor{inplace} due to unsupported float16
>     Disabling C code for MaxAndArgmax due to unsupported float16
>
>     start time:
>     21/07/2016
>     15:01:22
>
>     images for training: 594
>     images for validation: 82
>     epochs: 200
>
>     ... training neural network 13
>
>     training @ iter = 0
>     training @ iter = 200
>     training @ iter = 400
>
>     training cost 0.69336
>     epoch 1, training batch 594/594, validation error 45.122 %
>
> Thanks
> Luca
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
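For anyone else comparing the two setups in the report: Theano reads `THEANO_FLAGS` once, at import time, so the flags must be in the environment before `import theano` runs. A minimal sketch of setting them from inside Python instead of the shell (the flag values are taken from Luca's report; everything else here is illustrative):

```python
import os

# Set flags BEFORE importing theano; the config is read at import time.
# The two configurations compared in the report:
#   'floatX=float32,device=gpu'   -> old sandbox.cuda backend (hit the
#                                    cuDNN precision AssertionError above)
#   'floatX=float16,device=cuda'  -> new gpuarray backend (ran without errors)
os.environ['THEANO_FLAGS'] = 'floatX=float16,device=cuda'

# import theano  # left commented out: importing requires a GPU machine;
#                # theano.config.floatX would then reflect the flag above

print(os.environ['THEANO_FLAGS'])
```

Equivalently, the flags can be passed on the command line, e.g. `THEANO_FLAGS='floatX=float16,device=cuda' python run_multi_conv.py`, which is how the two runs above differ.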
