I have 1948MB of total dedicated memory, of which 233MB is in use. I had set lib.cnmem=1. No, the error is not shown when lib.cnmem=0, but then it shows

"Using gpu device 0: GeForce 820M (CNMeM is disabled, cuDNN not available)"
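
In case it helps anyone hitting the same problem: the totals above can be checked with nvidia-smi, and CNMeM accepts a fraction between 0 and 1 instead of "all or nothing". A minimal sketch (the 0.45 fraction and the script name `my_script.py` are just placeholders, not recommendations):

```shell
# Show total/used/free memory on the GPU (part of the NVIDIA driver tools).
nvidia-smi --query-gpu=memory.total,memory.used,memory.free --format=csv

# Run with CNMeM pre-allocating ~45% of the card's memory; Theano treats
# any lib.cnmem value strictly between 0 and 1 as a fraction of total memory.
THEANO_FLAGS='device=gpu,floatX=float32,lib.cnmem=0.45' python my_script.py

# The same setting can be made permanent in ~/.theanorc:
#   [lib]
#   cnmem = 0.45
```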

On Friday, December 2, 2016 at 2:01:58 AM UTC+5:30, Pascal Lamblin wrote:
>
> The error message mentions being out of memory for cnmem. 
> How much memory do you have on your GPU? How much free memory? 
> What is the value you gave to lib.cnmem? 
> Does it still happen if you set lib.cnmem to 0? 
>
> On Thu, Dec 01, 2016, Sandipan Haldar wrote: 
> > *Can somebody help me with this error?* 
> > 
> > 
> > 
> > >>> import theano 
> > ERROR (theano.sandbox.cuda): ERROR: Not using GPU. Initialisation of device gpu failed: 
> > initCnmem: cnmemInit call failed! Reason=CNMEM_STATUS_OUT_OF_MEMORY. numdev=1 
> > 
> > Traceback (most recent call last): 
> >   File "/home/sandipan/.local/lib/python3.5/site-packages/theano/compile/function_module.py", line 859, in __call__ 
> >     outputs = self.fn() 
> > RuntimeError: Cuda error: kernel_reduce_ccontig_node_meb404c8cd39208f6884dd773b584b7d7_0: out of memory. (grid: 1 x 1; block: 256 x 1 x 1) 
> > 
> > 
> > During handling of the above exception, another exception occurred: 
> > 
> > Traceback (most recent call last): 
> >   File "<stdin>", line 1, in <module> 
> >   File "/home/sandipan/.local/lib/python3.5/site-packages/theano/__init__.py", line 111, in <module> 
> >     theano.sandbox.cuda.tests.test_driver.test_nvidia_driver1() 
> >   File "/home/sandipan/.local/lib/python3.5/site-packages/theano/sandbox/cuda/tests/test_driver.py", line 38, in test_nvidia_driver1 
> >     if not numpy.allclose(f(), a.sum()): 
> >   File "/home/sandipan/.local/lib/python3.5/site-packages/theano/compile/function_module.py", line 871, in __call__ 
> >     storage_map=getattr(self.fn, 'storage_map', None)) 
> >   File "/home/sandipan/.local/lib/python3.5/site-packages/theano/gof/link.py", line 314, in raise_with_op 
> >     reraise(exc_type, exc_value, exc_trace) 
> >   File "/home/sandipan/.local/lib/python3.5/site-packages/six.py", line 685, in reraise 
> >     raise value.with_traceback(tb) 
> >   File "/home/sandipan/.local/lib/python3.5/site-packages/theano/compile/function_module.py", line 859, in __call__ 
> >     outputs = self.fn() 
> > RuntimeError: Cuda error: kernel_reduce_ccontig_node_meb404c8cd39208f6884dd773b584b7d7_0: out of memory. (grid: 1 x 1; block: 256 x 1 x 1) 
> > 
> > Apply node that caused the error: 
> > GpuCAReduce{add}{1}(<CudaNdarrayType(float32, vector)>) 
> > Toposort index: 0 
> > Inputs types: [CudaNdarrayType(float32, vector)] 
> > Inputs shapes: [(10000,)] 
> > Inputs strides: [(1,)] 
> > Inputs values: ['not shown'] 
> > Outputs clients: [[HostFromGpu(GpuCAReduce{add}{1}.0)]] 
> > 
> > HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'. 
> > HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node. 
> > >>> 
> > 
> > -- 
> > 
> > --- 
> > You received this message because you are subscribed to the Google Groups "theano-users" group. 
> > To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. 
> > For more options, visit https://groups.google.com/d/optout. 
>
>
> -- 
> Pascal 
>
