Hello, I'm getting a segmentation fault upon calling "import theano". My system is Ubuntu 16.04 LTS with CUDA 8.0 and a GeForce 980M. The GPU is detected and the CUDA driver is working correctly (applications such as the NVIDIA examples, PyCUDA, and Caffe all run on the GPU without problems). Running
theano-cache clear also results in a segmentation fault. Clearing the cache via rm -rf ~/.theano did not help, and neither did reinstalling Theano. Here's the output:

Python 2.7.12 (default, Jul 1 2016, 15:12:24)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import theano
[ln16ks:26814] *** Process received signal ***
[ln16ks:26814] Signal: Segmentation fault (11)
[ln16ks:26814] Signal code: Address not mapped (1)
[ln16ks:26814] Failing at address: 0x3038
[ln16ks:26814] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x113d0)[0x7efd4e87e3d0]
[ln16ks:26814] [ 1] /lib/x86_64-linux-gnu/libpthread.so.0(pthread_mutex_lock+0x4)[0x7efd4e876d84]
[ln16ks:26814] [ 2] /usr/lib/x86_64-linux-gnu/libcuda.so.1(+0x1ba0e8)[0x7efd341690e8]
[ln16ks:26814] [ 3] /usr/lib/x86_64-linux-gnu/libcuda.so.1(+0x26ff91)[0x7efd3421ef91]
[ln16ks:26814] [ 4] /usr/lib/x86_64-linux-gnu/libcuda.so.1(+0x270105)[0x7efd3421f105]
[ln16ks:26814] [ 5] /usr/lib/x86_64-linux-gnu/libcuda.so.1(+0x1bfe24)[0x7efd3416ee24]
[ln16ks:26814] [ 6] /usr/lib/x86_64-linux-gnu/libcuda.so.1(+0x1c1677)[0x7efd34170677]
[ln16ks:26814] [ 7] /usr/lib/x86_64-linux-gnu/libcuda.so.1(+0x195846)[0x7efd34144846]
[ln16ks:26814] [ 8] /usr/lib/x86_64-linux-gnu/libcuda.so.1(cuInit+0x4d)[0x7efd3419079d]
[ln16ks:26814] [ 9] /usr/local/cuda/lib64/libcudart.so.8.0(+0x1b405)[0x7efd2c508405]
[ln16ks:26814] [10] /usr/local/cuda/lib64/libcudart.so.8.0(+0x1b461)[0x7efd2c508461]
[ln16ks:26814] [11] /lib/x86_64-linux-gnu/libpthread.so.0(+0xead9)[0x7efd4e87bad9]
[ln16ks:26814] [12] /usr/local/cuda/lib64/libcudart.so.8.0(+0x4aec9)[0x7efd2c537ec9]
[ln16ks:26814] [13] /usr/local/cuda/lib64/libcudart.so.8.0(+0x1784a)[0x7efd2c50484a]
[ln16ks:26814] [14] /usr/local/cuda/lib64/libcudart.so.8.0(+0x1b31b)[0x7efd2c50831b]
[ln16ks:26814] [15] /usr/local/cuda/lib64/libcudart.so.8.0(cudaGetDeviceCount+0x4a)[0x7efd2c51e18a]
[ln16ks:26814] [16] /home/sullivan/.theano/compiledir_Linux-4.4--generic-x86_64-with-Ubuntu-16.04-xenial-x86_64-2.7.12-64/cuda_ndarray/cuda_ndarray.so(+0x15914)[0x7efd2ee7b914]
[ln16ks:26814] [17] python(PyEval_EvalFrameEx+0x68a)[0x4c41da]
[ln16ks:26814] [18] python(PyEval_EvalCodeEx+0x255)[0x4c22e5]
[ln16ks:26814] [19] python(PyEval_EvalCode+0x19)[0x4c2089]
[ln16ks:26814] [20] python(PyImport_ExecCodeModuleEx+0xcb)[0x4c019b]
[ln16ks:26814] [21] python[0x4bd24e]
[ln16ks:26814] [22] python[0x4be547]
[ln16ks:26814] [23] python[0x4afd2d]
[ln16ks:26814] [24] python(PyImport_ImportModuleLevel+0x8bd)[0x4af4cd]
[ln16ks:26814] [25] python[0x4b10a8]
[ln16ks:26814] [26] python(PyObject_Call+0x43)[0x4b0de3]
[ln16ks:26814] [27] python(PyEval_CallObjectWithKeywords+0x30)[0x4ce140]
[ln16ks:26814] [28] python(PyEval_EvalFrameEx+0x31b1)[0x4c6d01]
[ln16ks:26814] [29] python(PyEval_EvalCodeEx+0x255)[0x4c22e5]
[ln16ks:26814] *** End of error message ***

The output from lldb is similar (there are no debugging symbols in libcuda). Digging through the Theano source code, I narrowed the issue down to the call to gpu_init() around line 232 in theano/theano/sandbox/cuda/__init__.py, which calls the corresponding function in theano/theano/sandbox/cuda/cuda_ndarray.cu. Digging deeper, the code that actually triggers the crash is on line 3195:

cudaError err = cudaGetDeviceCount(&deviceCount);

A similar line appears in many of the NVIDIA examples, all of which run correctly on my machine. I suspect the problem lies in how Theano invokes nvcc to compile the .cu files, but I have not been able to verify this. Does anyone have any suggestions on how to proceed?
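In case it helps anyone reproduce this outside of Theano's compiled module: the same runtime call can be exercised directly from Python via ctypes. This is only a sketch; the library name/path is an assumption and may need adjusting to your install (e.g. /usr/local/cuda/lib64/libcudart.so.8.0):

```python
# Sketch: call cudaGetDeviceCount through the CUDA runtime directly,
# bypassing Theano's compiled cuda_ndarray module entirely.
import ctypes
import ctypes.util

def device_count():
    # The soname below is an assumption for a CUDA 8.0 install.
    name = ctypes.util.find_library("cudart") or "libcudart.so.8.0"
    try:
        cudart = ctypes.CDLL(name)
    except OSError:
        return None  # CUDA runtime not found on this machine
    count = ctypes.c_int(0)
    err = cudart.cudaGetDeviceCount(ctypes.byref(count))
    # err == 0 is cudaSuccess; anything else is a runtime error code
    return err, count.value

print(device_count())
```

If this call also crashes, the problem is in the driver/runtime interaction rather than in how Theano compiles its .cu files.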
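Since the crash happens during import, one way to see the Python-side stack without attaching a debugger is faulthandler, which dumps tracebacks on fatal signals such as SIGSEGV. A minimal sketch (faulthandler is stdlib in Python 3; for Python 2.7 it is available as a backport package on PyPI):

```python
# Sketch: dump Python-level tracebacks when the interpreter receives a
# fatal signal (SIGSEGV, SIGFPE, ...). Enable it before the crashing import.
import faulthandler

faulthandler.enable()              # install handlers for fatal signals
print(faulthandler.is_enabled())   # prints True

# On Python 3 the same effect is available without code changes:
#   python -X faulthandler -c "import theano"
```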