Reading the .so file, it seems to be using the correct library:
$ readelf -a cuda_ndarray.so | grep NEEDED
 0x0000000000000001 (NEEDED)             Shared library: [libcublas.so.8.0]
 0x0000000000000001 (NEEDED)             Shared library: [libpython3.6m.so.1.0]
 0x0000000000000001 (NEEDED)             Shared library: [libcudart.so.7.5]
 0x0000000000000001 (NEEDED)             Shared library: [librt.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [libpthread.so.0]
 0x0000000000000001 (NEEDED)             Shared library: [libdl.so.2]
 0x0000000000000001 (NEEDED)             Shared library: [libstdc++.so.6]
 0x0000000000000001 (NEEDED)             Shared library: [libm.so.6]
 0x0000000000000001 (NEEDED)             Shared library: [libgcc_s.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
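
Note that the output above actually lists both libcublas.so.8.0 and libcudart.so.7.5, so this cached module mixes the two toolkits. To check whether any other module in the Theano cache is linked the same way, the readelf check can be run over every cached .so. A minimal sketch (my addition, assuming the default compiledir under ~/.theano; adjust the path if base_compiledir is set):

import glob
import os
import subprocess

# Default Theano cache location (an assumption; base_compiledir may differ).
compiledir = os.path.expanduser("~/.theano")
for so_path in glob.glob(os.path.join(compiledir, "**", "*.so"), recursive=True):
    out = subprocess.check_output(["readelf", "-d", so_path],
                                  universal_newlines=True)
    cuda_deps = [line.strip() for line in out.splitlines()
                 if "NEEDED" in line and "libcu" in line]
    if cuda_deps:
        print(so_path)
        for dep in cuda_deps:
            print("   ", dep)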

On Wednesday, 8 February 2017 09:20:31 UTC+5:30, Jayendra Parmar wrote:
>
> With more debugging, I get the error here:
>
> https://github.com/Theano/Theano/blob/8b9f73365e4932f1c005a0a37b907d28985fbc5f/theano/gof/cmodule.py#L302
>
> when `nvcc_compiler` tries to load `cuda_ndarray.so` from the 
> `cuda_ndarray` directory in the Theano cache.
>
> The compilation phase for mod.cu runs without error.
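
One way to surface the underlying loader error, instead of the generic Theano failure, is to load the cached cuda_ndarray.so directly, which is roughly what the module loading in cmodule.py does. A minimal sketch; the glob pattern is my guess at the default compiledir layout, not something Theano documents:

import glob
import os
from importlib.machinery import ExtensionFileLoader

# Glob pattern is an assumption about the default ~/.theano layout.
pattern = os.path.expanduser("~/.theano/*/cuda_ndarray/cuda_ndarray.so")
for so_path in glob.glob(pattern):
    try:
        ExtensionFileLoader("cuda_ndarray", so_path).load_module()
        print("loaded OK:", so_path)
    except ImportError as err:
        # Prints the raw dynamic-linker message (e.g. a missing
        # libcudart.so.7.5) that the Theano error message hides.
        print("failed:", so_path)
        print("   ", err)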
>
> On Wednesday, 8 February 2017 06:59:25 UTC+5:30, Jayendra Parmar wrote:
>>
>> No, I don't have two CUDA versions on my system; I have only CUDA 8.
>>
>> On Wednesday, 8 February 2017 03:27:51 UTC+5:30, nouiz wrote:
>>>
>>> So it probably means your environment contains a mix of both CUDA versions. 
>>> Make sure your environment variables only contain one CUDA version; 
>>> sometimes there is a mix. Using the env variable CUDA_ROOT or the Theano 
>>> flag cuda.root isn't a reliable way to select which CUDA version to use.
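
A quick way to look for such a mix is to scan the usual compiler and loader environment variables for CUDA paths. A minimal sketch of that check (mine, not Fred's; the variable list is just the usual suspects):

import os

for var in ("PATH", "LD_LIBRARY_PATH", "LIBRARY_PATH", "CPATH",
            "CUDA_ROOT", "CUDA_HOME"):
    entries = os.environ.get(var, "").split(os.pathsep)
    hits = [p for p in entries if "cuda" in p.lower()]
    if hits:
        print(var)
        for p in hits:
            print("   ", p)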
>>>
>>> Fred
>>>
>>> On Mon, Feb 6, 2017 at 10:37 AM Frédéric Bastien <[email protected]> 
>>> wrote:
>>>
>>>> Delete your Theano cache. You probably have it populated with modules 
>>>> that request CUDA 7.5. Run:
>>>>
>>>> theano-cache purge
>>>>
>>>> Otherwise, by default it is under ~/.theano.
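
If the theano-cache script is not on the PATH, the same purge can be done by hand. A minimal sketch, assuming the default cache location ~/.theano (skip it if base_compiledir points somewhere else):

import os
import shutil

cache_root = os.path.expanduser("~/.theano")   # default location, as noted above
if os.path.isdir(cache_root):
    for entry in os.listdir(cache_root):
        path = os.path.join(cache_root, entry)
        if os.path.isdir(path):
            print("removing", path)
            shutil.rmtree(path)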
>>>>
>>>> Fred
>>>>
>>>> On Sun, Feb 5, 2017 at 10:50 PM, Jayendra Parmar <[email protected]> 
>>>> wrote:
>>>>
>>>>> Definitely, I can run the CUDA samples:
>>>>>
>>>>> $ ./deviceQuery 
>>>>> ./deviceQuery Starting...
>>>>>
>>>>>  CUDA Device Query (Runtime API) version (CUDART static linking)
>>>>>
>>>>> Detected 1 CUDA Capable device(s)
>>>>>
>>>>> Device 0: "GeForce GTX 970M"
>>>>>   CUDA Driver Version / Runtime Version          8.0 / 7.5
>>>>>   CUDA Capability Major/Minor version number:    5.2
>>>>>   Total amount of global memory:                 3016 MBytes (3162570752 bytes)
>>>>>   (10) Multiprocessors, (128) CUDA Cores/MP:     1280 CUDA Cores
>>>>>   GPU Max Clock rate:                            1038 MHz (1.04 GHz)
>>>>>   Memory Clock rate:                             2505 Mhz
>>>>>   Memory Bus Width:                              192-bit
>>>>>   L2 Cache Size:                                 1572864 bytes
>>>>>   Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
>>>>>   Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
>>>>>   Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
>>>>>   Total amount of constant memory:               65536 bytes
>>>>>   Total amount of shared memory per block:       49152 bytes
>>>>>   Total number of registers available per block: 65536
>>>>>   Warp size:                                     32
>>>>>   Maximum number of threads per multiprocessor:  2048
>>>>>   Maximum number of threads per block:           1024
>>>>>   Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
>>>>>   Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
>>>>>   Maximum memory pitch:                          2147483647 bytes
>>>>>   Texture alignment:                             512 bytes
>>>>>   Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
>>>>>   Run time limit on kernels:                     No
>>>>>   Integrated GPU sharing Host Memory:            No
>>>>>   Support host page-locked memory mapping:       Yes
>>>>>   Alignment requirement for Surfaces:            Yes
>>>>>   Device has ECC support:                        Disabled
>>>>>   Device supports Unified Addressing (UVA):      Yes
>>>>>   Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
>>>>>   Compute Mode:
>>>>>      < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
>>>>>
>>>>> deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 7.5, NumDevs = 1, Device0 = GeForce GTX 970M
>>>>> Result = PASS
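
The "CUDA Driver Version / Runtime Version 8.0 / 7.5" line is the telling part: deviceQuery was built against the 7.5 runtime while the installed driver is 8.0. The same two numbers can be queried from Python via ctypes; a minimal sketch (mine, not from the CUDA samples; the library names are assumptions about a typical Linux install):

import ctypes
import ctypes.util

# libcuda.so.1 ships with the NVIDIA driver; libcudart is found via ldconfig.
cuda = ctypes.CDLL("libcuda.so.1")
cudart = ctypes.CDLL(ctypes.util.find_library("cudart") or "libcudart.so")

drv, rt = ctypes.c_int(0), ctypes.c_int(0)
cuda.cuDriverGetVersion(ctypes.byref(drv))      # e.g. 8000 for CUDA 8.0
cudart.cudaRuntimeGetVersion(ctypes.byref(rt))  # e.g. 7050 for CUDA 7.5
print("driver version :", drv.value)
print("runtime version:", rt.value)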
>>>>>
>>>>>
>>>>> On Monday, 6 February 2017 08:59:50 UTC+5:30, Ria Chakraborty wrote:
>>>>>>
>>>>>> Is your GPU supported by CUDA? Check the NVIDIA website for the list of 
>>>>>> GPUs supported by CUDA.
>>>>>>
>>>>>> On 06-Feb-2017 8:51 AM, "Jayendra Parmar" <[email protected]> 
>>>>>> wrote:
>>>>>>
>>>>>>> Tried it, but it didn't help. Moreover, I uninstalled Theano and 
>>>>>>> installed it from source, and I'm still having that issue.
>>>>>>>
>>>>>>> On Monday, 6 February 2017 00:34:52 UTC+5:30, Mustg Oplay wrote:
>>>>>>>>
>>>>>>>> It may still be worth checking your .theanorc file, since the same error 
>>>>>>>> can happen on Windows:
>>>>>>>>
>>>>>>>> Add the following lines to .theanorc:
>>>>>>>>         [nvcc]
>>>>>>>>         flags=--cl-version=2015 -D_FORCE_INLINES
>>>>>>>> If you do not include the cl-version flag, then you get the error:
>>>>>>>>
>>>>>>>> nvcc fatal : nvcc cannot find a supported version of Microsoft 
>>>>>>>> Visual Studio. Only the versions 2010, 2012, and 2013 are supported
>>>>>>>>
>>>>>>>> The -D_FORCE_INLINES part is for an Ubuntu bug, although I'm not sure 
>>>>>>>> it's necessary anymore. It can help prevent this error:
>>>>>>>>
>>>>>>>> WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu0 
>>>>>>>> is not available (error: cuda unavailable)
>>>>>>>>
>>>>>>>> Note: This error also seems to appear if the g++ version is too new 
>>>>>>>> for the CUDA version.
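
After editing .theanorc it is worth confirming that Theano actually picked the values up. A small check (my addition; the attribute names are from the old theano.sandbox.cuda backend of that era, so treat them as assumptions):

import theano

# Config attribute names assume the old CUDA backend's options.
print("device    :", theano.config.device)
print("floatX    :", theano.config.floatX)
print("nvcc flags:", theano.config.nvcc.flags)
print("cuda.root :", theano.config.cuda.root)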
>>>>>>>>
