On Sunday, July 16, 2017 at 1:43:41 AM UTC+2, Pascal Lamblin wrote:
>
> Your original example seems to work for me, though, so it may have to do 
> with your setup:
>

I got it to work when I removed the device and contexts flags from my theanorc 
config file and used the command

THEANO_FLAGS="init_gpu_device=cuda" python t.py

If I set the device flag to cuda or cuda0 as well, it gives me a segfault.
I found this information when running the test below: "If you want 
GPU-related tests to run on a specific GPU device, and not the default one, 
you should use the init_gpu_device theano flag."

What does that mean for my configuration? What should I change?
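For reference, is the intended setup something like the .theanorc sketch below, with init_gpu_device in place of device? This is only my reading of the test message above, not a configuration I have confirmed works:

[global]
floatX = float32
init_gpu_device = cuda0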

> Yes, it gets tested in our daily buildbot and on several pull requests per 
> week, by our continuous integration systems. I also just launched it 
> manually:
> $ theano-nose theano/gpuarray/tests/test_basic_ops.py:test_gpueye
> Can not use cuDNN on context None: Disabled by dnn.enabled flag
> Mapped name None to device cuda: GeForce GTX 580 (0000:02:00.0)
> .............................................
> ----------------------------------------------------------------------
> Ran 45 tests in 21.645s
>
> OK
>

I get:

ImportError: No module named test_basic_ops


When I run 

THEANO_FLAGS="init_gpu_device=cuda" theano-nose /usr/local/lib/python2.7/dist-packages/theano/gpuarray/tests/test_basic_ops.py:test_gpueye


I get:

    if hasattr(theano.tests, "TheanoNoseTester"):
AttributeError: 'module' object has no attribute 'tests'
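As a possible workaround for the theano-nose import problem, I could also try calling nose directly on the test file; I have not verified yet that this avoids the error:

THEANO_FLAGS="init_gpu_device=cuda" nosetests /usr/local/lib/python2.7/dist-packages/theano/gpuarray/tests/test_basic_ops.py:test_gpueye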

 

> You do not specify C and GPU implementations for the same Op, what we have 
> in general is two different Ops, one that has CPU inputs and outputs, and 
> computes on CPU, and another one with GPU inputs and outputs, that computes 
> on GPU.
> This is necessary because the Variables in Theano are strongly typed, and 
> the device is part of the type.
> There are optimizations that replace CPU Ops by GPU ones, inserting 
> transfer Ops (GpuFromHost, HostFromGpu) if necessary.
> GPU Ops, like CPU ones, can have C (using CUDA) or Python implementations 
> (or both). 
>

Are the rules name-based? For instance, is an Op matched because its name contains the string Gpu? Or is there a registration mechanism, as in other frameworks?
Thanks a lot for the clarification on the optimization rules.
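For instance, with a small function like the sketch below I can see the transfer Ops appearing in the compiled graph (assuming a device=cuda* configuration so the lifting actually happens), but I could not tell from the output where the replacement rule itself is defined:

import theano
import theano.tensor as T

x = T.matrix('x')                        # CPU-typed (TensorType) input
f = theano.function([x], T.dot(x, x.T))
# With device=cuda* the optimizer is expected to replace the CPU dot with a
# GPU Op and to insert GpuFromHost/HostFromGpu transfers around it; on a
# CPU-only configuration the graph keeps the CPU Ops.
theano.printing.debugprint(f)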
 

> What surprises me is to get seg faults in the theano function, while I 
>> would have expected them to occur during evaluation on values...
>>
>
> It is strange indeed. It may be possible that some GPU operations are 
> executed on GPU during the compilation phase, for constant folding 
> (constant propagation) for instance.
> Does it happen as well with the latest master from GitHub?
>

Installing the latest development version from GitHub did not change the results.
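If it helps narrow things down, I could also compile with constant folding excluded, so that no GPU code should run at function-compilation time. Assuming the optimization is registered under the name constant_folding, that would be something like:

THEANO_FLAGS="device=cuda,optimizer_excluding=constant_folding" python t.py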
 
