[theano-users] Re: Getting "pygpu.gpuarray.GpuArrayException: Out of memory" for a small application

2017-07-03 Thread Daniel Seita
 

Thanks Pascal.


I tried using gpuarray.preallocate values of 0.01 and 0.1. The run with 0.1, for instance, 
starts like this:


$ python scripts/run_rl_mj.py --env_name CartPole-v0 --log trpo_logs/CartPole-v0
Using cuDNN version 5105 on context None 
Preallocating 1218/12186 Mb (0.10) on cuda 
Mapped name None to device cuda: TITAN X (Pascal) (:01:00.0)


But the same error message results:
Traceback (most recent call last):
  File "scripts/run_rl_mj.py", line 116, in 
    main()
  File "scripts/run_rl_mj.py", line 109, in main
    iter_info = opt.step()
  File "/home/daniel/imitation_noise/policyopt/rl.py", line 283, in step
    cfg=self.sim_cfg)
  File "/home/daniel/imitation_noise/policyopt/__init__.py", line 425, in sim_mp
    traj = job.get()
  File "/home/daniel/anaconda2/lib/python2.7/multiprocessing/pool.py", line 567, in get
    raise self._value
pygpu.gpuarray.GpuArrayException: Out of memory

Apply node that caused the error: GpuFromHost(obsfeat_B_Df)
Toposort index: 1
Inputs types: [TensorType(float32, matrix)]
Inputs shapes: [(1, 4)]
Inputs strides: [(16, 4)]
Inputs values: [array([[ 0.02563, -0.03082,  0.01663, -0.00558]], dtype=float32)]
Outputs clients: [[GpuElemwise{Composite{((i0 - i1) / i2)}}[(0, 0)](GpuFromHost.0, /GibbsPolicy/obsnorm/Standardizer/mean_1_D, GpuElemwise{Composite{(i0 + sqrt((i1 * (Composite{(i0 - sqr(i1))}(i2, i3) + Abs(Composite{(i0 - sqr(i1))}(i2, i3))}}[].0)]]

HINT: Re-running with most Theano optimization disabled could give you a 
back-trace of when this node was created. This can be done with by setting 
the Theano flag 'optimizer=fast_compile'. If that does not work, Theano 
optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and 
storage map footprint of this apply node.


With preallocate set to -1, the run starts like this:


$ python scripts/run_rl_mj.py --env_name CartPole-v0 --log trpo_logs/CartPole-v0
Using cuDNN version 5105 on context None
Disabling allocation cache on cuda


I get a similar error message, except now it is an initialization error 
rather than out of memory; the same part of the code runs into problems:


Traceback (most recent call last):
  File "scripts/run_rl_mj.py", line 116, in 
    main()
  File "scripts/run_rl_mj.py", line 109, in main
    iter_info = opt.step()
  File "/home/daniel/imitation_noise/policyopt/rl.py", line 283, in step
    cfg=self.sim_cfg)
  File "/home/daniel/imitation_noise/policyopt/__init__.py", line 425, in sim_mp
    traj = job.get()
  File "/home/daniel/anaconda2/lib/python2.7/multiprocessing/pool.py", line 567, in get
    raise self._value
pygpu.gpuarray.GpuArrayException: initialization error

Apply node that caused the error: GpuFromHost(obsfeat_B_Df)
Toposort index: 1
Inputs types: [TensorType(float32, matrix)]
Inputs shapes: [(1, 4)]
Inputs strides: [(16, 4)]
Inputs values: [array([[ 0.01357, -0.02611,  0.0341 ,  0.0162 ]], dtype=float32)]
Outputs clients: [[GpuElemwise{Composite{((i0 - i1) / i2)}}[(0, 0)](GpuFromHost.0, /GibbsPolicy/obsnorm/Standardizer/mean_1_D, GpuElemwise{Composite{(i0 + sqrt((i1 * (Composite{(i0 - sqr(i1))}(i2, i3) + Abs(Composite{(i0 - sqr(i1))}(i2, i3))}}[].0)]]

HINT: Re-running with most Theano optimization disabled could give you a 
back-trace of when this node was created. This can be done with by setting 
the Theano flag 'optimizer=fast_compile'. If that does not work, Theano 
optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and 
storage map footprint of this apply node. 


Yes, the code does seem to be using multiprocessing. I will try to figure 
out how to handle the multiprocessing, or perhaps just disable it.
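
For reference, a minimal (untested) sketch of Pascal's suggestion: keep 
device=cpu in .theanorc so the parent process never touches CUDA, and bring 
the GPU up inside each worker. The pool arguments here are hypothetical; the 
real code in policyopt/__init__.py is structured differently.

import multiprocessing

def _init_worker():
    # Bring up the GPU only after the worker process exists, so no CUDA
    # context is shared with (or inherited from) the parent process.
    import theano.gpuarray
    theano.gpuarray.use('cuda')

if __name__ == '__main__':
    # Hypothetical setup: 4 workers, each owning its own GPU context.
    pool = multiprocessing.Pool(processes=4, initializer=_init_worker)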

On Monday, July 3, 2017 at 3:08:44 PM UTC-7, Pascal Lamblin wrote:
>
> What happens if you set gpuarray.preallocate to something much smaller, or 
> even to -1?
>
> Also, I see the script uses multiprocessing. Weird things happen if new 
> Python processes are spawned after the GPU has been initialized. This is a 
> limitation of how CUDA handles GPU contexts, I believe.
> The solution would be not to use `device=cuda`, but `device=cpu`, and call 
> `theano.gpuarray.use('cuda')` manually in the subprocess, or after all 
> processes have been launched.
>
> On Sunday, July 2, 2017 at 3:59:31 PM UTC-4, Daniel Seita wrote:
>>
>> I am attempting to run some reinforcement learning code on the GPU. (The 
>> code is https://github.com/openai/imitation if it matters, running 
>> `scripts/run_rl_mj.py`.)
>>
>> I converted the code to run on float32 by changing the way the data is 
> supplied via numpy. Unfortunately, with the new GPU backend, I am getting 
>> an out of memory error, despite having 12GB of memory on my Titan X Pascal 
>> GPU. Here are my settings:
>>
>> $ cat ~/.theanorc 
>> [global] 
>> device = cuda 
>> floatX = float32 
>>
>> [gpuarray] 
>> preallocate = 1 
>>
>> [cuda] 
>> root = 

Re: [theano-users] theano0.9 and cuda-8

2017-07-03 Thread Pascal Lamblin
How did you determine it is using the CPU?

On Monday, July 3, 2017 at 10:20:40 AM UTC-4, ngu...@interactions.com wrote:
>
> Changing it to cuda* results in CPU usage, not GPU
>
> On Friday, June 30, 2017 at 4:20:22 PM UTC-4, nouiz wrote:
>>
>> You should not mix CUDA versions...
>>
>> Do you still use the old gpu back-end (device=gpu*) or the new back-end 
>> (device=cuda*)?
>>
>> Fred
>>
>> On Fri, Jun 30, 2017 at 9:57 AM  wrote:
>>
>>> I am trying to understand some unexplained behavior in my code.
>>> To be sure that the problem is with my code and not with a software 
>>> incompatibility, I would like to be sure about the correctness of my setup.
>>> I have:
>>> theano version 0.9
>>>
>>> CUDA_ROOT =/usr/local/cuda-7.5
>>> LD_PATH=/usr/local/cuda-8/lib64:...
>>> PATH=/usr/local/cuda-8/bin: ...
>>>
>>> Essentially I am using some parts of cuda-8 and some of cuda-7.5.
>>>
>>> With CUDA_ROOT =/usr/local/cuda-8, I cannot compile the theano functions.
>>>
>>> Thanks
>>>
>>>
>>>
>>>
>>



[theano-users] Re: Getting "pygpu.gpuarray.GpuArrayException: Out of memory" for a small application

2017-07-03 Thread Pascal Lamblin
What happens if you set gpuarray.preallocate to something much smaller, or 
even to -1?

Also, I see the script uses multiprocessing. Weird things happen if new 
Python processes are spawned after the GPU has been initialized. This is a 
limitation of how CUDA handles GPU contexts, I believe.
The solution would be not to use `device=cuda`, but `device=cpu`, and call 
`theano.gpuarray.use('cuda')` manually in the subprocess, or after all 
processes have been launched.
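
As an aside, the preallocation setting can also be overridden per run via the 
THEANO_FLAGS environment variable instead of editing .theanorc; a small 
sketch (flag names as documented for the gpuarray backend):

import os
# This must happen before `import theano`. preallocate=-1 disables the
# allocation cache entirely; a fraction such as 0.1 preallocates that
# share of GPU memory up front.
os.environ['THEANO_FLAGS'] = 'device=cuda,floatX=float32,gpuarray.preallocate=-1'
import theano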

On Sunday, July 2, 2017 at 3:59:31 PM UTC-4, Daniel Seita wrote:
>
> I am attempting to run some reinforcement learning code on the GPU. (The 
> code is https://github.com/openai/imitation if it matters, running 
> `scripts/run_rl_mj.py`.)
>
> I converted the code to run on float32 by changing the way the data is 
> supplied via numpy. Unfortunately, with the new GPU backend, I am getting 
> an out of memory error, despite having 12GB of memory on my Titan X Pascal 
> GPU. Here are my settings:
>
> $ cat ~/.theanorc 
> [global] 
> device = cuda 
> floatX = float32 
>
> [gpuarray] 
> preallocate = 1 
>
> [cuda] 
> root = /usr/local/cuda-8.0
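>
> The float32 conversion itself was essentially just casting the numpy 
> arrays before handing them to Theano; a rough sketch (`raw_obs` is a 
> hypothetical placeholder, not a name from the actual code):
>
> import numpy as np
> import theano
> raw_obs = [[0.02, -0.03, 0.01, -0.005]]  # hypothetical placeholder observation
> # Cast inputs to theano.config.floatX (float32 under these settings).
> obs = np.asarray(raw_obs, dtype=theano.config.floatX)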
>
>
> Theano seems to be importing correctly:
>
> $ ipython
> Python 2.7.13 |Anaconda custom (64-bit)| (default, Dec 20 2016, 23:09:15) 
>  
> Type "copyright", "credits" or "license" for more information. 
> IPython 5.3.0 -- An enhanced Interactive Python. 
> ? -> Introduction and overview of IPython's features. 
> %quickref -> Quick reference. 
> help  -> Python's own help system. 
> object?   -> Details about 'object', use 'object??' for extra details. 
>
> In [1]: import theano 
> Using cuDNN version 5105 on context None 
> Preallocating 11576/12186 Mb (0.95) on cuda 
> Mapped name None to device cuda: TITAN X (Pascal) (:01:00.0) 
>
> In [2]: 
>
>
>
> Unfortunately, running `python scripts/run_rl_mj.py --env_name 
> CartPole-v0 --log trpo_logs/CartPole-v0` on the very low-dimensional 
> CartPole setting (the state space is just four numbers, the action just 
> one number) gives me (after a bit of setup):
>
>
> Traceback (most recent call last):
>   File "scripts/run_rl_mj.py", line 116, in 
>     main()
>   File "scripts/run_rl_mj.py", line 109, in main
>     iter_info = opt.step()
>   File "/home/daniel/imitation_noise/policyopt/rl.py", line 280, in step
>     cfg=self.sim_cfg)
>   File "/home/daniel/imitation_noise/policyopt/__init__.py", line 411, in sim_mp
>     traj = job.get()
>   File "/home/daniel/anaconda2/lib/python2.7/multiprocessing/pool.py", line 567, in get
>     raise self._value
> pygpu.gpuarray.GpuArrayException: Out of memory
>
> Apply node that caused the error: GpuFromHost(obsfeat_B_Df)
> Toposort index: 4
> Inputs types: [TensorType(float32, matrix)]
> Inputs shapes: [(1, 4)]
> Inputs strides: [(16, 4)]
> Inputs values: [array([[ 0.04058,  0.00428,  0.03311, -0.02898]], dtype=float32)]
> Outputs clients: [[GpuElemwise{Composite{((i0 - i1) / i2)}}[](GpuFromHost.0, /GibbsPolicy/obsnorm/Standardizer/mean_1_D, GpuElemwise{Composite{(i0 + sqrt((i1 * (Composite{(i0 - sqr(i1))}(i2, i3) + Abs(Composite{(i0 - sqr(i1))}(i2, i3))}}[].0)]]
>
> HINT: Re-running with most Theano optimization disabled could give you a 
> back-trace of when this node was created. This can be done with by setting 
> the Theano flag 'optimizer=fast_compile'. If that does not work, Theano 
> optimizations can be disabled with 'optimizer=None'.
> HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and 
> storage map footprint of this apply node.
>
> Closing remaining open files:trpo_logs/CartPole-v0...done
>
>
> What I'm confused about is that
>
>    - This happens right at the beginning of the reinforcement learning, 
>    so it's not as if the algorithm has been running a long time and then 
>    ran out of memory.
>    - The input shape is quite small, (1, 4) (the (16, 4) above is the 
>    strides, in bytes). In addition, the output is only supposed to do 
>    normalization and several other element-wise operations. None of this 
>    suggests high memory usage.
>
> I tried `optimizer = fast_compile` and re-ran this, but the error message 
> was actually less informative (it contains a subset of the above error 
> message). Running with `exception_verbosity = high` results in a different 
> error message:
>
>
> Max traj len: 200
>
> Traceback (most recent call last):
>   File "scripts/run_rl_mj.py", line 116, in 
>     main()
>   File "scripts/run_rl_mj.py", line 109, in main
>     iter_info = opt.step()
>   File "/home/daniel/imitation_noise/policyopt/rl.py", line 280, in step
>     cfg=self.sim_cfg)
>   File "/home/daniel/imitation_noise/policyopt/__init__.py", line 411, in sim_mp
>     traj = job.get()
>   File "/home/daniel/anaconda2/lib/python2.7/multiprocessing/pool.py", line 567, in get
>     raise self._value
> pygpu.gpuarray.GpuArrayException: initialization error
>
> Closing remaining open 

[theano-users] Re: cudnn detected in cuda backend but not in gpuarray backend

2017-07-03 Thread Pascal Lamblin
The test actually ran on GPU, as evidenced by it printing "GpuElemwise".
The issue is you are using a really old version of "gputest", which does 
not correctly detect the new back-end. Please use the latest version at 
http://deeplearning.net/software/theano/tutorial/using_gpu.html#testing-the-gpu
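
From memory, the script on that page differs mainly in its final check: it 
decides CPU vs GPU by looking for op type names containing "Gpu", rather than 
for the old backend's op classes. Roughly:

from theano import function, config, shared, tensor
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], tensor.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
# New-backend check: a graph that ran on the GPU contains elemwise ops
# whose type names include "Gpu" (e.g. GpuElemwise).
if numpy.any([isinstance(node.op, tensor.Elemwise) and
              ('Gpu' not in type(node.op).__name__)
              for node in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')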

On Sunday, July 2, 2017 at 8:58:41 AM UTC-4, Akshay Chaturvedi wrote:
>
> I was able to solve the issue by setting CPLUS_INCLUDE_PATH. Now the 
> output looks like this
> Using cuDNN version 5110 on context None
> Mapped name None to device cuda0: GeForce GTX 960 (:01:00.0)
> [GpuElemwise{exp,no_inplace}(), 
> HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
> Looping 1000 times took 0.196515 seconds
> Result is [ 1.23178029 1.61879349 1.52278066 ..., 2.20771813 2.29967761
> 1.62323296]
> Used the cpu
>
> The script is unable to detect that the program ran on the GPU, but that's 
> a separate issue. I am attaching the file gputest.py. 
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] Always Segmentation fault(core dumped) when use theano(NOT when import)

2017-07-03 Thread Pascal Lamblin
For pylearn2, you can change the imports to use `six` directly, rather than 
`theano.six`.
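
For example, roughly (a sketch; the exact old import path inside pylearn2 may 
be `theano.compat.six` rather than `theano.six`):

# Before -- breaks once Theano stops bundling six:
# from theano.compat.six.moves import xrange
# After -- depend on the standalone six package directly:
from six.moves import xrange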

On Sunday, July 2, 2017 at 1:53:35 AM UTC-4, noodles wrote:
>
> It works. When I install Theano 1.0, it doesn't report a segmentation 
> fault. But the problem is that I installed Theano in order to use Lasagne 
> and pylearn2, which don't yet support Theano 1.0 (they report "ImportError: 
> No module named six.moves"). Do you know how to resolve this?
>
> Thank you very much.
>
> On Friday, June 30, 2017 at 8:36:49 PM UTC+8, nouiz wrote:
>>
>> Install the dev version of Theano. It contains segmentation fault fixes.
>>
>> If that doesn't work, tell us, but I think it should work.
>>
>> On Fri, Jun 30, 2017 at 06:00, noodles wrote:
>>
>>> Hello,
>>>
>>> I have run into a strange problem when using Theano. I recently 
>>> bought a new computer and installed Theano on it. I can import it in 
>>> Python with no error, but every time I create a function, it crashes 
>>> with "Segmentation fault (core dumped)". Below are the details:
>>> I have installed Theano on two other, older machines, and they 
>>> work well. The new machine is: CPU: Intel 7700; GPU: 2x GTX 1080 Ti; 
>>> OS: Ubuntu 16.04; CUDA 8.0; cuDNN 5.1. I used miniconda2 to install 
>>> Theano (conda install theano), with Python 2.7 and Theano 0.9.0.
>>>
>>> When I import theano in Python, the output is:
>>>
nice@fat01:~$ python
Python 2.7.13 |Continuum Analytics, Inc.| (default, Dec 20 2016, 23:09:15)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import theano
Using cuDNN version 5110 on context None
Mapped name None to device cuda1: GeForce GTX 1080 Ti (:02:00.0)
>>>
>>>
>>>  
>>> Then I ran the code from the exercise at 
>>> http://deeplearning.net/software/theano/tutorial/using_gpu.html#gpuarray
>>>
>>>
>>> 
>>>
>>> import numpy
>>> import theano
>>> import theano.tensor as T
>>> rng = numpy.random
>>> N = 400
>>> feats = 784
>>> D = (rng.randn(N, feats).astype(theano.config.floatX),
>>>      rng.randint(size=N, low=0, high=2).astype(theano.config.floatX))
>>> training_steps = 1
>>> # Declare Theano symbolic variables
>>> x = T.matrix("x")
>>> y = T.vector("y")
>>> w = theano.shared(rng.randn(feats).astype(theano.config.floatX), name="w")
>>> b = theano.shared(numpy.asarray(0., dtype=theano.config.floatX), name="b")
>>> x.tag.test_value = D[0]
>>> y.tag.test_value = D[1]
>>> # Construct Theano expression graph
>>> p_1 = 1 / (1 + T.exp(-T.dot(x, w) - b))  # Probability of having a one
>>> prediction = p_1 > 0.5  # The prediction that is done: 0 or 1
>>> xent = -y * T.log(p_1) - (1 - y) * T.log(1 - p_1)  # Cross-entropy
>>> cost = xent.mean() + 0.01 * (w ** 2).sum()  # The cost to optimize
>>> gw, gb = T.grad(cost, [w, b])
>>> # Compile expressions to functions
>>> train = theano.function(
>>>     inputs=[x, y],
>>>     outputs=[prediction, xent],
>>>     updates=[(w, w - 0.01 * gw), (b, b - 0.01 * gb)],
>>>     name="train")
>>>
>>>
>>> ==
>>>  
>>> It crashes at this line.
>>> I have run numpy.test() and scipy.test() and they pass, but when I 
>>> run theano.test(), it crashes too. The full log is too long, so I will 
>>> just post the end of it:
>>>
>>> /home/nice/miniconda2/lib/python2.7/site-packages/theano/compile/nanguardmode.py:168: RuntimeWarning: All-NaN axis encountered
>>>   return np.isinf(np.nanmax(arr)) or np.isinf(np.nanmin(arr))
>>> .E/home/nice/miniconda2/lib/python2.7/site-packages/theano/gof/vm.py:851: UserWarning: CVM does not support memory profile, using Stack VM.
>>>   'CVM does not support memory profile, using Stack VM.')
>>> ...SS.0.930614401665
>>> 0.930614401665
>>> 0.930614401665
>>> 0.930614401665
>>> ...
>>> ...E/home/nice/miniconda2/lib/python2.7/site-packages/theano/gof/vm.py:854: UserWarning: LoopGC does not support partial evaluation, using Stack VM.
>>>   'LoopGC does not support partial evaluation, '
>>> .Segmentation fault (core dumped)
>>>
>>>
>>>
>>> I hope someone can help me.  
>>>

[theano-users] Re: Python stops working when importing theano.

2017-07-03 Thread Pascal Lamblin
This usually happens when mixing libraries compiled with different 
compilers on Windows.
Do you have another g++ than the one installed by the conda package 
"m2w64-toolchain" that is mentioned in the installation instructions?
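
A quick way to check which g++ wins on PATH (a sketch; assumes Windows, where 
the m2w64-toolchain compiler normally lives under the environment's 
Library\mingw-w64\bin):

import subprocess
# Lists every g++ found on PATH, in resolution order; the conda-provided
# one should come first.
print(subprocess.check_output(['where', 'g++'], universal_newlines=True))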

On Thursday, June 1, 2017 at 4:17:40 AM UTC-4, Rohit Dulam wrote:
>
> When I try to import Theano, a window pops up saying python.exe has 
> stopped working. What is the problem? The contents of my .theanorc file 
> are given below.
>
> [global]
> floatX = float32
> device = cuda0
> mode=FAST_RUN
>
> [cuda]
> root = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin
>
>
> [nvcc]
> fastmath = True
>
> [dnn]
>
> include_path = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\include
> library_path = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\lib\x64
>
> [gpuarray]
>
> preallocate = 0.2
>
>



Re: [theano-users] theano0.9 and cuda-8

2017-07-03 Thread ngupta
Changing it to cuda* results in CPU usage, not GPU

On Friday, June 30, 2017 at 4:20:22 PM UTC-4, nouiz wrote:
>
> You should not mix CUDA versions...
>
> Do you still use the old gpu back-end (device=gpu*) or the new back-end 
> (device=cuda*)?
>
> Fred
>
> On Fri, Jun 30, 2017 at 9:57 AM  
> wrote:
>
>> I am trying to understand some unexplained behavior in my code.
>> To be sure that the problem is with my code and not with a software 
>> incompatibility, I would like to be sure about the correctness of my setup.
>> I have:
>> theano version 0.9
>>
>> CUDA_ROOT =/usr/local/cuda-7.5
>> LD_PATH=/usr/local/cuda-8/lib64:...
>> PATH=/usr/local/cuda-8/bin: ...
>>
>> Essentially I am using some parts of cuda-8 and some of cuda-7.5.
>>
>> With CUDA_ROOT =/usr/local/cuda-8, I cannot compile the theano functions.
>>
>> Thanks
>>
>>
>>
>>
>


Re: [theano-users] theano0.9 and cuda-8

2017-07-03 Thread ngupta
I changed it to cuda*. Thanks, everything is OK now.

On Friday, June 30, 2017 at 4:20:22 PM UTC-4, nouiz wrote:
>
> You should not mix CUDA versions...
>
> Do you still use the old gpu back-end (device=gpu*) or the new back-end 
> (device=cuda*)?
>
> Fred
>
> On Fri, Jun 30, 2017 at 9:57 AM  
> wrote:
>
>> I am trying to understand some unexplained behavior in my code.
>> To be sure that the problem is with my code and not with a software 
>> incompatibility, I would like to be sure about the correctness of my setup.
>> I have:
>> theano version 0.9
>>
>> CUDA_ROOT =/usr/local/cuda-7.5
>> LD_PATH=/usr/local/cuda-8/lib64:...
>> PATH=/usr/local/cuda-8/bin: ...
>>
>> Essentially I am using some parts of cuda-8 and some of cuda-7.5.
>>
>> With CUDA_ROOT =/usr/local/cuda-8, I cannot compile the theano functions.
>>
>> Thanks
>>
>>
>>
>>
>