Specifically, it gives a CUDA error:
Traceback (most recent call last):
  File "/share/apps/python-3.6.0-shared/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/share/apps/python-3.6.0-shared/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "test.py", line 23, in transfer
    queue.put(pygpu.gpuarray.asarray(value, context=ctx), block=True)
  File "pygpu/gpuarray.pyx", line 781, in pygpu.gpuarray.asarray (pygpu/gpuarray.c:10723)
  File "pygpu/gpuarray.pyx", line 953, in pygpu.gpuarray.array (pygpu/gpuarray.c:12229)
  File "pygpu/gpuarray.pyx", line 1008, in pygpu.gpuarray.carray (pygpu/gpuarray.c:13111)
  File "pygpu/gpuarray.pyx", line 686, in pygpu.gpuarray.pygpu_fromhostdata (pygpu/gpuarray.c:9853)
  File "pygpu/gpuarray.pyx", line 311, in pygpu.gpuarray.array_copy_from_host (pygpu/gpuarray.c:5819)
pygpu.gpuarray.GpuArrayException: b'cuMemAlloc: CUDA_ERROR_INVALID_CONTEXT: invalid device context'
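
CUDA driver contexts are bound to the thread on which they are made current, so a context created on the main thread is not automatically current on a newly spawned thread; that looks consistent with cuMemAlloc failing with CUDA_ERROR_INVALID_CONTEXT inside the worker. A hedged workaround sketch, assuming the consumer iterates on the thread that owns the context: keep the host -> GPU transfer on that thread and use the worker only for host-side preparation. fill_in_threaded is a hypothetical variant of the fill_in below, not code from this thread:

import numpy as np
import pygpu
from queue import Queue
from threading import Thread


def fill_in_threaded(value, ctx):
    # Hypothetical sketch: the worker does numpy-only work; all CUDA calls
    # stay on the thread that owns `ctx`.
    queue = Queue(maxsize=3)

    def produce(v):
        # Worker thread: prepare host arrays, no GPU work here.
        while True:
            queue.put(np.array(v, copy=True), block=True)
            v += 1

    thread = Thread(target=produce, args=(np.array(value, copy=True),))
    thread.daemon = True
    thread.start()

    def gen():
        # Consumer (context-owning) thread: do the transfer to the GPU here.
        while True:
            yield pygpu.gpuarray.asarray(queue.get(), context=ctx)

    return gen()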




On Wednesday, 10 May 2017 18:05:46 UTC+1, Alexander Botev wrote:
>
> Thanks a lot. I was actually hoping that pygpu would work with 
> multi-threading, but it doesn't: the commented-out version below works, 
> while the threaded one does not:
>
> import numpy as np
> import theano
> import theano.tensor as T
> import pygpu
> import time
> from threading import Thread
> from queue import Queue
> from theano.gpuarray.basic_ops import infer_context_name
>
>
> def fill_in(value, ctx=None):
>     copied_value = np.array(value, copy=True)
>     if ctx is None:
>         def gen(value):
>             while True:
>                 yield value
>                 value += 1
>     else:
>         queue = Queue(maxsize=3)
>
>         def transfer(value):
>             while True:
>                 queue.put(pygpu.gpuarray.asarray(value, context=ctx), block=True)
>                 value += 1
>         thread = Thread(target=transfer, args=(copied_value, ))
>         thread.daemon = True
>         thread.start()
>
>         def gen(_):
>             while True:
>                 yield queue.get()
>     return gen(copied_value)
>
>
> #def fill_in(value, ctx=None):
> #    copied_value = np.array(value, copy=True)
> #    if ctx is None:
> #        def gen(value):
> #            while True:
> #                yield value
> #                value += 1
> #    else:
> #        def gen(value):
> #            while True:
> #                yield pygpu.gpuarray.asarray(value, context=ctx)
> #                value += 1
> #    return gen(copied_value)
>
>
> def main(M=10000, N=10000, iter=100):
>     f1, f2, ctx = make_function(M, N)
>     np_in = np.random.randn(M, N).astype(theano.config.floatX)
>     generator = fill_in(np_in, ctx=ctx)
>     next(generator)
>     s1 = time.time()
>     for i, x in zip(range(iter), generator):
>         o1 = f2(x)
>         print(np_in[0, 0], o1[0, 0], type(x))
>     print("end 1 -", time.time() - s1)
>     generator = fill_in(np_in)
>     s2 = time.time()
>     for i, x in zip(range(iter), generator):
>         o2 = f1(x)
>         print(np_in[0, 0], o2[0, 0], type(x))
>     print("end 2 -", time.time() - s2)
>
>
> def make_function(M, N):
>     a = T.fmatrix()
>     b = a + T.constant(2)
>     f1 = theano.function([a], b)
>     theano.printing.debugprint(f1)
> gpu_fmatrix = theano.gpuarray.GpuArrayType(dtype=a.dtype, broadcastable=a.broadcastable)
>     a_gpu = gpu_fmatrix()
>     f2 = theano.function([a_gpu], b, givens=((a, a_gpu), ))
>     theano.printing.debugprint(f2)
>     ctx_name = infer_context_name(a_gpu)
>     ctx = theano.gpuarray.type.get_context(ctx_name)
>     print(ctx_name, ctx)
>     return f1, f2, ctx
>
> if __name__ == '__main__':
>     main()
>
>
>
> On Wednesday, 10 May 2017 17:17:34 UTC+1, Adam Becker wrote:
>>
>> line 5 is supposed to be:
>>
>> a_gpu = gpu_fmatrix()
>>
>> On Thursday, May 11, 2017 at 12:09:14 AM UTC+8, Alexander Botev wrote:
>>>
>>> host_from_gpu does not like it:
>>>
>>> AttributeError: 'GpuArrayType' object has no attribute 'type'
>>>
>>>
>>> On Wednesday, 10 May 2017 16:43:55 UTC+1, Adam Becker wrote:
>>>>
>>>> Correct typo: T.fmatrix -> gpu_fmatrix on line 5. I'm in need of more 
>>>> coffee ...
>>>>
>>>> On Wednesday, May 10, 2017 at 11:13:48 PM UTC+8, Alexander Botev wrote:
>>>>>
>>>>> Again with Theano "0.9.0.dev-ca213aa43e78ab3fb074a7c679907ea4d5412ed1" 
>>>>> I get:
>>>>>
>>>>> Traceback (most recent call last):
>>>>>>   File "tt.py", line 9, in <module>
>>>>>>     a = theano.gpuarray.host_from_gpu(a_gpu)
>>>>>>   File "/share/apps/barber/system/lib/python3.6/site-packages/theano/gof/op.py", line 615, in __call__
>>>>>>     node = self.make_node(*inputs, **kwargs)
>>>>>>   File "/share/apps/barber/system/lib/python3.6/site-packages/theano/gpuarray/basic_ops.py", line 549, in make_node
>>>>>>     raise TypeError(x)
>>>>>> TypeError: <TensorType(float32, matrix)>
>>>>>
>>>>>
>>>>> On Wednesday, 10 May 2017 04:11:46 UTC+1, Adam Becker wrote:
>>>>>>
>>>>>> Hmm ... my bad. I thought givens would work.
>>>>>>
>>>>>> Anyway, this trick would work:
>>>>>>
>>>>>> import theano
>>>>>> from theano.gpuarray.basic_ops import infer_context_name
>>>>>>
>>>>>> gpu_fmatrix = theano.gpuarray.GpuArrayType(dtype='float32', broadcastable=(False, False))
>>>>>> a_gpu = T.fmatrix()
>>>>>>
>>>>>> # insert transfer
>>>>>> a = theano.gpuarray.host_from_gpu(a_gpu)
>>>>>> # define graph as usual
>>>>>> b = a + 2.
>>>>>>
>>>>>> # compiles function, but takes GpuArray as input
>>>>>> fn = theano.function([a_gpu], b)
>>>>>> theano.printing.debugprint(fn)
>>>>>>
>>>>>> # compiles function that takes GpuArray as input/output
>>>>>> ctx_name = infer_context_name(a_gpu)
>>>>>> b_gpu = theano.gpuarray.as_gpuarray_variable(b, ctx_name)
>>>>>> fn2 = theano.function([a_gpu], b_gpu)
>>>>>> theano.printing.debugprint(fn2)
>>>>>>
>>>>>>
>>>>>> Console output:
>>>>>>
>>>>>> HostFromGpu(gpuarray) [id A] ''   1
>>>>>>  |GpuElemwise{add,no_inplace} [id B] ''   0
>>>>>>    |GpuArrayConstant{[[ 2.]]} [id C]
>>>>>>    |<GpuArrayType<None>(float32, matrix)> [id D]
>>>>>>
>>>>>> GpuElemwise{add,no_inplace} [id A] ''   0
>>>>>>  |GpuArrayConstant{[[ 2.]]} [id B]
>>>>>>  |<GpuArrayType<None>(float32, matrix)> [id C]
>>>>>>
>>>>>> The above works because the optimizer can remove redundant GPU -> CPU 
>>>>>> -> GPU transfers. The downside is that this approach doesn't work with 
>>>>>> optimizer=None in the config.
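>>>>>>
>>>>>> For completeness, a hedged usage sketch for fn2 (get_context is the 
>>>>>> accessor used elsewhere in this thread; the array shape is illustrative):
>>>>>>
>>>>>> import numpy as np
>>>>>> import pygpu
>>>>>>
>>>>>> # pygpu context that Theano registered under ctx_name
>>>>>> ctx = theano.gpuarray.type.get_context(ctx_name)
>>>>>> x = np.random.randn(4, 4).astype('float32')
>>>>>> x_gpu = pygpu.gpuarray.asarray(x, context=ctx)  # one-time host -> GPU copy
>>>>>> out = fn2(x_gpu)  # input and output both stay on the GPU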
>>>>>>
>>>>>> On Wednesday, May 10, 2017 at 5:02:20 AM UTC+8, Alexander Botev wrote:
>>>>>>>
>>>>>>> That does not seem to work. So I have this:
>>>>>>>
>>>>>>> a = T.fmatrix()
>>>>>>> ctx = pygpu.init(theano.config.device)
>>>>>>> theano.gpuarray.reg_context("mine", ctx)
>>>>>>> a_gpu = theano.gpuarray.GpuArrayType(a.dtype, a.broadcastable, "mine")
>>>>>>> f2 = theano.function([a_gpu], a + T.constant(2), givens={a: a_gpu})
>>>>>>> return f1, f2
>>>>>>>
>>>>>>>
>>>>>>> However, Theano complains about:
>>>>>>>
>>>>>>> TypeError: Unknown parameter type: <class 
>>>>>>> 'theano.gpuarray.type.GpuArrayType'>
>>>>>>>
>>>>>>> If instead of [a_gpu] I use [a], it complains that the givens is 
>>>>>>> overwriting an input:
>>>>>>>
>>>>>>> RuntimeError: You are trying to replace variable '<TensorType(float32, matrix)>'
>>>>>>> through the `givens` parameter, but this variable is an input to your function.
>>>>>>> Replacing inputs is currently forbidden because it has no effect. One way to
>>>>>>> modify an input `x` to a function evaluating f(x) is to define a new input `y`
>>>>>>> and use `theano.function([y], f(x), givens={x: g(y)})`. Another solution
>>>>>>> consists in using `theano.clone`, e.g. like this: `theano.function([x],
>>>>>>> theano.clone(f(x), replace={x: g(x)}))`.
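>>>>>>>
>>>>>>> Following the pattern the error message suggests, the GPU variable has 
>>>>>>> to be the function input, with givens mapping the host variable onto 
>>>>>>> it; calling the type produces a variable of that type (passing the 
>>>>>>> GpuArrayType object itself is what triggers "Unknown parameter type"). 
>>>>>>> A minimal sketch, matching the make_function version near the top of 
>>>>>>> this thread:
>>>>>>>
>>>>>>> gpu_fmatrix = theano.gpuarray.GpuArrayType(dtype=a.dtype, broadcastable=a.broadcastable)
>>>>>>> a_gpu = gpu_fmatrix()  # a variable of the GPU type, not the type object
>>>>>>> f2 = theano.function([a_gpu], a + T.constant(2), givens=((a, a_gpu),))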
>>>>>>>
>>>>>>>
>>>>>>> On Tuesday, 9 May 2017 15:19:10 UTC+1, Adam Becker wrote:
>>>>>>>>
>>>>>>>> In the main graph, replace the input variables with type 
>>>>>>>> theano.gpuarray.GpuArrayType (this can be done using the givens 
>>>>>>>> parameter of theano.function). Then feed a pygpu.gpuarray.GpuArray 
>>>>>>>> object directly to the compiled function. pygpu.gpuarray.asarray can 
>>>>>>>> be used to move a numpy array to the GPU.
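>>>>>>>>
>>>>>>>> A minimal sketch of that last step (ctx stands for a pygpu context, 
>>>>>>>> e.g. one obtained from Theano, and fn for a function compiled with a 
>>>>>>>> GpuArrayType input; both are placeholders, not names from this thread):
>>>>>>>>
>>>>>>>> import numpy as np
>>>>>>>> import pygpu
>>>>>>>>
>>>>>>>> x = np.random.randn(4, 4).astype('float32')
>>>>>>>> x_gpu = pygpu.gpuarray.asarray(x, context=ctx)  # move numpy data to the GPU
>>>>>>>> out = fn(x_gpu)  # feed the GpuArray directly; no transfer Op on input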
>>>>>>>>
>>>>>>>> On Tuesday, May 9, 2017 at 5:01:42 PM UTC+8, Alexander Botev wrote:
>>>>>>>>>
>>>>>>>>> Actually, one thing I've just realized is that to do this 
>>>>>>>>> consistently I need access to the underlying Theano pygpu Context. 
>>>>>>>>> Is there any way to get that?
>>>>>>>>>
>>>>>>>>> On Tuesday, 9 May 2017 09:53:02 UTC+1, Alexander Botev wrote:
>>>>>>>>>>
>>>>>>>>>> So recently I was wondering: after compiling a Theano function, is 
>>>>>>>>>> there any way for it to accept as input something like a 
>>>>>>>>>> libgpuarray array, or anything else that lives on the GPU, rather 
>>>>>>>>>> than numpy arrays / native lists / native numbers? I know that when 
>>>>>>>>>> a graph is compiled for the GPU there is usually a transfer Op for 
>>>>>>>>>> the inputs. Is there a way to avoid that transfer?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
