I'm not very familiar with CUDA, so may I ask whether you have any guess
as to what is causing my on-device segfault?
I suspect that saving the ctx in the GPU thread class, and pushing and
popping it before running my code earlier, may have caused it.
If so, is there any way I can avoid that?
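To make sure I understand your earlier point about handles dying with the
context, here is a pure-Python analogy (not actual PyCUDA code; Context,
FunctionHandle, and die() are made-up names): a handle such as a compiled
function belongs to the context it was created in, and once that context is
destroyed, calling through the handle fails with an "invalid resource handle"
error like the one in my traceback.

```python
# Pure-Python analogy, not PyCUDA API: illustrates why a prepared
# function raises "invalid resource handle" after its context dies.

class Context:
    """Stands in for a CUDA context."""
    def __init__(self):
        self.alive = True

    def die(self):
        # Stands in for an on-device segfault killing the context.
        self.alive = False


class FunctionHandle:
    """Stands in for a compiled kernel; valid only while its context lives."""
    def __init__(self, ctx):
        self.ctx = ctx  # the handle is tied to the context that created it

    def __call__(self):
        if not self.ctx.alive:
            raise RuntimeError("invalid resource handle")
        return "ok"


ctx = Context()
func = FunctionHandle(ctx)
print(func())        # works while the context is alive
ctx.die()
try:
    func()
except RuntimeError as e:
    print(e)         # the handle is now invalid
```

Is that roughly the right mental model for what happens to my prepared
kernel after the segfault?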

Many thanks,
Zhangsheng

On 12 May 2018 at 12:34, Andreas Kloeckner <[email protected]> wrote:

> Zhangsheng Lai <[email protected]> writes:
>
> > Hi,
> >
> > I'm trying to do some updates to a state stored as a binary array.
> > gputid is a GPU thread class
> > (https://wiki.tiker.net/PyCuda/Examples/MultipleThreads) and it stores
> > the state and the index of the array entry to be updated in another
> > class, accessible as gputid.mp.x_gpu and gputid.mp.neuron_gpu
> > respectively. Below is my kernel, which takes in the gputid and
> > performs the update of the state. However, the output of the code is
> > not consistent: across multiple runs it sometimes hits errors and
> > sometimes executes perfectly. The error msg makes no sense to me:
> >
> > File "/root/anaconda3/lib/python3.6/site-packages/pycuda/driver.py",
> > line 447, in function_prepared_call
> >     func._set_block_shape(*block)
> > pycuda._driver.LogicError: cuFuncSetBlockShape failed: invalid
> > resource handle
>
> I think the right way to interpret this is that if you cause an
> on-device segfault, the GPU context dies, and all the handles of objects
> contained in it (including the function) become invalid.
>
> HTH,
> Andreas
>
_______________________________________________
PyCUDA mailing list
[email protected]
https://lists.tiker.net/listinfo/pycuda
