Neither of these is needed to answer my question, but for context, I neglected to note that I'm basing this off of:

http://stackoverflow.com/questions/5904872/python-multiprocessing-with-pycuda

which was linked from my question here:

http://stackoverflow.com/questions/9612134/cuda-contexts-streams-and-events-on-multiple-gpus

Obviously, I'm kind of wandering all over the place trying to figure out what I should be doing. :/

Cheers,
Eli

On Thu, Mar 8, 2012 at 9:33 AM, Eli Stevens (Gmail) <[email protected]> wrote:
> Hello,
>
> I was wondering if the following will work:
>
> - Main thread spins up thread B.
> - Thread B creates a context, invokes a kernel, and creates an event.
> - The event is saved.
> - Thread B pops the context (the kernel is still running at this point)
>   and finishes.
> - Main thread join()s B and grabs the event.
> - Main thread does other stuff and eventually calls .synchronize().
>
> Does that work? Or will trying to use an event after popping the
> associated context (and from a different thread) cause problems? My
> actual use case involves a thread C that's doing other things on a
> second GPU. Maybe instead of an event, I should just have the threads
> block and then use join() to indicate when the kernel is done? Any
> advice appreciated. :)
>
> Thanks,
> Eli

_______________________________________________
PyCUDA mailing list
[email protected]
http://lists.tiker.net/listinfo/pycuda
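To make the join-based alternative from the quoted message concrete, here is a minimal sketch of that pattern in plain Python threading. The pycuda calls are shown only as comments (this sketch assumes no GPU is available), and the per-device "kernel result" is a hypothetical stand-in; the point is just that join() returning in the main thread implies each worker's work has completed, so no event needs to outlive its context.

```python
import threading

# One worker per GPU; each would own its own context for the lifetime
# of the thread, so no context or event ever crosses a thread boundary.
results = {}

def worker(device_id):
    # Real code (sketch): ctx = pycuda.driver.Device(device_id).make_context()
    # ...launch the kernel, then block until it finishes with
    # ctx.synchronize() before popping...
    results[device_id] = device_id * 2  # stand-in for the kernel's output
    # Real code (sketch): ctx.pop()

threads = [threading.Thread(target=worker, args=(d,)) for d in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # join() returning means that thread's "kernel" is done

print(sorted(results.items()))
```

Because each thread synchronizes before it exits, the main thread never touches a CUDA object from a foreign context; it only consumes ordinary Python results after join().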
