Hello,

I have a question regarding how PyCUDA interacts with CUDA 4.x's
support for sharing contexts across threads.

Broadly speaking, I wish to create an analogue of CUDA streams that
also supports invoking arbitrary Python functions (as opposed to just
CUDA kernels and memcpy operations).

My idea is to associate a Python thread with each CUDA stream in my
application and to use a Queue (import Queue) to submit CUDA kernels,
persistent MPI requests, or plain Python functions to that thread,
with the core worker code being along the lines of:

from mpi4py import MPI

def queue_worker(q, comm, stream):
    while True:
        # Each queued entry is a (kind, payload) tuple
        kind, item = q.get()
        if kind == 'kernel':
            # A prepared PyCUDA kernel call; launch it on our stream
            item(stream=stream)
            stream.synchronize()
        elif kind == 'mpireq':
            # A list of persistent MPI requests (mpi4py)
            MPI.Prequest.Startall(item)
            MPI.Request.Waitall(item)
        else:
            # An arbitrary Python callable
            item()
        q.task_done()

Allowing one to do:
    from Queue import Queue
    from threading import Thread

    q1, q2 = Queue(), Queue()
    t1 = Thread(target=queue_worker, args=(q1, comm, a_stream1))
    t2 = Thread(target=queue_worker, args=(q2, comm, a_stream2))
    t1.start()
    t2.start()
    # Stick items into the queues for the threads to consume
    # (see the enqueueing sketch below)


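For completeness, a rough sketch of how items might then be pushed
onto a queue.  The ('kernel', ...), ('mpireq', ...) and ('call', ...)
tags and the names prepared_kernel, persistent_reqs and post_process
are just placeholders matching the (kind, item) convention that
queue_worker unpacks above:

    # Hypothetical work items; the names are placeholders
    q1.put(('kernel', prepared_kernel))   # launched on a_stream1 by its worker
    q1.put(('mpireq', persistent_reqs))   # list of persistent mpi4py requests
    q1.put(('call', post_process))        # any plain Python callable

    q1.join()    # block until the worker thread has drained q1
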
However, this is only meaningful if it is possible to share a PyCUDA
context between threads.  Could someone tell me whether this is
possible at all (at the CUDA driver level) and, if so, whether PyCUDA
supports it?
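
To make the question concrete, the per-thread pattern I would hope to
use is roughly the following.  This is only a sketch: it assumes a
context created in the main thread can be made current in other
threads, and worker stands in for queue_worker above.

import threading
import pycuda.driver as cuda

cuda.init()
ctx = cuda.Device(0).make_context()   # created (and current) in the main thread

def worker(stream):
    ctx.push()                        # make the shared context current here too
    try:
        pass                          # launch kernels on `stream`, synchronize, ...
    finally:
        cuda.Context.pop()            # detach the context from this thread

stream = cuda.Stream()                # created while ctx is current in the main thread
t = threading.Thread(target=worker, args=(stream,))
t.start()
t.join()
ctx.pop()                             # finally release the context in the main thread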

Regards, Freddie.