Hi Freddie,

Freddie Witherden <[email protected]> writes:
> Consider a simple application using PyCUDA + the source module
> functionality.  An example of this can be seen under 'Executing a
> Kernel' in the tutorial:
>
>   http://documen.tician.de/pycuda/tutorial.html
>
> Assuming the file is saved as cuda.py I am wondering if the following
> is safe:
>
>   python cuda.py &
>   python cuda.py &
>
> so launching two instances of the program.  I ask as while this does
> appear -- initially -- to result in two nvcc invocations I can only
> see one folder in /tmp:
>
>   pycuda-compiler-cache-v1-uid1000
>
> Hence, if both processes are sharing the same temporary folder, what
> precautions is PyCUDA taking to ensure that nothing goes awry?

PyCUDA creates a temporary folder for each compiler invocation in a
process-safe manner. Only the resulting binary is stored in the shared
cache directory you found. The structure of this cache for PyCUDA is
really simple--a hash of the file being compiled, with ".cubin"
appended. Theoretically, two processes could race each other in writing
the cache file, or one process could observe a half-finished write by
another. Despite heavy use in MPI settings that would trigger exactly
these conflicts, I haven't yet run into an issue.
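To make the scheme concrete, here is a minimal sketch of a hash-keyed compiler cache of the kind described above. The function names and the choice of hash are illustrative assumptions, not PyCUDA's actual internals; the point is the unlocked write at the end, which is where the theoretical race lives.

```python
import hashlib
import os

def cache_key(source: str) -> str:
    # Cache file name: a hash of the source being compiled, with
    # ".cubin" appended. (Illustrative -- PyCUDA's exact hashing
    # details may differ.)
    return hashlib.md5(source.encode()).hexdigest() + ".cubin"

def lookup_or_compile(source, cache_dir, compile_fn):
    """Return the cached binary for `source`, compiling on a miss."""
    path = os.path.join(cache_dir, cache_key(source))
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()
    binary = compile_fn(source)
    # Unlocked write: two processes compiling the same source could
    # race here, and a concurrent reader could in principle observe
    # a partially written file.
    with open(path, "wb") as f:
        f.write(binary)
    return binary
```

Because the file name is a pure function of the source, a racing write at worst replaces the file with identical bytes; the only real hazard is a reader catching the file mid-write.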

PyOpenCL has a better caching infrastructure that does proper locking
and takes into account include files. Backporting this to PyCUDA (or,
preferably, making it shared infrastructure) has been on my to-do list,
but I haven't yet found the time--patches are certainly welcome.

HTH,
Andreas

_______________________________________________
PyCUDA mailing list
[email protected]
http://lists.tiker.net/listinfo/pycuda