Jaroslaw Blusewicz <jaroslaw.blusew...@gmail.com> writes:
> I'm using numpy-sharedmem <https://bitbucket.org/cleemesser/numpy-sharedmem>
> to allocate shared memory array across multiple cpu processes. However,
> after page locking it with register_host_memory, the shared memory is never
> cleared at exit. Below is a minimal example of this behavior on Ubuntu
> 16.04, Python 2.7.12, and PyCUDA 2016.1.2:
> import sharedmem
> import numpy as np
> from pycuda import autoinit
> import pycuda.driver as driver
> arr = sharedmem.zeros(10 ** 8, dtype=np.float32)
> arr = driver.register_host_memory(
>     arr, flags=driver.mem_host_register_flags.DEVICEMAP)
> At exit, this shared memory array is not cleared. Unregistering the
> pagelocked memory beforehand doesn't work either.
> Also, I noticed that the RegisteredHostMemory instance in arr.base, which
> according to the documentation
> <https://documen.tician.de/pycuda/driver.html#pycuda.driver.RegisteredHostMemory>
> should have a base attribute containing the original array, doesn't
> actually have it.
> Is there a manual way of clearing this shared memory in pycuda that I'm
> missing?

I'm honestly not sure that pagelocked and SysV shared memory have a
defined interaction, i.e. I don't even know what's supposed to
happen. And at any rate, for what you're doing, you're just getting the
behavior of the CUDA API--I'm not sure PyCUDA could help or hurt in your
situation.

tl;dr: Ask someone at Nvidia whether this is supposed to work, and if it
is, and if PyCUDA breaks it, I'm happy to try and help fix it.
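As for a "manual way of clearing": an untested sketch of explicit cleanup,
assuming PyCUDA's documented RegisteredHostMemory.unregister() method and
that the sharedmem segment is released once every Python reference to the
array is dropped (names follow the snippet above; requires a CUDA GPU):

```python
import sharedmem
import numpy as np
from pycuda import autoinit  # noqa: F401 -- initializes the CUDA context
import pycuda.driver as driver

# Keep the registered view under a *separate* name: the original snippet
# rebinds `arr`, losing the only direct reference to the sharedmem array.
plain = sharedmem.zeros(10 ** 8, dtype=np.float32)
pinned = driver.register_host_memory(
    plain, flags=driver.mem_host_register_flags.DEVICEMAP)

# ... use `pinned` with the GPU ...

# Explicit cleanup: unregister the page-locked mapping first, then drop
# every reference so the shared-memory segment can be reclaimed.
pinned.base.unregister()  # RegisteredHostMemory.unregister()
del pinned
del plain
```

Whether unregistering actually restores normal cleanup of the shared
segment is exactly the open question above; if a segment does leak and it
is SysV, `ipcs -m` will list it and `ipcrm` can remove it by hand.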


PyCUDA mailing list
