Hi,

I'm trying to create different GPU arrays on different GPUs.

```
import numpy as np  # needed for np.float32 below

import pycuda
import pycuda.driver as cuda
from pycuda.compiler import SourceModule
import pycuda.curandom as curandom

d = 2 ** 15

cuda.init()
dev1 = cuda.Device(1)
ctx1 = dev1.make_context()

curng1 = curandom.XORWOWRandomNumberGenerator()

x1 = curng1.gen_normal((d, d), dtype=np.float32)  # so x1 is stored in GPU 1 memory

ctx1.pop()  # deactivate GPU 1's context (pop it off the context stack)

dev2 = cuda.Device(2)
ctx2 = dev2.make_context()

curng2 = curandom.XORWOWRandomNumberGenerator()

x2 = curng2.gen_normal((d, d), dtype=np.float32)  # so x2 is stored in GPU 2

```

With the setup above, I tried to verify that after popping ctx2 and pushing ctx1 I can access x1 but not x2, and, vice versa, that after popping ctx1 and pushing ctx2 I can access x2 but not x1. However, I find that I can access both x1 and x2 in both contexts.

Thus I'm wondering whether my assumption that x1 is stored on GPU 1 and x2 on GPU 2 is correct, or whether it is actually UVA and peer access that allow me to reach both x1 and x2 even when only one of the two contexts is current.

Thanks,
Zhangsheng
_______________________________________________
PyCUDA mailing list
PyCUDA@tiker.net
https://lists.tiker.net/listinfo/pycuda
