Hello everyone. I'd like to know:

is it possible (and how?) to allocate memory from PyCUDA and assign the
resulting device pointer to a "global" pointer in a CUDA module?

For instance, having inside a test.cu file:

__device__ float *my_array;

__global__ void somekernel() {
   int i = threadIdx.x;
   my_array[i] = 0.3f;
}

(with test.cu compiled into a cubin, test.cubin)

and from python calling:
import pycuda.autoinit
import pycuda.driver as drv
mod=drv.module_from_file('test.cubin')

# dynamically choose size
size = 4 * 1000 # 4 is sizeof float32
mem1 = drv.mem_alloc(size)
mem1_pointer = int(mem1)  # device address as a Python integer

# get_global returns a (device_ptr, size_in_bytes) tuple
my_array_pointer, nbytes = mod.get_global('my_array')

drv.memcpy_htod(my_array_pointer, mem1_pointer)
# this fails with TypeError: expected a readable buffer object

--
How do I write mem1_pointer into my_array?
Is there a simpler approach to allocating global memory from PyCUDA and
assigning it to a global pointer variable so it is usable from kernels
(without passing it as a kernel parameter)?

Thanks,
Ezequiel

_______________________________________________
PyCUDA mailing list
[email protected]
http://lists.tiker.net/listinfo/pycuda
