I'm trying to turn a 3D numpy array of float32 data into a texture
that I can read via tex3D inside kernel code. I have this in the
kernel:
texture<float, cudaTextureType3D, cudaReadModeElementType> my_tex;
And I have tried both:
my_texref = cuda_module.get_texref("my_tex")
my_gpu = pycuda.gpuarray.GPUArray(my_array.shape, numpy.float32)
my_gpu.set(numpy.array(my_array, order='F'))
my_gpu.bind_to_texref_ext(my_texref)
and:
my_texref = cuda_module.get_texref("my_tex")
my_gpu = pycuda.gpuarray.to_gpu(my_array)
my_gpu.bind_to_texref_ext(my_texref)
However, all I get from tex3D are zeros. I suspect my problem is
conflating "GPUArray" with "Array" - the former being a numpy-like
array class that does its computation on the GPU, while the latter
wraps the opaque CUDA array format that textures read from in
kernels. Is that correct?
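For reference, here is a numpy-only illustration of the ordering issue I was trying to account for with order='F' above (the shape is just a stand-in for my data): an F-ordered copy holds the same logical values but a different raw byte sequence in memory, and the raw bytes are what actually get transferred to the device.

```python
import numpy as np

# Stand-in volume; my real data is float32 as well.
a = np.arange(24, dtype=np.float32).reshape(2, 3, 4)  # C-ordered
f = np.array(a, order='F')                            # Fortran-ordered copy

assert np.array_equal(a, f)    # identical logical contents
assert a.strides != f.strides  # but different memory strides
# The raw bytes actually sitting in memory differ too:
assert a.tobytes(order='A') != f.tobytes(order='A')
```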
I don't see any API support for creating Array instances from
arbitrary (read: 3D) numpy arrays. Having poked around the source, I
see that the function matrix_to_array contains the following:
def matrix_to_array(matrix, order, allow_double_hack=False):
    if order.upper() == "C":
        h, w = matrix.shape
        stride = 0
    elif order.upper() == "F":
        w, h = matrix.shape
        stride = -1
    else:
        raise LogicError, "order must be either F or C"

    matrix = np.asarray(matrix, order=order)
    descr = ArrayDescriptor()

    descr.width = w
    descr.height = h
    if matrix.dtype == np.float64 and allow_double_hack:
        descr.format = array_format.SIGNED_INT32
        descr.num_channels = 2
    else:
        descr.format = dtype_to_array_format(matrix.dtype)
        descr.num_channels = 1

    ary = Array(descr)

    copy = Memcpy2D()
    copy.set_src_host(matrix)
    copy.set_dst_array(ary)
    copy.width_in_bytes = copy.src_pitch = copy.dst_pitch = \
            matrix.strides[stride]
    copy.height = h
    copy(aligned=True)

    return ary
Should I be looking to do something similar? I see there is a
Memcpy3D() in src/wrapper/wrap_cudadrv.cpp - should I be using that?
It seems like the ability to handle 3d numpy arrays should be present
in the pycuda API. I can produce a patch for a numpy3d_to_array
function if desired.
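To make the proposal concrete, here is an untested sketch of what I have in mind, mirroring matrix_to_array but using ArrayDescriptor3D and Memcpy3D (the function names and the deferred import are just my choices; I only handle the C-ordered, single-channel case and have not run this on a GPU yet):

```python
import numpy as np

def volume_extent(vol):
    """Unpack a C-ordered (depth, height, width) volume's extent and row pitch."""
    d, h, w = vol.shape
    return w, h, d, vol.strides[1]  # strides[1] = bytes per row = w * itemsize

def numpy3d_to_array(vol):
    """Sketch: copy a C-contiguous 3D numpy array into a CUDA Array for tex3D."""
    import pycuda.driver as drv  # deferred: only needed on a CUDA-capable host

    w, h, d, pitch = volume_extent(vol)

    descr = drv.ArrayDescriptor3D()
    descr.width = w
    descr.height = h
    descr.depth = d
    descr.format = drv.dtype_to_array_format(vol.dtype)
    descr.num_channels = 1
    descr.flags = 0

    ary = drv.Array(descr)

    copy = drv.Memcpy3D()
    copy.set_src_host(vol)
    copy.set_dst_array(ary)
    copy.width_in_bytes = copy.src_pitch = pitch
    copy.src_height = copy.height = h
    copy.depth = d
    copy()

    return ary
```

The idea is that the returned Array could then be handed to the texture reference via my_texref.set_array(...) instead of bind_to_texref_ext.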
Thanks for reading,
Eli
_______________________________________________
PyCUDA mailing list
[email protected]
http://lists.tiker.net/listinfo/pycuda