That's the idea, but the problem I am having is getting the pointer into the correct DeviceAllocation type.
What is the type of x.gpudata in the Theano example you show? The C++ function claims to expect a ctypes.c_ulonglong, but I think the Python wrapper needs a DeviceAllocation object, which I do not know how to make from the long-long address provided by the DeviceNDArray. I have put a sketch of what I am currently trying below the quoted thread.

Michael.

On Jul 17, 2013, at 4:31 PM, Frédéric Bastien <[email protected]> wrote:

> Hi,
>
> In Theano, we have a utility function that creates a PyCUDA array from a
> Theano CudaNdarray. It is the first function in this file:
>
> https://github.com/Theano/Theano/blob/master/theano/misc/pycuda_utils.py
>
> I think you can reuse the same logic for the NumbaPro GPU object.
>
> Fred
>
>
> On Wed, Jul 17, 2013 at 7:27 PM, Michael McNeil Forbes
> <[email protected]> wrote:
> Hi all,
>
> I would like to try to interface PyCUDA with NumbaPro (in particular, using
> NumbaPro for the FFT). Attempts like the following fail:
>
> import numpy as np
> import numbapro.cuda
> import pycuda.gpuarray
>
> A = np.random.random((2, 2, 2))
> cu_A = numbapro.cuda.to_device(A)
> pycu_A = pycuda.gpuarray.GPUArray(
>     shape=cu_A.shape, dtype=cu_A.dtype, gpudata=cu_A.gpu_data,
>     strides=cu_A.strides)
> pycu_A.get()
>
> ---------------------------------------------------------------------------
> ArgumentError                             Traceback (most recent call last)
> <ipython-input-6-242ab62d4ae4> in <module>()
> ----> 1 pycu_A.get()
>
> /data/apps/anaconda/1.3.1/lib/python2.7/site-packages/pycuda/gpuarray.pyc in
> get(self, ary, pagelocked)
>     250
>     251         if self.size:
> --> 252             drv.memcpy_dtoh(ary, self.gpudata)
>     253         return ary
>     254
>
> ArgumentError: Python argument types in
>     pycuda._driver.memcpy_dtoh(numpy.ndarray, DeviceMemory)
> did not match C++ signature:
>     memcpy_dtoh(pycudaboost::python::api::object dest, unsigned long long src)
> -----------------------
>
> I am guessing that the issue is the gpudata object, which probably needs to
> be of type pycuda.driver.DeviceAllocation but which cannot be allocated in
> Python. Is there some way of creating a pycuda.driver.DeviceAllocation proxy
> that actually points to the NumbaPro array data (which has the following
> attributes)?
>
> cu_A.gpu_data.bytesize
> cu_A.gpu_data.device_ctypes_pointer
> cu_A.gpu_data.device
> cu_A.gpu_data.driver
>
> Thanks,
> Michael.
> _______________________________________________
> PyCUDA mailing list
> [email protected]
> http://lists.tiker.net/listinfo/pycuda
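
For concreteness, here is the kind of thing I am trying instead of passing the DeviceMemory object directly. It is an untested sketch: it assumes that GPUArray will accept a plain integer device address for gpudata (which seems to be what the Theano helper passes as x.gpudata), and that cu_A.gpu_data.device_ctypes_pointer.value gives that integer address:

import numpy as np
import numbapro.cuda
import pycuda.gpuarray

A = np.random.random((2, 2, 2))
cu_A = numbapro.cuda.to_device(A)

# Assumption: device_ctypes_pointer is a ctypes pointer, so .value should be
# the raw device address as a Python int.
ptr = cu_A.gpu_data.device_ctypes_pointer.value

# Pass the integer address as gpudata instead of the DeviceMemory object;
# base=cu_A is meant to keep the NumbaPro allocation alive for as long as the
# GPUArray is in use (the Theano helper passes base=x the same way).
pycu_A = pycuda.gpuarray.GPUArray(
    shape=cu_A.shape, dtype=cu_A.dtype,
    gpudata=ptr, base=cu_A, strides=cu_A.strides)

pycu_A.get()

I have not verified that this works, and I am not sure the two libraries end up sharing the same CUDA context, so my question about a DeviceAllocation proxy still stands.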
