Are the suggestions from this post earlier in the summer still valid? I do a lot of 2D (array, sub-array) calculations in R and am trying to get up to speed on the interchange between numpy indexing and PyCUDA processing. TIA, V.
<snip>
Date: Tue, 16 Jun 2009 16:18:42 -0400
From: Andreas Klöckner <[email protected]>
Subject: Re: [PyCUDA] Pointer arithmetic

On Tuesday, 16 June 2009, Andrew Wagner wrote:
> Suppose I have a column-major array stored in linear memory on the
> gpu, and want to run a kernel on one column.

The "right" way isn't quite supported yet, which would be to just write
a[:,i] and get the right view delivered. This is mostly because PyCUDA
doesn't know about strides just yet, and assumes that arrays are
contiguous chunks of memory. Of course, your particular case doesn't
violate that assumption, so you can feel free to hack just that case
into GPUArray.__getitem__. (It already deals with the 1D case.) (Or
feel free to hack stride treatment into PyCUDA--even a tiny step in
that direction would be pretty cool.)

Another way is to obtain a 1D view of the 2D array (which would require
you to implement a .flat attribute mimicking numpy, also simple by
copying what's happening in __getitem__).

The last way is to just grab the pointer from ary.gpuarray, increment
it by the right multiple of ary.dtype.itemsize and run with that.

--
Vince Fulco, CFA, CAIA
612.424.5477 (universal)
[email protected]
A posse ad esse non valet consequentia
"the possibility does not necessarily lead to materialization"
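For what it's worth, the pointer-arithmetic route (the last option quoted above) is easy to sketch. The example below is not from the original thread; it is a minimal sketch under a few assumptions: the data lives column-major in one contiguous device allocation, the `scale_column` kernel is a toy stand-in invented here for illustration, and the offset address is passed to the kernel as an `np.intp` argument. (On a GPUArray, current PyCUDA exposes the raw device pointer as `.gpudata`; the sketch allocates the buffer directly with `mem_alloc` so the offset computation stays explicit.)

import numpy as np
import pycuda.autoinit            # creates a context on the first available GPU
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

n_rows, n_cols = 6, 4

# Column-major host data, copied into one contiguous chunk of device memory.
a_host = np.asfortranarray(
    np.arange(n_rows * n_cols, dtype=np.float32).reshape(n_rows, n_cols))
a_gpu = cuda.mem_alloc(a_host.nbytes)
cuda.memcpy_htod(a_gpu, a_host)

# Toy kernel that operates on a single column, i.e. n_rows contiguous floats.
mod = SourceModule("""
__global__ void scale_column(float *col, int n_rows, float factor)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (i < n_rows)
        col[i] *= factor;
}
""")
scale_column = mod.get_function("scale_column")

# Pointer arithmetic: column j of a column-major (n_rows x n_cols) array
# starts j * n_rows elements into the linear buffer.
j = 2
col_ptr = int(a_gpu) + j * n_rows * a_host.dtype.itemsize

scale_column(np.intp(col_ptr), np.int32(n_rows), np.float32(10.0),
             block=(32, 1, 1), grid=(1, 1))

# Copy back and check that only column j was touched.
result = np.empty_like(a_host)
cuda.memcpy_dtoh(result, a_gpu)
print(result[:, j])

Offsetting into the buffer works here precisely because, as Andreas notes, one column of a column-major array is itself a contiguous run of n_rows elements; a row of the same array would not be contiguous, and handling it cleanly would need the stride support discussed above.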
