Hi Rui,

On Mon, Nov 5, 2012 at 8:51 AM, Rui Lopes <[email protected]> wrote:
> I've written a kernel to perform a custom dot operation that would work
> perfectly if there weren't an issue with the memory allocation. Maybe I am
> missing something in the mapping process?
> From what I understood, matrices are allocated column-wise. So in this case
> b[0] and b[1] (in the kernel) would be b[0][0] and b[1][0] respectively, but
> from the result it looks like the matrix is stored by rows. Is this an option?

Yes, by default numpy arrays (and therefore the GPU arrays created from
them) are stored by rows, i.e. in C (row-major) order, not column-wise.
See the "order" parameter in
http://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html --
passing order='F' gives you column-major (Fortran-style) storage instead.
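For example, here's a quick illustration with plain numpy (pycuda.gpuarray.to_gpu
is assumed here to copy the underlying buffer as-is, so a kernel would see the
same flat layout):

    import numpy as np

    b = np.array([[1, 2, 3],
                  [4, 5, 6]], dtype=np.float32)  # default: order='C' (row-major)

    # Flat buffer in memory order, as a kernel indexing b[i] would see it:
    print(b.ravel(order='K'))    # [1. 2. 3. 4. 5. 6.]  -> b[1] is b[0][1], not b[1][0]

    b_f = np.asfortranarray(b)   # same values, column-major (Fortran) storage
    print(b_f.ravel(order='K'))  # [1. 4. 2. 5. 3. 6.]  -> b[1] is b[1][0]

    # If your kernel expects column-major data, convert before copying, e.g.:
    #   import pycuda.gpuarray as gpuarray
    #   b_gpu = gpuarray.to_gpu(b_f)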

_______________________________________________
PyCUDA mailing list
[email protected]
http://lists.tiker.net/listinfo/pycuda
