Ashwin Srinath ashwinsr...@gmail.com writes:
I'm not sure - but this may have something to do with the implementation of
`fill`. Because on the flip side, changes to the PETSc Vec *are* reflected
in the GPUArray. So I can see that they are actually sharing device memory.
As far as I know, PETSc
Thank you, Andreas. The documentation does mention that PETSc internally
keeps track of which copy of a vector (host or device) was last updated. So
when I update the memory on the PyCUDA side, maybe PETSc doesn't know about
it. Thank you, I'll investigate further.
On Thu, Oct 16, 2014 at 1:28 PM, Andreas wrote:
Yup. That was it. Manually updating this flag after PyCUDA modified the
buffer fixed it. Thank you!
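For anyone hitting the same issue: the fix amounts to flipping PETSc's
internal "which copy is current" flag from the C side after a device write.
A rough usage sketch from Python - the helper name mark_gpu_modified is
illustrative, not a real petsc4py or PyCUDA API:

    # Hypothetical helper exposed by the C shim: it marks the Vec's
    # device copy as the authoritative one, so PETSc copies
    # device -> host before the next host-side access instead of
    # serving stale host data.
    from petsc_pycuda import mark_gpu_modified  # illustrative name

    V_gpu.fill(2.0)          # modify the shared buffer through PyCUDA
    mark_gpu_modified(V)     # tell PETSc the device copy changed
    print(V.getArray()[:3])  # now reflects the PyCUDA write: [2. 2. 2.]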
And if anyone is interested, here is a petsc4py extension that lets you
access PETSc vectors as PyCUDA GPUArrays:
https://github.com/ashwinsrnth/petsc-pycuda
On Thu, Oct 16, 2014 at 1:37 PM, Ashwin Srinath ashwinsr...@gmail.com wrote:
Hello, PyCUDA users!
I'm trying to construct a GPUArray from device memory allocated using
petsc4py. I've written some C code that extracts a raw pointer from a PETSc
CUSP vector. Now, I am hoping to 'place' this memory into a GPUArray,
using
Thank you for your reply, Andreas, and for your work!
Keeping in mind that `array` is of type double*, would this be the right
way to do it? I realize this is more a Cython question:
    cdef extern from "stdint.h":
        ctypedef unsigned long long uint64_t

    V_gpu = gpuarray.empty(V.getSizes(),
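Spelled out in plain Python rather than Cython, what I'm attempting looks
roughly like the sketch below. Here get_vec_pointer stands in for my C shim
that returns the double* as a Python int; PointerHolderBase is PyCUDA's hook
for wrapping device memory it does not own, and a CUDA context is assumed to
be active already:

    import numpy as np
    import pycuda.driver as drv
    import pycuda.gpuarray as gpuarray

    class VecPointer(drv.PointerHolderBase):
        """Holds a PETSc-owned device pointer; PyCUDA will not free it."""
        def __init__(self, ptr):
            super().__init__()
            self.ptr = ptr

        def get_pointer(self):
            return self.ptr

    ptr = get_vec_pointer(V)  # hypothetical C-shim call returning the double*
    n = V.getLocalSize()      # local length of the PETSc Vec
    V_gpu = gpuarray.GPUArray((n,), np.float64, gpudata=VecPointer(ptr))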
I'd like to report partial success! Following your hint worked, and I'm now
able to construct a GPUArray from a PETSc vector in Python! However, as you
pointed out, I should be concerned about who owns what. When I make changes
to the GPUArray, they are not reflected in the CUSP vector.
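Concretely, the asymmetry looks like this (V_gpu is the GPUArray wrapping
V's device buffer, as above; the printed values are illustrative):

    V.set(1.0)               # write through PETSc
    print(V_gpu.get()[:3])   # [1. 1. 1.] -- the PETSc write shows up in PyCUDA
    V_gpu.fill(2.0)          # write through PyCUDA
    print(V.getArray()[:3])  # still [1. 1. 1.] -- the PyCUDA write does not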