I'm not sure, but this may have something to do with the implementation of `fill`, because on the flip side, changes to the PETSc Vec *are* reflected in the GPUArray. So I can see that they are actually sharing device memory.
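One thing worth ruling out: if the demo script rebinds the Python name (e.g. `arr = arr * 2`) instead of writing in place (`arr.fill(...)` or `arr *= 2`), the update lands in a freshly allocated buffer and the Vec never sees it. A host-side NumPy sketch of that same aliasing distinction (purely an analogy, no GPU involved):

```python
import numpy as np

# 'vec' stands in for the PETSc Vec's buffer; 'arr' for the GPUArray
# constructed over the same memory (a view, not a copy).
vec = np.full(4, 2.0)
arr = vec            # shares the underlying buffer

arr *= 2             # in-place update: writes through to the shared buffer
print(vec)           # [4. 4. 4. 4.]

arr = arr + 2        # rebinds 'arr' to a NEW buffer; 'vec' is untouched
print(vec)           # still [4. 4. 4. 4.]
```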
Thanks,
Ashwin

On Tue, Oct 14, 2014 at 7:55 PM, Ashwin Srinath <[email protected]> wrote:

> I'd like to report partial success! Following your hint worked, and now
> I'm successfully able to construct a GPUArray from a PETSc vector in Python!
>
> However, as you pointed out, I should be concerned about who owns what.
> When I make changes to the GPUArray, they are not reflected in the CUSP
> vector! You can view my test script here:
>
> https://github.com/ashwinsrnth/petsc-pycuda/blob/master/run_demo.py
>
> The output I get:
>
> Original PETSc Vec:
> [ 2.  2.  2.  2.]
> GPUArray constructed from PETSc Vec:
> [ 2.  2.  2.  2.]
> Modified GPUArray:
> [ 4.  4.  4.  4.]
> PETSc Vec changes too:
> [ 2.  2.  2.  2.]
>
> Ideally, that last line should be [4., 4., 4., 4.].
>
> Any hints to make this work?
>
> Thank you!
> Ashwin
>
>
> On Tue, Oct 14, 2014 at 10:10 AM, Andreas Kloeckner <
> [email protected]> wrote:
>
>> Ashwin Srinath <[email protected]> writes:
>>
>> > Thank you for your reply, Andreas, and for your work!
>> >
>> > Keeping in mind that `array` is of type double*, would this be the right
>> > way to do it? I realize this is more a Cython question:
>> >
>> > cdef extern from "stdint.h":
>> >     ctypedef unsigned long long uint64_t
>> >
>> > V_gpu = gpuarray.empty(V.getSizes(), dtype=np.float64,
>> >                        gpudata=<uint64_t>array)
>>
>> Yes, I think so. Perhaps cast gpudata to a bare int if there's
>> trouble. Also there are concerns of who owns the data and how long it
>> lives, but if you don't need that, then you should be set.
>>
>> Andreas
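The `<uint64_t>array` cast in the quoted Cython snippet can be mimicked on the host with ctypes, which may help when debugging ownership without a GPU: wrap a bare integer address in an array without copying, and writes go through to the original buffer. This is only an analogy for `gpuarray.empty(..., gpudata=ptr)`; the buffer and names here are illustrative, not part of the actual script:

```python
import ctypes
import numpy as np

# Allocate a C buffer of 4 doubles (stand-in for the Vec's device array).
buf = (ctypes.c_double * 4)(2.0, 2.0, 2.0, 2.0)
addr = ctypes.addressof(buf)   # bare integer address, like <uint64_t>array

# Wrap the address in a NumPy array without copying.
wrapped = np.ctypeslib.as_array((ctypes.c_double * 4).from_address(addr))
wrapped[:] = 4.0               # in-place write through the pointer

print(list(buf))               # [4.0, 4.0, 4.0, 4.0] -- same memory
```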
_______________________________________________
PyCUDA mailing list
[email protected]
http://lists.tiker.net/listinfo/pycuda
