On Mon, Apr 9, 2018 at 6:12 PM, Manuel Valera <mvaler...@sdsu.edu> wrote:

> Hello guys,
> I've made advances in my CUDA acceleration project; as you remember, I have
> a CFD model in need of better execution times.
> So far I have been able to solve the pressure system on the GPU and the
> rest in serial, using PETSc only for this pressure solve; the library I got
> to work was ViennaCL. First question: do I still have to switch
> installations to use either CUDA library? This was a suggestion before, so
> in order to use CUSP instead of ViennaCL, for example, I currently have to
> change installations. Is this still the case?

I am not sure exactly what you mean. However, you can build a single PETSc
installation with both CUDA and ViennaCL support; the type of Vec/Mat is
selected at runtime.
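For example (a sketch, assuming a PETSc build configured with both CUDA and ViennaCL support; `./myapp` is a placeholder for your executable), the backend can be chosen per run without recompiling:

```shell
# Same binary, ViennaCL vectors/matrices for the DMDA:
./myapp -dm_vec_type viennacl -dm_mat_type aijviennacl

# Same binary, CUDA vectors and cuSPARSE matrices instead:
./myapp -dm_vec_type cuda -dm_mat_type aijcusparse
```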

> Now, I have started working on a fully parallelized version of the model,
> which uses DMs and DMDAs to distribute the arrays. If I try the same flags
> as before, I get an error saying "Currently only handles ViennaCL matrices"
> when trying to solve for pressure. I take it this is a feature that is not
> yet implemented? What options do I have to solve for pressure, or to assign
> a DMDA array update to be done specifically on a GPU device?

If we can't see the error, we are just guessing. Please send the entire
error message.

Note that we only do linear algebra on the GPU, so none of the
FormFunction/FormJacobian work for DMDA would run on the GPU.

> I was thinking of using VecScatterCreateToZero for a regular vector,

Why do you want a serial vector?

> but then I would have to create a vector and copy the DMDAVec into it,

I do not understand what it means to copy the DM into the Vec.



> Is this accomplished with DMDAVecGetArrayReadF90 and then just copying? Do
> you think this will generate too much overhead?
> Thanks so much for your input,
> Manuel
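If a rank-0 copy really is what you need, the usual VecScatterCreateToZero pattern looks roughly like the sketch below (error checking omitted; `p` stands for your existing DMDA global vector, and this requires a PETSc installation to compile):

```c
/* Sketch: gather a distributed DMDA global vector onto rank 0.
   `p` is assumed to be the global Vec obtained from your DMDA. */
#include <petscvec.h>

PetscErrorCode GatherPressureToZero(Vec p)
{
  VecScatter scatter;
  Vec        pseq;   /* sequential vector living on rank 0 */

  /* Creates both the scatter context and the rank-0 vector */
  VecScatterCreateToZero(p, &scatter, &pseq);
  VecScatterBegin(scatter, p, pseq, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(scatter, p, pseq, INSERT_VALUES, SCATTER_FORWARD);

  /* ... use pseq on rank 0 ... */

  VecScatterDestroy(&scatter);
  VecDestroy(&pseq);
  return 0;
}
```

Note that gathering everything to one process serializes that part of the computation, which is why the question of whether you actually need a serial vector matters.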

What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
