On Fri, Feb 4, 2022 at 11:09 AM Sajid Ali Syed <[email protected]> wrote:
> Hi PETSc-developers,
>
> Could the linear solver table (at
> https://petsc.org/main/overview/linear_solve_table/) be updated with
> information regarding direct solvers that work on mpiaijkokkos/kokkos (or
> mpiaijcusparse/cuda) matrix/vector types?
>
> The use case for this solver would be to repeatedly solve systems with the
> same matrix, so any solver that can perform the SpTRSV phase entirely with
> GPU matrices/vectors would be helpful (even if the initial factorization is
> performed with CPU matrices/vectors and GPU offload). This functionality
> would be the distributed-memory counterpart to the current device-solve
> capability of the seqaijkokkos matrix type (provided by the Kokkos Kernels
> SpTRSV routines). The system arises from a 7-point finite difference
> discretization of the 3D Poisson equation with Dirichlet boundary
> conditions on a 256x256x1024 mesh (which will likely necessitate using
> multiple GPUs).

We do not have parallel SpTRSV on GPU. I think you need superlu_dist for
that; a sketch of how to select it is at the end of this message.

> The recent article on PETScSF (arXiv:2102.13018) describes an asynchronous
> CG solver that works well on communication-bound multi-GPU systems. Is this
> solver available now, and can it be combined with GAMG/hypre
> preconditioning?

The asynchronous CG solver is experimental. It requires many things that are
not in petsc/main, and it is currently not in a state for general use.

> Thank You,
> Sajid Ali (he/him) | Research Associate
> Scientific Computing Division
> Fermi National Accelerator Laboratory
> s-sajid-ali.github.io
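For the repeated-solve use case above, here is a minimal sketch of selecting
superlu_dist as the direct solver through the KSP/PC interface. It assumes
PETSc was configured with superlu_dist support (e.g. --download-superlu_dist)
and that Mat A and Vec b, x are already assembled; error checking is omitted
for brevity.

#include <petscksp.h>

/* Sketch: parallel LU direct solve via SuperLU_DIST.
   With KSPPREONLY + PCLU the factorization is computed on the first
   KSPSolve() and reused by subsequent solves as long as A is unchanged. */
KSP ksp;
PC  pc;

KSPCreate(PETSC_COMM_WORLD, &ksp);
KSPSetOperators(ksp, A, A);      /* same matrix as operator and preconditioner */
KSPSetType(ksp, KSPPREONLY);     /* one preconditioner (LU) application per solve */
KSPGetPC(ksp, &pc);
PCSetType(pc, PCLU);
PCFactorSetMatSolverType(pc, MATSOLVERSUPERLU_DIST);
KSPSetFromOptions(ksp);
KSPSolve(ksp, b, x);             /* repeat with new right-hand sides as needed */

The same choice can be made at run time with -ksp_type preonly -pc_type lu
-pc_factor_mat_solver_type superlu_dist. Whether the factorization and
triangular solves actually run on the GPU then depends on how SuperLU_DIST
itself was built.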
