Hello,

I am currently investigating why our code does not show the expected weak scaling behaviour in a CG solver. As part of that, I wanted to try out different communication methods for the VecScatter in the matrix-vector product. However, PETSc (version 3.7.6) seems to always choose either MPI_Alltoallv or MPI_Alltoallw, regardless of which options I pass via the PETSC_OPTIONS environment variable. Does anybody know why this does not work as I expected?
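For completeness, the options are picked up from the environment at PetscInitialize(); the setting looks like the line below (the particular combination shown is just an example, the option names are taken from the VecScatterCreate man page):

```
PETSC_OPTIONS="-vecscatter_ssend -vecscatter_view"
```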

The matrix is an MPIAIJ matrix created by a finite element discretization of a 3D Laplacian, so each rank only communicates with 'neighboring' MPI ranks. Not sure if it helps, but the code is run on a Cray XC40.

I tried the `ssend`, `rsend`, `sendfirst`, and `reproduce` options from https://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Vec/VecScatterCreate.html, as well as no options at all; all of these result in MPI_Alltoallv. When combined with `nopack`, the communication uses MPI_Alltoallw instead.
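In case it is useful, the kind of standalone test I have in mind to isolate this from our solver looks roughly like the sketch below. The local size and index pattern are placeholders rather than our actual halo pattern; each rank simply pulls the block owned by its right neighbour, which forces off-rank communication:

```c
/* Minimal standalone VecScatter sketch (PETSc 3.7 API); sizes and index
 * pattern are placeholders, not taken from our FE code. */
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            x, y;
  IS             ix, iy;
  VecScatter     ctx;
  PetscInt       n = 1000;              /* arbitrary local block size */
  PetscMPIInt    rank, size;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size);CHKERRQ(ierr);

  ierr = VecCreateMPI(PETSC_COMM_WORLD, n, PETSC_DETERMINE, &x);CHKERRQ(ierr);
  ierr = VecCreateMPI(PETSC_COMM_WORLD, n, PETSC_DETERMINE, &y);CHKERRQ(ierr);

  /* y[rank*n + k] <- x[((rank+1)%size)*n + k], i.e. a neighbour exchange */
  ierr = ISCreateStride(PETSC_COMM_WORLD, n, ((rank + 1) % size) * n, 1, &ix);CHKERRQ(ierr);
  ierr = ISCreateStride(PETSC_COMM_WORLD, n, rank * n, 1, &iy);CHKERRQ(ierr);

  ierr = VecScatterCreate(x, ix, y, iy, &ctx);CHKERRQ(ierr);
  ierr = VecScatterBegin(ctx, x, y, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(ctx, x, y, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);

  /* print how the scatter was set up; -vecscatter_view gives similar output */
  ierr = VecScatterView(ctx, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);

  ierr = VecScatterDestroy(&ctx);CHKERRQ(ierr);
  ierr = ISDestroy(&ix);CHKERRQ(ierr);
  ierr = ISDestroy(&iy);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```

The idea would be to run this with the different -vecscatter_* options set in PETSC_OPTIONS and compare the reported setup with what we observe in the full solver.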

Best regards,
Felix
