Yes, but I was surprised it was not used, so I removed it (the same for -vec_type mpicuda).
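My guess, and it is only a guess since I have not re-read ex10.c, is that -vec_type is consumed only by vectors that go through VecSetFromOptions(); vectors obtained with MatCreateVecs() simply inherit the matrix type (aijcusparse -> cuda), so the option can be reported as unused even though the vectors still end up on the GPU. A minimal sketch of the two creation paths (the program and the names below are my own illustration, not code taken from ex10.c):

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      x_mat, x_opt;
  PetscInt n = 8;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* The matrix type comes from -mat_type (e.g. aijcusparse). */
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatSetUp(A));
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  /* This vector inherits its type from A and never looks at -vec_type. */
  PetscCall(MatCreateVecs(A, &x_mat, NULL));

  /* This vector consumes -vec_type, so no "unused option" warning for it. */
  PetscCall(VecCreate(PETSC_COMM_WORLD, &x_opt));
  PetscCall(VecSetSizes(x_opt, PETSC_DECIDE, n));
  PetscCall(VecSetFromOptions(x_opt));

  PetscCall(VecDestroy(&x_mat));
  PetscCall(VecDestroy(&x_opt));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}

The exact command and the resulting warning: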
mpirun -np 2 ./ex10 2 -f Matrix_3133717_rows_1_cpus.petsc -ksp_view -log_view -ksp_monitor -ksp_type cg -pc_type hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_strong_threshold 0.7 -mat_type aijcusparse -vec_type cuda
...
WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
There is one unused database option. It is:
Option left: name:-vec_type value: cuda source: command line

Pierre LEDAC
Commissariat à l’énergie atomique et aux énergies alternatives
Centre de SACLAY
DES/ISAS/DM2S/SGLS/LCAN
Bâtiment 451 – point courrier n°41
F-91191 Gif-sur-Yvette
+33 1 69 08 04 03
+33 6 83 42 05 79

________________________________
From: Barry Smith <bsm...@petsc.dev>
Sent: Saturday, August 30, 2025 21:47:07
To: LEDAC Pierre
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] [MPI][GPU]

Did you try the additional option -vec_type cuda with ex10.c?

On Aug 30, 2025, at 1:16 PM, LEDAC Pierre <pierre.le...@cea.fr> wrote:

Hello,

My code is built with PETSc 3.23 + OpenMPI 4.1.6 (CUDA support enabled), and profiling indicates that MPI communications are done between GPUs everywhere in the code except in the PETSc part, where D2H transfers occur.

I reproduced the PETSc issue with the example under src/ksp/ksp/tutorials/ex10 on 2 MPI ranks. See the output in ex10.log.

Also below is the Nsight Systems (nsys) profile of ex10, showing the D2H and H2D copies before/after the MPI calls.

Thanks for your help,

<pastedImage.png>

Pierre LEDAC
Commissariat à l’énergie atomique et aux énergies alternatives
Centre de SACLAY
DES/ISAS/DM2S/SGLS/LCAN
Bâtiment 451 – point courrier n°41
F-91191 Gif-sur-Yvette
+33 1 69 08 04 03
+33 6 83 42 05 79

<ex10.log>
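In case a reproducer smaller than ex10 helps locate the D2H/H2D copies around the MPI calls, below is a rough sketch of a 2-rank MatMult test that forces the same kind of halo exchange as the SpMV inside the KSP solve (again my own sketch, not code from ex10.c or from the PETSc documentation):

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      x, y;
  PetscInt Istart, Iend, i, n = 1 << 20;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* 1D Laplacian split across the ranks: the first/last row owned by each
     rank references a column owned by the neighbour, so MatMult has to
     communicate, like the SpMV inside the KSP solve. */
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));                 /* -mat_type aijcusparse */
  PetscCall(MatSetUp(A));
  PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));
  for (i = Istart; i < Iend; i++) {
    if (i > 0) PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
    if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
    PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  /* Vectors inherit the (GPU) type of A. */
  PetscCall(MatCreateVecs(A, &x, &y));
  PetscCall(VecSet(x, 1.0));
  PetscCall(MatMult(A, x, y));                     /* profile this under nsys */

  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&y));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}

I would run it with the same -mat_type aijcusparse option and compare the nsys timeline with the one from ex10. If I read the PETSc manual correctly, the runtime option -use_gpu_aware_mpi <bool> also controls whether PETSc hands device buffers directly to MPI, but I have not double-checked its default behaviour with our OpenMPI build.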