Hello,

I am using CUDA-aware MPI, and I have upgraded from PETSc 3.12 to PETSc 3.15.4
and petsc4py 3.15.4.

Now, when I call

PETSc.KSP().solve(..., ...)

the GPU device information is always printed to stdout by every MPI rank, like this:

CUDA version:   v 11040
CUDA Devices:

0 : Quadro P4000 6 1
  Global memory:   8105 mb
  Shared memory:   48 kb
  Constant memory: 64 kb
  Block registers: 65536

CUDA version:   v 11040
CUDA Devices:

0 : Quadro P4000 6 1
  Global memory:   8105 mb
  Shared memory:   48 kb
  Constant memory: 64 kb
  Block registers: 65536

...

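For context, the solve is invoked roughly like the minimal sketch below (the tridiagonal matrix and the vector setup are only illustrative placeholders, not my actual application):

import sys
import petsc4py
petsc4py.init(sys.argv)        # forwards command-line options such as -cuda_device to PETSc
from petsc4py import PETSc

# Illustrative tridiagonal system; my real operator is assembled elsewhere.
n = 100
A = PETSc.Mat().createAIJ([n, n], nnz=3)
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):
    A.setValue(i, i, 2.0)
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

b = A.createVecLeft()          # right-hand side
x = A.createVecRight()         # solution
b.set(1.0)

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setFromOptions()
ksp.solve(b, x)                # the CUDA device report appears around this call
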
Is there an option to turn that output off? I have tried adding

-cuda_device NONE

to the command-line options, but that did not work.

Best regards,
Yiyang
