Re: [petsc-users] suppress CUDA warning & choose MCA parameter for mpirun during make PETSC_ARCH=arch-linux-c-debug check

2022-10-08 Thread Jed Brown
Barry Smith writes: > I hate these kinds of make rules that hide what the compiler is doing (in the name of having less output, I guess); it makes it difficult to figure out what is going wrong. You can run make VERBOSE=1 with CMake-generated makefiles.

Re: [petsc-users] suppress CUDA warning & choose MCA parameter for mpirun during make PETSC_ARCH=arch-linux-c-debug check

2022-10-08 Thread Barry Smith
True, but when users send reports back to us they will never have used the VERBOSE=1 option, so it requires one more round trip of email to get this additional information. > On Oct 8, 2022, at 6:48 PM, Jed Brown wrote: > Barry Smith writes: >> I hate these kinds of make rules

Re: [petsc-users] suppress CUDA warning & choose MCA parameter for mpirun during make PETSC_ARCH=arch-linux-c-debug check

2022-10-08 Thread Barry Smith
I hate these kinds of make rules that hide what the compiler is doing (in the name of having less output, I guess); it makes it difficult to figure out what is going wrong. Anyway, either some of the MPI libraries are missing from the link line or they are in the wrong order, and thus it

Re: [petsc-users] suppress CUDA warning & choose MCA parameter for mpirun during make PETSC_ARCH=arch-linux-c-debug check

2022-10-08 Thread Junchao Zhang
Perhaps we can take one step back: use your mpicc to build a "hello world" MPI test, then run it on a compute node (with GPU) to see if it works. If not, then your MPI environment has problems; if it does, then use it to build PETSc (turn on PETSc's GPU support: --with-cuda --with-cudac=nvcc), and then
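
A minimal sketch of the suggested "hello world" MPI test (the file name and the compile/launch commands in the comments are assumptions; adjust them to your site's compiler wrappers and scheduler):

    /* hello_mpi.c: minimal MPI smoke test along the lines suggested above.
     * Assumed build/run commands (adapt to your cluster):
     *   mpicc -o hello_mpi hello_mpi.c
     *   mpirun -n 2 ./hello_mpi      (or srun/jsrun under a batch scheduler)
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
      int rank, size, len;
      char name[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);                 /* start up MPI */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process */
      MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
      MPI_Get_processor_name(name, &len);     /* node this rank landed on */

      printf("Hello from rank %d of %d on %s\n", rank, size, name);

      MPI_Finalize();
      return 0;
    }

If this runs cleanly on a GPU compute node, the same mpicc can then be used to configure PETSc with --with-cuda --with-cudac=nvcc, as suggested above.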

Re: [petsc-users] suppress CUDA warning & choose MCA parameter for mpirun during make PETSC_ARCH=arch-linux-c-debug check

2022-10-08 Thread Rob Kudyba
> Perhaps we can take one step back: > Use your mpicc to build a "hello world" MPI test, then run it on a compute node (with GPU) to see if it works. > If not, then your MPI environment has problems; > If it does, then use it to build PETSc (turn on PETSc's GPU support: --with-cuda