Re: [petsc-users] Status of PETScSF failures with GPU-aware MPI on Perlmutter

2023-11-02 Thread Jed Brown
What modules do you have loaded? I don't know whether it currently works with
cuda-11.7. I assume you're following these instructions carefully:

https://docs.nersc.gov/development/programming-models/mpi/cray-mpich/#cuda-aware-mpi

In our experience, GPU-aware MPI continues to be brittle on these machines. 
Maybe you can inquire with NERSC exactly which CUDA versions are tested with 
GPU-aware MPI.
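
For reference, the setup described on that page amounts to something like the
sketch below (module names, versions, and the application name are assumptions
on my part; use whatever the NERSC page currently lists):

  # sketch only -- check the NERSC documentation above for the supported versions
  module load PrgEnv-gnu cudatoolkit craype-accel-nvidia80  # craype-accel-* links in the GPU-aware (GTL) layer
  export MPICH_GPU_SUPPORT_ENABLED=1                        # must be set at run time for GPU-aware Cray MPICH
  srun -n 2 ./your_petsc_app -vec_type cuda                 # hypothetical application and options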

Sajid Ali  writes:

> Hi PETSc-developers,
>
> I had posted about crashes within PETScSF when using GPU-aware MPI on
> Perlmutter a while ago (
> https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2022-February/045585.html).
> Now that the software stacks have stabilized, I was wondering whether there
> is a fix for this, as I am still observing similar crashes.
>
> I am attaching the trace of the latest crash (with PETSc-3.20.0) for
> reference.
>
> Thank You,
> Sajid Ali (he/him) | Research Associate
> Data Science, Simulation, and Learning Division
> Fermi National Accelerator Laboratory
> s-sajid-ali.github.io


Re: [petsc-users] Status of PETScSF failures with GPU-aware MPI on Perlmutter

2023-11-02 Thread Junchao Zhang
Hi, Sajid,
  Do you have a test example to reproduce the error?
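
  Even a run of a standard PETSc example with GPU vector/matrix types and
GPU-aware MPI enabled would be a useful starting point, e.g. along the lines
of the sketch below (the example and options here are only an illustration,
not your failing case):

  # hypothetical reproducer sketch; assumes PETSC_DIR/PETSC_ARCH point at the CUDA build
  cd $PETSC_DIR/src/ksp/ksp/tutorials && make ex2
  srun -n 2 ./ex2 -vec_type cuda -mat_type aijcusparse -use_gpu_aware_mpi 1
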
--Junchao Zhang


On Thu, Nov 2, 2023 at 3:37 PM Sajid Ali wrote:

> Hi PETSc-developers,
>
> I had posted about crashes within PETScSF when using GPU-aware MPI on
> Perlmutter a while ago (
> https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2022-February/045585.html).
> Now that the software stacks have stabilized, I was wondering whether there
> is a fix for this, as I am still observing similar crashes.
>
> I am attaching the trace of the latest crash (with PETSc-3.20.0) for
> reference.
>
> Thank You,
> Sajid Ali (he/him) | Research Associate
> Data Science, Simulation, and Learning Division
> Fermi National Accelerator Laboratory
> s-sajid-ali.github.io
>


[petsc-users] Status of PETScSF failures with GPU-aware MPI on Perlmutter

2023-11-02 Thread Sajid Ali
Hi PETSc-developers,

I had posted about crashes within PETScSF when using GPU-aware MPI on
Perlmutter a while ago (
https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2022-February/045585.html).
Now that the software stacks have stabilized, I was wondering whether there
is a fix for this, as I am still observing similar crashes.

I am attaching the trace of the latest crash (with PETSc-3.20.0) for
reference.

Thank You,
Sajid Ali (he/him) | Research Associate
Data Science, Simulation, and Learning Division
Fermi National Accelerator Laboratory
s-sajid-ali.github.io


2_gpu_crash
Description: Binary data


Re: [petsc-users] error while compiling PETSc on windows using cygwin

2023-11-02 Thread Maruthi NH
Hi Barry,

Thanks for the suggestion. It worked after updating the compilers.

Regards,
Maruthi



On Thu, 2 Nov 2023 at 9:33 PM, Barry Smith  wrote:

>
>It could be you would benefit from having the latest Microsoft compilers
>
>If you do not need C++ you could use --with-cxx=0
>
>Otherwise please send configure.log to petsc-ma...@mcs.anl.gov
>
>
>
> > On Nov 2, 2023, at 11:20 AM, Maruthi NH  wrote:
> >
> > Hi all,
> >
> > I get the following error while trying to compile PETSc version 3.20.1
> on Windows
> >
> > \petsc\include\petsc/private/cpp/unordered_map.hpp(309): error C2938:
> 'std::enable_if_t' : Failed to specialize alias template
> >
> > This is the configuration file I used to compile PETSc
> >
> > #!/usr/bin/python
> >
> > import os
> > petsc_hash_pkgs=os.path.join(os.getenv('HOME'),'petsc-hash-pkgs')
> >
> > oadirf='"/cygdrive/c/Program Files (x86)/Intel/oneAPI"'
> > oadir=os.popen('cygpath -u '+os.popen('cygpath -ms
> '+oadirf).read()).read().strip()
> > oamkldir=oadir+'/mkl/2022.1.0/lib/intel64'
> > oampidir=oadir+'/mpi/2021.6.0'
> >
> > if __name__ == '__main__':
> >   import sys
> >   import os
> >   sys.path.insert(0, os.path.abspath('config'))
> >   import configure
> >   configure_options = [
> > '--package-prefix-hash='+petsc_hash_pkgs,
> > '--with-debugging=0',
> > '--with-shared-libraries=0',
> > '--with-blaslapack-lib=-L'+oamkldir+' mkl_intel_lp64_dll.lib
> mkl_sequential_dll.lib mkl_core_dll.lib',
> > '--with-cc=win32fe cl',
> > '--with-cxx=win32fe cl',
> > '--with-fc=win32fe ifort',
> > 'FOPTFLGS=-O3 -fp-model=precise',
> > '--with-mpi-include='+oampidir+'/include',
> > '--with-mpi-lib='+oampidir+'/lib/release/impi.lib',
> > '-with-mpiexec='+oampidir+'/bin/mpiexec -localonly',
> >   ]
> >   configure.petsc_configure(configure_options)
> >
> > Regards,
> > Maruthi
>
>


Re: [petsc-users] Error using Metis with PETSc installed with MUMPS

2023-11-02 Thread Victoria Rolandi
Pierre,
Yes, sorry, I'll keep the list in copy.
Launching with those options (-mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2),
I get an error during the analysis step. I also relaunched with increased
memory and I still get the error.

*The calculation stops at:*

Entering CMUMPS 5.4.1 from C interface with JOB, N =   1  699150
  executing #MPI =  2, without OMP

 =
 MUMPS compiled with option -Dmetis
 MUMPS compiled with option -Dparmetis
 =
L U Solver for unsymmetric matrices
Type of parallelism: Working host

 ** ANALYSIS STEP 

 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
 Using ParMETIS for parallel ordering
 Structural symmetry is: 90%


*The error:*

[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple MacOS to
find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and
run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: Signal received
[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.17.0, unknown
[0]PETSC ERROR: ./charlin.exe on a  named n1056 by vrolandi Wed Nov  1
11:38:28 2023
[0]PETSC ERROR: Configure options
--prefix=/u/home/v/vrolandi/CODES/LIBRARY/packages/petsc/installationDir
--with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort CXXOPTFLAGS=-O3
--with-scalar-type=complex --with-debugging=0 --with-precision=single
--download-mumps --download-scalapack --download-parmetis --download-metis

[0]PETSC ERROR: #1 User provided function() at unknown file:0
[0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is
causing the crash.
Abort(59) on node 0 (rank 0 in comm 0): application called
MPI_Abort(MPI_COMM_WORLD, 59) - process 0


Thanks,
Victoria

On Wed, Nov 1, 2023 at 10:33 AM Pierre Jolivet wrote:

> Victoria, please keep the list in copy.
>
> I am not understanding how I can switch to ParMetis if it does not appear
> in the options of -mat_mumps_icntl_7. In the options I only have Metis and
> not ParMetis.
>
>
> You need to use -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2
>
> Barry, I don’t think we can programmatically shut off this warning, it’s
> guarded by a bunch of KEEP() values, see src/dana_driver.F:4707, which are
> only settable/gettable by people with access to consortium releases.
> I’ll ask the MUMPS people for confirmation.
> Note that this warning is only printed to screen with the option
> -mat_mumps_icntl_4 2 (or higher), so this won’t show up for standard runs.
>
> Thanks,
> Pierre
>
> On 1 Nov 2023, at 5:52 PM, Barry Smith  wrote:
>
>
>   Pierre,
>
>Could the PETSc MUMPS interface "turn-off" ICNTL(6) in this situation
> so as to not trigger the confusing warning message from MUMPS?
>
>   Barry
>
> On Nov 1, 2023, at 12:17 PM, Pierre Jolivet  wrote:
>
>
>
> On 1 Nov 2023, at 3:33 PM, Zhang, Hong via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> Victoria,
> "** Maximum transversal (ICNTL(6)) not allowed because matrix is
> distributed
> Ordering based on METIS"
>
>
> This warning is benign and appears for every run using a sequential
> partitioner in MUMPS with a MATMPIAIJ.
> (I’m not saying switching to ParMETIS will not make the issue go away)
>
> Thanks,
> Pierre
>
> $ ../../../../arch-darwin-c-debug-real/bin/mpirun -n 2 ./ex2 -pc_type lu
> -mat_mumps_icntl_4 2
> Entering DMUMPS 5.6.2 from C interface with JOB, N =   1  56
>   executing #MPI =  2, without OMP
>
>  =
>  MUMPS compiled with option -Dmetis
>  MUMPS compiled with option -Dparmetis
>  MUMPS compiled with option -Dpord
>  MUMPS compiled with option -Dptscotch
>  MUMPS compiled with option -Dscotch
>  =
> L U Solver for unsymmetric matrices
> Type of parallelism: Working host
>
>  ** ANALYSIS STEP 
>
>  ** Maximum transversal (ICNTL(6)) not allowed because matrix is
> distributed
>  Processing a graph of size:56 with   194 edges
>  Ordering based on AMF
>  WARNING: Largest root node of size26 not selected for parallel
> execution
>
> Leaving analysis phase with  ...
>  INFOG(1)   =   0
>  INFOG(2)   =   0
> […]
>
> Try parmetis.
> Hong
> --
> *From:* petsc-users  on behalf of
> Victoria Rolandi 
> *Sent:* Tuesday, October 31, 2023 10:30 PM
> *To:* 

Re: [petsc-users] Error using Metis with PETSc installed with MUMPS

2023-11-02 Thread Pierre Jolivet

> On 2 Nov 2023, at 5:29 PM, Victoria Rolandi wrote:
> 
> Pierre, 
> Yes, sorry, I'll keep the list in copy.
> Launching with those options (-mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2),
> I get an error during the analysis step. I also relaunched with increased
> memory and I still get the error.

Oh, OK, that’s bad.
Would you be willing to give SCOTCH and/or PT-SCOTCH a try?
You’d need to reconfigure/recompile with --download-ptscotch (and maybe 
--download-bison depending on your system).
Then, the option would become either -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 
2 (PT-SCOTCH) or -mat_mumps_icntl_7 3 (SCOTCH).
It may be worth updating PETSc as well (you are using 3.17.0, we are at 
3.20.1), though I’m not sure we updated the METIS/ParMETIS snapshots since 
then, so it may not fix the present issue.
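
Concretely, something along these lines (a sketch; the ... stands for your
existing configure options, and the solver-type flag is shown only in case it
is not already set in your code):

  ./configure ... --download-ptscotch --download-bison
  # SCOTCH sequential ordering at run time:
  mpiexec -n 2 ./charlin.exe -pc_type lu -pc_factor_mat_solver_type mumps -mat_mumps_icntl_7 3
  # or keep -mat_mumps_icntl_28 2 for parallel analysis and select PT-SCOTCH through ICNTL(29), as above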

Thanks,
Pierre

> The calculation stops at:
> 
> Entering CMUMPS 5.4.1 from C interface with JOB, N =   1  699150
>   executing #MPI =  2, without OMP
> 
>  =
>  MUMPS compiled with option -Dmetis
>  MUMPS compiled with option -Dparmetis
>  =
> L U Solver for unsymmetric matrices
> Type of parallelism: Working host
> 
>  ** ANALYSIS STEP 
> 
>  ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
>  Using ParMETIS for parallel ordering
>  Structural symmetry is: 90%
> 
> 
> The error:
> 
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
> probably memory access out of range
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind
> [0]PETSC ERROR: or try http://valgrind.org  on 
> GNU/linux and Apple MacOS to find memory corruption errors
> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
> [0]PETSC ERROR: to get more information on the crash.
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: Signal received
> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.17.0, unknown
> [0]PETSC ERROR: ./charlin.exe on a  named n1056 by vrolandi Wed Nov  1 
> 11:38:28 2023
> [0]PETSC ERROR: Configure options 
> --prefix=/u/home/v/vrolandi/CODES/LIBRARY/packages/petsc/installationDir 
> --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort CXXOPTFLAGS=-O3 
> --with-scalar-type=complex --with-debugging=0 --with-precision=single 
> --download-mumps --download-scalapack --download-parmetis --download-metis
> 
> [0]PETSC ERROR: #1 User provided function() at unknown file:0
> [0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is 
> causing the crash.
> Abort(59) on node 0 (rank 0 in comm 0): application called 
> MPI_Abort(MPI_COMM_WORLD, 59) - process 0
> 
> 
> Thanks, 
> Victoria 
> 
> On Wed, Nov 1, 2023 at 10:33 AM Pierre Jolivet wrote:
>> Victoria, please keep the list in copy.
>> 
>>> I am not understanding how I can switch to ParMetis if it does not appear
>>> in the options of -mat_mumps_icntl_7. In the options I only have Metis and
>>> not ParMetis.
>> 
>> 
>> You need to use -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2
>> 
>> Barry, I don’t think we can programmatically shut off this warning, it’s 
>> guarded by a bunch of KEEP() values, see src/dana_driver.F:4707, which are 
>> only settable/gettable by people with access to consortium releases.
>> I’ll ask the MUMPS people for confirmation.
>> Note that this warning is only printed to screen with the option 
>> -mat_mumps_icntl_4 2 (or higher), so this won’t show up for standard runs.
>> 
>> Thanks,
>> Pierre
>> 
>>> On 1 Nov 2023, at 5:52 PM, Barry Smith wrote:
>>> 
>>> 
>>>   Pierre,
>>> 
>>>Could the PETSc MUMPS interface "turn-off" ICNTL(6) in this situation so 
>>> as to not trigger the confusing warning message from MUMPS?
>>> 
>>>   Barry
>>> 
 On Nov 1, 2023, at 12:17 PM, Pierre Jolivet wrote:
 
 
 
> On 1 Nov 2023, at 3:33 PM, Zhang, Hong via petsc-users
> <petsc-users@mcs.anl.gov> wrote:
> 
> Victoria,
> "** Maximum transversal (ICNTL(6)) not allowed because matrix is 
> distributed
> Ordering based on METIS"
 
 This warning is benign and appears for every run using a sequential 
 partitioner in MUMPS with a MATMPIAIJ.
 (I’m not saying switching to ParMETIS will not make the issue go away)
 
 Thanks,
 Pierre
 
 $ ../../../../arch-darwin-c-debug-real/bin/mpirun -n 2 ./ex2 -pc_type lu 
 -mat_mumps_icntl_4 2
 Entering DMUMPS 5.6.2 from C interface with JOB, N =   1  56
   executing #MPI =  2, without OMP
 
  

Re: [petsc-users] error while compiling PETSc on windows using cygwin

2023-11-02 Thread Barry Smith


   It could be you would benefit from having the latest Microsoft compilers

   If you do not need C++ you could use --with-cxx=0
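
   For example, in command-line form (a sketch; the options are the ones from
your script with only the C++ entry changed, and ... stands for the rest of
your options, which stay as they are):

   ./configure --with-cc='win32fe cl' --with-cxx=0 --with-fc='win32fe ifort' --with-debugging=0 ...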

   Otherwise please send configure.log to petsc-ma...@mcs.anl.gov 



> On Nov 2, 2023, at 11:20 AM, Maruthi NH  wrote:
> 
> Hi all,
> 
> I get the following error while trying to compile PETSc version 3.20.1 on 
> Windows
> 
> \petsc\include\petsc/private/cpp/unordered_map.hpp(309): error C2938: 
> 'std::enable_if_t' : Failed to specialize alias template
> 
> This is the configuration file I used to compile PETSc
> 
> #!/usr/bin/python
> 
> import os
> petsc_hash_pkgs=os.path.join(os.getenv('HOME'),'petsc-hash-pkgs')
> 
> oadirf='"/cygdrive/c/Program Files (x86)/Intel/oneAPI"'
> oadir=os.popen('cygpath -u '+os.popen('cygpath -ms 
> '+oadirf).read()).read().strip()
> oamkldir=oadir+'/mkl/2022.1.0/lib/intel64'
> oampidir=oadir+'/mpi/2021.6.0'
> 
> if __name__ == '__main__':
>   import sys
>   import os
>   sys.path.insert(0, os.path.abspath('config'))
>   import configure
>   configure_options = [
> '--package-prefix-hash='+petsc_hash_pkgs,
> '--with-debugging=0',
> '--with-shared-libraries=0',
> '--with-blaslapack-lib=-L'+oamkldir+' mkl_intel_lp64_dll.lib 
> mkl_sequential_dll.lib mkl_core_dll.lib',
> '--with-cc=win32fe cl',
> '--with-cxx=win32fe cl',
> '--with-fc=win32fe ifort',
> 'FOPTFLGS=-O3 -fp-model=precise',
> '--with-mpi-include='+oampidir+'/include',
> '--with-mpi-lib='+oampidir+'/lib/release/impi.lib',
> '-with-mpiexec='+oampidir+'/bin/mpiexec -localonly',
>   ]
>   configure.petsc_configure(configure_options)
> 
> Regards,
> Maruthi



[petsc-users] error while compiling PETSc on windows using cygwin

2023-11-02 Thread Maruthi NH
Hi all,

I get the following error while trying to compile PETSc version 3.20.1 on
Windows

\petsc\include\petsc/private/cpp/unordered_map.hpp(309): error C2938:
'std::enable_if_t' : Failed to specialize alias template

This is the configuration file I used to compile PETSc

#!/usr/bin/python

import os
petsc_hash_pkgs=os.path.join(os.getenv('HOME'),'petsc-hash-pkgs')

oadirf='"/cygdrive/c/Program Files (x86)/Intel/oneAPI"'
oadir=os.popen('cygpath -u '+os.popen('cygpath -ms
'+oadirf).read()).read().strip()
oamkldir=oadir+'/mkl/2022.1.0/lib/intel64'
oampidir=oadir+'/mpi/2021.6.0'

if __name__ == '__main__':
  import sys
  import os
  sys.path.insert(0, os.path.abspath('config'))
  import configure
  configure_options = [
'--package-prefix-hash='+petsc_hash_pkgs,
'--with-debugging=0',
'--with-shared-libraries=0',
'--with-blaslapack-lib=-L'+oamkldir+' mkl_intel_lp64_dll.lib
mkl_sequential_dll.lib mkl_core_dll.lib',
'--with-cc=win32fe cl',
'--with-cxx=win32fe cl',
'--with-fc=win32fe ifort',
'FOPTFLGS=-O3 -fp-model=precise',
'--with-mpi-include='+oampidir+'/include',
'--with-mpi-lib='+oampidir+'/lib/release/impi.lib',
'-with-mpiexec='+oampidir+'/bin/mpiexec -localonly',
  ]
  configure.petsc_configure(configure_options)

Regards,
Maruthi


Re: [petsc-users] Error using Metis with PETSc installed with MUMPS

2023-11-02 Thread Pierre Jolivet

> On 1 Nov 2023, at 8:02 PM, Barry Smith  wrote:
> 
> 
>   Pierre,
> 
>Sorry, I was not clear. What I meant was that the PETSc code that calls 
> MUMPS could change the value of ICNTL(6) under certain conditions before 
> calling MUMPS, thus the MUMPS warning might not be triggered.

Again, I’m not sure it is possible, as the message is not guarded by the value 
of ICNTL(6), but by some other internal parameters.

Thanks,
Pierre

$ for i in {1..7}; do
    echo "ICNTL(6) = ${i}"
    ../../../../arch-darwin-c-debug-real/bin/mpirun -n 2 ./ex2 -pc_type lu \
      -mat_mumps_icntl_4 2 -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2 \
      -mat_mumps_icntl_6 ${i} | grep -i "not allowed"
  done
ICNTL(6) = 1
 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
ICNTL(6) = 2
 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
ICNTL(6) = 3
 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
ICNTL(6) = 4
 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
ICNTL(6) = 5
 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
ICNTL(6) = 6
 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
ICNTL(6) = 7
 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed

> I am basing this on a guess from looking at the MUMPS manual and the warning 
> message that the particular value of ICNTL(6) is incompatible with the given 
> matrix state. But I could easily be wrong.
> 
>   Barry
> 
> 
>> On Nov 1, 2023, at 1:33 PM, Pierre Jolivet  wrote:
>> 
>> Victoria, please keep the list in copy.
>> 
>>> I am not understanding how I can switch to ParMetis if it does not appear
>>> in the options of -mat_mumps_icntl_7. In the options I only have Metis and
>>> not ParMetis.
>> 
>> 
>> You need to use -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2
>> 
>> Barry, I don’t think we can programmatically shut off this warning, it’s 
>> guarded by a bunch of KEEP() values, see src/dana_driver.F:4707, which are 
>> only settable/gettable by people with access to consortium releases.
>> I’ll ask the MUMPS people for confirmation.
>> Note that this warning is only printed to screen with the option 
>> -mat_mumps_icntl_4 2 (or higher), so this won’t show up for standard runs.
>> 
>> Thanks,
>> Pierre
>> 
>>> On 1 Nov 2023, at 5:52 PM, Barry Smith  wrote:
>>> 
>>> 
>>>   Pierre,
>>> 
>>>Could the PETSc MUMPS interface "turn-off" ICNTL(6) in this situation so 
>>> as to not trigger the confusing warning message from MUMPS?
>>> 
>>>   Barry
>>> 
 On Nov 1, 2023, at 12:17 PM, Pierre Jolivet  wrote:
 
 
 
> On 1 Nov 2023, at 3:33 PM, Zhang, Hong via petsc-users 
>  wrote:
> 
> Victoria,
> "** Maximum transversal (ICNTL(6)) not allowed because matrix is 
> distributed
> Ordering based on METIS"
 
 This warning is benign and appears for every run using a sequential 
 partitioner in MUMPS with a MATMPIAIJ.
 (I’m not saying switching to ParMETIS will not make the issue go away)
 
 Thanks,
 Pierre
 
 $ ../../../../arch-darwin-c-debug-real/bin/mpirun -n 2 ./ex2 -pc_type lu 
 -mat_mumps_icntl_4 2
 Entering DMUMPS 5.6.2 from C interface with JOB, N =   1  56
   executing #MPI =  2, without OMP
 
  =
  MUMPS compiled with option -Dmetis
  MUMPS compiled with option -Dparmetis
  MUMPS compiled with option -Dpord
  MUMPS compiled with option -Dptscotch
  MUMPS compiled with option -Dscotch
  =
 L U Solver for unsymmetric matrices
 Type of parallelism: Working host
 
  ** ANALYSIS STEP 
 
  ** Maximum transversal (ICNTL(6)) not allowed because matrix is 
 distributed
  Processing a graph of size:56 with   194 edges
  Ordering based on AMF 
  WARNING: Largest root node of size26 not selected for parallel 
 execution
 
 Leaving analysis phase with  ...
  INFOG(1)   =   0
  INFOG(2)   =   0
 […]
 
> Try parmetis.
> Hong
> From: petsc-users  on behalf of Victoria 
> Rolandi 
> Sent: Tuesday, October 31, 2023 10:30 PM
> To: petsc-users@mcs.anl.gov 
> Subject: [petsc-users] Error using Metis with PETSc installed with MUMPS
>  
> Hi, 
> 
> I'm solving a large sparse linear system in parallel and I am using PETSc 
> with MUMPS. I am trying to test different options, like the ordering of 
> the matrix. Everything works if I use the -mat_mumps_icntl_7 2  or 
> -mat_mumps_icntl_7 0 options (with the first one, AMF, performing better 
> than AMD), however when I test METIS -mat_mumps_icntl_7 5 I get an error 
> (reported at the end of the email).