Re: [petsc-users] GAMG preconditioning

2021-04-12 Thread Barry Smith

  Please send the -log_view output for the ILU and GAMG cases.

  Barry
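
For reference, -log_view prints the performance summary at PetscFinalize(). A
minimal sketch of collecting the same table programmatically (assuming a KSP
ksp and vectors b, x that are already set up):

  PetscErrorCode ierr;
  ierr = PetscLogDefaultBegin();CHKERRQ(ierr);                  /* same data collection that -log_view enables */
  ierr = KSPSetUp(ksp);CHKERRQ(ierr);                           /* the stage reported as slow with GAMG */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = PetscLogView(PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); /* prints the -log_view table */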


> On Apr 12, 2021, at 10:34 AM, Milan Pelletier via petsc-users 
>  wrote:
> 
> Dear all,
> 
> I am currently trying to use PETSc with CG solver and GAMG preconditioner.
> I have started with the following set of parameters:
> -ksp_type cg
> -pc_type gamg
> -pc_gamg_agg_nsmooths 1 
> -pc_gamg_threshold 0.02 
> -mg_levels_ksp_type chebyshev 
> -mg_levels_pc_type sor 
> -mg_levels_ksp_max_it 2
> 
> Unfortunately, the preconditioning seems to run extremely slowly. I tried to 
> play around with the numbers, to check if I could notice some difference, but 
> could not observe significant changes. 
> As a comparison, the KSPSetUp call with the GAMG PC takes more than 10 times 
> longer than completing the whole computation (preconditioning + ~400 KSP 
> iterations to convergence) of a similar case using the following parameters:
> -ksp_type cg
> -pc_type ilu
> -pc_factor_levels 0
> 
> The matrix size for my case is ~1,850,000*1,850,000 elements, with 
> ~38,000,000 non-zero terms (i.e. ~20 per row). For both ILU and AMG cases I 
> use matseqaij/vecseq storage (as a first step I work with only 1 MPI process).
> 
> Is there something wrong in the parameter set I have been using?
> I understand that the preconditioning overhead with AMG is higher than with 
> ILU, but I would also expect CG/GAMG to be competitive against CG/ILU, 
> especially considering the relatively big problem size.
> 
> For information, I am using the PETSc version built from commit 
> 6840fe907c1a3d26068082d180636158471d79a2 (release branch from April 7, 2021). 
> 
> Any clue or idea would be greatly appreciated!
> Thanks for your help,
> 
> Best regards,
> Milan Pelletier
> 
> 



Re: [petsc-users] GAMG preconditioning

2021-04-12 Thread Mark Adams
Can you briefly describe your application?

AMG usually only works well for straightforward elliptic problems, at least
right out of the box.


On Mon, Apr 12, 2021 at 11:35 AM Milan Pelletier via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Dear all,
>
> I am currently trying to use PETSc with CG solver and GAMG preconditioner.
> I have started with the following set of parameters:
> -ksp_type cg
> -pc_type gamg
> -pc_gamg_agg_nsmooths 1
> -pc_gamg_threshold 0.02
> -mg_levels_ksp_type chebyshev
> -mg_levels_pc_type sor
> -mg_levels_ksp_max_it 2
>
> Unfortunately, the preconditioning seems to run extremely slowly. I tried
> to play around with the numbers, to check if I could notice some
> difference, but could not observe significant changes.
> As a comparison, the KSPSetUp call with the GAMG PC takes more than 10 times
> longer than completing the whole computation (preconditioning + ~400 KSP
> iterations to convergence) of a similar case using the following
> parameters:
> -ksp_type cg
> -pc_type ilu
> -pc_factor_levels 0
>
> The matrix size for my case is ~1,850,000*1,850,000 elements, with
> ~38,000,000 non-zero terms (i.e. ~20 per row). For both ILU and AMG cases I
> use matseqaij/vecseq storage (as a first step I work with only 1 MPI
> process).
>
> Is there something wrong in the parameter set I have been using?
> I understand that the preconditioning overhead with AMG is higher than
> with ILU, but I would also expect CG/GAMG to be competitive against CG/ILU,
> especially considering the relatively big problem size.
>
> For information, I am using the PETSc version built from commit
> 6840fe907c1a3d26068082d180636158471d79a2 (release branch from April 7,
> 2021).
>
> Any clue or idea would be greatly appreciated!
> Thanks for your help,
>
> Best regards,
> Milan Pelletier
>
>
>


[petsc-users] GAMG preconditioning

2021-04-12 Thread Milan Pelletier via petsc-users
Dear all,

I am currently trying to use PETSc with CG solver and GAMG preconditioner.
I have started with the following set of parameters:
-ksp_type cg
-pc_type gamg
-pc_gamg_agg_nsmooths 1
-pc_gamg_threshold 0.02
-mg_levels_ksp_type chebyshev
-mg_levels_pc_type sor
-mg_levels_ksp_max_it 2
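
For reference, a minimal C sketch of how this option set maps onto the PETSc
API (assuming a preassembled Mat A and vectors b, x; the level-smoother options
are left to the options database and picked up by KSPSetFromOptions()):

  KSP            ksp;
  PC             pc;
  PetscReal      thresh[1] = {0.02};
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);            /* -ksp_type cg            */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);             /* -pc_type gamg           */
  ierr = PCGAMGSetNSmooths(pc, 1);CHKERRQ(ierr);          /* -pc_gamg_agg_nsmooths 1 */
  ierr = PCGAMGSetThreshold(pc, thresh, 1);CHKERRQ(ierr); /* -pc_gamg_threshold 0.02 */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);            /* picks up -mg_levels_*   */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);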

Unfortunately, the preconditioning seems to run extremely slowly. I tried to 
play around with the numbers, to check if I could notice some difference, but 
could not observe significant changes.
As a comparison, the KSPSetUp call with the GAMG PC takes more than 10 times longer 
than completing the whole computation (preconditioning + ~400 KSP iterations to 
convergence) of a similar case using the following parameters:
-ksp_type cg
-pc_type ilu
-pc_factor_levels 0
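
Likewise, a sketch of the ILU(0) reference configuration via the API (same
assumptions as above):

  ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);    /* -ksp_type cg        */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCILU);CHKERRQ(ierr);      /* -pc_type ilu        */
  ierr = PCFactorSetLevels(pc, 0);CHKERRQ(ierr);  /* -pc_factor_levels 0 */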

The matrix size for my case is ~1,850,000*1,850,000 elements, with ~38,000,000 
non-zero terms (i.e. ~20 per row). For both ILU and AMG cases I use 
matseqaij/vecseq storage (as a first step I work with only 1 MPI process).

Is there something wrong in the parameter set I have been using?
I understand that the preconditioning overhead with AMG is higher than with 
ILU, but I would also expect CG/GAMG to be competitive against CG/ILU, 
especially considering the relatively big problem size.

For information, I am using the PETSc version built from commit 
6840fe907c1a3d26068082d180636158471d79a2 (release branch from April 7, 2021).

Any clue or idea would be greatly appreciated!
Thanks for your help,

Best regards,
Milan Pelletier

Re: [petsc-users] How to add a source term for PETSCFV?

2021-04-12 Thread Matthew Knepley
On Mon, Jul 20, 2020 at 11:04 AM Thibault Bridel-Bertomeu <
thibault.bridelberto...@gmail.com> wrote:

> Thank you Mark, Jed and Matthew for your quick answers!
>
> I see now where I should be more accurate in my question.
>
> Mark, I mentioned the hyperbolicity because I would like to keep using the
> PetscDSSetRiemannSolver and the DMTSSetBoundaryLocal and 
> DMTSSetRHSFunctionLocal
> with DMPlexTSComputeRHSFunctionFVM that are quite automatic and nice and
> efficient wrappers. Now aside from those which deal specifically with the
> hyperbolic part of the PDE, I would like to add the diffusive terms. I
> would rather stay in the FVM world, but if it is easier in the FEM world
> then I am open to it.
>
> Jed, as for the discretization let us say indeed that the mesh can be
> either cartesian or not, and the discretization should therefore be
> independent of the nature of the mesh - any unstructured mesh (i handle it
> with DMPlex in my case). I saw indeed that FV has gradient reconstruction,
> with or without a limiter, which is great. However I have not quite
> understood what function to use to get the gradient of any variable, be it
> in the context (e.g. for N-S, ro, rou, rov, etc...) or an auxiliary
> variable (e.g. the components of the strain tensor). I also agree that the
> diffusive part is usually the one that strongly limits the time step in
> explicit computations, but for now I would like to set up a fully explicit
> system.
>
> Matthew, I'll take a look at ex 18, thanks, I missed that one.
>
> So basically if I wanted to summarize, I want to keep the Riemann Solver
> capability from the DS, and use the
> "automatic" DMPlexTSComputeRHSFunctionFVM for the hyperbolic part and add
> on top of it a discretization of the diffusive terms. I was thinking maybe
> one way to go would be to hack the DMTSSetForcingFunction but
> 1/ I am still not sure what this function should return exactly: is it a
> Vec for the flux on all faces?
> 2/ I still do not know how to compute all the derivatives involved in the
> diffusive terms of the N-S using the gradient reconstruction from PetscFV
>
> Thank you for your help, I hope I am clear enough about where I want to go!
>

Hi Thibault,

Did anything happen on this front? I have another project where people want
to do that same thing.

  Thanks,

Matt
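
A rough C sketch of the RHSFunction/IFunction split Jed describes further down
in this thread: keep DMPlexTSComputeRHSFunctionFVM for the hyperbolic fluxes and
put the diffusive terms into a local IFunction. AddDiffusiveTerms() is a
hypothetical user routine, not a PETSc function:

  /* Hypothetical user routine: loop over faces, use the FV gradient
     reconstruction, and add the viscous-flux residual into locF. */
  extern PetscErrorCode AddDiffusiveTerms(DM, PetscReal, Vec, Vec, void *);

  static PetscErrorCode MyIFunctionLocal(DM dm, PetscReal t, Vec locX, Vec locXdot, Vec locF, void *ctx)
  {
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = VecCopy(locXdot, locF);CHKERRQ(ierr);                    /* start from F = Xdot          */
    ierr = AddDiffusiveTerms(dm, t, locX, locF, ctx);CHKERRQ(ierr); /* add the diffusive contribution */
    PetscFunctionReturn(0);
  }

  /* Registration (inside the TS setup code), alongside the existing explicit FV right-hand side: */
  ierr = DMTSSetRHSFunctionLocal(dm, DMPlexTSComputeRHSFunctionFVM, user);CHKERRQ(ierr);
  ierr = DMTSSetIFunctionLocal(dm, MyIFunctionLocal, user);CHKERRQ(ierr);

With both registered, an IMEX integrator such as -ts_type arkimex treats the
IFunction part implicitly and the RHSFunction part explicitly; a fully explicit
setup would instead fold the diffusive terms into the RHSFunction, at the cost
of a diffusion-limited time step.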


> Thibault
>
> On Mon, Jul 20, 2020 at 16:10, Matthew Knepley  wrote:
>
>> On Mon, Jul 20, 2020 at 9:36 AM Jed Brown  wrote:
>>
>>> How would you like to discretize the diffusive terms?  The example has a
>>> type of gradient reconstruction so you can have cellwise gradients, but
>>> there are many techniques for discretizing diffusive terms in FV.  It's
>>> simpler if you use an orthogonal grid, but I doubt that you are.
>>>
>>> As for terminology, the diffusive part is usually stiff and thus must be
>>> treated implicitly.  In TS terminology, this would be part of the
>>> IFunction, not the RHSFunction.
>>>
>>
>> At a high level, I would say that this is doable, but complicated. You
>> can see me trying to do something much easier (advection +
>> visco-elasticity) in TS ex18,
>> where I want to discretize the elliptic part with FEM and the advective
>> part with FVM. I assume that is why Jed wants to know how you want to
>> handle the
>> elliptic terms, since this has a large impact on how you would implement.
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> Thibault Bridel-Bertomeu  writes:
>>>
>>> > Dear all,
>>> >
>>> > I have been studying ex11.c from ts/tutorials to understand how to solve a
>>> > hyperbolic system of equations using PETSCFV. I first worked on the Euler
>>> > equations for inviscid fluids and, based on what ex11.c presents, I was able
>>> > to add the right PETSc instructions to an already existing in-house code
>>> > with different gas models to solve the problems in parallel (MPI) and with
>>> > the AMR capabilities offered by P4EST.
>>> >
>>> > Now my goal is to move to the Navier-Stokes equations. Theoretically the
>>> > system is not completely hyperbolic and can be seen as one with a hyperbolic
>>> > part (identical to the Euler equations) and a parabolic part coming from the
>>> > RHS diffusion terms.
>>> > I have been looking into the manual and also the sources of PETSc around the
>>> > DM, DMPlex, DS and FV classes, but I could not find anything that speaks to
>>> > me as "adding a RHS to a hyperbolic system of equations" or "adding a source
>>> > term to a hyperbolic system of equations". What's more, that source term
>>> > depends on the derivatives of the context variables ...
>>> >
>>> > I wanted to know if anyone had a suggestion regarding this issue?
>>> >
>>> > Thank you very much in advance,
>>> >
>>> > Thibault Bridel-Bertomeu
>>> > —
>>> > Eng, MSc, PhD
>>> > Research Engineer
>>> > CEA/CESTA
>>> > 33114 LE BARP
>>> > Tel.: (+33)557046924
>>> > Mob.: (+33)611025322
>>> > Mail: thibault.bridelberto...@gmail.com

Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI version

2021-04-12 Thread Junchao Zhang
On Mon, Apr 12, 2021 at 8:09 AM Satish Balay  wrote:

> Whats the oldest version of mpich or openmpi we should test with?
>
OpenMPI-1.6.5 or mpich2-1.5, which are their latest releases that support MPI-2.2.

> We can modify one of the tests to use that version of the tarball with
> --download-mpich=URL [or --download-openmpi=URL]
>
> Satish
>
> On Sun, 11 Apr 2021, Junchao Zhang wrote:
>
> > Danyang,
> >   I pushed another commit to the same branch jczhang/fix-mpi3-win to guard
> > uses of MPI_Iallreduce.
> >
> >   Satish, it seems we need an MPI-2.2 CI to say petsc does not need MPI-3.0?
> >
> > --Junchao Zhang
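
For illustration, the usual shape of such a guard, so that an MPI-3 call still
links against MPI-2.2 (a sketch only; the feature macro name and the surrounding
variables lval, gval, comm, ierr are assumptions, not necessarily what the
branch uses):

  #if defined(PETSC_HAVE_MPI_IALLREDUCE)   /* assumed configure-time feature macro */
    MPI_Request req;
    ierr = MPI_Iallreduce(&lval, &gval, 1, MPIU_INT, MPI_SUM, comm, &req);CHKERRQ(ierr);
    /* ... other work could overlap the reduction here ... */
    ierr = MPI_Wait(&req, MPI_STATUS_IGNORE);CHKERRQ(ierr);
  #else                                    /* MPI-2.2 fallback: blocking reduction */
    ierr = MPI_Allreduce(&lval, &gval, 1, MPIU_INT, MPI_SUM, comm);CHKERRQ(ierr);
  #endif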
> >
> >
> > On Sun, Apr 11, 2021 at 1:45 PM Danyang Su  wrote:
> >
> > > Hi Junchao,
> > >
> > >
> > >
> > > I also ported the changes you have made to PETSc 3.13.6 and configured
> > > with Intel 14.0 and OpenMPI 1.6.5, it works too.
> > >
> > > There is a similar problem in PETSc 3.14+ version as MPI_Iallreduce is
> > > only available in OpenMPI V1.7+. I would not say this is a bug, it just
> > > requires a newer MPI version.
> > >
> > >
> > >
> > >
> > > /home/danyangs/soft/petsc/petsc-3.14.6/intel-14.0.2-openmpi-1.6.5/lib/libpetsc.so:
> > > undefined reference to `MPI_Iallreduce'
> > >
> > >
> > >
> > > Thanks again for all your help,
> > >
> > >
> > >
> > > Danyang
> > >
> > > From: Junchao Zhang
> > > Date: Sunday, April 11, 2021 at 7:54 AM
> > > To: Danyang Su
> > > Cc: Barry Smith, "petsc-users@mcs.anl.gov" 
> > > Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old
> > > MPI version
> > >
> > >
> > >
> > > Thanks, Glad to know you have a workaround.
> > >
> > > --Junchao Zhang
> > >
> > >
> > >
> > >
> > >
> > > On Sat, Apr 10, 2021 at 10:06 PM Danyang Su 
> wrote:
> > >
> > > Hi Junchao,
> > >
> > >
> > >
> > > I cannot configure your branch with the same options due to the error in
> > > sowing. I had a similar error before on other clusters with a very old
> > > openmpi version. The problem was solved when openmpi was updated to a newer
> > > one.
> > >
> > >
> > >
> > > At this moment, I configured a PETSc version with Openmpi 2.1.6 version
> > > and it seems working properly.
> > >
> > >
> > >
> > > Thanks and have a good rest of the weekend,
> > >
> > >
> > >
> > > Danyang
> > >
> > >
> > >
> > > From: Danyang Su
> > > Date: Saturday, April 10, 2021 at 4:08 PM
> > > To: Junchao Zhang
> > > Cc: Barry Smith, "petsc-users@mcs.anl.gov" 
> > > Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old
> > > MPI version
> > >
> > >
> > >
> > > Hi Junchao,
> > >
> > >
> > >
> > > The configuration is successful. The error comes from the last step when I
> > > run
> > >
> > >
> > >
> > > make PETSC_DIR=/home/danyangs/soft/petsc/petsc-3.13.6
> > > PETSC_ARCH=linux-intel-openmpi check
> > >
> > >
> > >
> > > Error detected during compile or link!
> > > See http://www.mcs.anl.gov/petsc/documentation/faq.html
> > > /home/danyangs/soft/petsc/petsc-3.13.6/src/snes/tutorials ex5f
> > >
> > > mpif90 -fPIC -O3 -march=native -mtune=nativels
> > > -I/home/danyangs/soft/petsc/petsc-3.13.6/include
> > > -I/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/include
> > > ex5f.F90
> > >
> -Wl,-rpath,/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib
> > > -L/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib
> > >
> -Wl,-rpath,/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib
> > > -L/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib
> > >
> -Wl,-rpath,/global/software/intel/composer_xe_2013_sp1.2.144/mkl/lib/intel64
> > > -L/global/software/intel/composer_xe_2013_sp1.2.144/mkl/lib/intel64
> > >
> -Wl,-rpath,/global/software/intel/composer_xe_2013_sp1.2.144/compiler/lib/intel64
> > >
> -L/global/software/intel/composer_xe_2013_sp1.2.144/compiler/lib/intel64
> > > -Wl,-rpath,/global/software/openmpi-1.6.5/intel/lib64
> > > -L/global/software/openmpi-1.6.5/intel/lib64
> > > -Wl,-rpath,/global/software/intel/composerxe/mkl/lib/intel64
> > > -L/global/software/intel/composerxe/mkl/lib/intel64
> > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > > -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > > -Wl,-rpath,/global/software/intel/composerxe/lib/intel64 -lpetsc
> -lHYPRE
> > > -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack
> > > -lsuperlu -lflapack -lfblas -lX11 -lhdf5hl_fortran -lhdf5_fortran
> -lhdf5_hl
> > > -lhdf5 -lparmetis -lmetis -lstdc++ -ldl -lmpi_f90 -lmpi_f77 -lmpi -lm
> > > -lnuma -lrt -lnsl -lutil -limf -lifport -lifcore -lsvml -lipgo -lintlc
> > > -lpthread -lgcc_s -lirc_s -lstdc++ -ldl -o ex5f
> > >
> > > ifort: command line warning #10159: invalid argument for option '-m'
> > >
> > >
> 

Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI version

2021-04-12 Thread Satish Balay via petsc-users
What's the oldest version of mpich or openmpi we should test with?

We can modify one of the tests to use that version of tarball with
--download-mpich=URL [or --download-openmpi=URL]

Satish

On Sun, 11 Apr 2021, Junchao Zhang wrote:

> Danyang,
>   I pushed another commit to the same branch jczhang/fix-mpi3-win to guard
> uses of MPI_Iallreduce.
> 
>   Satish, it seems we need an MPI-2.2 CI to say petsc does not need MPI-3.0?
> 
> --Junchao Zhang
> 
> 
> On Sun, Apr 11, 2021 at 1:45 PM Danyang Su  wrote:
> 
> > Hi Junchao,
> >
> >
> >
> > I also ported the changes you have made to PETSc 3.13.6 and configured
> > with Intel 14.0 and OpenMPI 1.6.5, it works too.
> >
> > There is a similar problem in PETSc 3.14+ version as MPI_Iallreduce is
> > only available in OpenMPI V1.7+. I would not say this is a bug, it just
> > requires a newer MPI version.
> >
> >
> >
> > /home/danyangs/soft/petsc/petsc-3.14.6/intel-14.0.2-openmpi-1.6.5/lib/libpetsc.so:
> > undefined reference to `MPI_Iallreduce'
> >
> >
> >
> > Thanks again for all your help,
> >
> >
> >
> > Danyang
> >
> > From: Junchao Zhang
> > Date: Sunday, April 11, 2021 at 7:54 AM
> > To: Danyang Su
> > Cc: Barry Smith, "petsc-users@mcs.anl.gov" 
> > Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old
> > MPI version
> >
> >
> >
> > Thanks, Glad to know you have a workaround.
> >
> > --Junchao Zhang
> >
> >
> >
> >
> >
> > On Sat, Apr 10, 2021 at 10:06 PM Danyang Su  wrote:
> >
> > Hi Junchao,
> >
> >
> >
> > I cannot configure your branch with the same options due to the error in
> > sowing. I had a similar error before on other clusters with a very old openmpi
> > version. The problem was solved when openmpi was updated to a newer one.
> >
> >
> >
> > At this moment, I configured a PETSc version with Openmpi 2.1.6 version
> > and it seems working properly.
> >
> >
> >
> > Thanks and have a good rest of the weekend,
> >
> >
> >
> > Danyang
> >
> >
> >
> > From: Danyang Su
> > Date: Saturday, April 10, 2021 at 4:08 PM
> > To: Junchao Zhang
> > Cc: Barry Smith, "petsc-users@mcs.anl.gov" 
> > Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old
> > MPI version
> >
> >
> >
> > Hi Junchao,
> >
> >
> >
> > The configuration is successful. The error comes from the last step when I
> > run
> >
> >
> >
> > make PETSC_DIR=/home/danyangs/soft/petsc/petsc-3.13.6
> > PETSC_ARCH=linux-intel-openmpi check
> >
> >
> >
> > Error detected during compile or link!
> > See http://www.mcs.anl.gov/petsc/documentation/faq.html
> > /home/danyangs/soft/petsc/petsc-3.13.6/src/snes/tutorials ex5f
> >
> > mpif90 -fPIC -O3 -march=native -mtune=nativels
> > -I/home/danyangs/soft/petsc/petsc-3.13.6/include
> > -I/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/include
> > ex5f.F90
> > -Wl,-rpath,/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib
> > -L/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib
> > -Wl,-rpath,/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib
> > -L/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib
> > -Wl,-rpath,/global/software/intel/composer_xe_2013_sp1.2.144/mkl/lib/intel64
> > -L/global/software/intel/composer_xe_2013_sp1.2.144/mkl/lib/intel64
> > -Wl,-rpath,/global/software/intel/composer_xe_2013_sp1.2.144/compiler/lib/intel64
> > -L/global/software/intel/composer_xe_2013_sp1.2.144/compiler/lib/intel64
> > -Wl,-rpath,/global/software/openmpi-1.6.5/intel/lib64
> > -L/global/software/openmpi-1.6.5/intel/lib64
> > -Wl,-rpath,/global/software/intel/composerxe/mkl/lib/intel64
> > -L/global/software/intel/composerxe/mkl/lib/intel64
> > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> > -Wl,-rpath,/global/software/intel/composerxe/lib/intel64 -lpetsc -lHYPRE
> > -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack
> > -lsuperlu -lflapack -lfblas -lX11 -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl
> > -lhdf5 -lparmetis -lmetis -lstdc++ -ldl -lmpi_f90 -lmpi_f77 -lmpi -lm
> > -lnuma -lrt -lnsl -lutil -limf -lifport -lifcore -lsvml -lipgo -lintlc
> > -lpthread -lgcc_s -lirc_s -lstdc++ -ldl -o ex5f
> >
> > ifort: command line warning #10159: invalid argument for option '-m'
> >
> > /home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib/libpetsc.so:
> > undefined reference to `MPI_Win_allocate'
> >
> > /home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib/libpetsc.so:
> > undefined reference to `MPI_Win_attach'
> >
> > /home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib/libpetsc.so:
> > undefined reference to `MPI_Win_create_dynamic'
> >
> > gmake[4]: *** [ex5f] Error 1
> >
> >
> >
> > Thanks,
> >
> >
> >
> > Danyang
> >
> >
> >
> > *From: 

Re: [petsc-users] BoomerAMG Hypre options

2021-04-12 Thread Barry Smith

  Without more information we cannot help you. I suspect you did not actually 
use the PCFIELDSPLIT preconditioner; you just added that option on top of a 
completely different preconditioner (hypre), where it would be ignored.

  The thing to understand about linear systems arising from PDE discretizations 
is that iterative solvers never work "black box" if the PDE is non-trivial. You 
need to understand the components of the PDE you are solving and how they 
relate to the resulting (non) linear systems and how to select and compose the 
preconditioner that will work for your system.  If the problem will always be 
of moderate size then just use direct solvers and be done. If you need to solve 
for very fine discretizations where direct solvers do not scale you need to 
invest mental/mathematical energy to understand the relationship between the 
PDEs you are solving and their discretization and likely need to read the 
iterative solver literature related to your class of problems. We are happy to 
help you, but can only help if you provide us with enough information about 
your PDE and how you discretize it and exactly what solver options you have 
tried.

  Barry
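
For reference, a minimal sketch of actually activating the field-split approach
suggested in the quoted message below (assuming the KSP already has its
operators set; the per-split hypre options in the comment are only an
illustration):

  PC pc;
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);                      /* -pc_type fieldsplit                */
  ierr = PCFieldSplitSetDetectSaddlePoint(pc, PETSC_TRUE);CHKERRQ(ierr); /* -pc_fieldsplit_detect_saddle_point */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); /* e.g. -fieldsplit_0_pc_type hypre -fieldsplit_0_pc_hypre_type boomeramg */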


   

> On Apr 11, 2021, at 11:40 PM, sthavishtha bhopalam  
> wrote:
> 
> @Barry, Upon your suggestion, I added -pc_fieldsplit_detect_saddle_point to 
> my previously used command line options. But, I get the same error message as 
> earlier.
> 
> ---
> Regards
> 
> Sthavishtha
> 
> 
> 
> 
> 
> 
> On Mon, Apr 12, 2021 at 4:36 AM Barry Smith  wrote:
> 
> 0 SNES Function norm 6.145506780035e-04 
>   0 KSP Residual norm 2.013603254316e+41 
> 
>   My guess is that your matrix has zeros or essentially zeros on some 
> diagonal entries; this could be breaking 
> 
> Relax down      symmetric-SOR/Jacobi
> Relax up        symmetric-SOR/Jacobi
> 
> since the smoother would be dividing by those numbers. Standard multigrid 
> methods generally cannot handle such matrices without adjustments.
> 
> One approach for such problems is to use PCFIELDSPLIT to split off the zero 
> diagonal portion while using AMG on the rest: 
> -pc_fieldsplit_detect_saddle_point; see PCFieldSplitSetDetectSaddlePoint().
> 
>   Barry
> 
> 
> 
> 
>> On Apr 11, 2021, at 1:06 PM, Matthew Knepley  wrote:
>> 
>> On Sun, Apr 11, 2021 at 1:37 PM sthavishtha bhopalam <sthavishth...@gmail.com> wrote:
>> Hello PETSc users
>> 
>> I am trying to experiment with Hypre's BoomerAMG preconditioner which 
>> continually yields the error message "Linear solve did not converge due to 
>> DIVERGED_DTOL iterations 1". I would appreciate if someone can suggest some 
>> ways I could get BoomerAMG to yield converged results - the attached output 
>> shows a snippet of the error message. Command Line options I used for 
>> BoomerAMG: -pc_type hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_tol 
>> 1.0e-3 -pc_hypre_boomeramg_strong_threshold 0.25 -ksp_type richardson 
>> -pc_hypre_boomeramg_max_iter 6 -snes_rtol 1.0e-3 -ksp_rtol 1.0e-3 -ksp_view 
>> -snes_view -ksp_monitor -snes_monitor -ksp_max_it 100 -ksp_converged_reason 
>> -snes_converged_reason
>> 
>> I also tried using -ksp_type gmres, different values of 
>> -pc_hypre_boomeramg_max_iter, -pc_hypre_boomeramg_strong_threshold, 
>> -ksp_initial_guess_nonzero but all yielded the same error message.
>> 
>> However, the direct solver converges as required - the attached output shows 
>> a snippet of the norms from the SNES and KSP.
>> Command Line options I used for the direct solver : -ksp_type gmres -pc_type 
>> lu -pc_factor_shift_type nonzero -pc_factor_mat_solver_type mumps 
>> -snes_converged_reason -ksp_converged_reason -ksp_rtol 1e-3 -snes_rtol 1e-3 
>> -ksp_monitor -snes_monitor
>> 
>> There is no particular reason for using AMG here, but I just wanted to 
>> familiarize myself with its options to see which of them need to be particularly 
>> tuned to yield converged and correct results.
>> 
>> Hypre is only going to work for a very specific set of systems. What are you 
>> solving?
>> 
>>   Thanks,
>> 
>>  Matt
>>  
>> Thanks
>> ---
>> Regards
>> 
>> Sthavishtha 
>> 
>> 
>> 
>> 
>> 
>> 
>> -- 
>> What most experimenters take for granted before they begin their experiments 
>> is infinitely more interesting than any results to which their experiments 
>> lead.
>> -- Norbert Wiener
>> 
>> https://www.cse.buffalo.edu/~knepley/