Re: [petsc-users] GAMG parallel convergence sensitivity

2019-03-14 Thread Jed Brown via petsc-users
Mark Lohry  writes:

> It seems to me that with these semi-implicit methods the CFL limit is
> still so close to the explicit limit (that paper stops at 30) that I
> don't really see the purpose unless you're running purely
> incompressible? That's just my ignorance speaking though. I'm currently
> running fully implicit for everything, with CFLs around 1e3 - 1e5 or so.

It depends on what you're trying to resolve.  Sounds like maybe you're
stepping toward steady state.  The paper aims to resolve vortex and
baroclinic dynamics while stepping over acoustics and barotropic
waves.


Re: [petsc-users] GAMG parallel convergence sensitivity

2019-03-13 Thread Jed Brown via petsc-users
Mark Lohry via petsc-users  writes:

> For what it's worth, I'm regularly solving much larger problems (1M-100M
> unknowns, unsteady) with this discretization and AMG setup on 500+ cores
> with impressively good convergence, dramatically better than ILU/ASM. This
> just happens to be the first time I've experimented with this extremely low
> Mach number, which is known to have a whole host of issues and generally
> needs low-Mach preconditioners; I was just a bit surprised by this specific
> failure mechanism.

A common technique for low-Mach preconditioning is to convert to
primitive variables (much better conditioned for the solve) and use a
Schur fieldsplit into the pressure space.  For modest time steps, you
can use a SIMPLE-like method ("selfp" in PCFieldSplit lingo) to
approximate that Schur complement.  You can also rediscretize to form
that approximation.  This paper has a bunch of examples of choices of
state variables and derives the continuous pressure preconditioner in
each case.  (They present it as a classical semi-implicit method, but
that would be the Schur complement preconditioner if using FieldSplit
with a fully implicit or IMEX method.)

https://doi.org/10.1137/090775889
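
In options form, a rough sketch of such a setup on a fully implicit
Jacobian assembled in primitive variables could look like the
following.  The block size and field numbers are purely illustrative
(they assume an interlaced layout with five components per node and
pressure as the last component); adjust them to your actual ordering:

  -ksp_type fgmres
  -pc_type fieldsplit
  -pc_fieldsplit_block_size 5
  -pc_fieldsplit_0_fields 0,1,2,3
  -pc_fieldsplit_1_fields 4
  -pc_fieldsplit_type schur
  -pc_fieldsplit_schur_fact_type full
  -pc_fieldsplit_schur_precondition selfp
  -fieldsplit_1_ksp_type gmres
  -fieldsplit_1_pc_type gamg

With "selfp" the Schur block is preconditioned by
Sp = A10 * inv(diag(A00)) * A01 (the SIMPLE-like approximation); the
rediscretization route instead supplies your own pressure operator via
PCFieldSplitSetSchurPre.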


Re: [petsc-users] GAMG parallel convergence sensitivity

2019-03-13 Thread Mark Adams via petsc-users
>
> Any thoughts here? Is there anything obviously wrong with my setup?
>

Fast and robust solvers for NS require specialized methods that are not
provided in PETSc, and those methods tend to require tighter integration
with the meshing and discretization than the algebraic interface
supports.

I see you are using 20 smoothing steps. That is very high. Generally you
want to rely on the V-cycle more (i.e., use fewer smoothing steps and
more iterations).

And full MG is a bit tricky. I would not use it, but if it helps, fine.
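
For reference, dialing the smoothing back toward the PETSc defaults
would look roughly like the following (these are the standard GAMG/MG
option names; the values are a generic starting point, not something
tuned to this problem):

  -pc_type gamg
  -pc_mg_type multiplicative
  -pc_mg_cycle_type v
  -mg_levels_ksp_type chebyshev
  -mg_levels_ksp_max_it 2

i.e. two Chebyshev smoothing iterations per level and a plain V-cycle
instead of full MG, letting the outer Krylov iterations do more of the
work.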


> Any way to reduce the dependence of the convergence iterations on the
> parallelism?
>

This comes from the bjacobi smoother. Use jacobi and you will not have a
parallelism problem; bjacobi approaches jacobi in the limit of
parallelism anyway.
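
In options form that is simply (assuming the default solver prefix; add
your own prefix if the GAMG solve is nested inside another PC):

  -mg_levels_pc_type jacobi

in place of -mg_levels_pc_type bjacobi, so the smoother no longer
depends on how the problem is partitioned.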


> -- obviously I expect the iteration count to be higher in parallel, but I
> didn't expect such catastrophic failure.
>
>
You are beyond what AMG is designed for. If you push this problem hard
enough it will break any solver, and it will break generic AMG
relatively early.

This makes it hard to give much advice. You really just need to test
things and use what works best. There are special-purpose methods that
you can implement in PETSc, but that is a topic for a significant
project.