Re: [petsc-users] GAMG parameters for ideal coarsening ratio

2020-03-17 Thread Mark Adams
On Tue, Mar 17, 2020 at 1:42 PM Sajid Ali wrote:

> Hi Mark/Jed,
>
> The problem I'm solving is scalar Helmholtz in 2D, (u_t = A*u_xx + A*u_yy
> + F_t*u, with the familiar 5-point central difference as the derivative
> approximation,
>

I assume this is a definite Helmholtz problem. The time integrator will also
add a mass term; I'm assuming F_t looks like a mass matrix.
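
For concreteness, the 5-point operator under discussion can be assembled as a
Kronecker sum. This is only a sketch, not the poster's actual code; the name
helmholtz_2d and the constant zeroth-order coefficient f (standing in for the
F_t mass-like term) are illustrative:

```python
# Sketch: assemble A*(u_xx + u_yy) + f*u with the standard 5-point stencil.
import scipy.sparse as sp

def helmholtz_2d(n, A=1.0, f=0.0, h=1.0):
    """5-point operator on an n-by-n interior grid (Dirichlet boundaries dropped)."""
    # 1D second-difference operator: tridiagonal (1, -2, 1) scaled by 1/h^2
    D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    I = sp.identity(n)
    # 2D Laplacian as a Kronecker sum, plus the zeroth-order (mass-like) term
    return (A * (sp.kron(I, D) + sp.kron(D, I)) + f * sp.identity(n * n)).tocsr()

M = helmholtz_2d(4)
# The assembled operator is symmetric, which is why CG (rather than GMRES)
# is applicable as long as the shifted system stays definite.
print(M.shape, abs(M - M.T).max())
```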


> I'm also attaching the result of -info | grep GAMG if that helps). My goal
> is to get weak and strong scaling results for the FD solver (leading me to
> double-check all my parameters). I ran the sweep again as Mark suggested,
> and it looks like my base params were close to optimal (negative threshold
> and 10 levels of squaring
>

For low-order discretizations, squaring the graph on every level, as you are
doing, sounds right. And since the mass matrix confuses GAMG's filtering
heuristics, no filtering (a negative threshold) sounds reasonable.

Note, hypre would do better than GAMG on this problem.
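
For reference, hypre's BoomerAMG can be selected with just two options
(assuming PETSc was configured with --download-hypre):

```
-pc_type hypre
-pc_hypre_type boomeramg
```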


> with gmres/jacobi smoothers (chebyshev/sor is slower)).
>

You don't want to use GMRES as a smoother (unless you have an indefinite
Helmholtz problem). SOR is more expensive per iteration but often converges a
lot faster; chebyshev/jacobi would probably be better for you.

And you want CG (-ksp_type cg) if this system is symmetric positive
definite.
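
A sketch of the options implied by the advice above (assuming the system is
SPD; the Chebyshev eigenvalue-estimate bounds shown are illustrative, not
tuned values):

```
-ksp_type cg
-mg_levels_ksp_type chebyshev
-mg_levels_pc_type jacobi
-mg_levels_ksp_chebyshev_esteig 0,0.1,0,1.1
```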


>
> [image: image.png]
>
> While I think that the base parameters should work well for strong
> scaling, do I have to modify any of my parameters for a weak-scaling run?
> Does GAMG automatically increase the number of mg-levels as the grid size
> increases, or is it up to the user to do that?
>
> @Mark: Is there a GAMG implementation paper I should cite? I've already
> added a citation for Comput. Mech. (2007) 39: 497–507 as a reference
> for the general idea of applying agglomeration-type multigrid
> preconditioning to Helmholtz operators.
>
>
> Thank You,
> Sajid Ali | PhD Candidate
> Applied Physics
> Northwestern University
> s-sajid-ali.github.io
>
>


Re: [petsc-users] GAMG parameters for ideal coarsening ratio

2020-03-17 Thread Sajid Ali
 Hi Mark/Jed,

The problem I'm solving is scalar Helmholtz in 2D (u_t = A*u_xx + A*u_yy +
F_t*u), with the familiar 5-point central difference as the derivative
approximation; I'm also attaching the result of -info | grep GAMG in case
that helps. My goal is to get weak and strong scaling results for the FD
solver (leading me to double-check all my parameters). I ran the sweep again
as Mark suggested, and it looks like my base params were close to optimal
(negative threshold and 10 levels of squaring with gmres/jacobi smoothers;
chebyshev/sor is slower).

[image: image.png]

While I think that the base parameters should work well for strong scaling,
do I have to modify any of my parameters for a weak-scaling run? Does GAMG
automatically increase the number of mg-levels as the grid size increases,
or is it up to the user to do that?

@Mark: Is there a GAMG implementation paper I should cite? I've already
added a citation for Comput. Mech. (2007) 39: 497–507 as a reference
for the general idea of applying agglomeration-type multigrid
preconditioning to Helmholtz operators.


Thank You,
Sajid Ali | PhD Candidate
Applied Physics
Northwestern University
s-sajid-ali.github.io


Re: [petsc-users] GAMG parameters for ideal coarsening ratio

2020-03-16 Thread Jed Brown
Sajid Ali  writes:

> Hi PETSc-developers,
>
> As per the manual, the ideal GAMG parameters are those which result in
> MatPtAP time being roughly similar to (or just slightly larger than) KSP
> solve times. The way to adjust this is by changing the threshold for
> coarsening and/or squaring the graph. I was working with a grid of size
> 2^14 by 2^14 in a linear & time-independent TS with the following params :
>
> #PETSc Option Table entries:
> -ksp_monitor
> -ksp_rtol 1e-5
> -ksp_type fgmres
> -ksp_view
> -log_view
> -mg_levels_ksp_type gmres
> -mg_levels_pc_type jacobi
> -pc_gamg_coarse_eq_limit 1000
> -pc_gamg_reuse_interpolation true
> -pc_gamg_square_graph 10
> -pc_gamg_threshold -0.04
> -pc_gamg_type agg
> -pc_gamg_use_parallel_coarse_grid_solver
> -pc_mg_monitor
> -pc_type gamg
> -prop_steps 8
> -ts_monitor
> -ts_type cn
> #End of PETSc Option Table entries
>
> With this I get a grid complexity of 1.33047, 6 multigrid levels,
> MatPtAP/KSPSolve ratio of 0.24, and the linear solve at each TS step takes
> 5 iterations (with approx one order of magnitude reduction in residual per
> step for iterations 2 through 5 and two orders for the first). The
> convergence and grid complexity look good, but the ratio of grid coarsening
> time to ksp-solve time is far from ideal. I've attached the log file from
> this set of base parameters as well.
>
> To investigate the effect of coarsening rates, I ran a parameter sweep over
> the coarsening parameters (threshold and sq. graph) and I'm confused by the
> results. For some reason, either the number of GAMG levels turns out to be
> too high or it is set to 1. When I try to manually set the number of levels
> to 4 (with -pc_mg_levels 4, a threshold of -0.04, and squaring of 10), I see
> performance much worse than with the base parameters. Any advice as to what
> I'm missing in my search for a set of params where MatPtAP to KSPSolve is ~1?

Your solver looks efficient, and the setup time roughly matches the
solve time:

PCSetUp        8 1.0 1.2202e+02 1.0 4.39e+09 1.0 4.9e+05 6.5e+03 6.3e+02 36 12 19 27 21  36 12 19 27 22  9201
PCApply       40 1.0 1.1077e+02 1.0 2.63e+10 1.0 2.0e+06 3.8e+03 2.0e+03 33 72 79 65 68  33 72 79 65 68 60662

If you have a specific need to reduce setup time or to reduce solve time
(e.g., if you'll do many solves with the same setup), you might be able to
adjust. But your iteration count is already pretty low, so there is probably
not a lot of room in that direction.
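
The "roughly matches" comparison can be read straight off those two event
rows; a quick check using the times quoted above:

```python
# Total times taken from the PCSetUp and PCApply rows of the -log_view output.
setup_time = 1.2202e+02  # PCSetUp total time (s), 8 calls
apply_time = 1.1077e+02  # PCApply total time (s), 40 calls

ratio = setup_time / apply_time
print(f"PCSetUp / PCApply time: {ratio:.2f}")  # ~1.10: setup roughly matches solve
```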