Lawrence Mitchell <[email protected]> writes:

> Below I show output from a run on 1 process and then two (along with 
> ksp_view) for the following options:
>
>  -pc_type mg -ksp_rtol 1e-8 -ksp_max_it 6 -pc_mg_levels 2 -mg_levels_pc_type sor -ksp_monitor
>
> On 1 process:
>   0 KSP Residual norm 5.865090856053e+02 
>   1 KSP Residual norm 1.293159126247e+01 
>   2 KSP Residual norm 5.181199296299e-01 
>   3 KSP Residual norm 1.268870802643e-02 
>   4 KSP Residual norm 5.116058930806e-04 
>   5 KSP Residual norm 3.735036960550e-05 
>   6 KSP Residual norm 1.755288530515e-06 
> KSP Object: 1 MPI processes
>   type: gmres
>     GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>     GMRES: happy breakdown tolerance 1e-30
>   maximum iterations=6, initial guess is zero
>   tolerances:  relative=1e-08, absolute=1e-50, divergence=10000
>   left preconditioning
>   using PRECONDITIONED norm type for convergence test
> PC Object: 1 MPI processes
>   type: mg
>     MG: type is MULTIPLICATIVE, levels=2 cycles=v
>       Cycles per PCApply=1
>       Not using Galerkin computed coarse grid matrices

How do you know the rediscretized coarse-grid matrices are correct in parallel?
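
One way to check is to compare the coarse matrix PCMG actually uses against
the Galerkin product P^T A P built from the same interpolation.  A rough
sketch (assuming a recent PETSc, a 2-level hierarchy that has already been
set up, e.g. after KSPSetUp(), and compatible layouts for the coarse matrix
and the interpolation; the function and variable names are only illustrative):

  #include <petscksp.h>

  /* Sketch: compare the rediscretized coarse operator of a 2-level PCMG
     with the Galerkin product P^T A P formed from the same interpolation. */
  static PetscErrorCode CheckCoarseOperator(PC pc, Mat Afine)
  {
    Mat       P, Acoarse, Agal;
    KSP       coarse;
    PetscReal nrm;

    PetscFunctionBeginUser;
    PetscCall(PCMGGetInterpolation(pc, 1, &P));          /* coarse-to-fine interpolation */
    PetscCall(PCMGGetCoarseSolve(pc, &coarse));
    PetscCall(KSPGetOperators(coarse, &Acoarse, NULL));  /* rediscretized coarse matrix */
    PetscCall(MatPtAP(Afine, P, MAT_INITIAL_MATRIX, 2.0, &Agal));  /* Galerkin coarse matrix */
    PetscCall(MatAXPY(Agal, -1.0, Acoarse, DIFFERENT_NONZERO_PATTERN));  /* difference */
    PetscCall(MatNorm(Agal, NORM_FROBENIUS, &nrm));
    PetscCall(PetscPrintf(PETSC_COMM_WORLD, "||PtAP - Acoarse||_F = %g\n", (double)nrm));
    PetscCall(MatDestroy(&Agal));
    PetscFunctionReturn(PETSC_SUCCESS);
  }

The norm will not be zero for a genuinely rediscretized operator, but it
should come out the same (up to roundoff) on 1 and 2 processes.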

I would stick with the redundant coarse solve and use

  -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi 
-ksp_monitor_true_residual

Jacobi is used here so that the smoother is identical in serial and parallel.
(SOR is usually a bit stronger, though I think the Chebyshev/SOR combination
is somewhat peculiar and usually overkill.)
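
With the same smoother and the redundant coarse solve, the preconditioner is
essentially identical in serial and parallel, so the residual histories should
agree to several digits; if they drift apart, suspect the parallel
rediscretization first.  For example (the executable name and problem options
below are placeholders, not taken from your run):

  ./your_app <problem options> -pc_type mg -pc_mg_levels 2 \
      -mg_coarse_pc_type redundant \
      -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi \
      -ksp_rtol 1e-8 -ksp_monitor_true_residual
  mpiexec -n 2 ./your_app <same options>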

Compare convergence with and without -pc_mg_galerkin.
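That is, run the same command twice, once with the Galerkin coarse operator
(again, the executable and problem options are placeholders):

  mpiexec -n 2 ./your_app <same options as above>
  mpiexec -n 2 ./your_app <same options as above> -pc_mg_galerkin

If the two differ markedly, the rediscretized coarse matrix is the likely
culprit.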
