So, as I understand it, GMRES is used both as a preconditioner and as a solver when I 
use PCMG with the defaults. If that is the case, I should be able to recreate this 
setup without PCMG. Any pointers as to how this can be done?
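
Just to make the question concrete: would something along these lines reproduce that 
setup (this is only my untested guess, using -pc_type ksp to nest one GMRES inside 
another, with FGMRES on the outside since an inner iterative solve makes the 
preconditioner nonlinear)?

  -ksp_type fgmres
  -pc_type ksp
  -ksp_ksp_type gmres -ksp_ksp_max_it 1
  -ksp_pc_type bjacobi -ksp_sub_pc_type ilu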

Also, yes indeed, my mesh is completely unstructured, so I will have to use ml 
or boomeramg. 
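
I am assuming the relevant options are roughly one of the following (with PETSc 
configured using --download-ml or --download-hypre as needed); please correct me if 
I have these wrong:

  -pc_type ml
  -pc_type hypre -pc_hypre_type boomeramg
  -pc_type gamg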

The problems that I am attempting involve RANS of compressible turbulent combustion 
(finite volume, steady). The high condition numbers come from the extreme grid 
stretching and from stiff source terms (in the turbulence and combustion models). I 
have been trying a reduced problem in these 2D test cases, for which the condition 
number is only about 1e7.

Thanks,
Karthik.


----- Original Message -----
From: "Mark F. Adams" <[email protected]>
To: "PETSc users list" <petsc-users at mcs.anl.gov>
Sent: Tuesday, May 1, 2012 3:50:31 PM
Subject: Re: [petsc-users] Multigrid


Also, note that PCMG cannot create coarse grid spaces for an (MPI)AIJ matrix. 
If you use regular grids (DA?) then PETSc can construct geometric multigrid 
coarse grid spaces, although I don't know if PCMG will construct these for you 
(I don't think it will, and I can see from your output that PCMG just used one 
grid). 'ml', 'hypre', and 'gamg' (a native AMG solver) will give you real AMG 
solvers. All three work on a similar class of problems. 
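
(For what it's worth, with a DM-based code that hands its grid to the solver via 
KSPSetDM(), the geometric route is requested with options roughly like 

  -pc_type mg -pc_mg_levels 3 -mg_levels_ksp_type richardson -mg_levels_pc_type sor 

but of course that requires PETSc to know your grid hierarchy, which it does not 
for a plain (MPI)AIJ matrix.) 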


Also, you mention that you have a condition number of 1.e20. That is 
astronomical for such a small problem. How did you compute that number? Do you 
know where the ill-conditioning comes from? Is this an elliptic operator? 
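
(If you want a rough estimate from within PETSc, one option is to run with something 
like 

  -ksp_monitor_singular_value -ksp_gmres_restart 300 

which prints running estimates of the extreme singular values of the preconditioned 
operator; their ratio approximates the condition number of the preconditioned 
system, not of the original matrix.) 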


Mark 




On May 1, 2012, at 6:31 PM, Matthew Knepley wrote: 



On Tue, May 1, 2012 at 6:27 PM, Karthik Duraisamy <dkarthik at stanford.edu> wrote: 



The following was output for the very first iteration, whereas what I had attached 
earlier was output at every iteration. I am still a bit perplexed, because PCMG 
drops the residual like a rock (after the first few iterations), whereas with no 
PCMG it is very slow. 



Because the smoother IS the solver you were using before. Just like I said last 
time, what you are doing is 
wrapping up the same solver you used before, sticking it in another GMRES loop, 
and only looking at the 
outer loop. This has nothing to do with MG. 
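
Schematically, your MG run is 

  outer GMRES 
    -> PC 'mg' with a single level, whose level-0 solve is 
         GMRES (1 iteration) preconditioned by block Jacobi / ILU(0) 

and that inner GMRES/BJacobi/ILU is exactly the solver you had without PCMG. 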


Matt 


KSP Object: 8 MPI processes 
  type: gmres 
    GMRES: restart=100, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement 
    GMRES: happy breakdown tolerance 1e-30 
  maximum iterations=1, initial guess is zero 
  using preconditioner applied to right hand side for initial guess 
  tolerances: relative=0.01, absolute=1e-08, divergence=1e+10 
  left preconditioning 
  using DEFAULT norm type for convergence test 
PC Object: 8 MPI processes 
  type: mg 
    MG: type is MULTIPLICATIVE, levels=1 cycles=v 
      Cycles per PCApply=1 
      Not using Galerkin computed coarse grid matrices 
  Coarse grid solver -- level ------------------------------- 
    KSP Object: (mg_levels_0_) 8 MPI processes 
      type not yet set 
      maximum iterations=1, initial guess is zero 
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000 
      left preconditioning 
      using DEFAULT norm type for convergence test 
    PC Object: (mg_levels_0_) 8 MPI processes 
      type not yet set 
  linear system matrix = precond matrix: 
  Matrix Object: 8 MPI processes 
    type: mpiaij 
    rows=75000, cols=75000 
    total: nonzeros=4427800, allocated nonzeros=4427800 
    total number of mallocs used during MatSetValues calls =0 
      using I-node (on process 0) routines: found 3476 nodes, limit used is 5 


----- Original Message ----- 
From: "Matthew Knepley" < [email protected] > 
To: "PETSc users list" < petsc-users at mcs.anl.gov > 
Sent: Tuesday, May 1, 2012 3:22:56 PM 
Subject: Re: [petsc-users] Multigrid 


On Tue, May 1, 2012 at 6:18 PM, Karthik Duraisamy <dkarthik at stanford.edu> wrote: 



Hello, 

Sorry (and thanks for the reply). I've attached the no-multigrid case. I didn't 
include it earlier because, at least to the untrained eye, everything looks the same. 



Did you send all the output from the MG case? There must be a PC around it. By 
default it's GMRES, so there would be an extra GMRES loop compared to the case 
without MG. 


Matt 


Regards, 
Karthik 

KSP Object: 8 MPI processes 
  type: gmres 
    GMRES: restart=100, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement 
    GMRES: happy breakdown tolerance 1e-30 
  maximum iterations=1 
  using preconditioner applied to right hand side for initial guess 
  tolerances: relative=1e-05, absolute=1e-50, divergence=1e+10 
  left preconditioning 
  using nonzero initial guess 
  using PRECONDITIONED norm type for convergence test 
PC Object: 8 MPI processes 
  type: bjacobi 
    block Jacobi: number of blocks = 8 
    Local solve is same for all blocks, in the following KSP and PC objects: 
    KSP Object: (sub_) 1 MPI processes 
      type: preonly 
      maximum iterations=10000, initial guess is zero 
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000 
      left preconditioning 
      using NONE norm type for convergence test 
    PC Object: (sub_) 1 MPI processes 
      type: ilu 
        ILU: out-of-place factorization 
        0 levels of fill 
        tolerance for zero pivot 1e-12 
        using diagonal shift to prevent zero pivot 
        matrix ordering: natural 
        factor fill ratio given 1, needed 1 
          Factored matrix follows: 
            Matrix Object: 1 MPI processes 
              type: seqaij 
              rows=9015, cols=9015 
              package used to perform factorization: petsc 
              total: nonzeros=517777, allocated nonzeros=517777 
              total number of mallocs used during MatSetValues calls =0 
                using I-node routines: found 3476 nodes, limit used is 5 
      linear system matrix = precond matrix: 
      Matrix Object: 1 MPI processes 
        type: seqaij 
        rows=9015, cols=9015 
        total: nonzeros=517777, allocated nonzeros=517777 
        total number of mallocs used during MatSetValues calls =0 
          using I-node routines: found 3476 nodes, limit used is 5 
  linear system matrix = precond matrix: 
  Matrix Object: 8 MPI processes 
    type: mpiaij 
    rows=75000, cols=75000 
    total: nonzeros=4427800, allocated nonzeros=4427800 
    total number of mallocs used during MatSetValues calls =0 
      using I-node (on process 0) routines: found 3476 nodes, limit used is 5 


----- Original Message ----- 
From: "Matthew Knepley" < [email protected] > 
To: "PETSc users list" < petsc-users at mcs.anl.gov > 
Sent: Tuesday, May 1, 2012 3:15:14 PM 
Subject: Re: [petsc-users] Multigrid 


On Tue, May 1, 2012 at 6:12 PM, Karthik Duraisamy <dkarthik at stanford.edu> wrote: 



Hello Barry, 

Thank you for your super quick response. I have attached the output of -ksp_view, 
and it is practically the same as when I don't use PCMG. The part I don't 
understand is how PCMG is able to function at the zero grid level and still 
produce much better convergence than the default PC. Is there any additional 
smoothing or interpolation going on? 



You only included one output, so I have no way of knowing what you used before. 
However, this is running GMRES/ILU. 


Also, for Algebraic Multigrid, would you recommend BoomerAMG or ML? 



They are different algorithms. It's not possible to say in general that one is 
better. Try them both. 


Matt 


Best regards, 
Karthik. 

  type: mg 
    MG: type is MULTIPLICATIVE, levels=1 cycles=v 
      Cycles per PCApply=1 
      Not using Galerkin computed coarse grid matrices 
  Coarse grid solver -- level ------------------------------- 
    KSP Object: (mg_levels_0_) 8 MPI processes 
      type: gmres 
        GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement 
        GMRES: happy breakdown tolerance 1e-30 
      maximum iterations=1, initial guess is zero 
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000 
      left preconditioning 
      using PRECONDITIONED norm type for convergence test 
    PC Object: (mg_levels_0_) 8 MPI processes 
      type: bjacobi 
        block Jacobi: number of blocks = 8 
        Local solve is same for all blocks, in the following KSP and PC objects: 
        KSP Object: (mg_levels_0_sub_) 1 MPI processes 
          type: preonly 
          maximum iterations=10000, initial guess is zero 
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000 
          left preconditioning 
          using NONE norm type for convergence test 
        PC Object: (mg_levels_0_sub_) 1 MPI processes 
          type: ilu 
            ILU: out-of-place factorization 
            0 levels of fill 
            tolerance for zero pivot 1e-12 
            using diagonal shift to prevent zero pivot 
            matrix ordering: natural 
            factor fill ratio given 1, needed 1 
              Factored matrix follows: 
                Matrix Object: 1 MPI processes 
                  type: seqaij 
                  rows=9015, cols=9015 
                  package used to perform factorization: petsc 
                  total: nonzeros=517777, allocated nonzeros=517777 
                  total number of mallocs used during MatSetValues calls =0 
                    using I-node routines: found 3476 nodes, limit used is 5 
          linear system matrix = precond matrix: 
          Matrix Object: 1 MPI processes 
            type: seqaij 
            rows=9015, cols=9015 
            total: nonzeros=517777, allocated nonzeros=517777 
            total number of mallocs used during MatSetValues calls =0 
              using I-node routines: found 3476 nodes, limit used is 5 
      linear system matrix = precond matrix: 
      Matrix Object: 8 MPI processes 
        type: mpiaij 
        rows=75000, cols=75000 
        total: nonzeros=4427800, allocated nonzeros=4427800 
        total number of mallocs used during MatSetValues calls =0 
          using I-node (on process 0) routines: found 3476 nodes, limit used is 5 
  linear system matrix = precond matrix: 
  Matrix Object: 8 MPI processes 
    type: mpiaij 
    rows=75000, cols=75000 
    total: nonzeros=4427800, allocated nonzeros=4427800 
    total number of mallocs used during MatSetValues calls =0 
      using I-node (on process 0) routines: found 3476 nodes, limit used is 5 



----- Original Message ----- 
From: "Barry Smith" < [email protected] > 
To: "PETSc users list" < petsc-users at mcs.anl.gov > 
Sent: Tuesday, May 1, 2012 1:39:26 PM 
Subject: Re: [petsc-users] Multigrid 


On May 1, 2012, at 3:37 PM, Karthik Duraisamy wrote: 

> Hello, 
> 
> I have been using PETSc for a couple of years with good success, but lately 
> as my linear problems have become stiffer (condition numbers of the order of 
> 1.e20), I am looking to use better preconditioners. I tried using PCMG with 
> all the default options (i.e., I just specified my preconditioner as PCMG and 
> did not add any options to it) and I am immediately seeing better 
> convergence. 
> 
> What I am not sure of is why? I would like to know more about the default 
> parameters (the manual is not very explicit) and more importantly, want to 
> know why it is working even when I haven't specified any grid levels and 
> coarse grid operators. Any 
> help in this regard will be appreciated. 

First run with -ksp_view to see what solver it is actually using. 
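
For example (assuming your code calls KSPSetFromOptions(); the executable name below 
is just a placeholder): 

  mpiexec -n 8 ./your_solver -ksp_view -ksp_monitor_true_residual 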

Barry 

> 
> Also, ultimately I want to use algebraic multigrid so is PCML a better option 
> than BoomerAMG? I tried BoomerAMG with mixed results. 
> 
> Thanks, 
> Karthik 
> 
> 
> 
> -- 
> 
> ======================================= 
> Karthik Duraisamy 
> Assistant Professor (Consulting) 
> Durand Building Rm 357 
> Dept of Aeronautics and Astronautics 
> Stanford University 
> Stanford CA 94305 
> 
> Phone: 650-721-2835 
> Web: www.stanford.edu/~dkarthik 
> ======================================= 





-- 
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener 




