Dear Dr. Smith,
I sincerely appreciate your valuable answers. My KSP Poisson solver
has been significantly sped up with your help. Now I wonder what extra
steps I need to take to employ geometric MG on a non-uniform Cartesian mesh.
I suppose the DMDA won't automatically generate the coarse grids for a
non-uniform Cartesian mesh.
Have a great evening, :)
Alan
On Aug 7, 2013, at 12:30 PM, Alan <[email protected]> wrote:
Hi Dr. Smith,
Thank you so much for your reply. I tried using geometric multigrid to
speed up the KSP solver with this setup:
mpiexec -np 1 ./ex29 -ksp_type cg -pc_type mg -da_refine 2 -ksp_rtol 1.0e-7
It now has almost the same computational rate as mudpack. However, I have
a few questions:
1. After KSPCreate(), I used DMDACreate2d() and DMDASetUniformCoordinates()
to build a uniform Cartesian mesh from PETSc. If I pass imax and jmax to
DMDACreate2d() as the global dimensions of the grid, the actual numbers of grid
points in the x- and y-directions are imax and jmax, respectively, for the code with
PC = GAMG; while they are (imax-1)*4 and (jmax-1)*4, respectively, for the code with
PC = MG and -da_refine 2. Is this normal? Does this indicate that the imax and
jmax I input for the code with PC = MG are the global dimensions of the
coarsest level of the multigrid hierarchy?
The option -da_refine n causes the DA you provided to be refined n times, giving
you the final grid. Each refinement is a factor of 2 in each direction, so yes,
the fine grid would be (imax-1)*4 by (jmax-1)*4.
You do not need to use -da_refine n; you can just set imax and jmax
and use -pc_mg_levels p, where p is the number of multigrid levels you wish to
use. Note that imax and jmax must be large enough to be coarsened p times and
must have appropriate integer values that can be coarsened.
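For example, the two setups look like this on the ex29 command line (illustrative only; in the second case the grid you code, or set with -da_grid_x/-da_grid_y for ex29, must already be the fine grid and be coarsenable twice, e.g. 129 by 129):

mpiexec -np 1 ./ex29 -ksp_type cg -pc_type mg -da_refine 2 -ksp_rtol 1.0e-7
mpiexec -np 1 ./ex29 -ksp_type cg -pc_type mg -da_grid_x 129 -da_grid_y 129 -pc_mg_levels 3 -ksp_rtol 1.0e-7

In the first case the DA you coded is (in effect) the coarsest grid and gets refined twice; in the second case the DA you provide is the finest grid and PCMG coarsens it into a 3-level hierarchy.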
2. Is there a PETSc routine I can use in my code to detect the type of my
preconditioner?
PCGetType()
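For example (a minimal sketch in C with error checking omitted; ksp is assumed to be your KSP, queried after KSPSetFromOptions() so that -pc_type has been applied):

  PC     pc;
  PCType pctype;                                        /* PCType is a string */
  KSPGetPC(ksp, &pc);
  PCGetType(pc, &pctype);
  PetscPrintf(PETSC_COMM_WORLD, "PC type: %s\n", pctype);  /* e.g. "mg" or "gamg" */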
3. Is there a PETSc routine I can use to find out the value of
-da_refine when MG is used?
PCMGGetLevels() tells you how many levels of multigrid it is using.
-ksp_view shows full details on the solver being used.
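A sketch of the query in code (error checking again omitted; call it after KSPSetUp() or KSPSolve() so the hierarchy has actually been built, and only when the PC type really is mg):

  PC       pc;
  PetscInt nlevels;
  KSPGetPC(ksp, &pc);
  PCMGGetLevels(pc, &nlevels);                          /* number of levels in the MG hierarchy */
  PetscPrintf(PETSC_COMM_WORLD, "MG levels: %D\n", nlevels);

With -da_refine n you should see n+1 levels reported.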
4. What is 'PCMGType'? Should I just keep the default? In the original
makefile for src/ksp/ksp/examples/tutorials/ex29.c, -pc_mg_type was 'full'. I
tried it; it is slightly slower than the default setting.
You can use whatever is faster.
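For reference, PCMGType only selects the multigrid cycle style; the choices are -pc_mg_type multiplicative (the default), additive, full, or kaskade on the command line, or PCMGSetType(pc,PC_MG_FULL) etc. in code.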
5. What other settings can I play with to further speed up the
computation?
There are many options for multigrid, in particular how many levels you
use and which smoother you use on each level. What results in the fastest solver
depends on the machine, the exact problem you are solving, and how many processes
you are using. The defaults in PETSc 3.4 should be reasonably good.
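As a starting point for experiments (the values here are only illustrative; the option prefixes are the standard ones for the level smoothers and the coarse solve, and -ksp_view will show exactly what you end up with):

mpiexec -np 1 ./ex29 -ksp_type cg -pc_type mg -da_refine 3 -ksp_rtol 1.0e-7 -mg_levels_ksp_type chebyshev -mg_levels_pc_type sor -mg_levels_ksp_max_it 2 -mg_coarse_pc_type lu -ksp_view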
Barry
thanks,
Alan
Alan,
If you can use MUDPACK, then you can also use PETSc's geometric multigrid,
both sequential and parallel, and its performance should be fairly close to
MUDPACK's on one process.
Barry
On Aug 6, 2013, at 2:56 PM, Alan <[email protected]> wrote:
Thanks for the replies. I have attached the -log_summary output for the large and
small problems. The number of DoFs for the large problem is 4 times that of the
small problem. A few observations:
1. the total number of iterations does not change much from the small problem
to the large one;
2. the time spent in KSPSolve() for the large problem is less than 4 times
that for the small problem;
3. the time spent in PCSetUp() for the large problem is more than 10 times
that for the small problem;
4. the time spent in PCGAMGProl_AGG for the large problem is more than 20
times that for the small problem.
In my code, I solve the Poisson equation twice with different
RHSs; the observations above are nearly the same for both solves.
Do these observations indicate that I should switch my PC from GAMG to MG
for solving the Poisson equation in a single process?
best,
Alan
On Tue, Aug 6, 2013 at 2:31 PM, Karl Rupp <[email protected]> wrote:
Hi Alan,
please use -log_summary to get profiling information on the run. What is
the bottleneck? Is it the number of solver iterations increasing
significantly? If so, consider changing the preconditioner options (more
levels!). I don't expect a direct solver to be any faster in the 180k
case for a Poisson problem.
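For example, just append the flag to the run you already have,

mpiexec -np 1 ./ex29 -pc_type gamg -ksp_type cg -pc_gamg_agg_nsmooths 1 -mg_levels_ksp_max_it 1 -mg_levels_ksp_type richardson -ksp_rtol 1.0e-7 -log_summary

and look at the per-event time breakdown (KSPSolve, PCSetUp, MatMult, ...).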
Mudpack is geometric multigrid:
http://www2.cisl.ucar.edu/resources/legacy/mudpack
This should be faster.
Matt
Best regards,
Karli
On 08/06/2013 02:22 PM, Alan wrote:
Dear all,
I hope you're having a nice day.
I have a quick question on solving the Poisson equation with the KSP solvers
(src/ksp/ksp/examples/tutorials/ex29.c). Currently, I run this solver with:
-pc_type gamg -ksp_type cg -pc_gamg_agg_nsmooths 1 -mg_levels_ksp_max_it
1 -mg_levels_ksp_type richardson -ksp_rtol 1.0e-7
It performs very well in parallel and the scalability is fine.
However, if I run it with a single process, the KSP solver is much
slower than direct solvers, i.e. Mudpack. Briefly, the speed difference
between the KSP solver and the direct solver is negligible for small
problems (e.g. 36k DoFs) but becomes very large for moderately large
problems (e.g. 180k DoFs). Although the direct solver inherently has
better performance for moderately large problems on a single process,
I wonder whether any setup or approach can improve the performance of
this KSP Poisson solver on a single process, or even make it competitive
in speed (a little bit slower is fine) with direct solvers.
thanks in advance,
Alan
--
What most experimenters take for granted before they begin their experiments is
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
<out_large.txt><out_small.txt>