Barry,

thank you. I think in a few days I will run a larger test and I will send
you the log so we can get better insight.
Thank you for now.

Michele


On Thu, 2015-07-16 at 19:48 -0500, Barry Smith wrote:

> > On Jul 16, 2015, at 7:18 PM, Michele Rosso <[email protected]> wrote:
> > 
> > Barry,
> > 
> > thank you very much for the detailed answer.  I tried what you suggested 
> > and it works.
> > So far I have tried it on a small system, but the final goal is to use it for 
> > very large runs.  How does PCGAMG compare to PCMG as far as performance and 
> > scalability are concerned?
> 
>    Algebraic multigrid has much larger setup times than geometric multigrid, 
> but since it is running on a much smaller problem, with geometric multigrid 
> on top of it, that is much less of an issue.
> 
> > Also, could you help me to tune the GAMG part ( my current setup is in the 
> > attached ksp_view.txt file )? 
> 
>   The defaults are probably fine.
> > 
> > I also tried to use superlu_dist for the LU decomposition on 
> > mg_coarse_mg_sub_
> > -mg_coarse_mg_coarse_sub_pc_type lu
> > -mg_coarse_mg_coarse_sub_pc_factor_mat_solver_package superlu_dist
> 
>   Yeah, gamg tries to put all the rows of the matrix on one process and leave 
> the other processes empty; I suspect superlu_dist doesn't like empty 
> matrices. Just use -mg_coarse_mg_coarse_sub_pc_type lu: it is PETSc's native 
> sequential sparse solver and will run faster than superlu_dist except 
> for really large coarse problems, which you don't want anyway.
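> 
>   Putting those pieces together, the relevant options would look something 
> like this (just a sketch; check the exact prefixes against your -ksp_view 
> output, since they depend on how the solvers are nested):
> 
>     -pc_type mg -pc_mg_galerkin
>     -mg_coarse_pc_type gamg
>     -mg_coarse_mg_coarse_sub_pc_type lu
>     -ksp_view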
> 
>   How many iterations is the multigrid using? 
> 
>   Instead of the default -mg_coarse_ksp_type preonly, some of the PETSc guys 
> are fans of -mg_coarse_ksp_type chebyshev, which may decrease the iterations slightly.
> 
>   You can run with -log_summary and send the output; it shows where the 
> time is being spent.
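> 
>   For example (the launcher, process count, and executable name here are 
> placeholders for your own setup):
> 
>     mpiexec -n 64 ./your_solver -log_summary > log_summary.txt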
> 
>   Barry
> 
> 
> > 
> > but I got an error:
> > 
> > ****** Error in MC64A/AD. INFO(1) = -2 
> > ****** Error in MC64A/AD. INFO(1) = -2
> > ****** Error in MC64A/AD. INFO(1) = -2
> > ****** Error in MC64A/AD. INFO(1) = -2
> > ****** Error in MC64A/AD. INFO(1) = -2
> > ****** Error in MC64A/AD. INFO(1) = -2
> > ****** Error in MC64A/AD. INFO(1) = -2
> > symbfact() error returns 0
> > symbfact() error returns 0
> > symbfact() error returns 0
> > symbfact() error returns 0
> > symbfact() error returns 0
> > symbfact() error returns 0
> > symbfact() error returns 0
> > 
> > 
> > Thank you,
> > Michele
> > 
> > 
> > On Thu, 2015-07-16 at 18:07 -0500, Barry Smith wrote:
> >> > On Jul 16, 2015, at 5:42 PM, Michele Rosso <[email protected]> wrote:
> >> > 
> >> > Barry,
> >> > 
> >> > thanks for your reply. So if I want it fixed, I will have to use the 
> >> > master branch, correct?
> >> 
> >> 
> >>   Yes, or edit mg.c and remove the offending lines of code (easy enough). 
> >> 
> >> > 
> >> > On a side note, what I am trying to achieve is to be able to use as 
> >> > many levels of MG as I want, despite the limitation imposed by the local 
> >> > number of grid nodes.
> >> 
> >> 
> >>    I assume you are talking about use with DMDA? There is no generic 
> >> limitation in PETSc's multigrid; it is only the way the DMDA code 
> >> figures out the interpolation that causes a restriction.
> >> 
> >> 
> >> > So far I am using a borrowed code that implements a PC that creates a 
> >> > sub-communicator and performs MG on it.
> >> > While reading the documentation I found out that PCMGSetLevels takes in 
> >> > an optional array of communicators. How does this work?
> >> 
> >> 
> >>    It doesn't work. It was an idea that never got pursued.
> >> 
> >> 
> >> > Can I simply define my matrix and rhs on the fine grid as I would do 
> >> > normally (I do not use kspsetoperators and kspsetrhs) and have KSP 
> >> > take care of it by using the correct communicator for each level?
> >> 
> >> 
> >>    No.
> >> 
> >>    You can use the PCMG geometric multigrid with DMDA for as many levels 
> >> as it works and then use PCGAMG as the coarse grid solver. PCGAMG 
> >> automatically uses fewer processes for the coarse level matrices and 
> >> vectors. You could do this all from the command line without writing code. 
> >> 
> >>    For example, if your code uses a DMDA and calls KSPSetDM(), use something 
> >> like -da_refine 3 -pc_type mg -pc_mg_galerkin -mg_coarse_pc_type gamg 
> >> -ksp_view
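> >> 
> >>    A minimal sketch of the kind of code this assumes (the grid sizes and 
> >> the ComputeMatrix/ComputeRHS callbacks are placeholders for your own 
> >> routines, in the style of the KSP tutorial ex45.c; error checking omitted):
> >> 
> >>      #include <petscksp.h>
> >>      #include <petscdmda.h>
> >> 
> >>      extern PetscErrorCode ComputeMatrix(KSP,Mat,Mat,void*);  /* user routine */
> >>      extern PetscErrorCode ComputeRHS(KSP,Vec,void*);         /* user routine */
> >> 
> >>      int main(int argc,char **argv)
> >>      {
> >>        DM  da;
> >>        KSP ksp;
> >> 
> >>        PetscInitialize(&argc,&argv,NULL,NULL);
> >>        /* this DMDA is the coarse grid; -da_refine 3 adds the finer levels */
> >>        DMDACreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
> >>                     DM_BOUNDARY_NONE,DMDA_STENCIL_STAR,17,17,17,
> >>                     PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,1,1,
> >>                     NULL,NULL,NULL,&da);
> >>        DMSetFromOptions(da);
> >>        DMSetUp(da);
> >> 
> >>        KSPCreate(PETSC_COMM_WORLD,&ksp);
> >>        KSPSetDM(ksp,da);                /* PCMG builds its levels from the DMDA */
> >>        KSPSetComputeOperators(ksp,ComputeMatrix,NULL);
> >>        KSPSetComputeRHS(ksp,ComputeRHS,NULL);
> >>        KSPSetFromOptions(ksp);          /* picks up -pc_type mg ... from above  */
> >>        KSPSolve(ksp,NULL,NULL);         /* vectors are created from the DM      */
> >> 
> >>        KSPDestroy(&ksp);
> >>        DMDestroy(&da);
> >>        PetscFinalize();
> >>        return 0;
> >>      }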
> >> 
> >> 
> >> 
> >>   Barry
> >> 
> >> 
> >> 
> >> > 
> >> > Thanks,
> >> > Michele
> >> > 
> >> > 
> >> > 
> >> > 
> >> > On Thu, 2015-07-16 at 17:30 -0500, Barry Smith wrote:
> >> >>    Michele,
> >> >> 
> >> >>     This is a very annoying feature that has been fixed in master 
> >> >> (http://www.mcs.anl.gov/petsc/developers/index.html).
> >> >>   I would have liked to change it in maint, but Jed would have a 
> >> >> shit-fit :-) since it changes behavior.
> >> >> 
> >> >>   Barry
> >> >> 
> >> >> 
> >> >> > On Jul 16, 2015, at 4:53 PM, Michele Rosso <[email protected]> wrote:
> >> >> > 
> >> >> > Hi,
> >> >> > 
> >> >> > I am performing a series of solves inside a loop. The matrix for each 
> >> >> > solve changes, but not enough to justify a rebuild of the PC at each 
> >> >> > solve.
> >> >> > Therefore I am using  KSPSetReusePreconditioner to avoid rebuilding 
> >> >> > unless necessary. The solver is CG + MG with a custom  PC at the 
> >> >> > coarse level.
> >> >> > If KSP is not updated each time, everything works as it is supposed 
> >> >> > to. 
> >> >> > When instead I allow the default PETSc behavior, i.e. updating the PC 
> >> >> > every time the matrix changes, the coarse-level KSP, initially set 
> >> >> > to PREONLY, is changed into GMRES 
> >> >> > after the first solve. I am not sure where the problem lies (my PC or 
> >> >> > PETSc), so I would like to have your opinion on this.
> >> >> > I attached the ksp_view for the 2 successive solves and the options 
> >> >> > stack.
> >> >> > 
> >> >> > Thanks for your help,
> >> >> > Michele
> >> >> > 
> >> >> > 
> >> >> > 
> >> >> > <ksp_view.txt><petsc_options.txt>
> >> >> 
> >> >> 
> >> >> 
> >> > 
> >> 
> >> 
> >> 
> > 
> > <ksp_view.txt>
> 

