Mark:

> I have made the suggested changes, so you should now be able to specify the
> smoothers with gamg in the normal command-line way (e.g., -mg_levels_pc_type
> ilu), and your errors should be caught more elegantly.
>
> Unfortunately, one of the regression tests is now failing. I'm working on
> that with Hong (I think it's a bug in her code), so there is a known bug in
> the current version.
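With that change, the level smoothers are configured through ordinary PETSc run-time options. A hypothetical options fragment (usable on the command line or in an options file; -mg_levels_pc_type is taken from the message above, and the other flag names are assumed to follow the standard PETSc naming):

```
# run-time options for GAMG with per-level smoother selection
-pc_type gamg
-mg_levels_ksp_type chebyshev
-mg_levels_pc_type ilu
```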
I pushed a fix for runex55_SA.

Hong

> On Dec 3, 2011, at 1:16 PM, Barry Smith wrote:
>
>> On Dec 3, 2011, at 12:56 PM, Mark F. Adams wrote:
>>
>>> On Dec 3, 2011, at 12:03 PM, Stefano Zampini wrote:
>>>
>>>> Mark,
>>>>
>>>> I observe some strange behaviour in your gamg code: it always considers
>>>> the SA method, whether I pass "-pc_gamg_type geo" on the command line or
>>>> use PCGAMGSetSolverType directly in my code. I put a printf of your
>>>> variables m_type and m_method inside gamg.c, and it always prints
>>>> m_type="string I've passed" and m_method=2.
>>>
>>> If you do not give it coordinates then it cannot do geo. So I silently
>>> switch it to SA with this:
>>>
>>> if (a_coords==0 && pc_gamg->m_method==0) pc_gamg->m_method = 2; /* use SA if no coords */
>>>
>>> Maybe I should throw an error here instead of silently switching ...
>>
>> Yes, if they ask for something that cannot work you really need to error
>> out. It is totally confusing to switch to something they didn't ask for.
>>
>> Barry
>>
>>> It sounds like you hit this code.
>>>
>>>> I added some code to gamg.c for setting the KSP and PC for the smoother.
>>>> I removed the define PETSC_GAMG_SMOOTHER and pass the correct value for
>>>> the PC to the createProlongation function. Is this enough to maintain
>>>> self-consistency of the code?
>>>
>>> Cheby with diagonal preconditioning is a good default, and I want to reuse
>>> the eigen estimate for the prolongator smoothing. So instead of just
>>> setting the PC type to Cheby, I should add PCSetFromOptions and then check
>>> if it is still Cheby before computing the eigen estimate, and check that
>>> the PC is still Jacobi before caching the max eigenvalue for reuse.
>>> Currently we can only smooth the prolongation matrix with Jacobi easily.
>>> So the only self-consistency issue is in reusing the eigen estimate.
>>>
>>>> Early test cases suggest a better convergence rate when dropping the
>>>> lines where you adapt emax and emin before their estimation, either with
>>>> Chebyshev or with Richardson (using a Richardson factor of
>>>> 2/(emin+emax)).
>>>
>>> Not sure what you mean by "adapt emax and emin before their estimation",
>>> but setting the Cheby emax and emin is full of heuristics, and it can be
>>> tuned better for any particular problem. If there is a big difference then
>>> that is an issue for concern.
>>>
>>> 1) It is very bad if emax is lower than the true max eigenvalue, so I bump
>>> up the estimate some. The less you bump it up, the better the performance,
>>> until you get too low; then it dies rapidly.
>>>
>>> 2) The lowest eigen estimate is really a black art. Jed has suggested some
>>> heuristics that might be better than what I have now. Fortunately,
>>> performance is not too sensitive to emin (the Cheby polynomial is pretty
>>> smooth there). But this can be tuned to improve performance for any
>>> problem; I just tuned it approximately with a few of the test cases (ex54,
>>> ex55, ex56).
>>>
>>> 3) One iteration of your Richardson with factor 2/(emin+emax) is
>>> equivalent to some first-order Cheby, so this is another type of parameter
>>> optimization. NB, the lowest eigen estimate from just a few iterations of
>>> a Krylov method is very bad: too high for SPD problems.
>>>
>>>> Moreover, if I call PCSetCoordinates with dim=3 it prints an error
>>>> reporting "3d unsupported".
>>>
>>> This sounds like you were trying to use 'geo' in 3D. I've added (not
>>> pushed) "for 'geo' AMG" to this error message:
>>>
>>> SETERRQ(wcomm,PETSC_ERR_LIB,"3D not implemented for 'geo' AMG");
>>>
>>> I do not fix this problem silently like I do in the no-coordinates case; I
>>> will add that to my next push.
>>>
>>> Mark
>>>
>>>> Regards,
>>>> Stefano
>>>>
>>>> 2011/11/30 Mark F.
Adams <mark.adams at columbia.edu>
>>>>
>>>> Stefano,
>>>>
>>>> You really need to tell a black/gray-box AMG solver that you are solving
>>>> a vector/system PDE. For GAMG you simply have to set the block size:
>>>>
>>>> ierr = MatSetBlockSize( mat, 2 or 3 );  CHKERRQ(ierr);
>>>>
>>>> I'm not sure if HYPRE or ML is equipped to deal with this in the PETSc
>>>> interface (I know the ML library can).
>>>>
>>>> ML and GAMG use smoothed aggregation, which is well suited for
>>>> elasticity, but to be optimal it needs the null space of the operator,
>>>> which is the 3 or 6 rigid body modes for elasticity. GAMG has an
>>>> interface through which you can give it the coordinates of your vertices,
>>>> and it will create the rotational rigid body modes from them. If you do
>>>> not give it coordinates then it will use only the translational RBMs and,
>>>> in general, will not be as good, but it is still a usable solver.
>>>>
>>>> Mark
>>>>
>>>> On Nov 30, 2011, at 12:25 PM, Stefano Zampini wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I'm trying different (sequential) AMG solvers as black-box
>>>>> preconditioners for almost incompressible elasticity in 3D with spectral
>>>>> elements; specifically, ML and HYPRE (both called from PETSc), but they
>>>>> don't provide good results (at least when used via the PETSc interface).
>>>>> I wish to test the new GAMG preconditioner from current petsc-dev.
>>>>>
>>>>> I wish to test the solver either with essential boundary conditions on
>>>>> one face, or with pure Neumann boundaries. Can you (I think Mark Adams
>>>>> is the one I'm talking with) give me some hints on the customization of
>>>>> the preconditioner?
>>>>>
>>>>> Regards,
>>>>> --
>>>>> Stefano
>>>>
>>>> --
>>>> Stefano