Hi,

Sorry, I made a mistake: ex2f uses the default PC and KSP, so it is actually using GMRES, not LU. I've tested some other ML options such as -pc_ml_CoarsenScheme, -pc_mg_cycles and -pc_ml_maxNlevels, but the solve is much slower, when it converges at all (a rough sketch of the command lines I tried is at the end of this message, below the quoted thread). Also, for the CoarsenScheme, if I use METIS, I get this error message:
*ML*WRN* This function has been compiled without the configure
*ML*WRN* option --with-ml_metis
*ML*WRN* I will put all the nodes in the same aggregate, this time...
*ML*WRN* (file ./Coarsen/ml_agg_METIS.c, line 954)

Thanks

Barry Smith wrote:
> On Fri, 17 Aug 2007, Ben Tay wrote:
>
>> Hi,
>>
>> I tried to test ml using "-pc_type ml" on my problem. It was working ok using
>> LU or hypre/boomeramg. However I got the warning:
>>
>> Gen_Prolongator warning : max eigen <= 0.0
>> Gen_Prolongator warning : max eigen <= 0.0
>> Gen_Prolongator warning : max eigen <= 0.0
>> Gen_Prolongator warning : max eigen <= 0.0
>> Gen_Prolongator warning : max eigen <= 0.0
>>
>> When I use hypre, I used "-pc_type hypre -pc_hypre_type boomeramg", which gives
>> good and fast results for solving my poisson eqn. Is there a recommendation
>> for ml as well?
>>
>> Moreover, although I got answers from ml, my answer using ml differs from LU
>> and hypre from the 3rd sig. fig. When I run the example ex2f, I get different
>> norms of error:
>>
>> LU: Norm of error 0.1192E-05, iterations 4
>>
>
> Huh? Why is this not 1.e-14, since LU is a direct solver; do you mean ILU?
>
>> hypre: Norm of error < 1.e-12, iterations 1
>>
>> ml: Norm of error 0.2098E-03, iterations 2
>>
>
> You may need to play around with the -ksp_rtol <rtol> value or options
> in ml (run with -pc_type ml -help to see ML options). Run with
> -ksp_monitor_true_residual and send us the results. We don't have
> a lot of experience with ML on real problems.
>
> Barry
>
>> It seems that the norm of error for ml is a bit too high. How can I make it
>> lower?
>>
>> Thank you.
>>
>> Hong Zhang wrote:
>>
>>> On Thu, 16 Aug 2007, Ben Tay wrote:
>>>
>>>> Btw, is the file to download
>>>> ftp://ftp.mcs.anl.gov/pub/petsc/externalpackages/ml-5.0.tar.gz?
>>>>
>>> Yes. You can simply use '--download-ml=1' during configuration.
>>>
>>> Run your program with '-help' to see all options for using ml.
>>> See ~petsc/src/ksp/ksp/examples/tests/ex26.c on how to use ML.
>>>
>>> Hong
>>>
>>>> Thanks
>>>>
>>>> Matthew Knepley wrote:
>>>>
>>>>> The multigrid PC needs information about the grid and discretization. If
>>>>> you cannot provide this, use AMG through Hypre or ML. Or if you have a
>>>>> structured grid, consider using DMMG.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Matt
>>>>>
>>>>> On 8/15/07, Ben Tay <zonexo at gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I found that there's a multigrid option in the manual which can be used
>>>>>> by adding -pctype mg -pcmgtype PC_MG_MULTIPLICATIVE.
>>>>>>
>>>>>> However, it seems that I am actually just using LU since with or without
>>>>>> this option, I got exactly the same answer. Am I using multigrid the
>>>>>> wrong way?
>>>>>>
>>>>>> Thanks
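
For reference, here is roughly how I invoked ex2f for these tests. The option names are the ones mentioned above; the particular values (number of levels, cycle count, rtol) are only illustrative examples, not necessarily the exact ones I used:

  # default solver (GMRES + default PC), printing the true residual
  ./ex2f -ksp_monitor_true_residual

  # ML as the preconditioner, with its default settings
  ./ex2f -pc_type ml -ksp_monitor_true_residual

  # ML with some of the options varied (illustrative values)
  ./ex2f -pc_type ml -pc_ml_maxNlevels 3 -pc_mg_cycles 2 \
         -pc_ml_CoarsenScheme METIS -ksp_rtol 1.e-12 -ksp_monitor_true_residual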
