I was eventually able to make it run by adopting a very small time step (a Courant number of about 1).
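(For reference, with the usual 1-D advective definition this means roughly CFL = u * dt / dx ~ 1, i.e. dt ~ dx / u on the smallest cells; the exact definition used by my solver may differ.)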

So either my problem is not well suited to AMG, as you said, or I am not using it correctly.

But I guess I should be able to take it from there.

Thanks again for the support!

Dario


On 05/16/2014 01:56 PM, Barry Smith wrote:
    Algebraic multigrid is not for everything.

On May 16, 2014, at 11:49 AM, Dario Isola <[email protected]> wrote:

Thanks a lot for your answers.

I ran it with
-ksp_type gmres -pc_type hypre -pc_hypre_type euclid
and it worked very well. Thanks.
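For reference, a minimal sketch of that same solver configured through the PETSc API instead of the options database. This is written against the modern API (in petsc 3.3, KSPSetOperators takes an extra MatStructure argument and MatCreateVecs is spelled MatGetVecs); the 1-D Laplacian is only a stand-in system, and error checking is omitted for brevity.

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      x, b;
  KSP      ksp;
  PC       pc;
  PetscInt i, n = 100;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Assemble a 1-D Laplacian as a stand-in linear system. */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A);
  MatSetUp(A);
  for (i = 0; i < n; i++) {
    if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
    if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
    MatSetValue(A, i, i, 2.0, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MatCreateVecs(A, &x, &b);
  VecSet(b, 1.0);

  /* GMRES preconditioned with hypre's Euclid, the combination above. */
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetType(ksp, KSPGMRES);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCHYPRE);
  PCHYPRESetType(pc, "euclid");
  KSPSetFromOptions(ksp);  /* command-line options can still override */
  KSPSolve(ksp, b, x);

  VecDestroy(&x); VecDestroy(&b); MatDestroy(&A); KSPDestroy(&ksp);
  PetscFinalize();
  return 0;
}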

I then tried to use boomeramg as a preconditioner coupled with Richardson, but I was not successful: it failed to solve the system and returned NaNs. I ran with
-ksp_type richardson -pc_type hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_relax_type_all SOR/Jacobi -pc_hypre_boomeramg_print_debug -ksp_view -ksp_monitor_true_residual
and I got the following:

  ===== Proc = 0     Level = 0  =====
Proc = 0    Coarsen 1st pass = 0.000000
Proc = 0    Coarsen 2nd pass = 0.000000
Proc = 0    Initialize CLJP phase = 0.000000
Proc = 0  iter 1  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 18308 nc_offd = 0
Proc = 0  iter 2  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 7 nc_offd = 0

  ===== Proc = 0     Level = 1  =====
Proc = 0    Coarsen 1st pass = 0.010000
Proc = 0    Coarsen 2nd pass = 0.000000
Proc = 0    Initialize CLJP phase = 0.000000
Proc = 0  iter 1  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 8725 nc_offd = 0
Proc = 0  iter 2  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 16 nc_offd = 0

  ===== Proc = 0     Level = 2  =====
Proc = 0    Coarsen 1st pass = 0.000000
Proc = 0    Coarsen 2nd pass = 0.000000
Proc = 0    Initialize CLJP phase = 0.000000
Proc = 0  iter 1  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 4721 nc_offd = 0
Proc = 0  iter 2  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 13 nc_offd = 0
Proc = 0  iter 3  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 4 nc_offd = 0

  ===== Proc = 0     Level = 3  =====
Proc = 0    Coarsen 1st pass = 0.000000
Proc = 0    Coarsen 2nd pass = 0.000000
Proc = 0    Initialize CLJP phase = 0.000000
Proc = 0  iter 1  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 2495 nc_offd = 0
Proc = 0  iter 2  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 22 nc_offd = 0
Proc = 0  iter 3  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 4 nc_offd = 0

  ===== Proc = 0     Level = 4  =====
Proc = 0    Coarsen 1st pass = 0.000000
Proc = 0    Coarsen 2nd pass = 0.000000
Proc = 0    Initialize CLJP phase = 0.000000
Proc = 0  iter 1  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 1337 nc_offd = 0
Proc = 0  iter 2  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 13 nc_offd = 0
Proc = 0  iter 3  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 2 nc_offd = 0

  ===== Proc = 0     Level = 5  =====
Proc = 0    Coarsen 1st pass = 0.000000
Proc = 0    Coarsen 2nd pass = 0.000000
Proc = 0    Initialize CLJP phase = 0.000000
Proc = 0  iter 1  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 695 nc_offd = 0
Proc = 0  iter 2  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 3 nc_offd = 0

  ===== Proc = 0     Level = 6  =====
Proc = 0    Coarsen 1st pass = 0.000000
Proc = 0    Coarsen 2nd pass = 0.000000
Proc = 0    Initialize CLJP phase = 0.000000
Proc = 0  iter 1  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 343 nc_offd = 0
Proc = 0  iter 2  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 21 nc_offd = 0
Proc = 0  iter 3  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 2 nc_offd = 0

  ===== Proc = 0     Level = 7  =====
Proc = 0    Coarsen 1st pass = 0.000000
Proc = 0    Coarsen 2nd pass = 0.000000
Proc = 0    Initialize CLJP phase = 0.000000
Proc = 0  iter 1  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 174 nc_offd = 0
Proc = 0  iter 2  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 15 nc_offd = 0
Proc = 0  iter 3  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 2 nc_offd = 0

  ===== Proc = 0     Level = 8  =====
Proc = 0    Coarsen 1st pass = 0.000000
Proc = 0    Coarsen 2nd pass = 0.000000
Proc = 0    Initialize CLJP phase = 0.000000
Proc = 0  iter 1  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 81 nc_offd = 0
Proc = 0  iter 2  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 13 nc_offd = 0

  ===== Proc = 0     Level = 9  =====
Proc = 0    Coarsen 1st pass = 0.000000
Proc = 0    Coarsen 2nd pass = 0.000000
Proc = 0    Initialize CLJP phase = 0.000000
Proc = 0  iter 1  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 37 nc_offd = 0
Proc = 0  iter 2  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 6 nc_offd = 0

  ===== Proc = 0     Level = 10  =====
Proc = 0    Coarsen 1st pass = 0.000000
Proc = 0    Coarsen 2nd pass = 0.000000
Proc = 0    Initialize CLJP phase = 0.000000
Proc = 0  iter 1  comm. and subgraph update = 0.000000
Proc = 0    CLJP phase = 0.000000 graph_size = 11 nc_offd = 0


   0 KSP preconditioned resid norm 7.299769365830e+14 true resid norm 8.197927963033e-03 ||r(i)||/||b|| 1.000000000000e+00
   1 KSP preconditioned resid norm 2.319459389445e+28 true resid norm 6.152576199945e+12 ||r(i)||/||b|| 7.505038136086e+14
KSP Object: 1 MPI processes
   type: richardson
     Richardson: damping factor=1
   maximum iterations=90, initial guess is zero
   tolerances:  relative=0.1, absolute=1e-50, divergence=100000
   left preconditioning
   using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
   type: hypre
     HYPRE BoomerAMG preconditioning
     HYPRE BoomerAMG: Cycle type V
     HYPRE BoomerAMG: Maximum number of levels 25
     HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
     HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
     HYPRE BoomerAMG: Threshold for strong coupling 0.25
     HYPRE BoomerAMG: Interpolation truncation factor 0
     HYPRE BoomerAMG: Interpolation: max elements per row 0
     HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
     HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
     HYPRE BoomerAMG: Maximum row sums 0.9
     HYPRE BoomerAMG: Sweeps down         1
     HYPRE BoomerAMG: Sweeps up           1
     HYPRE BoomerAMG: Sweeps on coarse    1
     HYPRE BoomerAMG: Relax down          SOR/Jacobi
     HYPRE BoomerAMG: Relax up            SOR/Jacobi
     HYPRE BoomerAMG: Relax on coarse     Gaussian-elimination
     HYPRE BoomerAMG: Relax weight  (all)      1
     HYPRE BoomerAMG: Outer relax weight (all) 1
     HYPRE BoomerAMG: Using CF-relaxation
     HYPRE BoomerAMG: Measure type        local
     HYPRE BoomerAMG: Coarsen type        Falgout
     HYPRE BoomerAMG: Interpolation type  classical
   linear system matrix = precond matrix:
   Matrix Object:   1 MPI processes
     type: seqbaij
     rows=22905, cols=22905, bs=5
     total: nonzeros=785525, allocated nonzeros=785525
     total number of mallocs used during MatSetValues calls =0
         block size is 5
Do you guys have any suggestions? Is it possible that I haven't initialized boomeramg properly? Or is it just that my system of equations cannot be solved by AMG?
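For reference, the next variant I plan to try, following your earlier advice (a "real" Krylov method instead of Richardson, and AIJ instead of BAIJ; this assumes our code calls MatSetFromOptions so that -mat_type is honored at run time):

-mat_type aij -ksp_type gmres -pc_type hypre -pc_hypre_type boomeramg -ksp_view -ksp_monitor_true_residual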

Sincerely,
Dario




On 05/16/2014 11:54 AM, Barry Smith wrote:
On May 16, 2014, at 10:46 AM, Dario Isola <[email protected]> wrote:


Dear all,

I am investigating the use of hypre+petsc. I was able to successfully configure, install, and compile petsc 3.3 with the external package for hypre.

I tried to run it with the following options
-pc_type hypre -pc_type_hypre pilut -ksp_type richardson
and, although it did not complain, it did not solve the system either.

    Do you mean it did not converge? At first, always run with -ksp_view (or -snes_view if using SNES, or -ts_view if using TS) and -ksp_monitor_true_residual to see what is going on.
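For example, a full invocation might look like this (./myapp is a placeholder for your executable):

mpiexec -n 2 ./myapp -ksp_type gmres -pc_type hypre -pc_hypre_type pilut -ksp_view -ksp_monitor_true_residual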


-pc_type_hypre pilut

is wrong; it should be -pc_hypre_type pilut.

Note that pilut will generally not work with Richardson; you need a “real” Krylov method like GMRES.

Also, the ILU-type preconditioners don’t scale particularly well, though occasionally they can be fine.


To what extent is hypre supported by petsc? More specifically, which matrix types does it work with?

    If it cannot handle the matrix type, it will give an error message. Hypre uses a format like AIJ, so you should use AIJ. Note that you can make the matrix type a runtime option, so you don’t have to hardwire into the code that it is BAIJ.
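A minimal sketch of what that looks like in code (the size n and the assembly are placeholders):

  Mat      A;
  PetscInt n = 22905;   /* global size; placeholder value */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A); /* honors -mat_type aij, baij, ... at run time */
  MatSetUp(A);
  /* ... assemble with MatSetValues() as usual ... */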



I am using a baij matrix.

Thanks in advance,
D

