Hi,
I've run into a strange problem with PETSc 3.4: linear systems assembled in
our FEM code cannot be solved within 5000 iterations, whereas the same
systems loaded from binary files are solved in only 24 iterations using
ksp/examples/tutorials/ex10.c. The binary files were created by the FEM
code with MatView() and VecView() right before the call to KSPSolve(). The
solver options are -ksp_type bcgs -pc_type bjacobi, and I set the same
tolerances in both programs.
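For reference, the dump is done with something like the following (a
minimal sketch, not the exact code; the file names are placeholders and it
assumes an assembled Mat A and Vec b, with #include <petscksp.h>):

    PetscErrorCode ierr;
    PetscViewer    viewer;
    /* write the assembled matrix and right-hand side to PETSc binary
       files right before KSPSolve(), for later loading with ex10.c */
    ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"eqs_A.dat",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
    ierr = MatView(A,viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
    ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"eqs_rhs.dat",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
    ierr = VecView(b,viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);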
As you can see from the attached log files, both programs report the same
"true resid norm" at the beginning but different "preconditioned resid
norm" values. Does this mean the two programs are actually solving the same
problem but with somewhat different preconditioners? Any clue about this
problem would be very helpful.
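If I understand the monitor output correctly, iteration 0 prints
||b - A x0|| as the true norm and ||B(b - A x0)|| as the preconditioned
norm, so identical true norms with different preconditioned norms would
mean the matrix and right-hand side agree but the preconditioner B does
not. A small check I could add to the FEM code to confirm this (a sketch
only; A, b, x, ksp stand for our objects, and it assumes KSPSetUp() has
been called so the PC is built):

    Vec            r, z;
    PC             pc;
    PetscReal      nr, nz;
    PetscErrorCode ierr;
    ierr = VecDuplicate(b,&r);CHKERRQ(ierr);
    ierr = VecDuplicate(b,&z);CHKERRQ(ierr);
    ierr = MatMult(A,x,r);CHKERRQ(ierr);      /* r = A x0 */
    ierr = VecAYPX(r,-1.0,b);CHKERRQ(ierr);   /* r = b - A x0, the true residual */
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCApply(pc,r,z);CHKERRQ(ierr);     /* z = B r, the preconditioned residual */
    ierr = VecNorm(r,NORM_2,&nr);CHKERRQ(ierr);
    ierr = VecNorm(z,NORM_2,&nz);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD,"true %g, preconditioned %g\n",(double)nr,(double)nz);CHKERRQ(ierr);
    ierr = VecDestroy(&r);CHKERRQ(ierr);
    ierr = VecDestroy(&z);CHKERRQ(ierr);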
Some background: I'm currently facing a convergence problem in the linear
solves of transient heat transport simulations with FEM. At early time
steps, PETSc converges quickly (<20 iterations), but the iteration counts
grow as the simulation proceeds (>5000 after 19 time steps). I'm in the
middle of checking where the slow convergence comes from. Because I don't
see such slow convergence with another linear solver (BiCGSTAB+Jacobi), I
suspect the FEM code is missing some PETSc functions or options. As I
wrote above, if I use ex10.c with the binary files, the convergence
problem disappears, which suggests that something is going wrong in the
FEM code.
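One concrete difference I do see between the two -ksp_view outputs below:
in the ex10 run the matrix reports "allocated nonzeros=295051" and
"mallocs used during MatSetValues calls =0", while in the FEM run it
reports "allocated nonzeros=532355" with 21623 mallocs, so our assembly
clearly does not preallocate exactly. I don't know whether that can
explain the different preconditioned norms, but for completeness, this is
the kind of preallocation I would expect the code to do (a sketch; the
global size 20801 is taken from the logs, and the per-row estimates 15/5
are placeholders -- exact counts via the d_nnz/o_nnz arrays would be
better):

    Mat            A;
    PetscErrorCode ierr;
    ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
    ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,20801,20801);CHKERRQ(ierr);
    ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);
    /* reserve 15 diagonal-block and 5 off-diagonal-block nonzeros per row
       so MatSetValues() triggers no mallocs during assembly */
    ierr = MatMPIAIJSetPreallocation(A,15,NULL,5,NULL);CHKERRQ(ierr);
    /* ... element loop with MatSetValues(A,...,ADD_VALUES) ... */
    ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);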
Thank you in advance,
Nori
Warning: A is non-symmetric
0 KSP preconditioned resid norm 7.759422222567e+03 true resid norm 3.917535137736e+06 ||r(i)||/||b|| 1.000000000000e+00
1 KSP preconditioned resid norm 5.807427260559e+02 true resid norm 1.276936513742e+05 ||r(i)||/||b|| 3.259540677610e-02
2 KSP preconditioned resid norm 2.891912150211e+02 true resid norm 2.349751814935e+04 ||r(i)||/||b|| 5.998036347653e-03
3 KSP preconditioned resid norm 1.778395009090e+02 true resid norm 8.616735111997e+03 ||r(i)||/||b|| 2.199529757626e-03
4 KSP preconditioned resid norm 2.471000973896e+02 true resid norm 1.313832966537e+04 ||r(i)||/||b|| 3.353723503028e-03
5 KSP preconditioned resid norm 1.444803185401e+02 true resid norm 5.992560395690e+03 ||r(i)||/||b|| 1.529676233907e-03
6 KSP preconditioned resid norm 1.161370209194e+01 true resid norm 7.546260179981e+02 ||r(i)||/||b|| 1.926277599221e-04
7 KSP preconditioned resid norm 5.540352515993e+00 true resid norm 5.890511298943e+02 ||r(i)||/||b|| 1.503626921480e-04
8 KSP preconditioned resid norm 1.781183107829e+00 true resid norm 1.311939570278e+02 ||r(i)||/||b|| 3.348890371500e-05
9 KSP preconditioned resid norm 2.067029930839e+00 true resid norm 1.734170154741e+02 ||r(i)||/||b|| 4.426686918608e-05
10 KSP preconditioned resid norm 4.796737103885e-01 true resid norm 3.240034175211e+01 ||r(i)||/||b|| 8.270593782303e-06
11 KSP preconditioned resid norm 5.111216604571e-01 true resid norm 5.540089748118e+01 ||r(i)||/||b|| 1.414177423644e-05
12 KSP preconditioned resid norm 2.490708481038e-01 true resid norm 2.532433039198e+01 ||r(i)||/||b|| 6.464353094894e-06
13 KSP preconditioned resid norm 1.764242167015e+00 true resid norm 1.811088459916e+02 ||r(i)||/||b|| 4.623030544053e-05
14 KSP preconditioned resid norm 2.432902537060e-01 true resid norm 2.514073511347e+01 ||r(i)||/||b|| 6.417488096353e-06
15 KSP preconditioned resid norm 3.231192821375e-02 true resid norm 3.037733709072e+00 ||r(i)||/||b|| 7.754196458407e-07
16 KSP preconditioned resid norm 1.153115326462e-01 true resid norm 1.139488496347e+01 ||r(i)||/||b|| 2.908687366632e-06
17 KSP preconditioned resid norm 3.132201228685e-03 true resid norm 1.790558444570e-01 ||r(i)||/||b|| 4.570625103837e-08
18 KSP preconditioned resid norm 2.250133975421e-02 true resid norm 1.843165024539e+00 ||r(i)||/||b|| 4.704910000128e-07
19 KSP preconditioned resid norm 6.342383471101e-04 true resid norm 3.577122331312e-02 ||r(i)||/||b|| 9.131053597591e-09
20 KSP preconditioned resid norm 1.307272883338e-04 true resid norm 9.676180590148e-03 ||r(i)||/||b|| 2.469966509538e-09
21 KSP preconditioned resid norm 1.562545210500e-05 true resid norm 8.061029194454e-04 ||r(i)||/||b|| 2.057678849337e-10
22 KSP preconditioned resid norm 3.089429885996e-06 true resid norm 2.698469439954e-04 ||r(i)||/||b|| 6.888181841588e-11
23 KSP preconditioned resid norm 2.130974045078e-06 true resid norm 1.613097849834e-04 ||r(i)||/||b|| 4.117634668533e-11
24 KSP preconditioned resid norm 2.559318656576e-07 true resid norm 1.538155020293e-05 ||r(i)||/||b|| 3.926333692520e-12
Linear solve converged due to CONVERGED_RTOL iterations 24
KSP Object: 4 MPI processes
  type: bcgs
  maximum iterations=5000, initial guess is zero
  tolerances: relative=1e-10, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 4 MPI processes
  type: bjacobi
    block Jacobi: number of blocks = 4
    Local solve is same for all blocks, in the following KSP and PC objects:
  KSP Object: (sub_) 1 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000
    left preconditioning
    using NONE norm type for convergence test
  PC Object: (sub_) 1 MPI processes
    type: ilu
      ILU: out-of-place factorization
      0 levels of fill
      tolerance for zero pivot 2.22045e-14
      using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
      matrix ordering: natural
      factor fill ratio given 1, needed 1
      Factored matrix follows:
        Matrix Object: 1 MPI processes
          type: seqaij
          rows=5201, cols=5201
          package used to perform factorization: petsc
          total: nonzeros=71369, allocated nonzeros=71369
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    linear system matrix = precond matrix:
    Matrix Object: 1 MPI processes
      type: seqaij
      rows=5201, cols=5201
      total: nonzeros=71369, allocated nonzeros=71369
      total number of mallocs used during MatSetValues calls =0
        not using I-node routines
  linear system matrix = precond matrix:
  Matrix Object: 4 MPI processes
    type: mpiaij
    rows=20801, cols=20801
    total: nonzeros=295051, allocated nonzeros=295051
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
Number of iterations = 24
Residual norm 1.53816e-05
Number of iterations = 24
Residual norm 1.53816e-05
************************************************************************************************************************
*** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document ***
************************************************************************************************************************
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
./ex10 on a linux-gnu named envinf53 with 4 processors, by localadmin Mon Jan 6 17:24:31 2014
Using Petsc Release Version 3.4.3, Oct, 15, 2013
Max Max/Min Avg Total
Time (sec): 1.086e-01 1.00003 1.086e-01
Objects: 9.100e+01 1.00000 9.100e+01
Flops: 2.156e+07 1.02475 2.128e+07 8.511e+07
Flops/sec: 1.986e+08 1.02475 1.960e+08 7.839e+08
MPI Messages: 2.640e+02 1.02326 2.595e+02 1.038e+03
MPI Message Lengths: 1.891e+06 2.45505 4.326e+03 4.490e+06
MPI Reductions: 1.750e+02 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 2.2119e-04 0.2% 0.0000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
1: Load system: 4.3493e-02 40.1% 0.0000e+00 0.0% 1.320e+02 12.7% 2.906e+03 67.2% 4.100e+01 23.4%
2: KSPSetUpSolve: 6.4855e-02 59.7% 8.5113e+07 100.0% 9.060e+02 87.3% 1.420e+03 32.8% 1.330e+02 76.0%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %f - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
PetscBarrier 1 1.0 1.0967e-05 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 4 0 0 0 0 0
--- Event Stage 1: Load system
MatAssemblyBegin 2 1.0 6.5989e-0317.6 0.00e+00 0.0 3.6e+01 3.7e+03 4.0e+00 3 0 3 3 2 9 0 27 4 10 0
MatAssemblyEnd 2 1.0 4.0488e-03 1.1 0.00e+00 0.0 4.8e+01 4.1e+02 1.6e+01 4 0 5 0 9 9 0 36 1 39 0
MatLoad 1 1.0 2.7602e-02 1.0 0.00e+00 0.0 3.3e+01 8.3e+04 1.7e+01 25 0 3 61 10 63 0 25 90 41 0
MatTranspose 1 1.0 1.3076e-02 1.0 0.00e+00 0.0 9.6e+01 1.7e+03 1.7e+01 12 0 9 4 10 30 0 73 5 41 0
VecSet 3 1.0 1.5974e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAssemblyBegin 1 1.0 4.3988e-0414.6 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 2 0 0 0 0 7 0
VecAssemblyEnd 1 1.0 2.8610e-06 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecLoad 1 1.0 9.1720e-04 1.0 0.00e+00 0.0 3.0e+00 4.2e+04 4.0e+00 1 0 0 3 2 2 0 2 4 10 0
--- Event Stage 2: KSPSetUpSolve
MatMult 75 1.0 2.7505e-02 1.1 1.08e+07 1.0 9.0e+02 1.6e+03 0.0e+00 25 50 87 33 0 41 50 99100 0 1552
MatSolve 49 1.0 1.7800e-02 1.0 6.86e+06 1.0 0.0e+00 0.0e+00 0.0e+00 16 32 0 0 0 27 32 0 0 0 1521
MatLUFactorNum 1 1.0 4.6430e-03 1.0 6.63e+05 1.1 0.0e+00 0.0e+00 0.0e+00 4 3 0 0 0 7 3 0 0 0 550
MatILUFactorSym 1 1.0 1.2360e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00 1 0 0 0 1 2 0 0 0 1 0
MatGetRowIJ 1 1.0 0.0000e+00 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 9.0122e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 1 0 0 0 0 2 0
MatView 3 3.0 8.5521e-04 7.4 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00 0 0 0 0 1 0 0 0 0 1 0
VecDot 48 1.0 3.3863e-03 1.5 4.99e+05 1.0 0.0e+00 0.0e+00 4.8e+01 3 2 0 0 27 4 2 0 0 36 590
VecDotNorm2 24 1.0 2.4080e-03 1.4 4.99e+05 1.0 0.0e+00 0.0e+00 2.4e+01 2 2 0 0 14 3 2 0 0 18 829
VecNorm 53 1.0 1.7185e-03 1.3 5.51e+05 1.0 0.0e+00 0.0e+00 5.3e+01 1 3 0 0 30 2 3 0 0 40 1283
VecCopy 27 1.0 2.1553e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 52 1.0 2.5892e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 2 1.0 1.0252e-05 1.3 2.08e+04 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 8116
VecAYPX 25 1.0 3.8695e-04 1.0 1.30e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 1 1 0 0 0 1344
VecAXPBYCZ 48 1.0 1.2984e-03 1.0 9.99e+05 1.0 0.0e+00 0.0e+00 0.0e+00 1 5 0 0 0 2 5 0 0 0 3076
VecWAXPY 48 1.0 9.5487e-04 1.0 4.99e+05 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 1 2 0 0 0 2091
VecScatterBegin 75 1.0 7.8940e-04 1.3 0.00e+00 0.0 9.0e+02 1.6e+03 0.0e+00 1 0 87 33 0 1 0 99100 0 0
VecScatterEnd 75 1.0 6.6972e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
KSPSetUp 2 1.0 1.4615e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 1 0 0 0 0 2 0
KSPSolve 1 1.0 5.6503e-02 1.0 2.06e+07 1.0 8.8e+02 1.6e+03 1.2e+02 52 95 84 32 70 87 95 97 97 92 1438
PCSetUp 2 1.0 6.1879e-03 1.1 6.63e+05 1.1 0.0e+00 0.0e+00 5.0e+00 5 3 0 0 3 9 3 0 0 4 412
PCSetUpOnBlocks 2 1.0 5.9793e-03 1.1 6.63e+05 1.1 0.0e+00 0.0e+00 3.0e+00 5 3 0 0 2 9 3 0 0 2 427
PCApply 49 1.0 1.9290e-02 1.0 6.86e+06 1.0 0.0e+00 0.0e+00 0.0e+00 17 32 0 0 0 29 32 0 0 0 1403
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
--- Event Stage 1: Load system
Viewer 2 2 1504 0
Matrix 6 3 1062480 0
Vector 7 3 10784 0
Vector Scatter 2 1 1076 0
Index Set 4 4 9248 0
Bipartite Graph 1 1 864 0
--- Event Stage 2: KSPSetUpSolve
Viewer 2 1 736 0
Matrix 1 4 2004796 0
Vector 59 63 2600256 0
Vector Scatter 0 1 1076 0
Index Set 3 3 23108 0
Krylov Solver 2 2 2312 0
Preconditioner 2 2 1864 0
========================================================================================================================
Average time to get PetscTime(): 0
Average time for MPI_Barrier(): 1.19209e-06
Average time for zero size MPI_Send(): 9.53674e-07
#PETSc Option Table entries:
-check_symmetry
-cknorm
-f HEAT_TRANSPORT_19_eqs_A.dat
-ksp_converged_reason
-ksp_max_it 5000
-ksp_monitor_true_residual
-ksp_rtol 1e-10
-ksp_singmonitor
-ksp_type bcgs
-ksp_view
-log_summary
-matload_block_size 1
-options_table
-pc_type bjacobi
-rhs HEAT_TRANSPORT_19_eqs_rhs.dat
-vecload_block_size 1
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Mon Jan 6 14:42:19 2014
Configure options: PETSC_ARCH=linux-gnu --download-f2cblaslapack=1 -with-debugging=0 --download-superlu_dist --download-hypre=1 --download-ml=1 --download-parmetis --download-metis
-----------------------------------------
Libraries compiled on Mon Jan 6 14:42:19 2014 on envinf53
Machine characteristics: Linux-3.5.0-45-generic-x86_64-with-Ubuntu-12.10-quantal
Using PETSc directory: /home/localadmin/tools/petsc/petsc-3.4.3
Using PETSc arch: linux-gnu
-----------------------------------------
Using C compiler: mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90 -fPIC -Wall -Wno-unused-variable -Wno-unused-dummy-argument -O ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/home/localadmin/tools/petsc/petsc-3.4.3/linux-gnu/include -I/home/localadmin/tools/petsc/petsc-3.4.3/include -I/home/localadmin/tools/petsc/petsc-3.4.3/include -I/home/localadmin/tools/petsc/petsc-3.4.3/linux-gnu/include -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/home/localadmin/tools/petsc/petsc-3.4.3/linux-gnu/lib -L/home/localadmin/tools/petsc/petsc-3.4.3/linux-gnu/lib -lpetsc -Wl,-rpath,/home/localadmin/tools/petsc/petsc-3.4.3/linux-gnu/lib -L/home/localadmin/tools/petsc/petsc-3.4.3/linux-gnu/lib -lsuperlu_dist_3.3 -lHYPRE -Wl,-rpath,/usr/lib/openmpi/lib -L/usr/lib/openmpi/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.7 -L/usr/lib/gcc/x86_64-linux-gnu/4.7 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -Wl,-rpath,/home/localadmin/tools/petsc/petsc-3.4.3 -L/home/localadmin/tools/petsc/petsc-3.4.3 -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lf2clapack -lf2cblas -lX11 -lparmetis -lmetis -lpthread -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lm -lquadmath -lm -lmpi_cxx -lstdc++ -ldl -lmpi -lopen-rte -lopen-pal -lnsl -lutil -lgcc_s -lpthread -ldl
-----------------------------------------
*** PETSc linear solver
0 KSP preconditioned resid norm 2.157266014144e+03 true resid norm 3.917535137736e+06 ||r(i)||/||b|| 1.000000000000e+00
1 KSP preconditioned resid norm 3.274734387656e-03 true resid norm 3.922750531632e+06 ||r(i)||/||b|| 1.001331294733e+00
2 KSP preconditioned resid norm 3.274732792359e-03 true resid norm 3.922749983195e+06 ||r(i)||/||b|| 1.001331154738e+00
3 KSP preconditioned resid norm 8.065659075016e-02 true resid norm 2.440783955076e+06 ||r(i)||/||b|| 6.230407307811e-01
4 KSP preconditioned resid norm 5.879055150573e-02 true resid norm 2.440521205069e+06 ||r(i)||/||b|| 6.229736605448e-01
5 KSP preconditioned resid norm 3.986516452024e+00 true resid norm 2.492659210027e+06 ||r(i)||/||b|| 6.362825405230e-01
6 KSP preconditioned resid norm 3.380959922049e+00 true resid norm 2.492266138094e+06 ||r(i)||/||b|| 6.361822039799e-01
7 KSP preconditioned resid norm 1.913390440604e+01 true resid norm 2.282102670839e+06 ||r(i)||/||b|| 5.825353418929e-01
8 KSP preconditioned resid norm 1.423014917374e+01 true resid norm 2.314429181603e+06 ||r(i)||/||b|| 5.907870893891e-01
9 KSP preconditioned resid norm 1.339872186740e+01 true resid norm 2.320884919588e+06 ||r(i)||/||b|| 5.924349975145e-01
10 KSP preconditioned resid norm 1.246610515509e+01 true resid norm 2.320704070856e+06 ||r(i)||/||b|| 5.923888336065e-01
11 KSP preconditioned resid norm 4.621023614696e+01 true resid norm 1.383428488998e+06 ||r(i)||/||b|| 3.531374806754e-01
12 KSP preconditioned resid norm 1.409108330870e+01 true resid norm 2.292610498047e+06 ||r(i)||/||b|| 5.852175966370e-01
13 KSP preconditioned resid norm 1.594858580246e+01 true resid norm 1.169277718279e+06 ||r(i)||/||b|| 2.984728093477e-01
14 KSP preconditioned resid norm 9.633595856381e+00 true resid norm 1.924170542487e+06 ||r(i)||/||b|| 4.911686749027e-01
15 KSP preconditioned resid norm 9.017528970170e+00 true resid norm 1.909345399913e+06 ||r(i)||/||b|| 4.873843712391e-01
16 KSP preconditioned resid norm 1.317116409594e+01 true resid norm 2.250108463740e+06 ||r(i)||/||b|| 5.743684190770e-01
17 KSP preconditioned resid norm 5.409661822988e+00 true resid norm 1.514714239373e+06 ||r(i)||/||b|| 3.866498157943e-01
18 KSP preconditioned resid norm 5.493377827620e+00 true resid norm 1.444168564493e+06 ||r(i)||/||b|| 3.686421470945e-01
19 KSP preconditioned resid norm 5.174973743150e+00 true resid norm 1.453803617299e+06 ||r(i)||/||b|| 3.711016152211e-01
20 KSP preconditioned resid norm 6.005710918232e+00 true resid norm 1.420868474736e+06 ||r(i)||/||b|| 3.626945068213e-01
21 KSP preconditioned resid norm 1.645869925839e+01 true resid norm 2.136470350542e+06 ||r(i)||/||b|| 5.453608647852e-01
22 KSP preconditioned resid norm 5.727146575077e+00 true resid norm 1.784355761284e+06 ||r(i)||/||b|| 4.554791976455e-01
23 KSP preconditioned resid norm 5.635893501723e+00 true resid norm 1.788533915589e+06 ||r(i)||/||b|| 4.565457239581e-01
24 KSP preconditioned resid norm 5.577845134512e+00 true resid norm 1.789775994359e+06 ||r(i)||/||b|| 4.568627801494e-01
(I cut iterations 25-4992 here)
4993 KSP preconditioned resid norm 2.425921637870e-06 true resid norm 2.478969889105e+02 ||r(i)||/||b|| 6.327881695881e-05
4994 KSP preconditioned resid norm 2.421114111186e-06 true resid norm 2.475799772355e+02 ||r(i)||/||b|| 6.319789575098e-05
4995 KSP preconditioned resid norm 2.529735477156e-06 true resid norm 2.545181533047e+02 ||r(i)||/||b|| 6.496895225088e-05
4996 KSP preconditioned resid norm 2.409531429586e-06 true resid norm 2.466810616803e+02 ||r(i)||/||b|| 6.296843627619e-05
4997 KSP preconditioned resid norm 2.391549002732e-06 true resid norm 2.451665468558e+02 ||r(i)||/||b|| 6.258183736354e-05
4998 KSP preconditioned resid norm 2.382407177679e-06 true resid norm 2.436743312709e+02 ||r(i)||/||b|| 6.220093061162e-05
4999 KSP preconditioned resid norm 2.379847475477e-06 true resid norm 2.436980492854e+02 ||r(i)||/||b|| 6.220698493243e-05
5000 KSP preconditioned resid norm 2.377903312632e-06 true resid norm 2.438502703874e+02 ||r(i)||/||b|| 6.224584127874e-05
Linear solve did not converge due to DIVERGED_ITS iterations 5000
KSP Object: 4 MPI processes
  type: bcgs
  maximum iterations=5000, initial guess is zero
  tolerances: relative=1e-10, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 4 MPI processes
  type: bjacobi
    block Jacobi: number of blocks = 4
    Local solve is same for all blocks, in the following KSP and PC objects:
  KSP Object: (sub_) 1 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000
    left preconditioning
    using NONE norm type for convergence test
  PC Object: (sub_) 1 MPI processes
    type: ilu
      ILU: out-of-place factorization
      0 levels of fill
      tolerance for zero pivot 2.22045e-14
      using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
      matrix ordering: natural
      factor fill ratio given 1, needed 1
      Factored matrix follows:
        Matrix Object: 1 MPI processes
          type: seqaij
          rows=5201, cols=5201
          package used to perform factorization: petsc
          total: nonzeros=71369, allocated nonzeros=71369
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    linear system matrix = precond matrix:
    Matrix Object: 1 MPI processes
      type: seqaij
      rows=5201, cols=5201
      total: nonzeros=71369, allocated nonzeros=105400
      total number of mallocs used during MatSetValues calls =5293
        not using I-node routines
  linear system matrix = precond matrix:
  Matrix Object: 4 MPI processes
    type: mpiaij
    rows=20801, cols=20801
    total: nonzeros=295051, allocated nonzeros=532355
    total number of mallocs used during MatSetValues calls =21623
      not using I-node (on process 0) routines
************************************************************************************************************************
*** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document ***
************************************************************************************************************************
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
/home/localadmin/ogs/ogs5/ogs5-nw/BuildPETScRelease/bin/ogs on a linux-gnu named envinf53 with 4 processors, by localadmin Mon Jan 6 17:44:24 2014
Using Petsc Release Version 3.4.3, Oct, 15, 2013
Max Max/Min Avg Total
Time (sec): 1.471e+02 1.00000 1.471e+02
Objects: 6.430e+04 1.00000 6.430e+04
Flops: 2.688e+10 1.02441 2.654e+10 1.062e+11
Flops/sec: 1.828e+08 1.02441 1.804e+08 7.218e+08
MPI Messages: 2.877e+05 1.00208 2.873e+05 1.149e+06
MPI Message Lengths: 6.462e+08 1.68210 1.850e+03 2.127e+09
MPI Reductions: 1.610e+05 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 1.4710e+02 100.0% 1.0617e+11 100.0% 1.149e+06 100.0% 1.850e+03 100.0% 1.610e+05 100.0%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %f - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
VecView 209 1.0 6.4690e+01 1.2 0.00e+00 0.0 6.3e+02 1.4e+05 0.0e+00 41 0 0 4 0 41 0 0 4 0 0
VecDot 63682 1.0 1.5392e+00 2.0 6.62e+08 1.0 0.0e+00 0.0e+00 6.4e+04 1 2 0 0 40 1 2 0 0 40 1721
VecDotNorm2 31841 1.0 8.1543e-01 1.2 6.62e+08 1.0 0.0e+00 0.0e+00 3.2e+04 1 2 0 0 20 1 2 0 0 20 3249
VecNorm 63796 1.0 6.3157e-01 1.1 6.64e+08 1.0 0.0e+00 0.0e+00 6.4e+04 0 2 0 0 40 0 2 0 0 40 4202
VecCopy 31955 1.0 8.0598e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 63988 1.0 1.4214e-01 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAYPX 31879 1.0 1.9763e-01 1.1 1.66e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 3355
VecAXPBYCZ 63682 1.0 6.3536e-01 1.0 1.32e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 5 0 0 0 0 5 0 0 0 8340
VecWAXPY 63682 1.0 4.4432e-01 1.0 6.62e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 5963
VecAssemblyBegin 190 1.0 8.1991e+0053.9 0.00e+00 0.0 6.1e+02 1.4e+03 5.7e+02 3 0 0 0 0 3 0 0 0 0 0
VecAssemblyEnd 190 1.0 4.0865e-04 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecScatterBegin 95637 1.0 2.8011e-01 1.2 0.00e+00 0.0 1.1e+06 1.7e+03 7.6e+01 0 0100 91 0 0 0100 91 0 0
VecScatterEnd 95561 1.0 2.7974e-01 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatMult 95561 1.0 1.1960e+01 1.0 1.38e+10 1.0 1.1e+06 1.6e+03 0.0e+00 8 51100 88 0 8 51100 88 0 4549
MatSolve 63720 1.0 8.0361e+00 1.0 8.93e+09 1.0 0.0e+00 0.0e+00 0.0e+00 5 33 0 0 0 5 33 0 0 0 4381
MatLUFactorNum 2 1.0 3.3648e-03 1.2 1.31e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1495
MatILUFactorSym 2 1.0 6.6400e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 114 1.0 2.7165e-03 2.1 0.00e+00 0.0 3.4e+02 1.2e+04 2.3e+02 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 114 1.0 3.6252e-02 1.0 0.00e+00 0.0 4.8e+01 4.1e+02 1.3e+02 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 2 1.0 1.1921e-06 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 2 1.0 7.7963e-05 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 38 1.0 2.3112e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatView 152 2.0 6.4451e-01 2.5 0.00e+00 0.0 5.7e+02 1.8e+05 3.8e+01 0 0 0 5 0 0 0 0 5 0 0
KSPSetUp 4 1.0 1.4186e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 38 1.0 2.5419e+01 1.0 2.69e+10 1.0 1.1e+06 1.6e+03 1.6e+05 17100100 88 99 17100100 88 99 4177
PCSetUp 4 1.0 4.3221e-03 1.2 1.31e+06 1.1 0.0e+00 0.0e+00 1.0e+01 0 0 0 0 0 0 0 0 0 0 1164
PCSetUpOnBlocks 38 1.0 4.1878e-03 1.2 1.31e+06 1.1 0.0e+00 0.0e+00 6.0e+00 0 0 0 0 0 0 0 0 0 0 1201
PCApply 63720 1.0 8.6356e+00 1.0 8.93e+09 1.0 0.0e+00 0.0e+00 0.0e+00 6 33 0 0 0 6 33 0 0 0 4077
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Vector 63991 63760 2751798384 0
Vector Scatter 78 0 0 0
Matrix 8 0 0 0
Krylov Solver 4 0 0 0
Preconditioner 4 0 0 0
Viewer 134 133 99104 0
Index Set 86 80 67008 0
========================================================================================================================
Average time to get PetscTime(): 0
Average time for MPI_Barrier(): 1.19209e-06
Average time for zero size MPI_Send(): 7.15256e-07
#PETSc Option Table entries:
-ksp_converged_reason
-ksp_max_it 5000
-ksp_monitor_true_residual
-ksp_rtol 1e-10
-ksp_singmonitor
-ksp_type bcgs
-ksp_view
-log_summary
-options_table
-pc_type bjacobi
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Mon Jan 6 14:42:19 2014
Configure options: PETSC_ARCH=linux-gnu --download-f2cblaslapack=1 -with-debugging=0 --download-superlu_dist --download-hypre=1 --download-ml=1 --download-parmetis --download-metis
-----------------------------------------
Libraries compiled on Mon Jan 6 14:42:19 2014 on envinf53
Machine characteristics: Linux-3.5.0-45-generic-x86_64-with-Ubuntu-12.10-quantal
Using PETSc directory: /home/localadmin/tools/petsc/petsc-3.4.3
Using PETSc arch: linux-gnu
-----------------------------------------
Using C compiler: mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90 -fPIC -Wall -Wno-unused-variable -Wno-unused-dummy-argument -O ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/home/localadmin/tools/petsc/petsc-3.4.3/linux-gnu/include -I/home/localadmin/tools/petsc/petsc-3.4.3/include -I/home/localadmin/tools/petsc/petsc-3.4.3/include -I/home/localadmin/tools/petsc/petsc-3.4.3/linux-gnu/include -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/home/localadmin/tools/petsc/petsc-3.4.3/linux-gnu/lib -L/home/localadmin/tools/petsc/petsc-3.4.3/linux-gnu/lib -lpetsc -Wl,-rpath,/home/localadmin/tools/petsc/petsc-3.4.3/linux-gnu/lib -L/home/localadmin/tools/petsc/petsc-3.4.3/linux-gnu/lib -lsuperlu_dist_3.3 -lHYPRE -Wl,-rpath,/usr/lib/openmpi/lib -L/usr/lib/openmpi/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.7 -L/usr/lib/gcc/x86_64-linux-gnu/4.7 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -Wl,-rpath,/home/localadmin/tools/petsc/petsc-3.4.3 -L/home/localadmin/tools/petsc/petsc-3.4.3 -lmpi_cxx -lstdc++ -lml -lmpi_cxx -lstdc++ -lf2clapack -lf2cblas -lX11 -lparmetis -lmetis -lpthread -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lm -lquadmath -lm -lmpi_cxx -lstdc++ -ldl -lmpi -lopen-rte -lopen-pal -lnsl -lutil -lgcc_s -lpthread -ldl
-----------------------------------------