Hi,
I have attached the new results.
Thank you
Yours sincerely,
TAY wee-beng
On 2/11/2015 12:27 PM, Barry Smith wrote:
Run without the -momentum_ksp_view and -poisson_ksp_view options and send the new results.
You can see from the log summary that PCSetUp is taking a much smaller
percentage of the time, meaning that it is reusing the preconditioner and not
rebuilding it each time.
Barry
Something makes no sense with the output: it gives
KSPSolve 199 1.0 2.3298e+03 1.0 5.20e+09 1.8 3.8e+04 9.9e+05
5.0e+02 90100 66100 24 90100 66100 24 165
90% of the time is in the solve, but there is no significant amount of time in
other events of the code, which is just not possible. I hope it is due to your
I/O.
On Nov 1, 2015, at 10:02 PM, TAY wee-beng <[email protected]> wrote:
Hi,
I have attached the new run with 100 time steps for 48 and 96 cores.
Only the Poisson eqn's RHS changes; the LHS doesn't. So if I want to reuse the
preconditioner, what must I do? Or what must I not do?
Why does the time increase so much when I increase the number of processes? Is
there something wrong with my coding? It seems to be the case for my new run too.
Thank you
Yours sincerely,
TAY wee-beng
On 2/11/2015 9:49 AM, Barry Smith wrote:
If you are doing many time steps with the same linear solver then you MUST
do your weak scaling studies with MANY time steps, since the setup time of AMG
only takes place in the first time step. So run both 48 and 96 processes with
the same large number of time steps.
Barry
On Nov 1, 2015, at 7:35 PM, TAY wee-beng <[email protected]> wrote:
Hi,
Sorry, I forgot and used the old a.out. I have attached the new log for 48 cores
(log48), together with the 96-core log (log96).
Why does the time increase so much when I increase the number of processes? Is
there something wrong with my coding?
Only the Poisson eqn's RHS changes; the LHS doesn't. So if I want to reuse the
preconditioner, what must I do? Or what must I not do?
Lastly, I only simulated 2 time steps previously. Now I have run for 10 time steps
(log48_10). Is it building the preconditioner at every time step?
Also, what about the momentum eqn? Is it working well?
I will try the gamg later too.
Thank you
Yours sincerely,
TAY wee-beng
On 2/11/2015 12:30 AM, Barry Smith wrote:
You used gmres with 48 processes but richardson with 96. You need to be
careful and make sure you don't change the solvers when you change the number
of processors, since you can get very different, inconsistent results.
Anyway, all the time is being spent in the BoomerAMG algebraic multigrid
setup, and it is scaling badly. When you double the problem size and number
of processes, it went from 3.2445e+01 to 4.3599e+02 seconds.
PCSetUp 3 1.0 3.2445e+01 1.0 9.58e+06 2.0 0.0e+00 0.0e+00
4.0e+00 62 8 0 0 4 62 8 0 0 5 11
PCSetUp 3 1.0 4.3599e+02 1.0 9.58e+06 2.0 0.0e+00 0.0e+00
4.0e+00 85 18 0 0 6 85 18 0 0 6 2
Now, is the Poisson problem changing at each time step, or can you use the same
preconditioner built with BoomerAMG for all the time steps? Algebraic multigrid
has a large setup time that often doesn't matter if you have many time
steps, but if you have to rebuild it each time step it may be too large.
You might also try -pc_type gamg and see how PETSc's algebraic multigrid
scales for your problem/machine.
Barry
On Nov 1, 2015, at 7:30 AM, TAY wee-beng <[email protected]> wrote:
On 1/11/2015 10:00 AM, Barry Smith wrote:
On Oct 31, 2015, at 8:43 PM, TAY wee-beng <[email protected]> wrote:
On 1/11/2015 12:47 AM, Matthew Knepley wrote:
On Sat, Oct 31, 2015 at 11:34 AM, TAY wee-beng <[email protected]> wrote:
Hi,
I understand that, as mentioned in the FAQ, the scaling is not linear due to
memory limitations. So I am trying to write a proposal to use a
supercomputer.
Its specs are:
Compute nodes: 82,944 nodes (SPARC64 VIIIfx; 16 GB of memory per node)
8 cores / processor
Interconnect: Tofu (6-dimensional mesh/torus)
Each cabinet contains 96 computing nodes.
One of the requirements is to give the performance of my current code with my
current set of data, and there is a formula to calculate the estimated parallel
efficiency when using the new, larger data set.
There are 2 ways to give performance (standard efficiency formulas for both are
sketched just after this list):
1. Strong scaling, which is defined as how the elapsed time varies with the
number of processors for a fixed total problem size.
2. Weak scaling, which is defined as how the elapsed time varies with the
number of processors for a fixed problem size per processor.
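For reference (these are the standard textbook definitions, not necessarily the proposal's formulae), with T_n the elapsed time on n processors:

  strong scaling efficiency:  E_n = T_1 / (n * T_n)   (fixed total problem size)
  weak scaling efficiency:    E_n = T_1 / T_n         (fixed problem size per processor, so the total work grows with n)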
I ran my cases with 48 and 96 cores on my current cluster, giving 140 and 90
mins respectively. This is classified as strong scaling.
Cluster specs:
CPU: AMD 6234 2.4 GHz
8 cores / processor (CPU)
6 CPU / node
So 48 cores / node
Not sure about the memory / node
The parallel efficiency ‘En’ for a given degree of parallelism ‘n’ indicates
how efficiently the program is accelerated by parallel processing. ‘En’ is given
by the following formulae. Although their derivations differ between strong and
weak scaling, the derived formulae are the same.
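The proposal's formulae are not reproduced in this thread; for reference, the textbook Amdahl's-law form, with p the parallelizable fraction of the work, is

  S(n) = 1 / ((1 - p) + p/n),    E_n = S(n) / n = 1 / (n(1 - p) + p).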
From the estimated time, my parallel efficiency using Amdahl's law on the
current old cluster was 52.7%.
So are my results acceptable?
For the large data set, if using 2205 nodes (2205 x 8 cores), my expected parallel
efficiency is only 0.5%. The proposal recommends a value of > 50%.
The problem with this analysis is that the estimated serial fraction from
Amdahl's Law changes as a function
of problem size, so you cannot take the strong scaling from one problem and
apply it to another without a
model of this dependence.
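As an illustration only, using the 140 min / 90 min timings quoted above and the simple model T(n) = T_s + T_p / n (which is not necessarily the proposal's formula):

  T_s + T_p/48 = 140 min,  T_s + T_p/96 = 90 min  =>  T_p = 4800 min, T_s = 40 min,

so the fitted serial fraction is T_s / (T_s + T_p) = 40/4840 ≈ 0.8% at this problem size; a different problem size would generally give a different fraction, which is exactly the dependence that needs a model.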
Weak scaling does model changes with problem size, so I would measure weak
scaling on your current
cluster, and extrapolate to the big machine. I realize that this does not make
sense for many scientific
applications, but neither does requiring a certain parallel efficiency.
OK, I checked the results for my weak scaling; the expected parallel efficiency
is even worse. From the formula used, it's obvious it's doing some sort of
exponentially decreasing extrapolation. So unless I can achieve nearly a >90%
speedup when I double the cores and problem size for my current 48/96-core setup,
extrapolating from about 96 nodes to 10,000 nodes will give a much lower expected
parallel efficiency for the new case.
However, it's mentioned in the FAQ that, due to memory requirements, it's impossible
to get a >90% speedup when I double the cores and problem size (i.e. a linear increase
in performance), which means that I can't get a >90% speedup when I double the cores
and problem size for my current 48/96-core setup. Is that so?
What is the output of -ksp_view -log_summary on the problem and then on the
problem doubled in size and number of processors?
Barry
Hi,
I have attached the output
48 cores: log48
96 cores: log96
There are 2 solvers: the momentum linear eqn uses bcgs, while the Poisson eqn
uses hypre BoomerAMG.
Problem size doubled from 158x266x150 to 158x266x300.
So is it fair to say that the main problem does not lie in my programming
skills, but rather in the way the linear equations are solved?
Thanks.
Thanks,
Matt
Is this type of scaling (>50%) possible in PETSc when using 17640
(2205 x 8) cores?
Btw, I do not have access to the system.
--
What most experimenters take for granted before they begin their experiments is
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
<log48.txt><log96.txt>
<log48_10.txt><log48.txt><log96.txt>
<log96_100.txt><log48_100.txt>
0.000000000000000E+000 0.353000000000000 0.000000000000000E+000
90.0000000000000 0.000000000000000E+000 0.000000000000000E+000
1.00000000000000 0.400000000000000 0 -400000
AB,AA,BB -3.41000006697141 3.44100006844383
3.46600006963126 3.40250006661518
size_x,size_y,size_z 158 266 301
body_cg_ini 0.523700833348298 0.778648765134454
7.03282656467989
Warning - length difference between element and cell
max_element_length,min_element_length,min_delta
0.000000000000000E+000 10000000000.0000 1.800000000000000E-002
maximum ngh_surfaces and ngh_vertics are 41 20
minimum ngh_surfaces and ngh_vertics are 28 10
body_cg_ini 0.896813342835977 -0.976707581163755
7.03282656467989
Warning - length difference between element and cell
max_element_length,min_element_length,min_delta
0.000000000000000E+000 10000000000.0000 1.800000000000000E-002
maximum ngh_surfaces and ngh_vertics are 41 20
minimum ngh_surfaces and ngh_vertics are 28 10
min IIB_cell_no 0
max IIB_cell_no 415
final initial IIB_cell_no 2075
min I_cell_no 0
max I_cell_no 468
final initial I_cell_no 2340
size(IIB_cell_u),size(I_cell_u),size(IIB_equal_cell_u),size(I_equal_cell_u)
2075 2340 2075 2340
IIB_I_cell_no_uvw_total1 7635 7644 7643 8279
8271 8297
IIB_I_cell_no_uvw_total2 7647 7646 7643 8271
8274 8266
1 0.00150000 0.35826998 0.36414728 1.27156134
-0.24352631E+04 -0.99308685E+02 0.12633660E+08
escape_time reached, so abort
body 1
implicit forces and moment 1
0.927442607223602 -0.562098081140987 0.170409685651173
0.483779468746378 0.422008389858664 -1.17504373525251
body 2
implicit forces and moment 2
0.569670444239399 0.795659947391087 0.159539659289149
-0.555930483541150 0.172727625010991 1.07040540515635
************************************************************************************************************************
*** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r
-fCourier9' to print this document ***
************************************************************************************************************************
---------------------------------------------- PETSc Performance Summary:
----------------------------------------------
./a.out on a petsc-3.6.2_shared_rel named n12-03 with 96 processors, by wtay
Mon Nov 2 06:33:26 2015
Using Petsc Release Version 3.6.2, Oct, 02, 2015
Max Max/Min Avg Total
Time (sec): 2.616e+03 1.00000 2.616e+03
Objects: 4.300e+01 1.00000 4.300e+01
Flops: 5.204e+09 1.75932 4.008e+09 3.848e+11
Flops/sec: 1.989e+06 1.75932 1.532e+06 1.471e+08
MPI Messages: 4.040e+02 2.00000 3.998e+02 3.838e+04
MPI Message Lengths: 3.953e+08 2.00000 9.784e+05 3.755e+10
MPI Reductions: 1.922e+03 1.00000
Flop counting convention: 1 flop = 1 real number operation of type
(multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N
flops
and VecAXPY() for complex vectors of length N -->
8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- --
Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total
Avg %Total counts %Total
0: Main Stage: 2.6158e+03 100.0% 3.8481e+11 100.0% 3.838e+04 100.0%
9.784e+05 100.0% 1.921e+03 99.9%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting
output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and
PetscLogStagePop().
%T - percent time in this phase %F - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in
this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all
processors)
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops
--- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct
%T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 198 1.0 1.0111e+01 1.4 1.28e+09 1.9 3.8e+04 9.9e+05
0.0e+00 0 25 98100 0 0 25 98100 0 9509
MatSolve 297 1.0 7.3316e+00 1.5 1.78e+09 1.9 0.0e+00 0.0e+00
0.0e+00 0 34 0 0 0 0 34 0 0 0 17756
MatLUFactorNum 99 1.0 9.7915e+00 2.0 9.48e+08 2.0 0.0e+00 0.0e+00
0.0e+00 0 18 0 0 0 0 18 0 0 0 6977
MatILUFactorSym 1 1.0 8.0566e-02 2.4 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatConvert 1 1.0 5.6834e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 100 1.0 1.5968e+0112.5 0.00e+00 0.0 0.0e+00 0.0e+00
2.0e+02 0 0 0 0 10 0 0 0 0 10 0
MatAssemblyEnd 100 1.0 3.0815e+00 1.4 0.00e+00 0.0 7.6e+02 1.7e+05
1.6e+01 0 0 2 0 1 0 0 2 0 1 0
MatGetRowIJ 3 1.0 5.9605e-06 6.2 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 1.4124e-02 4.6 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSetUp 199 1.0 5.0311e-02 7.9 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 199 1.0 2.3644e+03 1.0 5.20e+09 1.8 3.8e+04 9.9e+05
5.0e+02 90100 98100 26 90100 98100 26 163
VecDot 198 1.0 3.5329e+00 2.7 2.00e+08 1.3 0.0e+00 0.0e+00
2.0e+02 0 4 0 0 10 0 4 0 0 10 4254
VecDotNorm2 99 1.0 2.8264e+00 4.4 2.00e+08 1.3 0.0e+00 0.0e+00
9.9e+01 0 4 0 0 5 0 4 0 0 5 5317
VecNorm 198 1.0 6.6515e+00 5.1 2.00e+08 1.3 0.0e+00 0.0e+00
2.0e+02 0 4 0 0 10 0 4 0 0 10 2259
VecCopy 198 1.0 5.1771e-01 1.6 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 696 1.0 9.4293e-01 1.4 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPBYCZ 198 1.0 1.4347e+00 1.5 3.99e+08 1.3 0.0e+00 0.0e+00
0.0e+00 0 8 0 0 0 0 8 0 0 0 20951
VecWAXPY 198 1.0 1.3298e+00 1.3 2.00e+08 1.3 0.0e+00 0.0e+00
0.0e+00 0 4 0 0 0 0 4 0 0 0 11302
VecAssemblyBegin 398 1.0 3.1136e+00 1.9 0.00e+00 0.0 0.0e+00 0.0e+00
1.2e+03 0 0 0 0 62 0 0 0 0 62 0
VecAssemblyEnd 398 1.0 1.3890e-03 2.1 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecScatterBegin 198 1.0 6.5443e-01 2.3 0.00e+00 0.0 3.8e+04 9.9e+05
0.0e+00 0 0 98100 0 0 0 98100 0 0
VecScatterEnd 198 1.0 4.2735e+00 5.6 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
PCSetUp 199 1.0 4.4552e+02 1.0 9.48e+08 2.0 0.0e+00 0.0e+00
4.0e+00 17 18 0 0 0 17 18 0 0 0 153
PCSetUpOnBlocks 99 1.0 9.8690e+00 1.9 9.48e+08 2.0 0.0e+00 0.0e+00
0.0e+00 0 18 0 0 0 0 18 0 0 0 6922
PCApply 297 1.0 7.7572e+00 1.4 1.78e+09 1.9 0.0e+00 0.0e+00
0.0e+00 0 34 0 0 0 0 34 0 0 0 16782
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 7 7 182147036 0
Krylov Solver 3 3 3464 0
Vector 20 20 41709448 0
Vector Scatter 2 2 2176 0
Index Set 7 7 4705612 0
Preconditioner 3 3 3208 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 0.000525999
Average time for zero size MPI_Send(): 8.90593e-06
#PETSc Option Table entries:
-log_summary
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8
sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --with-mpi-dir=/opt/ud/openmpi-1.8.8/
--with-blas-lapack-dir=/opt/ud/intel_xe_2013sp1/mkl/lib/intel64/
--with-debugging=0 --download-hypre=1
--prefix=/home/wtay/Lib/petsc-3.6.2_shared_rel --known-mpi-shared=1
--with-shared-libraries --with-fortran-interfaces=1
-----------------------------------------
Libraries compiled on Sun Oct 18 17:34:07 2015 on hpc12
Machine characteristics:
Linux-3.10.0-123.20.1.el7.x86_64-x86_64-with-centos-7.1.1503-Core
Using PETSc directory: /home/wtay/Codes/petsc-3.6.2
Using PETSc arch: petsc-3.6.2_shared_rel
-----------------------------------------
Using C compiler: /opt/ud/openmpi-1.8.8/bin/mpicc -fPIC -wd1572 -O3
${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /opt/ud/openmpi-1.8.8/bin/mpif90 -fPIC -O3
${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths:
-I/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/include
-I/home/wtay/Codes/petsc-3.6.2/include -I/home/wtay/Codes/petsc-3.6.2/include
-I/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/include
-I/home/wtay/Lib/petsc-3.6.2_shared_rel/include -I/opt/ud/openmpi-1.8.8/include
-----------------------------------------
Using C linker: /opt/ud/openmpi-1.8.8/bin/mpicc
Using Fortran linker: /opt/ud/openmpi-1.8.8/bin/mpif90
Using libraries:
-Wl,-rpath,/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/lib
-L/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/lib -lpetsc
-Wl,-rpath,/home/wtay/Lib/petsc-3.6.2_shared_rel/lib
-L/home/wtay/Lib/petsc-3.6.2_shared_rel/lib -lHYPRE
-Wl,-rpath,/opt/ud/openmpi-1.8.8/lib -L/opt/ud/openmpi-1.8.8/lib
-Wl,-rpath,/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64
-L/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.3
-L/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -lmpi_cxx
-Wl,-rpath,/opt/ud/intel_xe_2013sp1/mkl/lib/intel64
-L/opt/ud/intel_xe_2013sp1/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential
-lmkl_core -lpthread -lm -lX11 -lhwloc -lssl -lcrypto -lmpi_usempi -lmpi_mpifh
-lifport -lifcore -lm -lmpi_cxx -ldl -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib
-L/opt/ud/openmpi-1.8.8/lib -lmpi -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib
-L/opt/ud/openmpi-1.8.8/lib
-Wl,-rpath,/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64
-L/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.3
-L/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib
-limf -lsvml -lirng -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread
-lirc_s -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib -L/opt/ud/openmpi-1.8.8/lib
-Wl,-rpath,/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64
-L/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.3
-L/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -ldl
-----------------------------------------
--------------------------------------------------------------------------
[[61614,1],3]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:
Module: OpenFabrics (openib)
Host: n12-02
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
0.000000000000000E+000 0.353000000000000 0.000000000000000E+000
90.0000000000000 0.000000000000000E+000 0.000000000000000E+000
1.00000000000000 0.400000000000000 0 -400000
AB,AA,BB -2.47900002275128 2.50750002410496
3.46600006963126 3.40250006661518
size_x,size_y,size_z 158 266 150
body_cg_ini 0.523700833348298 0.778648765134454
7.03282656467989
Warning - length difference between element and cell
max_element_length,min_element_length,min_delta
0.000000000000000E+000 10000000000.0000 1.800000000000000E-002
maximum ngh_surfaces and ngh_vertics are 45 22
minimum ngh_surfaces and ngh_vertics are 28 10
body_cg_ini 0.896813342835977 -0.976707581163755
7.03282656467989
Warning - length difference between element and cell
max_element_length,min_element_length,min_delta
0.000000000000000E+000 10000000000.0000 1.800000000000000E-002
maximum ngh_surfaces and ngh_vertics are 45 22
minimum ngh_surfaces and ngh_vertics are 28 10
min IIB_cell_no 0
max IIB_cell_no 429
final initial IIB_cell_no 2145
min I_cell_no 0
max I_cell_no 460
final initial I_cell_no 2300
size(IIB_cell_u),size(I_cell_u),size(IIB_equal_cell_u),size(I_equal_cell_u)
2145 2300 2145 2300
IIB_I_cell_no_uvw_total1 3090 3094 3078 3080
3074 3073
IIB_I_cell_no_uvw_total2 3102 3108 3089 3077
3060 3086
1 0.00150000 0.26454057 0.26151125 1.18591342
-0.76697866E+03 -0.32601415E+02 0.62972429E+07
escape_time reached, so abort
body 1
implicit forces and moment 1
0.862585008111159 -0.514909355150849 0.188664224674766
0.478394001094961 0.368389427717324 -1.05426249343926
body 2
implicit forces and moment 2
0.527315451670885 0.731524817665969 0.148469052731966
-0.515183371217827 0.158120496614554 0.961546178988603
************************************************************************************************************************
*** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r
-fCourier9' to print this document ***
************************************************************************************************************************
---------------------------------------------- PETSc Performance Summary:
----------------------------------------------
./a.out on a petsc-3.6.2_shared_rel named n12-02 with 48 processors, by wtay
Mon Nov 2 06:04:49 2015
Using Petsc Release Version 3.6.2, Oct, 02, 2015
Max Max/Min Avg Total
Time (sec): 8.683e+02 1.00000 8.683e+02
Objects: 4.300e+01 1.00000 4.300e+01
Flops: 5.204e+09 1.75932 3.985e+09 1.913e+11
Flops/sec: 5.993e+06 1.75932 4.589e+06 2.203e+08
MPI Messages: 4.040e+02 2.00000 3.956e+02 1.899e+04
MPI Message Lengths: 3.953e+08 2.00000 9.784e+05 1.858e+10
MPI Reductions: 1.922e+03 1.00000
Flop counting convention: 1 flop = 1 real number operation of type
(multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N
flops
and VecAXPY() for complex vectors of length N -->
8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- --
Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total
Avg %Total counts %Total
0: Main Stage: 8.6829e+02 100.0% 1.9126e+11 100.0% 1.899e+04 100.0%
9.784e+05 100.0% 1.921e+03 99.9%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting
output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and
PetscLogStagePop().
%T - percent time in this phase %F - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in
this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all
processors)
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops
--- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct
%T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 198 1.0 2.1698e+01 2.8 1.28e+09 1.9 1.9e+04 9.9e+05
0.0e+00 2 25 98100 0 2 25 98100 0 2200
MatSolve 297 1.0 1.1486e+01 2.8 1.78e+09 1.9 0.0e+00 0.0e+00
0.0e+00 1 34 0 0 0 1 34 0 0 0 5630
MatLUFactorNum 99 1.0 1.3933e+01 2.1 9.48e+08 2.0 0.0e+00 0.0e+00
0.0e+00 1 18 0 0 0 1 18 0 0 0 2434
MatILUFactorSym 1 1.0 2.7501e-01 4.6 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatConvert 1 1.0 8.8003e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 100 1.0 1.3273e+0154.9 0.00e+00 0.0 0.0e+00 0.0e+00
2.0e+02 1 0 0 0 10 1 0 0 0 10 0
MatAssemblyEnd 100 1.0 4.6471e+00 1.9 0.00e+00 0.0 3.8e+02 1.7e+05
1.6e+01 0 0 2 0 1 0 0 2 0 1 0
MatGetRowIJ 3 1.0 6.1989e-06 0.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 2.9773e-02 5.1 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSetUp 199 1.0 1.4844e-01 4.9 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 199 1.0 6.7551e+02 1.0 5.20e+09 1.8 1.9e+04 9.9e+05
5.0e+02 78100 98100 26 78100 98100 26 283
VecDot 198 1.0 1.1890e+01 9.8 2.00e+08 1.3 0.0e+00 0.0e+00
2.0e+02 1 4 0 0 10 1 4 0 0 10 630
VecDotNorm2 99 1.0 1.0095e+0111.7 2.00e+08 1.3 0.0e+00 0.0e+00
9.9e+01 1 4 0 0 5 1 4 0 0 5 742
VecNorm 198 1.0 1.2050e+0110.0 2.00e+08 1.3 0.0e+00 0.0e+00
2.0e+02 1 4 0 0 10 1 4 0 0 10 622
VecCopy 198 1.0 1.5117e+00 4.2 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 696 1.0 1.8900e+00 3.2 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPBYCZ 198 1.0 3.6260e+00 3.8 3.99e+08 1.3 0.0e+00 0.0e+00
0.0e+00 0 8 0 0 0 0 8 0 0 0 4131
VecWAXPY 198 1.0 2.8821e+00 2.9 2.00e+08 1.3 0.0e+00 0.0e+00
0.0e+00 0 4 0 0 0 0 4 0 0 0 2599
VecAssemblyBegin 398 1.0 3.3092e+00 5.1 0.00e+00 0.0 0.0e+00 0.0e+00
1.2e+03 0 0 0 0 62 0 0 0 0 62 0
VecAssemblyEnd 398 1.0 1.7860e-03 1.7 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecScatterBegin 198 1.0 1.8810e+00 7.3 0.00e+00 0.0 1.9e+04 9.9e+05
0.0e+00 0 0 98100 0 0 0 98100 0 0
VecScatterEnd 198 1.0 1.6243e+0112.2 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00 1 0 0 0 0 1 0 0 0 0 0
PCSetUp 199 1.0 5.5428e+01 1.2 9.48e+08 2.0 0.0e+00 0.0e+00
4.0e+00 6 18 0 0 0 6 18 0 0 0 612
PCSetUpOnBlocks 99 1.0 1.4139e+01 2.1 9.48e+08 2.0 0.0e+00 0.0e+00
0.0e+00 1 18 0 0 0 1 18 0 0 0 2399
PCApply 297 1.0 1.2171e+01 2.8 1.78e+09 1.9 0.0e+00 0.0e+00
0.0e+00 1 34 0 0 0 1 34 0 0 0 5313
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 7 7 182147036 0
Krylov Solver 3 3 3464 0
Vector 20 20 41709448 0
Vector Scatter 2 2 2176 0
Index Set 7 7 4705612 0
Preconditioner 3 3 3208 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 9.39369e-06
Average time for zero size MPI_Send(): 5.21044e-06
#PETSc Option Table entries:
-log_summary
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8
sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --with-mpi-dir=/opt/ud/openmpi-1.8.8/
--with-blas-lapack-dir=/opt/ud/intel_xe_2013sp1/mkl/lib/intel64/
--with-debugging=0 --download-hypre=1
--prefix=/home/wtay/Lib/petsc-3.6.2_shared_rel --known-mpi-shared=1
--with-shared-libraries --with-fortran-interfaces=1
-----------------------------------------
Libraries compiled on Sun Oct 18 17:34:07 2015 on hpc12
Machine characteristics:
Linux-3.10.0-123.20.1.el7.x86_64-x86_64-with-centos-7.1.1503-Core
Using PETSc directory: /home/wtay/Codes/petsc-3.6.2
Using PETSc arch: petsc-3.6.2_shared_rel
-----------------------------------------
Using C compiler: /opt/ud/openmpi-1.8.8/bin/mpicc -fPIC -wd1572 -O3
${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /opt/ud/openmpi-1.8.8/bin/mpif90 -fPIC -O3
${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths:
-I/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/include
-I/home/wtay/Codes/petsc-3.6.2/include -I/home/wtay/Codes/petsc-3.6.2/include
-I/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/include
-I/home/wtay/Lib/petsc-3.6.2_shared_rel/include -I/opt/ud/openmpi-1.8.8/include
-----------------------------------------
Using C linker: /opt/ud/openmpi-1.8.8/bin/mpicc
Using Fortran linker: /opt/ud/openmpi-1.8.8/bin/mpif90
Using libraries:
-Wl,-rpath,/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/lib
-L/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/lib -lpetsc
-Wl,-rpath,/home/wtay/Lib/petsc-3.6.2_shared_rel/lib
-L/home/wtay/Lib/petsc-3.6.2_shared_rel/lib -lHYPRE
-Wl,-rpath,/opt/ud/openmpi-1.8.8/lib -L/opt/ud/openmpi-1.8.8/lib
-Wl,-rpath,/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64
-L/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.3
-L/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -lmpi_cxx
-Wl,-rpath,/opt/ud/intel_xe_2013sp1/mkl/lib/intel64
-L/opt/ud/intel_xe_2013sp1/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential
-lmkl_core -lpthread -lm -lX11 -lhwloc -lssl -lcrypto -lmpi_usempi -lmpi_mpifh
-lifport -lifcore -lm -lmpi_cxx -ldl -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib
-L/opt/ud/openmpi-1.8.8/lib -lmpi -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib
-L/opt/ud/openmpi-1.8.8/lib
-Wl,-rpath,/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64
-L/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.3
-L/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib
-limf -lsvml -lirng -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread
-lirc_s -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib -L/opt/ud/openmpi-1.8.8/lib
-Wl,-rpath,/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64
-L/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.3
-L/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -ldl
-----------------------------------------