Hello,
I am currently solving a 1.2 million by 1.2 million linear system with PETSc
2.3.3, patch 13, using domain decomposition (iterative substructuring with a
Krylov subspace solver). I'm running on a 120-CPU cluster with an InfiniBand
interconnect; each node has two quad-core Xeon X5365 CPUs at 3.0 GHz (8 cores
per node) and 32 GB of RAM.
After running my code, I generate a log using the following:
      CALL PetscLogPrintSummary(PETSC_COMM_WORLD,"log.txt",ierr)
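For reference, here is a stripped-down sketch of where that call sits in the
program (the assembly and solve are elided; as I understand it, PetscLogBegin
is needed to activate logging when -log_summary is not passed on the command
line, and the include path below is the one the 2.3.x Fortran examples use):

      program main
      implicit none
#include "include/finclude/petsc.h"
      PetscErrorCode ierr

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
      call PetscLogBegin(ierr)    ! start collecting timing/flop data

!     ... assemble the 1.2M x 1.2M system and run the KSP solve ...

      call PetscLogPrintSummary(PETSC_COMM_WORLD,"log.txt",ierr)
      call PetscFinalize(ierr)
      end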
Looking at the log output, I noticed that the peak and average flop rates seem
fairly low -- in one run, a peak of roughly 6.2e7 flops/sec per process with an
average of 5.3e7. The exact log output is:
                         Max       Max/Min     Avg        Total
Time (sec):           5.283e+01   1.02357   5.169e+01
Objects:              2.600e+02   1.00000   2.600e+02
Flops:                3.187e+09   1.69853   2.721e+09  3.265e+11
Flops/sec:            6.165e+07   1.69865   5.264e+07  6.317e+09
Memory:               6.081e+07   1.39801              6.608e+09
MPI Messages:         1.067e+05   1.00000   1.067e+05  1.281e+07
MPI Message Lengths:  5.205e+08   1.00081   4.875e+03  6.245e+10
MPI Reductions:       1.898e+01   1.00000
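As a sanity check, the Flops/sec row is just the Flops row divided by Time:
3.187e+09 / 5.169e+01 is roughly 6.2e+07 (max) and 2.721e+09 / 5.169e+01 =
5.264e+07 (avg), while the Total of 6.317e+09 is the aggregate rate over all
120 processes (3.265e+11 / 5.169e+01).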
Is my interpretation correct that this is a fairly low flop rate? Does it mean
there's an issue with my code?
I am attaching my log file.
Thanks
Waad