You could access the VecScatter inside the matrix-multiply and call
VecScatterView() with an ASCII viewer using the format PETSC_VIEWER_ASCII_INFO
(make sure you use this format); it reports how much communication is being
done and how many neighbors each process communicates with.
MatAssembly was called once (in stage 5) and cost 2.5% of the total time. Look
at stage 5: MatAssemblyBegin calls BuildTwoSidedF, which does a global
synchronization, and its high max/min ratio indicates load imbalance. What I do
not understand is MatAssemblyEnd: its ratio is 1.0, which means every process
spends the same time there.
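The Ratio column in -log_view is the maximum time over the minimum time across ranks, so 1.0 means every rank spent the same time and a large value means imbalance. A minimal sketch of that computation (the helper name and per-rank times are made up for illustration):

```python
# Sketch of how the max/min "Ratio" column in -log_view is formed.
# The per-rank times below are made up for illustration.

def time_ratio(per_rank_seconds):
    """Return max over min time across ranks, as -log_view reports."""
    return max(per_rank_seconds) / min(per_rank_seconds)

balanced   = [0.5, 0.5, 0.5, 0.5]     # every rank equally busy
imbalanced = [1.0, 1.3, 2.2, 41.0]    # one rank does far more work

print(time_ratio(balanced))    # -> 1.0: perfectly balanced
print(time_ratio(imbalanced))  # -> 41.0: heavy imbalance
```

The 41.0 next to BuildTwoSidedF in the log excerpt is exactly this kind of ratio.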
What is the partition like? Suppose you randomly assigned nodes to
processes; then in the typical case, all neighbors would be on different
processors. Then the "diagonal block" would be nearly diagonal and the
off-diagonal block would be huge, requiring communication with many
other processes.
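The effect of a random partition can be checked without PETSc: assign the nodes of a simple 1D chain mesh to processes either contiguously or at random and count the edges whose endpoints land on different processes (each such edge contributes to the off-diagonal block and forces communication). A self-contained sketch, with a made-up mesh size and process count:

```python
import random

def cross_edges(owner, edges):
    """Count mesh edges whose two endpoints live on different processes."""
    return sum(1 for u, v in edges if owner[u] != owner[v])

n, nprocs = 10_000, 16
edges = [(i, i + 1) for i in range(n - 1)]        # 1D chain mesh

# Contiguous partition: each process owns a block of consecutive nodes.
contiguous = [i * nprocs // n for i in range(n)]

# Random partition: each node is assigned to a process independently.
rng = random.Random(0)
random_owner = [rng.randrange(nprocs) for _ in range(n)]

print(cross_edges(contiguous, edges))    # nprocs - 1 = 15 cut edges
print(cross_edges(random_owner, edges))  # roughly (1 - 1/nprocs) of all edges
```

With a contiguous partition only the block boundaries are cut; with a random one almost every edge is cut, which is why the off-diagonal block becomes huge.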
The load balance is definitely out of whack.
BuildTwoSidedF         1 1.0 1.6722e-02 41.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0   0   0  0       0
MatMult              138 1.0 2.6604e+02  7.4 3.19e+10 2.1 8.2e+07 7.8e+06 0.0e+00  2  4 13 13  0  15 25 100 100  0 2935476
On Fri, Jun 21, 2019 at 8:07 AM Ale Foggia <amfog...@gmail.com> wrote:
Thanks to both of you for your answers,
On Thu, Jun 20, 2019 at 22:20, Smith, Barry F. (<bsm...@mcs.anl.gov>) wrote:
Note that this is a one-time cost if the nonzero structure of the matrix
stays the same.
On Fri, Jun 21, 2019 at 4:56 AM Dongyu Liu - CITG via petsc-users <
petsc-users@mcs.anl.gov> wrote:
Hi,
we are using the Viewer class in petsc4py to read a gmsh file, but after we use
the function createASCII with the mode "READ", the gmsh file is emptied. Do you
have any clue why this happens?
Best,
Dongyu
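One plausible explanation (an assumption worth checking against the petsc4py documentation, not something verified here) is that the viewer ends up opening the file for writing, and merely opening a file in write mode truncates it. Plain Python shows the same symptom:

```python
import os, tempfile

# Write some stand-in "gmsh" content to a scratch file.
path = os.path.join(tempfile.mkdtemp(), "mesh.msh")
with open(path, "w") as f:
    f.write("$MeshFormat\n2.2 0 8\n$EndMeshFormat\n")

# Merely opening the file in write mode truncates it, before any write.
open(path, "w").close()
print(os.path.getsize(path))   # -> 0: the file is now empty
```

If that is the cause, the fix would be to make sure the viewer really opens the file in read mode before the file name is attached, or to read the gmsh file by other means.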