I can understand that process 0 needs twice its own memory, due to the
process Barry explained. However, in my case every process has twice the
"necessary" memory. That doesn't seem right to me. Especially with
Barry's explanation in mind, it seems strange that all processes show
the same doubled memory usage.
On Thu, Oct 7, 2021 at 11:59 AM Michael Werner wrote:
It's twice the memory of the entire matrix (when stored on one process).
I also just sent you the valgrind results, both for a serial run and a
parallel run. The size on disk of the matrix I used is 20 GB.
In the serial run, valgrind shows a peak memory usage of 21 GB, while in
the parallel run
On Oct 7, 2021, at 11:35 AM, Michael Werner wrote:
Currently I'm using psutil to query every process for its memory usage
and sum it up. However, the spike was only visible in top (I had a call
to psutil right before and after A.load(viewer), and both reported only
50 GB of RAM usage). That's why I thought it might be directly tied to
loading the matrix.
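As an illustration, a rough sketch of that measurement is below (assuming
mpi4py is available; the reduction and the use of ru_maxrss are additions
for illustration, not part of the original script). psutil reports the
instantaneous resident set size, so a spike that appears and disappears
between two calls is invisible to it, whereas the ru_maxrss high-water
mark would still record it.

import resource

import psutil
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Instantaneous resident set size of this rank (what a psutil call reports).
rss_now = psutil.Process().memory_info().rss

# High-water mark of this rank; unlike the instantaneous value, it still
# reflects a spike that occurred between two psutil calls.
# On Linux, ru_maxrss is reported in kilobytes.
rss_peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024

total_now = comm.allreduce(rss_now, op=MPI.SUM)
total_peak = comm.allreduce(rss_peak, op=MPI.SUM)

if comm.rank == 0:
    print(f"current RSS summed over ranks: {total_now / 2**30:.2f} GiB")
    print(f"peak RSS summed over ranks:    {total_peak / 2**30:.2f} GiB")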
On Thu, Oct 7, 2021 at 10:03 AM Barry Smith wrote:
How many ranks are you using? Is it a sparse matrix with MPIAIJ?
The intention is that for parallel runs the first rank reads in its own part
of the matrix, then reads in the part of the next rank and sends it, then reads
the part of the third rank and sends it, and so on. So there should not be any
significant extra memory use on the other ranks; only the first rank briefly
holds one additional part while it is being sent.
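In pseudocode, that pattern looks roughly like the sketch below. This is
only an illustration of the scheme described above, not PETSc's actual
implementation; the file name, block size, and the way a block is read
are made up.

from mpi4py import MPI

comm = MPI.COMM_WORLD

# Hypothetical file name and per-rank block size, just to make the pattern concrete.
FILENAME = "matrix.dat"
BYTES_PER_RANK = 1024

if comm.rank == 0:
    with open(FILENAME, "rb") as f:
        my_block = f.read(BYTES_PER_RANK)    # rank 0 reads and keeps its own part
        for dest in range(1, comm.size):
            block = f.read(BYTES_PER_RANK)   # read the next rank's part ...
            comm.send(block, dest=dest)      # ... forward it ...
            del block                        # ... and release it before the next read
else:
    my_block = comm.recv(source=0)

# At any moment rank 0 holds at most its own part plus the one part in flight;
# the other ranks only ever hold their own part.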
Hello,
I noticed that there is a peak in memory consumption when I load an
existing matrix into PETSc. The matrix was previously created by an
external program and saved in the PETSc binary format.
The code I'm using in petsc4py is simple:
viewer = PETSc.Viewer().createBinary(, "r",
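For completeness, the loading code in full looks roughly like this; the
file name matrix.dat is a placeholder, since the actual path isn't shown
above.

from petsc4py import PETSc

# "matrix.dat" stands in for the file written by the external program.
viewer = PETSc.Viewer().createBinary("matrix.dat", "r", comm=PETSc.COMM_WORLD)
A = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
A.setFromOptions()  # optional; MatLoad defaults to AIJ (MPIAIJ in parallel)
A.load(viewer)
viewer.destroy()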