Andrey Ovsyannikov writes:
> Thanks for your quick response. I like the Massif tool and I have been using it
> recently. However, I was not able to run Valgrind for large jobs. I am
> interested in memory analysis of large-scale runs with more than 1000 MPI
> ranks.
PETSc's reporting of memory usage for objects is unfortunately not that great;
for example, it is not always clear whether an allocation is temporary work
space or memory that is kept for the life of the object. Associating memory
with particular objects requires the PETSc source code to
Hi Matt,
Thanks for your quick response. I like the Massif tool and I have been using it
recently. However, I was not able to run Valgrind for large jobs. I am
interested in memory analysis of large-scale runs with more than 1000 MPI
ranks. PetscMemoryGetCurrentUsage() works fine for this purpose.
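For what it's worth, at that scale a cheap approach is to reduce the per-rank value from PetscMemoryGetCurrentUsage() down to a min/max/average rather than printing 1000 lines. A minimal sketch (the helper name ReportMemoryUsage is made up; it assumes PETSc is already initialized and uses the older ierr/CHKERRQ error-handling style):

```c
#include <petscsys.h>

/* Sketch: gather each rank's resident set size and print the
   min/max/average on rank 0 of the given communicator. */
PetscErrorCode ReportMemoryUsage(MPI_Comm comm)
{
  PetscErrorCode ierr;
  PetscLogDouble mem,min,max,sum;
  PetscMPIInt    size;

  ierr = PetscMemoryGetCurrentUsage(&mem);CHKERRQ(ierr); /* RSS in bytes on this rank */
  ierr = MPI_Comm_size(comm,&size);CHKERRQ(ierr);
  ierr = MPI_Reduce(&mem,&min,1,MPIU_PETSCLOGDOUBLE,MPI_MIN,0,comm);CHKERRQ(ierr);
  ierr = MPI_Reduce(&mem,&max,1,MPIU_PETSCLOGDOUBLE,MPI_MAX,0,comm);CHKERRQ(ierr);
  ierr = MPI_Reduce(&mem,&sum,1,MPIU_PETSCLOGDOUBLE,MPI_SUM,0,comm);CHKERRQ(ierr);
  ierr = PetscPrintf(comm,"RSS (MB): min %g  max %g  avg %g\n",
                     min/1e6,max/1e6,sum/((double)size*1e6));CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```

Calling this at a few points in the run (before/after solver setup, after each solve) gives a rough picture of where memory grows without any tool overhead.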
Andrey,
Maybe this is what you tried, but did you try running only a handful of MPI
ranks (out of your 1000) with Massif? I've had success doing things that
way. You won't know what every rank is doing, but you may be able to get a
good idea from your sample.
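Richard's suggestion can be done with a wrapper script that attaches Massif only to the first few ranks and runs the rest natively. This is a sketch, not a tested recipe: the rank environment variable depends on your launcher (SLURM_PROCID is used here; Open MPI exposes OMPI_COMM_WORLD_RANK instead), and the script name is made up.

```shell
#!/bin/sh
# massif-sample.sh -- profile a handful of ranks with Massif, run the rest natively.
# Hypothetical usage:  srun -n 1000 ./massif-sample.sh ./myapp <args>
RANK=${SLURM_PROCID:-0}   # substitute your launcher's rank variable
if [ "$RANK" -lt 4 ]; then
  # %p expands to the PID, so each profiled rank gets its own output file.
  exec valgrind --tool=massif --massif-out-file="massif.rank$RANK.%p" "$@"
else
  exec "$@"
fi
```

The profiled ranks will run much slower than the rest, which can perturb timing-sensitive behavior, but memory high-water marks are usually still representative.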
--Richard
On Mon, Nov 30, 2015 at 5:20 PM, Andrey Ovsyannikov
wrote:
> Dear PETSc team,
>
> I am working on optimization of the Chombo-Crunch CFD code for next-generation
> supercomputer architectures at NERSC (Berkeley Lab), and we use the PETSc AMG
> solver. During a memory analysis study I