Ashish Patel <ashish.pa...@ansys.com> writes:

> Hi Jed,
> VmRss is on a higher side and seems to match what PetscMallocGetMaximumUsage is reporting. HugetlbPages was 0 for me.
>
> Mark, running without the near nullspace also gives similar results. I have attached the malloc_view and gamg info for serial and 2 core runs. Some of the standout functions on rank 0 for parallel run seems to be
> 5.3 GB MatSeqAIJSetPreallocation_SeqAIJ
> 7.7 GB MatStashSortCompress_Private
> 10.1 GB PetscMatStashSpaceGet
> 7.7 GB  PetscSegBufferAlloc_Private
>
> malloc_view also says the following
> [0] Maximum memory PetscMalloc()ed 32387548912 maximum size of entire process 8270635008
> which fits the PetscMallocGetMaximumUsage > PetscMemoryGetMaximumUsage output.

This would occur if there were a large PetscMalloc'd block that was never fully used: only the touched portion of the block is faulted in and becomes resident, so the malloc'd total can far exceed the resident set.
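
As a minimal Linux-only sketch of that effect (my own illustration, not from the thread): an anonymous mapping adds almost nothing to VmRSS until its pages are actually written, at which point RSS grows by the touched size.

```python
# Demonstrate that allocated memory only becomes resident (counted in
# VmRSS) once its pages are actually touched. Linux-only: reads
# /proc/self/status.
import mmap

def vmrss_kb():
    """Current resident set size in kB, from /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

PAGE = 4096
n = 256 * 1024 * 1024            # 256 MiB anonymous mapping

before = vmrss_kb()
buf = mmap.mmap(-1, n)           # address space reserved, no pages faulted yet
after_map = vmrss_kb()           # ~unchanged: the allocation alone costs no RSS

for off in range(0, n, PAGE):    # write one byte per page to fault it in
    buf[off] = 1
after_touch = vmrss_kb()         # now RSS has grown by roughly 256 MiB

print(f"before={before} kB, after map={after_map} kB, after touch={after_touch} kB")
```

This is the same reason PetscMallocGetMaximumUsage (bytes handed out by PetscMalloc) can report far more than PetscMemoryGetMaximumUsage (resident memory of the process).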

Can you run a heap profiler like heaptrack?

https://github.com/KDE/heaptrack
