> On 19 Jul 2018, at 17:14, Miguel Zilhão <[email protected]> wrote:
>
> hi Ian,
>
>>> i've noticed that my runs (using latest ET release) with CarpetRegrid2
>>> exhibit a significant increase in memory during runtime. this seems to
>>> happen immediately after some non-trivial regridding operation is done.
>>> the increase is steady, and at some point i run out of memory and the
>>> simulation crashes. this is happening both on my workstation (running
>>> Ubuntu 18.04) as well as our local cluster (running Debian 9). i was
>>> wondering if someone has seen something like this?
>>>
>>> i have not seen this happen for simulations without CarpetRegrid2. i show
>>> below some relevant portions of the stdout file for a standard inspiral
>>> BH run (note the last column--maxrss_mb):
>>>
>> This could be caused by memory fragmentation due to all the freeing and
>> mallocing that happens during regridding when the sizes of the grids change.
>> Can you try using tcmalloc or jemalloc instead of glibc malloc and
>> reporting back? One workaround could be to run shorter simulations (i.e.
>> set a walltime of 12 h instead of 24 h).
>
> thanks for your reply. in one of my cases, for the resolution used and the
> available memory, i was out of memory quite quickly -- within 6 hours or
> so... so unfortunately it becomes a bit impractical for large simulations...
>
> what would i need to do in order to use tcmalloc or jemalloc?
I have used tcmalloc. I think you will need the following:

- Install tcmalloc (https://github.com/gperftools/gperftools) and libunwind,
  which it depends on.
- In your optionlist, link with tcmalloc. I have

  LDFLAGS = -rdynamic -L/home/ianhin/software/gperftools-2.1/lib -Wl,-rpath,/home/ianhin/software/gperftools-2.1/lib -ltcmalloc

This should, I think, be sufficient for tcmalloc to be used instead of glibc
malloc. Try this out and see if things are better.

I also have a thorn which hooks into the tcmalloc API. You can get it from
https://bitbucket.org/ianhinder/tcmalloc. It's very much a work in progress,
and probably has some hard-coded assumptions in it. You can set Cactus
parameters to:

1. Report memory statistics periodically
2. Release memory back to the OS periodically
3. Output a memory profile periodically

(A sketch of the sort of tcmalloc calls involved follows at the end of this
message.)

Let us know how it goes!

-- 
Ian Hinder
https://ianhinder.net
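
For anyone curious, below is a minimal illustrative sketch (not the thorn's
actual source) of the kind of gperftools MallocExtension calls such a thorn
can make from a periodically scheduled routine to cover the three points
above. The function name and the choice of properties queried are my own;
only the MallocExtension calls themselves come from the gperftools API.

#include <cstdio>
#include <gperftools/malloc_extension.h>

// Hypothetical routine that a thorn could schedule every N iterations.
void tcmalloc_report_and_release(void)
{
  size_t allocated = 0, heap_size = 0;

  // 1. Report memory statistics (property names are from the
  //    gperftools MallocExtension interface)
  MallocExtension::instance()->GetNumericProperty(
      "generic.current_allocated_bytes", &allocated);
  MallocExtension::instance()->GetNumericProperty(
      "generic.heap_size", &heap_size);
  std::printf("tcmalloc: allocated %zu bytes, heap %zu bytes\n",
              allocated, heap_size);

  // 2. Return free pages held by tcmalloc back to the operating system
  MallocExtension::instance()->ReleaseFreeMemory();

  // 3. Dump a human-readable summary of the allocator state; a full heap
  //    profile could instead be written with HeapProfilerDump() from
  //    <gperftools/heap-profiler.h> if the heap profiler is running.
  char stats[4096];
  MallocExtension::instance()->GetStats(stats, sizeof(stats));
  std::printf("%s\n", stats);
}

When the executable is linked with -ltcmalloc as in the LDFLAGS above,
calling ReleaseFreeMemory() periodically tends to keep the resident set size
closer to what is actually allocated, which is the symptom being chased here.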
