Hi Ian and all,

>>> This could be caused by memory fragmentation due to all the freeing and
>>> mallocing that happens during regridding when the sizes of the grids
>>> change.  Can you try using tcmalloc or jemalloc instead of glibc malloc
>>> and reporting back?  One workaround could be to run shorter simulations
>>> (i.e. set a walltime of 12 h instead of 24 h).
>>
>> Thanks for your reply. In one of my cases, for the resolution used and the
>> available memory, I ran out of memory quite quickly -- within 6 hours or
>> so... so unfortunately it becomes a bit impractical for large simulations...
>>
>> What would I need to do in order to use tcmalloc or jemalloc?
> 
> I have used tcmalloc.  I think you will need the following:
> 
> - Install tcmalloc (https://github.com/gperftools/gperftools), and libunwind,
>   which it depends on.
> - In your optionlist, link with tcmalloc.  I have
> 
> LDFLAGS = -rdynamic -L/home/ianhin/software/gperftools-2.1/lib -Wl,-rpath,/home/ianhin/software/gperftools-2.1/lib -ltcmalloc
> 
> This should be sufficient I think for tcmalloc to be used instead of glibc
> malloc.  Try this out, and see if things are better.  I also have a thorn
> which hooks into the tcmalloc API.  You can get it from

Thanks a lot for these pointers. I've tried it out, though I used tcmalloc
from Ubuntu's repositories and therefore compiled ET with -ltcmalloc_minimal.
I don't know whether this makes a difference, but in the trial run I'm doing I
so far seem to see the same memory increase I had seen before...
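
For reference, with Ubuntu's packages a minimal setup along these lines should
be enough (just a sketch, assuming the libgoogle-perftools-dev package; exact
paths and the executable name depend on the configuration):

LDFLAGS = -rdynamic -ltcmalloc_minimal

Since the library sits on the default linker path, no -L/-rpath should be
needed, and whether it is actually picked up by the executable can be checked
with something like

ldd exe/cactus_<config> | grep tcmalloc

which should list libtcmalloc_minimal.so if the linking worked.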

Is there anything else that could be tried to pinpoint this issue? It seems to
be serious... I looked for an open ticket but didn't find anything. Shall I
submit one?
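
One thing that could still be tried, to rule out a problem on the linking
side, is forcing the allocator at run time instead, e.g. something like

LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4 ./exe/cactus_<config> my_run.par

(the library path, soname version and parameter file here are just
placeholders from a typical Ubuntu install and would need to be adapted).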

Thanks,
Miguel