On Thursday, 6 April 2017 at 20:49:00 UTC, Nierjerson wrote:
I am running out of memory trying to generate code with CTFE. It is
quite a large generation, but it is just a repetitive loop acting on
an array item.
Surely 16GB should be enough to compile such a thing? I am
using 64-bit dmd. Memory usage climbs to about 12GB, eventually
drops down to around 1GB, then goes back up to 16GB, and then the
compiler quits.
I cannot generate the array in parts and stitch them together,
because the parts are interdependent (only slightly, nothing that
should actually cost memory).
It seems that dmd's habit of not freeing memory for the sake of
speed is going to start causing problems with complex template
generation.
Is there any way to fix this? A special switch that would let the
compiler reduce memory consumption (free unused data) or use the
swap file?
https://github.com/IllusionSoftware/COM2D/
At the very least, there should be something that gives feedback on
how to reduce memory consumption. Leaving things up in the air for
programmers to stumble upon after a lot of work is not good.
On the "Error: Out of Memory" at least report some statistics
on functions and lines and how much memory they have used and
how many times they have been called.
Some years ago I managed to force DMD to collect memory during
CTFE by loading the Boehm GC's malloc replacement via
LD_PRELOAD on Linux. It sounds totally crazy, but it worked...
Check the last section, "Simplified leak detection under
Linux", at this link:
https://www.hboehm.info/gc/leak.html
You can ignore the leak detection aspect and just build and
preload libgc.so. It will (very conservatively) collect memory if a
malloc would otherwise fail. Let's hope DMD drops old references
during CTFE...
Please report back if this still works.