On Tuesday, 30 September 2014 at 23:31:36 UTC, Cliff wrote:
Not a GC specialist here, so maybe the thought arises - why not
turn off automatic GC until such times in the code where you can
afford the cost of it, then call GC.collect explicitly -
essentially eliminating the opportunity for the GC to run at
random times and forcing it to run at deterministic times? Is
memory usage so constrained that failing to execute runs
in-between those deterministic blocks could lead to OOM? Does
such a strategy have other nasty side-effects which make it
impractical?
The latter. If you want a game to run at 60 fps, you have about
16 ms for each frame, during which you need to make all the
necessary game and graphics updates. There's no upper limit to
the amount of time a GC run can take, so it can easily exceed the
few milliseconds you have left for it.
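For illustration, here is a minimal sketch of the strategy from the
question: automatic collections disabled, with an explicit GC.collect()
squeezed into whatever is left of the frame budget. The frame loop, the
updateAndRender() function and the 600-frame count are assumptions made
up for the example; the catch, as described above, is that once a
collection starts it runs to completion, however long that takes.

import core.memory : GC;
import core.time : msecs, MonoTime;

void updateAndRender() { /* game and graphics updates for one frame */ }

void main()
{
    GC.disable();                       // no collections at random times

    immutable frameBudget = 16.msecs;   // ~60 fps
    foreach (frame; 0 .. 600)           // stand-in for the real game loop
    {
        immutable start = MonoTime.currTime;

        updateAndRender();

        // Only start a collection if some of the frame budget is left.
        // Once started, GC.collect() is not interruptible, so a single
        // long collection can still blow the 16 ms budget.
        if (MonoTime.currTime - start < frameBudget)
            GC.collect();
    }

    GC.enable();
}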
There are however GC algorithms that support incremental
collection, meaning that you can give the GC a deadline. If it
can't finish before this deadline, it has to interrupt its work
and continue on the next run. Unfortunately, these GCs usually
require special compiler support (barriers and distinguishing GC
from non-GC pointers), which we don't have. But there is CDGC,
written by Leandro Lucarella for D1, which uses forking to
achieve the same effect, and which Dicebot is currently porting
to D2:
http://forum.dlang.org/thread/[email protected]
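To make the deadline idea concrete, here is a hedged sketch. The
IncrementalGC interface and its collectFor() method are purely
hypothetical - D's runtime GC exposes no such API - and only illustrate
what handing the collector a per-frame deadline would look like.

import core.time : Duration, msecs, MonoTime;

// Hypothetical interface, for illustration only; D's GC does not provide this.
interface IncrementalGC
{
    // Marks/sweeps until the work is done or the deadline expires;
    // returns true once a full collection cycle has completed.
    bool collectFor(Duration deadline);
}

// At the end of a frame, hand the collector whatever budget remains.
// If it can't finish in time, it resumes where it stopped on the next frame.
void endOfFrame(IncrementalGC gc, MonoTime frameStart)
{
    immutable budget = 16.msecs;                 // ~60 fps frame budget
    immutable spent  = MonoTime.currTime - frameStart;
    if (spent < budget)
        gc.collectFor(budget - spent);
}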