I know GCs are a big source of contention, particularly among game developers, but I have to say I've never really experienced one of the (theoretical) scenarios where a GC ruins a game's performance, at least not with the D GC. It seems to just be a matter of not doing crazy things (like heap allocations every frame) and pre-allocating everything you can with pools or something similar.
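For instance, a minimal fixed-size pool might look like this (just a sketch; Bullet and the pool size are made up for illustration, and the pool must stay put in memory since it hands out interior pointers):

    // Pre-allocated once, so the per-frame hot path never touches the heap
    // and never creates GC pressure. (Bullet is a made-up example type.)
    struct Bullet { float x, y, vx, vy; bool alive; }

    struct BulletPool
    {
        Bullet[1024] items;   // fixed storage, allocated up front

        // Hand out a free slot; returns null when the pool is exhausted.
        Bullet* acquire()
        {
            foreach (ref b; items)
                if (!b.alive) { b.alive = true; return &b; }
            return null;
        }

        // "Freeing" is just flipping a flag; no free(), no GC.
        void release(Bullet* b) { b.alive = false; }
    }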

The concept of a GC is fine; it exists in both Unreal and Unity

The only difference is their implementation

Neither Unreal nor Unity has "much" of a problem, because both use some sort of incremental GC, usually multithreaded

The problem with D is that it has the worst kind of implementation for games: it scans your entire heap, and while doing so it pauses all threads

The bigger your heap (and games usually have big heaps), the longer the pause will be
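The usual mitigation in D is to control when that stop-the-world collection happens, via core.memory (a sketch; note that even with GC.disable() the runtime may still collect if it is about to run out of memory):

    import core.memory : GC;

    void main()
    {
        GC.disable();   // suppress automatic collections during gameplay
                        // (the runtime may still collect under memory pressure)

        // ... game loop: any collection here would pause all threads
        // while the whole heap is scanned, so we never trigger one ...

        GC.collect();   // run the stop-the-world collection at a safe point,
                        // e.g. a loading screen or level transition
        GC.enable();
    }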

It's not a problem for "some" games, but as 144 Hz monitors become mainstream, the need for games running at 120/144 fps is becoming crucial

At 120 fps, your frame budget is only ~8 ms (1000 ms / 120 ≈ 8.3 ms); no time for any GC pause

Even though the GC story is better in Unreal/Unity, they still struggle, constantly, with GC issues; a simple Google search is enough to validate the point

I used to not care about the GC, until it started to get in my way; since then it's been malloc/free/allocators, and nothing else
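In D that just means dropping down to the C heap, which the GC never scans (a sketch; Enemy is a made-up type):

    import core.stdc.stdlib : malloc, free;
    import core.exception : onOutOfMemoryError;

    struct Enemy { float hp; float x, y; }

    // C-heap allocation: this memory is invisible to the D GC, so it is
    // never scanned and never contributes to pause times.
    Enemy* createEnemy()
    {
        auto e = cast(Enemy*) malloc(Enemy.sizeof);
        if (e is null) onOutOfMemoryError();
        *e = Enemy.init;
        return e;
    }

    void destroyEnemy(Enemy* e)
    {
        free(e);   // deterministic release; no collector involved
    }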

Designing an engine this way gives you much more control; use the GC for scripting only, in an isolated thread!

That's why it is dangerous to tell people not to mind the GC and "just program". No: you have to be meticulous about your allocation strategy to properly reap the benefits a GC can give you!
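D can even enforce that strategy at compile time: mark the hot path @nogc and any hidden GC allocation becomes a compile error (World and updateFrame are hypothetical names):

    struct World { int frame; string[] log; }

    // Hypothetical per-frame hot path; @nogc guarantees it cannot
    // allocate through the GC, so it can never trigger a collection.
    void updateFrame(ref World world) @nogc
    {
        world.frame++;
        // world.log ~= "tick";   // would not compile under @nogc:
                                  // array append allocates via the GC
    }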

A GC is a nice thing when used properly, depending on its implementation!
