When I program, it's usually videogame ideas. That implies a soft real-time requirement, which in turn implies the mantra "allocations are evil, use object pools whenever possible." [That is, store data in static arrays, where 'deleting' just marks an entry as is_deleted=true and 'allocating' re-uses "dead" ones.]
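
To be concrete, here's roughly what I mean by a pool (Bullet and MAX_BULLETS are just made-up names for illustration):

struct Bullet
{
    float x, y;
    bool is_deleted = true; // slots start "dead", i.e. free for reuse
}

enum MAX_BULLETS = 1024;
Bullet[MAX_BULLETS] bullets; // static storage: one allocation for the whole game

// "Deleting" is just flipping the flag; "allocating" recycles a dead slot.
Bullet* spawnBullet(float x, float y)
{
    foreach (ref b; bullets)
    {
        if (b.is_deleted)
        {
            b = Bullet(x, y, false);
            return &b;
        }
    }
    return null; // pool exhausted
}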

I'm looking through D's parallelism module and the docs state, up-front:

 >Creates a Task on the GC heap that calls an alias.

The modern, scalable way to build a parallel game engine uses tasks (as opposed to functional decomposition, where one thread does networking, one does physics, etc.). That means creating LOTS of tasks (_per frame_!) and dispatching them. And a 60 FPS frame budget is... 16 ms or less.
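
Concretely, a frame ends up looking something like this sketch (updatePhysicsChunk and NUM_CHUNKS are hypothetical stand-ins, not real engine code):

import std.parallelism;

void updatePhysicsChunk(size_t chunk)
{
    // ... integrate one slice of the world ...
}

enum NUM_CHUNKS = 64;

void frame()
{
    // Per the quoted docs, each task!() call below allocates a fresh Task
    // on the GC heap: NUM_CHUNKS allocations per frame, 60 frames a second.
    typeof(task!updatePhysicsChunk(size_t.init))[NUM_CHUNKS] tasks;
    foreach (size_t i; 0 .. NUM_CHUNKS)
    {
        tasks[i] = task!updatePhysicsChunk(i);
        taskPool.put(tasks[i]); // hand it to the default worker pool
    }
    foreach (t; tasks)
        t.yieldForce; // block until every chunk finishes before the frame ends
}

(I do see that std.parallelism also has scopedTask, which puts the Task on the stack instead of the GC heap, but then the task has to finish before the enclosing scope exits.)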

So my question is: has anyone analyzed how "dangerous" it is to use GC'd tasks for _small_ tasks (on the order of milliseconds)? Is the GC going to fire off all the time and send jitter off the charts? A 20 ms stop-the-world pause would probably be unnoticeable in many business apps, but it would completely break a videogame.
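
The only mitigation I can think of is the usual workaround (via core.memory), which sidesteps rather than answers the question: suppress automatic collections during gameplay and pay for one at a moment nobody can see, e.g. a loading screen:

import core.memory : GC;

void enterGameplay()
{
    // Note: GC.disable() doesn't forbid collection entirely -- the runtime
    // may still collect if it genuinely runs out of memory.
    GC.disable();
}

void onLoadingScreen()
{
    GC.enable();
    GC.collect();  // take the pause here, where a spike is invisible
    GC.minimize(); // optionally return free pages to the OS
    GC.disable();
}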

I wonder how difficult it would be to modify the existing parallelism code (or write my own?) to use static pools instead: allocate an array of MAX_NUM_TASKS tasks once, up front (eating the minor memory hit), so nothing is ever allocated per frame. [Even without a GC, allocating every frame in, say, C++ is dangerous: malloc/new is slow (lock contention, possible system calls) and fragments the heap over time.]
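
Something in this direction, as a very rough sketch (FrameTask is a made-up stand-in; a real version would need to reproduce Task's internals and synchronize with the worker threads):

enum MAX_NUM_TASKS = 4096;

struct FrameTask
{
    void function(void*) fn;
    void* arg;
}

FrameTask[MAX_NUM_TASKS] taskSlots; // the one up-front "memory hit"
size_t numTasks;                    // reset to 0 at the start of each frame

// "Allocating" a task is just bumping an index into the static array.
FrameTask* acquireTask(void function(void*) fn, void* arg)
{
    assert(numTasks < MAX_NUM_TASKS, "task pool exhausted");
    taskSlots[numTasks] = FrameTask(fn, arg);
    return &taskSlots[numTasks++];
}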

Any advice, thoughts? Thanks,
--Chris Katko
