TL;DR: Things have changed. Modern games allocate quite a bit during "render" and make use of a whole slew of techniques, from pre-allocating buffers and simple heap arena allocations all the way to garbage compaction and collection. In particular, game developers are hoping to see robust, fast, pause-less, concurrent garbage collection techniques.

Long version, if you are interested:

In our games, we continuously stream data from the Blu-ray disc in order to avoid load times for the players. This necessitates a series of allocation systems, from malloc-like systems to GC-like systems. Also, script-based game-play systems are written by game-play designers, and these systems need GC.

In the same vein, we heavily "use" memory - though "allocation" is perhaps not the right word. What we do is pre-allocate many buffers for use by different systems. We group things into "single-buffered memory" and "double-buffered memory", as well as other private buffers and ring-buffers. Most allocation is done through heap arenas: we simply keep track of the base pointer, the current pointer, and the end-of-memory pointer, and allocation is as simple as this:

// Asserts and memory tracking are present in debug builds, but not shown here for clarity.
void *HeapArena::Allocate(size_t size)
{
    U8 *pRet = m_pCurr;
    U8 *pNext = m_pCurr + size;
    if (pNext > m_pEnd) return NULL;
    m_pCurr = pNext;
    return pRet;
}

Objects on such a heap can never be deleted individually, but the whole heap can be reset (m_pCurr = m_pBuffer). This happens once a frame for the single-buffered memory buffers, and every other frame for the double-buffered memory buffers. Essentially, when you allocate into a single-buffered memory buffer, you promise that the data is temporary and will not be needed next frame. (Garbage collection given a life-time promise? Wonder if you could statically type-check something like that...)
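The per-frame reset scheme above can be sketched roughly as follows. This is a minimal illustration, not the actual engine code; the class and method names (FrameArena, DoubleBufferedArena, BeginFrame) are hypothetical, and alignment, asserts, and memory tracking are omitted just as in the snippet above:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of a per-frame bump arena. Allocate() bumps a
// pointer; Reset() reclaims everything at once at a frame boundary.
class FrameArena {
public:
    FrameArena(void *buffer, size_t size)
        : m_pBuffer(static_cast<uint8_t *>(buffer)),
          m_pCurr(static_cast<uint8_t *>(buffer)),
          m_pEnd(static_cast<uint8_t *>(buffer) + size) {}

    void *Allocate(size_t size) {
        uint8_t *pRet = m_pCurr;
        uint8_t *pNext = m_pCurr + size;
        if (pNext > m_pEnd) return nullptr;  // out of frame memory
        m_pCurr = pNext;
        return pRet;
    }

    // Called at a frame boundary: all allocations reclaimed in O(1).
    void Reset() { m_pCurr = m_pBuffer; }

private:
    uint8_t *m_pBuffer;
    uint8_t *m_pCurr;
    uint8_t *m_pEnd;
};

// Double-buffered scheme: data written this frame may still be read next
// frame; the arena from two frames ago is safe to reset and reuse.
class DoubleBufferedArena {
public:
    DoubleBufferedArena(void *buf0, void *buf1, size_t size)
        : m_arenas{FrameArena(buf0, size), FrameArena(buf1, size)},
          m_frame(0) {}

    FrameArena &Current() { return m_arenas[m_frame & 1]; }

    void BeginFrame() {
        m_frame++;
        Current().Reset();  // data from two frames ago is no longer needed
    }

private:
    FrameArena m_arenas[2];
    unsigned m_frame;
};
```

The double-buffered variant is what makes the "not needed next frame" promise concrete: an allocation survives exactly one frame boundary before its arena is recycled.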

Finally, texture memory is a huge part of our memory management system. Most data is textures. For this, we need to garbage-collect texture blocks based on actual usage. What we do is have the graphics chip move chunks of memory into the holes created by unused textures, while at the same time streaming in new textures on top of the stack.
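The GPU-driven details of that are platform-specific, but the compaction idea itself can be sketched on the CPU. Everything below is illustrative, not the engine's actual code: the names are made up, and std::memmove stands in for the DMA/blit the graphics chip would perform (a real engine would also have to tell the renderer about the moved blocks):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative sketch: texture memory is a stack of blocks; blocks whose
// textures went unused become holes; compaction slides the live blocks
// toward the base so that free space reappears at the top of the stack.
struct TextureBlock {
    size_t offset;  // position in texture memory
    size_t size;
    bool   inUse;   // updated from actual usage each frame
};

// Returns the new top-of-stack offset after compaction. New textures can
// then stream in above that point.
size_t CompactTextures(uint8_t *memory, std::vector<TextureBlock> &blocks) {
    size_t writeOffset = 0;
    std::vector<TextureBlock> live;
    for (const TextureBlock &b : blocks) {
        if (!b.inUse) continue;               // hole: reclaim it
        if (b.offset != writeOffset)          // slide block down over the hole
            std::memmove(memory + writeOffset, memory + b.offset, b.size);
        live.push_back({writeOffset, b.size, true});
        writeOffset += b.size;
    }
    blocks.swap(live);
    return writeOffset;
}
```

Because blocks only ever move toward the base, a forward pass with overlapping-safe moves is sufficient, and the free region is always one contiguous run at the top.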

So, as you can see, your notion that "well-written games do not allocate memory during frame-render" is not quite accurate. :-)

Thanks,

PKE


On 4/10/2012 12:26 PM, Jonathan S. Shapiro wrote:
Crap! This was supposed to go to the list too.

Pal: I imagine that you are using "reply-all" instead of "reply". Please don't. Replies to bitc-dev are directed to bitc-dev automatically. When your email agent adds me as a recipient, it breaks this behavior, with the result that I end up replying to you privately instead of to the list. I'll try to pay better attention, but could you please try not to add me individually when you are replying to the list?

Thanks

On Tue, Apr 10, 2012 at 12:24 PM, Jonathan S. Shapiro <[email protected]> wrote:
On Tue, Apr 10, 2012 at 11:00 AM, Pal-Kristian Engstad <[email protected]> wrote:
On 4/10/2012 10:09 AM, Jonathan S. Shapiro wrote:

So I agree that the behavior you describe exists, but I do not agree that it is any sort of impediment to use. The occasional, rare, 20ms pause in an application like this just isn't a big deal

That depends on the application. Games, as an example, would never tolerate a 20 msec pause. The application has to produce images at 30 frames per second, or sometimes at 60 frames per second. In a 30 FPS game, you have 1,000 [msec] / 30 [frames] = 33.33 [msec/frame] to respond to user input, perform game logic, prepare rendering information, and then produce an image. As you can clearly see, 20 msec is *huge*: it is 60% of a frame!

I'm well aware of that. But well-written games do not allocate memory during frame render, so GC shouldn't be triggered at all.

Or at least, that would be the case except that current safe languages force you to allocate in various circumstances where it is not really necessary, because the language/runtime design favors boxed objects.

It's also worth noting that malloc can be expensive!
 
I would think that a lot of real-time and soft real-time systems would have similar reasons to abandon any system that paused the program for such a long period of time.

Depends a lot on when, why, and for how long. 20ms is definitely a long time. On the other hand, it's pretty clear to me that I could build an audio system with 0.7-1ms requirements in a GC'd language provided the language offered adequate support for unboxed types.


shap




_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev


