When you allocate large objects, they go on the Large Object Heap (LOH).
(Aside: most of the literature defines a "large object" as 85,000 bytes
or more; some sources say 20,000, but my empirical testing puts the
threshold at 85k.  Could this be a server vs. workstation GC
difference?)
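
For what it's worth, a probe along these lines is enough to see roughly
where the boundary sits (the two sizes are arbitrary, and I'm assuming
that GC.GetGeneration reporting 2 for a freshly allocated array means it
landed on the LOH):

    using System;

    class LohThresholdProbe
    {
        static void Main()
        {
            byte[] small = new byte[84000];   // comfortably under 85,000 bytes
            byte[] large = new byte[86000];   // comfortably over

            // LOH-resident objects are logically part of generation 2, so a
            // fresh allocation that already reports "2" went on the large
            // object heap rather than generation 0.
            Console.WriteLine("84,000-byte array: gen " + GC.GetGeneration(small));
            Console.WriteLine("86,000-byte array: gen " + GC.GetGeneration(large));
        }
    }

The smaller array should report generation 0 and the larger one
generation 2, which is consistent with the 85k figure rather than 20k.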

This heap only gets collected as part of a generation 2 collection, so
it is swept far less often than generations 0 and 1.  Furthermore, it
never gets compacted, because of the cost of relocating objects that
large, so you are facing a highly fragmented large object heap if you
traffic heavily in objects of unusual size.

Each allocation needs a contiguous block of memory, which means going
back to the OS for more when no existing free block is large enough, so
the heap tends to grow whenever the GC can't keep up with the app's
allocation pattern.  The sketch below shows the kind of pattern I mean.
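
Here is a rough sketch of that pattern; the class name, sizes, and
counts are all made up for illustration, and Environment.WorkingSet is
only a crude proxy for how much memory the process is holding:

    using System;
    using System.Collections.Generic;

    class LohFragmentationSketch
    {
        static void Main()
        {
            List<byte[]> survivors = new List<byte[]>();

            // Interleave large allocations: keep the even-numbered buffers
            // alive (their addresses stay fixed, since the LOH is never
            // compacted) and let the odd-numbered ones die, leaving
            // 90,000-byte holes between the survivors.
            for (int i = 0; i < 500; i++)
            {
                byte[] buffer = new byte[90000];
                if (i % 2 == 0)
                {
                    survivors.Add(buffer);
                }
            }

            GC.Collect();   // frees the dead buffers, but no hole can grow past ~90,000 bytes

            Console.WriteLine("Working set before the big request: " + Environment.WorkingSet);

            // No existing hole is big enough, so the runtime has to extend the heap.
            byte[] big = new byte[500000];

            Console.WriteLine("Working set after the big request:  " + Environment.WorkingSet);

            GC.KeepAlive(survivors);
            GC.KeepAlive(big);
        }
    }

Every surviving buffer keeps its address fixed, the dead neighbours
leave holes that can never coalesce into anything larger than one
buffer, and the 500,000-byte request at the end has nowhere to go but
fresh address space.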

Microsoft's PSS response to me was basically 1) reuse the large objects,
or 2) make sure my objects are all the same size (a rough sketch of what
that might look like follows below).  Assuming neither of these is
feasible, and given that there's no way to force the large object heap
to compact during a collection, it sounds like this is a class of
problems that .NET just can't answer yet...
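
For completeness, suggestion 1 boils down to something like the pool
below; the class name, the fixed 1 MB buffer size, and the locking are
all just illustrative choices on my part, not anything PSS actually
handed me:

    using System;
    using System.Collections.Generic;

    class LargeBufferPool
    {
        // One uniform size keeps every recycled block interchangeable, so a
        // freed buffer always fits the next request.
        private const int BufferSize = 1024 * 1024;

        private readonly Stack<byte[]> _free = new Stack<byte[]>();

        public byte[] Rent()
        {
            lock (_free)
            {
                return _free.Count > 0 ? _free.Pop() : new byte[BufferSize];
            }
        }

        public void Return(byte[] buffer)
        {
            // Only take back buffers we could have handed out.
            if (buffer == null || buffer.Length != BufferSize)
            {
                return;
            }
            lock (_free)
            {
                _free.Push(buffer);
            }
        }
    }

Once the pool has warmed up, the same few LOH blocks get recycled
indefinitely and no new holes get punched in the heap, and because every
buffer is the same size this also covers suggestion 2.  Whether it's
workable obviously depends on being able to live with fixed-size buffers
in the first place.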

Any thoughts?
