If you don't use them that often, you could spawn them off in another AppDomain, then unload the AppDomain when finished. (IIRC AppDomains get their own heaps, so the large objects should be reclaimed when the AppDomain is shut down.)
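A minimal sketch of that idea, assuming .NET Framework (AppDomains are not available on .NET Core/5+); the `LargeObjectWorker` type and domain name are made up for illustration:

```csharp
using System;

// Must derive from MarshalByRefObject so the proxy, not the object
// (and its large allocations), crosses the domain boundary.
public class LargeObjectWorker : MarshalByRefObject
{
    public void DoWork()
    {
        // Allocate and use the large objects entirely inside this domain.
        byte[] big = new byte[10 * 1024 * 1024];
        // ... work with big ...
    }
}

public static class Program
{
    public static void Main()
    {
        AppDomain domain = AppDomain.CreateDomain("LargeObjectDomain");
        try
        {
            var worker = (LargeObjectWorker)domain.CreateInstanceAndUnwrap(
                typeof(LargeObjectWorker).Assembly.FullName,
                typeof(LargeObjectWorker).FullName);
            worker.DoWork();
        }
        finally
        {
            // Unloading the domain is the step that (per the IIRC above)
            // should let its allocations be reclaimed.
            AppDomain.Unload(domain);
        }
    }
}
```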
(Disclaimer: This is a total "OTTOMH" idea, and not something I have any experience of <g>)

Merak wrote:
> When you allocate large objects they go on the Large Object Heap.
> (Aside: the predominant literature says a "large object" is defined as
> over 85,000 bytes; some sources say 20,000, but my empirical testing
> says 85k. Could this be a server/workstation difference?)
>
> This heap gets collected much less often than any of the "managed"
> generations (0-2). Furthermore, it never gets compacted because of the
> cost of relocating objects that large -- so you are facing a highly
> fragmented large object heap if you traffic heavily in objects of
> unusual size.
>
> Allocation requires contiguous blocks of memory -- which means going
> back to the well if a contiguous block is unavailable, so memory will
> tend to grow if the GC is unable to keep up with the app logic.
>
> Microsoft's PSS response to me was basically 1) reuse the large
> objects, or 2) make sure my objects are the same size. Assuming
> neither of these is feasible, and there's no way to force the large
> object heap to compact during collection, it sounds like this
> addresses a class of problems that .NET just can't answer yet...
>
> Any thoughts?
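For what it's worth, PSS suggestions 1 and 2 can be combined: if you hand out reusable buffers of one fixed size, freed LOH slots are always the right shape for the next allocation. A toy sketch (the `BufferPool` name and 1 MB size are my own, not anything from the quoted post):

```csharp
using System;
using System.Collections.Generic;

public static class BufferPool
{
    // One fixed size (suggestion 2) well above the ~85,000-byte LOH
    // threshold, so every buffer lands on the large object heap.
    public const int BufferSize = 1024 * 1024;

    private static readonly Stack<byte[]> free = new Stack<byte[]>();

    // Reuse a previously returned buffer (suggestion 1) when possible.
    public static byte[] Rent()
    {
        lock (free)
        {
            return free.Count > 0 ? free.Pop() : new byte[BufferSize];
        }
    }

    public static void Return(byte[] buffer)
    {
        // Only pool buffers of the canonical size.
        if (buffer == null || buffer.Length != BufferSize) return;
        lock (free) { free.Push(buffer); }
    }
}
```

Callers would `Rent()` instead of `new byte[...]` and `Return()` when done, so the LOH stops churning through differently sized blocks.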