Randell Jesup <[EMAIL PROTECTED]> writes:
>><http://wam.umd.edu/~soref/stress.html> has a lot of large images on it.
>>Is this the image cache again?
> That site really is a stress test. I tried a build from a week
>or so ago on it, and I finally had to kill it when the process size hit
>560MB. Note: I was paging down through the page while it was loading.
It dropped a 500M core file, and we died allocating a 2M buffer for
an image in nsImageGTK::Init, called from ImageRendererImpl::NewPixmap.
Note: casual inspection of ::NewPixmap shows that it drops nsImageGTK
objects (or even buffers) on the floor if various other calls fail. (For
example, if dc->GetILColorSpace() fails, or img->Init() fails.)
~2M per image makes sense: a 1024x768 JPEG at 24 bits per pixel
decodes to about 2.25MB. The page has about 550 such images, so the
obvious memory usage for the decoded images would be around 1.2GB,
unless some attempt is made to limit memory by decoding images
on-the-fly as needed for display, or at least by falling back to
on-the-fly decoding once the memory used by images grows too large.
Perhaps after some maximum usage (by current-page images) is hit,
for the rest (or for all images) cache only the encoded data, and
decode it as needed into a not-too-large LRU cache. (LRU, to avoid
spending all our time re-decoding the same images, while not keeping
decoded versions of all the non-visible images.)
Hmmmm.
Or, as a simple cutoff, stop decoding images once we're using more
than some number of bytes for decoded images (say 50MB). This would
avoid the crash at the cost of breaking functionality in extreme
cases. It could be a prefs item (I don't think a UI for it is
needed). That would save a lot of coding over the fancy/correct
solution while still avoiding the crash cases.
--
Randell Jesup, Worldgate Communications, ex-Scala, ex-Amiga OS team ('88-94)
[EMAIL PROTECTED]