OK, that's what I thought after making the same modification, but I saw some strange behavior with it: rendering was very slow and RAM usage went through the roof.

I ran the following test: load a test scene with two different cache memory limits, 750 MB and 1 GB. With one gig, the render took about 20 seconds and RAM went up to around 1 GB. Subsequent renders are then very fast, so I guess the cache isn't completely full.

With 750 MB things got weird: the render took more than 5 minutes, with RAM climbing and dropping repeatedly between 750 MB and 6 GB, even though I think I have only around 1 GB of maps. I could understand it needing more time, but not while using that much memory, and I still don't understand why RAM usage climbs like that.
(Memory usage did drop back down after the render, as it should.)

I know I'm not using the library the way I'm supposed to, not using tiled images and so on, but I'm trying to cover as many use cases as I can. For example, I prevented the use of non-tiled images, but I fear that one day someone will use a single-tile image, so I need to check the behavior in that case.
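As a safety net for that case, I'm thinking of turning on "autotile", so the cache breaks untiled files into virtual tiles instead of holding each whole file as one tile. A minimal sketch of what I mean (untested, and the 64-pixel tile size is just an example, not a recommendation):

  #include <OpenImageIO/imagecache.h>
  using namespace OIIO;

  ImageCache *ic = ImageCache::create (true /*shared*/);
  // Cap the cache, and break untiled files into 64x64 virtual tiles
  // so an 8k*8k scanline image is never held as one giant tile.
  ic->attribute ("max_memory_MB", 1024.0f);
  ic->attribute ("autotile", 64);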


On 10/29/2012 01:01 AM, Larry Gritz wrote:
The assertion is because we didn't think this would ever happen.

The ImageCache is really heavily used, so I doubt it's leaking.  But I can imagine cases where, with multiple threads and untiled images (and "autotile" not enabled, so each entire file is one tile), threads can touch tiles in a pattern such that it's very difficult to ever find "not recently used" tiles to free.

I'm proposing a fix for this:

https://github.com/OpenImageIO/oiio/pull/443

This issues an error rather than asserting, and then just exits the loop, allowing a (hopefully temporary) excess of file handles and/or tiles.
Presumably, taking too much memory is a lesser evil than terminating.
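In other words, the shape of the change is roughly this (an illustrative sketch only, not the actual patch -- see the PR for the real code):

  // Before: abort if we loop too many times without finding
  // enough "not recently used" tiles to free:
  //     ASSERT (full_loops < 100);
  // After: record an error and bail out of the freeing loop,
  // letting the cache temporarily exceed its limits:
  if (full_loops > 100) {
      error ("Unable to free any tiles from the cache");
      break;
  }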

        -- lg


On Oct 24, 2012, at 9:01 AM, Michel Lerenard wrote:

To complete my previous post: I made tiled versions of the images I was using with maketx.
It does not crash anymore.
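For reference, the conversion was along these lines (from memory, so treat the exact flags as illustrative; the tile size is just an example):

  maketx map.tif -o map.tx
  maketx --tile 64 64 map.tif -o map.tx    # or with an explicit tile size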

I am now convinced there is an issue with the cache release function. Why use an assert instead of simply exiting the loop?

On 10/24/2012 05:24 PM, Michel Lerenard wrote:
Hi everyone,

I have a problem with an assert firing in ImageCacheImpl::check_max_mem: on line 1952, ASSERT (full_loops < 100) becomes false.

I get this error on a scene with 4 meshes, each having 3 TIFF maps applied through different material channels. Each mesh has its own maps, so I have 12 maps total. (File sizes vary between 16 and 70 MB; the maps are 8k*8k.)

Depending on how much cache I set, I get the error sooner or later.  If I set more than 1.5 GB, which seems to be enough to load everything, there is no problem. I tried raising the assert's loop limit to 1000, thinking maybe there really were a lot of tiles to free, but it crashes in exactly the same way.

I run several evaluations at the same time (up to 24) through a single texture system.
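The setup is essentially this (a simplified sketch of my usage, not the real code):

  #include <OpenImageIO/texture.h>
  using namespace OIIO;

  // One shared TextureSystem (and thus one underlying ImageCache)
  // used by all evaluation threads.
  TextureSystem *ts = TextureSystem::create (true /*shared*/);
  ts->attribute ("max_memory_MB", 750.0f);
  // Up to 24 worker threads then call ts->texture(...) concurrently
  // on this same instance.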
I made a test restricting the number of evaluations to 1, and it worked. It seems to me that with several evaluations running, we try to free memory but can't, because everything is in use. So I uncommented the log that displays the amount of memory freed, and I was surprised by the numbers:
  Freeing tile, recovering 67108864
  Freeing tile, recovering 67108864
  Freeing tile, recovering 67108864
  Freeing tile, recovering 201326592

Does this mean the file has no tiles, or a single tile? (67108864 is exactly 8192 * 8192, and 201326592 is three times that, so each freed "tile" seems to be an entire 8k map.) In those cases, shouldn't the image bypass the cache?

Am I using the library correctly or not? I'm not sure...

Michel

--
Larry Gritz
[email protected]



_______________________________________________
Oiio-dev mailing list
[email protected]
http://lists.openimageio.org/listinfo.cgi/oiio-dev-openimageio.org
