From looking at the code, I actually don't think there's any danger in
changing the texture cache size mid-render -- but I haven't actually tested
that out.
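
If you want to experiment, the change itself is just an attribute on the
OIIO texture system -- something like the untested sketch below, assuming
you can get a handle on the shared TextureSystem:

    #include <OpenImageIO/texture.h>

    // Untested sketch: resize the tile cache while a render is in flight.
    // "max_memory_MB" is a float attribute on the TextureSystem (it forwards
    // to the underlying ImageCache); my reading of the code is that tiles
    // simply get evicted down to the new limit as new tiles come in.
    void resize_texture_cache (OIIO::TextureSystem *ts, float new_size_mb)
    {
        ts->attribute ("max_memory_MB", new_size_mb);
    }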

I agree that you shouldn't need to set the texture cache to super high
levels.  A few GB should be good enough, no?  Although 512MB is very likely
too little if you're rendering with 24+ threads (more threads == bigger
texture working set).  Definitely CC me on those logs as well.  Make sure
to set options.texture_per_file_stats to true so that we can see which
mipmap levels are being read in.
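
If it's easier to flip that from code rather than editing the .ass file, I
believe this is all it takes (untested, off the top of my head):

    #include <ai.h>

    // Turn on per-file texture stats on the global options node, so the
    // end-of-render log shows which mip levels were actually read for each
    // texture file.
    void enable_texture_stats ()
    {
        AtNode *options = AiUniverseGetOptions ();
        AiNodeSetBool (options, "texture_per_file_stats", true);
    }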

In any case, while this is a neat idea, I think it would be somewhat
challenging to get working reliably, because it's very difficult for the
renderer to keep an accurate picture of how much memory the whole process
is using at any given moment.

Just to give one example: suppose Arnold is at 10GB of memory usage and we
step into a user procedural, and inside that procedural the user allocates
20GB of data.  Arnold won't be able to check for that until control is
returned from the user procedural, and by then we might have started
swapping big time.
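
To make that concrete, the kind of check I have in mind is basically this
(Linux-only sketch, not something we actually ship):

    #include <cstdio>
    #include <cstring>

    // Read our own resident set size from /proc/self/status.  The catch is
    // that we can only run a check like this at points where control is
    // inside Arnold's own code, so an allocation made inside a user
    // procedural is invisible to us until that procedural returns.
    static long current_rss_kb ()
    {
        long rss = -1;
        FILE *f = fopen ("/proc/self/status", "r");
        if (!f)
            return rss;
        char line[256];
        while (fgets (line, sizeof(line), f)) {
            if (!strncmp (line, "VmRSS:", 6)) {
                sscanf (line + 6, "%ld", &rss);   // reported in kB
                break;
            }
        }
        fclose (f);
        return rss;
    }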

I can come up with a bunch of other ways that could cause us to temporarily
swap.  I think it's a good idea, but unless there's a serious need for
this, it's probably not worth doing, since doing it in a safe way would
take a fair bit of work.

What would be awesome is if there were an OS hook that called us back when
swapping is about to occur; then we could release resources as needed...
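
The closest thing I know of on Linux is the cgroup memory-pressure
notification -- not quite "about to swap", and it needs the render process
to be set up in a memory cgroup first, so it's not a drop-in answer.  Rough,
untested sketch with a hypothetical cgroup path and no error handling:

    #include <sys/eventfd.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>
    #include <cstdint>

    // The kernel signals an eventfd when it starts reclaiming memory in our
    // cgroup; at that point we could shrink the texture cache, flush
    // geometry, etc.  Assumes the process already lives in the
    // (hypothetical) cgroup below.
    static const char *CGROUP = "/sys/fs/cgroup/memory/arnold";

    int wait_for_memory_pressure ()
    {
        char path[256], cmd[64];
        int efd = eventfd (0, 0);
        snprintf (path, sizeof(path), "%s/memory.pressure_level", CGROUP);
        int pfd = open (path, O_RDONLY);
        snprintf (path, sizeof(path), "%s/cgroup.event_control", CGROUP);
        int cfd = open (path, O_WRONLY);
        snprintf (cmd, sizeof(cmd), "%d %d medium", efd, pfd);
        write (cfd, cmd, strlen (cmd));
        close (cfd);
        uint64_t count;
        read (efd, &count, sizeof (count));   // blocks until pressure hits
        return efd;
    }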

On Thu, Feb 19, 2015 at 11:41 AM, Larry Gritz <[email protected]> wrote:

> It's true that we have fast file caches, but we are not reading the same
> textures over and over and over again.
>
> I don't want to make a lot of more specific comments until I understand
> better what's going on by seeing the logs. Without that, it's just
> speculation.
>
> I hope you'll paste the OIIO section of the stats here, it might be
> instructive for me to comment on what I see there. The rest of the log you
> can send me privately, mostly because it's going to be big and full of
> irrelevant information, but I'll happily take a look and see what jumps out
> at me.
>
>
>
> On Feb 19, 2015, at 10:36 AM, Chad Dombrova <[email protected]> wrote:
>
>
>
> On Thu, Feb 19, 2015 at 9:45 AM, Larry Gritz <[email protected]> wrote:
>
>> Chad, are you ensuring that all of your textures are MIPmapped and tiled,
>> and that your shaders are computing derivatives properly (i.e., not ending
>> up with 0 derivs that inadvertently force point sampling at the highest-res
>> MIP level)?
>>
>
> Yep, Arnold is set to abort on non-mip-mapped textures.  As for 0
> derivatives, I think that Arnold prints a warning in this case, but I can't
> remember for certain.  Any ideas on the easiest way to determine this?
>
> We routinely render frames that reference 1TB of texture or more, using a
>> 1GB texture cache. Or is it 512 MB? I can't remember, which underscores
>> that it's such a non-problem that we almost never discuss changing it from
>> one show to another, let alone mid-render!
>>
>
> We do too -- we just end up with hundreds of GB of textures re-read from
> disk and a disgusting amount of time spent on file IO, and the upshot is
> 45% CPU utilization.  The difference between Luma and Sony might be that
> you have fast caching servers like Avere which reduce the cost of
> re-reading textures.  In our case, if we increase the texture cache to
> avoid re-reading textures, we dramatically improve performance.  It's about
> $300,000 cheaper than an Avere cluster :)
>
> I'm going to go out on a limb and suggest that if you are having memory
>> issues that are making you contemplate exotic changes to the texture cache
>> policies, there may be other options, approaches, or changes to your
>> pipeline that would be much more effective in addressing the root cause.
>>
>
> I'm curious what types of options, approaches, or changes you might be
> thinking of.
>
> you may be right, but I *think* that our situation is not that uncommon:
> - we have a lot of large textures
> - we have servers which don't have enough cache
> - we have artists who don't know how (or don't have time) to read render
> stats and optimize their renders
>
> One possibility is that we may end up with tiles being read and used which
> are higher res than needed.  Is there a stat which would help to determine
> this?
>
> I'd be happy to look at a full Arnold render log output from a troublesome
>> frame, and see if anything stands out.
>>
>
> Thanks, I will take you up on that!  I'll look for some good candidates.
>
> -chad
>
> --
> Larry Gritz
> [email protected]