It's true that we have fast file caches, but we are not reading the same 
textures over and over and over again.

I don't want to make many more specific comments until I've seen the logs 
and understand better what's going on. Without that, it's just speculation.

I hope you'll paste the OIIO section of the stats here; it might be 
instructive for me to comment on what I see there. The rest of the log you 
can send me privately, mostly because it will be big and full of irrelevant 
information, but I'll happily take a look and see what jumps out at me.
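
In case it's useful context, that same statistics block can be dumped 
straight from OIIO itself. A minimal sketch against the 1.x C++ API (the 
level argument controls verbosity, and higher levels add per-file detail):

    #include <iostream>
    #include <OpenImageIO/imagecache.h>

    int main()
    {
        // Grab the process-wide shared cache, as a host app would.
        OIIO::ImageCache *ic = OIIO::ImageCache::create(true /*shared*/);
        // ... texture access happens here, or in the host application ...
        std::cout << ic->getstats(2 /*verbosity*/) << "\n";
        OIIO::ImageCache::destroy(ic);
        return 0;
    }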



On Feb 19, 2015, at 10:36 AM, Chad Dombrova <[email protected]> wrote:

> 
> 
> On Thu, Feb 19, 2015 at 9:45 AM, Larry Gritz <[email protected]> wrote:
> Chad, are you ensuring that all of your textures are MIPmapped and tiled, and 
> that your shaders are computing derivatives properly (i.e., not ending up 
> with 0 derivs that inadvertently force point sampling at the highest-res MIP 
> level)?
> 
> Yep, Arnold is set to abort on non-MIP-mapped textures. As for 0 
> derivatives, I think Arnold prints a warning in this case, but I can't 
> remember for certain. Any ideas on the easiest way to determine this?
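> 
> For the tiled/MIPmapped half, at least, a few lines of OIIO make a quick 
> standalone check. A sketch against the 1.x API (newer releases return a 
> unique_ptr from open() instead of a raw pointer):
> 
>     #include <iostream>
>     #include <OpenImageIO/imageio.h>
> 
>     int main(int argc, char *argv[])
>     {
>         if (argc < 2)
>             return 1;
>         // Open the file and note whether it is tiled.
>         OIIO::ImageInput *in = OIIO::ImageInput::open(argv[1]);
>         if (!in)
>             return 1;
>         OIIO::ImageSpec spec = in->spec();
>         bool tiled = spec.tile_width > 0;
>         // Count MIP levels by seeking until it fails.
>         int miplevels = 1;
>         while (in->seek_subimage(0, miplevels, spec))
>             ++miplevels;
>         std::cout << argv[1] << ": tiled=" << tiled
>                   << ", miplevels=" << miplevels << "\n";
>         in->close();
>         delete in;  // 1.x idiom; later releases add ImageInput::destroy()
>         return 0;
>     }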
> 
> We routinely render frames that reference 1TB of texture or more, using a 1GB 
> texture cache. Or is it 512 MB? I can't remember, which underscores that it's 
> such a non-problem that we almost never discuss changing it from one show to 
> another, let alone mid-render!
> 
> We do too; we just end up with hundreds of GB of textures re-read from 
> disk and a disgusting amount of time spent on file I/O, and the upshot is 
> 45% CPU utilization. The difference between Luma and Sony might be that 
> you have fast caching servers like Avere, which reduce the cost of 
> re-reading textures. In our case, if we increase the texture cache to 
> avoid re-reading textures, we dramatically improve performance. It's 
> about $300,000 cheaper than an Avere cluster :)
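> 
> To be concrete, "increasing the texture cache" just means raising OIIO's 
> cache cap, which the renderer exposes as an option. At the OIIO level it 
> comes down to a single attribute (sketched here against the 1.x API):
> 
>     #include <OpenImageIO/texture.h>
> 
>     int main()
>     {
>         OIIO::TextureSystem *ts = OIIO::TextureSystem::create(true /*shared*/);
>         // Raise the cache cap to 8 GB (the default is far smaller).
>         ts->attribute("max_memory_MB", 8192.0f);
>         // ... render ...
>         OIIO::TextureSystem::destroy(ts);
>         return 0;
>     }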
> 
> I'm going to go out on a limb and suggest that if you are having memory 
> issues that are making you contemplate exotic changes to the texture cache 
> policies, there may be other options, approaches, or changes to your pipeline 
> that would be much more effective in addressing the root cause.
> 
> I'm curious what types of options, approaches, or changes you might be 
> thinking of.
> 
> You may be right, but I *think* that our situation is not that uncommon:
> - we have a lot of large textures
> - we have servers which don't have enough cache
> - we have artists who don't know how (or don't have time) to read render 
> stats and optimize their renders
> 
> One possibility is that we end up reading and using tiles that are 
> higher-res than needed. Is there a stat that would help determine this?
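> 
> (My guess is that the per-file breakdown in the verbose stats would show 
> it, something like the sketch below, but you'd know better than I would 
> whether the MIP-level detail is in there:)
> 
>     #include <iostream>
>     #include <OpenImageIO/texture.h>
> 
>     int main()
>     {
>         OIIO::TextureSystem *ts = OIIO::TextureSystem::create(true);
>         // ... texture lookups ...
>         // Higher verbosity levels print per-file tile/byte counts.
>         std::cout << ts->getstats(3 /*verbose*/, true /*cache stats too*/);
>         return 0;
>     }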
> 
> I'd be happy to look at a full Arnold render log output from a troublesome 
> frame, and see if anything stands out.
> 
> Thanks, I will take you up on that!  I'll look for some good candidates.
> 
> -chad
> 

--
Larry Gritz
[email protected]


