For sure I'd suggest increasing the default to something that better
reflects modern machines.

However, just to play devil's advocate for 3D rendering: a smaller cache
that isn't thrashing could actually lead to better performance. How? The
memory OIIO doesn't claim can be used by the OS's filesystem cache, and
since that cache holds the compressed texture files, it can effectively
hold more image data than OIIO's cache in the same footprint. This can be
especially advantageous when rendering one frame after another, where each
frame starts from a cold OIIO cache: we're likely to read many of the same
textures, so we'll often be much faster reading them from the OS cache than
from some faraway, overburdened file server.
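
(For anyone who wants to experiment with that trade-off, the knob is the
cache's "max_memory_MB" attribute -- a rough sketch, with the 1 GB figure
purely illustrative:)

    #include <OpenImageIO/imagecache.h>

    // Keep OIIO's tile cache deliberately small and leave the rest of RAM
    // to the OS filesystem cache (the number here is just an example).
    OIIO::ImageCache* ic = OIIO::ImageCache::create(true /*shared*/);
    ic->attribute("max_memory_MB", 1024.0f);
    // ... render frames ...
    OIIO::ImageCache::destroy(ic);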

On Fri, Nov 12, 2021 at 3:47 PM Nathan Rusch <nathanru...@gmail.com> wrote:

> Hey Larry,
>
> I don't think we need a specialized class. There is already an ImageBuf
> constructor that takes an optional pointer to an ImageCache.
>
> Whoops, my mistake for overlooking that!
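>
> For reference, the pattern I had overlooked is roughly this (a sketch from
> memory; the file path and cache size are just placeholders):
>
>     #include <OpenImageIO/imagebuf.h>
>     #include <OpenImageIO/imagecache.h>
>
>     // A private (non-shared) cache for a particular group of buffers.
>     OIIO::ImageCache* ic = OIIO::ImageCache::create(false /*not shared*/);
>     ic->attribute("max_memory_MB", 512.0f);
>
>     // The existing constructor accepts the cache that backs the buffer.
>     OIIO::ImageBuf buf("foo.exr", 0 /*subimage*/, 0 /*miplevel*/, ic);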
>
> Contracting -- maybe? I think that a mode where it can grow if it detects
> bad thrashing is a lot easier to implement than knowing when it's safe to
> re-contract. I thought of it as a one-way ratchet, but maybe not?
>
> Yeah, I think it would be more complex. I guess I've just been imagining
> that any application that puts even a moderate amount of stress on the
> cache would likely end up triggering the growth mechanism. Maybe the idea
> would be for the detection algorithm to be fairly heartless, but still, as
> a user of the API, if the cache size only goes in one direction, then it
> seems like I might as well just use a static cache at the "hard max"
> threshold and call it a day.
>
> To frame this another way, if I know I have enough memory to let the cache
> grow into, then I might as well just allocate it to the cache up front.
> Otherwise, if I gamble with a variable-sized cache and the growth exceeds
> the amount of free RAM, I think the end result would be a lot worse than a
> thrashed ImageCache that would otherwise still fit in RAM.
>
> That said, there's a very real chance I'm not considering enough
> applications, since most of my experience with ImageCache is in the
> context of 3D rendering.
>
>  I do like your idea of "IBs are dumb (local mem) unless you tell it an IC
> to use." That makes sense to me, and I think it will improve performance
> across the board for the majority of applications where you aren't dealing
> with enough image data to need the scalability of IC backing.
>
> Cool, glad to hear that!
>
> -Nathan
>
> On Nov 12, 2021, at 12:07 PM, Nathan Rusch <nathanru...@gmail.com> wrote:
>
> Interesting questions. I'll add some opinions to the mix.
>
> 1. It's 2021, computer memories are bigger and so are our images. Should I
> just raise the default cache size to 1GB? More?
>
> I think raising the default to 1 or 2 GB is totally reasonable.
>
> 2. Should ImageBuf internally track the total amount of local memory held
> by all ImageBufs, have them hold their memory locally until the total
> reaches a threshold (of maybe a few GB), and only fall back to ImageCache
> backing once the total goes above that threshold? That would probably make
> them a bit faster than they are now in most cases, until you have a bunch
> of them and the cache starts to kick in.
>
> To be honest, I've always found the default cache-backed ImageBuf behavior
> a little odd from an API standpoint. If I were coming at the API with zero
> prior knowledge, I think I would intuitively expect some slightly different
> organization and usage patterns for the types at play:
> - ImageBuf would be a "lowest common denominator" type of class.
> Instantiating it directly would use local pixel buffer storage by default.
> - ImageCache would provide a method for creating cache-backed ImageBufs.
> Either that, or ImageBuf would include a constructor overload or other
> static factory function that allowed an ImageCache to be passed as the
> backing store.
>     - These patterns might make more sense if they instead involved an
> interface class (e.g. something like an abstract CacheInterface, with an
> implementation for ImageCache), but the end result would be about the same.
>     - Either way, I think this would make it easy for an application to
> manage multiple cache "pools" on its own terms.
>
> I know that's getting a bit off track from your question, but I generally
> like the idea of ImageBuf being relatively "dumb" out of the box, with any
> "smart" behavior implemented on a subclass of some kind (e.g.
> CachedImageBuf), or via another API layer.
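>
> Purely to illustrate the shape I'm imagining, a hypothetical sketch (none
> of these names exist in OIIO today):
>
>     #include <string>
>
>     // Hypothetical only: an abstract backing-store interface...
>     struct CacheInterface {
>         virtual ~CacheInterface() = default;
>         virtual bool read_pixels(const std::string& file, void* dst) = 0;
>     };
>
>     // ...and a "dumb" buffer that only touches a cache if handed one.
>     struct SimpleImageBuf {
>         explicit SimpleImageBuf(const std::string& name)
>             : m_name(name) {}                      // local pixel storage
>         SimpleImageBuf(const std::string& name, CacheInterface* cache)
>             : m_name(name), m_cache(cache) {}      // opt-in cache backing
>     private:
>         std::string m_name;
>         CacheInterface* m_cache = nullptr;         // null => local storage
>     };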
>
> 3. Allow the cache to be self-adjusting if it sees thrashing behavior.
>
> I agree with Phil on this: I wouldn't want this behavior unless it was
> opt-in, since it could easily wreak havoc on a 3D render or other heavy
> TextureSystem client.
>
> If implemented, I also agree that it should hinge on more than just an
> enabled/disabled state. The simple parameterization you mentioned (target
> size + hard cap) sounds like a reasonable baseline, although my next
> question would be whether the cache would "contract" at some point as
> pressure backs off, to try to maintain the target size.
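>
> (Very roughly what I'm picturing for that parameterization -- again a
> hypothetical sketch, not anything resembling the real API:)
>
>     #include <algorithm>
>     #include <cstddef>
>
>     // Hypothetical "target + hard cap" policy, fed by some thrash metric
>     // such as the ratio of redundant tile reads.
>     struct AdaptiveCachePolicy {
>         size_t target_bytes;       // preferred steady-state size
>         size_t hard_max_bytes;     // never grow past this
>
>         size_t next_size(size_t current, double redundant_read_ratio) const {
>             if (redundant_read_ratio > 0.10)                 // thrashing: grow
>                 return std::min(current * 2, hard_max_bytes);
>             if (redundant_read_ratio < 0.01 && current > target_bytes)
>                 return std::max(current / 2, target_bytes);  // ease off: contract
>             return current;
>         }
>     };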
>
> -Nathan
>
_______________________________________________
Oiio-dev mailing list
Oiio-dev@lists.openimageio.org
http://lists.openimageio.org/listinfo.cgi/oiio-dev-openimageio.org
