Hi Laurent,

On 7/8/15 7:18 AM, Laurent Bourgès wrote:
- cached arrays should be wrapped by a WeakReference to reduce the
memory footprint => create a few XXXArrayGrowthCache instances and wrap
the internal arrays[][] with a WeakReference (as I did in the
RendererContext)
- share caches between use cases: reuse an IntArrayGrowthCache instance
(3 caches for now) shared among use cases:

I pointed these out as TBD in my comments.

- introduce references to keep initial arrays: use a
CacheReference(IntArrayGrowthCache, initial size) that wraps the cache
methods and deals efficiently with initial arrays.

I handled initial arrays as simply a differently sized first bucket. The class that determines the growth sizes takes an initial size parameter that it uses for the first bucket, and then sets the sizes of all subsequent buckets to growth values based on an indicated multiple.
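
As a rough sketch of what I mean (the class name matches the ExponentialGrowthStrategy I use below, but the constructor and method shapes are just illustrative, not final):

// Illustrative only: the first bucket is exactly the initial size,
// each later bucket is the previous one times a fixed multiple.
public final class ExponentialGrowthStrategy {
    private final int initialSize;
    private final int multiple;

    public ExponentialGrowthStrategy(int initialSize, int multiple) {
        this.initialSize = initialSize;
        this.multiple    = multiple;
    }

    // size of the array stored in the given bucket (bucket 0 = initial size)
    public int bucketSize(int bucket) {
        int size = initialSize;
        for (int i = 0; i < bucket; i++) {
            size *= multiple;
        }
        return size;
    }
}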

- No use of Refs anywhere, TBD
- No attempts to share the base array buckets between multiple caches.

OK, I will try, or would you prefer to do it?

I just wanted to get the skeleton on the table for discussion and to illustrate my concepts. I can probably do some work on it, or feel free if you want to take it - mainly I wanted to see if you had any issues that might make this a dead end before I went too far with it.

- DO_CLEAN_DIRTY and STATS are both handled via delegating wrappers which means 
they can be present in production code and enabled by a command line variable 
at runtime with absolutely no performance impact if they aren't used.

OK. I adopted static flags since HotSpot is able to remove "dead" /
"unused" code => dynamic vs. static optimization.

I realize that the static flags have no runtime cost, but they are also only changeable at compile time. The wrapper classes also have no runtime performance impact if you don't use them, but they can be enabled dynamically without a recompile. That was the advantage I was describing.
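
To make the contrast concrete - the names and the property below are made up for illustration (baseSource stands for the undecorated cache), not what the final code would use:

// Compile-time flag: HotSpot removes the branch entirely,
// but flipping it means recompiling.
static final boolean DO_STATS = false;

// Runtime wrapper: the plain source has no stats code at all; the
// wrapper is only put in front of it when the flag is set at startup.
interface IntArraySource {
    int[] getArray(int length);
}

final class StatsIntArraySource implements IntArraySource {
    private final IntArraySource delegate;
    private long requests;
    StatsIntArraySource(IntArraySource delegate) { this.delegate = delegate; }
    public int[] getArray(int length) {
        requests++;                       // bookkeeping lives only in the wrapper
        return delegate.getArray(length);
    }
}

// selected once, e.g. with -Dmarlin.stats=true (property name illustrative)
IntArraySource source = Boolean.getBoolean("marlin.stats")
        ? new StatsIntArraySource(baseSource)
        : baseSource;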

I'm envisioning these getting used something like:

MarlinConst:
public CacheGrowthStrategy defaultStrategy = new ExponentialGrowthStrategy(...);

Renderer:
private IntArrayGrowthSource edgeCache =
    IntArrayGrowthCache.getDirtyInstance(defaultStrategy, "Edge Array");

=> I would introduce 4 shared ArrayGrowthSource instances (Clean Int,
Dirty Int, Dirty Float and Dirty Byte) accessible in the RendererContext
(thread context) and provide a few CacheReference helpers:

private IntArrayGrowthSource edgeCacheRef =
     RendererContext.getCleanIntCacheRef(initialSize, "Edge Array");

This CacheReference class handles both the default array (initialSize
capacity, as I did) and hides the complexity of proper, shared Cache
usage.

I'd have to see it to understand what you are getting at. Was there a reason to handle initialSize here instead of baking it into the growth strategy?

                        ...jim
