> On Dec 10, 2019, at 6:51 PM, John Huss <[email protected]> wrote:
> 
>>> One solution is to null out the ObjectContext on any objects that are
>>> inserted into the Query Cache. This solves both problems above, and it
>>> seems logical since when the objects are retrieved from the cache they
>> will
>>> be placed into a new context anyway. This should work, but the
>>> implementation has been tricky.
>> 
>> This will not work at the framework level, as Cayenne doesn't know who
>> else besides the cache references cached objects. So you'd be breaking the
>> state of the object while it is still exposed to the world.
> 
> My approach was to:
> 1) make a localObject copy of the object first
> 2) null the ObjectContext on the copy
> 3) store it in the cache
> 
> That way only the cached object is affected. Based on some VERY simple
> tests this seems to work, but I haven't tried with any real data or real
> apps yet.

That's an interesting approach. "localObject" has to be called on some context, 
so I guess you have another hidden context for this operation? And I assume you 
need to call "localObject" again when retrieving that object from cache into 
another context. So since you need to go through all these steps, you might as 
well use the built-in shared cache.
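
If I'm reading the approach right, the full round trip would look roughly 
like the sketch below. This is just my reading, not your code: the 
"DetachingQueryCache" name and the key scheme are stand-ins, and I'm assuming 
the 4.x-style "localObject(Persistent)" signature:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import org.apache.cayenne.ObjectContext;
    import org.apache.cayenne.Persistent;
    import org.apache.cayenne.configuration.server.ServerRuntime;

    class DetachingQueryCache {
        private final Map<String, List<Persistent>> cache = new ConcurrentHashMap<>();
        private final ServerRuntime runtime;

        DetachingQueryCache(ServerRuntime runtime) {
            this.runtime = runtime;
        }

        void put(String key, List<? extends Persistent> queryResult) {
            // hidden throwaway context used only for making copies
            ObjectContext scratch = runtime.newContext();
            List<Persistent> detached = new ArrayList<>();
            for (Persistent o : queryResult) {
                Persistent copy = scratch.localObject(o); // 1) copy
                copy.setObjectContext(null);              // 2) detach the copy
                detached.add(copy);
            }
            cache.put(key, detached);                     // 3) store
        }

        List<Persistent> get(String key, ObjectContext callerContext) {
            List<Persistent> hit = cache.get(key);
            if (hit == null) {
                return null; // miss; caller runs the query
            }
            List<Persistent> attached = new ArrayList<>();
            for (Persistent o : hit) {
                attached.add(callerContext.localObject(o)); // reattach on read
            }
            return attached;
        }
    }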


>> * Is there a difference in terminology of what a "leak" is? Do you view it
>> as just an overuse of memory but with a fixed upper boundary, or do you see
>> it as a constant expansion that eventually leads to the app running out of
>> memory no matter how much memory it had?
> 
> The constant expansion is the real problem. The overuse of memory is
> undesirable, but not a real problem.

So we are on the same page about the definition of a leak. 

> I would prefer to leave the cache groups I'm using Local Caching for
> unbounded, just so that I can have consistent (unsurprising) behavior
> (a given code path either always hits the cache or never does).

I am starting to understand your thinking here (though still not the underlying 
reasons for it). To me a cache is naturally probabilistic: entries expire, or 
fall off the end of an LRU map (see the sketch below). You are talking about a 
"consistent cache". So let's delve into that scenario. 
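
To be concrete about "fall off the end", this is the standard bounded-LRU 
idiom in plain Java (the cap is arbitrary):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // bounded LRU map: with accessOrder=true, the least-recently-accessed
    // entry is dropped once the map grows past MAX_ENTRIES
    Map<String, Object> lru = new LinkedHashMap<String, Object>(16, 0.75f, true) {
        private static final int MAX_ENTRIES = 1000; // arbitrary cap

        @Override
        protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
            return size() > MAX_ENTRIES;
        }
    };

Any given entry may or may not still be there on the next read; that's the 
probabilistic part.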

If you have a lot of possible cache keys (not cache groups, but distinct 
query/parameter combinations), an unbounded cache will always leak. With a 
fixed set of keys and many contexts the unbounded cache will leak as well, 
and to add insult to injury, it will not reuse entries between the contexts.

It will NOT leak with a fixed set of keys and a singleton context (but will 
require calling "localObject" on each "get"). And it will not leak with a fixed 
set of keys and a shared cache. Do you see any issue with either of these 
solutions? 
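
For reference, the shared cache version is a one-liner per query (assuming the 
4.x "ObjectSelect" API; the "Artist" entity and the cache group name are just 
placeholders):

    // shared (cross-context) cache, scoped to a cache group; repeated runs
    // of the same query serve results from the cache instead of the DB
    List<Artist> artists = ObjectSelect.query(Artist.class)
            .sharedCache("artist-group")
            .select(context);

    // the per-context local cache variant, for comparison:
    List<Artist> artists2 = ObjectSelect.query(Artist.class)
            .localCache("artist-group")
            .select(context);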

And even though the solutions above should solve it, the idea of an unbounded, 
never-expiring cache still seems dangerous. Sooner or later a programming error 
will lead to a memory use explosion. Could you maybe expand on why a 
probabilistic cache is not a good fit for your app? 

Andrus
