On Wed, Dec 11, 2019 at 11:38 AM Andrus Adamchik <and...@objectstyle.org> wrote:
> > On Dec 10, 2019, at 6:51 PM, John Huss <johnth...@gmail.com> wrote:
> >
> >>> One solution is to null out the ObjectContext on any objects that are
> >>> inserted into the Query Cache. This solves both problems above, and it
> >>> seems logical since when the objects are retrieved from the cache they
> >>> will be placed into a new context anyway. This should work, but the
> >>> implementation has been tricky.
> >>
> >> This will not work at the framework level, as Cayenne doesn't know who
> >> else besides the cache references cached objects. So you'd be breaking
> >> the state of the object while it is still exposed to the world.
> >
> > My approach was to:
> > 1) make a localObject copy of the object first
> > 2) null the ObjectContext on the copy
> > 3) store it in the cache
> >
> > That way only the cached object is affected. Based on some VERY simple
> > tests this seems to work, but I haven't tried with any real data or
> > real apps yet.
>
> That's an interesting approach. "localObject" has to be called on some
> context, so I guess you have another hidden context for this operation?
> And I assume you need to call "localObject" again when retrieving that
> object from cache into another context. So since you need to go through
> all these steps, you might as well use the built-in shared cache.

Yes, obtaining a new ObjectContext to use for localObject is the sketchy
part here. The point of this little dance is to avoid retaining extra
memory other than the cached objects, so using the shared cache wouldn't
improve that situation. I'm not actually using this yet, just
experimenting. It looks like the extra memory issue can be addressed more
directly. (A rough sketch of what I'm experimenting with is below, after
your remaining questions.)

> >> * Is there a difference in terminology of what a "leak" is? Do you
> >> view it as just an overuse of memory but with a fixed upper boundary,
> >> or do you see it as a constant expansion that eventually leads to the
> >> app running out of memory no matter how much memory it had?
> >
> > The constant expansion is the real problem. The overuse of memory is
> > undesirable, but not a real problem.
>
> So we are on the same page about the definition of a leak.
>
> > I would prefer to leave my cache groups unbounded for groups I'm using
> > Local Caching for, just so that I can have consistent (unsurprising)
> > behavior (always hits the cache or not for a given code path).
>
> I am starting to understand your thinking here (though still not the
> underlying reasons for it). To me a cache is naturally probabilistic
> (entries expire, or fall off the end of an LRU map). You are talking
> about a "consistent cache". So let's delve into that scenario.
>
> If you have a lot of possible cache keys (not cache groups, but rather
> combinations of query permutations), an unbounded cache will always leak.
> With a fixed set of keys and many contexts the unbounded cache will leak
> as well, and to add insult to injury, it will not reuse entries between
> the contexts.
>
> It will NOT leak with a fixed set of keys and a singleton context (but
> will require calling "localObject" on each "get"). And it will not leak
> with a fixed set of keys and a shared cache. Do you see any issue with
> either of these solutions?
>
> And even though the solutions above should solve it, an idea of an
> unbounded, never-expiring cache still seems dangerous. Sooner or later a
> programming error will lead to a memory use explosion. Could you maybe
> expand on why a probabilistic cache is not a good fit for your app?
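For reference, the "dance" I mentioned looks roughly like this. This is
only a sketch: the class and method names are made up, a plain Map stands
in for Cayenne's query cache, and whether localObject is happy with a
detached (null-context) source object is exactly the part I've only
lightly tested.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import org.apache.cayenne.ObjectContext;
    import org.apache.cayenne.Persistent;
    import org.apache.cayenne.configuration.server.ServerRuntime;

    public class DetachedEntityCache {

        private final ServerRuntime runtime;
        private final Map<String, Persistent> cache = new ConcurrentHashMap<>();

        public DetachedEntityCache(ServerRuntime runtime) {
            this.runtime = runtime;
        }

        public void put(String key, Persistent source) {
            // 1) copy the object into a throwaway context created just for
            //    this call, so the caller's object is left untouched
            ObjectContext scratch = runtime.newContext();
            Persistent copy = scratch.localObject(source);

            // 2) null the ObjectContext on the copy so the cache entry
            //    doesn't pin the scratch context (or any other context)
            copy.setObjectContext(null);

            // 3) store only the detached copy
            cache.put(key, copy);
        }

        @SuppressWarnings("unchecked")
        public <T extends Persistent> T get(String key, ObjectContext target) {
            T cached = (T) cache.get(key);
            // re-attach the cached object into whatever context is asking
            return cached == null ? null : target.localObject(cached);
        }
    }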
My use case is very limited in scope. I want to have fresh data basically
all the time, but not fetch the same data twice in the same request. Once
the request is over and the request's context is out of scope, then the
local cache for that context can be cleared. So this cached data is
extremely short lived, and as a consequence the size of it doesn't really
matter (though I'm only caching small query results anyway).

I'd like to have methods like:

    public MyEntity fetchMyEntity(ObjectContext context, int propValue) {
        return ObjectSelect.query(MyEntity.class)
                .where(MyEntity.PROP_VALUE.eq(propValue))
                .localCache()
                .selectFirst(context);
    }

Where I can call this from any code path without having to worry about
whether the context has a lot of objects in it or not (sometimes it is
huge). The localCache will keep it from fetching more than once per
request. This is really just a shorthand for this:

    private MyEntity cachedMyEntity;

    public MyEntity fetchMyEntity(ObjectContext context, int propValue) {
        if (cachedMyEntity == null) {
            cachedMyEntity = ObjectSelect.query(MyEntity.class)
                    .where(MyEntity.PROP_VALUE.eq(propValue))
                    .selectFirst(context);
        }
        return cachedMyEntity;
    }

But with localCache I don't have to declare a temporary cache variable
every time I want to avoid a fetch. (A sketch of what this looks like
inside a request is below.)

> Andrus
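To make the request scoping concrete, a call site would look roughly like
this. Just a sketch: "RequestScopedCacheExample" and "handleRequest" are
made-up names, the runtime wiring is whatever the app already has, and the
second call being answered from the cache assumes the identical query is
run in the same context.

    import org.apache.cayenne.ObjectContext;
    import org.apache.cayenne.configuration.server.ServerRuntime;
    import org.apache.cayenne.query.ObjectSelect;

    public class RequestScopedCacheExample {

        private final ServerRuntime runtime; // the application's single runtime

        public RequestScopedCacheExample(ServerRuntime runtime) {
            this.runtime = runtime;
        }

        public void handleRequest(int propValue) {
            // one context per request; its local query cache lives and dies with it
            ObjectContext context = runtime.newContext();

            MyEntity first = fetchMyEntity(context, propValue);  // runs the query
            MyEntity second = fetchMyEntity(context, propValue); // answered from the local cache

            // nothing to clear: when the request ends and this context goes
            // out of scope, the cached result is garbage collected with it
        }

        // same helper as above
        public MyEntity fetchMyEntity(ObjectContext context, int propValue) {
            return ObjectSelect.query(MyEntity.class)
                    .where(MyEntity.PROP_VALUE.eq(propValue))
                    .localCache()
                    .selectFirst(context);
        }
    }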