There are different use cases for this, I guess. I believe your use case
involves a single long-lived context which exists primarily because of the
large cache attached to it.

My use case is lots of very short-lived contexts with data that I really
want to stay cached for the entire (very short) life of the context -- one
HTTP request -- and then be evicted. I don't care how big the cache
capacity is; unlimited is fine, because I *need* all the data right at that
time. I also don't care about expiration time, because once the context is
finished being used (the HTTP request is done) I don't need the data
anymore.
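To make the idea concrete, here is a minimal sketch of a request-scoped cache of the kind described above: unbounded by design, and dropped in one shot when the request ends. All names here (RequestScopedCache, the key format) are hypothetical illustrations, not Cayenne's query-cache API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a cache that lives exactly as long as one HTTP
// request. No max size and no expiration, because every entry is needed
// for the duration of the request and nothing beyond it.
public class RequestScopedCache {
    private final Map<String, Object> entries = new HashMap<>(); // unbounded by design

    public Object get(String key) {
        return entries.get(key);
    }

    public void put(String key, Object value) {
        entries.put(key, value);
    }

    // Called when the request completes. Dropping the entries makes them
    // eligible for GC immediately, with no shared global cache holding on
    // to them.
    public void clear() {
        entries.clear();
    }

    public static void main(String[] args) {
        RequestScopedCache cache = new RequestScopedCache();
        cache.put("user:42", "Jane");          // cache during the request
        System.out.println(cache.get("user:42")); // prints Jane
        cache.clear();                          // request done: all entries go away
        System.out.println(cache.get("user:42")); // prints null
    }
}
```

The contrast with a shared LRU query cache is the eviction trigger: here it is the end of the request, not capacity pressure or a TTL.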

On Fri, Apr 13, 2018 at 12:42 AM Andrus Adamchik <and...@objectstyle.org>
wrote:

> > The docs suggest(ed?) that local cache was local to that object context,
> > but it really ends up being global.
>
>
> What is real? :) https://www.youtube.com/watch?v=-niPJMSeh34
>
> It is local from the app perspective. Query cache is a common memory
> region that can be used by multiple contexts. Just like with Java heap,
> when an object goes out of scope, it is not immediately GC'd, but since it
> can be GC'd on demand, that memory is effectively available to the app. If
> the entries are not in use (and the cache is properly sized against
> available memory), expired or not, they will be removed automatically when
> you add more entries to the cache.
>
> In fact we decided against a "localized" implementation exactly because it
> has no upper boundary on memory consumption, so it was deemed unsafe in a
> real app.
>
> I guess it wouldn't hurt to build/document recipes for aggressive removal
> of expired entries (based on Caffeine or EhCache API). Though since my
> caches usually operate at 100% capacity and are constantly churned, I never
> felt a big need to do that.
>
> Andrus
>
>
> > On Apr 13, 2018, at 1:43 AM, Lon Varscsak <lon.varsc...@gmail.com> wrote:
> >
> > I agree, I had the same issue.  The docs suggest(ed?) that local cache
> > was local to that object context, but it really ends up being global.  I
> > had a couple of Caffeine solutions in place too that did bind it closer
> > to the EC.  I've since scrapped that (not sure why); what I have now at
> > least ties the cache to my web session, so I'm secure in knowing that at
> > least they'll go away then.
> >
> > -Lon
> >
> >
> > On Thu, Apr 12, 2018 at 12:06 PM, John Huss <johnth...@gmail.com> wrote:
> >
> >> My main problem was that the docs imply that locally cached things are
> >> tied to a single object context, giving the expectation that when the
> >> context goes away the locally cached things go away too. This is only
> >> superficially true -- indeed you can't access them anymore, but the
> >> objects are still in the cache taking up space. I can't see a benefit of
> >> having it work this way.
> >>
> >> This wouldn't be a problem if the cache was set to expire entries, but
> >> if you are only intending to use local caching you wouldn't think you
> >> need that. My contexts are all short lived and the cache is just a
> >> convenience to avoid having to keep passing a reference to a previously
> >> fetched object around while generating an HTTP response -- this is a
> >> good way to handle objects that you don't want to define an explicit
> >> relationship to, but still need to access occasionally.
> >>
> >> Additionally, the fact that the entire state of the object context is
> >> prevented from being garbage collected while these objects are in the
> >> cache makes the problem much worse if you happen to be fetching a lot of
> >> data but only caching a small amount.
> >>
> >> On Thu, Apr 12, 2018 at 12:20 AM Andrus Adamchik <and...@objectstyle.org>
> >> wrote:
> >>
> >>> Yeah, LRU caches are prone to individual cache entry size fluctuations.
> >>> With large active caches this averages out (more or less), but is still
> >>> an issue.
> >>>
> >>> Somehow I overlooked Caffeine. Looks interesting. Let me try to switch
> >>> a few projects.
> >>>
> >>> Andrus
> >>>
> >>>> On Apr 12, 2018, at 6:45 AM, Aristedes Maniatis <a...@maniatis.org> wrote:
> >>>>
> >>>> On 11/4/18 11:28pm, Andrus Adamchik wrote:
> >>>>> Also EhCache may actively remove expired items (?), but OSCache
> >>>>> certainly did not, and it was not a problem either, also because of
> >>>>> maxsize/LRU.
> >>>>
> >>>> Actually that's a problem in EhCache that had us leave it and use
> >>>> Caffeine instead. EhCache only tries to clean up memory when you write
> >>>> a new item to the cache, and it is hardcoded to try to evict two items
> >>>> for every new item you add.
> >>>>
> >>>> This is a real problem if you try to add a new 100MB query result and
> >>>> it happens to try to evict two 1kB queries already in the cache. Boom,
> >>>> your app runs out of memory and dies. The response from the EhCache
> >>>> people was that it works best when all items are roughly the same
> >>>> size. Unfortunately the Cayenne query cache can easily add items with
> >>>> hugely different sizes.
> >>>>
> >>>>
> >>>> We found that Caffeine gives us a little more control over this
> >>>> process ( https://github.com/ben-manes/caffeine/wiki/Cleanup ) and it
> >>>> is still a trivial replacement since it implements JCache.
> >>>>
> >>>> Caching is definitely not one of those "click this box to make
> >>>> everything faster and forget about it" things.
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> Oh, hi everyone. I know I've been absent for a while... my life has
> >>>> been a little crazy.
> >>>>
> >>>>
> >>>> Ari
> >>>
> >>
>
>
