2014-11-04 14:33 GMT+01:00 Sven Van Caekenberghe <s...@stfx.eu>:

> Hi Thierry,
>
> I made a prototype implementation, would this be helpful ?
>
> LRUCache>>#at: key put: value
>   "Populate the receiver by putting value under key.
>    This operation is considered neither a hit nor a miss.
>    If key is already present, replace it. Return value"
>
>   ^ self critical: [
>      (keyIndex associationAt: key ifAbsent: [ nil ])
>         ifNil: [ | newAssociation link |
>           newAssociation := self newAssociationKey: key value: value.
>           link := lruList addLast: newAssociation.
>           keyIndex at: key put: link ]
>         ifNotNil: [ :existingAssociation | | link |
>           link := existingAssociation value.
>           weight remove: link value value.
>           link value value: value.
>           self promote: link ].
>      self addWeight: value.
>      value ]
>
> (#newAssociationKey:value: is new, you can replace it with key->value)
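> 
> (For illustration, a minimal version of that hook might look like this -
> just an assumption about its shape, not necessarily the shipped code:)
> 
> newAssociationKey: key value: value
>   "Hook for subclasses that need a custom association class;
>    the default simply answers a plain Association."
>   ^ key -> value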
>

Thanks,

yes, that would fit. I'll give it a try once the code is released; I'd be
interested in your comments on how to handle the cache. The release should
happen in a few days, I hope.

Thierry


> Then you can do:
>
> testPopulate
>         | cache |
>         cache := self newCache.
>         cache factory: [ self fail ].
>         cache at: #foo put: -1.
>         cache at: #bar put: 200.
>         cache at: #foo put: 100.
>         self assert: cache size equals: 2.
>         self assert: cache totalWeight equals: 2.
>         self assert: cache hits isZero.
>         self assert: cache misses isZero.
>         self assert: (cache at: #foo) equals: 100.
>         self assert: (cache at: #bar) equals: 200.
>         cache validateInvariantWith: self
>
> For Norbert I did a little change that allows you to subclass either
> LRUCache or TTLCache and customise the Associations being used as well as
> the stale testing. He wanted to have entry-specific TTLs, not just one
> value for the whole cache. Maybe that could also help you.
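> 
> (For illustration, such a subclass might look roughly like this - the class
> name #TTLAssociation and its selectors are made up, not the actual change:)
> 
> Association subclass: #TTLAssociation
>   instanceVariableNames: 'expires'
>   classVariableNames: ''
>   category: 'MyCache'
> 
> TTLAssociation >> timeToLive: aDuration
>   "Remember when this particular entry should be considered stale."
>   expires := DateAndTime now + aDuration
> 
> TTLAssociation >> isStale
>   "Answer whether this entry has outlived its own TTL."
>   ^ DateAndTime now > expires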
>
> The other question, about HTTP client behaviour: I don't know how many
> requests you could run in parallel, tens, maybe hundreds. I have it
> somewhere using a custom limited resource pool, but when the pool is empty
> (all workers active), clients wait. In my use case, it was all synchronous.
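> 
> (A minimal sketch of that kind of throttling with a counting semaphore -
> the limit of 10 is arbitrary and the URL is a placeholder:)
> 
> | pool url |
> pool := Semaphore new.
> 10 timesRepeat: [ pool signal ]. "allow at most 10 concurrent requests"
> url := 'http://example.com'.
> pool wait. "blocks the caller while all 10 slots are taken"
> [ [ ZnClient new get: url ] ensure: [ pool signal ] ] fork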
>
> If you like the #at:put: I could move it into the image.
>
> Sven
>
> > On 03 Nov 2014, at 23:14, Thierry Goubier <thierry.goub...@gmail.com>
> wrote:
> >
> > Hi Sven,
> >
> > 2014-11-03 22:52 GMT+01:00 Sven Van Caekenberghe <s...@stfx.eu>:
> > Hi Thierry,
> >
> > > On 31 Oct 2014, at 14:10, Thierry Goubier <thierry.goub...@gmail.com>
> wrote:
> > >
> > > Hi all,
> > >
> > > I tried to use LRUCache in an application of mine with two-stage,
> over-the-internet data requests, and I couldn't find a way of using
> it.
> > >
> > > My use case was:
> > > - retrieve some data with high latency (over 10 seconds)
> > > - in an interactive application, where I need to be able to continue
> while the data is being loaded.
> > >
> > > So I designed it in two stages:
> > > - Have a cache
> > > - if the data isn't in the cache
> > > -- trigger a load with a fork
> > > -- put some temporary data in the cache (something which says loading)
> > > -- continue
> > > - When the data load is terminated
> > > -- replace the cached temporary data with the final version.
> > >
> > > The thing is that I couldn't do the last one with the LRUCache (there
> is no #at:put:, only an #at:ifAbsentPut:). Did I miss something?
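> >
> > > (What I had in mind, roughly sketched - assuming such an #at:put:;
> > > #loadDataFor: and the #loading placeholder are made-up names:)
> >
> > > value := cache at: key ifAbsentPut: [
> > >   [ cache at: key put: (self loadDataFor: key) ] fork.
> > >   #loading ]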
> > >
> > > Thierry
> >
> > Relevant question, and a rather special use case, I would say.
> >
> > The API of AbstractCache and its subclasses is minimal by design. It
> certainly does not contain everything that Dictionary has.
> >
> > A cache delivers values based on keys. The ifAbsent: block is similar to
> the factory: block, it implements getting the value. Either you hit the
> cache or you miss it and it gets loaded. Entries drop off either because
> there are too many in some metric (LRUCache) or because they expire
> (TTLCache).
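> >
> > For example, with the existing API (assuming the #maximumWeight:
> > configuration selector):
> >
> > | cache |
> > cache := LRUCache new.
> > cache maximumWeight: 16. "an arbitrary limit for this example"
> > cache at: #foo ifAbsentPut: [ 42 ]. "miss: the block computes the value"
> > cache at: #foo ifAbsentPut: [ 42 ]. "hit: the cached value is answered"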
> >
> > Now, #at:put: is like a replace, is it a hit or a miss ? Or neither ?
> >
> > It would be a replace, and neither a hit nor a miss.
> >
> >
> > You could implement it with a #removeKey: and an #at:ifAbsentPut: - modulo
> some concurrent access problems.
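> >
> > (Roughly, as a method on the cache itself - assuming #removeKey: tolerates
> > an absent key:)
> >
> > at: key put: value
> >   "Naive version; not safe under concurrent access."
> >   self removeKey: key.
> >   ^ self at: key ifAbsentPut: [ value ]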
> >
> > I do those operations under a mutex, otherwise the system becomes
> incoherent :)
> >
> >
> > Maybe we should add it (I guess it could be useful to fill a cold cache
> as well) - but I am not totally convinced.
> >
> > Now, your use case is a bit odd and dangerous. You spawn a
> thread/process for each miss, there could be a large number of them - do
> you want that ? Will the value always/still be needed when it arrives ?
> >
> > A non-needed value (a too-late value) will be discarded almost immediately,
> so this is not an issue. The number of threads may be high, yes, but it
> apparently stays within reasonable limits (how many Zn requests can one do
> simultaneously from Pharo? Is there any throttling?).
> >
> >
> > One way to implement your use case would be to encapsulate your special
> behaviour (the delayed loading) in the value itself (since you have the two
> cases/states exposed), no ?
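> >
> > (For instance, a made-up value object that carries both states itself:)
> >
> > Object subclass: #LazyValue
> >   instanceVariableNames: 'actual'
> >   classVariableNames: ''
> >   category: 'MyCache'
> >
> > LazyValue >> startLoading: aBlock
> >   "Evaluate aBlock in the background and keep its result when it arrives."
> >   [ actual := aBlock value ] fork
> >
> > LazyValue >> isLoaded
> >   ^ actual notNil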
> >
> > Yes. This is more or less what I implemented, but without merging them
> into a single object.
> >
> > This makes it possible to use the LRUCache as is, but it is no better than
> the previous approach regarding the number of threads in flight (that could
> be solved by a SharedQueue and a few loading processes feeding themselves
> from the shared queue).
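> >
> > (Sketched with four workers - the worker count, cache, and #loadDataFor:
> > are placeholders:)
> >
> > | queue |
> > queue := SharedQueue new.
> > 4 timesRepeat: [
> >   [ [ | key |
> >     key := queue next. "blocks until a key to load is queued"
> >     cache at: key put: (self loadDataFor: key) ] repeat ] fork ].
> > "then, at each cache miss, enqueue instead of forking:"
> > queue nextPut: aKey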
> >
> > Now I know how to use the LRUCache, but I think my use case is in a way
> too simple for its features (no real TTL, and LRU only matters once the
> cache size exceeds a certain threshold, and that threshold isn't static).
> >
> > Thanks,
> >
> > Thierry
> >
> > Sven
>
>
>
