I've had a look at the page now, and here are my thoughts:

1) One thing that isn't really made clear is the interaction between multiple sorcery wizards. In a real-world example, you'd want to hit the memory_cache first and then fall back to the database if you couldn't retrieve the object from the cache. Does the order in which items are configured in sorcery.conf dictate the order in which wizards are consulted? Right now, I don't think that's the case. Or is it that sorcery will automatically treat caching stores as higher priority than others?
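For concreteness, here's the kind of thing I mean. The option name and exact syntax below are assumptions on my part, not something taken from the page:

```ini
; Purely illustrative sorcery.conf fragment. Does listing memory_cache
; first mean it is consulted first, falling through to realtime on a
; cache miss? Or is the order here irrelevant?
[res_pjsip]
endpoint = memory_cache,object_lifetime_maximum=60
endpoint = realtime,ps_endpoints
```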

2) I agree with Scott's assessment that the global expiration isn't the best idea. The object lifetime option makes much more sense.

3) The object expiration interval is something I feel most people wouldn't really want to set; they'd just hope that something sane was done by default. I don't think defaulting to 60 seconds is that great a plan either, especially if objects in the cache are set to, say, a 10 second lifetime. Defaulting to some fraction of the object lifetime would work and would probably satisfy most people. The value could potentially be updated during a rebalancing operation. Another option is to use some sort of timer heap so that, in the case of long-lived cached objects, you don't run unneeded checks for object expiration.
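To make the "fraction of the lifetime" idea concrete, here's a rough sketch. The function name, the divisor, and the clamping bounds are all mine, just to show the shape of it:

```c
/* Hypothetical helper: derive a default expiration check interval from
 * the configured object lifetime, instead of a flat 60 seconds. Check
 * roughly four times per lifetime, but never more often than once per
 * second and never less often than once a minute. All values here are
 * illustrative, not anything from the wiki page. */
static unsigned int default_expire_interval(unsigned int object_lifetime)
{
	unsigned int interval = object_lifetime / 4;

	if (interval < 1) {
		interval = 1;
	} else if (interval > 60) {
		interval = 60;
	}

	return interval;
}
```

With this, a 10 second lifetime gets checked every 2-3 seconds rather than sitting stale for up to a minute, while a 1 hour lifetime doesn't get hammered with checks.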

4) In addition to the CLI operations, I think equivalent AMI operations would be useful. I also considered the idea of being able to change cache configuration for an object type via AMI/CLI. I'm not sure how useful that would be, and it likely would just create extra contention points where they're really just not needed, so meh.

5) It may be useful to have C-level API calls for invalidating objects/caches. This way, we could implement behavior such as a reload of res_pjsip.so resulting in invalidation of all cached objects owned by res_pjsip.so.
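As a toy model of what I'm picturing (the real thing would use ao2 containers, and none of these names exist today), an invalidate-by-type call would let a module drop everything it owns in one shot:

```c
#include <stdlib.h>
#include <string.h>

/* Toy model of per-type cache invalidation. The structure and function
 * names are hypothetical; the point is the shape of the C-level call. */
struct cached_object {
	char type[32];
	char id[32];
	struct cached_object *next;
};

static struct cached_object *cache_head;

static void cache_put(const char *type, const char *id)
{
	struct cached_object *obj = calloc(1, sizeof(*obj));

	strncpy(obj->type, type, sizeof(obj->type) - 1);
	strncpy(obj->id, id, sizeof(obj->id) - 1);
	obj->next = cache_head;
	cache_head = obj;
}

/* Hypothetical equivalent of an "invalidate all objects of this type"
 * API: e.g. a reload of res_pjsip.so could call this for "endpoint",
 * "aor", and so on. Returns the number of objects removed. */
static int cache_invalidate_type(const char *type)
{
	struct cached_object **cur = &cache_head;
	int removed = 0;

	while (*cur) {
		if (!strcmp((*cur)->type, type)) {
			struct cached_object *dead = *cur;

			*cur = dead->next;
			free(dead);
			removed++;
		} else {
			cur = &(*cur)->next;
		}
	}

	return removed;
}
```

A per-object variant taking a type and an id would cover the finer-grained case.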

6) The tests are good for exercising basic operation of the cache. However, not much is being done so far with regard to off-nominal code paths (like a user creating a cache with a negative maximum number of objects). First, something needs to be decided regarding how such an error is treated: do we still create the cache but with a default value configured instead, or do we fail to create the cache entirely? Once that's decided, some off-nominal tests with bad configuration should be added.
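If we go the strict route, wizard instantiation would simply fail on bad values, which is trivial to test. Something like this (names illustrative):

```c
/* Sketch of the stricter option: reject bad configuration outright
 * rather than silently substituting a default. A -1 return would cause
 * cache creation to fail. Function and parameter names are mine. */
static int cache_validate_config(long maximum_objects, long object_lifetime)
{
	if (maximum_objects < 0 || object_lifetime < 0) {
		return -1;
	}

	return 0;
}
```

The default-substitution route is friendlier to sloppy configs but hides mistakes; either way, the off-nominal tests just assert whichever behavior we pick.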

7) There doesn't seem to be any way for someone to state that they never want cached objects to automatically become invalidated (i.e. infinite object lifetime). For some installations where configuration is performed through web browsers, for instance, it may be that at the time the config is changed, the system would issue an AMI/CLI command to Asterisk to invalidate the old object. They would essentially always be in charge of telling Asterisk when to invalidate cached objects. I know this is treading close to the old "sip prune realtime" territory, but I feel like this isn't quite as bad since you would have control over individual objects and object types.
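A sentinel value would be one way to express this. Again, the option name and the choice of 0 as the sentinel are purely illustrative:

```ini
; Hypothetical: 0 meaning "never expire", leaving invalidation entirely
; to explicit AMI/CLI commands issued by the provisioning system.
[res_pjsip]
endpoint = memory_cache,object_lifetime_maximum=0
```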

8) My final comment is in regard to the "expiration" of cached items. Based on wording in the tests, it sounds like when the expiration interval arrives, the item is removed from the cache entirely. I wonder if there is a benefit to instead refreshing the item when the expiration interval arrives. On the one hand, this means that a user of sorcery will always hit the cache and never have to actually fall back to a DB lookup. On the other hand, on an idle system this can result in many pointless DB lookups. So there are tradeoffs, but it's still something to consider. Also, now that I look back at Scott's response, I think he pretty much brought up this same idea.
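The tradeoff can be modeled with a couple of counters standing in for cost. Everything here is a toy, not proposed code; it just makes the two policies explicit:

```c
/* Toy model of the two expiration policies. On expiry we either drop
 * the item (the next lookup pays the DB hit) or proactively refresh it
 * (the DB hit happens now, even if nobody ever asks for it again). */
struct cache_stats {
	int db_lookups;
	int cache_hits;
};

enum expire_policy {
	EXPIRE_REMOVE,
	EXPIRE_REFRESH,
};

struct item {
	int cached;	/* is the object currently in the cache? */
};

static void on_expire(struct item *it, enum expire_policy policy,
	struct cache_stats *st)
{
	if (policy == EXPIRE_REFRESH) {
		st->db_lookups++;	/* re-fetch from the DB immediately */
		it->cached = 1;
	} else {
		it->cached = 0;		/* drop it; cost deferred to next lookup */
	}
}

static void lookup(struct item *it, struct cache_stats *st)
{
	if (it->cached) {
		st->cache_hits++;
	} else {
		st->db_lookups++;	/* cache miss falls back to the DB */
		it->cached = 1;
	}
}
```

Both policies cost one DB lookup per expiry-then-lookup cycle; refresh just moves it earlier and pays it even when no lookup ever comes, which is exactly the idle-system concern.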

On 04/28/2015 11:28 AM, Joshua Colp wrote:
Kia ora,

I've created a wiki page[1] which details the beginnings of a basic memory based caching wizard for sorcery. Right now while caching is possible using the existing memory wizard it's not possible to define object lifetimes, so once cached it's always pulled from the cache. This wiki page uses the memory wizard as a base but defines options which can tweak the behavior. Going forward this could serve as a basis for other wizards to be created for caching purposes.

Some things to consider:
1. How much control and flexibility should we allow?
2. Are there additional mechanisms that should be exposed to allow explicit object expiration?
3. Are the defaults sane?
4. Is there additional testing that should be done?
5. Does anything need additional explanation?

Cheers,

[1] https://wiki.asterisk.org/wiki/display/~jcolp/Sorcery+Caching



--

asterisk-dev mailing list