I have been working on an in-memory cache implementation for the last couple
of months, and I want to share with memcached users what I have built so far.

- It supports the memcached protocol (TCP and ASCII).
- Memory management is not slab-based; it is self-tuning. Just tell it how
  much memory to use.
- LRU is not slab-based; it is global. The least-recently-used entry is always
  the one deleted, irrespective of its size.
- It is scriptable using Lua. Instead of being restricted to sets, lists and
  the other predefined data structures exposed via Redis, new data structures
  can be created and used. Currently I have implemented set, map, quota and
  sliding-window counter in Lua. New objects can be implemented without
  touching the C source code (see the sketch after this list).
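 To give a flavour of what such a script looks like, here is a minimal sketch
 of a set-like object in Lua. The calling convention (how cacheismo hands the
 object key and arguments to these functions) and the storage table are
 assumptions on my part for illustration only; the real interface is the one
 used in scripts/set.lua in the repository.

    -- Sketch only: the dispatch/storage convention below is assumed,
    -- not the actual cacheismo API (see scripts/set.lua for that).
    local sets = {}          -- illustrative in-process storage for set objects

    local set = {}

    function set.new(key)            -- get set:new:myKey
      sets[key] = {}
      return "OK"
    end

    function set.put(key, value)     -- get set:put:myKey:a
      local s = sets[key]
      if not s then return "NOT_FOUND" end
      s[value] = true
      return "OK"
    end

    function set.count(key)          -- get set:count:myKey
      local s = sets[key]
      if not s then return "NOT_FOUND" end
      local n = 0
      for _ in pairs(s) do n = n + 1 end
      return tostring(n)
    end

    return set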

 The interface for accessing scriptable objects is implemented via
memcached get requests.

  For example:

    get *set:new*:myKey           - creates a new set object referred to via myKey

  *set* refers to the name of the file in the scripts directory and *new* is
  one of the functions declared in set.lua.

    get *set:put*:myKey:a         - puts key a into the set myKey
    get *set:count*:myKey         - returns the number of elements in the set
    get *set:union*:myKey1:myKey2 - returns the union of sets myKey1 and myKey2

  See scripts/set.lua for other functions.
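 Since these are ordinary get requests, any memcached client that speaks the
 ASCII protocol can drive them. As a quick illustration, here is a small
 script using luasocket (my choice of client library, not something cacheismo
 requires) that sends the commands above over a raw TCP connection; the host
 and port are assumptions:

    -- requires luasocket; host and port below are assumptions
    local socket = require("socket")
    local conn = assert(socket.connect("127.0.0.1", 11211))

    -- send one ascii "get" and collect the reply up to the END line
    local function virtual_get(key)
      conn:send("get " .. key .. "\r\n")
      local lines = {}
      while true do
        local line = assert(conn:receive("*l"))
        if line == "END" then break end
        lines[#lines + 1] = line
      end
      return table.concat(lines, "\n")
    end

    print(virtual_get("set:new:myKey"))         -- create a new set object
    print(virtual_get("set:put:myKey:a"))       -- put a into the set
    print(virtual_get("set:count:myKey"))       -- number of elements

    conn:close()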

 Source code is available at https://github.com/iamrohit/cacheismo

 It is single threaded, so consider using multiple instances for better
 performance.
 The virtual key functionality (accessing Lua objects) doesn't work when
 multiple servers are used, because hash(virtualKey) is usually not equal to
 hash(key): the client routes the request by hashing the whole virtual key,
 which generally points at a different server than the one holding the object.
 Currently I am working on cluster support by including client capabilities
 in the server code.
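 To make that limitation concrete, here is a small illustration. The hash
 function and server count are arbitrary assumptions (real memcached clients
 use their own hashing), but the effect is the same:

    -- djb2-style hash, chosen arbitrarily for illustration
    local function hash(s)
      local h = 5381
      for i = 1, #s do
        h = (h * 33 + s:byte(i)) % 4294967296
      end
      return h
    end

    local servers = 3
    print(hash("myKey") % servers)              -- server holding the set object
    print(hash("set:put:myKey:a") % servers)    -- server the command is routed to
    -- the two indices generally differ, so the request misses the object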

 I ran some tests on my laptop comparing the hit rate of cacheismo vs.
 memcached. This post has a graph which shows the difference:
 http://chakpak.blogspot.com/2011/09/introducing-cacheismo.html


thanks for your time and attention!
rohitk
