Hi Dustin,

* Answers Inline *

On Wed, Sep 28, 2011 at 11:53 AM, Dustin <[email protected]> wrote:

>
>
> Tiny bit of clarification/questions below.
>
> On Tuesday, September 27, 2011 3:25:18 PM UTC-7, Rohit Karlupia wrote:
>
> > - Memory management is not slab based. It is self tuning. Just tell how
> much memory to use.
>
>
> Looks like you've got a few memory allocator options in there. The issue in
> most of them is what happens when you run the thing with a heavy production
> load for several days. The slab allocator won't fragment. I've heard similar
> things about jemalloc in really large production installs, but in a "normal"
> application scenario. It's much tougher in memcached because of the
> following:
>
    That is true. Here is why it doesn't impact cacheismo as much:
      -  Valid buffer sizes are fixed (16 + n*16, up to 4096). It is not
         possible to allocate more than 4KB of contiguous memory.
      -  Objects are stored as a "list of buffers" instead of a contiguous
         block of memory.
      -  The allocator provides a relaxedMalloc function which can return
         less memory than what you asked for. The item storage code depends
         not on malloc but on relaxedMalloc.
    Thus if we need to store a 4KB item, we can use 3KB+1KB or 2K+2K or
    2048+1024+512+512, etc. Currently the code works with an array of
    buffers, but I guess it should be easy to change that to a linked list.
    This also limits the max size of an item stored in the cache.

    So yes, this allocator also fragments, but it doesn't matter much. We
    can still use the fragmented memory, but at a higher memory-management
    cost.

> > - LRU is not slab based. It is global. Always the LRU entry is deleted,
> irrespective of its size.
>
>
> If your LRU isn't slab based, how do you free the right amount of memory
> for the incoming data?
>
> For example, if you load up tons and tons of 13 byte objects and then
> suddenly you need to store a 1MB object, an LRU that *can* free up the space
> for you can't do it within any kind of time bounds. Freeing up ~81k LRU
> items to get to 1MB is not terribly likely to get you a 1MB contiguous
> memory block. The slab will always do so.
>

    As I explained earlier, cacheismo doesn't work with contiguous
objects (at most 4KB, if available). So what matters is that we have 1MB
free, not how fragmented it is. What it does ensure is that whatever items
are freed, whether 13 bytes, 1300 bytes or 13KB, they are the least useful
bytes to cache.



>
> Also, from your README:
>
>     "For example when using slab sizes of 64/128/256/512/1024 bytes object
> with size 513 is stored in 1024 bytes slab, wasting almost half the memory."
>
> You'd have to intentionally mistune it to get it to use those sizes.  By
> default, 513 bytes is stored in 600 bytes (which leaves 87 bytes for your
> key, flags, and expiration).  The slabber can be tweaked to use those sizes,
> but you'd only do so if it actually benefitted your application.
>
>
  I thought those were the defaults and that most people would use it that
way.


>
> > - It is scriptable using LUA. What this means is that instead of being
> restricted to set, lists
> > and other predefined data structures exposed via redis, new data
> structures can be created
> > and used. Currently I have implemented set, map, quota and sliding window
> counter in lua.
> > New objects can be implemented without touching the c source code.
>
>   This is pretty awesome.  Have you considered just building it as an
> engine?  Then you'd also get the binary protocol, IPv6, UDP, domain sockets,
> threading, etc... and any future bug fixes for free.
>
>
   Perhaps at some point in the future, when the code is mature enough.


>
> > The interface for accessing scriptable objects is implemented via
> memcached get requests.
> > For example: get set:new:mykey - would create a new set object referred
> via myKey
>
>   Those are valid script keys.  Why not   "script set:new:mykey" ?
>
Most memcached clients would need to be changed to support a new keyword
"script". Hence the decision to overload the get command.


> > I ran some tests on my laptop for comparing the HIT rate of cacheismo vs
> memcached.
> > This post has a graph which shows the difference.
> > http://chakpak.blogspot.com/2011/09/introducing-cacheismo.html
>
>   Do you have more info on how you built this?  I'd be interested to see
> the actual rates you were getting on which version of memcached with what
> configuration -- specifically, what you were doing and what you saw when
> cacheismo was 80% faster.
>

Cacheismo is not faster than memcached; memcached was about 20% faster.
Cacheismo had a better hit rate.

These are the raw numbers....

-- cacheismo 1024mb, 64mb io

threads=16,repeats=30000,valueLength=128,tps=27654,miss=0,fail=0,hit=360048,all=480000,hitRate=1.00
avg latency 0
threads=16,repeats=30000,valueLength=256,tps=42517,miss=0,fail=0,hit=360048,all=480000,hitRate=1.00
avg latency 0
threads=16,repeats=30000,valueLength=512,tps=42661,miss=0,fail=0,hit=360048,all=480000,hitRate=1.00
avg latency 0
threads=16,repeats=30000,valueLength=1024,tps=37519,miss=0,fail=0,hit=360048,all=480000,hitRate=1.00
avg latency 0
threads=16,repeats=30000,valueLength=2048,tps=28842,miss=0,fail=0,hit=360048,all=480000,hitRate=1.00
avg latency 0
threads=32,repeats=30000,valueLength=128,tps=52106,miss=0,fail=0,hit=720096,all=960000,hitRate=1.00
avg latency 0
threads=32,repeats=30000,valueLength=256,tps=49816,miss=0,fail=0,hit=720096,all=960000,hitRate=1.00
avg latency 0
threads=32,repeats=30000,valueLength=512,tps=45954,miss=0,fail=0,hit=720096,all=960000,hitRate=1.00
avg latency 0
threads=32,repeats=30000,valueLength=1024,tps=40158,miss=0,fail=0,hit=720096,all=960000,hitRate=1.00
avg latency 0
threads=32,repeats=30000,valueLength=2048,tps=29346,miss=0,fail=0,hit=720096,all=960000,hitRate=1.00
avg latency 1
threads=64,repeats=30000,valueLength=128,tps=65993,miss=0,fail=0,hit=1440192,all=1920000,hitRate=1.00
avg latency 0
threads=64,repeats=30000,valueLength=256,tps=61365,miss=0,fail=0,hit=1440192,all=1920000,hitRate=1.00
avg latency 0
threads=64,repeats=30000,valueLength=512,tps=55185,miss=0,fail=0,hit=1440192,all=1920000,hitRate=1.00
avg latency 1
threads=64,repeats=30000,valueLength=1024,tps=42129,miss=0,fail=0,hit=1440192,all=1920000,hitRate=1.00
avg latency 1
threads=64,repeats=30000,valueLength=2048,tps=28435,miss=32424,fail=0,hit=1407768,all=1920000,hitRate=0.98
avg latency 2
threads=128,repeats=30000,valueLength=128,tps=68707,miss=0,fail=0,hit=2880384,all=3840000,hitRate=1.00
avg latency 1
threads=128,repeats=30000,valueLength=256,tps=64088,miss=0,fail=0,hit=2880384,all=3840000,hitRate=1.00
avg latency 1
threads=128,repeats=30000,valueLength=512,tps=50692,miss=0,fail=0,hit=2880384,all=3840000,hitRate=1.00
avg latency 2
threads=128,repeats=30000,valueLength=1024,tps=38673,miss=292329,fail=0,hit=2588055,all=3840000,hitRate=0.90
avg latency 3
threads=128,repeats=30000,valueLength=2048,tps=29454,miss=1549680,fail=0,hit=1330704,all=3840000,hitRate=0.46
avg latency 3


-- memcached  1024mb

Xmemcached startup
threads=16,repeats=30000,valueLength=128,tps=27513,miss=0,fail=0,hit=360048,all=480000,hitRate=1.00
avg latency 0
threads=16,repeats=30000,valueLength=256,tps=52367,miss=0,fail=0,hit=360048,all=480000,hitRate=1.00
avg latency 0
threads=16,repeats=30000,valueLength=512,tps=47330,miss=0,fail=0,hit=360048,all=480000,hitRate=1.00
avg latency 0
threads=16,repeats=30000,valueLength=1024,tps=39891,miss=0,fail=0,hit=360048,all=480000,hitRate=1.00
avg latency 0
threads=16,repeats=30000,valueLength=2048,tps=30359,miss=0,fail=0,hit=360048,all=480000,hitRate=1.00
avg latency 0
threads=32,repeats=30000,valueLength=128,tps=63211,miss=0,fail=0,hit=720096,all=960000,hitRate=1.00
avg latency 0
threads=32,repeats=30000,valueLength=256,tps=57559,miss=0,fail=0,hit=720096,all=960000,hitRate=1.00
avg latency 0
threads=32,repeats=30000,valueLength=512,tps=50923,miss=0,fail=0,hit=720096,all=960000,hitRate=1.00
avg latency 0
threads=32,repeats=30000,valueLength=1024,tps=40542,miss=0,fail=0,hit=720096,all=960000,hitRate=1.00
avg latency 0
threads=32,repeats=30000,valueLength=2048,tps=28973,miss=81096,fail=0,hit=639000,all=960000,hitRate=0.89
avg latency 1
threads=64,repeats=30000,valueLength=128,tps=72721,miss=719307,fail=0,hit=720885,all=1920000,hitRate=0.50
avg latency 0
threads=64,repeats=30000,valueLength=256,tps=64807,miss=719472,fail=0,hit=720720,all=1920000,hitRate=0.50
avg latency 0
threads=64,repeats=30000,valueLength=512,tps=55936,miss=716934,fail=0,hit=723258,all=1920000,hitRate=0.50
avg latency 1
threads=64,repeats=30000,valueLength=1024,tps=43174,miss=718032,fail=0,hit=722160,all=1920000,hitRate=0.50
avg latency 1
threads=64,repeats=30000,valueLength=2048,tps=29908,miss=801576,fail=0,hit=638616,all=1920000,hitRate=0.44
avg latency 1
threads=128,repeats=30000,valueLength=128,tps=78359,miss=2159499,fail=0,hit=720885,all=3840000,hitRate=0.25
avg latency 1
threads=128,repeats=30000,valueLength=256,tps=72286,miss=2159664,fail=0,hit=720720,all=3840000,hitRate=0.25
avg latency 1
threads=128,repeats=30000,valueLength=512,tps=60291,miss=2157126,fail=0,hit=723258,all=3840000,hitRate=0.25
avg latency 1
threads=128,repeats=30000,valueLength=1024,tps=45924,miss=2158224,fail=0,hit=722160,all=3840000,hitRate=0.25
avg latency 2
threads=128,repeats=30000,valueLength=2048,tps=31839,miss=2241768,fail=0,hit=638616,all=3840000,hitRate=0.22
avg latency 3


The memcached version was *memcached 1.4.6_4_g2c56090*.
thanks!
rohitk
