Re: genhash
I'm not a lawyer, but I would assume that since the code has been released into memcached under that license, you should still be able to use it... I nuked it from the memcached core since we don't use it anymore in the core. Anyway, Dustin Sallings wrote it, so you could just send him an email about using it.

Cheers,
Trond

On Tue, Sep 27, 2011 at 10:07 PM, John David Duncan john.david.dun...@gmail.com wrote:

> (Resending from the gmail account so Google will forward to the list)
>
> Hi Trond,
>
> commit e70f5ace86dc71a2683b884182fa46d57965a25a
> Author: Trond Norbye trond.nor...@gmail.com
> Date: Fri Sep 23 09:35:44 2011 +0200
>
>     Removed topkeys implementation
>
>     Measurements showed memcached only able to handle about 50% of the
>     operations with top keys on vs. when it was off.
>
> OK, that's interesting. I see the motivation for that. But I had assumed genhash.h was part of the public (utilities) API, and I was planning to use it. Suddenly it's gone!

--
Trond Norbye
genhash
(Resending from the gmail account so Google will forward to the list)

Hi Trond,

commit e70f5ace86dc71a2683b884182fa46d57965a25a
Author: Trond Norbye trond.nor...@gmail.com
Date: Fri Sep 23 09:35:44 2011 +0200

    Removed topkeys implementation

    Measurements showed memcached only able to handle about 50% of the
    operations with top keys on vs. when it was off.

OK, that's interesting. I see the motivation for that. But I had assumed genhash.h was part of the public (utilities) API, and I was planning to use it. Suddenly it's gone!
Re: genhash
Well, I work at a company with lots of lawyers. The easiest and fastest thing for me to do will be to write a hash table.

On Sep 27, 2011, at 1:13 PM, Trond Norbye wrote:

> I'm not a lawyer, but I would assume that since the code has been released into memcached under that license, you should still be able to use it... I nuked it from the memcached core since we don't use it anymore in the core. Anyway, Dustin Sallings wrote it, so you could just send him an email about using it.
>
> Cheers,
> Trond
>
> On Tue, Sep 27, 2011 at 10:07 PM, John David Duncan john.david.dun...@gmail.com wrote:
>
>> (Resending from the gmail account so Google will forward to the list)
>>
>> Hi Trond,
>>
>> commit e70f5ace86dc71a2683b884182fa46d57965a25a
>> Author: Trond Norbye trond.nor...@gmail.com
>> Date: Fri Sep 23 09:35:44 2011 +0200
>>
>>     Removed topkeys implementation
>>
>>     Measurements showed memcached only able to handle about 50% of the
>>     operations with top keys on vs. when it was off.
>>
>> OK, that's interesting. I see the motivation for that. But I had assumed genhash.h was part of the public (utilities) API, and I was planning to use it. Suddenly it's gone!
>
> --
> Trond Norbye
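Since the thread ends with the decision to write a hash table from scratch, here is roughly what a minimal one looks like in C. This is a generic chained-hashing sketch with illustrative names, not genhash's actual API (fixed bucket count, string keys and values, no resizing):

```c
/* Minimal string->string hash table: fixed-size bucket array with
 * separate chaining.  Illustrative only -- no resize, no delete. */
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 64

struct entry {
    char *key;
    char *value;
    struct entry *next;        /* chain within a bucket */
};

struct table {
    struct entry *buckets[NBUCKETS];
};

/* Portable strdup replacement. */
static char *dup_str(const char *s) {
    char *p = malloc(strlen(s) + 1);
    if (p) strcpy(p, s);
    return p;
}

/* djb2-style string hash. */
static unsigned long hash_str(const char *s) {
    unsigned long h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h;
}

struct table *table_new(void) {
    return calloc(1, sizeof(struct table));
}

void table_put(struct table *t, const char *key, const char *value) {
    unsigned long b = hash_str(key) % NBUCKETS;
    struct entry *e;
    for (e = t->buckets[b]; e; e = e->next) {
        if (strcmp(e->key, key) == 0) {  /* overwrite existing key */
            free(e->value);
            e->value = dup_str(value);
            return;
        }
    }
    e = malloc(sizeof(*e));              /* new entry: push onto chain */
    e->key = dup_str(key);
    e->value = dup_str(value);
    e->next = t->buckets[b];
    t->buckets[b] = e;
}

const char *table_get(struct table *t, const char *key) {
    unsigned long b = hash_str(key) % NBUCKETS;
    for (struct entry *e = t->buckets[b]; e; e = e->next)
        if (strcmp(e->key, key) == 0)
            return e->value;
    return NULL;                         /* key not present */
}
```

A real replacement would of course also need delete, iteration, and caller-supplied hash/compare functions (which is what genhash provided), but the core is about this small.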
Value size distribution change and slab allocation issue.
This is an issue described in the memcached documentation:

> ...Unfortunately, slabs allocated early in memcached's process life might over time be effectively in the wrong slabclass. Imagine, for example, that you store session data in memcached, and memcached has been up and running for months. Finally, you deploy a new version of your application code, which stores more interesting information in your sessions -- so your session sizes have grown. Suddenly, memcached starts thrashing with huge amounts of evictions. What might have happened is that since the session size grew, the slab allocator needs to use a different slabclass. Most of the slabs are now sitting idle and unused in the old slabclass. The usual solution is to just restart memcached, unless you've turned on ALLOW_SLABS_REASSIGN...

We were having that same issue on many of our servers, and since ALLOW_SLABS_REASSIGN is no longer supported, the only thing we could do was restart the servers, which led to a storm of cache misses and other operational issues for us. That's why we developed an experimental command named drop_slab which, when run, deletes all values in a slab class and deallocates that memory, returning it to the OS.

My questions are:

a) Has any of you run into this issue, and if so, how did you handle it?
b) Do you think this command is something you would use? If so, I can submit a patch. I'm planning to port it to version 1.6 (currently it is for version 1.4).

Thanks.
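For context on why a modest growth in value size can strand so much memory: memcached sizes its slab classes geometrically (the -f chunk growth factor, 1.25 by default), and a slab page, once assigned to a class, stays there. A rough sketch of the class mapping, using an illustrative base chunk size rather than memcached's real internal size table:

```c
/* Illustrative mapping from item size to slab class.  The base chunk
 * size and the resulting class numbers are made up for illustration;
 * only the geometric growth (default -f factor of 1.25) mirrors
 * memcached's behavior. */
#include <stddef.h>

int class_for_size(size_t sz) {
    double chunk = 96.0;      /* illustrative smallest chunk size */
    int cls = 1;
    while (chunk < (double)sz) {
        chunk *= 1.25;        /* default -f growth factor */
        cls++;
    }
    return cls;
}
```

With 1.25x steps, a session blob growing from ~100 to ~200 bytes skips several classes, so every page previously dedicated to the old class sits idle while the new class thrashes -- which is exactly the restart-or-reassign situation described above.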
Re: Value size distribution change and slab allocation issue.
> This is an issue described in the memcached documentation:
>
> ...Unfortunately, slabs allocated early in memcached's process life might over time be effectively in the wrong slabclass. Imagine, for example, that you store session data in memcached, and memcached has been up and running for months. Finally, you deploy a new version of your application code, which stores more interesting information in your sessions -- so your session sizes have grown. Suddenly, memcached starts thrashing with huge amounts of evictions. What might have happened is that since the session size grew, the slab allocator needs to use a different slabclass. Most of the slabs are now sitting idle and unused in the old slabclass. The usual solution is to just restart memcached, unless you've turned on ALLOW_SLABS_REASSIGN...
>
> We were having that same issue on many of our servers, and since ALLOW_SLABS_REASSIGN is no longer supported, the only thing we could do was restart the servers, which led to a storm of cache misses and other operational issues for us. That's why we developed an experimental command named drop_slab which, when run, deletes all values in a slab class and deallocates that memory, returning it to the OS.
>
> My questions are:
>
> a) Has any of you run into this issue, and if so, how did you handle it?
> b) Do you think this command is something you would use? If so, I can submit a patch. I'm planning to port it to version 1.6 (currently it is for version 1.4).

Yes, we're aware of this. Feel free to post your patch somewhere and talk about it. However, what we end up using for mainline is taking more time to develop, as it's difficult to do this automatically, correctly, for most users.

It's coming up pretty soon in my TODO list though; we've been catching up on the backlog with 1.4.

-Dormando
Re: Value size distribution change and slab allocation issue.
> It's coming up pretty soon in my TODO list though; we've been catching up on the backlog with 1.4.

Are you planning to implement this for version 1.6?

On Tue, Sep 27, 2011 at 6:54 PM, dormando dorma...@rydia.net wrote:

>> This is an issue described in the memcached documentation:
>>
>> ...Unfortunately, slabs allocated early in memcached's process life might over time be effectively in the wrong slabclass. Imagine, for example, that you store session data in memcached, and memcached has been up and running for months. Finally, you deploy a new version of your application code, which stores more interesting information in your sessions -- so your session sizes have grown. Suddenly, memcached starts thrashing with huge amounts of evictions. What might have happened is that since the session size grew, the slab allocator needs to use a different slabclass. Most of the slabs are now sitting idle and unused in the old slabclass. The usual solution is to just restart memcached, unless you've turned on ALLOW_SLABS_REASSIGN...
>>
>> We were having that same issue on many of our servers, and since ALLOW_SLABS_REASSIGN is no longer supported, the only thing we could do was restart the servers, which led to a storm of cache misses and other operational issues for us. That's why we developed an experimental command named drop_slab which, when run, deletes all values in a slab class and deallocates that memory, returning it to the OS.
>>
>> My questions are:
>>
>> a) Has any of you run into this issue, and if so, how did you handle it?
>> b) Do you think this command is something you would use? If so, I can submit a patch. I'm planning to port it to version 1.6 (currently it is for version 1.4).
>
> Yes, we're aware of this. Feel free to post your patch somewhere and talk about it. However, what we end up using for mainline is taking more time to develop, as it's difficult to do this automatically, correctly, for most users.
>
> It's coming up pretty soon in my TODO list though; we've been catching up on the backlog with 1.4.
>
> -Dormando
Re: Value size distribution change and slab allocation issue.
On Tue, 27 Sep 2011, Gonzalo de Pedro wrote:

>> It's coming up pretty soon in my TODO list though; we've been catching up on the backlog with 1.4.
>
> Are you planning to implement this for version 1.6?

I can't/won't predict what version number that change will be in.
Cacheismo
I have been working on an in-memory cache implementation for the last couple of months. I want to share with memcached users what I have built so far.

- It supports the memcached protocol (tcp and ascii).
- Memory management is not slab based. It is self tuning. Just tell it how much memory to use.
- LRU is not slab based. It is global. The LRU entry is always the one deleted, irrespective of its size.
- It is scriptable using Lua. This means that instead of being restricted to sets, lists, and other predefined data structures exposed via redis, new data structures can be created and used. Currently I have implemented set, map, quota, and sliding window counter in Lua. New objects can be implemented without touching the C source code.

The interface for accessing scriptable objects is implemented via memcached get requests. For example:

get *set:new*:myKey - creates a new set object referred to via myKey. *set* refers to the name of the file in the scripts directory and *new* is one of the functions declared in set.lua.
get *set:put*:myKey:a - puts key a in the set myKey
get *set:count*:myKey - returns the number of elements in the set
get *set:union*:myKey1:myKey2 - returns the union of sets myKey1 and myKey2

See scripts/set.lua for other functions.

Source code is available at https://github.com/iamrohit/cacheismo

It is single threaded, so consider using multiple instances for better performance. The virtual key functionality (accessing Lua objects) doesn't work when multiple servers are used, because hash(virtualKey) is usually not equal to hash(key). Currently I am working on cluster support by including client capabilities in the server code.

I ran some tests on my laptop comparing the hit rate of cacheismo vs memcached. This post has a graph which shows the difference: http://chakpak.blogspot.com/2011/09/introducing-cacheismo.html

Thanks for your time and attention!
rohitk
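The global-LRU design mentioned above (one recency order across all items, evicting the oldest regardless of size, instead of memcached's per-slab-class LRU lists) boils down to a single intrusive doubly-linked list. A generic illustration in C, not cacheismo's actual code:

```c
/* Global LRU as one intrusive doubly-linked list across all cached
 * items.  Every hit moves the item to the head; eviction always takes
 * the tail, whatever the item's size.  Illustrative sketch only. */
#include <stddef.h>

struct item {
    char *key;
    size_t size;                 /* irrelevant to eviction order */
    struct item *prev, *next;    /* global LRU links */
};

struct lru {
    struct item *head, *tail;    /* head = most recently used */
};

static void lru_unlink(struct lru *l, struct item *it) {
    if (it->prev) it->prev->next = it->next; else l->head = it->next;
    if (it->next) it->next->prev = it->prev; else l->tail = it->prev;
    it->prev = it->next = NULL;
}

static void lru_push_head(struct lru *l, struct item *it) {
    it->prev = NULL;
    it->next = l->head;
    if (l->head) l->head->prev = it; else l->tail = it;
    l->head = it;
}

/* Call when an item first enters the cache. */
void lru_insert(struct lru *l, struct item *it) {
    lru_push_head(l, it);
}

/* Call on every cache hit: move an already-linked item to the head. */
void lru_touch(struct lru *l, struct item *it) {
    lru_unlink(l, it);
    lru_push_head(l, it);
}

/* Evict the least-recently-used item -- whatever its size. */
struct item *lru_evict(struct lru *l) {
    struct item *victim = l->tail;
    if (victim)
        lru_unlink(l, victim);
    return victim;
}
```

The trade-off versus per-class LRU: eviction decisions follow pure recency, so a large cold item is evicted before many small warmer ones, which is what drives the hit-rate difference the post's graph compares.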
Re: Issue 106 in memcached: binary protocol parsing can cause memcached server lockup
Updates:
    Status: Fixed

Comment #12 on issue 106 by dorma...@rydia.net: binary protocol parsing can cause memcached server lockup
http://code.google.com/p/memcached/issues/detail?id=106

I think this was merged up. Closing.
Re: Issue 193 in memcached: maxconns should not rely upon EMFILE
Updates:
    Status: Fixed
    Owner: dorma...@rydia.net

Comment #1 on issue 193 by dorma...@rydia.net: maxconns should not rely upon EMFILE
http://code.google.com/p/memcached/issues/detail?id=193

Fixed this in my for_148 branch, as an experimental preview feature.
Re: Issue 219 in memcached: make error array subscript is above array bounds for v=1.4.5 on Opensuse; patch workaround included
Comment #1 on issue 219 by dorma...@rydia.net: make error array subscript is above array bounds for v=1.4.5 on Opensuse; patch workaround included
http://code.google.com/p/memcached/issues/detail?id=219

This patch makes super little sense to me. Why would it fail on that section, but not the ten other areas which have the same exact code? I don't see this error on any other platforms either.

I'd like to sit down with a suse instance and play with it. I don't have one, nor does my KVM setup work at the moment, so I'm going to punt for 1.4.9. If you or someone has a shell I can use to fiddle, I'd take a look.
Re: Issue 220 in memcached: Binary Increment returns old cas
Comment #3 on issue 220 by dorma...@rydia.net: Binary Increment returns old cas
http://code.google.com/p/memcached/issues/detail?id=220

I haven't written a test, but I don't see this bug in 1.4.7 or newer. That exact block of code was added to the end of do_add_delta, so there's no way for the old cas to persist past do_add_delta. Unless I messed up the reference in there, but I don't see it.

Unfortunately the perl tests don't test incr/decr's cas return, so I'll have to add support for that before closing the bug :/

Are you *absolutely sure* you used 1.4.7 in your latest test?