On Thursday, June 14, 2012 4:13:28 AM UTC+3, Dormando wrote:
>
> > Hi 
> > 
> > I was wondering if there is any modification of memcached where the 
> > cache size can be dynamically controlled? 
> > If there is no such thing, how difficult will it be to create one? 
> > The idea is that once in a while a thread will wake up and change the 
> > cache size according to the available memory, such that the overall 
> > used memory in the machine won't exceed a defined percentage. I think 
> > that when the cache is supposed to shrink, the free memory will be 
> > released according to the memcached LRU algorithm. 
>
> Not easily, no. It's best to just give it the most RAM you want it to use 
> and leave it at that. Most OSes can't always release memory from a program 
> back to the OS at runtime anyway; it also depends a bit on the size/type 
> of the allocation... 
>

 
Hello Yiftach, Dormando and everyone,

I work with Eyal on exactly that: OSes that gain and lose physical memory at 
runtime. 
We are interested in memcached because it is an important cloud benchmark 
that stresses memory.

I think the way memcached deals with changes in the value size distribution 
is related to dynamic memory. 
If memcached caches many small objects, many slabs for small-size items are 
allocated. If the distribution then changes and suddenly all objects are 
large, at some point the small-size slabs need to be freed, or at least 
cleared and replaced by large-size slabs. If this is indeed what happens, we 
could take advantage of the point in time when a slab is freed or cleared, 
and reclaim that slab (assuming the memory was not preallocated); a rough 
sketch of what I have in mind is below.
I found a comment saying /* so slab size changer can tell later if item is 
already free or not */, but I could not find the implementation of such a 
mechanism.
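
To make the idea concrete, here is a minimal sketch (not memcached code) of 
what reclaiming an emptied slab page could look like. It assumes each slab 
page were mmap'd individually rather than carved out of a preallocated 
arena; slab_page_alloc() and slab_page_release() are hypothetical names, not 
memcached functions.

/*
 * Sketch only: return the physical memory of an emptied slab page to the
 * OS while keeping the virtual mapping, so the page can be reused later
 * for a different slab class. Assumes pages are mmap'd individually.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define SLAB_PAGE_SIZE (1024 * 1024)   /* 1 MB, memcached's default page size */

/* Map one slab page; physical frames are only assigned when touched. */
static void *slab_page_alloc(void)
{
    void *p = mmap(NULL, SLAB_PAGE_SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return (p == MAP_FAILED) ? NULL : p;
}

/* Hypothetical hook called once the last item in the page is freed:
 * keep the mapping but give the physical memory back to the OS. */
static int slab_page_release(void *page)
{
    return madvise(page, SLAB_PAGE_SIZE, MADV_DONTNEED);
}

int main(void)
{
    void *page = slab_page_alloc();
    if (!page) { perror("mmap"); return 1; }

    memset(page, 0xAB, SLAB_PAGE_SIZE);   /* simulate items being stored */

    if (slab_page_release(page) != 0) {   /* slab emptied: reclaim RSS   */
        perror("madvise");
        return 1;
    }
    munmap(page, SLAB_PAGE_SIZE);
    return 0;
}

Of course, if memcached preallocates the whole cache up front (or the slab 
memory comes from malloc), this kind of per-page release would not apply as 
written; that is exactly the case I would like to understand better.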

Do you find this a reasonable approach?

Thanks
Orna

