Hi Dormando and everyone else,

I have re-implemented the dynamic memory support on the basis of Dormando's tree ( https://github.com/dormando/memcached/commits/slab_boost ), in https://github.com/ladypine/memcached .
Dormando - thank you for your comments, suggestions, and changes, which helped me make the patch more elegant:

* Auto shrink is enabled when slab_reassign is active and memory needs to be shrunk. Memory growth may still be limited if slab_reassign is not active (up to the point where memory was already used). No additional command is required.

* The command for changing the max memory at run time is the same as the command at init time - "m".

* The automove levels are used for aggression (inactive, lazy, "angry birds mode"), regardless of the operation - shrink or move. So a best candidate can be chosen in the aggressive mode even if no candidate fits the strict conditions of automove level 1. The test (slabs_shrink.t) shows the difference in responsiveness: the aggressive mode needs only 1 second of sleep, while the lazy modes need at least 30. This test might annoy developers, because it takes over 60 seconds, so maybe it should not be run on a regular basis (use only the aggressive test).

* Several slabs can be killed together quickly (currently only during shrinkage, which normally requires killing many slabs). Once the best source is determined, it kills at least its equal share of the required slabs. The process can then repeat without changing the source, and only afterwards are new statistics sought.

* Shrinkage can be activated from the command line using a negative destination. In this case, the absolute value of the destination is used as the number of slabs to kill. I did not find the "slab reassign -1 -1" option so useful, because the same effect can be achieved by reducing the memory by one slab: one slab will then get nuked from the best place (assuming aggressive mode). If automove is enabled, it will automatically go into action after the shrink, so this is left to the user.

Orna

On Wed, Jun 27, 2012 at 11:42 AM, Ladypine <[email protected]> wrote:
> Hi Dormando,
>
> Thank you for your insightful answer.
> I implemented the autoshrink mechanism in 1.4, because I could not find
> the slab mover in 1.6. The patch is attached, and I will be happy to hear
> your opinion. It contains the following content:
> "
> Added slab autoshrink: dynamic memory support.
> mem_limit can now be shrunk or expanded in mid-run using the maxmegabytes
> command. It uses the infrastructure of slab automove, preferring slab shrink
> to slab automove if both mechanisms are active. It is tested in the new
> test t/slabs_shrink.t.
>
> Bugfix: when USE_SYSTEM_MALLOC, asserting clsid is zero accessed
> uninitialized memory.
> Fixed t/binary.t for number of tests.
> Improved error message in devtools/clean-whitespace.pl
> "
>
> Next, I intend to enable the user to query the current mem_limit and
> mem_allocated sizes, and to improve the user's control over the speed at
> which slabs are freed. Currently it is always one slab at a time, at most
> 1 per second, which gives a rate of 1 MB/sec. This is not enough to change
> 200 MB within seconds.
>
> Thanks
> Orna
>
> On Thursday, June 14, 2012 10:41:21 AM UTC+3, Dormando wrote:
>>
>> >
>> > Hello Yiftach, Dormando and everyone,
>> >
>> > I work with Eyal exactly on that: OSes that gain and lose physical
>> > memory at runtime. We are interested in memcached because it is an
>> > important cloud benchmark which stresses the memory.
>> >
>> > I think the way memcached deals with changes in the value size
>> > distribution has to do with dynamic memory. If memcached caches many
>> > small objects, many slabs for small-size items are allocated. If the
>> > distribution then changes, and suddenly all objects are large-sized,
>> > then at some point small-size slabs need to be freed, or at least
>> > cleared and replaced by large-size slabs. If this is indeed what
>> > happens, we could take advantage of the point in time when a slab is
>> > freed or cleared, and reclaim the slab (assuming the memory was not
>> > preallocated).
>> > I found a comment saying /* so slab size changer can tell later if item
>> > is already free or not */, but I could not find the implementation of
>> > such a mechanism.
>> >
>> > Do you find this a reasonable approach?
>>
>> You're making some assumptions about the relationship between the LRU and
>> slab allocation. A slab will be full of completely random allocations;
>> ejecting memory to free it up will lose a bunch of items. It also doesn't
>> do what you suggest, not natively. Before 1.4.11, slab memory allocations
>> were static. If you loaded small items, then large items, your large
>> items would have no memory.
>>
>> http://code.google.com/p/memcached/wiki/ReleaseNotes1411
>>
>> If you look at how the slab mover is implemented, you could inject
>> yourself into the slab mover, and instead of moving a slab between one
>> class and another, free it once the items are cleared.
>>
>> It is, as I said, not something you want to do all willy-nilly.
>> Preserving the items in the slab page you intend to move requires a lot
>> more careful work than I was willing to do at the time. It might be too
>> slow as well.

-- 
Orna Agmon Ben-Yehuda. http://ladypine.org
