Maybe I forgot to remove the stat. Again, you're not doing anything wrong; free_chunks_end is just a weird indicator. The real free count is simply free_chunks + free_chunks_end. That's all.
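If you want to double-check that yourself, something like the sketch below works (rough Python, not a supported tool; it assumes memcached is reachable on localhost:11211, as in the startup line quoted further down, and just speaks the plain-text "stats slabs" command):

    import socket

    def slab_free(host="localhost", port=11211):
        # Ask for slab stats over the plain-text protocol; the
        # response is a series of "STAT <class>:<name> <value>"
        # lines terminated by "END".
        s = socket.create_connection((host, port))
        s.sendall(b"stats slabs\r\n")
        buf = b""
        while not buf.endswith(b"END\r\n"):
            buf += s.recv(4096)
        s.close()

        # Sum free_chunks + free_chunks_end per slab class.
        free = {}
        for line in buf.decode("ascii").splitlines():
            parts = line.split()        # e.g. STAT 2:free_chunks 4679
            if len(parts) == 3 and ":" in parts[1]:
                cls, name = parts[1].split(":", 1)
                if name in ("free_chunks", "free_chunks_end"):
                    free[cls] = free.get(cls, 0) + int(parts[2])
        return free

    for cls, n in sorted(slab_free().items(), key=lambda kv: int(kv[0])):
        print("class %s: %d chunks actually free" % (cls, n))

Run against the stats you pasted, class 2 would report 4679 free chunks even though the tool calls it full.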
On Wed, 19 Sep 2012, mmsilveira wrote:

> Dormando,
>
> Thank you for your reply!
>
> I understood your explanation. But I installed version 1.4.15 on a test
> server (not production) and the behaviour is the same:
>
>   #  Item_Size  Max_age  Pages  Count  Full?  Evicted  Evict_Time  OOM
>   2       224B      59s      1      2    yes        0           0    0
>   3       448B       3s      1      1    yes        0           0    0
>
> STAT 2:chunk_size 224
> STAT 2:chunks_per_page 4681
> STAT 2:total_pages 1
> STAT 2:total_chunks 4681
> STAT 2:used_chunks 2
> STAT 2:free_chunks 4679
> STAT 2:free_chunks_end 0
> STAT 2:mem_requested 447
> STAT 2:get_hits 0
> STAT 2:cmd_set 2
> STAT 2:delete_hits 0
> STAT 2:incr_hits 0
> STAT 2:decr_hits 0
> STAT 2:cas_hits 0
> STAT 2:cas_badval 0
> STAT 2:touch_hits 0
> STAT 3:chunk_size 448
> STAT 3:chunks_per_page 2340
> STAT 3:total_pages 1
> STAT 3:total_chunks 2340
> STAT 3:used_chunks 1
> STAT 3:free_chunks 2339
> STAT 3:free_chunks_end 0
> STAT 3:mem_requested 225
> STAT 3:get_hits 0
> STAT 3:cmd_set 1
> STAT 3:delete_hits 0
> STAT 3:incr_hits 0
> STAT 3:decr_hits 0
> STAT 3:cas_hits 0
> STAT 3:cas_badval 0
> STAT 3:touch_hits 0
> STAT active_slabs 2
>
> The configuration is the same as on the production server; only the
> growth factor is different.
>
> Am I doing anything wrong?
>
> Thanks,
>
> Mauricio
>
> On Wednesday, September 19, 2012 4:36:46 PM UTC-3, Dormando wrote:
>
> free_chunks_end is just a counter for how many chunks are available in a
> recently allocated slab page. So if a slab class grabs 1MB of new memory,
> it'll have something in free_chunks_end temporarily, then it all moves
> into free_chunks or otherwise gets used.
>
> In 1.4.15 this counter is gone completely, as we pre-split chunks
> directly into the freelist.
>
> If it says free_chunks then you have free space; it's not full.
>
> On Wed, 19 Sep 2012, mmsilveira wrote:
>
> > Hi,
> >
> > I'm running memcached-1.4.14 on a CentOS 6 x86_64 system, and my
> > clients write small objects to memcached. I'm starting memcached with
> > these options:
> >
> >   memcached -d -p 11211 -u memcache -m 18432 -c 12288 -P /var/run/memcached/memcached.pid -t 64
> >
> > There is enough memory, the slabs are not full, and there are free
> > chunks. But memcached-tool reports the slabs as full:
> >
> >   #  Item_Size  Max_age  Pages  Count  Full?  Evicted  Evict_Time  OOM
> >   1        96B    3596s      2  14928    yes        0           0    0
> >   2       120B    3601s      1   5042    yes        0           0    0
> >   5       240B     118s      1      3    yes        0           0    0
> >
> > These are the relevant stats for my slabs:
> >
> > STAT 1:chunk_size 96
> > STAT 1:chunks_per_page 10922
> > STAT 1:total_pages 2
> > STAT 1:total_chunks 21844
> > STAT 1:used_chunks 14919
> > STAT 1:free_chunks 6925
> > STAT 1:free_chunks_end 0
> > STAT 1:mem_requested 1235946
> > STAT 1:get_hits 0
> > STAT 1:cmd_set 1
> > STAT 1:delete_hits 0
> > STAT 1:incr_hits 199280204
> > STAT 1:decr_hits 0
> > STAT 1:cas_hits 0
> > STAT 1:cas_badval 0
> > STAT 1:touch_hits 0
> > STAT 2:chunk_size 120
> > STAT 2:chunks_per_page 8738
> > STAT 2:total_pages 1
> > STAT 2:total_chunks 8738
> > STAT 2:used_chunks 5038
> > STAT 2:free_chunks 3700
> > STAT 2:free_chunks_end 0
> > STAT 2:mem_requested 524720
> > STAT 2:get_hits 0
> > STAT 2:cmd_set 0
> > STAT 2:delete_hits 0
> > STAT 2:incr_hits 1638482
> > STAT 2:decr_hits 0
> > STAT 2:cas_hits 0
> > STAT 2:cas_badval 0
> > STAT 2:touch_hits 0
> > STAT 5:chunk_size 240
> > STAT 5:chunks_per_page 4369
> > STAT 5:total_pages 1
> > STAT 5:total_chunks 4369
> > STAT 5:used_chunks 3
> > STAT 5:free_chunks 4366
> > STAT 5:free_chunks_end 0
> > STAT 5:mem_requested 672
> > STAT 5:get_hits 3
> > STAT 5:cmd_set 11
> > STAT 5:delete_hits 1
> > STAT 5:incr_hits 0
> > STAT 5:decr_hits 0
> > STAT 5:cas_hits 0
> > STAT 5:cas_badval 0
> > STAT 5:touch_hits 0
> > STAT active_slabs 3
> >
> > Why is free_chunks_end always returning "0"? Is there an error in my
> > configuration, or some allocation setting that would fix this?
> >
> > Thank you,
> >
> > Mauricio
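PS: the "Full? yes" column seems to come from memcached-tool keying on free_chunks_end alone, treating 0 as "yes". Since 1.4.15 keeps that stat at 0 permanently, every class will read as full. A saner check would be something like the sketch below (the numbers are the test-server stats quoted above; the "full means no chunks left anywhere" rule is my reading, not the tool's actual code):

    # Stats copied from the test server quoted above.
    slabs = {
        2: {"free_chunks": 4679, "free_chunks_end": 0},
        3: {"free_chunks": 2339, "free_chunks_end": 0},
    }

    for cls, st in sorted(slabs.items()):
        # A class is only really full when neither the freelist nor
        # the end-of-page counter has anything left.
        free = st["free_chunks"] + st["free_chunks_end"]
        print("class %d full? %s (%d free)"
              % (cls, "yes" if free == 0 else "no", free))

Both classes print "no", which matches the actual state: nearly every chunk in those pages is still free.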
