If your memory is very low (only 64M), it works better the smaller the
chunks are; otherwise the slabs for big chunks will occupy a lot of
memory. With gigs of RAM (people running dedicated memcached boxes
typically reserve 70-80% of total RAM for it) the slab allocation does
not pose any problem.
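For context, memcached carves its memory into 1 MB pages, each split into fixed-size chunks belonging to one slab class, and class sizes grow geometrically. A rough sketch of how the class sizes come out (the 96-byte base size and the 1.25 growth factor are assumptions matching the usual defaults; exact values depend on the version and the -n/-f options):

```java
public class SlabSizes {
    // Next chunk size: multiply by the growth factor, then round up
    // to an 8-byte boundary, as memcached does.
    static int nextChunk(int chunk, double factor) {
        int next = (int) (chunk * factor);
        if (next % 8 != 0) next += 8 - (next % 8);
        return next;
    }

    public static void main(String[] args) {
        final int pageSize = 1024 * 1024; // memory is grabbed in 1 MB pages
        int chunk = 96;                   // assumed base size (-n plus overhead)
        while (chunk <= pageSize / 2) {
            System.out.printf("chunk=%6d bytes -> %5d chunks per page%n",
                              chunk, pageSize / chunk);
            chunk = nextChunk(chunk, 1.25); // default growth factor (-f 1.25)
        }
    }
}
```

The point for this thread: a 512K value and a 256K value land in different slab classes, so pages already assigned to the 512K class cannot serve the 256K inserts.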

I agree that a flush should probably also release the allocated slabs, but
flush is really never used in production, for obvious reasons :)
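To make that concrete: flush_all only marks existing items as expired; the pages stay assigned to their slab classes. A minimal sketch of issuing it over the text protocol (the host/port and raw-socket approach are just for illustration; the clients in the quoted thread expose flushAll() for the same thing):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;

public class Flush {
    // Build the text-protocol command; an optional delay in seconds
    // defers the flush on the server side ("flush_all [delay]").
    static String flushCommand(int delaySeconds) {
        return (delaySeconds > 0 ? "flush_all " + delaySeconds : "flush_all")
                + "\r\n";
    }

    public static void main(String[] args) throws Exception {
        // localhost:11211 is an assumption (the default memcached port)
        try (Socket s = new Socket("localhost", 11211)) {
            s.getOutputStream().write(flushCommand(0).getBytes("US-ASCII"));
            s.getOutputStream().flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(s.getInputStream()));
            System.out.println(in.readLine()); // server answers OK
        }
    }
}
```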

On 2010-07-07, Sergei Bobovich <[email protected]> wrote:
> Thanks, Brian,
> I understand that. My goal here is to better understand the possible
> limitations and set expectations properly. From what I saw in my tests (if
> the second series of inserts is also 512K, then all of them are stored
> successfully), I would conclude that if my data is about the same size
> (say, from 9 to 10K), then I will do much better by making all data
> pieces the same size (aligned to 10K). Again, this is speculation without
> knowing the internals, but my impression is that memcached successfully
> reuses slots of the same size.
>
> Regards,
> Sergei
>
> -----Original Message-----
> From: Brian Moon [mailto:[email protected]]
> Sent: Tuesday, July 06, 2010 8:36 PM
> To: [email protected]
> Cc: siroga
> Subject: Re: LRU mechanism question
>
> Just to pile on, test data that is all the same size like that is
> probably a very bad test of memcached. Most likely, all your data is not
> the exact same size.
>
> Brian.
> --------
> http://brian.moonspot.net/
>
> On 7/6/10 5:36 PM, siroga wrote:
>> Hi,
>> I just started playing with memcached. While doing very basic stuff I
>> found one thing that confused me a lot.
>> I have memcached running with default settings - 64M of memory for
>> caching.
>> 1. Called flushAll to clear the cache.
>> 2. Inserted 100 byte arrays of 512K each - this should consume about 51M
>> of memory, so I should have enough space to keep all of them. To
>> verify that, I called get() for each of them - as expected, all arrays
>> were present.
>> 3. Called flushAll again - so the cache should be clear.
>> 4. Inserted 100 arrays of a smaller size (256K). I expected to have
>> enough memory to store them as well (overall I need about 26M), but
>> surprisingly, when calling get() only the last 15 were found in the
>> cache!!!
>>
>> It looks like memcached still holds the memory occupied by the first 100
>> arrays.
>> Memcache-top says that only 3.8M out of 64M is used.
>>
>> Any info/explanation of memcached's memory management details is very
>> welcome. Sorry if this is a well-known feature, but I did not find much
>> on the wiki that would suggest an explanation.
>>
>> Regards,
>> Sergei
>>
>> Here is my test program (I got the same result using both the danga and
>> spymemcached clients):
>>
>>     MemCachedClient cl;
>>
>>     @Test
>>     public void strange() throws Throwable
>>     {
>>         byte[] testLarge = new byte[1024 * 512];
>>         byte[] testSmall = new byte[1024 * 256];
>>         int COUNT = 100;
>>         cl.flushAll();
>>         Thread.sleep(1000);
>>         for (int i = 0; i < COUNT; i++)
>>         {
>>             cl.set("largekey" + i, testLarge, 600);
>>         }
>>         for (int i = 0; i < COUNT; i++)
>>         {
>>             if (null != cl.get("largekey" + i))
>>             {
>>                 System.out.println("First not null " + i);
>>                 break;
>>             }
>>         }
>>         Thread.sleep(1000);
>>         cl.flushAll();
>>         Thread.sleep(1000);
>>         for (int i = 0; i < COUNT; i++)
>>         {
>>             cl.set("smallkey" + i, testSmall, 600);
>>         }
>>         for (int i = 0; i < COUNT; i++)
>>         {
>>             if (null != cl.get("smallkey" + i))
>>             {
>>                 System.out.println("First not null " + i);
>>                 break;
>>             }
>>         }
>>     }
>
>
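For what it's worth, Sergei's "align to 10K" idea from the quoted mail can be sketched like this (the 10K slot size and the length-prefix framing are my assumptions, not anything memcached requires): pad every value to one fixed size so all items fall into a single slab class and freed chunks are always reusable.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class AlignedValues {
    static final int SLOT = 10 * 1024; // assumed target size, one slab class

    // Pad a payload to exactly SLOT bytes, with a 4-byte length prefix
    // so the original payload can be recovered on read.
    static byte[] pad(byte[] data) {
        if (data.length + 4 > SLOT)
            throw new IllegalArgumentException("payload too large for slot");
        byte[] out = new byte[SLOT];
        ByteBuffer.wrap(out).putInt(data.length).put(data);
        return out;
    }

    static byte[] unpad(byte[] stored) {
        ByteBuffer buf = ByteBuffer.wrap(stored);
        byte[] data = new byte[buf.getInt()];
        buf.get(data);
        return data;
    }

    public static void main(String[] args) {
        byte[] payload = new byte[9 * 1024];
        byte[] stored = pad(payload); // always SLOT bytes, same slab class
        System.out.println(stored.length + " "
                + Arrays.equals(payload, unpad(stored)));
    }
}
```

The trade-off is the wasted padding per item, which is roughly what the slab allocator already costs you anyway.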


-- 
Guille -ℬḭṩḩø- <[email protected]>
:wq
