That's not really true in practice.  Yes, memcached does reuse slots, but
your items don't need to actually be the exact same size, they just need to
be in the same slab class.  In production, you'll probably never run into a
situation like your test where 100% of the slab space is allocated to the
same item size.
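
To make "same slab class" concrete: chunk sizes grow geometrically (by a
factor of 1.25 by default), and an item lands in the smallest class whose
chunk fits it. Here is a rough sketch of that sizing, using an illustrative
96-byte smallest chunk rather than memcached's exact table:

```java
public class SlabClasses {
    // Illustrative parameters, not memcached's real table: a small base
    // chunk and the default 1.25 growth factor between slab classes.
    static final double GROWTH = 1.25;
    static final int MIN_CHUNK = 96;

    // Returns the chunk size of the smallest class that fits itemSize bytes.
    static int chunkFor(int itemSize) {
        double chunk = MIN_CHUNK;
        while (chunk < itemSize) chunk *= GROWTH;
        return (int) chunk;
    }

    public static void main(String[] args) {
        System.out.println("9K  -> chunk " + chunkFor(9 * 1024));
        System.out.println("10K -> chunk " + chunkFor(10 * 1024));
    }
}
```

With these assumed parameters everything from roughly 8.2K to 10.4K shares
one chunk size, so padding 9K values out to 10K would not change which
chunks they occupy.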

Memcached is very good at what it does.

On Tue, Jul 6, 2010 at 10:03 PM, Sergei Bobovich <[email protected]> wrote:

> Thanks, Brian,
> I understand that. My goal here is to better understand possible
> limitations and set expectations properly. Actually, per what I saw in my
> tests (if the second series of inserts is still 512K, then all of them are
> stored successfully), I would conclude that if my data is about the same
> size (say, 9 to 10K), then I would do much better by making all data
> pieces the same size (aligned to 10K). Again, this is speculation without
> knowing the internals, but my impression is that memcached successfully
> reuses slots of the same size.
>
> Regards,
> Sergei
>
> -----Original Message-----
> From: Brian Moon [mailto:[email protected]]
> Sent: Tuesday, July 06, 2010 8:36 PM
> To: [email protected]
> Cc: siroga
> Subject: Re: LRU mechanism question
>
> Just to pile on, test data that is all the same size like that is
> probably a very bad test of memcached. Most likely, all your data is not
> the exact same size.
>
> Brian.
> --------
> http://brian.moonspot.net/
>
> On 7/6/10 5:36 PM, siroga wrote:
> > Hi,
> > I just started playing with memcached. While doing very basic stuff I
> > found one thing that confused me a lot.
> > I have memcached running with default settings - 64M of memory for
> > caching.
> > 1. Called flushAll to clean the cache.
> > 2. Inserted 100 byte arrays of 512K each - this should consume about 51M
> > of memory, so I should have enough space to keep all of them - and to
> > verify that, I call get() for each of them - as expected, all arrays are
> > present.
> > 3. I call flushAll again - so the cache should be clear.
> > 4. Inserted 100 arrays of a smaller size (256K). I also expected to
> > have enough memory to store them (overall I need about 26M), but
> > surprisingly to me, when calling get() only the last 15 were found in
> > the cache!!!
> >
> > It looks like memcached still holds the memory occupied by the first 100
> > arrays.
> > memcache-top says that only 3.8M out of 64M is used.
> >
> > Any info/explanation on memcached memory management details is very
> > welcome. Sorry if it is a well-known feature, but I did not find much
> > on the wiki that would suggest an explanation.
> >
> > Regards,
> > Sergei
> >
> > Here is my test program (I got the same result using both the danga and
> > spymemcached clients):
> >
> >      MemCachedClient cl;
> >
> >      @Test
> >      public void strange() throws Throwable
> >      {
> >          byte[] testLarge = new byte[1024 * 512];
> >          byte[] testSmall = new byte[1024 * 256];
> >          int COUNT = 100;
> >          cl.flushAll();
> >          Thread.sleep(1000);
> >          for (int i = 0; i < COUNT; i++)
> >          {
> >              cl.set("largekey" + i, testLarge, 600);
> >          }
> >          for (int i = 0; i < COUNT; i++)
> >          {
> >              if (null != cl.get("largekey" + i))
> >              {
> >                  System.out.println("First not null " + i);
> >                  break;
> >              }
> >          }
> >          Thread.sleep(1000);
> >          cl.flushAll();
> >          Thread.sleep(1000);
> >          for (int i = 0; i < COUNT; i++)
> >          {
> >              cl.set("smallkey" + i, testSmall, 600);
> >          }
> >          for (int i = 0; i < COUNT; i++)
> >          {
> >              if (null != cl.get("smallkey" + i))
> >              {
> >                  System.out.println("First not null " + i);
> >                  break;
> >              }
> >          }
> >      }
>
>
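
The behavior in the quoted test follows from page assignment: once a 1MB
page is handed to a slab class, flush_all only expires the items in it - the
page is never returned to the pool, so a later run of differently-sized
items has little room to grow its own class. Here is a toy model of that
accounting (illustrative only - real memcached also runs a per-class LRU and
many clients compress large values, which is why the exact counts in the
quoted test differ):

```java
import java.util.HashMap;
import java.util.Map;

public class SlabSim {
    static final int PAGE = 1 << 20;  // 1 MB slab page
    private int freePages;
    // chunk size -> pages assigned to that class, and chunks in use
    private final Map<Integer, Integer> pages = new HashMap<>();
    private final Map<Integer, Integer> used = new HashMap<>();

    SlabSim(int totalBytes) { freePages = totalBytes / PAGE; }

    // Store one item in the class with the given chunk size; a class grows
    // only by claiming a whole free page. Returns false when it cannot.
    boolean set(int chunkSize) {
        int capacity = pages.getOrDefault(chunkSize, 0) * (PAGE / chunkSize);
        if (used.getOrDefault(chunkSize, 0) >= capacity) {
            if (freePages == 0) return false;  // no free page: store fails
            freePages--;
            pages.merge(chunkSize, 1, Integer::sum);
        }
        used.merge(chunkSize, 1, Integer::sum);
        return true;
    }

    // flush_all expires the items, but pages keep their class assignment.
    void flushAll() { used.clear(); }

    public static void main(String[] args) {
        SlabSim cache = new SlabSim(64 * PAGE);   // 64 MB, like the default
        // Assumed chunk sizes for 512K and 256K values (item overhead pushes
        // each value into the next class up).
        int big = 576 * 1024, small = 288 * 1024;
        int bigStored = 0, smallStored = 0;
        for (int i = 0; i < 100; i++) if (cache.set(big)) bigStored++;
        cache.flushAll();
        for (int i = 0; i < 100; i++) if (cache.set(small)) smallStored++;
        System.out.println("big stored: " + bigStored
                + ", small stored after flush: " + smallStored);
    }
}
```

Under this model only the 64 large items that fit are kept, and after the
flush none of the smaller items can be stored at all, because every page
still belongs to the large class.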


-- 
awl
