Yo,

The higher you set the max page size, the more slab classes it'll
generate, and as the class size increases, the memory overhead per
class rises...

Each class needs a minimum of one POWER_BLOCK-sized page allocated to
it, so as you increase POWER_BLOCK, more memory has to be allocated up
front. Just as an example: normally you'd have 34 or 39 slab classes of
1MB each; with a 16MB POWER_BLOCK you might need 70 classes of 16MB each
just to get off the ground.
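
To make that concrete, here's a rough back-of-the-envelope sketch (not
memcached source) of how the class count and the up-front memory scale
with the page size. The 80-byte base chunk and 1.25 growth factor below
are assumptions that only roughly mirror the 1.2.x defaults (-n / -f),
so plug in your own numbers:

#include <stdio.h>

int main(void) {
    size_t page_size = 16 * 1024 * 1024; /* the POWER_BLOCK you're trying */
    size_t chunk = 80;                   /* assumed smallest chunk size */
    double factor = 1.25;                /* assumed growth factor */
    int classes = 0;
    size_t upfront = 0;

    /* one class per chunk size until chunks reach the page size; each
     * class needs at least one full page before it can store anything */
    while (chunk < page_size) {
        classes++;
        upfront += page_size;
        chunk = (size_t)(chunk * factor);
    }
    classes++;              /* final class: one chunk fills the whole page */
    upfront += page_size;

    printf("%zu byte pages -> %d classes, ~%.0f MB before storing a thing\n",
           page_size, classes, upfront / (1024.0 * 1024.0));
    return 0;
}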

So as a side effect of the way the slabber is designed, cranking up
POWER_BLOCK gets expensive fast. Also, if you're loading 5 megabyte
objects and demarshalling them for a web request, you have larger
issues. Might as well spool those monsters to a file or persist them in
memory... I think the 1MB limit is mostly a handy sanity check: for web
requests, it's a reasonable upper bound on what you'd want to fetch in a
single go.
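
For what it's worth, the "store it in smaller chunks" workaround Nick
mentions below doesn't have to be painful. Here's a minimal sketch; the
key naming (key:N plus key:count), CHUNK_SIZE, and mc_set() are all made
up for illustration - mc_set() just stands in for whatever set call your
client library actually exposes:

#include <stdio.h>
#include <string.h>

#define CHUNK_SIZE (1000 * 1024)  /* stay comfortably under the 1MB item limit */

/* stand-in for your client's set call; here it only reports what it would do */
static int mc_set(const char *key, const void *data, size_t len) {
    (void)data;
    printf("set %s (%zu bytes)\n", key, len);
    return 0;
}

/* split one large value into <1MB pieces plus a chunk count */
static int store_large(const char *key, const char *data, size_t len) {
    char chunk_key[256];
    size_t offset = 0;
    int i = 0;

    while (offset < len) {
        size_t n = len - offset < CHUNK_SIZE ? len - offset : CHUNK_SIZE;
        snprintf(chunk_key, sizeof(chunk_key), "%s:%d", key, i++);
        if (mc_set(chunk_key, data + offset, n) != 0)
            return -1;
        offset += n;
    }
    snprintf(chunk_key, sizeof(chunk_key), "%s:count", key);
    return mc_set(chunk_key, &i, sizeof(i));
}

int main(void) {
    static char big[5 * 1024 * 1024];   /* pretend this is the 5MB lookup table */
    memset(big, 'x', sizeof(big));
    return store_large("lookup_table", big, sizeof(big));
}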

-Dormando

Nick Grandy wrote:
> Hi all,
> 
>  I've been searching around for recent experiences storing larger than
>  normal objects in memcached.  Since I didn't find much, I'm hoping
>  people will share their experiences.
> 
>  I've installed 1.2.5 and edited the slabs.c file as follows:
>  #define POWER_BLOCK 16777216
> 
>  This has the effect (I believe!) of setting the max object size to
>  16MB, and it seems to work.  Running with the -vv option shows that
>  there is a nice distribution of slabs created up to 16MB, and
>  memcached does work.  So I'm optimistic.
> 
>  Now here are the questions.  Have other people used this technique
>  successfully?  What sort of 'gotchas' might be waiting around the
>  corner?  Perhaps related, I am curious why the memcached protocol
>  limits the max size to 1MB.  Would it make sense to make the max slab
>  size a command line option?
> 
>  I guess not that many people need to store large objects, or this
>  would come up more often.  In my case, I am running a web app in Rails
>  that makes use of large hashtables.  I realize there are other
>  workarounds; eg, I could refactor so the data is stored in smaller
>  chunks <1MB (but that seems fairly arbitrary); or share the lookup
>  tables between processes by dumping to a file. But, sharing via
>  memcached seems more flexible and robust - assuming it doesn't blow
>  up!   I'm running on EC2 with ample memory, so the potential
>  inefficiency of allocating large slabs is not currently a concern.
> 
>  So, in short, is setting a large slab size a reasonable thing to do,
>  or am I making a big mistake?
> 
>  Thoughts appreciated.
>  And huge thanks to the memcached contributors for such a valuable tool!
> 
>  Nick
