A couple more random thoughts to add to dormando's:

1) If your memcached thread count is set correctly, each thread of a single
memcached instance will end up with, effectively, its very own processor
core (assuming a server dedicated to memcached). If you have multiple
instances, you can also get that effect, but you end up robbing your
instances of useful CPU: e.g. with 6 instances and 12 cores, 2 threads per
instance gives each thread a dedicated core, but each instance can only
ever use 2 cores, even during a burst of traffic that could take advantage
of more, leaving the other cores potentially idle.
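To make that concrete, here's roughly what the two layouts look like as
startup commands on a hypothetical 12-core box (the memory sizes, ports,
and connection limit below are all made up):

```shell
# One big instance: threads ~= cores, so each worker thread can own a
# core, and the instance can burst across all 12 when traffic spikes.
memcached -d -m 10240 -t 12 -c 4096 -p 11211

# Six small instances: 2 threads each still maps one thread per core,
# but caps any single instance's burst at 2 cores.
for port in 11211 11212 11213 11214 11215 11216; do
    memcached -d -m 1700 -t 2 -p "$port"
done
```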

1a) I could see running a memcached instance with a single thread, locked
to a single core to which it has exclusive access, as a way to get highly
deterministic response times (more or less) for some kind of soft-realtime
system, though.
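A sketch of that single-thread setup, assuming core 3 has already been
isolated from the general scheduler (e.g. with isolcpus=3 on the kernel
command line; the core number and port are arbitrary):

```shell
# Pin a single-threaded memcached to core 3, which nothing else is
# scheduled on, so response times aren't perturbed by other processes.
taskset -c 3 memcached -d -m 2048 -t 1 -p 11211
```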

2) You'd stand to waste more memory: if all your instances have slab
classes that are fairly unpopulated (e.g. classes holding only a single
object), you end up wasting 1MB per underutilized slab class per
instance. So if
you have 6 instances each with one barely-used slab class, you’re throwing
away 6MB instead of 1MB. Multiply this by the number of barely-used slab
classes you have.
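Back-of-the-envelope version of that waste, with made-up numbers:

```shell
# Toy numbers: 6 instances, each with 3 slab classes holding only a
# handful of objects. Each such class still pins at least one 1MB page.
instances=6
underused_classes=3
page_mb=1

wasted_mb=$((instances * underused_classes * page_mb))
# A single instance with the same 3 barely-used classes wastes only 3MB.
echo "${wasted_mb}MB wasted across ${instances} instances vs ${underused_classes}MB in one"
```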

3) memcached doesn’t really decrease in performance as the number of items
grows (barring pauses to grow the hash table and allocate memory initially,
but that quickly reaches steady state), so you don't gain anything by
going with “more but smaller” instances.
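If you want to watch the hash table behavior on a live instance, the
"stats" command exposes it (this assumes a local instance on the default
port and a netcat that supports -q):

```shell
# hash_power_level grows with the item count, and hash_is_expanding is 1
# only while the table is mid-resize; at steady state it stays 0.
echo stats | nc -q1 localhost 11211 | grep -E 'hash_power_level|hash_is_expanding'
```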

4) Lock contention between threads could totally be a thing, but as
dormando notes, it mostly shouldn't be with modern memcached code. Unless
all of your objects are really tiny *and* your request rate is really
high, I suspect you'll run into other resource limits before lock
contention becomes a problem (e.g. one of my memcached installs
periodically saturates a 10Gb NIC without any noticeable lock
contention).
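One way to sanity-check which limit you're actually hitting (both are
standard Linux tools; the PID lookup assumes a single memcached process):

```shell
# Is the NIC the bottleneck? Watch per-interface rx/tx throughput:
sar -n DEV 1 5

# Or is the CPU burning time on locks? Look for futex/lock symbols near
# the top of the live profile:
perf top -p "$(pidof memcached)"
```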

5) If your systems are NUMA, *and* your hardware has high cross-node
latency, *and* your OS’s scheduling tends to get your memory allocations
spread across nodes, you might get some benefit from having multiple
instances if you can keep each instance strictly on its own NUMA node,

So, long story short: The situations in which “many smaller instances”
would be a performance win are pretty specialized. They certainly exist,
but they’re gonna be pretty rare.

-j

disclaimer: dormando is the expert here! I am just running my mouth ;)


On Wed, Jun 15, 2016 at 12:11 PM, dormando <dorma...@rydia.net> wrote:

> Hey,
>
> It'll depend on a few things. I'd say in almost all cases using fewer
> larger instances is probably better from a maintenance standpoint, but
> I need to ask some questions:
>
> 1) Are you running 2G instances because you're running a 32bit memcached?
> Some folks do that to save some pointer space, but it's not that necessary
> anymore.
>
> 2) What version are you running? Very old versions weren't as good at
> thread scalability
>
> 3) Are all 6 of your per-node instances in the same pool? Or are you
> segregating different pools for different types of objects?
>
> 4) What are your metrics, if they can be shared? (feel free to send
> privately if that helps). You'd need extremely high rates of requests to
> run into threading problems under the latest software. It'll be very
> slightly (but measurably) slower than running a single thread, but the
> other benefits should outweigh that.
>
> Given the improvements to memory handling up through 1.4.25, I'd highly
> recommend testing a recent version with modern options enabled. You can
> see the release notes for ideas there. Thread scaling is good and you
> shouldn't really need to manage so many instances.
>
> I'm also hoping to finish up this today or tomorrow:
> https://github.com/memcached/memcached/pull/127 - which has all of the
> previous benefits plus an ability to take a look at what's going on on a
> live server.
>
> On Wed, 15 Jun 2016, Geoff Galitz wrote:
>
> >
> > Hi.
> >
> > We have a number of servers set up with up to 6 memcache instances per
> node.   Supposedly this was done to increase performance and avoid
> threading bottlenecks, historically.
> >
> > My question is this... at a general/best practice level are multiple
> smaller instances (e.g. 2G) favorable over a single large (e.g. 10G)
> memcached instance?  Assume we have a
> > large fleet of servers backing this memcache service.
> >
> > Thanks.
> > -G
> >
> >
> > --
> >
> > ---
> > You received this message because you are subscribed to the Google
> Groups "memcached" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> an email to memcached+unsubscr...@googlegroups.com.
> > For more options, visit https://groups.google.com/d/optout.
> >
> >
>
