I can understand a set-up with replication rather than "sharding". In our situation we have only about 800MB of data in memcached, so there is hardly any need for more than one instance... But it does get a fair amount of work: the average cmd_get rate works out to about 1500 per second, and it gets close to 100 connections per second. So if that single memcached server went down, the few fairly expensive queries combined with the additional simple queries would start to overload the (master) database server, and in turn our site could go down... And yes, we've seen that happen.

As always, there are multiple ways to solve such a problem - like additional denormalization or a more permanent cache. But we were also setting up our secondary location at the time, and a single memcached would contradict the whole idea of a "backup location" anyway. Besides, the latency of going to our primary location made pages on the secondary somewhat slower.

So we solved both problems by setting up two memcached instances. Our client library sends gets to the "closest" server that is actually online, and it sends the updates and deletes to both servers. So it's somewhat similar to multi-master replication, but solved in our own client library rather than some proxy solution (plus we can use memcached 1.4 ;) ). Since it's for a cache, it's mostly okay if some deletes or sets are missed.
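
For illustration, a minimal Python sketch of that pattern (this is not our actual client library; it assumes the pymemcache package, and the host names are made up):

from pymemcache.client.base import Client
from pymemcache.exceptions import MemcacheError

# Ordered from "closest" to "farthest" for this application instance.
# Host names are hypothetical.
SERVERS = [("cache-local.example", 11211), ("cache-remote.example", 11211)]

class DualCache:
    # Reads come from the closest server that answers; sets and deletes
    # go to both servers, so the two instances stay loosely in sync.
    def __init__(self, servers):
        self.clients = [Client(addr, connect_timeout=0.1, timeout=0.2)
                        for addr in servers]

    def get(self, key):
        for client in self.clients:
            try:
                # Whatever the closest live server says (hit or miss) is
                # the answer; only move on if the server is unreachable.
                return client.get(key)
            except (MemcacheError, OSError):
                continue  # server down: try the next one
        return None  # all servers down: caller falls back to the database

    def set(self, key, value, expire=0):
        for client in self.clients:
            try:
                client.set(key, value, expire=expire)
            except (MemcacheError, OSError):
                pass  # for a cache, a missed set is acceptable

    def delete(self, key):
        for client in self.clients:
            try:
                client.delete(key)
            except (MemcacheError, OSError):
                pass  # likewise for a missed delete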

Best regards,

Arjen

On 9-10-2012 18:24 Roberto Spadim wrote:
If you want shared keys, use a shared server: one memcached for app1,
one memcached for app2, and one memcached for the shared cache.

If you want replication (aka a cluster), you should use a replicated
memcached fork (repcached, for example, or one of the many others).

2012/10/9 Les Mikesell <[email protected]>

    On Tue, Oct 9, 2012 at 8:59 AM, Kiran Kumar <[email protected]> wrote:
     > Thanks , let me explain my architecture more clearly
     >
     >  you misunderstand how memcached works. It doesn't front-end the
     >  database servers like you think it does. It works like this:
     >
     >  Memcache1   Memcache2
     >       \           /
     >        \         /
     >         \       /
     >          \     /
     >           \   /
     >            \ /
     >         MySQL Master
     >         MySQL Slave
     >
     >  There is NO replication between Memcache1 and Memcache2 because they
     >  do not cache the same data. Memcache1 caches 50% of the data and
     >  Memcache2 caches the other 50% of the data. There is no data overlap
     >  so no synchronization is necessary.
     >
     >  If either memcached server fails, the application requesting the data
     >  should "fall through" and hit the database server(s) directly.
     >
     >  Do you understand now?
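
For illustration, a minimal Python sketch of the scheme described above, where each key hashes to exactly one memcached server and a miss or a dead server falls through to the database (pymemcache, the host names, and load_from_database() are assumptions for the sketch, not anything from this thread):

import hashlib

from pymemcache.client.base import Client
from pymemcache.exceptions import MemcacheError

SERVERS = [("memcache1.example", 11211), ("memcache2.example", 11211)]
clients = [Client(addr, connect_timeout=0.1, timeout=0.2) for addr in SERVERS]

def client_for(key):
    # Each key maps to exactly one server, so the two caches hold disjoint
    # halves of the data and need no synchronization between them.
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return clients[int.from_bytes(digest[:4], "big") % len(clients)]

def cached_get(key, load_from_database):
    try:
        value = client_for(key).get(key)
        if value is not None:
            return value  # cache hit
    except (MemcacheError, OSError):
        pass  # that server is down: fall through to the database
    value = load_from_database(key)  # hypothetical loader function
    try:
        client_for(key).set(key, value, expire=300)
    except (MemcacheError, OSError):
        pass  # caching is best-effort
    return value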

    That description is correct, but it doesn't explain why your two
    applications wouldn't work the way you want in this configuration, or
    why you would want separate independent servers for each application.
    Being able to distribute the cache over multiple servers is the main
    reason people use memcached.

    --
        Les Mikesell
    [email protected]




--
Roberto Spadim
Spadim Technology / SPAEmpresarial
