You say you're doing that for "scaling", right? Do you know how frequently
each key is getting hit? Memcached has a very high capacity, and it's
unlikely you actually need to do what you're doing.

The libmemcached client has a client-side replication feature which lets
you store each key on two servers in the cluster. It handles deletes as
well. I'm not sure how well it works, though.
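
If you want to try it, it's just a couple of behavior flags; rough sketch
(untested, the hosts are made up, and I think the replica behaviors only
work over the binary protocol, so check your libmemcached version):

    #include <libmemcached/memcached.h>

    /* Sketch only: keep one extra copy of every key on another server. */
    static memcached_st *replicated_client(void)
    {
        memcached_st *memc = memcached_create(NULL);
        memcached_server_add(memc, "10.0.0.1", 11211);
        memcached_server_add(memc, "10.0.0.2", 11211);

        /* replication is (I think) binary-protocol only */
        memcached_behavior_set(memc, MEMCACHED_BEHAVIOR_BINARY_PROTOCOL, 1);
        /* store each key on one additional server in the list */
        memcached_behavior_set(memc, MEMCACHED_BEHAVIOR_NUMBER_OF_REPLICAS, 1);
        /* spread reads across the master copy and the replica */
        memcached_behavior_set(memc,
                               MEMCACHED_BEHAVIOR_RANDOMIZE_REPLICA_READ, 1);
        return memc;
    }

Ordinary memcached_set and memcached_delete calls then hit both copies, and
gets are served from either one.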

A lower-tech approach is to use multiple keys per object; the keys will end
up split across the two instances about half the time (depending on the
hashing algorithm).

So you'd store: fookey1 fookey2
You'd fetch: fookey1 or fookey2 (chosen randomly)
You'd delete: fookey1 fookey2
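
In code that's roughly this (sketch; the key names are just the example
above):

    #include <libmemcached/memcached.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch of the two-key trick. */
    static void write_read_delete(memcached_st *memc, const char *val)
    {
        const char *keys[2] = { "fookey1", "fookey2" };

        /* store: write the object under both keys */
        for (int i = 0; i < 2; i++)
            memcached_set(memc, keys[i], strlen(keys[i]),
                          val, strlen(val), 0, 0);

        /* fetch: read one of the two keys, chosen at random */
        const char *k = keys[rand() % 2];
        size_t len;
        uint32_t flags;
        memcached_return_t rc;
        char *got = memcached_get(memc, k, strlen(k), &len, &flags, &rc);
        if (got)
            free(got);

        /* delete: remove both copies so neither instance keeps a stale one */
        for (int i = 0; i < 2; i++)
            memcached_delete(memc, keys[i], strlen(keys[i]), 0);
    }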

I still seriously doubt you're doing it right, though. If you only have two
instances in total, there's no way either will be overwhelming a memcached
instance. If you have to "scale" and then you have 3 instances that need
to be in sync, you've screwed yourself.
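
For reference, the completely standard setup (the one I describe again
below) is a single client with both servers listed; each key hashes to
exactly one server, so the set, get, and delete for that key all land on
the same instance and nothing needs to be kept in sync by hand. Sketch,
hosts made up:

    #include <libmemcached/memcached.h>

    /* Standard setup: one client, both servers, keys hashed across them. */
    static memcached_st *standard_client(void)
    {
        memcached_st *memc = memcached_create(NULL);
        memcached_server_add(memc, "10.0.0.1", 11211);
        memcached_server_add(memc, "10.0.0.2", 11211);
        /* consistent hashing, so adding/removing a server remaps fewer keys */
        memcached_behavior_set(memc, MEMCACHED_BEHAVIOR_KETAMA, 1);
        return memc;
    }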

On Thu, 11 Oct 2012, Kiran Kumar wrote:

> Let me put things on a clear note.
>
> This is a streaming application, where one part of the application writes
> data to both Memcache instances, and another part of the application reads
> from either one of the Memcache instances, so if data is read from one
> instance it needs to be deleted from the other instance at the same time.
>
> Hope I am clear here.
>
> On Thursday, 11 October 2012 21:14:50 UTC+5:30, Dormando wrote:
>       > I am working on a heavy-traffic web site, where GBs of data will
> be written per minute into our Memcache. So we have decided to use two
> separate instances of Memcache for the application.
>       >
>       > Right now the setup is that there is NO clustering between
> Memcache1 and Memcache2, because Memcache1 caches 50% of the data and
> Memcache2 caches the other 50% of the data.
>       >
>       >  Memcache1   Memcache2
>       >       \           /
>       >        \         /
>       >         \       /
>       >          \     /
>       >           \   /
>       >            \ /
>       >      CustomerData
>       >
>       > So right now, as per the setup, there are two Memcache instances
> for a single application.
>       >
>       > Now my question is: once we receive a value inside the
> application, which writes/sets to both Memcache instances, assume that a
> key is read from one instance of Memcache (Memcache1); I need to delete
> the same key on the other instance of Memcache at the same time, so that
> they stay in sync with each other.
>       >
>       > From the code point of view, once a value is read from Memcache, I
> am deleting that key.
>
>       Right now you say:
>
>       - There is NO clustering. 50% of keys are on Memcache1, 50% of keys
>       are on Memcache2.
>
>       Then you say:
>
>       - When you receive a value inside the application, it writes to both
>       memcache instances.
>
>       Which is the truth? If 50% of keys are on each, you are NOT writing to
>       both. Half your writes go to one, half to the other. In this *completely
>       standard setup*, deletes will go to the right place as well.
>
>       If you're trying to have 100% of your keys available in Memcache1, and
>       100% of your keys available in Memcache2, don't fucking do that.
>
>
>
