Let me put things in a clear note. This is a streaming application, where one part of the application writes data to both instances of Memcache, and another part of the application reads from either one of the Memcache instances. So if data is read from one instance, it needs to be deleted from the other instance at the same time. Hope I am clear here.
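To make that concrete, here is a rough sketch of the flow I am describing (just an illustration using the python-memcached client; the host names and key are placeholders, not our actual setup):

import memcache

# Two separate clients, one per Memcache instance (placeholder host names).
mc1 = memcache.Client(['memcache1.example.com:11211'])
mc2 = memcache.Client(['memcache2.example.com:11211'])

def write_value(key, value):
    # The writer side sets the same key/value on BOTH instances.
    mc1.set(key, value)
    mc2.set(key, value)

def read_value(key):
    # The reader side reads from whichever instance it is pointed at
    # (mc1 here), then deletes the key from both so they stay in sync.
    value = mc1.get(key)
    if value is not None:
        mc1.delete(key)
        mc2.delete(key)
    return value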
On Thursday, 11 October 2012 21:14:50 UTC+5:30, Dormando wrote:
>
> > I am working on a heavy-traffic web site, where there will be GBs of
> > data written per minute into our Memcache. So we have decided to use
> > two separate instances of Memcache for the application.
> >
> > Right now the setup is that there is NO clustering between Memcache1
> > and Memcache2, because Memcache1 caches 50% of the data and Memcache2
> > caches the other 50% of the data.
> >
> > Memcache1       Memcache2
> >     \               /
> >      \             /
> >       \           /
> >        \         /
> >         \       /
> >          \     /
> >        CustomerData
> >
> > So right now, as per the setup, there are two Memcache instances for a
> > single application.
> >
> > Now my question is: once we receive a value inside the application,
> > which writes/sets to both the Memcache instances, assume that a key is
> > read from one of the instances (Memcache1). I need to delete the same
> > key on the other instance of Memcache also at the same time, so that
> > they will be in sync with each other.
> >
> > As per the code point of view, once a value is read from Memcache, I am
> > deleting that key.
>
> Right now you say:
>
> - There is NO clustering. 50% of keys are on Memcache1, 50% of keys are
>   on Memcache2.
>
> Then you say:
>
> - When you receive a value inside the application, it writes to both
>   memcache instances.
>
> Which is the truth? If 50% of keys are on each, you are NOT writing to
> both. Half your writes go to one, half to the other. In this *completely
> standard setup*, deletes will go to the right place as well.
>
> If you're trying to have 100% of your keys available in Memcache1, and
> 100% of your keys available in Memcache2, don't fucking do that.
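For comparison, the "completely standard setup" described above, where each key hashes to exactly one of the two instances so that sets, gets, and deletes for that key all reach the same server, would look something like this (again only a sketch with python-memcached and placeholder host names; any client that hashes keys across a server list behaves the same way):

import memcache

# One client configured with both servers. python-memcached hashes each key
# to exactly one of the two instances (placeholder host names).
mc = memcache.Client(['memcache1.example.com:11211',
                      'memcache2.example.com:11211'])

# set, get and delete for a given key all hash to the same server, so a
# delete always reaches the one instance that actually holds the key.
mc.set('customer:42', 'some customer data')
print(mc.get('customer:42'))
mc.delete('customer:42')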
