On Sat, Oct 27, 2012 at 10:44 PM, David Walter <[email protected]> wrote:
> The reason that you are getting push back from respondents seems to me
> to be that the questions that you are asking aren't in what are
> considered the 'natural' use of memcached. That is to say, letting the
> library hash keys to servers transparently. It is however a 'natural'
> design goal to want to have synchronizations.

If you use memcached the 'natural' way, if every key is stored on exactly
one server, and if there's no failover or replication, then all your
memcached operations will automatically be 100% synchronized and atomic.

The OP's original problem was that some other application stores data in
both servers, his application needs to process that data somehow and then
delete it from both servers, and he rightly discovered that deleting a key
on both servers isn't atomic.

Your solution to that is to add some sort of lock system on top, so that
deleting a key means first trying to grab the lock, getting the lock,
deleting on both servers, and finally releasing the lock (see the sketch
below). And you rightly list all the possible problems with this approach.
But it's nuts. It's over-complicated. It's brittle. And all of these
problems come from the original requirement of trying to use memcached as
a replicated data store, which is why a lot of us are telling the OP that
it can't be done as long as that requirement stands.

Also, if the OP can't change how the producing application works, he can't
force it to use his locking system, which means it will gladly disregard
the locks and write data to the memcached servers out of sync. And then
you haven't solved anything.

/Henrik
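
Just to make the moving parts concrete, here is a minimal sketch of that
lock-on-top approach, the one I'm arguing against. It assumes two
explicitly addressed servers, a made-up lock key naming scheme, and the
python-memcached client; it is not something I'd recommend running as-is.

    import time
    import memcache

    SERVER_A = memcache.Client(['10.0.0.1:11211'])  # assumed addresses
    SERVER_B = memcache.Client(['10.0.0.2:11211'])
    LOCK_TTL = 10  # seconds; if the holder dies, the lock eventually expires

    def delete_everywhere(key):
        lock_key = 'lock:' + key  # hypothetical naming scheme
        # add() only succeeds if the key doesn't already exist, so it acts
        # as a best-effort lock -- but only against writers that honour it.
        while not SERVER_A.add(lock_key, '1', time=LOCK_TTL):
            time.sleep(0.05)  # spin until the lock is free
        try:
            SERVER_A.delete(key)
            # A crash between these two deletes still leaves the servers
            # out of sync, and a producer that ignores the lock can write
            # to either server at any point in between.
            SERVER_B.delete(key)
        finally:
            SERVER_A.delete(lock_key)

Every failure mode you listed shows up in those few lines, which is
exactly why the simpler answer is to drop the replicated-store requirement.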
