Hi,

I have recently started looking at memcached.  In my testing I discovered a scenario in which it's possible to pick up stale data from a cache server.  By "stale", I mean data that was assumed to be deleted.  I was wondering if this is a well-known problem and whether there's any solution or workaround.

Here is the scenario:

Suppose there are 2 memcached instances.  Let's call them Server A and Server B.

1. A and B are both up and running.
2. Client sets foo=bar in the cache.  Let's suppose it goes to Server A.
3. Server A goes down.
4. Client makes a cache request for "foo".  Since A is down, it attempts to read from B, which gives us a cache miss.
5. Client sets foo=bar in the cache.  This time it goes to B since A is down.
6. Server A comes back up.
7. Client makes a cache request for "foo".  It attempts to read from A, which is empty since it just came back up, so we get a cache miss again.
8. Client sets foo=bar in the cache.  The data goes to Server A.  (Note B is still populated from earlier.)
9. Client deletes "foo" from the cache.  It is deleted from A (but not B).
10. Server A goes down again.
11. Client reads foo from cache.  Since A is down, it reads from Server B, and the value is "bar".  But "bar" is stale now; it should have been deleted, per step 9.
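The steps above can be reproduced with a small simulation.  This is only a sketch, not the real memcached protocol: two in-memory dicts stand in for Servers A and B, and the hypothetical pick() helper stands in for the client's key-to-server mapping with failover (prefer A when it is up, else fall back to B).

```python
class Server:
    """A stand-in for one memcached instance: a dict plus an up/down flag."""
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.up = True

def pick(servers):
    """Return the first server that is up (stand-in for hashing + failover)."""
    for s in servers:
        if s.up:
            return s
    return None

a, b = Server("A"), Server("B")
servers = [a, b]                               # 1. A and B both up

pick(servers).data["foo"] = "bar"              # 2. set foo=bar -> goes to A
a.up = False                                   # 3. A goes down
pick(servers).data.get("foo")                  # 4. read falls back to B: miss
pick(servers).data["foo"] = "bar"              # 5. set foo=bar -> goes to B
a.up = True
a.data = {}                                    # 6. A comes back up, empty
pick(servers).data.get("foo")                  # 7. read hits A: miss again
pick(servers).data["foo"] = "bar"              # 8. set foo=bar -> goes to A
del pick(servers).data["foo"]                  # 9. delete from A (not B)
a.up = False                                   # 10. A goes down again
stale = pick(servers).data.get("foo")          # 11. read falls back to B
print(stale)  # "bar" -- stale; it was deleted from A in step 9
```

The point of the sketch is that the delete in step 9 only reaches the server the client is currently mapped to, so the copy written to B in step 5 survives and resurfaces after the second failover.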

The only way I could envision solving this is if the memcached servers were aware of each other and could propagate the deletes to their peers.  I suppose this would be a pretty drastic change to the system.

Am I just missing some simple solution?

Thanks,
Taso
