Exactly, which is why the solution to avoiding cache synchronization errors
is simply not to implement failover. In my client I use the ketama method
for server selection, but if a server fails I just mark it as dead and
keep retrying it at a growing interval. As long as it's dead, all requests
that would land on that server simply miss.
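A minimal sketch of that scheme, in Python (hypothetical names, and MD5-based ring points standing in for the real ketama layout; this is not the actual client code):

```python
import hashlib
import time
from bisect import bisect

class KetamaRing:
    """Ketama-style consistent hashing where a dead server stays on the
    ring: its keys are never remapped, requests to it just miss until a
    retry (with a growing backoff interval) finds it alive again."""

    def __init__(self, servers, points_per_server=160):
        self._ring = []  # sorted list of (hash, server) points
        for server in servers:
            for i in range(points_per_server):
                self._ring.append((self._hash(f"{server}-{i}"), server))
        self._ring.sort()
        self._hashes = [h for h, _ in self._ring]
        self._dead = {}  # server -> (next retry time, current backoff)

    @staticmethod
    def _hash(key):
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")

    def mark_dead(self, server, initial_backoff=1.0):
        self._dead[server] = (time.time() + initial_backoff, initial_backoff)

    def mark_alive(self, server):
        self._dead.pop(server, None)

    def get_server(self, key):
        """Return the server owning `key`, or None while it is dead.

        Because the dead server keeps its ring positions, keys are NOT
        remapped to a neighbor; the caller treats None as a cache miss.
        Once the retry time passes, the server is handed back so the
        caller can probe it; the backoff doubles for the next failure.
        """
        idx = bisect(self._hashes, self._hash(key)) % len(self._ring)
        server = self._ring[idx][1]
        dead = self._dead.get(server)
        if dead is None:
            return server
        next_retry, backoff = dead
        if time.time() >= next_retry:
            # time to retry: grow the interval in case it fails again
            self._dead[server] = (time.time() + backoff * 2, backoff * 2)
            return server
        return None  # still dead -> treat as a miss
```

The point of keeping the dead server on the ring is that no other server ever serves its keys, so there is no stale copy to invalidate when it comes back.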


/Henrik

On Tue, Jun 23, 2009 at 14:33, Pau Freixes <[email protected]> wrote:

> Henrik
>
>
>
> Again, if you want clients with good server selection algorithms, you
>> should take a look at the ones that implement the libketama method:
>> http://www.last.fm/user/RJ/journal/2007/04/10/rz_libketama_-_a_consistent_hashing_algo_for_memcache_clients
>>
>
> I have seen this code, and it's good for minimizing the problem of
> invalidating the major part of the cache [1], but it has the same problem
> in a failover scheme.
>
>
> When you shut down some server - perhaps after a network error, using a
> monit [2] strategy - and you update your ketama file by deleting this
> server, some keys will be remapped to the nearest server, and when you
> start the failed server up again those keys will be remapped back to the
> original server.
>
> Without a cache invalidation strategy you can retrieve inconsistent data
> from your memcached server.
>
> Last night I watched a video [3] about Facebook's memcached improvements
> and their architecture; they talk about a component called mcproxy to
> handle cache consistency. Do you know anything about this?
>
> [1]
> http://code.google.com/p/memcached/wiki/FAQ#How_does_memcached_handle_failover
> [2] http://mmonit.com/monit/
> [3] www.facebook.com/video/video.php?v=631826881803
>
> --
> --pau
>
