On May 4, 2007, at 1:45, Just Marc wrote:

Regarding large installs, has anyone considered a memcached proxy? It seems that a lot could be gained by having a local proxy on your frontend servers maintain backend connections and configuration and perform the optimizations my java client performs (coalescing consecutive individual gets into a single get and optimizing out duplicate gets, without otherwise processing requests out of order), even across multi-process clients.

Something like that would be a single point of failure and a bottleneck, bound by your favorite operating system's efficiency at handling connections. I think you would scale better if you left the decision-making to the clients.

I don't know how you figure it'd be a single point of failure or a bottleneck. What I described wouldn't be any more of a single point of failure than the processor(s) in your frontend servers.

Barring any bugs, you could almost guarantee an efficiency increase similar to what I observed when I wrote my java client. For example, my client will take n consecutive gets and send them as a single request (after deduplicating them). It will also take a get and a set being performed by two different requestors and send them in the same packet (at least, as far as they'll fit in one).
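
To make the coalescing concrete, it boils down to something like this (a simplified sketch, not the actual client code; GetCoalescer and the queue handling here are invented for illustration):

    import java.util.*;

    class GetCoalescer {
        // Take a run of pending gets and merge them into one multi-key get,
        // dropping duplicate keys so each key is only asked for once.
        static List<String> coalesce(Queue<String> pendingGets) {
            Set<String> keys = new LinkedHashSet<String>(); // preserves request order
            while (!pendingGets.isEmpty()) {
                keys.add(pendingGets.poll());               // duplicates silently collapse
            }
            return new ArrayList<String>(keys);
        }

        public static void main(String[] args) {
            Queue<String> pending = new LinkedList<String>(
                Arrays.asList("user:1", "user:2", "user:1", "session:9"));
            // One wire request instead of four:  get user:1 user:2 session:9
            System.out.println("get " + join(coalesce(pending)));
        }

        static String join(List<String> keys) {
            StringBuilder sb = new StringBuilder();
            for (String k : keys) {
                if (sb.length() > 0) sb.append(' ');
                sb.append(k);
            }
            return sb.toString();
        }
    }

The LinkedHashSet does the real work: duplicate keys collapse, and whatever keys remain go out in their original order as a single get.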

Additionally, memcached cluster state can be pushed into such a proxy without forcing you to reconfigure every client on every platform. This is the main reason I brought it up. The client-facing side speaks memcached, and could have a few special keys like __server_list__ and __hash_type__ that can allow dynamic control over destinations. Except for a brief pause as requests complete during a refresh, dynamically reconfiguring your cluster via your monitoring system should have no impact on your applications.
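
A rough sketch of what the control-key handling might look like on the proxy side (again, invented names and structure; only the __server_list__ and __hash_type__ keys come from the idea above):

    import java.util.*;

    class ProxyConfig {
        volatile List<String> servers = new ArrayList<String>();
        volatile String hashType = "native";

        // Called for every "set" the proxy sees; returns true if the key was a
        // control key and should not be forwarded to the backend cluster.
        boolean handleSet(String key, String value) {
            if ("__server_list__".equals(key)) {
                // e.g. value = "10.0.0.1:11211 10.0.0.2:11211"
                servers = Arrays.asList(value.split("\\s+"));
                return true;
            }
            if ("__hash_type__".equals(key)) {
                hashType = value;   // e.g. "ketama"
                return true;
            }
            return false;           // ordinary key, forward as usual
        }
    }

Anything the handler claims is swallowed by the proxy; everything else is forwarded to the cluster untouched, so applications never see the control keys.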

--
Dustin Sallings

