On May 3, 2007, at 23:50, Just Marc wrote:

> I hope you're doing a single multi-get or say a handful of them, rather than actually 200 to 1000 DIFFERENT CONNECTIONS. I don't think Steve separated each object from within a large multi-get request when saying that some of his nodes do 30-60k reqs/s, maybe he did? ...

That's an implementation detail that's probably covered. My java client will automatically aggregate multiple distinct sequential gets destined for a single server into a multi-get. Of course, it still has to take those client requests and split them across the servers anyway.
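A minimal sketch of that aggregation idea (the class and method names here are illustrative, not the actual client API): queue up sequential gets bound for one server, then flush them as a single ASCII multi-get request line instead of one round trip per key.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: collect sequential get(key) calls destined for one
// server and flush them as a single protocol-level multi-get line.
public class GetBatcher {
    private final List<String> pending = new ArrayList<>();

    // Queue a get instead of writing "get <key>\r\n" immediately.
    public void get(String key) {
        pending.add(key);
    }

    // Emit one "get k1 k2 k3\r\n" request covering everything queued so far.
    public String flush() {
        String line = "get " + String.join(" ", pending) + "\r\n";
        pending.clear();
        return line;
    }
}
```

So three back-to-back gets for `a`, `b`, and `c` go out as the single request `get a b c\r\n` rather than three separate requests.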

I'm certainly not running a Facebook or a Bloglines, but my applications so far are pretty efficient at issuing multiple individual requests and just letting the client perform the optimization.


Regarding large installs, has anyone considered a memcached proxy? It seems a lot could be gained by having a local proxy on your frontend servers that maintains the backend connections and configuration and performs the optimizations my java client performs (converting individual gets into a single multi-get and optimizing out duplicate gets without otherwise processing requests out of order), even across multi-process clients.
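The duplicate-elimination step such a proxy could apply might look like this (a hypothetical helper, not code from any real proxy): drop repeated keys from a batch while keeping first-seen order, so the collapsed multi-get returns the same data the original request stream asked for.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Hypothetical sketch of a proxy's dedup pass: remove duplicate keys from a
// batch of gets without reordering the remaining keys.
public class GetDeduper {
    public static List<String> dedupe(List<String> keys) {
        // LinkedHashSet discards repeats but preserves insertion order.
        return new ArrayList<>(new LinkedHashSet<>(keys));
    }
}
```

A batch like `a, b, a, c` would then be sent upstream as a single multi-get for just `a, b, c`, with the proxy fanning the one `a` response back out to both original requests.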

--
Dustin Sallings
