On May 10, 2007, at 14:58, Steve Grimm wrote:

On 5/10/07 11:09 AM, "Dustin Sallings" <[EMAIL PROTECTED]> wrote:
No, I don't wait, but with a single connection to memcached (where "single"
may really be a small number), requests naturally stack up and can be merged.
Write buffers can pull from all queued events, so reducing the number of
packets moved around the network for requests seems like it should increase
performance.

Yes, absolutely it will, both from a server CPU point of view (multi-get is
much more efficient than get) and, if you're really pushing a lot of
traffic, from a network capacity point of view. So you're saying the
batching only happens when the socket buffer on your memcached connection is full and you have to hang on to data and wait for the connection to become writeable again anyway? That makes sense. (Yes, sorry, I know I could just
go look at the code...)

Right. I'm a bit lazier about this than I could be (partly because buffers
in Java are... weird), but effectively, the IO thread maintains its own write
buffer that it fills from the buffers of the queued operations. There's a
special case in the fill method that merges sequential gets.

Even without the get merging, multiple requests to the same server may be sent in the same packet.
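
To make that concrete, here's a rough sketch of the idea. This is not the
actual spymemcached code; the Op type, field names, and method names are made
up for illustration. The IO thread drains a queue of pending operations into
the connection's write buffer, and a run of adjacent single-key gets is
collapsed into one multi-key "get" line, so several logical requests go out
in a single write and often a single packet.

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Queue;

    final class WriteBufferFiller {
        // Hypothetical pending-operation type: either a get for one key, or a
        // pre-assembled buffer for any other command.
        static final class Op {
            final String getKey;   // non-null for single-key gets
            final ByteBuffer raw;  // non-null for everything else
            Op(String getKey, ByteBuffer raw) { this.getKey = getKey; this.raw = raw; }
        }

        private final Queue<Op> pending = new ArrayDeque<Op>();

        // Called by the IO thread when the connection becomes writable. Real
        // code would also respect writeBuffer.remaining(); that bookkeeping is
        // omitted here to keep the merging logic visible.
        void fill(ByteBuffer writeBuffer) {
            List<String> keys = new ArrayList<String>();
            Op op;
            while ((op = pending.poll()) != null) {
                if (op.getKey != null) {
                    keys.add(op.getKey);           // hold it; more gets may follow
                } else {
                    flushGets(keys, writeBuffer);  // a non-get ends the run of gets
                    writeBuffer.put(op.raw);
                }
            }
            flushGets(keys, writeBuffer);          // flush any trailing run of gets
        }

        private void flushGets(List<String> keys, ByteBuffer writeBuffer) {
            if (keys.isEmpty()) return;
            String line = "get " + String.join(" ", keys) + "\r\n";
            writeBuffer.put(line.getBytes(StandardCharsets.US_ASCII));
            keys.clear();
        }
    }

The merged line is exactly what a client would send for an explicit
multi-get, which is why the server-side win you describe applies here too.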

I don't think it's something we ever considered, because the ratio of
memcached hosts to client processes on any single client machine here is
pretty high; the chances of enough separate client processes on one host
needing to write enough requests to the same memcached host at the same time
to clog up a connection are pretty slim. But if your number of clients per
host is much higher than your number of memcached servers, that wouldn't be
so true.

This makes sense. In a Java app server context like mine, I'd get very
little benefit from such a thing because I'm already able to share the
connection resource very effectively. I'm thinking more about the cases where
people aren't able to.
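
Concretely, the sharing I mean looks something like the sketch below: one
client instance (and therefore one connection and one IO thread per memcached
server) used by every request thread in the app server. The class and method
names assume the spymemcached API and the hostname is made up, so treat it as
illustrative rather than exact.

    import java.io.IOException;
    import java.net.InetSocketAddress;

    import net.spy.memcached.MemcachedClient;

    public final class SharedCache {
        // One client, and thus one socket per memcached server, shared by
        // every request-handling thread in the app server.
        private static final MemcachedClient CLIENT;

        static {
            try {
                CLIENT = new MemcachedClient(
                        new InetSocketAddress("memcached1.example.com", 11211));
            } catch (IOException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        private SharedCache() {}

        public static Object get(String key) {
            return CLIENT.get(key);
        }

        public static void set(String key, int ttlSeconds, Object value) {
            CLIENT.set(key, ttlSeconds, value);
        }
    }

With everything funneled through that one client, the request queue naturally
builds up the runs of gets that the fill-and-merge step above can take
advantage of.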

        You seem to have your farm well under control.

  It's interesting that you tried this. Do you still have the proxy
application available? It might be a good starting point for my experiments,
even if it isn't as optimal as what I'm thinking.

We do still have it, and the plan from the start was to release it as open source at some point. Since it ended up being less useful than we thought it would be, I didn't finish the prep work for that. It still needs a bit of
cleaning up (not to mention documentation) before I'd be comfortable
unleashing it on the world. If people are really interested, I'll see what I
can do, but don't expect it tomorrow or anything.

Well, don't worry too much about the state of it, and take your time. I'm not in a huge hurry to take on more work right now anyway.

--
Dustin Sallings

