No, I don't wait, but with a single connection to memcached (where "single" 
may really mean a small number), requests naturally stack up and can be 
merged.  Write buffers can pull from all queued events.  Reducing the number of 
packets moved around the network for requests seems like it should increase 
performance.
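
Roughly what I have in mind, as a sketch (the names here are made up, and 
it glosses over protocol details and error handling): a writer that, whenever 
the socket is writable, drains everything queued so far into one buffer, so 
many stacked-up requests go out in a single write.

    import java.nio.ByteBuffer;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Hypothetical sketch of the merging idea: stack up encoded
    // requests, then let one socket write carry all of them.
    final class CoalescingWriter {
        private final Queue<byte[]> pending =
            new ConcurrentLinkedQueue<byte[]>();

        void enqueue(byte[] encodedRequest) {
            pending.add(encodedRequest);
        }

        // Called from the single I/O thread when the channel is
        // writable; pulls from *all* queued events, not just the
        // head, up to the buffer's capacity.
        ByteBuffer fillWriteBuffer(int capacity) {
            ByteBuffer buf = ByteBuffer.allocate(capacity);
            byte[] next;
            while ((next = pending.peek()) != null
                    && buf.remaining() >= next.length) {
                buf.put(pending.poll());
            }
            buf.flip();
            return buf;
        }
    }

Fewer writes means fewer packets on the wire, which is where I'd expect the 
performance win to come from.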

  It's interesting that you tried this.  Do you still have the proxy 
application available?  It may be a good starting point for my experiments, 
even if it's not as optimal as what I'm thinking.

-- 
Dustin mobile.  

-----Original Message-----
From: Steve Grimm <[EMAIL PROTECTED]>
Date: Thu, 10 May 2007 10:52:17 
To: Dustin Sallings <[EMAIL PROTECTED]>, Les Mikesell <[EMAIL PROTECTED]>
Cc: <[email protected]>
Subject: Re: Multiple nodes vs multiple servers

We tried to go the proxy route at one point and ended up not using it (at least 
not as a generic “send everything through it” proxy as originally planned) 
because even without any batching of requests, the added response latency of 
passing everything through another user process made our application measurably 
slower. A big percentage of our page generation time is spent waiting for 
memcached requests to come back, so anything that systematically increases 
memcached round-trip times is generally a huge no-no for us. We’ve actually 
selected the operating systems on some of our servers based largely on the 
latency variance in their network stacks, no joke.
 
 However, in an environment where you are not so latency-sensitive -- and I 
guess yours qualifies, if I’m correct in thinking your client is doing a 
Nagle-style “wait a little while to see if another request happens so we can 
batch them together” -- that may not matter so much and a proxy may be a 
reasonable approach.
 
 -Steve 
 
 
 On 5/10/07 10:35 AM, "Dustin Sallings" <[EMAIL PROTECTED]> wrote:
 
 
 On May 10, 2007, at 10:19, Les Mikesell wrote:
 
 
 How graceful is the system about making these changes while in production?  If 
you add servers, do you have to stop the clients to reconfigure them to use the 
new ones, and is there any problem other than less-than-optimal caching while 
some clients run with the old setup?
  
 
 The memcached nodes don't care.  They don't know about each other.
 
 The clients are where the issue is.  For example, where I'm using my Java 
client, I initialize it at application startup time and inject it where it's 
needed.  This effectively leaves me with no reconfiguration facility.
 
 Alternatively, I could access my client more dynamically and provide a means 
of pushing a new config into it, and the users of the client wouldn't care at 
all.
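
 Something along these lines would do it (just a sketch; MemcachedClient is a 
stand-in interface for whatever client implementation is in use, and 
shutdown() is a hypothetical cleanup hook):

    import java.util.concurrent.atomic.AtomicReference;

    // Minimal stand-in for whatever client interface is in use.
    interface MemcachedClient {
        Object get(String key);
        void shutdown();
    }

    // Sketch: callers hold this wrapper instead of the raw client,
    // so a new config can be pushed in by swapping the delegate.
    final class ReconfigurableClient {
        private final AtomicReference<MemcachedClient> delegate;

        ReconfigurableClient(MemcachedClient initial) {
            delegate = new AtomicReference<MemcachedClient>(initial);
        }

        // Push a new config: build a client against the new server
        // list elsewhere, then swap it in here.  Users of the
        // wrapper never notice.
        void reconfigure(MemcachedClient fresh) {
            MemcachedClient old = delegate.getAndSet(fresh);
            old.shutdown();  // hypothetical cleanup hook
        }

        Object get(String key) {
            return delegate.get().get(key);
        }
    }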
 
 I've mentioned a memcached proxy that I think would be an ideal solution to 
this problem, as well as providing a performance benefit for multi-process 
applications.  I haven't written any of it yet, though.
 
  
 -- 
 Dustin Sallings
 
  
 
 
 
 
