Thanks for the reply. Sorry that I used the terms "connection" and "request" interchangeably.
Let me put the question this way: if I keep max connections at 2048 and the backlog queue limit at 2048, how does that differ in performance from setting max connections to 3072 and the backlog queue limit to 1024? And if it is better to keep max connections higher than the backlog queue limit, what is the penalty of setting max connections very high? (A small sketch of how I currently understand the two limits is at the bottom of this mail.)

On May 23, 11:30 am, Dustin <[email protected]> wrote:
> On May 22, 10:45 pm, ktechie <[email protected]> wrote:
>
> > There is also another option, "-b", to set a backlog queue limit.
> > This many requests can be kept waiting for a connection. The default
> > is 1024.
>
> > So taking the default values, can we say that 1024+1024 requests will
> > be supported? Of these, 1024 will be served and 1024 will be in the
> > queue. The ones in the queue may have slightly more delay.
>
>   I'm not sure I'd classify it that way. The TCP backlog is how many
> incoming connections we tell the kernel to allow to build up between
> accept(2) calls. We're answering those as quickly as possible to
> accept new connections, but connections and requests are not really
> related. Ideally, connections are held open and operations are
> executed on existing connections (otherwise, you can easily spend more
> time connecting to memcached than having it process a request).
>
>   Your application shouldn't ever really be subjected to the TCP
> backlog -- it's kind of bad behavior on the part of the server since
> it leads to unpredictable delays (vs. a "connection refused" or
> immediate hangup). With a TCP backlog limit of 0, we'd spuriously
> report connection refused during connection rushes, even when there
> are few connections into the server, because we can't call accept
> fast enough.
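In case it helps make the two knobs concrete, here is a rough sketch of where they live, as I understand it. This is not memcached's actual code; the port, constants, and names are made up for illustration. The idea is that -b roughly corresponds to the backlog argument passed to listen(2), i.e. how many completed connections the kernel will queue between accept(2) calls, while -c is an application-level cap on how many accepted connections the server keeps open at once.

/* Rough sketch, not memcached's actual code: shows where a listen
 * backlog (like -b) fits in versus an application-level connection
 * cap (like -c).  The port and limits here are made up. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define BACKLOG   1024  /* roughly -b: completed connections the kernel
                           queues between our accept(2) calls */
#define MAX_CONNS 2048  /* roughly -c: connections we keep open and serve */

int main(void) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    if (listener < 0) { perror("socket"); exit(1); }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(11211);
    if (bind(listener, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        exit(1);
    }

    /* Once BACKLOG pending connections pile up between accept() calls,
     * the kernel starts dropping or refusing new ones. */
    if (listen(listener, BACKLOG) < 0) { perror("listen"); exit(1); }

    int open_conns = 0;
    for (;;) {
        int fd = accept(listener, NULL, NULL);
        if (fd < 0) continue;
        if (open_conns >= MAX_CONNS) {
            /* Application-level limit: accepted off the backlog,
             * but we won't service it. */
            close(fd);
            continue;
        }
        open_conns++;
        /* ... hand fd to the event loop; serve many requests on it,
         * decrementing open_conns when the client disconnects ... */
    }
}

If that picture is right, then a very high max connections value mostly costs file descriptors and per-connection state for connections that are actually held open, whereas the backlog only covers connections that are waiting to be accepted -- but please correct me if I've misunderstood.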
