On May 30, 2008, at 11:19 AM, Shannon -jj Behrens wrote:

* Socket connections have a setting for how many requests are allowed
to wait in the queue before being accepted.  I don't know what the
setting for Paster is or how to change it.  I do know that I've used
ab before, and it hit this limit.  This is the likely cause of
"Connection Denied" errors.

The default socket connection queue (listen backlog) for Python is 5, I believe. It's pretty trivial to raise it if you're using the CherryPy server, i.e., in your ini file:
[server:main]
use = egg:PasteScript#cherrypy
numthreads = 20
request_queue_size = 50
host = 0.0.0.0
port = 5000
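For anyone curious what request_queue_size actually maps to underneath: it's the backlog argument to socket.listen(), the number of completed connections the OS will hold for the server before it gets around to accept()ing them. A minimal sketch (the port and backlog values here are just illustrative, not anything Paste-specific):

```python
import socket

# request_queue_size is ultimately the backlog passed to listen().
# Connections beyond this queue are dropped/refused by the OS before
# the server ever sees them.
BACKLOG = 50  # analogous to request_queue_size in the ini above

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # ephemeral port, just for the demo
srv.listen(BACKLOG)          # up to BACKLOG connections may wait unaccepted
print("listening on port", srv.getsockname()[1])
srv.close()
```

Note the OS may silently cap the backlog (e.g. at SOMAXCONN), so asking for a huge value doesn't guarantee you get it.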

Of course, you can keep raising the number of threads and the request queue size, but at some point you'll see no further gain, and then a loss in performance; you'll want to benchmark to find where that happens. Raising the request queue size makes a substantial difference in how many concurrent requests you can throw at the server while still seeing *zero* request failures in ab. Will that help in the real world? Who knows. (A former coworker of mine runs his processes with request_queue_size at 128, and it seems to help them out.)
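To see the failure mode ab runs into, here's a rough sketch (my own toy example, not anything from ab or Paste): a listener with a deliberately tiny backlog that never accepts, plus more connection attempts than the queue can hold. The exact split between successes and failures is OS-dependent, since kernels round the backlog up and handle overflow differently.

```python
import socket

# Toy demo: fill a tiny listen queue and watch further connection
# attempts fail, which is roughly what ab's connection errors report
# when the server's backlog is saturated.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)                     # deliberately tiny backlog
port = server.getsockname()[1]

ok, failed = 0, 0
clients = []
for _ in range(6):                   # more attempts than the queue holds
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.settimeout(0.3)                # overflow connects time out or are refused
    try:
        c.connect(("127.0.0.1", port))
        ok += 1
        clients.append(c)
    except OSError:
        failed += 1
        c.close()

print(ok, "connected,", failed, "failed")
for c in clients:
    c.close()
server.close()
```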

Regarding real-world vs. synthetic benchmarks, I'm also reminded of Brad Fitzpatrick's talk on scaling LiveJournal, specifically Perlbal. Perlbal doesn't come out fastest on many synthetic benchmarks, but it does some things the others don't, like knowing which HTTP servers on the back end are actually able to respond to a request *immediately*, rather than dumbly distributing requests among weighted servers without knowing whether an HTTP handler will pick each one up right away (versus it sitting in a socket queue). Brad said this led to a noticeable drop in user-visible website latency, since each request was handed to a free HTTP server immediately, and none of the benchmark programs I've seen, ab included, can measure that.

Cheers,
Ben
