> My theory is that if the threads get tied up with a few slow requests, the server can no longer service the faster ones.
That's usually the issue, and it's compounded when you don't put something like nginx in front, since slow or dropped connections can then tie up app resources directly. A few ideas come to mind:

- Take a look at your nginx config; there are options to throttle the number of connections per client (both upstream and WAN).
- Your browser could also have a limit on concurrent requests, and the keepalive implementation (if enabled in nginx) could be a factor. Are you sure the requests are being sent in parallel and not serially?
- It's possible you're having issues with database blocking.
- It's also possible, though I doubt it, that you're running into issues with the GIL.
- You could try uwsgi to see if there is any difference.

--
You received this message because you are subscribed to the Google Groups "pylons-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/pylons-discuss.
For more options, visit https://groups.google.com/d/optout.
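For what it's worth, the failure mode in the quoted theory is easy to reproduce outside a web server. The sketch below (illustrative only; the pool size, timings, and names are made up, not taken from the original poster's setup) uses a small `ThreadPoolExecutor` to stand in for a threaded WSGI server: once a few slow "requests" occupy every worker, a fast request queues behind them and its observed latency balloons.

```python
# Sketch of the thread-starvation theory: a 2-worker pool stands in
# for a threaded WSGI server. Slow requests grab both workers, so a
# fast request waits in the queue despite needing almost no time.
import time
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 2  # hypothetical: pretend the server has only 2 worker threads

def handle(request_id, duration):
    """Simulated request handler: just sleeps for `duration` seconds."""
    time.sleep(duration)
    return request_id

with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
    t0 = time.monotonic()
    # Two slow requests arrive first and occupy both workers...
    slow = [pool.submit(handle, "slow-%d" % i, 0.5) for i in range(2)]
    # ...so this fast request sits in the queue until a worker frees up.
    fast = pool.submit(handle, "fast", 0.01)
    fast.result()
    fast_latency = time.monotonic() - t0  # queue wait + handling time

print("fast request total latency: %.2fs" % fast_latency)
```

Here the fast request's total latency is roughly the slow requests' duration (~0.5s) rather than its own ~0.01s handling time, which is exactly the symptom described: a few slow requests and the server can no longer service the faster ones promptly.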
