On Thu, Nov 25, 2010 at 5:36 PM, Martin Pool <[email protected]> wrote:
> On 25 November 2010 04:53, Robert Collins <[email protected]> wrote:
>>> We run multiple appservers per core. Appserver CPU utilization seems
>>> to sit around 85% spiking up to 96% during peak times so I'm not sure
>>> what we gain as running more single threaded instances doesn't grow us
>>> extra cores.
>>
>> We gain better responsiveness.
>
> Because, why? Do you expect the OS to give fairer scheduling between
> concurrent request threads than Python can do?
Several things:

Firstly, by having new requests go to genuinely idle workers, not workers
serialising some percentage of their work with other threads via the GIL.
Currently an 'idle' slot may be sharing its resources with three CPU-bound
slow-render requests.

Secondly, by avoiding general inefficiencies in GIL handling -
http://www.dabeaz.com/python/UnderstandingGIL.pdf (which stub reminded me
of on IRC today).

Thirdly, by choosing the number of workers to balance against the amount of
expected CPU, so each request gets near-dedicated resources servicing it.
Assuming a 50% work split between the DB and Python (good pages have this
ratio), we can run CPU count * 2 Python processes. At the moment we run
enough appservers to use 75% or so of the machine, but this is done by
having lots of 4-worker processes, so the efficiency per request is low.
(There's a rough sizing sketch at the end of this mail.)

Fourthly, by avoiding starvation when one thread is CPU-bound (another
discrete, known GIL issue). This is the specific fairer-scheduling aspect,
and there's a tiny threads-vs-processes illustration below.

> It seems to me you'll be almost certainly wasting some resources by
> having multiple non-shareable copies of Python modules, which at
> hundreds of MB per thread could be a big fraction of the appserver's
> physical memory.

We've put 12GB in the machine to compensate. Baseline footprint for an
appserver is 200MB resident, 500MB virtual, and we're expecting the final
config to be 16 single-threaded appservers on the test machine (wampee),
i.e. about 3.2GB resident. Plenty of headroom for memcache and appserver
footprint there.

-Rob
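A minimal sketch of the contention the first and fourth points describe; this is not
Launchpad code, and the workload is an arbitrary made-up CPU-bound loop, just to show
two CPU-bound tasks running as threads in one interpreter (serialised by the GIL)
versus as two separate processes (genuinely parallel on a multi-core box):

    # Hypothetical illustration only; the function names and workload are invented.
    import time
    from threading import Thread
    from multiprocessing import Process

    def cpu_bound(n=10_000_000):
        # Stand-in for a slow page render: pure-Python CPU work that holds the GIL.
        total = 0
        for i in range(n):
            total += i * i
        return total

    def timed(label, worker_cls):
        workers = [worker_cls(target=cpu_bound) for _ in range(2)]
        start = time.time()
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print(f"{label}: {time.time() - start:.2f}s")

    if __name__ == "__main__":
        # Two threads share one GIL, so the CPU work is serialised (and, per the
        # Beazley slides above, can degrade further from GIL churn).
        timed("threads (one process)", Thread)
        # Two processes each have their own interpreter and GIL, so on a
        # multi-core machine the same work runs roughly in parallel.
        timed("processes", Process)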
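And a back-of-the-envelope version of the sizing arithmetic from the third point and
the memory reply, using the figures quoted in this mail (50% DB/Python split, 200MB
resident per appserver, 12GB in the box). The core count of 8 is an assumption
inferred from the "CPU count * 2" rule and the 16-worker figure, and the helper
function is hypothetical:

    # Hypothetical sizing helper; figures come from the thread, core count is assumed.
    def sizing(cores, python_fraction=0.5, resident_mb=200, machine_gb=12):
        # With ~50% of each request spent in Python and the rest waiting on the DB,
        # CPU count * 2 single-threaded workers keeps the CPUs near-dedicated.
        workers = int(cores / python_fraction)
        resident_gb = workers * resident_mb / 1000
        return workers, resident_gb, machine_gb - resident_gb

    workers, used_gb, headroom_gb = sizing(cores=8)
    # 8 cores -> 16 workers; 16 * 200MB is roughly 3.2GB resident, leaving
    # around 8.8GB of the 12GB machine for memcache and appserver growth.
    print(workers, round(used_gb, 1), round(headroom_gb, 1))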

