That's what I tried first :) For some reason the load distribution was still uneven. I'll check this again; maybe I missed something.
On Tue, Feb 23, 2016 at 5:37 PM, Chris Friesen <chris.frie...@windriver.com> wrote:
> On 02/23/2016 05:25 AM, Roman Podoliaka wrote:
>
>> So it looks like there are two related problems here:
>>
>> 1) The distribution of load between workers is uneven. One way to fix
>> this is to decrease the default number of greenlets in the pool [2],
>> which will effectively cause a particular worker to give up new
>> connections to other forks as soon as there are no more greenlets
>> available in the pool to process incoming requests. But this alone
>> will *only* be effective when the concurrency level is greater than
>> the number of greenlets in the pool. Another way would be to add a
>> context switch to the eventlet accept() loop [8] right after
>> spawn_n() - this is what I've got with greenthread.sleep(0.05) [9][10]
>> (the trade-off is that we can now only accept() 1 / 0.05 = 20 new
>> connections per second per worker - I'll try to experiment with the
>> numbers here).
>
> Would greenthread.sleep(0) be enough to trigger a context switch?
>
> Chris
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
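For what it's worth, the trade-off quoted above can be sketched as a one-liner. This is just an illustration of the arithmetic, not the actual patch: sleeping S seconds after each spawn_n() caps one worker's accept rate at 1/S connections per second, while sleep(0) only yields to the hub and imposes no explicit cap (the function name is mine, purely for illustration).

```python
def max_accept_rate(sleep_seconds):
    """Upper bound on accept() calls per second for one worker whose
    accept loop sleeps sleep_seconds after handing off each connection."""
    if sleep_seconds <= 0:
        # sleep(0) only triggers a context switch; no explicit rate cap.
        return float("inf")
    return 1.0 / sleep_seconds

print(max_accept_rate(0.05))  # the figure from the thread: 20.0 accepts/sec/worker
```

So whether 0.05 is acceptable depends entirely on the expected connection rate per worker; that's presumably what the experiments with the numbers are meant to settle.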