> Hi Roberto,
>
> On Thu, 10 Jan 2013 13:23:54 +0100
> "Roberto De Ioris" <[email protected]> wrote:
>
>>
>> > We've recently migrated from Apache which does round-robin
>> > distribution to its workers to uWSGI which does it on a
>> > first-worker-not-busy basis.
>>
>> ???
>>
>> apache (as the vast majority of preforking daemons) works in the same
>> way as uWSGI, there is a shared socket on which various
>> processes/threads wait() and accept(). The only solution using round
>> robin is passenger so i suppose you are referring to it ?
>
> Well, I've done some testing and it looks like Apache2 distributes
> requests in round-robin fashion (I was looking at the pids handling
> consecutive static requests). From what I could see it uses poll (as
> opposed to epoll for uWSGI on Linux). I wrote a simple program which
> binds to a socket in the parent and accepts connections in children
> that inherited the socket from the parent. Connections were accepted in
> round-robin fashion (although that was done in blocking mode). What's
> special about uWSGI in this respect? epoll, or maybe something else?

It is because in apache a semaphore is held around accept(), so effectively
only a single process can wait for connections. This is required in
contexts where you have lots of processes, to avoid the thundering herd
problem (all of the processes waking up on an event). This is mitigated in
uWSGI as there are generally few processes, but it is required for example
in multithread mode (using the uwsgi.thunder_lock), where dozens of threads
can wait on the same socket.

What you are experiencing in both apache and your test script is
non-deterministic behaviour, because the kernel will always choose the
(more or less) best process. So on your machine you will get almost
round-robin behaviour, while on another you could get the same process
always answering, and so on.

Using the thunder_lock increases the likelihood of round-robin distribution:

1-accept() 2-locked 3-locked
1-running 2-accept() 3-locked
1-running 2-running 3-accept()

and so on
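The difference between the two strategies can be sketched in Python (a toy
model of a preforking server, not uWSGI's actual C code; the helper name
serve_prefork is made up for illustration, and os.fork() makes this
Unix-only). With use_lock=True only one child at a time sleeps in accept(),
mimicking apache's accept mutex and the uwsgi.thunder_lock; with
use_lock=False all children sleep in accept() and the kernel picks one,
which is the non-deterministic behaviour described above.

```python
import os
import signal
import socket
import multiprocessing


def serve_prefork(n_workers, n_requests, use_lock):
    """Fork n_workers children sharing one listening socket (the
    preforking model) and return the pid that served each request."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(16)
    port = srv.getsockname()[1]
    lock = multiprocessing.Lock()  # plays the role of apache's accept mutex

    children = []
    for _ in range(n_workers):
        pid = os.fork()
        if pid == 0:
            # child: accept loop on the socket inherited from the parent
            while True:
                if use_lock:
                    lock.acquire()            # only one child sleeps in accept()
                    conn, _ = srv.accept()
                    lock.release()            # released before handling
                else:
                    conn, _ = srv.accept()    # all children sleep in accept();
                                              # the kernel wakes one of them
                conn.sendall(str(os.getpid()).encode())
                conn.close()
        children.append(pid)

    # parent: act as the client and record which child answered each time
    pids = []
    for _ in range(n_requests):
        c = socket.create_connection(("127.0.0.1", port))
        pids.append(int(c.recv(64)))
        c.close()

    for pid in children:
        os.kill(pid, signal.SIGKILL)
        os.waitpid(pid, 0)
    srv.close()
    return pids
```

On a lightly loaded box the unlocked variant may still look round-robin-ish
(as in your test), but nothing guarantees it; the locked variant serializes
the accepts as in the 1/2/3 timeline above.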

In fact, if you run the stats server in multithread mode you will see an
almost-fair distribution of requests between threads (but not between
workers).

>
>> > Some parts of our application cache things per-child assuming that
>> > that cache isn't going to live for very long, since we have a max
>> > request limit per child (implemented with childs doing harakiri)
>> > under Apache any given worker wouldn't live for more than 5-10m.
>>
>> --max-requests is the equivalent of apache MaxRequestPerChild
>
> I recall the other day we spoke about --idle option: maybe it would be
> possible to implement it on per-worker basis rather than on per-instance
> (all workers) basis ?


yes, but I am not sure if:

- you are speaking about destroying a single worker after inactivity

- you are speaking about recycling a worker after a fixed amount of time
(instead of the number of managed requests)


Both solutions are pretty easy to accomplish (and the first one could be
done simply by tuning the cheaper mode).
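For reference, the knobs discussed in this thread map onto options roughly
like these (a sketch, assuming current option names; note that --idle is
per-instance today, which is exactly the limitation raised above):

```ini
[uwsgi]
; recycle a worker after this many requests
; (the apache MaxRequestsPerChild equivalent)
max-requests = 1000

; serialize accept() between waiters, apache-style
thunder-lock = true

; cheaper mode: keep at least 2 workers around, grow up to 8 under
; load, so idle workers get reaped per-worker rather than per-instance
workers = 8
cheaper = 2

; per-instance idle: the whole instance sleeps after 60s of inactivity
idle = 60
```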

>
> Well, spawning a worker is an expensive task whereas cheaping them out
> (as the name suggests) is cheap - that's why in some cases keeping them
> around might be handy.. but then you face other problems :-)
>

that's why multiple cheaper algos exist (and more will be added).
Stackable cheaper algos will be another interesting feature (multiple
algos will be run in sequence to choose who shall die).

-- 
Roberto De Ioris
http://unbit.it
_______________________________________________
uWSGI mailing list
[email protected]
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
