We currently run our Django app under uWSGI 1.2.3 in Emperor mode. The process 
tree and our frontend.ini look like this:

> root     17445     1  0 16:39 ?        00:00:00 /opt/nextdoor-ve/bin/uwsgi --log-syslog --master --die-on-term --emperor /etc/uwsgi
> root     17446 17445  0 16:39 ?        00:00:00 /opt/nextdoor-ve/bin/uwsgi --log-syslog --master --die-on-term --emperor /etc/uwsgi
> nextdoor 17447 17446  0 16:39 ?        00:00:01 /opt/nextdoor-ve/bin/uwsgi --ini frontend.ini
> nextdoor 17564 17447 17 16:51 ?        00:01:26 /opt/nextdoor-ve/bin/uwsgi --ini frontend.ini
> nextdoor 17565 17447 22 16:51 ?        00:01:49 /opt/nextdoor-ve/bin/uwsgi --ini frontend.ini
> nextdoor 17566 17447 14 16:51 ?        00:01:12 /opt/nextdoor-ve/bin/uwsgi --ini frontend.ini
> nextdoor 17567 17447 11 16:51 ?        00:00:58 /opt/nextdoor-ve/bin/uwsgi --ini frontend.ini


> [uwsgi]
> uid = nextdoor
> home = /opt/nextdoor-ve
> pythonpath = /home/nextdoor/src/nextdoor.com/apps/nextdoor
> env = NEXTDOOR_HOME=/home/nextdoor
> env = HOME=/home/nextdoor
> uwsgi-socket = /tmp/frontend.socket
> chmod-socket  = 1
> logdate = 1
> optimize = 2
> post-buffering = 1
> enable-threads = 1
> threads = 0
> processes = 4
> master = 1
> log-syslog = frontend
> harakiri = 120
> harakiri-verbose = 1
> memory-report = 1
> reload-on-rss = 372
> blocksize = 16384
> module = frontend:application


In front of the app, we use Nginx 1.0.14. Its config looks like this (greatly 
simplified, of course):

> worker_processes  1;
> syslog local2 nginx;
> events {
>     worker_connections  1024;
> }
> 
> http {
>     keepalive_timeout         10;
>     client_max_body_size      50m;
>     sendfile                  on;
>     gzip                        on;
>     charset                     utf-8;
>     server_names_hash_bucket_size 128;
>     proxy_intercept_errors      on;
>     include                     uwsgi_params;
>     uwsgi_cache                 off;
> 
>     server {
>         listen                        443 ssl;
>         server_name             nextdoor.com *.nextdoor.com 
> prod-fe-uswest1-33-i-9b73bfc2.cloud.nextdoor.com localhost 127.0.0.1 
> 10.160.223.224;
>         root                    /home/nextdoor/src/nextdoor.com;
>         include                 uwsgi_params;
>         uwsgi_param             X-Forwarded-Ssl on;
>         uwsgi_param             HTTP_COOKIE $http_cookie;
>         location / {
>             try_files $uri @frontend; 
>         }
>         location @frontend { uwsgi_pass  unix:/tmp/frontend.socket; }
>     }
> }

We've noticed that when a particular worker process is restarted by the 
harakiri timeout, more than just that single user request fails: we tend to 
see a whole batch of failures in our Nginx error log at that moment. My theory 
is that while a particular uWSGI process works on request1, it queues up 
request2, request3, request4, etc. as new requests come in. As soon as 
request1 finishes, it should move on to the other requests ... but when the 
process is forcibly restarted by the harakiri setting, those queued requests 
get dropped on the floor. Is my understanding correct?
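
For context, the only explicit request queue I can find on the uWSGI side is 
the socket's listen backlog, controlled by the `listen` option (default 100, I 
believe); I'm assuming that's the queue involved here. A hypothetical addition 
to our frontend.ini would look like:

> [uwsgi]
> # size of the listen backlog where pending connections wait for a
> # free worker (hypothetical addition; uWSGI's default is 100)
> listen = 100

Whether a harakiri restart actually drops entries from that backlog is exactly 
the part I'm unsure about.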

If my base understanding is correct, any thoughts on the best way to handle 
this scenario? Obviously we don't want these timeouts to happen in the first 
place, but when they do, we'd rather they not impact dozens of other queued-up 
requests. Is there a way to put some kind of "query queuing mechanism" in 
front of the individual Django apps, so that if one is restarted, it doesn't 
impact the other queued requests?
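
One idea we've kicked around (sketched below, completely untested): split the 
app into several single-worker uWSGI vassals, each on its own socket, and let 
Nginx spread and retry requests across them with an upstream block, so a 
harakiri restart in one instance could only drop that instance's queue:

> # inside the existing http { } block
> upstream frontend_cluster {
>     server unix:/tmp/frontend-1.socket;   # hypothetical single-worker vassals
>     server unix:/tmp/frontend-2.socket;
> }
>
> # and in the server { } block, replacing the current @frontend location
> location @frontend {
>     uwsgi_pass           frontend_cluster;
>     uwsgi_next_upstream  error timeout;   # retry another instance on failure
> }

But that feels like working around the problem rather than a real queuing 
mechanism, so I'd love to hear better options.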

--Matt

_______________________________________________
uWSGI mailing list
[email protected]
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
