Hi,

I have a Flask application running behind an nginx server, and I am unable
to serve more than 20 concurrent users before the application breaks.

*Error:*
app: 0|req: 1/35] x.x.x.x () {44 vars in 5149 bytes} [Thu Feb  7 14:01:42
2019] GET /url/edit/7e08e5c4-11cf-485b-9b05-823fd4006a60 => generated 0
bytes in 69000 msecs (HTTP/2.0 200) 4 headers in 0 bytes (1 switches on
core 0)

*OS version:*
Ubuntu 16.04 (AWS)

*CPU:*
2 cores with 4 GB RAM

*WebServer:*
nginx version: nginx/1.15.0

*APP Architecture:*
I have 2 applications running on different servers: app 1 (used for the
frontend) and app 2 (used for REST API calls). Both are Flask applications.


*app 1 uWSGI config:*

[uwsgi]
module = wsgi
master = true
processes = 3
socket = app.sock
chmod-socket = 777
vacuum = true
die-on-term = true
logto = test.log
buffer-size=7765535
worker-reload-mercy = 240
thunder-lock = true
async=10
ugreen
listen = 950
enable-threads= True
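
The async/ugreen combination above generally only adds concurrency when the
application code itself yields; ordinary blocking Flask views (for example,
waiting on a REST call to app 2) do not, so effective concurrency stays
close to the worker count. Two details worth checking: uWSGI's documented
maximum for buffer-size is 65535, and listen = 950 only takes effect if the
kernel's net.core.somaxconn is raised at least that high. For comparison, a
minimal sketch of a plain processes+threads variant of the same config (the
worker and thread counts are guesses for a 2-core box, not tested values):

[uwsgi]
module = wsgi
master = true
# a few processes plus threads per process to handle blocking requests;
# tune with load testing, these are starting points only
processes = 4
threads = 8
enable-threads = true
socket = app.sock
chmod-socket = 777
vacuum = true
die-on-term = true
logto = test.log
# 64k is the documented upper limit for buffer-size
buffer-size = 65535
# recycle any worker whose request runs longer than nginx's
# uwsgi_read_timeout of 120 seconds
harakiri = 120
listen = 128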

*app 1 nginx config:*


user  root;
worker_processes  5;
events {
    worker_connections  4000;
}
http {
    server {
        limit_req zone=mylimit burst=20 nodelay;
        limit_req_status 444;
        listen 80 backlog=1000;
        listen [::]:80;
        server_name domain name;
        location /static {
            alias /home/ubuntu/flaskapp/app/static;
        }
        location / {
            include uwsgi_params;
            uwsgi_read_timeout 120;
            client_max_body_size 1000M;
            uwsgi_pass unix:///home/ubuntu/flaskapp/app.sock;
        }
    }
}
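
One thing that stands out in the block above: "limit_req zone=mylimit" is
used, but no limit_req_zone directive defining mylimit appears in the posted
config (it may live in an included file). nginx requires the zone to be
declared in the http context, and the rate chosen there decides when
requests start being rejected with the configured limit_req_status (444
here). A hypothetical definition, with example key and rate only:

http {
    # 10 MB of shared state keyed by client IP, 10 requests per second each;
    # requests beyond the burst of 20 are rejected with limit_req_status
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;
    # ... rest of the http block as posted above ...
}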


*app 2 uWSGI config:*

[uwsgi]
module = wsgi
master = true
processes = 5
socket = app2.sock
chmod-socket = 777
vacuum = true
die-on-term = true
logto = sptms.log
async = 10
ugreen
worker-reload-mercy = 240
enable-threads = true
thunder-lock = true
listen=2000
buffer-size=65535
no-defer-accept=true
stats=stats.sock
memory-report = true
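
Since app 2 already enables a stats socket, that is the quickest way to see
whether all 5 workers are actually busy when requests start failing: the
stats server emits JSON that uwsgitop can display live. A variant exposing
it on a local TCP port instead (address and port are example values only):

[uwsgi]
# ... existing app 2 options ...
# uWSGI stats server; watch it with "uwsgitop 127.0.0.1:9191"
stats = 127.0.0.1:9191
memory-report = true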

*app 2 nginx config:*

worker_processes  1;
events {
    worker_connections  1024;
}
http {
    access_log /var/log/nginx/access.log;
    proxy_connect_timeout 2000;
    proxy_read_timeout 2000;
    fastcgi_read_timeout 2000;
    error_log /var/log/nginx/error.log info;
    include       mime.types;
    gzip on;
    server {
        listen 80 backlog=2048;
        server_name x.x.x.x;
        location / {
            include uwsgi_params;
            uwsgi_pass unix:///home/ubuntu/app/app2.sock;
            #keepalive_timeout 155s;
        }
    }
}
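
Note that proxy_connect_timeout, proxy_read_timeout and fastcgi_read_timeout
only apply to proxy_pass and fastcgi_pass; for uwsgi_pass nginx uses the
uwsgi_* timeouts, and uwsgi_read_timeout defaults to 60 seconds, which is in
the same range as the 69-second, 0-byte response in the error log above. A
sketch of the same location block with explicit uwsgi timeouts (the values
are placeholders, not recommendations):

        location / {
            include uwsgi_params;
            uwsgi_pass unix:///home/ubuntu/app/app2.sock;
            # these directives, not the proxy_*/fastcgi_* ones, govern uwsgi_pass
            uwsgi_connect_timeout 60s;
            uwsgi_send_timeout 300s;
            uwsgi_read_timeout 300s;
        }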


So please help me scale the application to handle more concurrent users.


Thanks
Ashraf
