It should probably also be noted that 'ab' is widely (well, I know, citation 
needed and all that) considered outdated.

You might have better luck benchmarking your application with wrk, for 
instance: https://github.com/wg/wrk
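
Something like this, for instance (a sketch using wrk's standard flags -- 
threads, connections, duration -- against the test URL from this thread):

    # 4 client threads, 500 open connections, 30-second run
    wrk -t4 -c500 -d30s http://10.144.166.55/static/

wrk keeps connections open for the whole run and reports latency percentiles, 
which tends to be more informative than ab's summary line.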

- Aarni

From: [email protected] [mailto:[email protected]] On 
Behalf Of Dig
Sent: 29 January 2014 17:44
To: uWSGI developers and users list
Subject: Re: [uWSGI] How to support large concurrent


Hi Andriy and Łukasz,

Thanks for your reply.

On Jan 29, 2014 8:28 PM, "Andriy Kornatskyy" 
<[email protected]> wrote:
>
> According to benchmark reported here:
>
> http://mindref.blogspot.com/2012/09/python-fastest-web-framework.html
>
> django survives 1K concurrent connections for "hello world", however the 
> performance degrades dramatically as other players come into play, e.g. 
> routing, template engine, ORM, etc. Look through the other benchmarks 
> available there to get an idea.
>

My benchmark is based on a 'Hello World' page, no ORM enabled. TCP parameters 
have been set so that nginx can handle 500 concurrent connections.

My question is why ab broke instead of queuing the requests or reporting a 503 
error. QPS looks fast enough (>500) at 300 concurrent.

I also tried gevent, but no luck.

Anyway, I'll try it on a multi-core test bed with more worker processes.
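
Before that, a quick back-of-the-envelope check on the worker math, along the 
lines Łukasz described (a rough sketch; the latency number is illustrative, 
not measured):

```python
# Rough capacity math for blocking (non-gevent) uWSGI workers.
# Assumption: each worker handles exactly one request at a time.

def max_rate(workers: int, latency_s: float) -> float:
    """Requests per second a pool of blocking workers can sustain."""
    return workers / latency_s

def workers_needed(concurrency: int) -> int:
    """Blocking workers needed to keep N requests in flight at once."""
    return concurrency  # one in-flight request per worker

print(max_rate(2, 0.004))   # 2 workers, ~4 ms/request -> 500.0 req/s of rate
print(workers_needed(500))  # but 500 truly concurrent requests -> 500 workers
```

So high rate with 2 workers is plausible, while 500 requests in flight at once 
is not; the surplus has to sit in the listen queue.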

Thanks,
Dig

> Thanks.
>
> Andriy Kornatskyy
>
> On Jan 29, 2014, at 2:20 PM, Andriy Kornatskyy 
> <[email protected]> wrote:
>
> > Try raising the TCP connection limits:
> >
> > sysctl net.core.somaxconn=2048
> > sysctl net.ipv4.tcp_max_syn_backlog=2048
> >
> > A sample UWSGI configuration:
> >
> > https://bitbucket.org/akorn/helloworld/src/tip/01-welcome/django/uwsgi.ini
> >
> > Adjust uwsgi for more processes as you find reasonable.
> >
> > Thanks.
> >
> > Andriy Kornatskyy
> >
> > On Jan 29, 2014, at 1:42 PM, Łukasz Mierzwa 
> > <[email protected]> wrote:
> >
> >> Remember that rate (N requests / second) and concurrency (N requests 
> >> running at the same time) are 2 different things. A single worker process 
> >> can handle one request at a time, but if a request is handled in 0.1 
> >> seconds it can handle 10 requests / second. So you can't really expect to 
> >> have 500 concurrent requests being handled by only 2 worker processes. 
> >> The only way to handle > 1 requests at the same time with a single worker 
> >> process is to use threads (but AFAIR django isn't thread safe, and even 
> >> if it were, threads in python have limitations, so don't expect linear 
> >> scaling) or gevent (or other async engines).
> >>
> >>
> >> 2014-01-29 Andriy Kornatskyy 
> >> <[email protected]>
> >> Dig,
> >>
> >> I really doubt it, and uwsgi can't help there.
> >>
> >> Look at some benchmarks here:
> >> http://mindref.blogspot.com/search/label/benchmark
> >>
> >> Thanks.
> >>
> >> Andriy Kornatskyy
> >>
> >> On Jan 29, 2014, at 8:16 AM, Dig 
> >> <[email protected]> wrote:
> >>
> >>> Hi uWSGI,
> >>>
> >>> [Task]
> >>>  I have a Django application that needs to serve visitors at large 
> >>> concurrency (>500).
> >>> To simplify, we created an empty project for deployment testing.
> >>>
> >>> [Background]
> >>>    Server:
> >>>        Env: Ubuntu 12.04 server 64bit + nginx 1.1.19 (1 worker)
> >>>        App: An empty Django (1.6.1) project (no database) + uWSGI (2.0)
> >>>        uwsgi command
> >>>            uwsgi --master --socket 
> >>> 127.0.0.1:54321 --uid 1000 --gid 1000 --harakiri 
> >>> 120 --reload-on-rss 256 --vacuum --limit-post 10485760 --post-buffering 
> >>> 4096 --touch-reload /home/dig/st/touch-to-reload-speed-test --python-path 
> >>> /home/dig/st --python-path /home/dig/st/st/ --module wsgi 
> >>> --socket-timeout 30 --listen 1000 --close-on-exec --processes=2
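> >>>
> >>>        (the same command as an equivalent uwsgi.ini, for readability -- 
> >>> an untested restatement of the exact flags above)
> >>>
> >>>            [uwsgi]
> >>>            master = true
> >>>            socket = 127.0.0.1:54321
> >>>            uid = 1000
> >>>            gid = 1000
> >>>            harakiri = 120
> >>>            reload-on-rss = 256
> >>>            vacuum = true
> >>>            limit-post = 10485760
> >>>            post-buffering = 4096
> >>>            touch-reload = /home/dig/st/touch-to-reload-speed-test
> >>>            python-path = /home/dig/st
> >>>            python-path = /home/dig/st/st/
> >>>            module = wsgi
> >>>            socket-timeout = 30
> >>>            listen = 1000
> >>>            close-on-exec = true
> >>>            processes = 2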
> >>>
> >>>    Test tool:
> >>>        OS: Ubuntu 12.04 server 64bit
> >>>        Apache benchmark (ab)
> >>>
> >>>    Network: 1Gbps Ethernet
> >>>
> >>> [Steps]
> >>> 1. start nginx, and execute test for static content
> >>>    $ ab -k -c 500 -n 10000 http://10.144.166.55/static/
> >>>    All requests completed, and about 10,000 requests were handled per second.
> >>>
> >>> 2. create an empty Django project, deploy it with uwsgi, and test with 
> >>> ab again:
> >>>    $ ab -k -c 500 -n 10000 http://10.144.166.55/static/
> >>>    I got error:
> >>>        Benchmarking 10.144.166.55 (be patient)
> >>>        apr_socket_recv: Connection reset by peer (104)
> >>>
> >>> 3. decrease the concurrency to 300:
> >>>    $ ab -k -c 300 -n 10000 http://10.144.166.55/static/
> >>>    it succeeded, and about 530 requests were handled per second.
> >>>
> >>> [Question]
> >>>  Are there any instructions to make the empty Django application support 
> >>> over 500 concurrent connections on a single box?
> >>>
> >>> Thanks,
> >>> Dig
> >>> _______________________________________________
> >>> uWSGI mailing list
> >>> [email protected]<mailto:[email protected]>
> >>> http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
> >>
> >>
> >>
> >>
> >> --
> >> Łukasz Mierzwa
> >
>
