Hi everybody,

With the FrameworkBenchmarks suite, I've found an interesting counter-example where a more classical Web stack is clearly better than API-Hour+aiohttp.web: simple JSON serialization. I've tested WSGI+Meinheld because it's the best-performing combination, but the Flask version is also faster than API-Hour+aiohttp.web.
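For context, the /json handlers being compared look roughly like the sketch below. This is only a minimal illustration with my own naming (the real FrameworkBenchmarks code is organized differently); the WSGI app is typically run under Gunicorn with the Meinheld worker, with something like "gunicorn app:application --worker-class meinheld.gmeinheld.MeinheldWorker".

import asyncio
import json

from aiohttp import web


# WSGI: a plain synchronous callable; the async I/O done by Meinheld is
# completely hidden from this business logic.
def application(environ, start_response):
    body = json.dumps({"message": "Hello, World!"}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]


# aiohttp.web: a coroutine handler scheduled by the asyncio event loop
# (pre-Python-3.5 @asyncio.coroutine style, as used at the time).
@asyncio.coroutine
def json_handler(request):
    body = json.dumps({"message": "Hello, World!"}).encode("utf-8")
    return web.Response(body=body, content_type="application/json")


app = web.Application()
app.router.add_route("GET", "/json", json_handler)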
WSGI:

$ wrk -t8 -c256 -d1m http://192.168.2.100:8080/json
Running 1m test @ http://192.168.2.100:8080/json
  8 threads and 256 connections
  Thread Stats   Avg       Stdev     Max     +/- Stdev
    Latency    *1.81ms*    2.24ms   32.56ms   99.04%
    Req/Sec    20.49k      3.09k    52.56k    81.39%
  9300719 requests in 1.00m, 1.59GB read
Requests/sec: *155019.04*
Transfer/sec:      27.05MB

API-Hour+aiohttp.web:

$ wrk -t8 -c256 -d1m http://192.168.2.100:8008/json
Running 1m test @ http://192.168.2.100:8008/json
  8 threads and 256 connections
  Thread Stats   Avg       Stdev     Max     +/- Stdev
    Latency   *18.36ms*   11.36ms  117.66ms   67.44%
    Req/Sec     1.79k    238.97      2.65k    74.02%
  854843 requests in 1.00m, 158.16MB read
Requests/sec: *14248.79*
Transfer/sec:       2.64MB

That said, when you use Meinheld, you are in fact using an async pattern anyway, even if it's hidden from your business logic, aren't you?

For fun, I've made a web service without aiohttp.web, in pure AsyncIO. It isn't a production-ready approach, only a way to see the maximum performance we should expect with AsyncIO (a rough sketch is at the bottom of this mail):

$ wrk -t8 -c256 -d1m http://192.168.2.100:8009/json
Running 1m test @ http://192.168.2.100:8009/json
  8 threads and 256 connections
  Thread Stats   Avg       Stdev     Max     +/- Stdev
    Latency    *1.96ms*    3.55ms   60.51ms   99.05%
    Req/Sec    19.77k      3.06k    55.78k    85.49%
  8972814 requests in 1.00m, 1.49GB read
Requests/sec: *149565.74*
Transfer/sec:      25.39MB

In pure AsyncIO, it's still slower than WSGI+Meinheld, but much closer in terms of results. Maybe the remaining difference comes from the fact that Meinheld is coded in C, whereas AsyncIO is pure Python.

--
Ludovic Gasc (GMLudo)
http://www.gmludo.eu/

On Fri, Mar 13, 2015 at 10:07 AM, Ludovic Gasc <[email protected]> wrote:

> On Fri, Mar 13, 2015 at 8:35 AM, INADA Naoki <[email protected]> wrote:
>
>> >> IMO, when I really need high performance (1000~ req/(sec*cores)), I
>> >> use Go these days.
>> >
>> > From my point of view, Python is the best language to write business
>> > logic. I hope to continue to find solutions to improve performance with
>> > Python instead of rebuilding a toolbox with another language.
>>
>> Agree. I'm waiting for PyPy3 2.5.0 (compatible with Python 3.3) to run
>> asyncio on PyPy.
>
> Yes, I have the same hope with PyPy, because I've seen benchmarks like
> this:
> https://twitter.com/oberstet/status/550741713762136064
> https://github.com/oberstet/scratchbox/tree/master/python/twisted/sharedsocket
>
> Especially these results:
> https://github.com/oberstet/scratchbox/raw/master/python/twisted/sharedsocket/results.pdf
> I have no idea whether it means that PyPy should be more performant than
> Go; nevertheless, if I manage to reach half the performance of an Nginx,
> it means I only need twice as many servers to get the same performance.
> That's more acceptable than x10 or x100.
>
> BTW, even if these figures turn out to be true with AsyncIO, it will be
> hard to convince people to use it: for most people, Python is slow.
>
>> Go is best for networking middleware and micro~small services.
>> So I want to have a common way to call Go from asyncio.
>
> From my point of view, it sounds a little bit like a Frankenstein
> association. Nevertheless, maybe you're right in terms of performance,
> compared to using something like Redis or ZeroMQ to have fast
> communication between your Go application and your Python application.
> If you do something, I'm interested to know your results.
>
>> P.S. I hope dropbox/pyston also supports Python 3.4 and that asyncio
>> runs on it too.
>>
>> I'm sad that both Google (where Guido has been) and Dropbox (where Guido
>> is now) haven't moved to Python 3 yet.
>
> I agree with you; nevertheless, I understand Dropbox's priorities if they
> have a lot of Python 2 code.
> Moreover, you certainly know we have some performance regressions between
> CPython 2 and CPython 3.
> I've spotted 2 or 3 things; I'll publish them when I'm sure I understand
> the problem.
>
>> --
>> INADA Naoki <[email protected]>
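P.S.: For those curious about the "pure AsyncIO" test above, the server is conceptually something like the following minimal sketch. It is not the exact code I benchmarked, and it is deliberately not production-ready: it doesn't really parse HTTP, it just answers every request on port 8009 with the JSON payload.

import asyncio
import json


class JsonProtocol(asyncio.Protocol):
    """Hard-coded /json responder: no routing, no real HTTP parsing."""

    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        # Ignore the request entirely and always send the same JSON answer,
        # keeping the connection open so wrk can reuse it.
        body = json.dumps({"message": "Hello, World!"}).encode("utf-8")
        self.transport.write(
            b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: application/json\r\n"
            b"Content-Length: " + str(len(body)).encode("ascii") + b"\r\n"
            b"Connection: keep-alive\r\n"
            b"\r\n" + body)


loop = asyncio.get_event_loop()
server = loop.run_until_complete(
    loop.create_server(JsonProtocol, "0.0.0.0", 8009))
try:
    loop.run_forever()
finally:
    server.close()
    loop.close()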
