On Fri, Mar 27, 2015 at 1:37 AM, Nikolay Kim <[email protected]> wrote:

> Could you post your code with aiohttp? A 10x difference seems too big.
>

I've published the source code here:
https://github.com/Eyepea/FrameworkBenchmarks/blob/API-Hour_fine_tuning/frameworks/Python/API-Hour/

You can see the aiohttp.web initialization here:
https://github.com/Eyepea/FrameworkBenchmarks/blob/API-Hour_fine_tuning/frameworks/Python/API-Hour/hello/hello/__init__.py#L24

List of endpoints:
https://github.com/Eyepea/FrameworkBenchmarks/blob/API-Hour_fine_tuning/frameworks/Python/API-Hour/hello/hello/endpoints/world.py
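
For context, here is a rough sketch of what an aiohttp.web JSON endpoint and
its registration look like in general (illustrative only: the names and the
exact setup in the repository above may differ, and the real code goes
through the API-Hour container):

import asyncio
import json

from aiohttp import web


@asyncio.coroutine
def json_endpoint(request):
    # Same kind of payload as the TechEmpower "json" test
    payload = json.dumps({'message': 'Hello, World!'}).encode('utf-8')
    return web.Response(body=payload, content_type='application/json')


def make_app():
    app = web.Application()
    app.router.add_route('GET', '/json', json_endpoint)
    return app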

Pure AsyncIO implementation:
https://github.com/Eyepea/FrameworkBenchmarks/blob/API-Hour_fine_tuning/frameworks/Python/API-Hour/hello/hello/servers/yocto_http.py

As you can see, the pure AsyncIO implementation is reduced to its simplest
expression: there is no router, no header/GET/POST parsing, no WebSockets...
I'm not shocked by the difference; you see almost the same gap between the
WSGI test and most of the other Web frameworks.
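
To make that concrete, here is a rough sketch of that kind of bare asyncio
protocol (illustrative only: the names are mine and the real yocto_http.py
differs in its details). Every request gets the same pre-built HTTP response
carrying the JSON payload, with no router and no parsing at all:

import asyncio
import json


class YoctoHttpProtocol(asyncio.Protocol):
    """Bare-bones HTTP server: no router, no header/GET/POST parsing."""

    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        # The incoming request is not parsed: every request gets the same answer.
        payload = json.dumps({'message': 'Hello, World!'}).encode('utf-8')
        response = (b'HTTP/1.1 200 OK\r\n'
                    b'Content-Type: application/json\r\n'
                    b'Content-Length: ' + str(len(payload)).encode() + b'\r\n'
                    b'Connection: keep-alive\r\n'
                    b'\r\n' + payload)
        self.transport.write(response)


def run(host='127.0.0.1', port=8009):
    loop = asyncio.get_event_loop()
    server = loop.run_until_complete(
        loop.create_server(YoctoHttpProtocol, host, port))
    try:
        loop.run_forever()
    finally:
        server.close()
        loop.close()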

If you want to run the implementations, you can create a pyvenv with this
requirements.txt:
https://github.com/Eyepea/FrameworkBenchmarks/blob/API-Hour_fine_tuning/frameworks/Python/API-Hour/requirements.txt

The command line to launch, from the same directory as the requirements.txt
file, with the pyvenv activated:
api_hour -ac --chdir=hello/ --config_dir=hello/etc/hello/ hello:Container

aiohttp.web listens at http://127.0.0.1:8008/json and the pure AsyncIO
implementation at http://127.0.0.1:8009/json

If you have any issue, you can contact me directly.


>
> On Thu, Mar 26, 2015 at 4:36 PM -0700, "Ludovic Gasc" <[email protected]>
> wrote:
>
>  For the past two weeks, I've been trying to use AsyncIO on top of PyPy3.3.
>>
>> From my experience, two main elements are missing from PyPy3.3:
>> 1. pip doesn't work on PyPy3 => for pure Python libraries, you can
>> install the packages in a CPython pyvenv and point PYTHONPATH at them
>> 2. The monotonic clock and time.get_clock_info() aren't implemented => the
>> workaround I've found is to use a standard clock (I know it's important to
>> use a monotonic clock; this is only for tests) and to hardcode the answer
>> of time.get_clock_info(); a rough sketch of that patch follows below. At
>> least for me, it isn't easy to implement a monotonic clock in PyPy.
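>>
>> Here is a rough sketch of that workaround as a test-only monkey-patch (the
>> fallback clock and the hardcoded values below are illustrative, not
>> necessarily exactly what I used):
>>
>> import time
>> from collections import namedtuple
>>
>> if not hasattr(time, 'monotonic'):
>>     # Fall back to the standard, non-monotonic clock -- for tests only.
>>     time.monotonic = time.time
>>
>> if not hasattr(time, 'get_clock_info'):
>>     ClockInfo = namedtuple('ClockInfo',
>>                            'implementation monotonic adjustable resolution')
>>
>>     def get_clock_info(name):
>>         # Hardcoded answer, mimicking what CPython reports for time.time()
>>         return ClockInfo(implementation='time()', monotonic=False,
>>                          adjustable=True, resolution=1e-09)
>>
>>     time.get_clock_info = get_clock_info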
>>
>> Nevertheless, I've tested:
>> 1. aiotests: all tests pass
>> 2. Several random scripts from the AsyncIO docs: everything is OK
>> 3. aiohttp examples: no issues
>> 4. API-Hour + aiohttp.web: it runs just as it does on CPython 3.4
>>
>> It's a good surprise for me; I didn't think it would be possible to use
>> PyPy directly right now: I found only one bug, with "yield from", and it
>> was quickly fixed by the PyPy developers.
>>
>> As usual, I've done some benchmarks. You should note four points:
>> 1. PyPy 3.3 is not yet released: some improvements will certainly be
>> available by the time it is released
>> 2. My PyPy benchmark doesn't use ujson, because ujson works only with
>> CPython. The numbers should change if ujson is ported to PyPy (see the
>> import fallback sketch after this list).
>> 3. The more I run benchmarks against the PyPy daemon, the better the
>> performance gets, as the JIT warms up. I launched several 5-minute
>> benchmarks before launching these 1-minute ones.
>> 4. This use case is a micro-benchmark: there are no connections to a
>> backend like PostgreSQL or Redis, which are the more realistic use cases
>> where AsyncIO gets better results.
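>>
>> For point 2, the usual pattern is an import fallback like the one below (a
>> sketch; ujson.dumps() is compatible enough with the stdlib json.dumps() for
>> this kind of payload):
>>
>> try:
>>     import ujson as json  # C extension, CPython only
>> except ImportError:
>>     import json  # stdlib fallback that also works on PyPy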
>>
>> I've redone the same benchmark I told you about a few days ago, with a
>> simple JSON payload.
>>
>> *PyPy + API-Hour + aiohttp.web:*
>>
>> $ wrk -t8 -c256 -d1m http://192.168.2.100:8008/json
>> Running 1m test @ http://192.168.2.100:8008/json
>>   8 threads and 256 connections
>>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>>     Latency    *32.89ms*   18.58ms 260.08ms   78.10%
>>     Req/Sec     0.99k   117.11     1.42k    71.97%
>>   472966 requests in 1.00m, 87.96MB read
>> Requests/sec:   *7883.06*
>> Transfer/sec:      1.47MB
>>
>> *PyPy + API-Hour + AsyncIO:*
>>
>> $ wrk -t8 -c256 -d1m http://192.168.2.100:8009/json
>> Running 1m test @ http://192.168.2.100:8009/json
>>   8 threads and 256 connections
>>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>>     Latency     *3.32ms*   11.99ms 224.86ms   95.26%
>>     Req/Sec    12.63k     4.46k   43.67k    70.53%
>>   5744657 requests in 1.00m, 0.96GB read
>> Requests/sec:  *95760.07*
>> Transfer/sec:     16.35MB
>>
>>
>> As a reminder, here are the results I gave you for WSGI and API-Hour on CPython:
>>
>> *WSGI:*
>>
>> $ wrk -t8 -c256 -d1m http://192.168.2.100:8080/json
>> Running 1m test @ http://192.168.2.100:8080/json
>>   8 threads and 256 connections
>>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>>     Latency     *1.81ms*    2.24ms  32.56ms   99.04%
>>     Req/Sec    20.49k     3.09k   52.56k    81.39%
>>   9300719 requests in 1.00m, 1.59GB read
>> Requests/sec: *155019.04*
>> Transfer/sec:     27.05MB
>>
>> *API-Hour + aiohttp.web:*
>>
>> $ wrk -t8 -c256 -d1m http://192.168.2.100:8008/json
>> Running 1m test @ http://192.168.2.100:8008/json
>>   8 threads and 256 connections
>>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>>     Latency    *18.36ms*   11.36ms 117.66ms   67.44%
>>     Req/Sec     1.79k   238.97     2.65k    74.02%
>>   854843 requests in 1.00m, 158.16MB read
>> Requests/sec:  *14248.79*
>> Transfer/sec:      2.64MB
>>
>> *API-Hour + AsyncIO:*
>>
>> $ wrk -t8 -c256 -d1m http://192.168.2.100:8009/json
>> Running 1m test @ http://192.168.2.100:8009/json
>>   8 threads and 256 connections
>>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>>     Latency     *1.96ms*    3.55ms  60.51ms   99.05%
>>     Req/Sec    19.77k     3.06k   55.78k    85.49%
>>   8972814 requests in 1.00m, 1.49GB read
>> Requests/sec: *149565.74*
>> Transfer/sec:     25.39MB
>>
>> As you can see, *in this specific benchmark with these hit values*,
>> PyPy3.3 is slower than CPython with AsyncIO for now.
>> But I imagine the PyPy dev team's goal is more to be fully compliant
>> with CPython 3.3 than to have the best performance.
>>
>> Finally, if you are interested in helping the PyPy project but have no
>> time, you can donate money: http://pypy.org/py3donate.html
>> I've set up a recurring donation for the PyPy project.
>> --
>> Ludovic Gasc (GMLudo)
>> http://www.gmludo.eu/
>>
>
