Yes, I was running the gevent benchmark on Python 2, so today I built a Python 3.4
branch of gevent (https://github.com/johbo/gevent) and reran the
benchmarks.
I've updated the gist: https://gist.github.com/yihuang/eb0a670c9fab188c6e3e

At first the throughput of gevent vs. asyncio was 29188.56/32071.84 rps. What's
interesting is that after I ran a batched-request benchmark and then reran the
normal benchmark, asyncio's throughput jumped to 57110.22 rps; I guess it
has something to do with memory management.
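One way to check whether that boost is really a warm-up effect (allocator/freelist
reuse) rather than a change in asyncio itself might be to time several passes of
the same workload inside one process and compare; `run_pass` here is just a
hypothetical stand-in for one benchmark run, not my actual client:

```python
import time

def run_pass(n=100000):
    # Hypothetical stand-in for one benchmark pass; the real test
    # would issue n redis requests against the running server.
    buf = []
    for _ in range(n):
        buf.append(b"+PONG\r\n")
    return len(buf)

rates = []
for i in range(3):
    start = time.perf_counter()
    count = run_pass()
    elapsed = time.perf_counter() - start
    rates.append(count / elapsed)
    print("pass %d: %.0f ops/s" % (i, rates[-1]))

# If only the first pass is slow and later passes agree with each other,
# the speed-up is warm-up, not a real steady-state throughput change.
```
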

The profile result:

stat% stats 20
Mon Jul 21 18:38:04 2014    stat

         2060148 function calls (2059471 primitive calls) in 16.929 seconds

   Ordered by: internal time
   List reduced from 801 to 20 due to restriction <20>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     2060   12.533    0.006   12.533    0.006 {method 'poll' of 'select.epoll' objects}
   100051    1.215    0.000    1.215    0.000 {method 'recv' of '_socket.socket' objects}
   100000    0.929    0.000    0.929    0.000 {method 'send' of '_socket.socket' objects}
   100000    0.342    0.000    1.901    0.000 redis_asyncio.py:28(data_received)
   100206    0.302    0.000    3.637    0.000 /home/yihuang/py3/lib/python3.4/asyncio/events.py:37(_run)
   100050    0.209    0.000    3.326    0.000 /home/yihuang/py3/lib/python3.4/asyncio/selector_events.py:464(_read_ready)
   100000    0.206    0.000    1.161    0.000 /home/yihuang/py3/lib/python3.4/asyncio/selector_events.py:484(write)
     2060    0.153    0.000   16.861    0.008 /home/yihuang/py3/lib/python3.4/asyncio/base_events.py:761(_run_once)
     2060    0.145    0.000   12.726    0.006 /home/yihuang/py3/lib/python3.4/selectors.py:412(select)
   100000    0.133    0.000    0.208    0.000 redis_asyncio.py:6(process)
   100101    0.130    0.000    0.198    0.000 /home/yihuang/py3/lib/python3.4/asyncio/base_events.py:746(_add_callback)
   200000    0.117    0.000    0.117    0.000 {method 'gets' of 'hiredis.Reader' objects}
     2059    0.112    0.000    0.310    0.000 /home/yihuang/py3/lib/python3.4/asyncio/selector_events.py:337(_process_events)
   301354    0.078    0.000    0.078    0.000 {built-in method isinstance}
   100000    0.074    0.000    0.074    0.000 {method 'feed' of 'hiredis.Reader' objects}
   100038    0.051    0.000    0.051    0.000 {method 'get' of 'dict' objects}
   100101    0.027    0.000    0.027    0.000 /home/yihuang/py3/lib/python3.4/selectors.py:263(_key_from_fd)
   100206    0.026    0.000    0.026    0.000 {method 'popleft' of 'collections.deque' objects}
   100000    0.024    0.000    0.024    0.000 {method 'lower' of 'bytes' objects}
   100563    0.020    0.000    0.020    0.000 {method 'append' of 'list' objects}
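For reference, the table above is what `sort_stats('tottime')` plus `print_stats(20)`
produce from a cProfile dump. A minimal self-contained sketch of collecting and
reading such a report (with a trivial stand-in workload, since reproducing the real
numbers needs the live server plus the benchmark client driving it):

```python
import cProfile
import io
import pstats

def work():
    # Trivial stand-in workload; the real profile wrapped the
    # server's event loop while the benchmark client was running.
    return sum(range(100000))

prof = cProfile.Profile()
prof.enable()
work()
prof.disable()

# Same reporting as the interactive pstats session above
# ("stats 20", ordered by internal time):
stream = io.StringIO()
stats = pstats.Stats(prof, stream=stream)
stats.sort_stats('tottime').print_stats(20)
report = stream.getvalue()
print(report)
```
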



On Mon, Jul 21, 2014 at 3:55 AM, Guido van Rossum <[email protected]> wrote:

> This is exciting! It's quite possible that cProfile will find some issues
> in asyncio that are easy to fix -- we haven't done much tuning. I'm not
> sure whether cProfile will correctly profile generators, but if there are
> any issues they will be obvious when you try (and then we'll figure out
> what to do about it).
>
> Have you run the benchmarks several times and found that the results match
> what you posted?
>
> Have you tried varying the number of threads?
>
> Finally, it looks like you are using Python 2 for the gevent test, while
> the asyncio requires Python 3. I worry that the differences between these
> Python versions might overwhelm any other useful data you might find.
>
> Other things to vary would be the selector used by the event loop (select,
> kqueue or epoll, the latter depending on your platform).
>
>
>
> On Fri, Jul 18, 2014 at 9:03 AM, yi huang <[email protected]> wrote:
>
>> https://gist.github.com/yihuang/eb0a670c9fab188c6e3e
>> I've done a benchmark that shows twisted-mode asyncio can handle more
>> requests than gevent (though the latency seems higher), which is exciting,
>> but coroutine-mode is slower (I'll post that code later).
>> I'll do some profiling later; I guess I can profile asyncio's coroutines
>> with the standard cProfile, right?
>>
>>
>> On Friday, July 18, 2014, Victor Stinner <[email protected]>
>> wrote:
>>
>>> Hi,
>>>
>>> Here is a very basic example to show how to use start_server().
>>> echo_server() coroutines will run in parallel.
>>> ---
>>> import asyncio
>>>
>>> @asyncio.coroutine
>>> def echo_server(reader, writer):
>>>     line = yield from reader.readline()
>>>     print("Reply %r" % line)
>>>     writer.write(line)
>>>     yield from writer.drain()
>>>     writer.close()
>>>
>>> loop = asyncio.get_event_loop()
>>> loop.run_until_complete(asyncio.start_server(echo_server, port=8000))
>>> loop.run_forever()
>>> loop.close()
>>> ---
>>>
>>> You may add something like that to the documentation.
>>>
>>> Victor
>>>
>>> 2014-07-18 10:47 GMT+02:00 yi huang <[email protected]>:
>>> > The example in the asyncio documentation only shows how to write an
>>> > echo server in twisted style, and I wonder how I can write a server
>>> > handler as a coroutine like in gevent, something like:
>>> >
>>> > def handle_client(transport):
>>> >     while True:
>>> >         buf = yield from transport.read(4096)
>>> >         # handle request
>>> >
>>> >         # read some result from the database without blocking other
>>> >         # coroutines.
>>> >         result = yield from block_read_from_database()
>>> >
>>> > loop.create_server(handle_client, '127.0.0.1', 3000)
>>> >
>>> >
>>> > Thanks.
>>> >
>>> > --
>>> > http://yi-programmer.com/
>>>
>>
>>
>> --
>> http://yi-programmer.com/
>>
>
>
>
> --
> --Guido van Rossum (python.org/~guido)
>



-- 
http://yi-programmer.com/
