On Tue, Oct 4, 2016 at 8:32 AM, Greg Ewing <greg.ew...@canterbury.ac.nz> wrote:
> Yann Kaiser wrote:
>>
>> The way I see it, the great thing about async/await as opposed to
>> threading is that it is explicit about when execution will "take a break"
>> from your function or resume into it.
>
>
> Another thing is that async/await tasks are very lightweight
> compared to OS threads, so you can afford to have a large
> number of them active at once.
>
> Rene's approach seems to be based on ordinary threads, so
> it would not have this property.

That keeps getting talked about, but one thing I've never seen is any
sort of benchmark showing (probably per operating system) how many
concurrent requests you need to have before threads become unworkable.
Maybe calculate it in milli-Wikipedias, on the basis that English
Wikipedia is a well-known site and has some solid stats published [1].
Of late, it's been seeing about 8e9 hits per month, or about
3000/second. So one millipedia would be three page requests per
second. A single-core CPU running a simple and naive Python web
application can probably handle several millipedias without any
concurrency at all. (I would define "handle" as "respond to requests
without visible queueing", which would mean about 50ms, for a guess -
could maybe go to 100 or even 250, but then people might start
noticing the slowness.) Once you're receiving more requests than you
can handle on a single thread, you need concurrency, and threading
will do you fine for a while. Then at some further mark, threading is
no longer sufficient, and you need something lighter-weight, such as
asyncio. But has anyone ever measured what those two points are?
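(As a back-of-envelope check of the arithmetic above — a minimal sketch; the
8e9 hits/month figure is the approximate number from [1], and a 30-day month
is assumed:)

```python
# Convert English Wikipedia's monthly hit count into requests per
# second, and derive one "millipedia" (a thousandth of that load).
hits_per_month = 8e9                 # approximate figure from [1]
seconds_per_month = 30 * 24 * 3600   # 2,592,000 seconds in a 30-day month

hits_per_second = hits_per_month / seconds_per_month
millipedia = hits_per_second / 1000

print(f"{hits_per_second:.0f} requests/second")        # ~3086
print(f"1 millipedia = {millipedia:.1f} requests/sec") # ~3.1
```

Which lines up with the "about 3000/second" and "three page requests per
second" figures quoted above.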
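(On the "has anyone ever measured" question — a crude micro-benchmark sketch
one could start from. `measure_thread_spawn` is a made-up helper, not
anything from the thread; it only times how long it takes to get n OS
threads started and rendezvoused, which is one small piece of the real
per-connection cost, not the whole story:)

```python
import threading
import time

def measure_thread_spawn(n):
    """Time spawning n threads and waiting until all are running.

    A rough proxy for per-connection thread startup overhead; a real
    benchmark would also need to measure memory per thread and
    scheduler behaviour under load.
    """
    barrier = threading.Barrier(n + 1)  # n workers + the main thread

    def worker():
        barrier.wait()  # signal "I'm alive" and wait for everyone

    start = time.perf_counter()
    threads = [threading.Thread(target=worker) for _ in range(n)]
    for t in threads:
        t.start()
    barrier.wait()  # returns once all n workers have started
    elapsed = time.perf_counter() - start

    for t in threads:
        t.join()
    return elapsed

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(f"{n:5d} threads: {measure_thread_spawn(n):.4f}s")
```

Running that per operating system, with increasing n, would at least put
numbers on the "threads become unworkable" point being asked about.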

ChrisA

[1] http://reportcard.wmflabs.org/
_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
