Yeah,

I've seen them.

I don't care about them.

Here's why:

https://github.com/omedhabib/WSGI_Benchmarks/blob/master/src/app.py

This is as unrealistic a test as it can get. It does 0 I/O other than 
accepting a connection, reading the full request, and then sending a response. 
There is no further parsing of the request, there is no database interaction; 
there is nothing realistic about this test. If all we had to do was send back a 
200 OK + OK body, I could write a VERY fast server that would blow away Bjoern 
too.
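For reference, the kind of app those benchmarks exercise looks roughly like 
this (a sketch in the same spirit, not the exact code from that repo):

```python
# A do-nothing WSGI app of the sort these benchmarks measure:
# no request parsing, no database, no I/O beyond the response itself.
def application(environ, start_response):
    body = b"OK"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Any WSGI server will push absurd numbers through something like this, and that 
tells you almost nothing about how it behaves with your real code.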

What you need to do:

1. Use your code base and determine what your KPIs are
2. Test with various WSGI servers
3. Deploy the one that you like best and can most easily manage/install
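As a rough sketch of step 2, assuming wrk and the servers are installed (the 
port, endpoint, and module names below are placeholders for your own):

```shell
# Start each candidate server against your real application
# (myproject.wsgi:application is a placeholder for your entry point).
gunicorn -w 4 --bind 127.0.0.1:8000 myproject.wsgi:application &

# Hit a real endpoint -- one that parses the request and talks to
# the database -- not a bare "hello world" route.
wrk -t4 -c64 -d30s http://127.0.0.1:8000/some/real/endpoint

# Then stop it and repeat with the others, e.g.:
#   waitress-serve --listen=127.0.0.1:8000 myproject.wsgi:application
#   uwsgi --http 127.0.0.1:8000 --module myproject.wsgi --processes 4
```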

Why is 3 important? Because the reality is that once you start slinging real 
world code at any of the WSGI servers mentioned, they are all going to perform 
pretty similarly. Some will maybe be a tiny bit faster, or make it easier to 
manage memory growth (uWSGI), or have proxy handling already built in 
(Gunicorn/Waitress), or have some other feature you want.

Ultimately Bjoern may be fast at accepting/parsing requests and thus at 
sending them down the WSGI stack and getting a response out, but with only a 
single thread it means that if you have any I/O whatsoever (such as connecting 
to PostgreSQL and waiting for a response after querying the database) it won't 
be able to answer any other requests in the meantime.
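You can see the effect with a toy model: one worker that blocks on simulated 
I/O versus several (the 50 ms sleep stands in for a database round trip):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Each "request" blocks for 50 ms of simulated I/O (a database query,
# an upstream API call, etc.); the handler does no real work.
def handle_request(_):
    time.sleep(0.05)

def serve(n_requests, workers):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(handle_request, range(n_requests)))
    return time.perf_counter() - start

single = serve(8, workers=1)    # ~0.4 s: requests wait in line
threaded = serve(8, workers=8)  # ~0.05 s: the waiting overlaps
```

With one worker the total time is the sum of every wait; with enough workers 
it is roughly the time of a single wait.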

There are upsides and downsides to all of them. Pre-fork is great if you want 
to save some memory and have the ability to kill processes in the future, 
pre-fork + threaded allows you to handle many requests with a single forked 
child, and just threaded means you have a single process and let Python deal 
with spreading the load across threads (which in most web applications is 
fine, because they are going to be spending an inordinate amount of clock time 
just waiting on I/O).
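For concreteness, here is roughly how those models map onto server flags 
(assuming current Gunicorn/uWSGI/Waitress CLIs; the module names are 
placeholders):

```shell
# Pre-fork: several processes, each handling one request at a time
gunicorn -w 4 myproject.wsgi:application

# Pre-fork + threaded: each forked child handles several requests
gunicorn -w 4 --threads 4 myproject.wsgi:application
uwsgi --http :8000 --module myproject.wsgi --processes 4 --threads 4

# Just threaded: a single process, many threads
waitress-serve --threads=8 myproject.wsgi:application
```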

So in the real world, pick what you want and ignore those benchmarks, at least 
until you get to the size of Instagram and need to eke out every last bit of 
performance from your servers, because even 2 msec per request adds up to 
hours of CPU time saved. Your application is the bottleneck, not the HTTP -> 
WSGI server.
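A quick way to convince yourself of that: time your WSGI callable directly, 
with no server in front at all (the trivial app below is a stand-in for your 
real one):

```python
import time

# Stand-in for your real application entry point; substitute your own.
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"OK"]

def seconds_per_request(app, n=10_000):
    # Minimal fake environ; a real server fills in much more.
    environ = {"REQUEST_METHOD": "GET", "PATH_INFO": "/"}
    def start_response(status, headers):
        pass
    start = time.perf_counter()
    for _ in range(n):
        for _chunk in app(environ, start_response):
            pass
    return (time.perf_counter() - start) / n

per_req = seconds_per_request(application)
```

If your real app spends 20 msec per request in views and queries, the 
microsecond-level differences between WSGI servers disappear into the noise.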

Last but not least, you should absolutely put something like NGINX/Apache in 
front of whatever WSGI server you end up picking and reverse proxy to it, so 
that you get cheap TLS/HTTP2 and other features.
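A minimal sketch of the NGINX side, assuming your WSGI server listens on 
127.0.0.1:8000 (hostnames and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        # Hand the request off to the WSGI server behind us.
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```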

Bert

> On Sep 12, 2019, at 10:27, Alexander Mills <[email protected]> 
> wrote:
> 
> Thanks Bert, have you seen the results:
> 
> https://www.appdynamics.com/blog/engineering/a-performance-analysis-of-python-wsgi-servers-part-2/
> 
> bjoern seems to be much more performant than the alternatives.
> 
> -alex
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "pylons-discuss" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/pylons-discuss/a4bb546c-fb43-476b-9614-cc9b96f6945a%40googlegroups.com.
