Since I saw a few replies that I don’t entirely agree with, let me give you a 
bunch of different opinions:

- I also do 1 proc/Docker image; I’d recommend having a look at 
<https://pythonspeed.com/docker/> for various things to keep in mind, if you 
haven’t seen it already.

- As others pointed out, I don’t think your WSGI container is gonna be a huge 
bottleneck, and there are other details to keep in mind that weigh heavier 
overall.

- bjoern is a pile of C with very few users 
<https://pypistats.org/packages/bjoern> and maintained mostly by one person. 
Without passing any judgement on the maintainer and bjoern _specifically_, 
there’s no way I’d expose something like that to the internet – not even with a 
proxy in front of it.

- I don’t know whether it’s me, my Python DB driver (sqlanydb 😞), or the 
underlying libs: there’s stuff leaking all the time, so I wouldn’t use a WSGI 
container that doesn’t do worker recycling after a configurable number of 
requests served. Otherwise you get, best case, uncontrolled recycling via 
crashes and, worst case, deadlocks.

- Defense in depth: we have a policy that PII and secrets never hit our 
internal network unencrypted, because we don’t want one compromised application 
to lead to user credentials, PII, or credit card data leaking via a network 
sniffer.

  There might even be legislation for that in some countries, and whenever I 
talk to AWS or Google engineers they are very adamant that it’s important 
despite all the VPCs and whatnot – use your own judgement. In my case this 
means that unless I want a proxy sidecar (I usually don’t), waitress and 
bjoern are right out. In the case of waitress it’s kind of a bummer, but oh 
well.

- The argument that you need a sidecar for static files is mostly obsolete 
since we got sendfile <http://man7.org/linux/man-pages/man2/sendfile.2.html> 
and can usually be disregarded given the complexity it entails. I like using 
whitenoise for it because it allows for nice 
re-mappings etc: <http://whitenoise.evans.io/> (see also 
<http://whitenoise.evans.io/en/stable/#isn-t-serving-static-files-from-python-horribly-inefficient>)
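
  To illustrate the sendfile point, here’s a minimal stdlib sketch (Linux-only; 
the byte string is made up) that copies bytes between two files entirely 
in-kernel – the same mechanism servers use to push static files to a socket 
without a userspace read/write loop:

```
import os
import tempfile

# Source "asset" we want to serve.
src = tempfile.TemporaryFile()
src.write(b"static asset bytes")
src.flush()

dst = tempfile.TemporaryFile()

# sendfile(2): the kernel copies count bytes from src to dst directly;
# the data never passes through userspace buffers.
os.sendfile(dst.fileno(), src.fileno(), 0, 18)

dst.seek(0)
data = dst.read()
```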

- HAProxy kicks nginx’s and Apache’s behinds in almost every regard. This is my 
hill. I like my hill. I will retire on this hill.
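
  For what it’s worth, a hypothetical minimal haproxy.cfg for this kind of 
setup – TLS termination in front of a single gunicorn backend. All names, 
addresses, and the cert path are made up for illustration:

```
global
    maxconn 4096

defaults
    mode http
    timeout connect 5s
    timeout client  60s
    timeout server  60s

frontend fe_web
    bind :443 ssl crt /etc/ssl/private/site.pem
    default_backend be_app

backend be_app
    server app1 10.0.0.10:8000 check
```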

All that said, my docker-entrypoint.sh usually looks something like this:

```
#!/bin/bash

set -euo pipefail
IFS=$'\n\t'

exec 2>&1 \
    /app/bin/gunicorn \
        --bind 0.0.0.0:8000 \
        --workers 1 \
        --threads 8 \
        --max-requests 4096 \
        --max-requests-jitter 128 \
        --timeout 60 \
        --forwarded-allow-ips="EDGE-PROXY-IP" \
        --ssl-version 5 \
        --ciphers=ecdh+aesgcm \
        --keyfile "/etc/ssl/private/*.node.consul.key" \
        --certfile "/etc/ssl/certs/*.node.consul.crt" \
        --worker-tmp-dir /dev/shm \
        "app.wsgi"
```

So yeah, I’m running gunicorn and it’s just fine. It’s pure Python, therefore 
easy to install, and widely used, therefore quite stable.

If you want that extra percent of performance you can go for uWSGI, but be 
aware that it can be quite rough around the edges and there isn’t much 
development going on anymore (neither is there for gunicorn, to be fair, but 
it seems mostly feature-complete and I’m not aware of any gross bugs). Use 
<https://www.techatbloomberg.com/blog/configuring-uwsgi-production-deployment/> 
to guide your configuration. I’ve also heard rumors of an upcoming fork – 
we’ll see.
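
If you do go that route, here’s a sketch of a uwsgi.ini in the spirit of that 
article – the module, socket, and numbers are placeholders, not a 
recommendation:

```
[uwsgi]
strict = true                 ; fail on unknown options instead of ignoring them
master = true
die-on-term = true            ; make SIGTERM terminate instead of reload
need-app = true               ; refuse to start if the app fails to import
single-interpreter = true
vacuum = true                 ; clean up sockets/pidfiles on exit
max-requests = 4096           ; worker recycling, as discussed above
harakiri = 60                 ; kill workers stuck longer than 60s
http-socket = 0.0.0.0:8000
module = app.wsgi
processes = 1
threads = 8
```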

Finally, a lot of what I’m applying nowadays comes straight from Warehouse, 
the new Pyramid-based PyPI: <https://github.com/pypa/warehouse>. They are 
certainly running at scale and there’s a lot to learn and myths to debunk.

—h

> On 11. Sep 2019, at 20:49, Alexander Mills <[email protected]> 
> wrote:
> 
> We are trying to adhere to philosophy of one process per Docker container. So 
> instead of Apache + 4 Python procs in a container, we just want 1 python proc 
> per container.
> 
> I am new to WSGI apps. My question is - what is the most performant native 
> python server which can run a WSGI app like Pyramid?
> 
> Remember, I am trying to just launch one python process. I am not trying to 
> put a server in front of the python WSGI process.
> Any recommendations?  A digitalocean article says this should work:
> 
> 
> from wsgiref.simple_server import make_server
> from pyramid.config import Configurator
> from pyramid.response import Response
> 
> def hello_world(request):
>     return Response('<h1>Hello world!</h1>')
> 
> if __name__ == '__main__':
>     config = Configurator()
>     config.add_view(hello_world)
>     app = config.make_wsgi_app()
>     server = make_server('0.0.0.0', 8080, app)
>     server.serve_forever()
> 
> 
> 
> 
> I assume this all runs as one process. Is this performant enough compared to 
> Apache or should I use something else?
> 
> -alex
> 

-- 
You received this message because you are subscribed to the Google Groups 
"pylons-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/pylons-discuss/41D2F807-E4FE-47B2-B9B3-B5926763ABBA%40ox.cx.
