Hi,
On 04/01/2016 16:30, Cory Benfield wrote:
> Your core question seems to be: “why do we need a spec that specifies
> concurrency?” I think this is reasonable. One way out might be to
> take the route of ASGI[0], which essentially uses a message broker to
> act as the interface between server and application. This lets
> applications handle their own concurrency without needing to
> co-ordinate with the server. From there the spec doesn’t need to
> handle concurrency, as the application and server are in separate
> processes.
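The broker-style interface Cory describes can be sketched in-process with plain
queues standing in for the broker channels. The message shapes below are only
illustrative, not the actual ASGI API, and everything runs in one process here
purely for the sake of the example:

```python
import queue

# Two queues stand in for the broker channels (one per direction).
# In a real deployment the broker (e.g. redis) sits between the
# server process and the application process.
server_to_app = queue.Queue()
app_to_server = queue.Queue()

def server_receive(text):
    # Server side: forward an incoming websocket frame to the application.
    server_to_app.put({"type": "websocket.receive", "text": text})

def app_worker():
    # Application side: consume one message and reply on its own channel.
    # How the application schedules this worker is its own business, which
    # is exactly why the spec no longer has to talk about concurrency.
    message = server_to_app.get()
    app_to_server.put({"type": "websocket.send", "text": message["text"].upper()})

server_receive("hello")
app_worker()
reply = app_to_server.get()
```

Because server and application only ever touch the channels, neither side needs
to know how the other handles its concurrency.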
I think the *only* way to scale websockets is to use an RPC system to
dispatch commands and to handle fan-out somewhere centralized. This works
really well with redis as a broker, for instance. All larger websocket
deployments I have worked with so far involved a simple redis-to-websocket
server that rarely restarts and dispatches commands (and receives
messages) via redis.
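A minimal in-memory sketch of that fan-out pattern, with a dict of channel
subscribers standing in for redis pub/sub (the class and channel names here
are made up for illustration):

```python
class Broker:
    # In-memory stand-in for redis pub/sub: maps a channel name to the
    # callbacks subscribed to it.
    def __init__(self):
        self.channels = {}

    def subscribe(self, channel, callback):
        self.channels.setdefault(channel, []).append(callback)

    def publish(self, channel, message):
        # Fan the message out to every subscriber of the channel.
        for callback in self.channels.get(channel, []):
            callback(message)

# Each "connection" here is just a list collecting frames; in the real
# setup these would be live websocket connections held by the long-running
# redis-to-websocket server.
conn_a, conn_b = [], []
broker = Broker()
broker.subscribe("room:1", conn_a.append)
broker.subscribe("room:1", conn_b.append)

# The application publishes once; the broker delivers to every connection.
broker.publish("room:1", "hello everyone")
```

The application process only ever publishes to the broker, so it can be
restarted freely without dropping a single websocket connection.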
That's a simple and straightforward approach that also keeps deployments
working well, because you never restart the actual connections unless you
need to pull a cable or there is a bug in the websocket server.
That's also why I'm personally not really interested in this topic: for
large-scale deployments this is not really an issue, and for toy
applications I do not use websockets ;)
Regards,
Armin
_______________________________________________
Web-SIG mailing list
Web-SIG@python.org
Web SIG: http://www.python.org/sigs/web-sig