Hi,
I personally probably do not want to participate in this discussion much,
but I want to leave some thoughts in case someone finds them useful.
I personally think that "concurrent programming" and raw access to a
socket are fundamentally not things that fit into a generically
deployable container specification, which is what WSGI largely is. WSGI
essentially only specifies what happens from request to response, and
even in that narrow area it already suffered from significant limitations
because the specification did not consider what servers would actually
do with it.
I do not want to go into too much detail, but WSGI the spec never really
concerned itself with the vast complexity that is HTTP in practice
(chunked requests, transfer encodings, stream termination signalling, etc.).
I strongly doubt that dragging concurrency into the spec will make it any
less problematic for real-world situations.
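To illustrate one of those gaps (this is only a rough sketch of a
hypothetical app, not what any particular server does): a plain WSGI
application has no portable way to handle a chunked request body, because
the spec keys everything off CONTENT_LENGTH:

    # Rough sketch only: PEP 3333 keys the request body off CONTENT_LENGTH,
    # so a chunked request (no Content-Length header) leaves the app guessing.
    def app(environ, start_response):
        length = environ.get('CONTENT_LENGTH')
        if length:
            body = environ['wsgi.input'].read(int(length))
        else:
            # Chunked transfer: the spec gives no portable way to know
            # whether the server de-chunked the stream, how far it is safe
            # to read, or how the end of the stream is signalled.
            body = b''
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [('received %d bytes' % len(body)).encode('ascii')]

Whether reading wsgi.input past (or without) CONTENT_LENGTH works at all
is entirely up to the server, which is exactly the kind of thing the spec
left open.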
Why do we need concurrency at the spec level? I honestly do not see the
point, because in practical terms we might just end up with a spec that
cannot really be deployed, simply because nobody would want to implement it.
Making a server that gracefully shuts down when things are purely
request/response is already tricky enough, but shutting down a server
with active stream connections is something on which implementations do
not even agree yet (and which also needs a lot of client support), so I
don't think it will fit into a specification.
I honestly do not think that you can have it both ways: a WSGI
specification and a raw socket. Maybe we have reached the point where WSGI
should just be deprecated and frameworks themselves will fill the gap.
We would only specify a data exchange layer so that frameworks can
interoperate in one way or another.
Regards,
Armin