Hey, I've been haunted by this for quite some time, seen it in different deployments, and I think it might make for some good ol' mailing list discussion.
When
- using keep-alive connections to a backend service (e.g. PHP, Rails, Python),
- this backend needs to be updatable (it is not okay to have workers lingering for hours or days), and
- requests are often not idempotent (they can't be repeated),

current deployments need to close the kept-alive connection from the backend side, which always opens up a race condition where nginx has just sent a request and the connection gets closed. This leaves nginx in limbo, not knowing whether the request has been executed and whether it can safely be repeated.

With keep-alive connections, the only reliable way to close them is from the client side (in this case: nginx). I would therefore expect either
- a feature to signal nginx to close all connections to the backend after new backend code has been deployed,
- an upstream keepAliveIdleTimeout config value that guarantees kept-alive connections are not left lingering indefinitely (see the rough sketch in the PS below). If nginx guarantees it closes idle connections after 5 seconds, we can be sure that 5s + max_request_time after a new backend is deployed, all old workers are gone, or
- (a variant on the previous) support for an HTTP header from the backend to indicate such a timeout value. Funnily enough, this header kind of already exists in the spec <https://tools.ietf.org/id/draft-thomson-hybi-http-timeout-01.html#keep-alive>, but in practice it is implemented by no one.

The 2nd and/or 3rd options seem most elegant to me. I wouldn't mind implementing this myself if someone versed in the architecture could give me some pointers.

Best regards,
- Emiel

BTW: a similar issue should exist between browsers and web servers. Since latency is a lot higher on those links, I can only assume it happens a lot there too.
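PS: To make option 2 a bit more concrete, here is a rough sketch of how such a directive could sit next to the existing upstream keep-alive configuration. The keepalive_idle_timeout directive is made up for illustration and does not exist today; keepalive, proxy_http_version and the empty Connection header are the parts that are real.

    upstream backend {
        server 127.0.0.1:9000;
        keepalive 16;                  # pool of idle keep-alive connections per worker
        keepalive_idle_timeout 5s;     # hypothetical: close idle upstream connections
                                       # after 5 seconds of inactivity
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;            # required for upstream keep-alive
            proxy_set_header Connection "";    # don't forward "Connection: close"
        }
    }

With something like that in place, 5s + max_request_time after deploying, no old backend worker can still be holding a connection from nginx. Option 3 would be the same mechanism, except the 5 seconds would come from a backend response header along the lines of "Keep-Alive: timeout=5" (as in the draft above) instead of from the config.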