Ruediger Pluem wrote:
>> As a result, the connection pool has made the server slower, not faster, and very much needs to be fixed.
>
> I agree in theory. But I don't think so in practice.
Unfortunately, I know so in practice. In this example we are seeing single connections being held open for 30 seconds or more. :(
> 1. 2.0.x behaviour: if you used keepalive connections to the backend, the connection to the backend was kept alive, and since it was bound to the frontend connection in 2.0.x it couldn't be used by other connections. Depending on the backend server, it wasted either the same number of resources as without the optimization (a backend like httpd worker or httpd prefork) or a small amount of resources (a backend like httpd event with HTTP, or a recent Tomcat web connector). So you didn't benefit much from this optimization in 2.0.x as long as you did not turn off keepalives to the backend.
Those who did need the optimisation would have turned off keepalives to the backend.
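As a sketch of how that is typically done (the `/app` path and `backend.example.com` host are illustrative assumptions; `proxy-nokeepalive` is the environment variable mod_proxy honours for this):

```apache
# Force mod_proxy to close the backend connection after each request,
# rather than holding open a keepalive that is bound to one frontend client.
SetEnv proxy-nokeepalive 1
ProxyPass        /app http://backend.example.com/app
ProxyPassReverse /app http://backend.example.com/app
```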
> 2. The optimization only helps for the last chunk being read from the backend, which has the size of ProxyIOBufferSize at most. If ProxyIOBufferSize isn't set explicitly, this amounts to just 8k. I guess if you have clients or connections that take a long time to consume just 8k, you are in trouble anyway.
Which is why you would increase the size from 8k to something big enough to hold your complete pages. The CNN home page, for example, is 92k.
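That tuning might look like this (the 128k value and paths are illustrative assumptions; ProxyIOBufferSize is the real mod_proxy directive, with a floor of 8192 bytes):

```apache
# Raise the proxy I/O buffer from the 8k default so a whole typical
# page fits in the final read from the backend (128k chosen arbitrarily).
ProxyIOBufferSize 131072
ProxyPass        /app http://backend.example.com/app
ProxyPassReverse /app http://backend.example.com/app
```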
As recently as three years ago, I had one client running an entire company on a 64kbps legacy telco connection that cost over USD 1000 per month. Clients like these tie up your backend for many seconds, sometimes minutes, and protecting yourself from this is one of the key reasons you would use a reverse proxy.
> Plus the default socket and TCP buffers on most OSes should already be larger than this. So in order to profit from the optimization, the time the client needs to consume ProxyIOBufferSize worth of data would have to be remarkable.
It makes no difference how large the TCP buffers are; the backend will only be released for reuse when the frontend has completely flushed and acknowledged the response, so all your buffers don't help at all.
As soon as the backend has provided the very last bit of the response, the backend should be released immediately and placed back in the pool. The backend might only take tens or hundreds of milliseconds to complete its work, but it is then tied up, frozen, for many orders of magnitude longer than that, waiting for the client to say that it is done.
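One way to approximate that decoupling without touching the pool code (my suggestion, not part of the fix under discussion, and it requires mod_buffer from httpd trunk/2.3+; paths and the 256k size are assumptions) is to soak the response into memory so the slow client drains from the proxy rather than the backend:

```apache
# Buffer the proxied response in RAM so the backend connection can be
# drained, and hence released, as fast as the backend can write.
LoadModule buffer_module modules/mod_buffer.so
<Location "/app">
    SetOutputFilter BUFFER
    BufferSize 262144
    ProxyPass http://backend.example.com/app
</Location>
```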
Regards,
Graham
