https://bz.apache.org/bugzilla/show_bug.cgi?id=53693
--- Comment #8 from Merijn van den Kroonenberg <[email protected]> ---

Created attachment 35611
  --> https://bz.apache.org/bugzilla/attachment.cgi?id=35611&action=edit
Rewrite handle_request in fcgid_bridge.c to fix 1sec delay

I rewrote handle_request in fcgid_bridge.c to fix the 1-second delay issue.

In the original code, handle_request would wait one second before trying to
acquire a process and then spawn one. This makes low-traffic sites that use
AJAX calls (parallel requests) feel sluggish: if only one process is
available, a parallel AJAX request is delayed by a second. After that second
the one process is probably free again, so the request is handled and no new
process is spawned. The next request then behaves exactly the same, with the
same 1-second delay, because it takes more concurrent requests to actually
spawn a new process.

This rewrite throws the one-second delay out the window. It checks more often
whether a process is available and tries less often to spawn a new process.
The original code tried for 64 seconds before giving up and returning
HTTP_SERVICE_UNAVAILABLE. The new code takes 60.8 seconds to reach the same
point, but what happens during that time is very different:

Original: 64000ms (64 spawn attempts, 128 process apply attempts)
New:      60800ms ( 8 spawn attempts, 148 process apply attempts)

Where the old code was linear (it simply checked every second), the new code
is not. The table below shows the spawn attempts and the process apply
attempts. There are 8 spawn attempts, and for each spawn attempt a number of
process apply attempts is made. The time between these attempts also differs:
short waits at the beginning (and end), long waits in the middle.

0)  2 x  50ms =   100ms
1)  8 x 200ms =  1600ms
2) 14 x 350ms =  4900ms
3) 20 x 500ms = 10000ms
4) 26 x 650ms = 16900ms
5) 26 x 500ms = 13000ms
6) 26 x 350ms =  9100ms
7) 26 x 200ms =  5200ms

Shortening the waits at the end prevents long-waiting requests from starving
and should allow fewer HTTP_SERVICE_UNAVAILABLE responses during a short
peak/overload on the server.

We have been using this patch in production for two months now, after a
three-month test period, both on servers with single high-load sites and on
servers with small low-load sites.
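For illustration, here is a minimal, self-contained sketch of the schedule
above. This is not the actual patch (see attachment 35611 for that);
try_apply_process(), try_spawn_process() and sleep_ms() are hypothetical
stand-ins for mod_fcgid's process-apply and spawn logic, and the exact
ordering of spawn vs. apply within a round is an assumption:

    /* Sketch only: hypothetical helper names, not the real mod_fcgid code. */
    #include <stdio.h>

    #define SPAWN_ROUNDS 8

    /* Per round: number of apply attempts and the wait between them (ms).
     * Totals match the table above: 148 apply attempts, 60800ms overall. */
    static const int apply_attempts[SPAWN_ROUNDS] = {  2,   8,  14,  20,  26,  26,  26,  26 };
    static const int wait_ms[SPAWN_ROUNDS]        = { 50, 200, 350, 500, 650, 500, 350, 200 };

    /* Stubs: the real bridge would try to acquire a free FastCGI process
     * node, or ask the process manager to spawn a new one. */
    static int  try_apply_process(void) { return 0; /* 0 = no process free */ }
    static void try_spawn_process(void) { /* post a spawn command */ }
    static void sleep_ms(int ms)        { (void)ms; /* e.g. apr_sleep(ms * 1000) */ }

    int main(void)
    {
        long total = 0;
        for (int round = 0; round < SPAWN_ROUNDS; round++) {
            for (int i = 0; i < apply_attempts[round]; i++) {
                if (try_apply_process())
                    return 0;                /* got a process: handle the request */
                sleep_ms(wait_ms[round]);    /* short waits early/late, long in the middle */
                total += wait_ms[round];
            }
            try_spawn_process();             /* only 8 spawn attempts overall */
        }
        printf("gave up after %ldms -> HTTP_SERVICE_UNAVAILABLE\n", total);
        return 1;
    }

Run as-is (the apply stub always fails), this walks the full schedule and
prints 60800ms, matching the table; a successful apply at any point exits
the loop immediately instead of sleeping out the remaining schedule.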
