At 09:27 AM 6/17/2005, Brian Akins wrote:
>> Also, I'd be very concerned about additional load - clients who are
>> retrieving many gifs (with no pause at all) in a pipelined fashion
>> will end up hurting the overall resource usage if you force them
>> back to HTTP/1.0 behavior.
>
> Yes, but if all threads are "waiting" for x seconds for keepalives
> (even if it is 3-5 seconds), the server cannot service any new
> clients. I'm willing to take an overall resource hit (and
> "inconvenience" some clients) to maintain the overall availability
> of the server.
>
> Does that make any sense? It does to me, but I may not be explaining
> our problem well.
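For context, the keepalive wait described above is governed by httpd's stock keepalive directives. A minimal illustration of the kind of tuning at issue (the values here are hypothetical, not a recommendation from this thread):

```
KeepAlive On
# How long an idle worker holds the connection open waiting for the
# next request; dropping this well below the stock 15-second default
# returns workers to the pool much sooner.
KeepAliveTimeout 2
MaxKeepAliveRequests 100
```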
Yes, it makes sense. But I'd encourage you to consider dropping that keepalive time and seeing whether the problem isn't significantly mitigated.

We have a scheme today to create 'parallel' scoreboards, but perhaps the core should offer this as a public API to module authors, to keep it very simple? I believe a keepalive-blocked read should be determinable from the scoreboard.

As far as 'counting' states goes, that would be somewhat interesting. Right now it takes cycles to walk the scoreboard to determine the number of workers in a given state (and the result is somewhat fuzzy, since values are flipping as you walk along the list of workers). Adding an indexed list of 'counts' would be very lightweight: one atomic increment and one decrement per state change. That would probably be more efficient than walking the entire list.

In any case, I would simply maintain counts for all registered request states in the scoreboard, rather than adding a one-off for every state someone becomes interested in.

Bill
