Justin Erenkrantz wrote:
On Fri, Apr 15, 2005 at 03:46:37PM -0400, Greg Ames wrote:

It is sounding better all the time as far as performance goes. If I understand you correctly, this one eliminates the extra trips down the input and output filter chains. But unfortunately we still have the extra read() compared to 1.3, and what about data stashed in the input filter chain?


Well, I think the extra read() is required if we want to see whether there is a
pending request in the pipeline (heh). I don't see a way to know that
without that read().

[...]

I'm guessing 1.3 just didn't even bother doing a read() on the socket...
which sort of begs the question: why are we doing that in 2.x, then?

The people who put the code into 1.3 realized that the whole thing is an optimization, so you lose if you spend too many cycles on it. If the network data for pipelined request n+1 isn't already present during ap_read_request for request n, it is highly unlikely to arrive by the time we start writing the response for request n. If a new request does arrive during that window, it's no big deal: we might write one small packet to the network, and nothing breaks.
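
In 2.x terms, the shape of the 1.3 behavior is roughly the sketch below. The two helpers are made up (1.3 really answers the question from its own buffered I/O, BUFF, and never goes to the network for it); only the shape of the keep-alive loop matters here.

#include "httpd.h"
#include "http_protocol.h"   /* ap_read_request */
#include "http_request.h"    /* ap_process_request */

/* made-up stand-ins for what 1.3 gets from its BUFF state */
static int bytes_already_buffered(conn_rec *c)
{
    (void)c;
    return 0;   /* stub: answered from existing buffer state, no read() */
}

static void flush_connection(conn_rec *c)
{
    (void)c;    /* stub: push any buffered response bytes to the network */
}

/* sketch of the 1.3-style keep-alive loop */
static void keepalive_loop_sketch(conn_rec *c)
{
    request_rec *r;

    while ((r = ap_read_request(c)) != NULL) {
        /* look only at bytes we already happen to have for request n+1 */
        int next_request_pending = bytes_already_buffered(c);

        ap_process_request(r);

        if (!next_request_pending) {
            /* connection looks idle: send this response out now; if a
             * request sneaks in during the window, we merely send one
             * small extra packet and nothing breaks */
            flush_connection(c);
        }
        /* else: hold the response so it shares packets with the next one */
    }
}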


Why are we doing the read() in 2.x? I believe it was a lack of deep understanding of how 1.3 works and a rush to get something working with input filters. But now it's time to move beyond the proof of concept.

But moving it to a request output filter triggered by EOS presence does
eliminate one round trip through the filter chain, which can save CPU cycles.

So, I guess what we should do is let Rici finish up the patch to implement
EATCRLF entirely via speculative reads - ensure that works, and then we can
move that code around as necessary.
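
As I read it, that approach would look roughly like the sketch below: a protocol-level output filter that makes the flush decision when it sees EOS, using a non-blocking speculative read to ask whether another pipelined request is already buffered. ap_get_brigade(), AP_MODE_SPECULATIVE, and the bucket macros are the real 2.x interfaces; the filter itself is only an illustration, not Rici's patch. The EATCRLF part would do the same kind of peek and then consume only the leading CR/LF bytes.

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

/* sketch: non-blocking speculative read to see whether another pipelined
 * request is already sitting somewhere in the input filter chain */
static int pipelined_data_pending(conn_rec *c)
{
    apr_bucket_brigade *bb = apr_brigade_create(c->pool, c->bucket_alloc);
    apr_status_t rv;
    int pending;

    /* peek: nothing is consumed, and we never wait on the network */
    rv = ap_get_brigade(c->input_filters, bb, AP_MODE_SPECULATIVE,
                        APR_NONBLOCK_READ, 1);
    pending = (rv == APR_SUCCESS && !APR_BRIGADE_EMPTY(bb));
    apr_brigade_destroy(bb);
    return pending;
}

/* sketch: output filter that decides about flushing where it sees EOS,
 * instead of in a separate pass through the chains afterwards */
static apr_status_t pipeline_flush_filter(ap_filter_t *f,
                                          apr_bucket_brigade *bb)
{
    apr_bucket *b;

    for (b = APR_BRIGADE_FIRST(bb);
         b != APR_BRIGADE_SENTINEL(bb);
         b = APR_BUCKET_NEXT(b)) {
        if (APR_BUCKET_IS_EOS(b)) {
            if (!pipelined_data_pending(f->c)) {
                /* connection looks idle: push this response out now */
                APR_BUCKET_INSERT_BEFORE(b,
                    apr_bucket_flush_create(f->c->bucket_alloc));
            }
            break;
        }
    }
    return ap_pass_brigade(f->next, bb);
}

/* registered from a module's register_hooks(), e.g.
 *   ap_register_output_filter("PIPELINE_FLUSH", pipeline_flush_filter,
 *                             NULL, AP_FTYPE_PROTOCOL);
 */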

Let's eliminate the speculative read too. That's a round trip down the input filter chain we don't need. All we need to know is whether there is data stashed in the input filters. The input filters can indicate that somehow every time they are called and save a lot of cycles.
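
Something along these lines, for example. The context struct and field names below are made up, not existing 2.x fields; the point is that the flag is maintained as a side effect of calls the filters already get, so the end-of-response code only tests a flag instead of taking another trip down the input chain.

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

/* hypothetical per-connection input filter context */
typedef struct {
    apr_bucket_brigade *stash;    /* bytes read ahead but not yet consumed */
    int data_pending;             /* non-zero while the stash holds anything */
} pipeline_in_ctx;

/* an input filter would call this every time it returns to its caller */
static void note_pending(pipeline_in_ctx *ctx)
{
    ctx->data_pending = !APR_BRIGADE_EMPTY(ctx->stash);
}

/* the flush decision at end of response becomes a simple flag test:
 * no read(), no speculative pass down the input filter chain */
static apr_status_t maybe_flush(conn_rec *c, const pipeline_in_ctx *ctx,
                                apr_bucket_brigade *bb)
{
    if (!ctx->data_pending) {
        APR_BRIGADE_INSERT_TAIL(bb,
            apr_bucket_flush_create(c->bucket_alloc));
    }
    return ap_pass_brigade(c->output_filters, bb);
}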


Greg


