Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-23 Thread Andy Wang



On 10/22/2015 04:50 PM, Yann Ylavic wrote:

On Thu, Oct 22, 2015 at 3:42 PM, Andy Wang  wrote:


Tested with the patch and looks good.


Not that much actually, the patch fails to consume the CRLFs, and
hence can end up in an infinite loop.

So I'm attaching a new one here (committed in trunk with a larger
scope, this version is for 2.4.x and limited to your use case).
Could you give it a (new) try please (I have already done some testing
but it's probably worth passing your tests, before I can propose its
backport to 2.4.x)?
While the previous patch could not handle more than a single
(trailing) [CR]LF, this new one should (up to ten, which is the new
limit for tolerated blank lines in between requests).



The new one works as well.  My VMs that I use for my actual test cases 
are all in a logistics nightmare right now, so I can't do anything more 
thorough than the specific point test, but hopefully when I build the 
next httpd that contains this they'll be back up and I can put things 
through a more thorough cycle.


I'm pretty much only equipped to reproduce the specific problem right now.

That said, the question of why this only happens on Windows, and why I 
can't simulate the request using another client, has been bugging me, so 
I decided to go back to the pre-patch build, enable dumpio, and compare 
the two.


Guess what: I cannot get the problem to occur with dumpio enabled now.  
As soon as it's enabled, the requests respond immediately.


Disable dumpio, and it's back to the 5000ms delay.

I give up.  The plugin developer fixed their plugin, you made apache 
more accepting (thank you very much again), and I'm going to just 
pretend this whole issue never happened :)


Andy



Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-22 Thread Yann Ylavic
On Thu, Oct 22, 2015 at 3:42 PM, Andy Wang  wrote:
>
> Tested with the patch and looks good.

Not that much actually, the patch fails to consume the CRLFs, and
hence can end up in an infinite loop.

So I'm attaching a new one here (committed in trunk with a larger
scope, this version is for 2.4.x and limited to your use case).
Could you give it a (new) try please (I have already done some testing
but it's probably worth passing your tests, before I can propose its
backport to 2.4.x)?
While the previous patch could not handle more than a single
(trailing) [CR]LF, this new one should (up to ten, which is the new
limit for tolerated blank lines in between requests).

Thanks,
Yann.
Index: include/httpd.h
===
--- include/httpd.h	(revision 1710105)
+++ include/httpd.h	(working copy)
@@ -200,6 +200,10 @@ extern "C" {
 #ifndef DEFAULT_LIMIT_REQUEST_FIELDS
 #define DEFAULT_LIMIT_REQUEST_FIELDS 100
 #endif
+/** default/hard limit on number of leading/trailing empty lines */
+#ifndef DEFAULT_LIMIT_BLANK_LINES
+#define DEFAULT_LIMIT_BLANK_LINES 10
+#endif
 
 /**
  * The default default character set name to add if AddDefaultCharset is
Index: modules/http/http_request.c
===
--- modules/http/http_request.c	(revision 1710105)
+++ modules/http/http_request.c	(working copy)
@@ -230,22 +230,91 @@ AP_DECLARE(void) ap_die(int type, request_rec *r)
 
 static void check_pipeline(conn_rec *c, apr_bucket_brigade *bb)
 {
+    c->data_in_input_filters = 0;
     if (c->keepalive != AP_CONN_CLOSE && !c->aborted) {
         apr_status_t rv;
+        int num_blank_lines = DEFAULT_LIMIT_BLANK_LINES;
+        ap_input_mode_t mode = AP_MODE_SPECULATIVE;
+        apr_size_t len, cr = 0;
+        char buf[2];
 
-        AP_DEBUG_ASSERT(APR_BRIGADE_EMPTY(bb));
-        rv = ap_get_brigade(c->input_filters, bb, AP_MODE_SPECULATIVE,
-                            APR_NONBLOCK_READ, 1);
-        if (rv != APR_SUCCESS || APR_BRIGADE_EMPTY(bb)) {
-            /*
-             * Error or empty brigade: There is no data present in the input
-             * filter
+        do {
+            apr_brigade_cleanup(bb);
+            rv = ap_get_brigade(c->input_filters, bb, mode,
+                                APR_NONBLOCK_READ, cr + 1);
+            if (rv != APR_SUCCESS || APR_BRIGADE_EMPTY(bb)) {
+                /*
+                 * Error or empty brigade: There is no data present in the input
+                 * filter
+                 */
+                if (mode == AP_MODE_READBYTES) {
+                    /* Unexpected error, stop with this connection */
+                    ap_log_cerror(APLOG_MARK, APLOG_ERR, rv, c, APLOGNO(02967)
+                                  "Can't consume pipelined empty lines");
+                    c->keepalive = AP_CONN_CLOSE;
+                }
+                return;
+            }
+
+            /* Ignore trailing blank lines (which must not be interpreted as
+             * pipelined requests) up to the limit, otherwise we would block
+             * on the next read without flushing data, and hence possibly delay
+             * pending response(s) until the next/real request comes in or the
+             * keepalive timeout expires.
              */
-            c->data_in_input_filters = 0;
-        }
-        else {
-            c->data_in_input_filters = 1;
-        }
+            len = cr + 1;
+            rv = apr_brigade_flatten(bb, buf, &len);
+            if (rv != APR_SUCCESS || len != cr + 1) {
+                int level;
+                if (mode == AP_MODE_READBYTES) {
+                    /* Unexpected error, stop with this connection */
+                    c->keepalive = AP_CONN_CLOSE;
+                    level = APLOG_ERR;
+                }
+                else {
+                    /* Let outside (non-speculative/blocking) read determine
+                     * where this possible failure comes from (metadata,
+                     * morphed EOF socket => empty bucket? debug only here).
+                     */
+                    c->data_in_input_filters = 1;
+                    level = APLOG_DEBUG;
+                }
+                ap_log_cerror(APLOG_MARK, level, rv, c, APLOGNO(02968)
+                              "Can't check pipelined data");
+                return;
+            }
+
+            if (mode == AP_MODE_READBYTES) {
+                mode = AP_MODE_SPECULATIVE;
+                cr = 0;
+                continue;
+            }
+
+            if (cr) {
+                AP_DEBUG_ASSERT(len == 2 && buf[0] == APR_ASCII_CR);
+                if (buf[1] != APR_ASCII_LF) {
+                    return;
+                }
+                mode = AP_MODE_READBYTES;
+                num_blank_lines--;
+            }
+            else {
+                if (buf[0] == APR_ASCII_CR) {
+                    cr = 1;
+                }
+                else

Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-22 Thread Andy Wang



On 10/22/2015 10:31 AM, Andy Wang wrote:



On 10/22/2015 10:06 AM, Yann Ylavic wrote:

Does it make a difference with "AcceptFilter http none" configured?
It shouldn't, but since we are in the x-files...


I already had to do that.  We have this weird scenario where when our
installer installs httpd, httpd hangs.  Packet captures show the request
arriving but no ACK or any form of response.


Just on a whim I decided to try the opposite and remove the AcceptFilter 
http none.


Same problem still (without your patch of course - with your patch 
everything is still happy).


Andy


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-22 Thread Andy Wang



On 10/22/2015 10:52 AM, Andy Wang wrote:



On 10/22/2015 10:38 AM, Graham Leggett wrote:

On 22 Oct 2015, at 5:31 PM, Andy Wang  wrote:


I already had to do that.  We have this weird scenario where when our
installer installs httpd, httpd hangs.  Packet captures show the
request arriving but no ACK or any form of response.

If I kill the installer process, the problem goes away.  I have no
idea how or why that could possibly occur, but Acceptfilter http none
resolves that issue.


Sounds like a request that the http accept filter doesn’t believe is
complete?

Could be an RFC violation on the client side.



I'm not sure how that's possible.  The TCP stack doesn't even respond to
the initial SYN.

As soon as we kill the installer process, httpd starts to respond.



Oh wait, my bad.
I'm getting problems confused in my head.

To the best of my research, the request is complete.

The TCP handshake occurs, but there is no GET response.

Also, I used Process Explorer to look at the process threads, and it shows 
no activity on the threads; mod_dumpio shows nothing coming in at all.


Even if it was a bad request on the client side, I'd have expected 
something to show up on the httpd side.


As soon as we kill the installer's java.exe process (InstallAnywhere 
installer), httpd starts to respond.


Andy


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-22 Thread Andy Wang



On 10/22/2015 10:38 AM, Graham Leggett wrote:

On 22 Oct 2015, at 5:31 PM, Andy Wang  wrote:


I already had to do that.  We have this weird scenario where when our installer 
installs httpd, httpd hangs.  Packet captures show the request arriving but no 
ACK or any form of response.

If I kill the installer process, the problem goes away.  I have no idea how or 
why that could possibly occur, but Acceptfilter http none resolves that issue.


Sounds like a request that the http accept filter doesn’t believe is complete?

Could be an RFC violation on the client side.



I'm not sure how that's possible.  The TCP stack doesn't even respond to 
the initial SYN.


As soon as we kill the installer process, httpd starts to respond.

Andy


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-22 Thread Graham Leggett
On 22 Oct 2015, at 5:31 PM, Andy Wang  wrote:

> I already had to do that.  We have this weird scenario where when our 
> installer installs httpd, httpd hangs.  Packet captures show the request 
> arriving but no ACK or any form of response.
> 
> If I kill the installer process, the problem goes away.  I have no idea how 
> or why that could possibly occur, but Acceptfilter http none resolves that 
> issue.

Sounds like a request that the http accept filter doesn’t believe is complete?

Could be an RFC violation on the client side.

Regards,
Graham
—



Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-22 Thread Andy Wang



On 10/22/2015 10:06 AM, Yann Ylavic wrote:

Does it make a difference with "AcceptFilter http none" configured?
It shouldn't, but since we are in the x-files...


I already had to do that.  We have this weird scenario where when our 
installer installs httpd, httpd hangs.  Packet captures show the request 
arriving but no ACK or any form of response.


If I kill the installer process, the problem goes away.  I have no idea 
how or why that could possibly occur, but Acceptfilter http none 
resolves that issue.


Andy


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-22 Thread Yann Ylavic
Does it make a difference with "AcceptFilter http none" configured?
It shouldn't, but since we are in the x-files...
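For reference, the directive under discussion sits in the main server config. A minimal sketch (the https line only matters when SSL is in use):

```apache
# Opt out of the Windows AcceptEx() optimizations; with "none",
# httpd accepts connections with a plain accept() call and does not
# wait for data before handing off the socket.
AcceptFilter http none
AcceptFilter https none
```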

On Thu, Oct 22, 2015 at 4:52 PM, Andy Wang  wrote:
> So this is the problematic request:
> 02D0  6e 74 2d 54 79 70 65 3a  09 74 65 78 74 2f 70 6c nt-Type: .text/pl
> 02E0  61 69 6e 0d 0a 0d 0a 0a  ain.
>
> This is a "regular" request using HttpPRequester extension or just using
> ncat or curl:
> 02D0  6e 74 2d 54 79 70 65 3a  20 74 65 78 74 2f 70 6c nt-Type:  text/pl
> 02E0  61 69 6e 0d 0a 0d 0a ain
>
> Ignore the 0x09 and 0x20 difference.  I had him fix that already and it
> didn't make a difference.
>
> I recreate the exact same request as the problematic request by adding a
> 0x0a to the end of it but it doesn't recreate the problem.
>
> Andy
>
> On 10/22/2015 08:59 AM, William A Rowe Jr wrote:
>>
>> On Thu, Oct 22, 2015 at 8:42 AM, Andy Wang wrote:
>>
>>
>> On 10/21/2015 10:01 AM, Andy Wang wrote:
>>
>> I will do that today.
>>
>> And thank you to Rudiger and yourself, and everyone else on
>> the thread
>> for all the help.
>>
>> I missed the trailing 0x0a in the different wireshark
>> captures.  I was
>> trusting wireshark's http dissection rather than looking at
>> the raw hex
>> data and it didn't show the trailing \n.
>>
>> I gotta go back to the basics and lean less on wireshark's
>> "intelligence" :)
>>
>>
>> Oh, and the plugin developer actually identified where the \n
>> was coming
>> from and has resolved it on the client end.
>>
>> I do have one question.  Any idea why this only occurs on
>> Windows servers?
>>
>>
>> Tested with the patch and looks good.
>> I tried to recreate it using ncat and sending an extra \n but I'm
>> not having luck.  I see the extra byte on my pcap.  Still curious
>> what other conditions create this but oh well, at this point it's in
>> my rear-view-mirror.
>>
>> Thanks again for all the help and the solution.
>> Andy
>>
>>
>>
>> I can't help but wonder if the '\n' is consistent but a '\r' is injected
>> on Windows... are you sure the variance was strictly a '\n'?
>>
>> Generally an httpd module emits exactly what it means, whether it is a
>> unix '\n' line ending, or an http '\r\n' sequence as defined by spec.
>> But with generated content, you are more likely to see variances based
>> on whatever API generated the content, and lots of code will generate \n
>> on unix vs. the \r\n sequence on Windows - httpd didn't influence that.
>>
>


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-22 Thread Andy Wang

So this is the problematic request:
02D0  6e 74 2d 54 79 70 65 3a  09 74 65 78 74 2f 70 6c nt-Type: .text/pl
02E0  61 69 6e 0d 0a 0d 0a 0a  ain.

This is a "regular" request using HttpPRequester extension or just using 
ncat or curl:

02D0  6e 74 2d 54 79 70 65 3a  20 74 65 78 74 2f 70 6c nt-Type:  text/pl
02E0  61 69 6e 0d 0a 0d 0a ain

Ignore the 0x09 and 0x20 difference.  I had him fix that already and it 
didn't make a difference.


I recreate the exact same request as the problematic request by adding a 
0x0a to the end of it but it doesn't recreate the problem.


Andy

On 10/22/2015 08:59 AM, William A Rowe Jr wrote:

On Thu, Oct 22, 2015 at 8:42 AM, Andy Wang <[email protected]> wrote:


On 10/21/2015 10:01 AM, Andy Wang wrote:

I will do that today.

And thank you to Rudiger and yourself, and everyone else on
the thread
for all the help.

I missed the trailing 0x0a in the different wireshark
captures.  I was
trusting wireshark's http dissection rather than looking at
the raw hex
data and it didn't show the trailing \n.

I gotta go back to the basics and lean less on wireshark's
"intelligence" :)


Oh, and the plugin developer actually identified where the \n
was coming
from and has resolved it on the client end.

I do have one question.  Any idea why this only occurs on
Windows servers?


Tested with the patch and looks good.
I tried to recreate it using ncat and sending an extra \n but I'm
not having luck.  I see the extra byte on my pcap.  Still curious
what other conditions create this but oh well, at this point it's in
my rear-view-mirror.

Thanks again for all the help and the solution.
Andy



I can't help but wonder if the '\n' is consistent but a '\r' is injected
on Windows... are you sure the variance was strictly a '\n'?

Generally an httpd module emits exactly what it means, whether it is a
unix '\n' line ending, or an http '\r\n' sequence as defined by spec.
But with generated content, you are more likely to see variances based
on whatever API generated the content, and lots of code will generate \n
on unix vs. the \r\n sequence on Windows - httpd didn't influence that.



Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-22 Thread William A Rowe Jr
On Thu, Oct 22, 2015 at 8:42 AM, Andy Wang  wrote:

>
> On 10/21/2015 10:01 AM, Andy Wang wrote:
>
> I will do that today.
>>>
>>> And thank you to Rudiger and yourself, and everyone else on the thread
>>> for all the help.
>>>
>>> I missed the trailing 0x0a in the different wireshark captures.  I was
>>> trusting wireshark's http dissection rather than looking at the raw hex
>>> data and it didn't show the trailing \n.
>>>
>>> I gotta go back to the basics and lean less on wireshark's
>>> "intelligence" :)
>>>
>>
>> Oh, and the plugin developer actually identified where the \n was coming
>> from and has resolved it on the client end.
>>
>> I do have one question.  Any idea why this only occurs on Windows servers?
>>
>
> Tested with the patch and looks good.
> I tried to recreate it using ncat and sending an extra \n but I'm not
> having luck.  I see the extra byte on my pcap.  Still curious what other
> conditions create this but oh well, at this point it's in my
> rear-view-mirror.
>
> Thanks again for all the help and the solution.
> Andy
>


I can't help but wonder if the '\n' is consistent but a '\r' is injected on
Windows... are you sure the variance was strictly a '\n'?

Generally an httpd module emits exactly what it means, whether it is a unix
'\n' line ending, or an http '\r\n' sequence as defined by spec. But with
generated content, you are more likely to see variances based on whatever
API generated the content, and lots of code will generate \n on unix vs.
the \r\n sequence on Windows - httpd didn't influence that.


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-22 Thread Andy Wang



On 10/21/2015 10:01 AM, Andy Wang wrote:


I will do that today.

And thank you to Rudiger and yourself, and everyone else on the thread
for all the help.

I missed the trailing 0x0a in the different wireshark captures.  I was
trusting wireshark's http dissection rather than looking at the raw hex
data and it didn't show the trailing \n.

I gotta go back to the basics and lean less on wireshark's
"intelligence" :)


Oh, and the plugin developer actually identified where the \n was coming
from and has resolved it on the client end.

I do have one question.  Any idea why this only occurs on Windows servers?


Tested with the patch and looks good.
I tried to recreate it using ncat and sending an extra \n but I'm not 
having luck.  I see the extra byte on my pcap.  Still curious what other 
conditions create this but oh well, at this point it's in my 
rear-view-mirror.


Thanks again for all the help and the solution.
Andy


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-21 Thread Andy Wang



On 10/21/2015 09:54 AM, Andy Wang wrote:



On 10/21/2015 09:31 AM, Yann Ylavic wrote:


OK, thanks :)

Andy, can you give the proposed patch a try?



I will do that today.

And thank you to Rudiger and yourself, and everyone else on the thread
for all the help.

I missed the trailing 0x0a in the different wireshark captures.  I was
trusting wireshark's http dissection rather than looking at the raw hex
data and it didn't show the trailing \n.

I gotta go back to the basics and lean less on wireshark's
"intelligence" :)


Oh, and the plugin developer actually identified where the \n was coming 
from and has resolved it on the client end.


I do have one question.  Any idea why this only occurs on Windows servers?

Andy



Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-21 Thread Andy Wang



On 10/21/2015 09:31 AM, Yann Ylavic wrote:


OK, thanks :)

Andy, can you give the proposed patch a try?



I will do that today.

And thank you to Rudiger and yourself, and everyone else on the thread 
for all the help.


I missed the trailing 0x0a in the different wireshark captures.  I was 
trusting wireshark's http dissection rather than looking at the raw hex 
data and it didn't show the trailing \n.


I gotta go back to the basics and lean less on wireshark's "intelligence" :)

Andy


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-21 Thread Yann Ylavic
On Wed, Oct 21, 2015 at 4:29 PM, Plüm, Rüdiger, Vodafone Group
 wrote:
>
>
>> -----Original Message-----
>> From: Yann Ylavic [mailto:[email protected]]
>> Sent: Wednesday, October 21, 2015 16:28
>> To: [email protected]
>> Subject: Re: [users@httpd] Chunked transfer delay with httpd 2.4 on
>> Windows.
>>
>> On Wed, Oct 21, 2015 at 4:22 PM, Plüm, Rüdiger, Vodafone Group
>>  wrote:
>> >
>> >
>> >> -----Original Message-----
>> >> From: Yann Ylavic [mailto:[email protected]]
>> >> Sent: Wednesday, October 21, 2015 16:07
>> >> To: [email protected]
>> >> Subject: Re: [users@httpd] Chunked transfer delay with httpd 2.4 on
>> >> Windows.
>> >>
>> >> On Wed, Oct 21, 2015 at 2:29 PM, Ruediger Pluem 
>> >> wrote:
>> >> >
>> >> > This looks like there is a stray \n in the input queue that causes
>> >> httpd to think that there is a pipelined request.
>> >>
>> >> I think we should tolerate blank lines in check_pipeline(), like
>> >> read_request_line() does (this is also a RFC compliance).
>> >>
>> >> How about the following patch?
>> >
>> > In general this looks good, but why not moving the max_blank_lines
>> logic
>> > into check_pipeline using c->server->limit_req_fields, so that we do
>> not need to change
>> > its prototype?
>>
>> Hmm, check_pipeline() is static, why bother?
>> Also c->base_server may be different than r->server (after the first
>> request), and we probably want to use the value from the last
>> request's vhost (a bit like we did already for keep_alive_timeout).
>>
>
> OK. Fair enough.

OK, thanks :)

Andy, can you give the proposed patch a try?


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-21 Thread Yann Ylavic
On Wed, Oct 21, 2015 at 4:22 PM, Plüm, Rüdiger, Vodafone Group
 wrote:
>
>
>> -----Original Message-----
>> From: Yann Ylavic [mailto:[email protected]]
>> Sent: Wednesday, October 21, 2015 16:07
>> To: [email protected]
>> Subject: Re: [users@httpd] Chunked transfer delay with httpd 2.4 on
>> Windows.
>>
>> On Wed, Oct 21, 2015 at 2:29 PM, Ruediger Pluem 
>> wrote:
>> >
>> > This looks like there is a stray \n in the input queue that causes
>> httpd to think that there is a pipelined request.
>>
>> I think we should tolerate blank lines in check_pipeline(), like
>> read_request_line() does (this is also a RFC compliance).
>>
>> How about the following patch?
>
> In general this looks good, but why not moving the max_blank_lines logic
> into check_pipeline using c->server->limit_req_fields, so that we do not need 
> to change
> its prototype?

Hmm, check_pipeline() is static, why bother?
Also c->base_server may be different than r->server (after the first
request), and we probably want to use the value from the last
request's vhost (a bit like we did already for keep_alive_timeout).

Regards,
Yann.


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-21 Thread Yann Ylavic
On Wed, Oct 21, 2015 at 2:29 PM, Ruediger Pluem  wrote:
>
> This looks like there is a stray \n in the input queue that causes httpd to 
> think that there is a pipelined request.

I think we should tolerate blank lines in check_pipeline(), like
read_request_line() does (this is also an RFC-compliance matter).

How about the following patch?

Index: modules/http/http_request.c
===
--- modules/http/http_request.c	(revision 1709107)
+++ modules/http/http_request.c	(working copy)
@@ -228,24 +228,50 @@ AP_DECLARE(void) ap_die(int type, request_rec *r)
     ap_die_r(type, r, r->status);
 }
 
-static void check_pipeline(conn_rec *c, apr_bucket_brigade *bb)
+static void check_pipeline(conn_rec *c, apr_bucket_brigade *bb,
+                           int max_blank_lines)
 {
+    c->data_in_input_filters = 0;
     if (c->keepalive != AP_CONN_CLOSE && !c->aborted) {
         apr_status_t rv;
+        int num_blank_lines = 0;
+        char ch, cr = 0;
+        apr_size_t n;
 
-        AP_DEBUG_ASSERT(APR_BRIGADE_EMPTY(bb));
-        rv = ap_get_brigade(c->input_filters, bb, AP_MODE_SPECULATIVE,
-                            APR_NONBLOCK_READ, 1);
-        if (rv != APR_SUCCESS || APR_BRIGADE_EMPTY(bb)) {
-            /*
-             * Error or empty brigade: There is no data present in the input
-             * filter
-             */
-            c->data_in_input_filters = 0;
+        do {
+            apr_brigade_cleanup(bb);
+            rv = ap_get_brigade(c->input_filters, bb, AP_MODE_SPECULATIVE,
+                                APR_NONBLOCK_READ, 1);
+            if (rv != APR_SUCCESS || APR_BRIGADE_EMPTY(bb)) {
+                /*
+                 * Error or empty brigade: There is no data present in the input
+                 * filter
+                 */
+                c->data_in_input_filters = 0;
+                break;
+            }
+
+            n = 0;
+            apr_brigade_flatten(bb, &ch, &n);
+            if (!n) {
+                break;
+            }
+            if (ch == APR_ASCII_LF) {
+                num_blank_lines++;
+                cr = 0;
+            }
+            else if (!cr && ch == APR_ASCII_CR) {
+                cr = 1;
+            }
+            else {
+                c->data_in_input_filters = 1;
+                break;
+            }
+        } while (num_blank_lines < max_blank_lines);
+
+        if (num_blank_lines >= max_blank_lines) {
+            c->keepalive = AP_CONN_CLOSE;
         }
-        else {
-            c->data_in_input_filters = 1;
-        }
     }
 }
 
@@ -255,7 +281,12 @@ AP_DECLARE(void) ap_process_request_after_handler(
     apr_bucket_brigade *bb;
     apr_bucket *b;
     conn_rec *c = r->connection;
+    int max_blank_lines = r->server->limit_req_fields;
 
+    if (max_blank_lines <= 0) {
+        max_blank_lines = DEFAULT_LIMIT_REQUEST_FIELDS;
+    }
+
     /* Send an EOR bucket through the output filter chain.  When
      * this bucket is destroyed, the request will be logged and
      * its pool will be freed
@@ -279,7 +310,7 @@ AP_DECLARE(void) ap_process_request_after_handler(
      * already by the EOR bucket's cleanup function.
      */
 
-    check_pipeline(c, bb);
+    check_pipeline(c, bb, max_blank_lines);
     apr_brigade_destroy(bb);
     if (c->cs)
         c->cs->state = (c->aborted) ? CONN_STATE_LINGER
--

Regards,
Yann.


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-21 Thread Ruediger Pluem


On 10/20/2015 09:57 PM, Andy Wang wrote:
> 
> 
> On 10/20/2015 11:16 AM, Andy Wang wrote:
>>
>>
>> On 10/20/2015 05:19 AM, Yann Ylavic wrote:
>>
>>>
>>> mod_dumpio's traces (level TRACE7) could be helpful here, Andy?
>>>
>>
>> I'll reconfigure to get that in a bit today.
>> I'll also try with mod_proxy_ajp as well to see if the same occurs.
> 
> 
> mod_proxy_ajp has the same behavior.
> Here's the output from mod_dumpio - lemme know if you'd like the raw file.  
> I'd have to figure out how to make it
> available somewhere after sanitizing a few things.
> 
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(164): [client 132.253.8.198:55373]
> mod_dumpio: dumpio_out
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(58): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-HEAP): 362 bytes
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(100): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-HEAP): HTTP/1.1 200 OK\r\nDate: Tue, 20 Oct 
> 2015 19:53:53 GMT\r\nServer: Apache/2.4.16
> (Win64)\r\nX-Frame-Options: SAMEORIGIN\r\nExpires: Thu, 01 Jan 1970 00:00:00 
> GMT\r\nCache-Control:
> no-cache\r\nContent-Type: text/html;charset=UTF-8\r\nVary: 
> Accept-Encoding,User-Agent\r\nContent-Encoding:
> gzip\r\nKeep-Alive: timeout=5, max=100\r\nConnection: 
> Keep-Alive\r\nTransfer-Encoding: chunked\r\n\r\n
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(164): [client 132.253.8.198:55373]
> mod_dumpio: dumpio_out
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(58): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-TRANSIENT): 4 bytes
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(100): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-TRANSIENT): 1e\r\n
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(58): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-IMMORTAL): 10 bytes
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(100): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-IMMORTAL): \x1f\x8b\b
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(58): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-HEAP): 20 bytes
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(100): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-HEAP): 
> \x8a\x8a\x02\x02\xab\x92\xd4\xe2\x12+\x10+\x8a\x97\v
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(58): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-IMMORTAL): 2 bytes
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(100): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-IMMORTAL): \r\n
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(58): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (metadata-FLUSH): 0 bytes
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(164): [client 132.253.8.198:55373]
> mod_dumpio: dumpio_out
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(58): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-TRANSIENT): 3 bytes
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(100): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-TRANSIENT): a\r\n
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(58): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-HEAP): 2 bytes
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(100): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-HEAP): \x03
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(58): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-POOL): 8 bytes
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(100): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-POOL): \xfe\x8e\xed\xee\x12
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(58): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-IMMORTAL): 2 bytes
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(100): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-IMMORTAL): \r\n
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(58): [client 132.253.8.198:55373]
> mod_dumpio:  dumpio_out (data-IMMORTAL): 5 bytes
> [Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
> mod_dumpio.c(100):

Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-20 Thread Andy Wang



On 10/20/2015 02:57 PM, Andy Wang wrote:



On 10/20/2015 11:16 AM, Andy Wang wrote:



On 10/20/2015 05:19 AM, Yann Ylavic wrote:



mod_dumpio's traces (level TRACE7) could be helpful here, Andy?



I'll reconfigure to get that in a bit today.
I'll also try with mod_proxy_ajp as well to see if the same occurs.



mod_proxy_ajp has the same behavior.
Here's the output from mod_dumpio - lemme know if you'd like the raw
file.  I'd have to figure out how to make it available somewhere after
sanitizing a few things.





[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(135): [client 132.253.8.198:55373] mod_dumpio: dumpio_in 
[getline-blocking] 0 readbytes
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(58): [client 132.253.8.198:55373] mod_dumpio:  dumpio_in 
(data-HEAP): 1 bytes
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(100): [client 132.253.8.198:55373] mod_dumpio:  dumpio_in 
(data-HEAP): \n
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(135): [client 132.253.8.198:55373] mod_dumpio: dumpio_in 
[getline-blocking] 0 readbytes
[Tue Oct 20 14:53:58.801084 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(150): [client 132.253.8.198:55373] mod_dumpio: dumpio_in - 
70007
[Tue Oct 20 14:53:58.801084 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(164): [client 132.253.8.198:55373] mod_dumpio: dumpio_out


Does this appear to be saying that it's timing out trying to read more 
from the input (the request)?


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-20 Thread Andy Wang



On 10/20/2015 11:16 AM, Andy Wang wrote:



On 10/20/2015 05:19 AM, Yann Ylavic wrote:



mod_dumpio's traces (level TRACE7) could be helpful here, Andy?



I'll reconfigure to get that in a bit today.
I'll also try with mod_proxy_ajp as well to see if the same occurs.



mod_proxy_ajp has the same behavior.
Here's the output from mod_dumpio - lemme know if you'd like the raw 
file.  I'd have to figure out how to make it available somewhere after 
sanitizing a few things.


[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(164): [client 132.253.8.198:55373] mod_dumpio: dumpio_out
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(58): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-HEAP): 362 bytes
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(100): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-HEAP): HTTP/1.1 200 OK\r\nDate: Tue, 20 Oct 2015 19:53:53 
GMT\r\nServer: Apache/2.4.16 (Win64)\r\nX-Frame-Options: 
SAMEORIGIN\r\nExpires: Thu, 01 Jan 1970 00:00:00 GMT\r\nCache-Control: 
no-cache\r\nContent-Type: text/html;charset=UTF-8\r\nVary: 
Accept-Encoding,User-Agent\r\nContent-Encoding: gzip\r\nKeep-Alive: 
timeout=5, max=100\r\nConnection: Keep-Alive\r\nTransfer-Encoding: 
chunked\r\n\r\n
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(164): [client 132.253.8.198:55373] mod_dumpio: dumpio_out
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(58): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-TRANSIENT): 4 bytes
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(100): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-TRANSIENT): 1e\r\n
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(58): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-IMMORTAL): 10 bytes
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(100): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-IMMORTAL): \x1f\x8b\b
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(58): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-HEAP): 20 bytes
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(100): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-HEAP): \x8a\x8a\x02\x02\xab\x92\xd4\xe2\x12+\x10+\x8a\x97\v
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(58): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-IMMORTAL): 2 bytes
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(100): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-IMMORTAL): \r\n
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(58): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(metadata-FLUSH): 0 bytes
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(164): [client 132.253.8.198:55373] mod_dumpio: dumpio_out
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(58): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-TRANSIENT): 3 bytes
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(100): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-TRANSIENT): a\r\n
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(58): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-HEAP): 2 bytes
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(100): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-HEAP): \x03
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(58): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-POOL): 8 bytes
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(100): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-POOL): \xfe\x8e\xed\xee\x12
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(58): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-IMMORTAL): 2 bytes
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(100): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-IMMORTAL): \r\n
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(58): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-IMMORTAL): 5 bytes
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(100): [client 132.253.8.198:55373] mod_dumpio:  dumpio_out 
(data-IMMORTAL): 0\r\n\r\n
[Tue Oct 20 14:53:53.800924 2015] [dumpio:trace7] [pid 5228:tid 2776] 
mod_dumpio.c(58): [client 132.253.8.198:55373] mod_dumpio:  dumpio_
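For readers following the chunk framing in the dump above: each chunk is its size in hex plus CRLF, the payload, and a trailing CRLF ("1e\r\n" + 30 bytes, then "a\r\n" + 10 bytes), and the body ends with the zero-size chunk "0\r\n\r\n". A minimal encoder for that framing (illustration only, not httpd code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustration only: frame one body segment as an HTTP/1.1 chunk --
 * hex size, CRLF, data, CRLF -- and emit the "0\r\n\r\n" terminator
 * for the empty final chunk, matching the framing in the dump above. */
static int frame_chunk(char *out, size_t outlen, const char *data, size_t len)
{
    if (len == 0)                       /* last chunk ends the body */
        return snprintf(out, outlen, "0\r\n\r\n");
    return snprintf(out, outlen, "%zx\r\n%.*s\r\n", len, (int)len, data);
}
```

The 0x1e chunk in the dump carries the 30 bytes of gzip data; in the problem case it appears to be the tail of the response, including this terminator, that is held back until the keepalive timeout.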

Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-20 Thread Rainer Jung

Am 20.10.2015 um 11:38 schrieb Plüm, Rüdiger, Vodafone Group:




-Original Message-
From: Yann Ylavic [mailto:[email protected]]
Sent: Dienstag, 20. Oktober 2015 10:54
To: [email protected]
Subject: Re: [users@httpd] Chunked transfer delay with httpd 2.4 on
Windows.

On Tue, Oct 20, 2015 at 10:17 AM, Plüm, Rüdiger, Vodafone Group
 wrote:


Or is this something with mod_jk not correctly sending an EOS? Does this

happen with mod_proxy_ajp as well?

If that was the case, I think we wouldn't enter the keepalive state,


Why not? If the handler of mod_jk would just send the data in a brigade 
without an EOS and then exit with HTTP_OK, you IMHO would get there.


mod_jk gets the data from the backend via AJP as chunks and, for each 
chunk it receives, it simply calls ap_rwrite() to let the web server 
send it to the client.


It does not directly interact with buckets or brigades.
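Since mod_jk only calls ap_rwrite() per chunk, whether the bytes reach the wire depends on the filters below it, which may defer the actual write until a flush, EOS, or buffer pressure. A toy model of that deferred-write behavior (assumed names, not the httpd core output filter) shows why a response whose tail is never followed by a flush can sit in a buffer until the keepalive timeout:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model with assumed names (not the httpd core output filter):
 * writes are deferred into `pending` and only reach the simulated
 * network buffer `wire` when an explicit flush happens. */
#define BUFSZ 8192
static char pending[BUFSZ];
static size_t pending_len;
static char wire[BUFSZ];
static size_t wire_len;

static void filter_write(const char *data, size_t len)
{
    memcpy(pending + pending_len, data, len);   /* buffered, not sent */
    pending_len += len;
}

static void filter_flush(void)
{
    memcpy(wire + wire_len, pending, pending_len);  /* now "on the wire" */
    wire_len += pending_len;
    pending_len = 0;
}
```

Until filter_flush() runs, even the chunked terminator stays in `pending`; in the real server, the keepalive timeout (the 5-second gap in the captures) is what eventually forces it out.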

Regards,

Rainer



Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-20 Thread Andy Wang



On 10/20/2015 05:19 AM, Yann Ylavic wrote:



mod_dumpio's traces (level TRACE7) could be helpful here, Andy?



I'll reconfigure to get that in a bit today.
I'll also try with mod_proxy_ajp as well to see if the same occurs.

Thanks,
Andy


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-20 Thread Yann Ylavic
On Tue, Oct 20, 2015 at 12:14 PM, Yann Ylavic  wrote:
> On Tue, Oct 20, 2015 at 11:38 AM, Plüm, Rüdiger, Vodafone Group
>  wrote:
>>
>>
>>> -Original Message-
>>> From: Yann Ylavic [mailto:[email protected]]
>>> Sent: Dienstag, 20. Oktober 2015 10:54
>>> To: [email protected]
>>> Subject: Re: [users@httpd] Chunked transfer delay with httpd 2.4 on
>>> Windows.
>>>
>>> On Tue, Oct 20, 2015 at 10:17 AM, Plüm, Rüdiger, Vodafone Group
>>>  wrote:
>>> >
>>> > Or is this something with mod_jk not correctly sending an EOS? Does this
>>> happen with mod_proxy_ajp as well?
>>>
>>> If that was the case, I think we wouldn't enter the keepalive state,
>>
>> Why not? If the handler of mod_jk would just send the data in a brigade
>> without an EOS and then exit with HTTP_OK, you IMHO would get there.
>
> Good point, let's see the log.

mod_dumpio's traces (level TRACE7) could be helpful here, Andy?


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-20 Thread Yann Ylavic
On Tue, Oct 20, 2015 at 11:38 AM, Plüm, Rüdiger, Vodafone Group
 wrote:
>
>
>> -Original Message-
>> From: Yann Ylavic [mailto:[email protected]]
>> Sent: Dienstag, 20. Oktober 2015 10:54
>> To: [email protected]
>> Subject: Re: [users@httpd] Chunked transfer delay with httpd 2.4 on
>> Windows.
>>
>> On Tue, Oct 20, 2015 at 10:17 AM, Plüm, Rüdiger, Vodafone Group
>>  wrote:
>> >
>> > Or is this something with mod_jk not correctly sending an EOS? Does this
>> happen with mod_proxy_ajp as well?
>>
>> If that was the case, I think we wouldn't enter the keepalive state,
>
> Why not? If the handler of mod_jk would just send the data in a brigade
> without an EOS and then exit with HTTP_OK, you IMHO would get there.

Good point, let's see the log.


RE: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-20 Thread Plüm , Rüdiger , Vodafone Group


> -Original Message-
> From: Yann Ylavic [mailto:[email protected]]
> Sent: Dienstag, 20. Oktober 2015 10:54
> To: [email protected]
> Subject: Re: [users@httpd] Chunked transfer delay with httpd 2.4 on
> Windows.
> 
> On Tue, Oct 20, 2015 at 10:17 AM, Plüm, Rüdiger, Vodafone Group
>  wrote:
> >
> > Or is this something with mod_jk not correctly sending an EOS? Does this
> happen with mod_proxy_ajp as well?
> 
> If that was the case, I think we wouldn't enter the keepalive state,

Why not? If the handler of mod_jk would just send the data in a brigade 
without an EOS and then exit with HTTP_OK, you IMHO would get there.

Regards

Rüdiger


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-20 Thread Yann Ylavic
On Tue, Oct 20, 2015 at 10:17 AM, Plüm, Rüdiger, Vodafone Group
 wrote:
>
> Or is this something with mod_jk not correctly sending an EOS? Does this 
> happen with mod_proxy_ajp as well?

If that was the case, I think we wouldn't enter the keepalive state,
that's why I mentioned the EOS handling in mod_deflate.
However the code is not Windows specific there... Maybe a Windows
specific zlib issue?

I agree that log TRACEs could help here too.

Regards,
Yann.


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-20 Thread Yann Ylavic
The proposed patch is dead anyway; just one remark, though.

On Tue, Oct 20, 2015 at 10:08 AM, Plüm, Rüdiger, Vodafone Group
 wrote:
>
>
>> -Original Message-
>> From: Yann Ylavic [mailto:[email protected]]
>> Sent: Dienstag, 20. Oktober 2015 01:05
>>
>> Index: modules/http/http_request.c
>> ===
[]
>> --- modules/http/http_request.c    (revision 1708095)
>> +++ modules/http/http_request.c    (working copy)
>>
>> -if (!c->data_in_input_filters) {
>> -bb = apr_brigade_create(c->pool, c->bucket_alloc);
>> -b = apr_bucket_flush_create(c->bucket_alloc);
>> -APR_BRIGADE_INSERT_HEAD(bb, b);
>> -rv = ap_pass_brigade(c->output_filters, bb);
>> -if (APR_STATUS_IS_TIMEUP(rv)) {
>> -/*
>> - * Notice a timeout as an error message. This might be
>> - * valuable for detecting clients with broken network
>> - * connections or possible DoS attacks.
>> - *
>> - * It is still safe to use r / r->pool here as the eor bucket
>> - * could not have been destroyed in the event of a timeout.
>> - */
>> -ap_log_rerror(APLOG_MARK, APLOG_INFO, rv, r, APLOGNO(01581)
>> -  "Timeout while writing data for URI %s to the"
>> -  " client", r->unparsed_uri);
>> -}
>> -}
>>  if (ap_extended_status) {
>> +conn_rec *c = r->connection;
>
> r is likely to be dead here

Correct, and I think the above code is quite risky too.
Is it really worth logging at the request level here, when using
something like:
ap_log_cerror(APLOG_MARK, APLOG_INFO, rv, c, APLOGNO(01581)
  "Timeout flushing data to the client");
would be just as informative, IMHO?


RE: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-20 Thread Plüm , Rüdiger , Vodafone Group


> -Original Message-
> From: Yann Ylavic [mailto:[email protected]]
> Sent: Dienstag, 20. Oktober 2015 10:01
> To: [email protected]
> Subject: Re: [users@httpd] Chunked transfer delay with httpd 2.4 on
> Windows.
> 
> On Tue, Oct 20, 2015 at 4:24 AM, Andy Wang  wrote:
> >
> >
> > On 10/19/2015 07:44 PM, Eric Covener wrote:
> >>
> >> On Mon, Oct 19, 2015 at 7:05 PM, Yann Ylavic 
> wrote:
> >>>
> >>> This is the deferred write triggering *after* the keepalive timeout,
> >>> whereas no subsequent request was pipelined.
> >>> I wonder if we shouldn't issue a flush at the end of each request when
> >>> the following is not already there, ie:
> >>
> >>
> >> Can you describe what breaks the current code? It looks like it's
> >> already trying to handle this case, I couldn't tell the operative
> >> difference.
> >>
> >
> > I'm also curious why it is that I seem to only be able to reproduce it
> > with a particular client.  I would have expected using ncat to simulate
> > the exact same request would have been able to trigger the same behavior.
> >
> > And why is this only occurring on Windows?
> 
> Yes, a complete misinterpretation on my side!
> The issue must be somewhere in mod_deflate (EOS handling?), since the
> flush done at connection level is of no help for the request filters.
> Will look at this...

Or is this something with mod_jk not correctly sending an EOS? Does this happen 
with mod_proxy_ajp as well?

Regards

Rüdiger


RE: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-20 Thread Plüm , Rüdiger , Vodafone Group


> -Original Message-
> From: Yann Ylavic [mailto:[email protected]]
> Sent: Dienstag, 20. Oktober 2015 10:01
> To: [email protected]
> Subject: Re: [users@httpd] Chunked transfer delay with httpd 2.4 on
> Windows.
>
> On Tue, Oct 20, 2015 at 4:24 AM, Andy Wang  wrote:
> >
> >
> > On 10/19/2015 07:44 PM, Eric Covener wrote:
> >>
> >> On Mon, Oct 19, 2015 at 7:05 PM, Yann Ylavic 
> wrote:
> >>>
> >>> This is the deferred write triggering *after* the keepalive timeout,
> >>> whereas no subsequent request was pipelined.
> >>> I wonder if we shouldn't issue a flush at the end of each request when
> >>> the following is not already there, ie:
> >>
> >>
> >> Can you describe what breaks the current code? It looks like it's
> >> already trying to handle this case, I couldn't tell the operative
> >> difference.
> >>
> >
> > I'm also curious why it is that I seem to only be able to reproduce it
> > with a particular client.  I would have expected using ncat to simulate
> > the exact same request would have been able to trigger the same behavior.
> >
> > And why is this only occurring on Windows?
>
> Yes, a complete misinterpretation on my side!
> The issue must be somewhere in mod_deflate (EOS handling?), since the
> flush done at connection level is of no help for the request filters.
> Will look at this...

Yeah. There is only a difference between async and sync MPMs, and Windows is 
sync, so the flush is already sent.
Any error messages in the logfile?

Regards

Rüdiger


RE: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-20 Thread Plüm , Rüdiger , Vodafone Group


> -Original Message-
> From: Yann Ylavic [mailto:[email protected]]
> Sent: Dienstag, 20. Oktober 2015 01:05
> To: [email protected]
> Subject: Re: [users@httpd] Chunked transfer delay with httpd 2.4 on
> Windows.
> 
> [From users@]
> 
> On Mon, Oct 19, 2015 at 11:44 PM, Andy Wang  wrote:
> >
> > The issue is currently reproduced using Apache httpd 2.4.16, mod_jk
> 1.2.41
> > and tomcat 8.0.28.
> >
> > I've created a very very simple JSP page that does nothing but print a
> small
> > string, but I've tried changing the jsp page to print a very very large
> > string (1+ characters) and no difference.
> >
> > If I POST to this JSP page, and something like mod_deflate is in place
> to
> > force a chunked transfer the TCP packet capture looks like this:
> >
> > No.   Time            Source  Destination  Protocol  Length  Info
> > 1850  4827.762721000  client  server       TCP       66      54131→2280 [SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=256 SACK_PERM=1
> > 1851  4827.764976000  server  client       TCP       66      2280→54131 [SYN, ACK] Seq=0 Ack=1 Win=8192 Len=0 MSS=1460 WS=256 SACK_PERM=1
> > 1852  4827.765053000  client  server       TCP       54      54131→2280 [ACK] Seq=1 Ack=1 Win=131328 Len=0
> > 1853  4827.765315000  client  server       HTTP      791     POST /JSPtoPostTo HTTP/1.1
> > 1854  4827.777981000  server  client       TCP       466     [TCP segment of a reassembled PDU]
> > 1855  4827.982961000  client  server       TCP       54      54131→2280 [ACK] Seq=738 Ack=413 Win=130816 Len=0
> > 1856  4832.770458000  server  client       HTTP      74      HTTP/1.1 200 OK (text/html)
> > 1857  4832.770459000  server  client       TCP       60      2280→54131 [FIN, ACK] Seq=433 Ack=738 Win=65536 Len=0
> > 1858  4832.770555000  client  server       TCP       54      54131→2280 [ACK] Seq=738 Ack=434 Win=130816 Len=0
> > 1859  4832.770904000  client  server       TCP       54      54131→2280 [FIN, ACK] Seq=738 Ack=434 Win=130816 Len=0
> > 1860  4832.77420      server  client       TCP       60      2280→54131 [ACK] Seq=434 Ack=739 Win=65536 Len=0
> >
> > Specifically, note the 5-second delay between the first segment (No. 1854)
> > and the second data segment (1856).
> 
> This is the deferred write triggering *after* the keepalive timeout,
> whereas no subsequent request was pipelined.
> I wonder if we shouldn't issue a flush at the end of each request when
> the following is not already there, ie:
> 
> Index: modules/http/http_request.c
> ===
> --- modules/http/http_request.c    (revision 1708095)
> +++ modules/http/http_request.c    (working copy)
> @@ -228,8 +228,9 @@ AP_DECLARE(void) ap_die(int type, request_rec *r)
>  ap_die_r(type, r, r->status);
>  }
> 
> -static void check_pipeline(conn_rec *c, apr_bucket_brigade *bb)
> +static int check_pipeline(conn_rec *c, apr_bucket_brigade *bb)

Why change the prototype? We could check for c->data_in_input_filters instead.

>  {
> +c->data_in_input_filters = 0;
>  if (c->keepalive != AP_CONN_CLOSE && !c->aborted) {
>  apr_status_t rv;
> 
> @@ -236,17 +237,12 @@ AP_DECLARE(void) ap_die(int type, request_rec *r)
>  AP_DEBUG_ASSERT(APR_BRIGADE_EMPTY(bb));
>  rv = ap_get_brigade(c->input_filters, bb, AP_MODE_SPECULATIVE,
>  APR_NONBLOCK_READ, 1);
> -if (rv != APR_SUCCESS || APR_BRIGADE_EMPTY(bb)) {
> -/*
> - * Error or empty brigade: There is no data present in the
> input
> - * filter
> - */
> -c->data_in_input_filters = 0;
> -}
> -else {
> +if (rv == APR_SUCCESS && !APR_BRIGADE_EMPTY(bb)) {
>  c->data_in_input_filters = 1;
> +return 1;
>  }
>  }
> +return 0;
>  }
> 
> 
> @@ -287,11 +283,30 @@ AP_DECLARE(void) ap_process_request_after_handler(
>   * already by the EOR bucket's cleanup function.
>   */
> 
> -check_pipeline(c, bb);
> +if (!check_pipeline(c, bb)) {
> +apr_status_t rv;
> +
> +b = apr_bucket_flush_create(c->bucket_alloc);
> +APR_BRIGADE_INSERT_HEAD(bb, b);
> +rv = ap_pass_brigade(c->output_filters, bb);
> +if (APR_STATUS_IS_TIMEUP(rv)) {
> +/*
> + * Notice a timeout as an error message. This might be
> + * valuable for detecting 

Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-20 Thread Yann Ylavic
On Tue, Oct 20, 2015 at 4:24 AM, Andy Wang  wrote:
>
>
> On 10/19/2015 07:44 PM, Eric Covener wrote:
>>
>> On Mon, Oct 19, 2015 at 7:05 PM, Yann Ylavic  wrote:
>>>
>>> This is the deferred write triggering *after* the keepalive timeout,
>>> whereas no subsequent request was pipelined.
>>> I wonder if we shouldn't issue a flush at the end of each request when
>>> the following is not already there, ie:
>>
>>
>> Can you describe what breaks the current code? It looks like it's
>> already trying to handle this case, I couldn't tell the operative
>> difference.
>>
>
> I'm also curious why it is that I seem to only be able to reproduce it with
> a particular client.  I would have expected using ncat to simulate the exact
> same request would have been able to trigger the same behavior.
>
> And why is this only occurring on Windows?

Yes, a complete misinterpretation on my side!
The issue must be somewhere in mod_deflate (EOS handling?), since the
flush done at connection level is of no help for the request filters.
Will look at this...


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-19 Thread Andy Wang



On 10/19/2015 07:44 PM, Eric Covener wrote:

On Mon, Oct 19, 2015 at 7:05 PM, Yann Ylavic  wrote:

This is the deferred write triggering *after* the keepalive timeout,
whereas no subsequent request was pipelined.
I wonder if we shouldn't issue a flush at the end of each request when
the following is not already there, ie:


Can you describe what breaks the current code? It looks like it's
already trying to handle this case, I couldn't tell the operative
difference.



I'm also curious why it is that I seem to only be able to reproduce it 
with a particular client.  I would have expected using ncat to simulate 
the exact same request would have been able to trigger the same behavior.


And why is this only occurring on Windows?


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-19 Thread Eric Covener
On Mon, Oct 19, 2015 at 7:05 PM, Yann Ylavic  wrote:
> This is the deferred write triggering *after* the keepalive timeout,
> whereas no subsequent request was pipelined.
> I wonder if we shouldn't issue a flush at the end of each request when
> the following is not already there, ie:

Can you describe what breaks the current code? It looks like it's
already trying to handle this case, I couldn't tell the operative
difference.


Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-19 Thread Andy Wang


On 10/19/2015 06:05 PM, Yann Ylavic wrote:

[From users@]

On Mon, Oct 19, 2015 at 11:44 PM, Andy Wang  wrote:


The issue is currently reproduced using Apache httpd 2.4.16, mod_jk 1.2.41
and tomcat 8.0.28.

I've created a very very simple JSP page that does nothing but print a small
string, but I've tried changing the jsp page to print a very very large
string (1+ characters) and no difference.

If I POST to this JSP page, and something like mod_deflate is in place to
force a chunked transfer the TCP packet capture looks like this:

No.   Time            Source  Destination  Protocol  Length  Info
1850  4827.762721000  client  server       TCP       66      54131→2280 [SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=256 SACK_PERM=1
1851  4827.764976000  server  client       TCP       66      2280→54131 [SYN, ACK] Seq=0 Ack=1 Win=8192 Len=0 MSS=1460 WS=256 SACK_PERM=1
1852  4827.765053000  client  server       TCP       54      54131→2280 [ACK] Seq=1 Ack=1 Win=131328 Len=0
1853  4827.765315000  client  server       HTTP      791     POST /JSPtoPostTo HTTP/1.1
1854  4827.777981000  server  client       TCP       466     [TCP segment of a reassembled PDU]
1855  4827.982961000  client  server       TCP       54      54131→2280 [ACK] Seq=738 Ack=413 Win=130816 Len=0
1856  4832.770458000  server  client       HTTP      74      HTTP/1.1 200 OK (text/html)
1857  4832.770459000  server  client       TCP       60      2280→54131 [FIN, ACK] Seq=433 Ack=738 Win=65536 Len=0
1858  4832.770555000  client  server       TCP       54      54131→2280 [ACK] Seq=738 Ack=434 Win=130816 Len=0
1859  4832.770904000  client  server       TCP       54      54131→2280 [FIN, ACK] Seq=738 Ack=434 Win=130816 Len=0
1860  4832.77420      server  client       TCP       60      2280→54131 [ACK] Seq=434 Ack=739 Win=65536 Len=0

Specifically, note the 5-second delay between the first segment (No. 1854)
and the second data segment (1856).


This is the deferred write triggering *after* the keepalive timeout,
whereas no subsequent request was pipelined.
I wonder if we shouldn't issue a flush at the end of each request when
the following is not already there, ie:

Index: modules/http/http_request.c
===
--- modules/http/http_request.c    (revision 1708095)
+++ modules/http/http_request.c    (working copy)
@@ -228,8 +228,9 @@ AP_DECLARE(void) ap_die(int type, request_rec *r)
  ap_die_r(type, r, r->status);
  }

-static void check_pipeline(conn_rec *c, apr_bucket_brigade *bb)
+static int check_pipeline(conn_rec *c, apr_bucket_brigade *bb)
  {
+c->data_in_input_filters = 0;
  if (c->keepalive != AP_CONN_CLOSE && !c->aborted) {
  apr_status_t rv;

@@ -236,17 +237,12 @@ AP_DECLARE(void) ap_die(int type, request_rec *r)
  AP_DEBUG_ASSERT(APR_BRIGADE_EMPTY(bb));
  rv = ap_get_brigade(c->input_filters, bb, AP_MODE_SPECULATIVE,
  APR_NONBLOCK_READ, 1);
-if (rv != APR_SUCCESS || APR_BRIGADE_EMPTY(bb)) {
-/*
- * Error or empty brigade: There is no data present in the input
- * filter
- */
-c->data_in_input_filters = 0;
-}
-else {
+if (rv == APR_SUCCESS && !APR_BRIGADE_EMPTY(bb)) {
  c->data_in_input_filters = 1;
+return 1;
  }
  }
+return 0;
  }


@@ -287,11 +283,30 @@ AP_DECLARE(void) ap_process_request_after_handler(
   * already by the EOR bucket's cleanup function.
   */

-check_pipeline(c, bb);
+if (!check_pipeline(c, bb)) {
+apr_status_t rv;
+
+b = apr_bucket_flush_create(c->bucket_alloc);
+APR_BRIGADE_INSERT_HEAD(bb, b);
+rv = ap_pass_brigade(c->output_filters, bb);
+if (APR_STATUS_IS_TIMEUP(rv)) {
+/*
+ * Notice a timeout as an error message. This might be
+ * valuable for detecting clients with broken network
+ * connections or possible DoS attacks.
+ *
+ * It is still safe to use r / r->pool here as the eor bucket
+ * could not have been destroyed in the event of a timeout.
+ */
+ap_log_cerror(APLOG_MARK, APLOG_INFO, rv, c, APLOGNO(01581)
+  "Timeout while flushing data to the client");
+}
+}
  apr_brigade_destroy(bb);
-if (c->cs)
+if (c->cs) {
  c->cs->state = (c->aborted) ? CONN_STATE_LINGER
  : CONN_STATE_WRITE_COMPLETION;
+}
  AP_PROCESS_REQUEST_RETURN((uintptr_t)r, r->uri, r->status);
  if (ap_extended_status) {
  ap_time_process_request(c->sbh, STOP_PREQUEST);
@@ -373,33 +388,10 @@ void ap_process_async_request(request_rec *r)

  AP_DECLARE(void) ap_process_request(request_rec *r)
  {
-apr_bucket_brigade *bb;
-apr_bucket *b;
-conn_rec *c = r->connection;
-apr_status_t rv;
-
  ap_process_as

Re: [users@httpd] Chunked transfer delay with httpd 2.4 on Windows.

2015-10-19 Thread Yann Ylavic
[From users@]

On Mon, Oct 19, 2015 at 11:44 PM, Andy Wang  wrote:
>
> The issue is currently reproduced using Apache httpd 2.4.16, mod_jk 1.2.41
> and tomcat 8.0.28.
>
> I've created a very very simple JSP page that does nothing but print a small
> string, but I've tried changing the jsp page to print a very very large
> string (1+ characters) and no difference.
>
> If I POST to this JSP page, and something like mod_deflate is in place to
> force a chunked transfer the TCP packet capture looks like this:
>
> No.   Time            Source  Destination  Protocol  Length  Info
> 1850  4827.762721000  client  server       TCP       66      54131→2280 [SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=256 SACK_PERM=1
> 1851  4827.764976000  server  client       TCP       66      2280→54131 [SYN, ACK] Seq=0 Ack=1 Win=8192 Len=0 MSS=1460 WS=256 SACK_PERM=1
> 1852  4827.765053000  client  server       TCP       54      54131→2280 [ACK] Seq=1 Ack=1 Win=131328 Len=0
> 1853  4827.765315000  client  server       HTTP      791     POST /JSPtoPostTo HTTP/1.1
> 1854  4827.777981000  server  client       TCP       466     [TCP segment of a reassembled PDU]
> 1855  4827.982961000  client  server       TCP       54      54131→2280 [ACK] Seq=738 Ack=413 Win=130816 Len=0
> 1856  4832.770458000  server  client       HTTP      74      HTTP/1.1 200 OK (text/html)
> 1857  4832.770459000  server  client       TCP       60      2280→54131 [FIN, ACK] Seq=433 Ack=738 Win=65536 Len=0
> 1858  4832.770555000  client  server       TCP       54      54131→2280 [ACK] Seq=738 Ack=434 Win=130816 Len=0
> 1859  4832.770904000  client  server       TCP       54      54131→2280 [FIN, ACK] Seq=738 Ack=434 Win=130816 Len=0
> 1860  4832.77420      server  client       TCP       60      2280→54131 [ACK] Seq=434 Ack=739 Win=65536 Len=0
>
> Specifically, note the 5-second delay between the first segment (No. 1854)
> and the second data segment (1856).

This is the deferred write triggering *after* the keepalive timeout,
whereas no subsequent request was pipelined.
I wonder if we shouldn't issue a flush at the end of each request when
the following is not already there, ie:

Index: modules/http/http_request.c
===
--- modules/http/http_request.c    (revision 1708095)
+++ modules/http/http_request.c    (working copy)
@@ -228,8 +228,9 @@ AP_DECLARE(void) ap_die(int type, request_rec *r)
 ap_die_r(type, r, r->status);
 }

-static void check_pipeline(conn_rec *c, apr_bucket_brigade *bb)
+static int check_pipeline(conn_rec *c, apr_bucket_brigade *bb)
 {
+c->data_in_input_filters = 0;
 if (c->keepalive != AP_CONN_CLOSE && !c->aborted) {
 apr_status_t rv;

@@ -236,17 +237,12 @@ AP_DECLARE(void) ap_die(int type, request_rec *r)
 AP_DEBUG_ASSERT(APR_BRIGADE_EMPTY(bb));
 rv = ap_get_brigade(c->input_filters, bb, AP_MODE_SPECULATIVE,
 APR_NONBLOCK_READ, 1);
-if (rv != APR_SUCCESS || APR_BRIGADE_EMPTY(bb)) {
-/*
- * Error or empty brigade: There is no data present in the input
- * filter
- */
-c->data_in_input_filters = 0;
-}
-else {
+if (rv == APR_SUCCESS && !APR_BRIGADE_EMPTY(bb)) {
 c->data_in_input_filters = 1;
+return 1;
 }
 }
+return 0;
 }


@@ -287,11 +283,30 @@ AP_DECLARE(void) ap_process_request_after_handler(
  * already by the EOR bucket's cleanup function.
  */

-check_pipeline(c, bb);
+if (!check_pipeline(c, bb)) {
+apr_status_t rv;
+
+b = apr_bucket_flush_create(c->bucket_alloc);
+APR_BRIGADE_INSERT_HEAD(bb, b);
+rv = ap_pass_brigade(c->output_filters, bb);
+if (APR_STATUS_IS_TIMEUP(rv)) {
+/*
+ * Notice a timeout as an error message. This might be
+ * valuable for detecting clients with broken network
+ * connections or possible DoS attacks.
+ *
+ * It is still safe to use r / r->pool here as the eor bucket
+ * could not have been destroyed in the event of a timeout.
+ */
+ap_log_cerror(APLOG_MARK, APLOG_INFO, rv, c, APLOGNO(01581)
+  "Timeout while flushing data to the client");
+}
+}
 apr_brigade_destroy(bb);
-if (c->cs)
+if (c->cs) {
 c->cs->state = (c->aborted) ? CONN_STATE_LINGER
 : CONN_STATE_WRITE_COMPLETION;
+}
 AP_PROCESS_REQUEST_RETURN((uintptr_t)r, r->uri, r->status);
 if (ap_extended_status) {
 ap_time_process_request(c->sbh, STOP_PREQUEST);
@@ -373,33 +388,10 @@ void ap_process_async_request(request_rec *r)

 AP_DECLARE(void) ap_process_request(request_rec *r)
 {
-apr_bucket_brigade *bb;
-apr_bucket *b;
-conn_rec *c = r->connection;
-apr_status_t rv;
-
 ap_process_async_request(r