Re: svn commit: r1860166 - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http2/ modules/proxy/ server/

2019-10-31 Thread Dennis Clarke

On 2019-10-30 07:07, Yann Ylavic wrote:

On Tue, Oct 29, 2019 at 8:01 PM Rainer Jung wrote:

...

That's intended?


Yes: if req->prefetch_nonblocking, we need to enter the loop at least
once; but since we are non-blocking, it does not hurt to _try_ to read
more (even when we come back here in the balancer fallback case).

There is a corner case though


Some of us love to chase into dark corners, and this looks like one of them.


--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken
GreyBeard and suspenders optional



Re: svn commit: r1860166 - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http2/ modules/proxy/ server/

2019-10-30 Thread Yann Ylavic
On Tue, Oct 29, 2019 at 8:01 PM Rainer Jung wrote:
>
> Double check: the condition in the do-while loop that was changed to a
> while loop has also changed:
>
> FROM
>
> do { ... }
> while ((bytes_read < MAX_MEM_SPOOL - 80)
>        && !APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(input_brigade))
>        && !req->prefetch_nonblocking);
>
> TO
>
> while ((bytes_read < MAX_MEM_SPOOL - 80)
>        && (APR_BRIGADE_EMPTY(input_brigade)
>            || !APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(input_brigade))))
> { ... }
>
> That's intended?

Yes: if req->prefetch_nonblocking, we need to enter the loop at least
once; but since we are non-blocking, it does not hurt to _try_ to read
more (even when we come back here in the balancer fallback case).
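
To illustrate with a standalone toy (not httpd code; keep_reading() and
the struct are hypothetical stand-ins for the new loop test and for
APR_BRIGADE_EMPTY()/APR_BUCKET_IS_EOS()):

/* The rewritten condition enters the loop at least once whenever nothing
 * has been buffered yet (the prefetch_nonblocking/fallback case), and
 * stops once an EOS bucket is buffered or the spool limit is near. */
#include <stdio.h>

#define MAX_MEM_SPOOL 16384

struct brigade { int empty; int last_is_eos; };   /* toy stand-in */

static int keep_reading(long bytes_read, const struct brigade *bb)
{
    return bytes_read < MAX_MEM_SPOOL - 80
           && (bb->empty                  /* like APR_BRIGADE_EMPTY() */
               || !bb->last_is_eos);      /* like !APR_BUCKET_IS_EOS(last) */
}

int main(void)
{
    struct brigade first_pass = { 1, 0 };  /* nothing prefetched yet */
    struct brigade eos_seen   = { 0, 1 };  /* whole body already buffered */

    printf("first pass enters loop: %d\n", keep_reading(0, &first_pass)); /* 1 */
    printf("after EOS enters loop:  %d\n", keep_reading(30, &eos_seen));  /* 0 */
    return 0;
}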

There is a corner case though: if balancer members don't have the same
Proxy100Continue configuration, we may enter ap_proxy_http_prefetch()
with a different req->prefetch_nonblocking, and mainly a different
req->expecting_100 too. Since we can rely on the HTTP_IN filter to do
100-continue above the first call, in the case where the first balancer
member reached is "Proxy100Continue on" but the fallback is "off", we
could deadlock (similarly to r1868576).
Possibly it's too much of a corner case, but it looks easy enough to
address in this patch (by ignoring "Proxy100Continue off" on the
fallback), so maybe this v2 (attached) instead?


Regards,
Yann.
Index: modules/proxy/mod_proxy_http.c
===================================================================
--- modules/proxy/mod_proxy_http.c	(revision 1869076)
+++ modules/proxy/mod_proxy_http.c	(working copy)
@@ -648,9 +660,12 @@ static int ap_proxy_http_prefetch(proxy_http_req_t
      * us an instant C-L election if the body is of some
      * reasonable size.
      */
+    apr_brigade_length(input_brigade, 1, &bytes_read);
     temp_brigade = apr_brigade_create(p, bucket_alloc);
     block = req->prefetch_nonblocking ? APR_NONBLOCK_READ : APR_BLOCK_READ;
-    do {
+    while (bytes_read < MAX_MEM_SPOOL - 80
+           && (APR_BRIGADE_EMPTY(input_brigade)
+               || !APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(input_brigade)))) {
         status = ap_get_brigade(r->input_filters, temp_brigade,
                                 AP_MODE_READBYTES, block,
                                 MAX_MEM_SPOOL - bytes_read);
@@ -710,9 +725,7 @@ static int ap_proxy_http_prefetch(proxy_http_req_t
          * surrender once we hit 80 bytes less than MAX_MEM_SPOOL
          * (an arbitrary value.)
          */
-    } while ((bytes_read < MAX_MEM_SPOOL - 80)
-             && !APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(input_brigade))
-             && !req->prefetch_nonblocking);
+    }
 
     /* Use chunked request body encoding or send a content-length body?
      *
@@ -1972,6 +1965,7 @@ static int proxy_http_handler(request_rec *r, prox
     const char *u;
     proxy_http_req_t *req = NULL;
     proxy_conn_rec *backend = NULL;
+    apr_bucket_brigade *input_brigade = NULL;
     int is_ssl = 0;
     conn_rec *c = r->connection;
     proxy_dir_conf *dconf;
@@ -2036,9 +2030,10 @@ static int proxy_http_handler(request_rec *r, prox
     req->rb_method = RB_INIT;
 
     dconf = ap_get_module_config(r->per_dir_config, &proxy_module);
+    apr_pool_userdata_get((void **)&input_brigade, "proxy-req-input", p);
 
     /* Should we handle end-to-end or ping 100-continue? */
-    if ((r->expecting_100 && dconf->forward_100_continue)
+    if ((r->expecting_100 && (dconf->forward_100_continue || input_brigade))
         || PROXY_DO_100_CONTINUE(worker, r)) {
         /* We need to reset r->expecting_100 or prefetching will cause
          * ap_http_filter() to send "100 Continue" response by itself. So
@@ -2090,8 +2085,12 @@ static int proxy_http_handler(request_rec *r, prox
      * to reduce to the minimum the unavoidable local is_socket_connected() vs
      * remote keepalive race condition.
      */
-    req->input_brigade = apr_brigade_create(p, req->bucket_alloc);
     req->header_brigade = apr_brigade_create(p, req->bucket_alloc);
+    req->input_brigade = input_brigade;
+    if (req->input_brigade == NULL) {
+        req->input_brigade = apr_brigade_create(p, req->bucket_alloc);
+        apr_pool_userdata_setn(req->input_brigade, "proxy-req-input", NULL, p);
+    }
     if ((status = ap_proxy_http_prefetch(req, uri, locurl)) != OK)
         goto cleanup;
 


Re: svn commit: r1860166 - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http2/ modules/proxy/ server/

2019-10-29 Thread Rainer Jung
Double check: the condition in the do-while loop that was changed to a
while loop has also changed:


FROM

do { ... }
while ((bytes_read < MAX_MEM_SPOOL - 80)
       && !APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(input_brigade))
       && !req->prefetch_nonblocking);

TO

while ((bytes_read < MAX_MEM_SPOOL - 80)
       && (APR_BRIGADE_EMPTY(input_brigade)
           || !APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(input_brigade))))
{ ... }


That's intended?

Regards,

Rainer

On 2019-10-29 16:23, Rainer Jung wrote:

On 2019-10-29 16:19, Yann Ylavic wrote:

Hi Rainer,

thanks for looking at this.



Aha, and this is due to the fact that r1656259 "mod_proxy_http: don't
connect or reuse backend before prefetching request body.", or parts of
it, was backported from trunk to 2.4 as part of r1860166.


Yes, that's where prefetch was moved before connect, but we shouldn't
change that IMHO, besides fixing bugs...

Let me look at how we can reuse the input_brigade between balancers. I
have a small patch already and am testing it.



So I think (not yet verified) that the same problem applies to trunk
since r1656259 in February 2015 :(


Yes, I'm fixing on trunk (first).


Thank you Yann. Let me know when I should test something. It's OK if it
is not yet the final fix ;)


Regards,

Rainer


Re: svn commit: r1860166 - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http2/ modules/proxy/ server/

2019-10-29 Thread Rainer Jung

Hi Yann,

On 2019-10-29 16:58, Yann Ylavic wrote:

On Tue, Oct 29, 2019 at 4:24 PM Rainer Jung wrote:


Thank you Yann. Let me know when I should test something. It's OK if it
is not yet the final fix ;)


The attached patch seems to work for me..


LGTM. I applied/ported it to 2.4.x; it fixes the problem for me as well
and looks local enough to not introduce new problems.


Thanks!

Rainer


Re: svn commit: r1860166 - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http2/ modules/proxy/ server/

2019-10-29 Thread Yann Ylavic
On Tue, Oct 29, 2019 at 4:24 PM Rainer Jung wrote:
>
> Thank you Yann. Let me know when I should test something. It's OK if it
> is not yet the final fix ;)

The attached patch seems to work for me..
Index: modules/proxy/mod_proxy_http.c
===================================================================
--- modules/proxy/mod_proxy_http.c	(revision 1869076)
+++ modules/proxy/mod_proxy_http.c	(working copy)
@@ -648,9 +648,12 @@ static int ap_proxy_http_prefetch(proxy_http_req_t
      * us an instant C-L election if the body is of some
      * reasonable size.
      */
+    apr_brigade_length(input_brigade, 1, &bytes_read);
     temp_brigade = apr_brigade_create(p, bucket_alloc);
     block = req->prefetch_nonblocking ? APR_NONBLOCK_READ : APR_BLOCK_READ;
-    do {
+    while ((bytes_read < MAX_MEM_SPOOL - 80)
+           && (APR_BRIGADE_EMPTY(input_brigade)
+               || !APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(input_brigade)))) {
         status = ap_get_brigade(r->input_filters, temp_brigade,
                                 AP_MODE_READBYTES, block,
                                 MAX_MEM_SPOOL - bytes_read);
@@ -710,9 +713,7 @@ static int ap_proxy_http_prefetch(proxy_http_req_t
          * surrender once we hit 80 bytes less than MAX_MEM_SPOOL
          * (an arbitrary value.)
          */
-    } while ((bytes_read < MAX_MEM_SPOOL - 80)
-             && !APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(input_brigade))
-             && !req->prefetch_nonblocking);
+    }
 
     /* Use chunked request body encoding or send a content-length body?
      *
@@ -2090,8 +2091,12 @@ static int proxy_http_handler(request_rec *r, prox
      * to reduce to the minimum the unavoidable local is_socket_connected() vs
      * remote keepalive race condition.
      */
-    req->input_brigade = apr_brigade_create(p, req->bucket_alloc);
     req->header_brigade = apr_brigade_create(p, req->bucket_alloc);
+    apr_pool_userdata_get((void **)&req->input_brigade, "proxy-req-input", p);
+    if (req->input_brigade == NULL) {
+        req->input_brigade = apr_brigade_create(p, req->bucket_alloc);
+        apr_pool_userdata_setn(req->input_brigade, "proxy-req-input", NULL, p);
+    }
     if ((status = ap_proxy_http_prefetch(req, uri, locurl)) != OK)
         goto cleanup;
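
The retry-persistence idiom above boils down to APR pool userdata: look
the brigade up by key first, and only create-and-stash it when absent.
A minimal sketch under that assumption ("proxy-req-input" is the key the
patch uses; get_input_brigade() is a hypothetical helper, not httpd API):

#include <apr_pools.h>
#include <apr_buckets.h>

static apr_bucket_brigade *get_input_brigade(apr_pool_t *p,
                                             apr_bucket_alloc_t *ba)
{
    apr_bucket_brigade *bb = NULL;

    /* Did an earlier (failed) attempt already stash the brigade? */
    apr_pool_userdata_get((void **)&bb, "proxy-req-input", p);
    if (bb == NULL) {
        /* First attempt: create it and register it under the same key.
         * _setn() stores the key pointer without copying, which is fine
         * for a string literal. */
        bb = apr_brigade_create(p, ba);
        apr_pool_userdata_setn(bb, "proxy-req-input", NULL, p);
    }
    return bb;   /* every retry within this request sees the same buckets */
}

Since the userdata lives in the request pool p, the stashed brigade (and
whatever body buckets it holds) survives exactly as long as the request,
including any balancer fallback attempts within it.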
 


Re: svn commit: r1860166 - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http2/ modules/proxy/ server/

2019-10-29 Thread Rainer Jung

On 2019-10-29 16:19, Yann Ylavic wrote:

Hi Rainer,

thanks for looking at this.



Aha, and this is due to the fact that r1656259 "mod_proxy_http: don't
connect or reuse backend before prefetching request body.", or parts of
it, was backported from trunk to 2.4 as part of r1860166.


Yes, that's where prefetch was moved before connect, but we shouldn't
change that IMHO, besides fixing bugs...

Let me look at how we can reuse the input_brigade between balancers. I
have a small patch already and am testing it.



So I think (not yet verified) that the same problem applies to trunk
since r1656259 in February 2015 :(


Yes, I'm fixing on trunk (first).


Thank you Yann. Let me know when I should test something. It's OK if it
is not yet the final fix ;)


Regards,

Rainer



Re: svn commit: r1860166 - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http2/ modules/proxy/ server/

2019-10-29 Thread Yann Ylavic
Hi Rainer,

thanks for looking at this.

>
> Aha, and this is due to the fact that r1656259 "mod_proxy_http: don't
> connect or reuse backend before prefetching request body.", or parts of
> it, was backported from trunk to 2.4 as part of r1860166.

Yes, that's where prefetch was moved before connect, but we shouldn't
change that IMHO, besides fixing bugs...

Let me look at how we can reuse the input_brigade between balancers. I
have a small patch already and am testing it.

>
> So I think (not yet verified) that the same problem applies to trunk
> since r1656259 in February 2015 :(

Yes, I'm fixing on trunk (first).


Regards,
Yann.


Re: svn commit: r1860166 - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http2/ modules/proxy/ server/

2019-10-29 Thread Stefan Eissing



> On 2019-10-29 14:45, Rainer Jung wrote:
> 
> Aha, and this is due to the fact that r1656259 "mod_proxy_http: don't
> connect or reuse backend before prefetching request body.", or parts of it,
> was backported from trunk to 2.4 as part of r1860166.
> 
> So I think (not yet verified) that the same problem applies to trunk since
> r1656259 in February 2015 :(

trunk is the root of all problems!

> Regards,
> 
> Rainer
> 
> On 2019-10-29 14:21, Rainer Jung wrote:
>> The reason why this fails now is that in 2.4.41 we prefetch the request
>> body before doing the connection check to the backend. In 2.4.39 we did
>> that after the check, so the body was still there when the request was
>> finally sent.
>> 2.4.39:
>> In proxy_http_handler():
>> - ap_proxy_determine_connection()
>> - ap_proxy_check_connection()
>> - optionally ap_proxy_connect_backend() which might fail.
>> - ap_proxy_connection_create_ex()
>> - ap_proxy_http_request() which does prefetch late!
>> 2.4.41:
>> In proxy_http_handler():
>> - ap_proxy_determine_connection()
>> - early ap_proxy_http_prefetch() which does the prefetch!
>> - optionally again ap_proxy_determine_connection()
>> - ap_proxy_check_connection()
>> - optionally ap_proxy_connect_backend() which might fail.
>> - ap_proxy_connection_create_ex()
>> - ap_proxy_http_request()
>> So we either need to remember the prefetch result for later retries or 
>> switch back to the old order.
>> Regards,
>> Rainer
>> On 2019-10-29 12:46, Rainer Jung wrote:
>>> This happens in the case of a small body. We read the body into
>>> req->input_brigade in ap_proxy_http_prefetch() before trying the first
>>> node, but then lose it on the second node, because we use another req,
>>> and thus also another req->input_brigade, for the retry.
>>> 
>>> Not sure how we could best save the already-read input_brigade for the
>>> second attempt after failover. Will try some experiments.
>>> 
>>> If you try to reproduce it yourself, make sure you use a small POST (here:
>>> 30 bytes) and also exclude /favicon.ico from forwarding when using a
>>> browser. Otherwise some of the failovers will be triggered by favicon.ico
>>> and you won't notice the problem in the POST request:
>>> 
>>> ProxyPass /favicon.ico !
>>> 
>>> Regards,
>>> 
>>> Rainer
>>> 
On 2019-10-29 11:15, Rainer Jung wrote:
A first heads-up: it seems this commit broke failover for POST requests.
Most (or all?) of the time when a balancer failover happens for a POST
request, the request sent to the failover node has a Content-Length of "0"
instead of the real content length.
 
I use a trivial setup like this:

<Proxy balancer://backends>
    ProxySet lbmethod=byrequests
    BalancerMember http://localhost:5680
    BalancerMember http://localhost:5681
</Proxy>

ProxyPass / balancer://backends/
 
 where one backend node is up and the second node is down.
 
 I will investigate further.
 
 Regards,
 
 Rainer
> 
> -- 
> kippdata
> informationstechnologie GmbH   Tel: 0228 98549 -0
> Bornheimer Str. 33a                Fax: 0228 98549 -50
> 53111 Bonn www.kippdata.de
> 
> HRB 8018 Amtsgericht Bonn / USt.-IdNr. DE 196 457 417
> Geschäftsführer: Dr. Thomas Höfer, Rainer Jung, Sven Maurmann



Re: svn commit: r1860166 - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http2/ modules/proxy/ server/

2019-10-29 Thread Jim Jagielski
Yikes! How can we test for that via the Perl test framework?

> On Oct 29, 2019, at 9:45 AM, Rainer Jung wrote:
> 
> Aha, and this is due to the fact that r1656259 "mod_proxy_http: don't
> connect or reuse backend before prefetching request body.", or parts of it,
> was backported from trunk to 2.4 as part of r1860166.
> 
> So I think (not yet verified) that the same problem applies to trunk since
> r1656259 in February 2015 :(
> 
> Regards,
> 
> Rainer
> 
> On 2019-10-29 14:21, Rainer Jung wrote:
>> The reason why this fails now is that in 2.4.41 we prefetch the request
>> body before doing the connection check to the backend. In 2.4.39 we did
>> that after the check, so the body was still there when the request was
>> finally sent.
>> 2.4.39:
>> In proxy_http_handler():
>> - ap_proxy_determine_connection()
>> - ap_proxy_check_connection()
>> - optionally ap_proxy_connect_backend() which might fail.
>> - ap_proxy_connection_create_ex()
>> - ap_proxy_http_request() which does prefetch late!
>> 2.4.41:
>> In proxy_http_handler():
>> - ap_proxy_determine_connection()
>> - early ap_proxy_http_prefetch() which does the prefetch!
>> - optionally again ap_proxy_determine_connection()
>> - ap_proxy_check_connection()
>> - optionally ap_proxy_connect_backend() which might fail.
>> - ap_proxy_connection_create_ex()
>> - ap_proxy_http_request()
>> So we either need to remember the prefetch result for later retries or 
>> switch back to the old order.
>> Regards,
>> Rainer
>> On 2019-10-29 12:46, Rainer Jung wrote:
>>> This happens in the case of a small body. We read the body into
>>> req->input_brigade in ap_proxy_http_prefetch() before trying the first
>>> node, but then lose it on the second node, because we use another req,
>>> and thus also another req->input_brigade, for the retry.
>>> 
>>> Not sure how we could best save the already-read input_brigade for the
>>> second attempt after failover. Will try some experiments.
>>> 
>>> If you try to reproduce it yourself, make sure you use a small POST (here:
>>> 30 bytes) and also exclude /favicon.ico from forwarding when using a
>>> browser. Otherwise some of the failovers will be triggered by favicon.ico
>>> and you won't notice the problem in the POST request:
>>> 
>>> ProxyPass /favicon.ico !
>>> 
>>> Regards,
>>> 
>>> Rainer
>>> 
On 2019-10-29 11:15, Rainer Jung wrote:
A first heads-up: it seems this commit broke failover for POST requests.
Most (or all?) of the time when a balancer failover happens for a POST
request, the request sent to the failover node has a Content-Length of "0"
instead of the real content length.
 
I use a trivial setup like this:

<Proxy balancer://backends>
    ProxySet lbmethod=byrequests
    BalancerMember http://localhost:5680
    BalancerMember http://localhost:5681
</Proxy>

ProxyPass / balancer://backends/
 
 where one backend node is up and the second node is down.
 
 I will investigate further.
 
 Regards,
 
 Rainer
> 
> -- 
> kippdata
> informationstechnologie GmbH   Tel: 0228 98549 -0
> Bornheimer Str. 33a                Fax: 0228 98549 -50
> 53111 Bonn www.kippdata.de
> 
> HRB 8018 Amtsgericht Bonn / USt.-IdNr. DE 196 457 417
> Geschäftsführer: Dr. Thomas Höfer, Rainer Jung, Sven Maurmann



Re: svn commit: r1860166 - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http2/ modules/proxy/ server/

2019-10-29 Thread Rainer Jung
Aha, and this is due to the fact that r1656259 "mod_proxy_http: don't
connect or reuse backend before prefetching request body.", or parts of
it, was backported from trunk to 2.4 as part of r1860166.


So I think (not yet verified) that the same problem applies to trunk
since r1656259 in February 2015 :(


Regards,

Rainer

On 2019-10-29 14:21, Rainer Jung wrote:
The reason why this fails now is that in 2.4.41 we prefetch the request
body before doing the connection check to the backend. In 2.4.39 we did
that after the check, so the body was still there when the request was
finally sent.


2.4.39:

In proxy_http_handler():
- ap_proxy_determine_connection()
- ap_proxy_check_connection()
- optionally ap_proxy_connect_backend() which might fail.
- ap_proxy_connection_create_ex()
- ap_proxy_http_request() which does prefetch late!

2.4.41:

In proxy_http_handler():
- ap_proxy_determine_connection()
- early ap_proxy_http_prefetch() which does the prefetch!
- optionally again ap_proxy_determine_connection()
- ap_proxy_check_connection()
- optionally ap_proxy_connect_backend() which might fail.
- ap_proxy_connection_create_ex()
- ap_proxy_http_request()

So we either need to remember the prefetch result for later retries or 
switch back to the old order.


Regards,

Rainer

On 2019-10-29 12:46, Rainer Jung wrote:
This happens in the case of a small body. We read the body into
req->input_brigade in ap_proxy_http_prefetch() before trying the first
node, but then lose it on the second node, because we use another req,
and thus also another req->input_brigade, for the retry.


Not sure how we could best save the already-read input_brigade for the
second attempt after failover. Will try some experiments.


If you try to reproduce it yourself, make sure you use a small POST
(here: 30 bytes) and also exclude /favicon.ico from forwarding when
using a browser. Otherwise some of the failovers will be triggered by
favicon.ico and you won't notice the problem in the POST request:


ProxyPass /favicon.ico !

Regards,

Rainer

On 2019-10-29 11:15, Rainer Jung wrote:
A first heads-up: it seems this commit broke failover for POST
requests. Most (or all?) of the time when a balancer failover happens
for a POST request, the request sent to the failover node has a
Content-Length of "0" instead of the real content length.


I use a trivial setup like this:

<Proxy balancer://backends>
    ProxySet lbmethod=byrequests
    BalancerMember http://localhost:5680
    BalancerMember http://localhost:5681
</Proxy>

ProxyPass / balancer://backends/

where one backend node is up and the second node is down.

I will investigate further.

Regards,

Rainer


--
kippdata
informationstechnologie GmbH   Tel: 0228 98549 -0
Bornheimer Str. 33a                Fax: 0228 98549 -50
53111 Bonn www.kippdata.de

HRB 8018 Amtsgericht Bonn / USt.-IdNr. DE 196 457 417
Geschäftsführer: Dr. Thomas Höfer, Rainer Jung, Sven Maurmann


Re: svn commit: r1860166 - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http2/ modules/proxy/ server/

2019-10-29 Thread Rainer Jung
The reason why this fails now is that in 2.4.41 we prefetch the request
body before doing the connection check to the backend. In 2.4.39 we did
that after the check, so the body was still there when the request was
finally sent.


2.4.39:

In proxy_http_handler():
- ap_proxy_determine_connection()
- ap_proxy_check_connection()
- optionally ap_proxy_connect_backend() which might fail.
- ap_proxy_connection_create_ex()
- ap_proxy_http_request() which does prefetch late!

2.4.41:

In proxy_http_handler():
- ap_proxy_determine_connection()
- early ap_proxy_http_prefetch() which does the prefetch!
- optionally again ap_proxy_determine_connection()
- ap_proxy_check_connection()
- optionally ap_proxy_connect_backend() which might fail.
- ap_proxy_connection_create_ex()
- ap_proxy_http_request()

So we either need to remember the prefetch result for later retries or 
switch back to the old order.


Regards,

Rainer

On 2019-10-29 12:46, Rainer Jung wrote:
This happens in the case of a small body. We read the body into
req->input_brigade in ap_proxy_http_prefetch() before trying the first
node, but then lose it on the second node, because we use another req,
and thus also another req->input_brigade, for the retry.


Not sure how we could best save the already-read input_brigade for the
second attempt after failover. Will try some experiments.


If you try to reproduce it yourself, make sure you use a small POST
(here: 30 bytes) and also exclude /favicon.ico from forwarding when
using a browser. Otherwise some of the failovers will be triggered by
favicon.ico and you won't notice the problem in the POST request:


ProxyPass /favicon.ico !

Regards,

Rainer

On 2019-10-29 11:15, Rainer Jung wrote:
A first heads-up: it seems this commit broke failover for POST
requests. Most (or all?) of the time when a balancer failover happens
for a POST request, the request sent to the failover node has a
Content-Length of "0" instead of the real content length.


I use a trivial setup like this:

<Proxy balancer://backends>
    ProxySet lbmethod=byrequests
    BalancerMember http://localhost:5680
    BalancerMember http://localhost:5681
</Proxy>

ProxyPass / balancer://backends/

where one backend node is up and the second node is down.

I will investigate further.

Regards,

Rainer


Re: svn commit: r1860166 - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http2/ modules/proxy/ server/

2019-10-29 Thread Rainer Jung
This happens in the case of a small body. We read the body into
req->input_brigade in ap_proxy_http_prefetch() before trying the first
node, but then lose it on the second node, because we use another req,
and thus also another req->input_brigade, for the retry.
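
To make that concrete, a standalone APR sketch (body string and sizes
invented) of why a fresh brigade per attempt loses the prefetched body:

#include <string.h>
#include <stdio.h>
#include <apr_general.h>
#include <apr_buckets.h>

int main(void)
{
    apr_pool_t *p;
    apr_bucket_alloc_t *ba;
    apr_bucket_brigade *attempt1, *attempt2;
    apr_off_t len;
    const char *body = "name=value&submit=1";   /* small POST body */

    apr_initialize();
    apr_pool_create(&p, NULL);
    ba = apr_bucket_alloc_create(p);

    /* Attempt 1: prefetch buffers the body into its brigade. */
    attempt1 = apr_brigade_create(p, ba);
    apr_brigade_write(attempt1, NULL, NULL, body, strlen(body));
    apr_brigade_length(attempt1, 1, &len);
    printf("attempt 1 body length: %" APR_OFF_T_FMT "\n", len);   /* 19 */

    /* Failover: a new req gets a brand-new, empty brigade -- the body
     * read on attempt 1 is not in it, so Content-Length is elected 0. */
    attempt2 = apr_brigade_create(p, ba);
    apr_brigade_length(attempt2, 1, &len);
    printf("attempt 2 body length: %" APR_OFF_T_FMT "\n", len);   /* 0 */

    apr_terminate();
    return 0;
}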


Not sure how we could best save the already-read input_brigade for the
second attempt after failover. Will try some experiments.


If you try to reproduce it yourself, make sure you use a small POST
(here: 30 bytes) and also exclude /favicon.ico from forwarding when
using a browser. Otherwise some of the failovers will be triggered by
favicon.ico and you won't notice the problem in the POST request:


ProxyPass /favicon.ico !

Regards,

Rainer

On 2019-10-29 11:15, Rainer Jung wrote:
A first heads-up: it seems this commit broke failover for POST requests.
Most (or all?) of the time when a balancer failover happens for a POST
request, the request sent to the failover node has a Content-Length of
"0" instead of the real content length.


I use a trivial setup like this:

<Proxy balancer://backends>
    ProxySet lbmethod=byrequests
    BalancerMember http://localhost:5680
    BalancerMember http://localhost:5681
</Proxy>

ProxyPass / balancer://backends/

where one backend node is up and the second node is down.

I will investigate further.

Regards,

Rainer


Re: svn commit: r1860166 - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http2/ modules/proxy/ server/

2019-10-29 Thread Rainer Jung
A first heads-up: it seems this commit broke failover for POST requests.
Most (or all?) of the time when a balancer failover happens for a POST
request, the request sent to the failover node has a Content-Length of
"0" instead of the real content length.


I use a trivial setup like this:

<Proxy balancer://backends>
    ProxySet lbmethod=byrequests
    BalancerMember http://localhost:5680
    BalancerMember http://localhost:5681
</Proxy>

ProxyPass / balancer://backends/

where one backend node is up and the second node is down.

I will investigate further.

Regards,

Rainer

On 2019-05-28 01:19, minf...@apache.org wrote:

Author: minfrin
Date: Mon May 27 23:19:12 2019
New Revision: 1860166

URL: http://svn.apache.org/viewvc?rev=1860166&view=rev
Log:
mod_proxy_http: forward 100-continue, and minimize race conditions when
reusing backend connections. PR 60330.

+1: ylavic, icing, jim
ylavic: plus http://svn.apache.org/r1856036 (opt-out)
2.4.x patch: 
http://people.apache.org/~ylavic/patches/httpd-2.4.x-forward_100_continue-v6.patch
+1: ylavic, jim, minfrin

Modified:
 httpd/httpd/branches/2.4.x/CHANGES
 httpd/httpd/branches/2.4.x/STATUS
 httpd/httpd/branches/2.4.x/docs/manual/mod/mod_proxy.xml
 httpd/httpd/branches/2.4.x/include/ap_mmn.h
 httpd/httpd/branches/2.4.x/modules/http2/mod_proxy_http2.c
 httpd/httpd/branches/2.4.x/modules/proxy/mod_proxy.c
 httpd/httpd/branches/2.4.x/modules/proxy/mod_proxy.h
 httpd/httpd/branches/2.4.x/modules/proxy/mod_proxy_http.c
 httpd/httpd/branches/2.4.x/modules/proxy/mod_proxy_wstunnel.c
 httpd/httpd/branches/2.4.x/modules/proxy/proxy_util.c
 httpd/httpd/branches/2.4.x/server/protocol.c

Modified: httpd/httpd/branches/2.4.x/CHANGES
URL: 
http://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x/CHANGES?rev=1860166&r1=1860165&r2=1860166&view=diff
==============================================================================
--- httpd/httpd/branches/2.4.x/CHANGES [utf-8] (original)
+++ httpd/httpd/branches/2.4.x/CHANGES [utf-8] Mon May 27 23:19:12 2019
@@ -1,6 +1,9 @@
   -*- coding: utf-8 -*-
  Changes with Apache 2.4.40
  
+  *) mod_proxy_http: forward 100-continue, and minimize race conditions when
+     reusing backend connections. PR 60330. [Yann Ylavic, Jean-Frederic Clere]
+
   *) Easy patches: synch 2.4.x and trunk
      - core: 80 chars
      - http_core: Clean-up and style. No functional change overall

Modified: httpd/httpd/branches/2.4.x/STATUS
URL: 
http://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x/STATUS?rev=1860166&r1=1860165&r2=1860166&view=diff
==============================================================================
--- httpd/httpd/branches/2.4.x/STATUS (original)
+++ httpd/httpd/branches/2.4.x/STATUS Mon May 27 23:19:12 2019
@@ -128,41 +128,6 @@ PATCHES ACCEPTED TO BACKPORT FROM TRUNK:
[ start all new proposals below, under PATCHES PROPOSED. ]
  
  
-  *) mod_proxy_http: forward 100-continue, and minimize race conditions when
-     reusing backend connections. PR 60330.
-     trunk patch: http://svn.apache.org/r1656259
-                  http://svn.apache.org/r1656359
-                  http://svn.apache.org/r1721759
-                  http://svn.apache.org/r1729507
-                  http://svn.apache.org/r1729749
-                  http://svn.apache.org/r1754159
-                  http://svn.apache.org/r1836588
-                  http://svn.apache.org/r1836648
-                  http://svn.apache.org/r1836716
-                  http://svn.apache.org/r1836750
-                  http://svn.apache.org/r1837040
-                  http://svn.apache.org/r1853407
-                  http://svn.apache.org/r1853518
-                  http://svn.apache.org/r1853561
-                  http://svn.apache.org/r1853566
-                  http://svn.apache.org/r1853953
-                  http://svn.apache.org/r1853956
-     2.4.x patch: http://people.apache.org/~ylavic/patches/httpd-2.4.x-forward_100_continue-v4.patch
-     ylavic: please note the "s/ASCII_(CRLF|ZERO)/\1_ASCII/g" in the backport,
-             same as in trunk (where definitions come from httpd.h) so doing
-             the change now in 2.4.x helps backports. Since in 2.4.x these
-             macros are still locally #defined, I also added the #ifdefs around
-             them to avoid potential issues..
-             @icing: sorry I had to reset your vote for v4, but it looks
-             sensible to me to have trunk and 2.4.x code in sync as much as
-             possible. Changes from v3 to v4 (r1853953 mainly, r1853956 is only
-             comment) are non functional (or at least intended as such).
-     +1: ylavic, icing, jim
-     ylavic: plus http://svn.apache.org/r1856036 (opt-out)
-     2.4.x patch: http://people.apache.org/~ylavic/patches/httpd-2.4.x-forward_100_continue-v6.patch
-     +1: ylavic, jim, minfrin
-
-
  PATCHES PROPOSED TO BACKPORT FROM TRUNK:
[ New proposals should be added at the end of the list ]