Re: request_rec.unparsed_uri missing scheme and host. parsed_uri missing most fields

2019-05-14 Thread Sorin Manolache

On 14/05/2019 20.35, Paul Callahan wrote:

Hello,
I'm having trouble getting the full URI of a request from request_rec.
  The comment string for request_rec.unparsed_uri makes it sound like it
should have the entire URL, e.g. http://hostname/path?etc.

But it only has the path and the query parameters.

The parsed_uri struct is populated with the port, path and query parameters.
  Everything else (scheme, hostname, username, password, etc.) is NULL.

I set a breakpoint in "apr_uri_parse()" and verified the incoming *uri
field only has the path and query parameters.

Is this expected? How can I get the full URI?


Hello,

Yes, it is expected.

When the client (meaning a program, not a human) makes a request, it 
sends the following first line over the network connection:


GET /path?arg1=val1&arg2=val2 HTTP/1.1

(I assume here that it uses version 1.1 of the HTTP protocol.)

In HTTP/1.1 a "Host" header must be present (it is not required in 
HTTP/1.0, but there is little HTTP/1.0 traffic nowadays).


So you might get

GET /path?arg1=val1&arg2=val2 HTTP/1.1
Host: www.example.com

A browser will decompose the address 
http://www.example.com/path?arg1=val1&arg2=val2 that you type into its 
address bar and generate the two text lines shown above.


But the server will not receive the string 
http://www.example.com/path?arg1=val1&arg2=val2


Moreover, http:// or https:// is not sent by the client. It is the 
server (Apache) that determines (reconstructs) the scheme (i.e. http:// 
or https://) from the port and transport protocol (SSL/TLS or plain 
text) used by the request.
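To make that concrete, here is a minimal, self-contained sketch (not httpd code; reconstruct_url is an illustrative name) of how a server puts the full URL back together from the pieces it actually has: the scheme derived from the transport, the host from the Host header, and the path plus query string from the request line.

```c
#include <stdio.h>

/* Illustrative only -- not the httpd API. Rebuild the full URL from the
 * pieces the server knows: the scheme (derived from whether the connection
 * uses TLS), the Host header value, and the unparsed path + query string. */
static void reconstruct_url(char *out, size_t outlen,
                            int is_tls, const char *host,
                            const char *path_and_query)
{
    snprintf(out, outlen, "%s://%s%s",
             is_tls ? "https" : "http", host, path_and_query);
}
```

Inside an actual module you can let httpd do this reconstruction for you with ap_construct_url(r->pool, r->unparsed_uri, r), which uses the scheme, hostname and port that httpd determined for the request.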


The HTTP RFC (https://tools.ietf.org/html/rfc7230) has more details; 
section 5.3 in particular might be of interest to you.


HTH,
Sorin


request_rec.unparsed_uri missing scheme and host. parsed_uri missing most fields

2019-05-14 Thread Paul Callahan
Hello,
I'm having trouble getting the full URI of a request from request_rec.
 The comment string for request_rec.unparsed_uri makes it sound like it
should have the entire URL, e.g. http://hostname/path?etc.

But it only has the path and the query parameters.

The parsed_uri struct is populated with the port, path and query parameters.
 Everything else (scheme, hostname, username, password, etc.) is NULL.

I set a breakpoint in "apr_uri_parse()" and verified the incoming *uri
field only has the path and query parameters.

Is this expected? How can I get the full URI?

Thanks


Re: Questions with mod_proxy_http2

2019-05-14 Thread Ruediger Pluem



On 05/14/2019 09:38 AM, Stefan Eissing wrote:
> 
> 
>> On 14.05.2019 at 09:30, Ruediger Pluem wrote:
>>
>>
>>
>> On 05/13/2019 03:30 PM, Stefan Eissing wrote:
>>>
>>>
>>>> On 13.05.2019 at 14:42, Plüm, Rüdiger, Vodafone Group wrote:
>>>>
>>>> I recently started using mod_proxy_http2 and some questions popped up for me:
>>>>
>>>> 1. Why do we retry 5 times hardcoded (leading to AH10023 in case we fail)?
>>>>    The other protocols only retry once if the ping failed (an additional
>>>>    Expect: 100-continue header in the HTTP case, a PING packet in the AJP case).
>>>
>>> I think that is buried in history. I cannot think of a good reason for it.
>>
>> Thanks for confirmation. For trunk my immediate idea would be to change '5'
>> to '2' and thus be somewhat in line with the other schemes. But I guess this
>> is not possible for 2.4.x due to compatibility concerns. I guess we need to
>> make it configurable there with a default of 5. Opinions?
> 
> mod_proxy_http2 is still experimental. Feel free to change anything that 
> improves things in your setup!

Done on trunk in r1859213. So, any objections if I backport this straight to 
2.4.x?

> 
>>>
>>>> 2. Could we try to leverage the ping configuration parameter for workers
>>>>    used by HTTP / AJP by sending a PING frame on the HTTP/2 backend
>>>>    connection and wait for the reply for the configured seconds in ping,
>>>>    retry once if failed with a new connection and return 503 if failed again,
>>>>    or continue processing if things were fine?
>>>
>>> Is that something the module can provide directly or is this for the 
>>> proxy/balancer infrastructure?
>>
>> This is something the module provides. It just needs to reply with a 503 if
>> the worker is seen as faulty. This causes a possibly configured loadbalancer
>> that has this worker as a member to fail over to a different member.
>>
>> I understand the module's current behavior as follows:
>>
>> 1. If the TCP connection is reused and no frame was received within the last
>>    second, a PING frame is added before the stream for the request is set up.
>> 2. The request body, if any, is not sent until the PING reply has arrived.
>> 3. If the PING reply does not arrive, reply with 503.
> 
> Yes, that was my intention. I added it to prevent a race with a GOAWAY frame 
> from the backend (e.g. its keepalive timed out).
> 
>> The issue I see with the above is:
>>
>> 1. Once the request is sent, only idempotent requests could be sent to
>>    another worker in case we have a loadbalancer setup. Once a
>>    non-idempotent request is sent, we cannot be sure whether it was
>>    processed somehow by the backend.
>> 2. Even if a TCP connection can be established for new connections, it is
>>    not clear whether the backend is ready to process requests due to things
>>    like backlogs, deferred accept setups or separate accept threads. Using a
>>    different short ping timeout removes these uncertainties. With regards to
>>    the ping overhead, this should only be done if the admin configured the
>>    ping and hence is aware of the additional overhead with respect to
>>    traffic and latency.
>>
>> A similar behavior to the other modules would be
>>
>> 1. Send the PING frame no matter whether it is a new or an existing
>>    connection, in case the ping option is configured on the worker.
>> 2. Wait for the time configured in the ping option for the PING reply to
>>    arrive. If it does not arrive, reply with a 503; if it does, continue
>>    with the request.
> 
> I agree with your observation in regard to idempotency. Holding back the body 
> alone is not enough. Always sending a PING simplifies things, although for a 
> new connection the arrival of the remote SETTINGS frames should be indication 
> enough that the HTTP/2 connection is sound. But since latency to the backend 
> is usually low, this may not be an issue.
> 

I guess seeing any frame returning from the backend within the timeout set by 
ping before sending the request should be enough, but PING seems to be the 
correct thing to use for this kind of test. The important thing is that, if 
configured, we wait for the ping time to see something from the backend before 
we send the request.
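The gating being discussed could be sketched as follows (names and structure are illustrative, not the actual mod_proxy_http2 code; frame_seen stands in for "some frame, ideally the PING ack, arrived from the backend within the configured ping timeout"):

```c
#define OK                       0
#define HTTP_SERVICE_UNAVAILABLE 503

/* Illustrative sketch of the proposed gating, not the real module code. */
static int gate_request(int ping_configured, int frame_seen)
{
    if (!ping_configured) {
        /* No ping configured: avoid the extra round trip and its latency,
         * forward the request directly. */
        return OK;
    }
    if (!frame_seen) {
        /* Backend stayed silent within the ping window: answer 503 so a
         * loadbalancer can fail over to another member before the
         * (possibly non-idempotent) request is sent. */
        return HTTP_SERVICE_UNAVAILABLE;
    }
    return OK; /* Backend responsive: forward the request. */
}
```

The point of the sketch is that the 503 decision happens before the request leaves the proxy, which sidesteps the idempotency problem described above.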

Regards

Rüdiger



Arranging mod_proxy_balancer to make it provider of balancer/worker.

2019-05-14 Thread jean-frederic clere
Hi,

I would like to be able to add workers/balancers from another module
(mod_proxy_cluster), basically by using a part of balancer_handler() to make
a provider (like insert_update_worker(params)). Any objections or
suggestions?
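For what it's worth, httpd's generic provider mechanism (ap_register_provider / ap_lookup_provider in ap_provider.h) would be the natural vehicle for this. Below is a self-contained sketch of that pattern; the registry and the insert_update_worker signature are purely illustrative, not the proposed API:

```c
#include <string.h>

/* Self-contained sketch of the provider pattern httpd uses (the real API
 * is ap_register_provider/ap_lookup_provider in ap_provider.h). */
typedef struct {
    const char *group;    /* e.g. "proxy" */
    const char *name;     /* e.g. "balancer" */
    const void *provider; /* table of functions exported to other modules */
} provider_entry;

static provider_entry registry[16];
static int nregistered;

static void register_provider(const char *group, const char *name,
                              const void *provider)
{
    registry[nregistered].group = group;
    registry[nregistered].name = name;
    registry[nregistered].provider = provider;
    nregistered++;
}

static const void *lookup_provider(const char *group, const char *name)
{
    for (int i = 0; i < nregistered; i++) {
        if (strcmp(registry[i].group, group) == 0 &&
            strcmp(registry[i].name, name) == 0)
            return registry[i].provider;
    }
    return 0;
}

/* Hypothetical table mod_proxy_balancer could export so that
 * mod_proxy_cluster can add/update workers without copying code. */
typedef struct {
    int (*insert_update_worker)(const char *balancer, const char *worker_url);
} balancer_provider;
```

A consumer such as mod_proxy_cluster would then look up the provider at post-config time and call through the returned function table.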

-- 
Cheers

Jean-Frederic


Re: Questions with mod_proxy_http2

2019-05-14 Thread Stefan Eissing



> On 14.05.2019 at 09:30, Ruediger Pluem wrote:
> 
> 
> 
> On 05/13/2019 03:30 PM, Stefan Eissing wrote:
>> 
>> 
>>> On 13.05.2019 at 14:42, Plüm, Rüdiger, Vodafone Group wrote:
>>> 
>>> I recently started using mod_proxy_http2 and some questions popped up for 
>>> me:
>>> 
>>> 1. Why do we retry 5 times hardcoded (leading to AH10023 in case we fail)?
>>>    The other protocols only retry once if the ping failed (an additional
>>>    Expect: 100-continue header in the HTTP case, a PING packet in the AJP case).
>> 
>> I think that is buried in history. I cannot think of a good reason for it.
> 
> Thanks for confirmation. For trunk my immediate idea would be to change '5'
> to '2' and thus be somewhat in line with the other schemes. But I guess this
> is not possible for 2.4.x due to compatibility concerns. I guess we need to
> make it configurable there with a default of 5. Opinions?

mod_proxy_http2 is still experimental. Feel free to change anything that 
improves things in your setup!

>> 
>>> 2. Could we try to leverage the ping configuration parameter for workers 
>>> used by HTTP / AJP by sending a PING frame on the HTTP/2 backend
>>>  connection and wait for the reply for the configured seconds in ping, 
>>> retry once if failed with a new connection and return 503 if failed again
>>>  or continue processing if things were fine?
>> 
>> Is that something the module can provide directly or is this for the 
>> proxy/balancer infrastructure?
> 
> This is something the module provides. It just needs to reply with a 503 if
> the worker is seen as faulty. This causes a possibly configured loadbalancer
> that has this worker as a member to fail over to a different member.
> 
> I understand the module's current behavior as follows:
> 
> 1. If the TCP connection is reused and no frame was received within the last
>    second, a PING frame is added before the stream for the request is set up.
> 2. The request body, if any, is not sent until the PING reply has arrived.
> 3. If the PING reply does not arrive, reply with 503.

Yes, that was my intention. I added it to prevent a race with a GOAWAY frame 
from the backend (e.g. its keepalive timed out).

> The issue I see with the above is:
> 
> 1. Once the request is sent, only idempotent requests could be sent to
>    another worker in case we have a loadbalancer setup. Once a
>    non-idempotent request is sent, we cannot be sure whether it was
>    processed somehow by the backend.
> 2. Even if a TCP connection can be established for new connections, it is
>    not clear whether the backend is ready to process requests due to things
>    like backlogs, deferred accept setups or separate accept threads. Using a
>    different short ping timeout removes these uncertainties. With regards to
>    the ping overhead, this should only be done if the admin configured the
>    ping and hence is aware of the additional overhead with respect to
>    traffic and latency.
> 
> A similar behavior to the other modules would be
> 
> 1. Send the PING frame no matter whether it is a new or an existing
>    connection, in case the ping option is configured on the worker.
> 2. Wait for the time configured in the ping option for the PING reply to
>    arrive. If it does not arrive, reply with a 503; if it does, continue
>    with the request.

I agree with your observation in regard to idempotency. Holding back the body 
alone is not enough. Always sending a PING simplifies things, although for a 
new connection the arrival of the remote SETTINGS frames should be indication 
enough that the HTTP/2 connection is sound. But since latency to the backend 
is usually low, this may not be an issue.

- Stefan

> 
> Regards
> 
> Rüdiger



Re: mod_md version 2

2019-05-14 Thread Stefan Eissing
Thanks!

> On 14.05.2019 at 09:02, Ruediger Pluem wrote:
> 
> 
> 
> On 05/06/2019 02:53 PM, Stefan Eissing wrote:
>> Heya,
>> 
>> the beautiful people at MOSS, Mozilla's Open Source Support, decided to give 
>> me a grant for Let's Encrypt and Stapling improvements in Apache! Big thanks!
>> 
>> I described what I plan to do here: 
>> https://github.com/icing/mod_md/wiki/V2Design
>> 
>> There are also github issues for collecting feedback and I pointed people to 
>> the Apache users mailing list as well.
>> 
>> Besides the support for ACMEv2, which is in scope for the module, I plan to
>> add a new OCSP stapling implementation in the module as well. That may lead
>> to some head scratching here, and I want to explain my reasoning and,
>> ideally, get feedback from you.
> 
> Great to hear this. I dug out some discussions from the past that might be 
> useful (some even started by you :-)):
> 
> https://lists.apache.org/thread.html/1a61e9dfbd685c4102b097e8189bccb7d5da39bf9f32fcbe7407a760@%3Cdev.httpd.apache.org%3E
> 
> https://lists.apache.org/thread.html/040a5ef30dbe7649b88c24cd9716eaf4c47d2d800f4a6858508d4fab@%3Cdev.httpd.apache.org%3E
> 
> 
> Regards
> 
> Rüdiger
> 



Re: Questions with mod_proxy_http2

2019-05-14 Thread Ruediger Pluem



On 05/13/2019 03:30 PM, Stefan Eissing wrote:
> 
> 
>> On 13.05.2019 at 14:42, Plüm, Rüdiger, Vodafone Group wrote:
>>
>> I recently started using mod_proxy_http2 and some questions popped up for me:
>>
>> 1. Why do we retry 5 times hardcoded (leading to AH10023 in case we fail)?
>>    The other protocols only retry once if the ping failed (an additional
>>    Expect: 100-continue header in the HTTP case, a PING packet in the AJP case).
> 
> I think that is buried in history. I cannot think of a good reason for it.

Thanks for confirmation. For trunk my immediate idea would be to change '5' to 
'2' and thus be somewhat in line with the other schemes. But I guess this is 
not possible for 2.4.x due to compatibility concerns. I guess we need to make 
it configurable there with a default of 5. Opinions?

> 
>> 2. Could we try to leverage the ping configuration parameter for workers 
>> used by HTTP / AJP by sending a PING frame on the HTTP/2 backend
>>   connection and wait for the reply for the configured seconds in ping, 
>> retry once if failed with a new connection and return 503 if failed again
>>   or continue processing if things were fine?
> 
> Is that something the module can provide directly or is this for the 
> proxy/balancer infrastructure?

This is something the module provides. It just needs to reply with a 503 if 
the worker is seen as faulty. This causes a possibly configured loadbalancer 
that has this worker as a member to fail over to a different member.

I understand the module's current behavior as follows:

1. If the TCP connection is reused and no frame was received within the last
   second, a PING frame is added before the stream for the request is set up.
2. The request body, if any, is not sent until the PING reply has arrived.
3. If the PING reply does not arrive, reply with 503.

The issue I see with the above is:

1. Once the request is sent, only idempotent requests could be sent to
   another worker in case we have a loadbalancer setup. Once a non-idempotent
   request is sent, we cannot be sure whether it was processed somehow by the
   backend.
2. Even if a TCP connection can be established for new connections, it is not
   clear whether the backend is ready to process requests due to things like
   backlogs, deferred accept setups or separate accept threads. Using a
   different short ping timeout removes these uncertainties. With regards to
   the ping overhead, this should only be done if the admin configured the
   ping and hence is aware of the additional overhead with respect to traffic
   and latency.

A similar behavior to the other modules would be

1. Send the PING frame no matter whether it is a new or an existing
   connection, in case the ping option is configured on the worker.
2. Wait for the time configured in the ping option for the PING reply to
   arrive. If it does not arrive, reply with a 503; if it does, continue with
   the request.

Regards

Rüdiger


Re: mod_md version 2

2019-05-14 Thread Ruediger Pluem



On 05/06/2019 02:53 PM, Stefan Eissing wrote:
> Heya,
> 
> the beautiful people at MOSS, Mozilla's Open Source Support, decided to give 
> me a grant for Let's Encrypt and Stapling improvements in Apache! Big thanks!
> 
> I described what I plan to do here: 
> https://github.com/icing/mod_md/wiki/V2Design
> 
> There are also github issues for collecting feedback and I pointed people to 
> the Apache users mailing list as well.
> 
Besides the support for ACMEv2, which is in scope for the module, I plan to 
add a new OCSP stapling implementation in the module as well. That may lead 
to some head scratching here, and I want to explain my reasoning and, ideally, 
get feedback from you.

Great to hear this. I dug out some discussions from the past that might be 
useful (some even started by you :-)):

https://lists.apache.org/thread.html/1a61e9dfbd685c4102b097e8189bccb7d5da39bf9f32fcbe7407a760@%3Cdev.httpd.apache.org%3E

https://lists.apache.org/thread.html/040a5ef30dbe7649b88c24cd9716eaf4c47d2d800f4a6858508d4fab@%3Cdev.httpd.apache.org%3E


Regards

Rüdiger