Re: Reuse backend connections

2018-06-29 Thread Bryan Talbot


> On Jun 29, 2018, at 12:50 PM, Leela Kalidindi (lkalidin) wrote:
> 
> Not for the Remote Desktop Protocol - it is for a haproxy backend server with
> option persist, as in
> "HAPROXY_0_BACKEND_HEAD": "\nbackend {backend}\n balance {balance}\n mode 
> http\n option httplog\n  option forwardfor\n option http-keep-alive\n option 
> persist\n http-reuse aggressive\n maxconn 16\n",
>  


You need to stop playing 20 questions on the mailing list and RTFM already.

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#option%20persist 


-Bryan
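For reference, the two similarly-named keywords do different things. A minimal
sketch of each (backend names and the pairing with "option redispatch" are
illustrative):

  backend rdp_farm
    mode tcp
    balance rdp-cookie
    persist rdp-cookie      # the "persist" keyword: RDP-cookie stickiness

  backend web
    mode http
    option persist          # "option persist": keep sending a client to its
                            # designated server even once it is marked down
    option redispatch       # allow a redispatch when the connection really fails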




Re: Reuse backend connections

2018-06-29 Thread Leela Kalidindi (lkalidin)
Not for the Remote Desktop Protocol - it is for a haproxy backend server with
option persist, as in
"HAPROXY_0_BACKEND_HEAD": "\nbackend {backend}\n balance {balance}\n mode 
http\n option httplog\n  option forwardfor\n option http-keep-alive\n option 
persist\n http-reuse aggressive\n maxconn 16\n",


Thanks!



From: Bryan Talbot 
Date: Friday, June 29, 2018 at 12:47 PM
To: "Leela Kalidindi (lkalidin)" 
Cc: HAproxy Mailing Lists 
Subject: Re: Reuse backend connections




On Jun 29, 2018, at 12:42 PM, Leela Kalidindi (lkalidin)
<lkali...@cisco.com> wrote:

Bryan,

Another follow-up question - what does persist do?  Thanks!



https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#persist

is for

https://en.wikipedia.org/wiki/Remote_Desktop_Protocol

Is that what you were asking?

-Bryan



Re: Reuse backend connections

2018-06-29 Thread Bryan Talbot


> On Jun 29, 2018, at 12:42 PM, Leela Kalidindi (lkalidin) wrote:
> 
> Bryan,
>  
> Another follow-up question - what does persist do?  Thanks!
>  


https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#persist 


is for 

https://en.wikipedia.org/wiki/Remote_Desktop_Protocol 


Is that what you were asking?

-Bryan



Re: Reuse backend connections

2018-06-29 Thread Leela Kalidindi (lkalidin)
Bryan,

Another follow-up question - what does persist do?  Thanks!

-Leela


From: Bryan Talbot 
Date: Friday, June 29, 2018 at 12:40 PM
To: "Leela Kalidindi (lkalidin)" 
Cc: HAproxy Mailing Lists 
Subject: Re: Reuse backend connections




On Jun 29, 2018, at 12:38 PM, Leela Kalidindi (lkalidin)
<lkali...@cisco.com> wrote:

Hi Bryan,

Thanks a lot for the prompt response.

Is there any such mechanism to leave the backend connections open forever so
that they can serve any client request?



No, not to my knowledge.

-Bryan



Re: Reuse backend connections

2018-06-29 Thread Bryan Talbot


> On Jun 29, 2018, at 12:38 PM, Leela Kalidindi (lkalidin) wrote:
> 
> Hi Bryan,
>  
> Thanks a lot for the prompt response.
>  
> Is there any such mechanism to leave the backend connections open forever so
> that they can serve any client request? 
>  


No, not to my knowledge.

-Bryan



Re: Reuse backend connections

2018-06-29 Thread Leela Kalidindi (lkalidin)
Hi Bryan,

Thanks a lot for the prompt response.

Is there any such mechanism to leave the backend connections open forever so
that they can serve any client request?

-Leela



From: Bryan Talbot 
Date: Friday, June 29, 2018 at 12:30 PM
To: "Leela Kalidindi (lkalidin)" 
Cc: HAproxy Mailing Lists 
Subject: Re: Reuse backend connections




On Jun 29, 2018, at 5:11 AM, Leela Kalidindi (lkalidin)
<lkali...@cisco.com> wrote:

Hi,

How can I force haproxy to reuse a limited number of backend connections
regardless of the number of client connections? Basically, I do not want to
create a new backend connection for every front-end client.

"HAPROXY_0_BACKEND_HEAD": "\nbackend {backend}\n balance {balance}\n mode 
http\n option httplog\n  option forwardfor\n option http-keep-alive\n option 
persist\n http-reuse aggressive\n maxconn 16\n",
"HAPROXY_0_FRONTEND_HEAD": "\nfrontend {backend}\n  bind 
{bindAddr}:{servicePort}\n  mode http\n  option httplog\n  option forwardfor\n 
option http-keep-alive\n maxconn 16\n"

I currently have the above configuration, but backend connections are still
getting closed when the next client request comes in.

Could someone help me with the issue?  Thanks in advance!



I suspect that there is a misunderstanding of what backend connection re-use
means. Specifically, this portion from the documentation seems to trip people up:




https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#http-reuse

No connection pool is involved, once a session dies, the last idle connection
it was attached to is deleted at the same time. This ensures that connections
may not last after all sessions are closed.

I suspect that in your testing, you send one request, observe the TCP state, then
send a second request and expect the second request to use the same TCP
connection. This is not how the feature works. The feature is optimized to
support busy/loaded servers where the TCP open rate should be minimized. This
allows a server to avoid, say, opening 2,000 new connections per second, and
instead just keep re-using a handful. It’s not a connection pool that pre-opens
10 connections and keeps them around in case they might be needed.

-Bryan



Re: Reuse backend connections

2018-06-29 Thread Bryan Talbot


> On Jun 29, 2018, at 5:11 AM, Leela Kalidindi (lkalidin) wrote:
> 
> Hi,
>  
> How can I force haproxy to reuse a limited number of backend connections
> regardless of the number of client connections? Basically, I do not want to
> create a new backend connection for every front-end client.  
>  
> "HAPROXY_0_BACKEND_HEAD": "\nbackend {backend}\n balance {balance}\n mode 
> http\n option httplog\n  option forwardfor\n option http-keep-alive\n option 
> persist\n http-reuse aggressive\n maxconn 16\n",
> "HAPROXY_0_FRONTEND_HEAD": "\nfrontend {backend}\n  bind 
> {bindAddr}:{servicePort}\n  mode http\n  option httplog\n  option 
> forwardfor\n option http-keep-alive\n maxconn 16\n"
>  
> I currently have the above configuration, but backend connections are still 
> getting closed when the next client request comes in.
>  
> Could someone help me with the issue?  Thanks in advance!
>  


I suspect that there is a misunderstanding of what backend connection re-use
means. Specifically, this portion from the documentation seems to trip people up:


https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#http-reuse 

No connection pool is involved, once a session dies, the last idle connection
it was attached to is deleted at the same time. This ensures that connections
may not last after all sessions are closed.

I suspect that in your testing, you send one request, observe the TCP state, then
send a second request and expect the second request to use the same TCP
connection. This is not how the feature works. The feature is optimized to
support busy/loaded servers where the TCP open rate should be minimized. This
allows a server to avoid, say, opening 2,000 new connections per second, and
instead just keep re-using a handful. It’s not a connection pool that pre-opens
10 connections and keeps them around in case they might be needed.

-Bryan
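In configuration terms, the sharing behaviour is controlled by the http-reuse
keyword alone; "option persist" plays no part in it. A minimal sketch of a
backend tuned for reuse under concurrent load (names and addresses are
illustrative):

  backend app
    mode http
    option http-keep-alive   # keep the server-side connection open after a response
    http-reuse aggressive    # let other sessions pick up idle server connections
    server app1 10.0.0.10:8080 maxconn 100

Under concurrent traffic this keeps the TCP open rate low, as described above;
with purely sequential test requests, each idle connection dies together with
the session that last used it.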



[PATCH] MEDIUM: proxy_protocol: Send IPv4 addresses when possible

2018-06-29 Thread Tim Duesterhus
This patch changes the sending side of the proxy protocol to convert IP
addresses to IPv4 when possible (and converts them to IPv6 otherwise).

Previously the code failed to properly provide information under
certain circumstances:

1. haproxy is being accessed using IPv4, http-request set-src sets
   an IPv6 address.
2. haproxy is being accessed using IPv6, http-request set-src sets
   an IPv4 address.
3. haproxy listens on `::` with v4v6 and is accessed using IPv4:
   It would send a TCP6 line instead of a proper TCP4 line, because
   the IP addresses are represented as mapped IPv4 addresses internally.

Once the correctness of this patch has been verified, it should be evaluated
whether it should be backported, as (1) and (2) are bugs. (3) is an
enhancement.
---
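A minimal configuration exercising case (1) above might look like this (the
header name and the addresses are illustrative):

  frontend fe
    bind 192.0.2.1:80                    # reached over IPv4
    http-request set-src hdr(x-real-ip)  # the header may carry an IPv6 address
    default_backend be

  backend be
    server app 198.51.100.10:8080 send-proxy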
 include/common/standard.h  |  6 ++++++
 include/proto/connection.h |  2 +-
 src/connection.c           | 29 ++++++++++++++++++++++++++++-----
 src/standard.c             | 21 +++++++++++++++++++++
 4 files changed, 52 insertions(+), 6 deletions(-)

diff --git a/include/common/standard.h b/include/common/standard.h
index 6542759d..eb92b22b 100644
--- a/include/common/standard.h
+++ b/include/common/standard.h
@@ -1047,6 +1047,12 @@ extern void v4tov6(struct in6_addr *sin6_addr, struct in_addr *sin_addr);
  */
 extern int v6tov4(struct in_addr *sin_addr, struct in6_addr *sin6_addr);
 
+/* Calls v4tov6 on the addr in v4. Copies v4 to v6 if v4 already is of type AF_INET6 */
+extern void sockaddr_v4tov6(struct sockaddr_storage *v6, struct sockaddr_storage *v4);
+
+/* Calls v6tov4 on the addr in v6. Copies v6 to v4 if conversion fails or v6 already is of type AF_INET */
+extern int sockaddr_v6tov4(struct sockaddr_storage *v4, struct sockaddr_storage *v6);
+
 /* compare two struct sockaddr_storage and return:
  *  0 (true)  if the addr is the same in both
  *  1 (false) if the addr is not the same in both
diff --git a/include/proto/connection.h b/include/proto/connection.h
index 8566736f..4e19756b 100644
--- a/include/proto/connection.h
+++ b/include/proto/connection.h
@@ -46,7 +46,7 @@ void conn_fd_handler(int fd);
 /* receive a PROXY protocol header over a connection */
 int conn_recv_proxy(struct connection *conn, int flag);
 int make_proxy_line(char *buf, int buf_len, struct server *srv, struct connection *remote);
-int make_proxy_line_v1(char *buf, int buf_len, struct sockaddr_storage *src, struct sockaddr_storage *dst);
+int make_proxy_line_v1(char *buf, int buf_len, struct connection *remote);
 int make_proxy_line_v2(char *buf, int buf_len, struct server *srv, struct connection *remote);
 
 /* receive a NetScaler Client IP insertion header over a connection */
diff --git a/src/connection.c b/src/connection.c
index 1ea96ae3..e8e48d69 100644
--- a/src/connection.c
+++ b/src/connection.c
@@ -888,14 +888,24 @@ int make_proxy_line(char *buf, int buf_len, struct server *srv, struct connection *remote)
 {
 	int ret = 0;
 
+	struct connection tmp;
+	if (remote) {
+		memcpy(&tmp, remote, sizeof(tmp));
+
+		if (!sockaddr_v6tov4(&tmp.addr.from, &remote->addr.from) || !sockaddr_v6tov4(&tmp.addr.to, &remote->addr.to)) {
+			sockaddr_v4tov6(&tmp.addr.from, &remote->addr.from);
+			sockaddr_v4tov6(&tmp.addr.to, &remote->addr.to);
+		}
+
+		remote = &tmp;
+	}
+
+
 	if (srv && (srv->pp_opts & SRV_PP_V2)) {
 		ret = make_proxy_line_v2(buf, buf_len, srv, remote);
 	}
 	else {
-		if (remote)
-			ret = make_proxy_line_v1(buf, buf_len, &remote->addr.from, &remote->addr.to);
-		else
-			ret = make_proxy_line_v1(buf, buf_len, NULL, NULL);
+		ret = make_proxy_line_v1(buf, buf_len, remote);
 	}
 
 	return ret;
@@ -908,10 +918,19 @@ int make_proxy_line(char *buf, int buf_len, struct server *srv, struct connection *remote)
  * TCP6 and "UNKNOWN" formats. If any of <src> or <dst> is null, UNKNOWN is
  * emitted as well.
  */
-int make_proxy_line_v1(char *buf, int buf_len, struct sockaddr_storage *src, struct sockaddr_storage *dst)
+int make_proxy_line_v1(char *buf, int buf_len, struct connection *remote)
 {
 	int ret = 0;
 
+	struct sockaddr_storage null_addr = { .ss_family = 0 };
+	struct sockaddr_storage *src = &null_addr;
+	struct sockaddr_storage *dst = &null_addr;
+
+	if (remote) {
+		src = &remote->addr.from;
+		dst = &remote->addr.to;
+	}
+
 	if (src && dst && src->ss_family == dst->ss_family && src->ss_family == AF_INET) {
 		ret = snprintf(buf + ret, buf_len - ret, "PROXY TCP4 ");
 		if (ret >= buf_len)
diff --git a/src/standard.c b/src/standard.c
index ebe043f1..51fd2cc4 100644
--- a/src/standard.c
+++ b/src/standard.c
@@ -2693,6 +2693,27 @@ int v6tov4(struct in_addr *sin_addr, struct in6_addr *sin6_addr)
 	return 0;
 }
 
+void sockaddr_v4tov6(struct sockaddr_storage *v6, struct sockaddr_storage *v4) {
+	if (v4->ss_family == AF_INET) {
+		v6->ss_family = AF_INET6;
+

Re: Connections stuck in CLOSE_WAIT state with h2

2018-06-29 Thread Milan Petruželka
On Fri, 29 Jun 2018 at 11:19, Milan Petruželka wrote:

> I've added more debug into h2s_close to see not only the h2s state and flags
> but also the h2c state and flags. My only way to reproduce the bug is to let
> Haproxy run until one of its FDs falls into CLOSE_WAIT. After I catch some,
> I'll report back.
>

I just caught a new CLOSE_WAIT.

... continuation of longer h2 connection ...

20180629.1347 mpeh2 fd25 h2c_stream_new - id0117 st00 fl
streams:0 -> 1
20180629.1347 000267eb:frntend.accept(0006)=0019 from [some_ip:52750]
ALPN=h2
20180629.1347 mpeh2 fd25 h2c_stream_new - id0119 st00 fl
streams:1 -> 2
20180629.1347 000267ec:frntend.accept(0006)=0019 from [some_ip:52750]
ALPN=h2
20180629.1347 mpeh2 fd25 h2c_stream_new - id011b st00 fl
streams:2 -> 3
20180629.1347 000267ed:frntend.accept(0006)=0019 from [some_ip:52750]
ALPN=h2
20180629.1347 mpeh2 fd25 h2c_stream_new - id011d st00 fl
streams:3 -> 4
20180629.1347 000267ee:frntend.accept(0006)=0019 from [some_ip:52750]
ALPN=h2
20180629.1347 mpeh2 fd25 h2c_stream_new - id011f st00 fl
streams:4 -> 5
20180629.1347 000267ef:frntend.accept(0006)=0019 from [some_ip:52750]
ALPN=h2
20180629.1347 mpeh2 fd25 h2c_stream_new - id0121 st00 fl
streams:5 -> 6
20180629.1347 000267f0:frntend.accept(0006)=0019 from [some_ip:52750]
ALPN=h2
20180629.1347 mpeh2 fd25 h2c_stream_new - id0123 st00 fl
streams:6 -> 7
20180629.1347 000267f1:frntend.accept(0006)=0019 from [some_ip:52750]
ALPN=h2
20180629.1347 mpeh2 fd25 h2c_stream_new - id0125 st00 fl
streams:7 -> 8
20180629.1347 000267f2:frntend.accept(0006)=0019 from [some_ip:52750]
ALPN=h2
20180629.1347 mpeh2 fd25 h2c_stream_new - id0127 st00 fl
streams:8 -> 9
20180629.1347 000267f3:frntend.accept(0006)=0019 from [some_ip:52750]
ALPN=h2
OK, we have some new h2 streams

20180629.1347 000267eb:frntend.clireq[0019:]: GET /some/uri HTTP/1.1
20180629.1347 000267ec:frntend.clireq[0019:]: GET /some/uri HTTP/1.1
20180629.1347 000267ed:frntend.clireq[0019:]: GET /some/uri HTTP/1.1
20180629.1347 000267ee:frntend.clireq[0019:]: GET /some/uri HTTP/1.1
20180629.1347 000267ef:frntend.clireq[0019:]: GET /some/uri HTTP/1.1
20180629.1347 000267f0:frntend.clireq[0019:]: GET /some/uri HTTP/1.1
20180629.1347 000267f1:frntend.clireq[0019:]: GET /some/uri HTTP/1.1
20180629.1347 000267f2:frntend.clireq[0019:]: GET /some/uri HTTP/1.1
20180629.1347 000267f3:frntend.clireq[0019:]: GET /some/uri HTTP/1.1
20180629.1347 000267eb:backend.srvrep[0019:0010]: HTTP/1.1 200 OK
20180629.1347 000267ef:backend.srvrep[0019:001b]: HTTP/1.1 200 OK
20180629.1347 000267f1:backend.srvrep[0019:001d]: HTTP/1.1 200 OK
20180629.1347 000267f3:backend.srvrep[0019:001f]: HTTP/1.1 200 OK
20180629.1347 000267f2:backend.srvrep[0019:adfd]: HTTP/1.1 200 OK
20180629.1347 000267f2:backend.srvcls[0019:adfd]
20180629.1347 mpeh2 fd25 h2s_close/h2c  - id0125 h2c_st02
h2c_fl streams:9
20180629.1347 mpeh2 fd25 h2s_close/real - id0125 st04 fl3001
streams:9 -> 8
20180629.1347 000267ee:backend.srvrep[0019:adfd]: HTTP/1.1 200 OK
20180629.1347 000267ee:backend.srvcls[0019:adfd]
20180629.1347 000267f0:backend.srvrep[0019:adfd]: HTTP/1.1 200 OK
20180629.1347 000267f0:backend.srvcls[0019:adfd]
20180629.1347 000267f4:frntend.clicls[0019:]
20180629.1347 000267f4:frntend.closed[0019:]
20180629.1347 mpeh2 fd25 h2s_destroy - id0125 st07 fl3003 streams:8
20180629.1347 mpeh2 fd25 h2s_close/h2c  - id0125 h2c_st02
h2c_fl streams:8
20180629.1347 mpeh2 fd25 h2s_close/dumy - id0125 st07 fl3003
streams:8 -> 8
20180629.1347 mpeh2 fd25 h2s_close/h2c  - id011d h2c_st02
h2c_fl streams:8
20180629.1347 mpeh2 fd25 h2s_close/real - id011d st04 fl3001
streams:8 -> 7
20180629.1347 mpeh2 fd25 h2s_close/h2c  - id0121 h2c_st02
h2c_fl streams:7
20180629.1347 mpeh2 fd25 h2s_close/real - id0121 st04 fl3001
streams:7 -> 6
20180629.1347 000267ed:backend.srvrep[0019:0017]: HTTP/1.1 200 OK
20180629.1347 000267ec:backend.srvrep[0019:adfd]: HTTP/1.1 200 OK
20180629.1347 000267ec:backend.srvcls[0019:adfd]
20180629.1347 000267f5:frntend.clicls[0019:]
20180629.1347 000267f5:frntend.closed[0019:]
20180629.1347 mpeh2 fd25 h2s_destroy - id011d st07 fl3003 streams:6
20180629.1347 mpeh2 fd25 h2s_close/h2c  - id011d h2c_st02
h2c_fl0002 streams:6
20180629.1347 mpeh2 fd25 h2s_close/dumy - id011d st07 fl3003
streams:6 -> 6
20180629.1347 000267f6:frntend.clicls[0019:]
20180629.1347 000267f6:frntend.closed[0019:]
20180629.1347 mpeh2 fd25 h2s_destroy - id0121 st07 fl3003 streams:6
20180629.1347 mpeh2 fd25 h2s_close/h2c  - id0121 h2c_st02
h2c_fl0002 streams:6
20180629.1347 mpeh2 fd25 h2s_close/dumy - id0121 st07 fl3003
streams:6 -> 6
20180629.1347 mpeh2 fd25 h2c_stream_new - id0129 st00 fl

Re: IPv6: bug in unique-id-format and hex transformation

2018-06-29 Thread Mildis

> On Jun 29, 2018, at 2:26 PM, Mildis wrote:
> 
>> 
>> On Jun 29, 2018, at 4:51 AM, Willy Tarreau wrote:
>> 
>> Hi,
>> 
>> On Thu, Jun 28, 2018 at 11:48:24AM +0200, m...@mildis.org wrote:
>>> 
>>> Hi,
>>> 
>>> When applying the hex transform to an IPv6 address in unique-id-format, the
>>> result is a string full of zeros. With
>>> unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
>>> the ID comes out as ":D142_:01BB_5B348110_:0FC3".
>>> When the hex transform is disabled, the IPv6 address is printed.
>>> 
>>> Here is a patch that only applies hex transformation to IPv4 addresses.
>> 
>> Hmmm I get your point but then we should have 3 cases handled differently:
>> - IPv4 => hex conversion
>> - IPv6 => no conversion
>> - IPv4 in IPv6 => conversion of the IPv4 part.
> It hit me when I made the patch: whether I should choose the lazy way or the 
> thorough way.
> I did the former.
> 
> So the results should be:
> - 192.168.0.1 => C0A80001
> - 2001:db8:0:85a3::ac1f:8001 => 20010db8000085a300000000ac1f8001
> - ::ffff:192.168.0.1 => C0A80001
Or even
2001:db8:0:85a3::ac1f:8001 => 20013Adb83A03A85a33A3Aac1f3A8001
::ffff:192.168.0.1 => 3A3Affff3AC0A80001

> Applying address compression without colons is not feasible.
> The argument of saving space with hex will not be that obvious then.
> 
> Mildis
> 
> 
>> 
>> In practice it should still boil down to doing IPv4 vs IPv6 and encoding
>> the fields manually for IPv6 without the colons. Indeed, some people will
>> definitely expect the hexa conversion to put a 16-byte block at once and
>> not to insert colons that are used as port delimiters in their format,
>> especially for unique-id. So this should just have its own encoding format
>> for IPv6 addresses in my opinion.
>> 
>> Thanks,
>> willy



Re: IPv6: bug in unique-id-format and hex transformation

2018-06-29 Thread Mildis


> On Jun 29, 2018, at 4:51 AM, Willy Tarreau wrote:
> 
> Hi,
> 
> On Thu, Jun 28, 2018 at 11:48:24AM +0200, m...@mildis.org wrote:
>> 
>> Hi,
>> 
>> When applying the hex transform to an IPv6 address in unique-id-format, the
>> result is a string full of zeros. With
>> unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
>> the ID comes out as ":D142_:01BB_5B348110_:0FC3".
>> When the hex transform is disabled, the IPv6 address is printed.
>> 
>> Here is a patch that only applies hex transformation to IPv4 addresses.
> 
> Hmmm I get your point but then we should have 3 cases handled differently:
>  - IPv4 => hex conversion
>  - IPv6 => no conversion
>  - IPv4 in IPv6 => conversion of the IPv4 part.
It hit me when I made the patch: whether I should choose the lazy way or the 
thorough way.
I did the former.

So the results should be:
- 192.168.0.1 => C0A80001
- 2001:db8:0:85a3::ac1f:8001 => 20010db8000085a300000000ac1f8001
- ::ffff:192.168.0.1 => C0A80001

Applying address compression without colons is not feasible.
The argument of saving space with hex will not be that obvious then.

Mildis


> 
> In practice it should still boil down to doing IPv4 vs IPv6 and encoding
> the fields manually for IPv6 without the colons. Indeed, some people will
> definitely expect the hexa conversion to put a 16-byte block at once and
> not to insert colons that are used as port delimiters in their format,
> especially for unique-id. So this should just have its own encoding format
> for IPv6 addresses in my opinion.
> 
> Thanks,
> willy
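For illustration, Willy's suggestion would emit each IPv6 address of the format
from the first message as a single 32-hex-digit block with no colons; a
hypothetical ID would then be (made-up addresses, ports, timestamp, request
counter and pid):

  20010DB8000085A300000000AC1F8001:01BB_20010DB8000000000000000000000001:0050_5B348110_0000:0FC3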




Reuse backend connections

2018-06-29 Thread Leela Kalidindi (lkalidin)
Hi,

How can I force haproxy to reuse a limited number of backend connections
regardless of the number of client connections? Basically, I do not want to
create a new backend connection for every front-end client.

"HAPROXY_0_BACKEND_HEAD": "\nbackend {backend}\n balance {balance}\n mode 
http\n option httplog\n  option forwardfor\n option http-keep-alive\n option 
persist\n http-reuse aggressive\n maxconn 16\n",
"HAPROXY_0_FRONTEND_HEAD": "\nfrontend {backend}\n  bind 
{bindAddr}:{servicePort}\n  mode http\n  option httplog\n  option forwardfor\n 
option http-keep-alive\n maxconn 16\n"

I currently have the above configuration, but backend connections are still
getting closed when the next client request comes in.

Could someone help me with the issue?  Thanks in advance!


-Leela
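For reference, the escaped templates above expand to roughly the following
haproxy sections once the {backend}, {balance}, {bindAddr} and {servicePort}
placeholders are substituted by the deployment tool (values illustrative):

  backend my_app
    balance roundrobin
    mode http
    option httplog
    option forwardfor
    option http-keep-alive
    option persist
    http-reuse aggressive
    maxconn 16

  frontend my_app
    bind 10.0.0.1:80
    mode http
    option httplog
    option forwardfor
    option http-keep-alive
    maxconn 16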



Re: Haproxy 1.8 with OpenSSL 1.1.1-pre4 stops working after 1 hour

2018-06-29 Thread Emeric Brun
Hi Lukas,

On 06/27/2018 04:48 AM, Willy Tarreau wrote:
> On Wed, Jun 27, 2018 at 01:44:08AM +0200, Lukas Tribus wrote:
>> Hey guys,
>>
>>
>> FYI after lots of discussions with openssl folks:
>>
>> https://github.com/openssl/openssl/issues/5330
>> https://github.com/openssl/openssl/pull/6388
>> https://github.com/openssl/openssl/pull/6432
>>
>>
>> OpenSSL 1.1.1 will now keep the FD open by default:
>>
>> https://github.com/openssl/openssl/commit/c7504aeb640a88949dfe3146f7e0f275f517464c
> 
> Wow good job Lukas! At least now we know that openssl 1.1.1 is not broken
> anymore! Thanks for taking care of explaining all these valid use cases
> there!
> 
> Willy
> 

I've noticed that. Thank you for your support in reporting this issue to the
openssl team.

R,
Emeric



Re: Connections stuck in CLOSE_WAIT state with h2

2018-06-29 Thread Milan Petruželka
Hi Willy,

I'm back at work after 2 weeks on the beach in Dalmatia. I've patched my
Haproxy 1.8.11 with all three patches discussed here in the last two weeks. It
didn't help. I then ran Haproxy with debug enabled. The last log entries from a
FD hanging in CLOSE_WAIT look like this:

00020535:frntend.accept(0006)=000b from [ip:58321] ALPN=h2
00020535:frntend.clireq[000b:]: POST /some/uri HTTP/1.1
00020535:backend.srvrep[000b:0015]: HTTP/1.1 200 OK
00020535:backend.srvcls[000b:adfd]
00020536:frntend.clicls[000b:]
00020536:frntend.closed[000b:]
00020537:frntend.clicls[000b:]
00020537:frntend.closed[000b:]
00020538:frntend.clicls[000b:]
00020538:frntend.closed[000b:]
00020516:backend.srvcls[000b:adfd]
0002051a:backend.srvcls[000b:adfd]
00020514:backend.srvcls[000b:adfd]
00020514:backend.clicls[000b:adfd]
00020514:backend.closed[000b:adfd]
00020516:backend.clicls[000b:adfd]
00020516:backend.closed[000b:adfd]
0002051a:backend.clicls[000b:adfd]
0002051a:backend.closed[000b:adfd]
0002051d:backend.srvcls[000b:adfd]
0002051d:backend.clicls[000b:adfd]
0002051d:backend.closed[000b:adfd]
0002051e:backend.clicls[000b:adfd]
0002051e:backend.closed[000b:adfd]
0002051f:backend.srvcls[000b:adfd]
0002051f:backend.clicls[000b:adfd]
0002051f:backend.closed[000b:adfd]
00020528:backend.srvcls[000b:adfd]
00020528:backend.clicls[000b:adfd]
00020528:backend.closed[000b:adfd]
00020529:backend.srvcls[000b:adfd]
00020529:backend.clicls[000b:adfd]
00020529:backend.closed[000b:adfd]
0002052a:backend.srvcls[000b:adfd]
0002052a:backend.clicls[000b:adfd]
0002052a:backend.closed[000b:adfd]
0002052b:backend.srvcls[000b:adfd]
0002052b:backend.clicls[000b:adfd]
0002052b:backend.closed[000b:adfd]
00020535:backend.clicls[000b:adfd]
00020535:backend.closed[000b:adfd]

I decided to add some quick'n'dirty debug messages into mux_h2.c to see more
details. I modified the send_log function to write to the console and used it
for digging inside the h2 mux. I'm not attaching the full patch, because it's
ugly and nothing to be proud of. Here is an example of a patched function to
give a feeling for it:

static inline void h2s_close(struct h2s *h2s)
{
	send_log(NULL, LOG_NOTICE, "mpeh2 fd%d h2s_close - id%08x st%02x fl%08x streams:%d\n",
	         mpeh2_h2s_fd(h2s), mpeh2_h2s_id(h2s), mpeh2_h2s_st(h2s),
	         mpeh2_h2s_flags(h2s), h2s->h2c->nb_streams);

	if (h2s->st != H2_SS_CLOSED)
		h2s->h2c->nb_streams--;
	h2s->st = H2_SS_CLOSED;
}

This is what a successful connection looks like:

20180628.1614 mpeh2 fd17 h2c_frt_init
20180628.1614 mpeh2 fd17 h2c_stream_new - id0001 st00 fl
20180628.1614 004c:frntend.accept(0006)=0011 from [some_ip:5059] ALPN=h2
20180628.1614 004c:frntend.clireq[0011:]: GET /some/uri HTTP/1.1
20180628.1614 004c:backend.srvrep[0011:0013]: HTTP/1.1 200 OK
20180628.1614 004c:backend.srvcls[0011:adfd]
20180628.1614 mpeh2 fd17 h2s_close - id0001 st04 fl3001 streams:1
20180628.1614 004f:frntend.clicls[0011:]
20180628.1614 004f:frntend.closed[0011:]
20180628.1614 mpeh2 fd17 h2s_destroy - id0001 st07 fl3003
20180628.1614 mpeh2 fd17 h2s_close - id0001 st07 fl3003 streams:0
20180628.1614 mpeh2 fd17 h2c_stream_new - id0003 st00 fl
20180628.1614 0056:frntend.accept(0006)=0011 from [some_ip:5059] ALPN=h2
20180628.1614 0056:frntend.clireq[0011:]: GET /some/uri HTTP/1.1
20180628.1614 0056:backend.srvrep[0011:0013]: HTTP/1.1 200 OK
20180628.1614 0056:backend.srvcls[0011:adfd]
20180628.1614 mpeh2 fd17 h2s_close - id0003 st04 fl3001 streams:1
20180628.1614 0057:frntend.clicls[0011:]
20180628.1614 0057:frntend.closed[0011:]
20180628.1614 mpeh2 fd17 h2s_destroy - id0003 st07 fl3003
20180628.1614 mpeh2 fd17 h2s_close - id0003 st07 fl3003 streams:0
20180628.1614 mpeh2 fd17 h2c_stream_new - id0005 st00 fl
20180628.1614 0059:frntend.accept(0006)=0011 from [some_ip:5059] ALPN=h2
20180628.1614 0059:frntend.clireq[0011:]: POST /some/uri
HTTP/1.1
20180628.1614 0059:backend.srvrep[0011:0013]: HTTP/1.1 200 OK
20180628.1614 0059:backend.srvcls[0011:adfd]
20180628.1614 mpeh2 fd17 h2s_close - id0005 st04 fl3101 streams:1
20180628.1614 005b:frntend.clicls[0011:]
20180628.1614 005b:frntend.closed[0011:]
20180628.1614 mpeh2 fd17 h2s_destroy - id0005 st07 fl3103
20180628.1614 mpeh2 fd17 h2s_close - id0005 st07 fl3103 streams:0
20180628.1614 mpeh2 fd17 h2c_stream_new - id0007 st00 fl
20180628.1614 005d:frntend.accept(0006)=0011 from [some_ip:5059] ALPN=h2
20180628.1614 005d:frntend.clireq[0011:]: GET /some/uri HTTP/1.1
20180628.1614 005d:backend.srvrep[0011:0013]: HTTP/1.1 200 OK
20180628.1614 005d:backend.srvcls[0011:adfd]
20180628.1614 mpeh2 fd17 h2s_close - id0007 st04 fl3001 streams:1
20180628.1614 005e:frntend.clicls[0011:]
20180628.1614 

Re: Reverse String (or get 2nd level domain sample)?

2018-06-29 Thread Baptiste
Hi,

Converters are just simple C functions (or could be Lua code as well), and
are quite trivial to write.
Instead of creating a converter that reverses the order of chars in a
string, I would rather patch the current "word" converter to support negative
integers, i.e. -2 would mean you extract the second word, starting from the
end of the string.

Baptiste
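If "word" grew that negative-index support, extracting the second-level domain
would become a one-liner. A hypothetical sketch (word() in 1.8 only accepts
positive indexes, and the variable name is made up); field(1,:) first strips an
optional port from the Host header:

  http-request set-var(txn.sld) req.hdr(host),field(1,:),word(-2,.)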



On Mon, Jun 25, 2018 at 12:29 PM, Daniel Schneller
<daniel.schnel...@centerdevice.com> wrote:

> Hi!
>
> Just double checking to make sure I am not simply blind: Is there a way to
> reverse a string using a sample converter?
>
> Background: I need to extract just the second level domain from the host
> header. So for sub.sample.example.com I need to fetch "example".
>
> Using the "word" converter and a "." as the separator I can get at the
> individual components, but because the number of nested subdomains varies,
> I cannot use that directly.
>
> My idea was to just reverse the full domain (removing a potential port
> number first), get word(2) and reverse again. Is that possible? Or is there
> an even better function I can use? I am thinking this must be a common use
> case, but googling "haproxy" and "reverse" will naturally turn up lots of
> results talking about "reverse proxying".
>
> If possible, I would like to avoid using maps to keep this thing as
> generic as possible.
>
> Thanks a lot!
>
> Daniel
>
>
> --
> Daniel Schneller
> Principal Cloud Engineer
>
> CenterDevice GmbH
> Rheinwerkallee 3
> 53227 Bonn
> www.centerdevice.com
>
>
>
>


Re: Haproxy health check interval value is not being respected

2018-06-29 Thread Baptiste
Hi Adwait,

So, you have a "timeout check" set to 5s as well.
Are your servers UP and RUNNING?
If not, then 'timeout check' would trigger before 'inter', and HAProxy would
retry the health check (up to the 'fall' parameter).
('timeout connect' might also trigger a retry if a SYN/ACK is not received by
HAProxy.)

If your servers are fully operational, can you try setting 'timeout check' to
1s and see what happens?
Also, the output of 'haproxy -vv' would be interesting.

Baptiste
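As a sketch of the knobs involved (values illustrative): 'inter' is the nominal
gap between two checks, 'timeout check' bounds each check's response time, and
fastinter/downinter replace 'inter' while a server is transitioning state,
which can also make checks appear more frequent than 'inter' alone suggests:

  defaults
    timeout check 1s   # per-check response timeout (the value suggested above)
    default-server inter 10s fastinter 2s downinter 5s rise 5 fall 3 port 80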




On Tue, Jun 26, 2018 at 7:11 PM, Adwait Gokhale wrote:

> Hi Baptiste,
>
> Here is the haproxy configuration I have. Please let me know if you need
> anything else.
>
> global
>   log 127.0.0.1 local0
>   nbthread 2
>   cpu-map auto:1/1-2 0-1
>   maxconn 5000
>   tune.bufsize 18432
>   tune.maxrewrite 9216
>   user haproxy
>   group haproxy
>   daemon
>   stats socket /var/run/haproxy.sock mode 600 level admin
>   stats timeout 2m # Wait up to 2 minutes for input
>   tune.ssl.default-dh-param 2048
>   ssl-default-bind-ciphers 
>   ssl-default-bind-options no-sslv3 no-tls-tickets
>   ssl-default-server-ciphers 
>   ssl-default-server-options no-sslv3 no-tls-tickets
>
> defaults
>   log global
>   option splice-auto
>   option abortonclose
>   timeout connect 5s
>   timeout queue 5s
>   timeout client 60s
>   timeout server 60s
>   timeout tunnel 1h
>   timeout http-request 120s
>   timeout check 5s
>   option httpchk GET /graph
>   default-server inter 10s port 80 rise 5 fall 3
>   cookie DO-LB insert indirect nocache maxlife 300s maxidle 300s
>
> frontend monitor
>   bind *:50054
>   mode http
>   option forwardfor
>   monitor-uri /haproxy_test
>
> frontend tcp_80
>   bind 10.10.0.16:80
>   default_backend tcp_80_backend
>   mode tcp
>
> backend tcp_80_backend
>   balance leastconn
>   mode tcp
>   server node-359413 10.36.32.32:80 check cookie node-359413
>   server node-359414 10.36.32.35:80 check cookie node-359414
>
> On Sun, Jun 17, 2018 at 6:25 AM, Baptiste  wrote:
>
>>
>>
>> On Wed, Jun 13, 2018 at 6:31 PM, Adwait Gokhale <
>> agokh...@digitalocean.com> wrote:
>>
>>> Hello,
>>>
>>> I have come across an issue with the use of the 'inter' parameter that sets
>>> the interval between two consecutive health checks. When configured, I found
>>> that health checks are twice as aggressive as what is configured.
>>>
>>> For instance, when I have haproxy with 2 backend servers with 'default-server
>>> inter 10s port 80 rise 5 fall 3' I see that health checks to every
>>> backend server run at an interval of 5 seconds instead of 10. With more than 2
>>> backend servers, this behavior does not change.
>>>
>>> Is this a known bug or is it a misconfiguration of some sort?
>>> Appreciate your help with this.
>>>
>>> Thanks,
>>> Adwait
>>>
>>
>>
>> Hi,
>>
>> Maybe you could share your entire configuration?
>> That would help a lot.
>>
>> Baptiste
>>
>
>