Re: Connections stuck in CLOSE_WAIT state with h2

2018-07-23 Thread Willy Tarreau
Hi Milan,

On Mon, Jul 23, 2018 at 08:41:03AM +0200, Milan Petruzelka wrote:
> After weekend CLOSE_WAIT connections are still there.

Ah bad :-(

> What
> does cflg=0x80203300 in "show fd" mean?

These are the connection flags. You can decode them with contrib/debug/flags :

$ ./flags 0x80203300 | grep ^conn
conn->flags = CO_FL_XPRT_TRACKED | CO_FL_CONNECTED | CO_FL_ADDR_TO_SET | 
CO_FL_ADDR_FROM_SET | CO_FL_XPRT_READY | CO_FL_CTRL_READY

So basically it says that everything is configured on the connection and
that there is no request for polling (CO_FL_{CURR,SOCK,XPRT}_{WR,RD}_ENA).
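For readers without the contrib tool at hand, the decoding can be sketched in a few lines of Python. Caution: the bit values below are assumptions inferred from this very dump (0x80203300 has exactly six bits set and ./flags reported exactly six flags, in descending bit order); check include/types/connection.h in your haproxy tree for the authoritative definitions.

```python
# Minimal sketch of what contrib/debug/flags does for connection flags.
# Bit values inferred from the dump above, NOT taken from the source tree.
CO_FLAGS = {
    0x80000000: "CO_FL_XPRT_TRACKED",
    0x00200000: "CO_FL_CONNECTED",
    0x00002000: "CO_FL_ADDR_TO_SET",
    0x00001000: "CO_FL_ADDR_FROM_SET",
    0x00000200: "CO_FL_XPRT_READY",
    0x00000100: "CO_FL_CTRL_READY",
}

def decode_cflg(value):
    """Return the names of known flags set in a cflg value, highest bit first."""
    return [name for bit, name in sorted(CO_FLAGS.items(), reverse=True)
            if value & bit]

print(" | ".join(decode_cflg(0x80203300)))
```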

> FDs with cflg=0x80203300 are either
> CLOSE_WAIT or "sock - protocol: TCP" - see FDs 14, 15, 16, 18, 19 and 25 in
> following dumps. And - sockets in lsof state "sock - protocol: TCP" can't
> be found in netstat.

That totally makes sense. If the connection is not monitored at all (but
why? that is the question) and has no timeout, it will definitely linger
forever.

Do you *think* that you got fewer CLOSE_WAITs, or that the latest fixes
didn't change anything ? I suspect that for some reason you might be
hit by several bugs, which is what has complicated the diagnostic, but
that's just pure guess.

> 
> SHOW FD 3300
>  14 : st=0x20(R:pra W:pRa) ev=0x00(heopi) [nlc] cache=0 owner=0x23d0340
> iocb=0x4d4c90(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x80203300
> fe=fe-http mux=H2 mux_ctx=0x2494cc0
(...)

If you run this with the latest 1.8 (not just the two patches above), some
extra debug information is provided in show fd :

$ echo show fd | socat - /tmp/sock1 | grep -i mux=H2
 19 : st=0x25(R:PrA W:pRa) ev=0x00(heopi) [nlc] cache=0 
owner=0x77fd8e80 iocb=0x58cb44(conn_fd_handler) tmask=0x1 umask=0x0 
cflg=0x80201306 fe=httpgw mux=H2 mux_ctx=0x77f8ff10 st0=2 flg=0x 
nbst=1 nbcs=1 fctl_cnt=0 send_cnt=0 tree_cnt=1 orph_cnt=0 dbuf=0/0 mbuf=0/0

In particular, st0, the mux flags, the number of streams and the buffer
states will give important information about what state the connection
is in (and whether it still has streams attached or not).
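As an aside, these space-separated key=value entries are easy to post-process when hunting through large dumps; a minimal sketch (the sample line is the fd 25 entry quoted further down in this thread):

```python
import re

# Sketch: turn one "show fd" entry into a dict for easier filtering.
LINE = ("25 : st=0x20(R:pra W:pRa) ev=0x00(heopi) [nlc] cache=0 owner=0x24f0a70 "
        "iocb=0x4d34c0(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x80203300 "
        "fe=fe-http mux=H2 mux_ctx=0x258a880 st0=7 flg=0x1000 nbst=8 nbcs=0 "
        "fctl_cnt=0 send_cnt=8 tree_cnt=8 orph_cnt=8 dbuf=0/0 mbuf=0/16384")

def parse_show_fd(line):
    """Split 'fd : key=value ...' into a dict (values kept as strings)."""
    fd, _, rest = line.partition(" : ")
    fields = dict(re.findall(r"(\w+)=(\S+)", rest))
    fields["fd"] = fd.strip()
    return fields

entry = parse_show_fd(LINE)
print(entry["st0"], entry["nbst"], entry["orph_cnt"], entry["mbuf"])
```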

Oh I'm just seeing you already did that in the next e-mail. Thank you :-)

So we have this :

 25 : st=0x20(R:pra W:pRa) ev=0x00(heopi) [nlc] cache=0 owner=0x24f0a70 
iocb=0x4d34c0(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x80203300  
fe=fe-http mux=H2 mux_ctx=0x258a880 st0=7 flg=0x1000 nbst=8 nbcs=0  
fctl_cnt=0 send_cnt=8 tree_cnt=8 orph_cnt=8 dbuf=0/0 mbuf=0/16384   

  - st0=7 => H2_CS_ERROR2 : an error was sent, either it succeeded or
could not be sent and had to be aborted nonetheless ;

  - flg=1000 => H2_CF_GOAWAY_SENT : the GOAWAY frame was sent to the mux
buffer.

  - nbst=8 => 8 streams still attached

  - nbcs=0 => 0 conn_streams found (application layer detached or not
attached yet)

  - send_cnt=8 => 8 streams still in the send_list, waiting for the mux
to pick their contents.

  - tree_cnt=8 => 8 streams known in the tree (hence they are still valid
from the H2 protocol perspective)

  - orph_cnt=8 => 8 streams are orphaned : these streams have quit at the
application layer (very likely a timeout).

  - mbuf=0/16384 : the mux buffer is empty but allocated. It's not very
common.

At this point what it indicates is that :
  - 8 streams were active on this connection and a response was sent (at
least partially) and probably waited for the mux buffer to be empty
due to data from other previous streams. I'm realising it would be
nice to also report the highest stream index to get an idea of the
number of past streams on the connection.

  - an error happened (protocol error, network issue, etc, no more info
at the moment) and caused haproxy to emit a GOAWAY frame. While doing
so, the pending streams in the send_list were not destroyed.

  - then for an unknown reason the situation doesn't move anymore. I'm
realising that one case I identified in the past, with an error possibly
blocking the connection, at least partially covers one point here: it
causes the mux buffer to remain allocated, so the corresponding patch
would have caused it to be released, but it's still incomplete.

Now I have some elements to dig through, I'll try to mentally reproduce
the complex sequence of a blocked response with a GOAWAY being sent at
the same time to see what happens.

Thank you very much for all this information!
Willy



Re: [PATCH] BUG/MINOR: build: Fix compilation with debug mode enabled

2018-07-23 Thread Willy Tarreau
Hi Cyril,

On Mon, Jul 23, 2018 at 10:04:34PM +0200, Cyril Bonté wrote:
> Some months ago, I began writing a compilation test script for haproxy, but
> as you may have noticed, I was not very available recently ;-)

Oh it happens to all of us unfortunately :-/

> I should be
> more available now. I'll try to finish this little work as it would have
> detected such type of error.

Great!

> The script parses the Makefile to find all USE_* settings and performs a
> compilation test for each one. I still have some work to do to prepare some
> compilations (dependencies like slz, deviceatlas, 51degrees, ...), but it
> looks to be already useful. I've now added DEBUG=-DDEBUG_FULL in the
> compilation options.
> 
> The main issue is that it takes hours on the tiny atom server I wanted to
use for that job. But well, on my laptop it takes less than 2 minutes :
> that's acceptable, I've added it in the git hooks so it is executed each
> time I fetch commits from the repository.

Nice! A full build takes around 3-5 seconds on the build farm I have at
the office (I extended the initial distcc farm with the load generators).

> Some ideas for future versions :
> - randomly mix USE_* options: for example, it would have triggered an error
> to indicate an incompatibility between USE_DEVICEATLAS and USE_PCRE2.

I tend to think that randomly mixing settings will not detect much in fact.
If you have 20 settings, you have 1 million combinations. If a few of them
are incompatible, you'll very rarely meet them. However you may face some
which are expected to fail. Some very likely make sense and probably just
need to be hard-coded, especially once they have been reported to
occasionally fail. In the end you'll have fewer combinations with a higher
chance of detecting failures by having just an iteration over all settings
one at a time and a selected set of combinations. At least that's how I
see it :-)
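The iteration described above — every option once, plus a curated list of combinations — can be sketched like this (the option names, the curated pair and the make invocation are illustrative placeholders, not the actual script):

```python
# Sketch: one build per USE_* option, plus hand-picked combinations known
# or suspected to interact. A real script would parse the option list out
# of the Makefile instead of hard-coding it.
OPTIONS = ["USE_OPENSSL", "USE_PCRE2", "USE_DEVICEATLAS", "USE_51DEGREES", "USE_SLZ"]

# Curated combinations, e.g. the USE_DEVICEATLAS/USE_PCRE2 incompatibility
# mentioned in the thread.
CURATED = [("USE_DEVICEATLAS", "USE_PCRE2")]

def build_matrix(options, curated):
    """One build per single option, then one per curated combination."""
    return [(opt,) for opt in options] + [tuple(combo) for combo in curated]

for run in build_matrix(OPTIONS, CURATED):
    print("make TARGET=linux2628 " + " ".join(opt + "=1" for opt in run))
```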

> - use different SSL libs/versions

Good point! I've done this a few times when we touched ssl_sock.c and
that's definitely needed.

Cheers,
Willy



RE: Missing SRV cookie

2018-07-23 Thread Norman Branitsky
Epiphany!



I was conflating the stick table with peers - thinking it was required in order 
to not lose a connection if one of the HAProxy servers failed.

As it turns out, I can't "stick on src" as the users in the client's data 
center will all present with the identical NAT address to the HAProxy servers.

So I have to use the cookies.



I do find it weird that some machines would see the SRV cookie and some not.
If I delete the following lines, will my users lose their connections if one
of the HAProxy servers fails (the HAProxy servers are protected by DNS failover)?

stick-table type ip size 20k peers mypeers

stick on src



Or does the peers section mitigate that?

peers mypeers

# include hap_servers-haproxy declarations

peer ip-10-241-1-140 10.241.1.140:1024

peer ip-10-241-1-237 10.241.1.237:1024



-Original Message-
From: Cyril Bonté 
Sent: Monday, July 23, 2018 3:31 PM
To: Norman Branitsky 
Cc: haproxy 
Subject: Re: Missing SRV cookie



Hi Norman,



Le 23/07/2018 à 18:36, Norman Branitsky a écrit :

> My client's environment had 3 HAProxy servers.
> Due to a routing issue, my client's users could only see the old HAProxy
> 1.5 server when connecting from their data center.
> They could not see the 2 new HAProxy 1.7 servers.
> The routing issue was resolved last week and they could now see the 2
> new HAProxy servers, as well as the old server.
> They started getting quick disconnects from their Java application -
> the SEVERE error indicated that they had arrived at the wrong server
> and had no current session.
> [...]
> New HAProxy servers configuration:
>
> backend ssl_backend-vr
>  balance roundrobin
>  stick-table type ip size 20k peers mypeers
>  stick on src

Here you are using stick tables for session persistence.

> [...]
>  cookie SRV insert indirect nocache httponly secure
>  server i-067c94ded1c8e212c 10.241.1.138:9001 check cookie
> i-067c94ded1c8e212c
>  server i-07035eca525e56235 10.241.1.133:9001 check cookie
> i-07035eca525e56235

But here, you are using cookies for the same purpose.

> I realized that the cookie mechanism was different so I shut down the
> old HAProxy server and the problem appeared to be resolved.
> This morning that client is complaining that the problem has returned
> - disconnects resulting in the user being kicked out to the login screen.
> Checking with multiple browsers, I can see both the old JSESSIONID
> cookie (with the machine name appended) and the new SRV cookie.
> Checking with multiple browsers, my colleagues can *NOT* see the new
> SRV cookie from any browser in this office -
> but they can see the SRV cookie when browsing from a virtual PC in our
> Atlanta data center!
> Even more puzzling, though my client cannot see the SRV cookie (either
> in the F12 cookies sent list, or in the browser's cookies folder) he
> *never* experiences an unexpected disconnect.
>
> Suggestions, please?

You have to make a choice: either you use stick tables or you use cookies,
but don't mix both, otherwise you'll have the situation you are describing.





--

Cyril Bonté


Re: Missing SRV cookie

2018-07-23 Thread Cyril Bonté

Hi Norman,

Le 23/07/2018 à 18:36, Norman Branitsky a écrit :

My client’s environment had 3 HAProxy servers.

Due to a routing issue, my client’s users could only see the old HAProxy 
1.5 server when connecting from their data center.

They could not see the 2 new HAProxy 1.7 servers.

The routing issue was resolved last week and they could now see the 2 
new HAProxy servers, as well the old server.


They started getting quick disconnects from their Java application –

the SEVERE error indicated that they had arrived at the wrong server and 
had no current session.

[...]
New HAProxy servers configuration:

backend ssl_backend-vr 
     balance roundrobin

     stick-table type ip size 20k peers mypeers
     stick on src


Here you are using stick tables for session persistence.


[...]
 cookie SRV insert indirect nocache httponly secure
     server i-067c94ded1c8e212c 10.241.1.138:9001 check cookie 
i-067c94ded1c8e212c
     server i-07035eca525e56235 10.241.1.133:9001 check cookie 
i-07035eca525e56235


But here, you are using cookies for the same purpose.

I realized that the cookie mechanism was different so I shut down the 
old HAProxy server and the problem appeared to be resolved.


This morning that client is complaining that the problem has returned – 
disconnects resulting in the user being kicked out to the login screen.


Checking with multiple browsers, I can see both the old JSESSIONID 
cookie (with the machine name appended) and the new SRV cookie.


Checking with multiple browsers, my colleagues can *NOT* see the new SRV 
cookie from any browser in this office –


but they can see the SRV cookie when browsing from a virtual PC in our 
Atlanta data center!
Even more puzzling, though my client cannot see the SRV cookie (either 
in the F12 cookies sent list, or in the browser’s cookies folder)

he *never* experiences an unexpected disconnect.

Suggestions, please?


You have to make a choice: either you use stick tables or you use 
cookies, but don't mix both, otherwise you'll have the situation you are 
describing.
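For illustration, a cookie-only variant of the quoted backend could look like this (a sketch only — the stick-table/stick lines are dropped and everything else is reused from the configuration quoted above):

```
backend ssl_backend-vr
    balance roundrobin
    cookie SRV insert indirect nocache httponly secure
    server i-067c94ded1c8e212c 10.241.1.138:9001 check cookie i-067c94ded1c8e212c
    server i-07035eca525e56235 10.241.1.133:9001 check cookie i-07035eca525e56235
```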



--
Cyril Bonté



Re: Issue with TCP splicing

2018-07-23 Thread Julien Semaan
Doing it with the patch is equivalent to disabling it with the option (I 
realized there was an option afterwards).


We would mostly like to know whether the haproxy team is interested in 
getting the issue itself addressed, rather than just applying the workaround.


Thanks!

--
Julien Semaan
jsem...@inverse.ca   ::  +1 (866) 353-6153 *155  ::www.inverse.ca
Inverse inc. :: Leaders behind SOGo (www.sogo.nu) and PacketFence 
(www.packetfence.org)

On 2018-07-23 11:25 AM, Aleksandar Lazic wrote:

Hi Julien.

On 23/07/2018 09:07, Julien Semaan wrote:

Hi all,

We're currently using haproxy in our project PacketFence 
(https://packetfence.org) and are currently experiencing an issue 
with haproxy segfaulting when TCP splicing is enabled.


We're currently running version 1.8.9 and are occasionally getting 
segfaults on this specific line in stream.c (line 2131):
(objt_cs(si_b->end) && __objt_cs(si_b->end)->conn->xprt && 
__objt_cs(si_b->end)->conn->xprt->snd_pipe) &&


I wasn't too bright when I found it through gdb and forgot to copy 
the backtrace, so I'm hoping that the issue can be found with this 
limited information.


After commenting out the code for TCP splicing with the patch 
attached to the email, then the issue stopped happening.


Have you tried to disable splice via config?

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#nosplice


Best Regards,

--
Julien Semaan
jsem...@inverse.ca   ::  +1 (866) 353-6153 *155 ::www.inverse.ca
Inverse inc. :: Leaders behind SOGo (www.sogo.nu) and PacketFence 
(www.packetfence.org)


Best regards
aleks




Missing SRV cookie

2018-07-23 Thread Norman Branitsky
My client's environment had 3 HAProxy servers.
Due to a routing issue, my client's users could only see the old HAProxy 1.5 
server when connecting from their data center.
They could not see the 2 new HAProxy 1.7 servers.
The routing issue was resolved last week and they could now see the 2 new 
HAProxy servers, as well the old server.
They started getting quick disconnects from their Java application -
the SEVERE error indicated that they had arrived at the wrong server and had no 
current session.

Old HAProxy server configuration:
backend ssl_backend-vr
balance roundrobin
option httpchk GET /le5/about.txt
http-check disable-on-404
http-request allow if { src -f /etc/CONFIG/haproxy/whitelist.lst } || { 
ssl_c_used }
http-request deny
appsession JSESSIONID len 52 timeout 3h
acl path_root path /
redirect location /le5/ if path_root
# include ssl_servers-vr declarations
server i-067c94ded1c8e212c 10.241.1.138:9001 check
   server i-07035eca525e56235 10.241.1.133:9001 check

New HAProxy servers configuration:
backend ssl_backend-vr
balance roundrobin
stick-table type ip size 20k peers mypeers
stick on src
option httpchk GET /le5/about.txt
http-check disable-on-404
http-request allow if { src -f /etc/CONFIG/haproxy/whitelist.lst } || { 
ssl_c_used }
http-request deny
cookie SRV insert indirect nocache httponly secure
acl path_root path /
redirect location /le5/ if path_root
# include ssl_servers-vr declarations
server i-067c94ded1c8e212c 10.241.1.138:9001 check cookie 
i-067c94ded1c8e212c
server i-07035eca525e56235 10.241.1.133:9001 check cookie 
i-07035eca525e56235

I realized that the cookie mechanism was different so I shut down the old 
HAProxy server and the problem appeared to be resolved.
This morning that client is complaining that the problem has returned - 
disconnects resulting in the user being kicked out to the login screen.
Checking with multiple browsers, I can see both the old JSESSIONID cookie (with 
the machine name appended) and the new SRV cookie.
Checking with multiple browsers, my colleagues can NOT see the new SRV cookie 
from any browser in this office -
but they can see the SRV cookie when browsing from a virtual PC in our Atlanta 
data center!
Even more puzzling, though my client cannot see the SRV cookie (either in the 
F12 cookies sent list, or in the browser's cookies folder)
he never experiences an unexpected disconnect.
Suggestions, please?



[PATCH] MINOR: ssl: BoringSSL matches OpenSSL 1.1.0

2018-07-23 Thread Emmanuel Hocdet
Hi Willy,

This patch is necessary to build with current BoringSSL (SSL_SESSION is now 
opaque).
BoringSSL correctly matches OpenSSL 1.1.0 since commit 3b2ff028 for haproxy's needs.
The patch reverts part of haproxy commit 019f9b10 (openssl-compat.h).
This will not break openssl/libressl compat.

Can you consider it for 1.9?
Thanks.

Manu



0001-MINOR-ssl-BoringSSL-matches-OpenSSL-1.1.0.patch
Description: Binary data




Re: Issue with TCP splicing

2018-07-23 Thread Aleksandar Lazic

Hi Julien.

On 23/07/2018 09:07, Julien Semaan wrote:

Hi all,

We're currently using haproxy in our project PacketFence 
(https://packetfence.org) and are currently experiencing an issue with 
haproxy segfaulting when TCP splicing is enabled.


We're currently running version 1.8.9 and are occasionally getting 
segfaults on this specific line in stream.c (line 2131):
(objt_cs(si_b->end) && __objt_cs(si_b->end)->conn->xprt && 
__objt_cs(si_b->end)->conn->xprt->snd_pipe) &&


I wasn't too bright when I found it through gdb and forgot to copy the 
backtrace, so I'm hoping that the issue can be found with this limited 
information.


After commenting out the code for TCP splicing with the patch attached 
to the email, then the issue stopped happening.


Have you tried to disable splice via config?

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#nosplice


Best Regards,

--
Julien Semaan
jsem...@inverse.ca   ::  +1 (866) 353-6153 *155  ::www.inverse.ca
Inverse inc. :: Leaders behind SOGo (www.sogo.nu) and PacketFence 
(www.packetfence.org)


Best regards
aleks



looking for help with redirect + acl

2018-07-23 Thread James Stroehmann

I need help with a current ACL and redirect that looks like this:

acl has_statistical_uri path_beg -i /statistical
http-request redirect code 301 prefix 
https://statistical.example.com/statisticalinsight if has_statistical_uri

When the request like this comes in:
https://statistical.example.com/statistical/example?key=value
it gets redirected to this:
https://statistical.example.com/statisticalinsight/statistical/example?key=value

They would like it to be redirected to:
https://statistical.example.com/statisticalinsight/example?key=value
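One possible approach (a sketch, not a tested configuration — it assumes a HAProxy version where http-request set-path, set-var and the regsub converter are available, i.e. 1.6 or later): remember the match in a variable, strip the /statistical prefix from the path, then issue a prefix redirect, which re-appends the rewritten path and the query string:

```
acl has_statistical_uri path_beg -i /statistical

# Remember the match first: set-path changes what path_beg sees afterwards.
http-request set-var(txn.to_si) int(1) if has_statistical_uri
# Strip the matched prefix, then redirect; "prefix" keeps path + query.
http-request set-path %[path,regsub(^/statistical,)] if { var(txn.to_si) -m int 1 }
http-request redirect code 301 prefix https://statistical.example.com/statisticalinsight if { var(txn.to_si) -m int 1 }
```

With this, a request for /statistical/example?key=value should be redirected to https://statistical.example.com/statisticalinsight/example?key=value.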




Reload certificates file without downtime

2018-07-23 Thread Jerome Warnier
Hi,

I'm running HAProxy (1.7 or 1.8) inside Docker containers, managed by
systemd unit files on the host.
I would like to force HAProxy to reload certificates (bind ssl crt) with
minimal downtime whenever they are renewed on disk (by another process
inside the container).

I tried to send HUP signals to HAproxy through Docker, but this doesn't
seem to work.
Of course, any other signal just kills the containers, so there is a
downtime of about 30s to 1min.

Any idea?
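One approach that may help, sketched below (assumptions: HAProxy 1.8 running as the container's main process in master-worker mode, and a container named haproxy — adjust to your setup). In 1.8 master-worker mode the master re-executes itself on SIGUSR2, re-reads the configuration and certificates, and starts new workers while the old ones finish their connections; plain SIGHUP is not a reload signal for haproxy, which would be consistent with what you observed:

```
# haproxy started inside the container with master-worker mode, e.g.:
#   haproxy -W -f /usr/local/etc/haproxy/haproxy.cfg
# Then, whenever certificates are renewed on disk:
docker kill --signal=USR2 haproxy
```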


Issue with TCP splicing

2018-07-23 Thread Julien Semaan

Hi all,

We're currently using haproxy in our project PacketFence 
(https://packetfence.org) and are currently experiencing an issue with 
haproxy segfaulting when TCP splicing is enabled.


We're currently running version 1.8.9 and are occasionally getting 
segfaults on this specific line in stream.c (line 2131):
(objt_cs(si_b->end) && __objt_cs(si_b->end)->conn->xprt && 
__objt_cs(si_b->end)->conn->xprt->snd_pipe) &&


I wasn't too bright when I found it through gdb and forgot to copy the 
backtrace, so I'm hoping that the issue can be found with this limited 
information.


After commenting out the code for TCP splicing with the patch attached 
to the email, then the issue stopped happening.


Best Regards,

--
Julien Semaan
jsem...@inverse.ca   ::  +1 (866) 353-6153 *155  ::www.inverse.ca
Inverse inc. :: Leaders behind SOGo (www.sogo.nu) and PacketFence 
(www.packetfence.org)

diff -ruN haproxy-1.8.9.orig/src/stream.c haproxy-1.8.9/src/stream.c
--- haproxy-1.8.9.orig/src/stream.c	2018-05-18 09:10:29.0 -0400
+++ haproxy-1.8.9/src/stream.c	2018-07-20 13:06:41.861913134 -0400
@@ -2122,8 +2122,9 @@
 		if (s->txn)
 			s->txn->req.sov = s->txn->req.eoh + s->txn->req.eol - req->buf->o;
 	}
-
 	/* check if it is wise to enable kernel splicing to forward request data */
+  /* DON'T ENABLE TCP SPLICING AT ALL BECAUSE OF OCCASIONNAL SEGFAULTS WE'VE SEEN
+   * jsem...@inverse.ca
 	if (!(req->flags & (CF_KERN_SPLICING|CF_SHUTR)) &&
 	req->to_forward &&
 	(global.tune.options & GTUNE_USE_SPLICE) &&
@@ -2135,7 +2136,7 @@
 	      (req->flags & CF_STREAMER_FAST)))) {
 		req->flags |= CF_KERN_SPLICING;
 	}
-
+  */
 	/* reflect what the L7 analysers have seen last */
 	rqf_last = req->flags;
 
@@ -2306,6 +2307,8 @@
 	}
 
 	/* check if it is wise to enable kernel splicing to forward response data */
+  /* DON'T ENABLE TCP SPLICING AT ALL BECAUSE OF OCCASIONNAL SEGFAULTS WE'VE SEEN
+   * jsem...@inverse.ca
 	if (!(res->flags & (CF_KERN_SPLICING|CF_SHUTR)) &&
 	res->to_forward &&
 	(global.tune.options & GTUNE_USE_SPLICE) &&
@@ -2318,6 +2321,7 @@
 		res->flags |= CF_KERN_SPLICING;
 	}
 
+  */
 	/* reflect what the L7 analysers have seen last */
 	rpf_last = res->flags;
 


Re: Connections stuck in CLOSE_WAIT state with h2

2018-07-23 Thread Milan Petruželka
Hi,

I've compiled latest haproxy 1.8.12 from Git repo (HAProxy version
1.8.12-5e100b-15, released 2018/07/20) with latest h2 patches and extended
h2 debug info. And after some time I caught one CLOSE_WAIT connection. Here
is extended show fd debug:
 25 : st=0x20(R:pra W:pRa) ev=0x00(heopi) [nlc] cache=0 owner=0x24f0a70
iocb=0x4d34c0(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x80203300
fe=fe-http mux=H2 mux_ctx=0x258a880 st0=7 flg=0x1000 nbst=8 nbcs=0
fctl_cnt=0 send_cnt=8 tree_cnt=8 orph_cnt=8 dbuf=0/0 mbuf=0/16384

LSOF CLOSE_WAIT
haproxy 26364 haproxy   25u  IPv4  7140390  0t0  TCP ip:https->ip:50041 (CLOSE_WAIT)

Milan


Re: Connections stuck in CLOSE_WAIT state with h2

2018-07-23 Thread Milan Petruželka
On Fri, 20 Jul 2018 at 14:36, Milan Petruželka  wrote:

> I've applied both patches to vanilla haproxy 1.8.12. I'll leave it running
> and report back.
Hi,

After weekend CLOSE_WAIT connections are still there. What
does cflg=0x80203300 in "show fd" mean? FDs with cflg=0x80203300 are either
CLOSE_WAIT or "sock - protocol: TCP" - see FDs 14, 15, 16, 18, 19 and 25 in
following dumps. And - sockets in lsof state "sock - protocol: TCP" can't
be found in netstat.

SHOW FD 3300
 14 : st=0x20(R:pra W:pRa) ev=0x00(heopi) [nlc] cache=0 owner=0x23d0340
iocb=0x4d4c90(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x80203300
fe=fe-http mux=H2 mux_ctx=0x2494cc0
 15 : st=0x20(R:pra W:pRa) ev=0x00(heopi) [nlc] cache=0 owner=0x245c6f0
iocb=0x4d4c90(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x80203300
fe=fe-http mux=H2 mux_ctx=0x23c1db0
 16 : st=0x20(R:pra W:pRa) ev=0x00(heopi) [nlc] cache=0 owner=0x25598e0
iocb=0x4d4c90(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x80203300
fe=fe-http mux=H2 mux_ctx=0x23d0900
 18 : st=0x20(R:pra W:pRa) ev=0x00(heopi) [nlc] cache=0 owner=0x23940a0
iocb=0x4d4c90(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x80203300
fe=fe-http mux=H2 mux_ctx=0x242a030
 19 : st=0x20(R:pra W:pRa) ev=0x00(heopi) [nlc] cache=0 owner=0x24a8b90
iocb=0x4d4c90(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x80203300
fe=fe-http mux=H2 mux_ctx=0x24820b0
 25 : st=0x20(R:pra W:pRa) ev=0x00(heopi) [nlc] cache=0 owner=0x2457a10
iocb=0x4d4c90(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x80203300
fe=fe-http mux=H2 mux_ctx=0x2394660

LSOF
haproxy 31313 haproxy    0u   CHR  136,1  0t0        4 /dev/pts/1
haproxy 31313 haproxy    1w  FIFO   0,10  0t0  2004495 pipe
haproxy 31313 haproxy    2w  FIFO   0,10  0t0  2004495 pipe
haproxy 31313 haproxy    3u  a_inode   0,1107017 [eventpoll]
haproxy 31313 haproxy    4u  unix  0x88042aa3b400  0t0  2002869 /www/server/haproxy/cmd.sock.31313.tmp type=STREAM
haproxy 31313 haproxy    5u  IPv4  2002872  0t0  TCP some.ip:http (LISTEN)
haproxy 31313 haproxy    6u  IPv4  2002873  0t0  TCP some.ip:https (LISTEN)
haproxy 31313 haproxy    7u  IPv4  2002874  0t0  TCP *:http-alt (LISTEN)
haproxy 31313 haproxy    8u  IPv4  2002875  0t0  TCP *:8443 (LISTEN)
haproxy 31313 haproxy    9r  FIFO   0,10  0t0  2002876 pipe
haproxy 31313 haproxy   10w  FIFO   0,10  0t0  2002876 pipe
haproxy 31313 haproxy   11u  IPv4  6560416  0t0  TCP some.ip:https->some.ip:49375 (ESTABLISHED)
haproxy 31313 haproxy   12u  IPv4  2002883  0t0  UDP *:52068
haproxy 31313 haproxy   13u  IPv4  6656750  0t0  TCP some.ip:https->some.ip:50544 (ESTABLISHED)
haproxy 31313 haproxy   14u  IPv4  4951212  0t0  TCP some.ip:https->some.ip:4 (CLOSE_WAIT)
haproxy 31313 haproxy   15u  sock    0,8  0t0  4111815 protocol: TCP
haproxy 31313 haproxy   16u  sock    0,8  0t0  6236118 protocol: TCP
haproxy 31313 haproxy   17u  IPv4  6657419  0t0  TCP some.ip:https->some.ip:64934 (ESTABLISHED)
haproxy 31313 haproxy   18u  sock    0,8  0t0  2653890 protocol: TCP
haproxy 31313 haproxy   19u  IPv4  5699053  0t0  TCP some.ip:https->some.ip:59601 (CLOSE_WAIT)
haproxy 31313 haproxy   20u  IPv4  6656756  0t0  TCP some.ip:https->some.ip:29233 (ESTABLISHED)
haproxy 31313 haproxy   21u  IPv4  6656760  0t0  TCP some.ip:https->some.ip:59058 (ESTABLISHED)
haproxy 31313 haproxy   22u  IPv4  6654620  0t0  TCP some.ip:https->some.ip:49306 (ESTABLISHED)
haproxy 31313 haproxy   23u  IPv4  6656769  0t0  TCP some.ip:https->some.ip:17513 (ESTABLISHED)
haproxy 31313 haproxy   25u  IPv4  5873818  0t0  TCP some.ip:https->some.ip:58413 (CLOSE_WAIT)
haproxy 31313 haproxy   26u  unix  0x8802f924  0t0  6656772 type=STREAM
haproxy 31313 haproxy   27u  IPv4  6656639  0t0  TCP some.ip:https->some.ip:2926 (ESTABLISHED)

SHOW FD
  4 : st=0x05(R:PrA W:pra) ev=0x01(heopI) [nlc] cache=0 owner=0x232ac80
iocb=0x4c0be0(listener_accept) tmask=0x
umask=0xfffe l.st=RDY fe=GLOBAL
  5 : st=0x05(R:PrA W:pra) ev=0x01(heopI) [nlc] cache=0 owner=0x232ce80
iocb=0x4c0be0(listener_accept) tmask=0x
umask=0xfffe l.st=RDY fe=fe-http
  6 : st=0x05(R:PrA W:pra) ev=0x01(heopI) [nlc] cache=0 owner=0x232d390
iocb=0x4c0be0(listener_accept) tmask=0x
umask=0xfffe l.st=RDY fe=fe-http
  7 : st=0x05(R:PrA W:pra) ev=0x01(heopI) [nlc] cache=0 owner=0x234cb00
iocb=0x4c0be0(listener_accept) tmask=0x
umask=0xfffe l.st=RDY fe=fe-service
  8 : st=0x05(R:PrA W:pra)