Re: [1.9.7] One of haproxy processes using 100% CPU

2019-05-05 Thread Willy Tarreau
Hi Maciej,

On Mon, May 06, 2019 at 06:49:26AM +0200, Maciej Zdeb wrote:
> Hi,
> 
> I confirm Willy's patch fixed the problem! Thanks!

Great, thanks for confirming!
Willy



Re: [1.9.7] One of haproxy processes using 100% CPU

2019-05-05 Thread Maciej Zdeb
Hi,

I confirm Willy's patch fixed the problem! Thanks!

Tue, 30 Apr 2019 at 13:49 Maciej Zdeb wrote:

> Hi Olivier,
>
> Thank you very much. I'll test it and get back with feedback!
>
> Regards,
>
> Tue, 30 Apr 2019 at 13:12 Olivier Houchard wrote:
>
>> Hi Maciej,
>>
>> On Tue, Apr 30, 2019 at 08:43:07AM +0200, Maciej Zdeb wrote:
>> > Filtered results from show fd for that particular virtual server:
>> >
>> > 10 : st=0x22(R:pRa W:pRa) ev=0x01(heopI) [lc] cnext=-3 cprev=-2
>> tmask=0x1
>> > umask=0x0 owner=0x53a5690 iocb=0x59d689(conn_fd_handler) back=0
>> > cflg=0x80243300 fe=virtual-server_front mux=H2 ctx=0x6502860 h2c.st0=2
>> > .err=0 .maxid=17 .lastid=-1 .flg=0x1 .nbst=0 .nbcs=1 .fctl_cnt=0
>> > .send_cnt=0 .tree_cnt=1 .orph_cnt=0 .sub=0 .dsi=13 .dbuf=0@(nil)+0/0
>> > .msi=-1 .mbuf=0@(nil)+0/0 last_h2s=0x5907040 .id=13 .flg=0x4005
>> > .rxbuf=0@(nil)+0/0
>> > .cs=0x905b1b0 .cs.flg=0x00106a00 .cs.data=0x5d1d228
>> > 98 : st=0x22(R:pRa W:pRa) ev=0x01(heopI) [lc] cnext=-1809 cprev=-2
>> > tmask=0x1 umask=0x0 owner=0xa3bb7f0 iocb=0x59d689(conn_fd_handler)
>> back=0
>> > cflg=0x80201300 fe=virtual-server_front mux=H2 ctx=0xa71f310 h2c.st0=3
>> > .err=0 .maxid=0 .lastid=-1 .flg=0x0008 .nbst=0 .nbcs=0 .fctl_cnt=0
>> > .send_cnt=0 .tree_cnt=0 .orph_cnt=0 .sub=0 .dsi=3
>> > .dbuf=16384@0x5873f10+61/16384
>> > .msi=-1 .mbuf=0@(nil)+0/0
>>
>> I see that it seems to be HTTP/2. Not saying it's your problem, but a bug
>> that would cause haproxy to use 100% of the CPU has been fixed in the
>> HTTP/2
>> code just after the 1.9.7 release was done.
>> Any chance you can see if it still happens with that commit:
>> commit c980b511bfef566e9890eb9a06d607c193d63828
>> Author: Willy Tarreau 
>> Date:   Mon Apr 29 10:20:21 2019 +0200
>>
>> BUG/MEDIUM: mux-h2: properly deal with too large headers frames
>>
>> Regards,
>>
>> Olivier
>>
>


Re: [1.9.7] One of haproxy processes using 100% CPU

2019-04-30 Thread Maciej Zdeb
Hi Olivier,

Thank you very much. I'll test it and get back with feedback!

Regards,

Tue, 30 Apr 2019 at 13:12 Olivier Houchard wrote:

> Hi Maciej,
>
> On Tue, Apr 30, 2019 at 08:43:07AM +0200, Maciej Zdeb wrote:
> > Filtered results from show fd for that particular virtual server:
> >
> > 10 : st=0x22(R:pRa W:pRa) ev=0x01(heopI) [lc] cnext=-3 cprev=-2 tmask=0x1
> > umask=0x0 owner=0x53a5690 iocb=0x59d689(conn_fd_handler) back=0
> > cflg=0x80243300 fe=virtual-server_front mux=H2 ctx=0x6502860 h2c.st0=2
> > .err=0 .maxid=17 .lastid=-1 .flg=0x1 .nbst=0 .nbcs=1 .fctl_cnt=0
> > .send_cnt=0 .tree_cnt=1 .orph_cnt=0 .sub=0 .dsi=13 .dbuf=0@(nil)+0/0
> > .msi=-1 .mbuf=0@(nil)+0/0 last_h2s=0x5907040 .id=13 .flg=0x4005
> > .rxbuf=0@(nil)+0/0
> > .cs=0x905b1b0 .cs.flg=0x00106a00 .cs.data=0x5d1d228
> > 98 : st=0x22(R:pRa W:pRa) ev=0x01(heopI) [lc] cnext=-1809 cprev=-2
> > tmask=0x1 umask=0x0 owner=0xa3bb7f0 iocb=0x59d689(conn_fd_handler) back=0
> > cflg=0x80201300 fe=virtual-server_front mux=H2 ctx=0xa71f310 h2c.st0=3
> > .err=0 .maxid=0 .lastid=-1 .flg=0x0008 .nbst=0 .nbcs=0 .fctl_cnt=0
> > .send_cnt=0 .tree_cnt=0 .orph_cnt=0 .sub=0 .dsi=3
> > .dbuf=16384@0x5873f10+61/16384
> > .msi=-1 .mbuf=0@(nil)+0/0
>
> I see that it seems to be HTTP/2. Not saying it's your problem, but a bug
> that would cause haproxy to use 100% of the CPU has been fixed in the
> HTTP/2
> code just after the 1.9.7 release was done.
> Any chance you can see if it still happens with that commit:
> commit c980b511bfef566e9890eb9a06d607c193d63828
> Author: Willy Tarreau 
> Date:   Mon Apr 29 10:20:21 2019 +0200
>
> BUG/MEDIUM: mux-h2: properly deal with too large headers frames
>
> Regards,
>
> Olivier
>


Re: [1.9.7] One of haproxy processes using 100% CPU

2019-04-30 Thread Olivier Houchard
Hi Maciej,

On Tue, Apr 30, 2019 at 08:43:07AM +0200, Maciej Zdeb wrote:
> Filtered results from show fd for that particular virtual server:
> 
> 10 : st=0x22(R:pRa W:pRa) ev=0x01(heopI) [lc] cnext=-3 cprev=-2 tmask=0x1
> umask=0x0 owner=0x53a5690 iocb=0x59d689(conn_fd_handler) back=0
> cflg=0x80243300 fe=virtual-server_front mux=H2 ctx=0x6502860 h2c.st0=2
> .err=0 .maxid=17 .lastid=-1 .flg=0x1 .nbst=0 .nbcs=1 .fctl_cnt=0
> .send_cnt=0 .tree_cnt=1 .orph_cnt=0 .sub=0 .dsi=13 .dbuf=0@(nil)+0/0
> .msi=-1 .mbuf=0@(nil)+0/0 last_h2s=0x5907040 .id=13 .flg=0x4005
> .rxbuf=0@(nil)+0/0
> .cs=0x905b1b0 .cs.flg=0x00106a00 .cs.data=0x5d1d228
> 98 : st=0x22(R:pRa W:pRa) ev=0x01(heopI) [lc] cnext=-1809 cprev=-2
> tmask=0x1 umask=0x0 owner=0xa3bb7f0 iocb=0x59d689(conn_fd_handler) back=0
> cflg=0x80201300 fe=virtual-server_front mux=H2 ctx=0xa71f310 h2c.st0=3
> .err=0 .maxid=0 .lastid=-1 .flg=0x0008 .nbst=0 .nbcs=0 .fctl_cnt=0
> .send_cnt=0 .tree_cnt=0 .orph_cnt=0 .sub=0 .dsi=3
> .dbuf=16384@0x5873f10+61/16384
> .msi=-1 .mbuf=0@(nil)+0/0

I see that it seems to be HTTP/2. Not saying it's your problem, but a bug
that would cause haproxy to use 100% of the CPU has been fixed in the HTTP/2
code just after the 1.9.7 release was done.
Any chance you can see if it still happens with that commit:
commit c980b511bfef566e9890eb9a06d607c193d63828
Author: Willy Tarreau 
Date:   Mon Apr 29 10:20:21 2019 +0200

BUG/MEDIUM: mux-h2: properly deal with too large headers frames

Regards,

Olivier
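
(For anyone who wants to try the fix on top of 1.9.7, a minimal sketch,
assuming a local clone of the haproxy 1.9 git tree and that the commit
above cherry-picks cleanly onto the v1.9.7 tag; the build options mirror
the "haproxy -vv" output shown elsewhere in this thread:)

    $ git checkout v1.9.7
    $ git cherry-pick c980b511bfef566e9890eb9a06d607c193d63828
    $ make TARGET=linux2628 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 \
           USE_ZLIB=1 USE_GETADDRINFO=1 USE_REGPARM=1 USE_DL=1
    $ ./haproxy -vv | head -n 2    # sanity-check the rebuilt binary before swapping it in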



Re: [1.9.7] One of haproxy processes using 100% CPU

2019-04-29 Thread Maciej Zdeb
Filtered results from show fd for that particular virtual server:

10 : st=0x22(R:pRa W:pRa) ev=0x01(heopI) [lc] cnext=-3 cprev=-2 tmask=0x1
umask=0x0 owner=0x53a5690 iocb=0x59d689(conn_fd_handler) back=0
cflg=0x80243300 fe=virtual-server_front mux=H2 ctx=0x6502860 h2c.st0=2
.err=0 .maxid=17 .lastid=-1 .flg=0x1 .nbst=0 .nbcs=1 .fctl_cnt=0
.send_cnt=0 .tree_cnt=1 .orph_cnt=0 .sub=0 .dsi=13 .dbuf=0@(nil)+0/0
.msi=-1 .mbuf=0@(nil)+0/0 last_h2s=0x5907040 .id=13 .flg=0x4005
.rxbuf=0@(nil)+0/0
.cs=0x905b1b0 .cs.flg=0x00106a00 .cs.data=0x5d1d228
98 : st=0x22(R:pRa W:pRa) ev=0x01(heopI) [lc] cnext=-1809 cprev=-2
tmask=0x1 umask=0x0 owner=0xa3bb7f0 iocb=0x59d689(conn_fd_handler) back=0
cflg=0x80201300 fe=virtual-server_front mux=H2 ctx=0xa71f310 h2c.st0=3
.err=0 .maxid=0 .lastid=-1 .flg=0x0008 .nbst=0 .nbcs=0 .fctl_cnt=0
.send_cnt=0 .tree_cnt=0 .orph_cnt=0 .sub=0 .dsi=3
.dbuf=16384@0x5873f10+61/16384
.msi=-1 .mbuf=0@(nil)+0/0
184 : st=0x05(R:PrA W:pra) ev=0x01(heopI) [lC] cnext=-3 cprev=-2
tmask=0x umask=0x0 owner=0x23eb040
iocb=0x57e662(listener_accept) l.st=RDY fe=virtual-server_front
660 : st=0x22(R:pRa W:pRa) ev=0x11(HeopI) [lc] cnext=-3 cprev=-2 tmask=0x1
umask=0x0 owner=0x533d6e0 iocb=0x59d689(conn_fd_handler) back=0
cflg=0x80243300 fe=virtual-server_front mux=H2 ctx=0x8031b90 h2c.st0=2
.err=0 .maxid=49 .lastid=-1 .flg=0x1 .nbst=0 .nbcs=1 .fctl_cnt=0
.send_cnt=0 .tree_cnt=1 .orph_cnt=0 .sub=0 .dsi=49 .dbuf=0@(nil)+0/0
.msi=-1 .mbuf=0@(nil)+0/0 last_h2s=0x70f1e80 .id=31 .flg=0x4005
.rxbuf=0@(nil)+0/0
.cs=0x6f373d0 .cs.flg=0x00106a00 .cs.data=0x56bb788
699 : st=0x22(R:pRa W:pRa) ev=0x11(HeopI) [lc] cnext=-87 cprev=-2 tmask=0x1
umask=0x0 owner=0x6694b60 iocb=0x59d689(conn_fd_handler) back=0
cflg=0x80243300 fe=virtual-server_front mux=H2 ctx=0x56e7b00 h2c.st0=2
.err=0 .maxid=111 .lastid=-1 .flg=0x1 .nbst=0 .nbcs=1 .fctl_cnt=0
.send_cnt=0 .tree_cnt=1 .orph_cnt=0 .sub=0 .dsi=111 .dbuf=0@(nil)+0/0
.msi=-1 .mbuf=0@(nil)+0/0 last_h2s=0x5931bf0 .id=47 .flg=0x4105
.rxbuf=0@(nil)+0/0
.cs=0x5943120 .cs.flg=0x00106a00 .cs.data=0x77af4c8
970 : st=0x22(R:pRa W:pRa) ev=0x01(heopI) [lc] cnext=-3 cprev=-2 tmask=0x1
umask=0x0 owner=0x67684b0 iocb=0x59d689(conn_fd_handler) back=0
cflg=0x80243300 fe=virtual-server_front mux=H2 ctx=0x5c90c30 h2c.st0=2
.err=0 .maxid=125 .lastid=-1 .flg=0x1 .nbst=0 .nbcs=1 .fctl_cnt=0
.send_cnt=0 .tree_cnt=1 .orph_cnt=0 .sub=0 .dsi=125 .dbuf=0@(nil)+0/0
.msi=-1 .mbuf=0@(nil)+0/0 last_h2s=0x7ac8650 .id=43 .flg=0x4005
.rxbuf=0@(nil)+0/0
.cs=0x7901a20 .cs.flg=0x00106a00 .cs.data=0x882c388
1282 : st=0x22(R:pRa W:pRa) ev=0x11(HeopI) [lc] cnext=-3 cprev=-2 tmask=0x1
umask=0x0 owner=0x6f23720 iocb=0x59d689(conn_fd_handler) back=0
cflg=0x80243300 fe=virtual-server_front mux=H2 ctx=0x6090cf0 h2c.st0=2
.err=0 .maxid=129 .lastid=-1 .flg=0x1 .nbst=0 .nbcs=1 .fctl_cnt=0
.send_cnt=0 .tree_cnt=1 .orph_cnt=0 .sub=0 .dsi=129 .dbuf=0@(nil)+0/0
.msi=-1 .mbuf=0@(nil)+0/0 last_h2s=0x5cc0890 .id=17 .flg=0x4005
.rxbuf=0@(nil)+0/0
.cs=0x64d33f0 .cs.flg=0x00106a00 .cs.data=0x639a3e8
3041 : st=0x22(R:pRa W:pRa) ev=0x11(HeopI) [lc] cnext=-955 cprev=-2
tmask=0x1 umask=0x0 owner=0x6de8980 iocb=0x59d689(conn_fd_handler) back=0
cflg=0x80243300 fe=virtual-server_front mux=H2 ctx=0x5beca10 h2c.st0=2
.err=0 .maxid=89 .lastid=-1 .flg=0x1 .nbst=0 .nbcs=1 .fctl_cnt=0
.send_cnt=0 .tree_cnt=1 .orph_cnt=0 .sub=0 .dsi=89 .dbuf=0@(nil)+0/0
.msi=-1 .mbuf=0@(nil)+0/0 last_h2s=0x82f5900 .id=15 .flg=0x4005
.rxbuf=0@(nil)+0/0
.cs=0x7e027a0 .cs.flg=0x00106a00 .cs.data=0x6e5d398
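
(A per-frontend filter like the above can be produced the same way as the
"show sess" dump elsewhere in this thread, for example:)

    $ socat /var/run/haproxy/haproxy1.sock - <<< "show fd" | grep 'fe=virtual-server_front'

Here "virtual-server_front" stands for the anonymized frontend name and the
socket path is the one used elsewhere in this thread; both are placeholders.
In the real "show fd" output each entry is a single long line, so a plain
grep catches the whole entry; the wrapping above comes from the mail client.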

Tue, 30 Apr 2019 at 08:31 Maciej Zdeb wrote:

> Forgot to attach information about HAProxy (to trace the issue, I
> compiled it with debug symbols and without optimizations):
>
> haproxy -vv
> HA-Proxy version 1.9.7 2019/04/25 - https://haproxy.org/
> Build options :
>   TARGET  = linux2628
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -O0 -g -fno-strict-aliasing -Wdeclaration-after-statement
> -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
> -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
> -Wno-missing-field-initializers -Wtype-limits -DIP_BIND_ADDRESS_NO_PORT=24
>   OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_DL=1
> USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1
>
> Default settings :
>   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
>
> Built with OpenSSL version : OpenSSL 1.1.1b  26 Feb 2019
> Running on OpenSSL version : OpenSSL 1.1.1b  26 Feb 2019
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
> Built with Lua version : Lua 5.3.5
> Built with transparent proxy support using: IP_TRANSPARENT
> IPV6_TRANSPARENT IP_FREEBIND
> Built with zlib version : 1.2.8
> Running on zlib version : 1.2.8
> Compression algorithms supported : identity("identity"),
> deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
> Built with PCRE version : 8.43 20

Re: [1.9.7] One of haproxy processes using 100% CPU

2019-04-29 Thread Maciej Zdeb
Forgot to attach information about HAProxy (to trace the issue, I
compiled it with debug symbols and without optimizations):

haproxy -vv
HA-Proxy version 1.9.7 2019/04/25 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O0 -g -fno-strict-aliasing -Wdeclaration-after-statement
-fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wtype-limits -DIP_BIND_ADDRESS_NO_PORT=24
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_DL=1
USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.1b  26 Feb 2019
Running on OpenSSL version : OpenSSL 1.1.1b  26 Feb 2019
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.43 2019-02-23
Running on PCRE version : 8.43 2019-02-23
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTTP   side=FE
              h2 : mode=HTX    side=FE|BE
       <default> : mode=HTX    side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace
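
(A build like the one above, with debug symbols and no optimization, can be
reproduced roughly as follows; this is a sketch, not necessarily the exact
command used here. One way is to override CFLAGS on the make command line,
another is to adjust DEBUG_CFLAGS/CPU_CFLAGS in the Makefile:)

    $ make TARGET=linux2628 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_DL=1 \
           USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 \
           CFLAGS="-O0 -g -fno-strict-aliasing -fwrapv"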

Tue, 30 Apr 2019 at 08:16 Maciej Zdeb wrote:

> Hi,
>
> I'm returning with a problem similar to the one in the thread "[1.9.6] One
> of haproxy processes using 100% CPU".
>
> Thanks to Olivier's hard work, some issues were fixed, but still not all of
> them. :( Currently it is much harder to trigger, and it occurs on an HTTPS
> virtual server rather than a TCP one. I'm observing this problem with
> HAProxy 1.9.7: the process starts using 100% CPU while the admin socket is
> still responsive.
>
> I attached a gdb session. Please let me know if you need more info from
> gdb; unfortunately I'm not a gdb expert and I'm not sure what to look for.
> It is a production server, but I can keep it isolated from production
> traffic for a couple of hours.
>
> Pasting some anonymized info from "show sess"; please note that it was
> dumped at 30/Apr/2019:06:53:54 (before the dump, traffic was switched to
> another server), but some connections are from 29/Apr/2019 and persist in
> that strange state.
>
> CC: Olivier :)
>
> socat /var/run/haproxy/haproxy1.sock - <<< "show sess all"
> 0x56bb4f0: [29/Apr/2019:12:31:37.186578] id=14500574 proto=tcpv4
> source=C.C.C.C:56567
>   flags=0x44e, conn_retries=2, srv_conn=0x24db060, pend_pos=(nil) waiting=0
>   frontend=virtual-server_front (id=45 mode=http), listener=? (id=1)
> addr=V.V.V.V:443
>   backend=V.V.V.V:443_back (id=46 mode=http) addr=S.S.S.S:57654
>   server=slot_9_3 (id=78) addr=B.B.B.B:31160
>   task=0x6ec4070 (state=0x00 nice=0 calls=2 exp= tmask=0x1
> age=18h22m)
>   txn=0x57eadf0 flags=0x88003000 meth=1 status=-1 req.st=MSG_DONE rsp.st
> =MSG_RPBEFORE
>   req.f=0x0c blen=0 chnk=0 next=0
>   rsp.f=0x00 blen=0 chnk=0 next=0
>   si[0]=0x56bb788 (state=EST flags=0x48008 endp0=CS:0x6f373d0 exp=
> et=0x000 sub=0)
>   si[1]=0x56bb7c8 (state=CON flags=0x111 endp1=CS:0x53a9250 exp=
> et=0x008 sub=2)
>   co0=0x533d6e0 ctrl=tcpv4 xprt=SSL mux=H2 data=STRM
> target=LISTENER:0x23eb040
>   flags=0x80243300 fd=660 fd.state=22 fd.cache=0 updt=0 fd.tmask=0x1
>   cs=0x6f373d0 csf=0x00106a00 ctx=0x70f1e80
>   co1=0x91263b0 ctrl=tcpv4 xprt=RAW mux=PASS data=STRM
> target=SERVER:0x24db060
>   flags=0x00403370 fd=51 fd.state=22 fd.cache=0 updt=0 fd.tmask=0x1
>   cs=0x53a9250 csf=0x0200 ctx=(nil)
>   req=0x56bb500 (f=0x4cc0 an=0x8000 pipe=0 tofwd=0 total=1610)
>   an_exp= rex= wex=
>   buf=0x56bb508 data=0x58ba660 o=1831 p=1831 req.next=0 i=0 size=16384
>   res=0x56bb560 (f=0x8000 an=0x0 pipe=0 tofwd=0 total=0)
>   an_exp= rex= wex=
>   buf=0x56bb568 data=(nil) o=0 p=0 rsp.next=0 i=0 size=0
> 0x639a150: [29/Apr/2019:12:31:53.231220] id=14501038 proto=tcpv4
> source=C2.C2.C2.C2:30107
>   flags=0x44e, conn_retries=2, srv_conn=0x24cdad0, pend_pos=(nil) waiting=0
>   frontend=virtual-server_front (id=45 mode=http), listener=? (id=1)
> addr=V.V.V.V:443
>   backend=V.V.V.V:443_back (id=46 mode=http) addr=S2.S2.S2.S2:8760
>   server=slot_9_2 (id=61) addr=B.B.B.B:31160
>   task=0x6625ab

[1.9.7] One of haproxy processes using 100% CPU

2019-04-29 Thread Maciej Zdeb
Hi,

I'm returning with a problem similar to the one in the thread "[1.9.6] One
of haproxy processes using 100% CPU".

Thanks to Olivier's hard work, some issues were fixed, but still not all of
them. :( Currently it is much harder to trigger, and it occurs on an HTTPS
virtual server rather than a TCP one. I'm observing this problem with
HAProxy 1.9.7: the process starts using 100% CPU while the admin socket is
still responsive.

I attached a gdb session. Please let me know if you need more info from
gdb; unfortunately I'm not a gdb expert and I'm not sure what to look for.
It is a production server, but I can keep it isolated from production
traffic for a couple of hours.
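
(For a spinning worker, a gdb session along these lines is usually a good
starting point: attach, grab backtraces of every thread to see where it
loops, then, if the loop is inside mux_h2.c, print the h2 connection state
from a frame where it is in scope. This is only a sketch; <PID> and <N> are
placeholders, and the -O0 -g build described earlier is what keeps the
structures readable:)

    $ gdb -p <PID>            # PID of the haproxy process stuck at 100% CPU
    (gdb) thread apply all bt
    (gdb) bt full
    (gdb) frame <N>
    (gdb) print *h2c
    (gdb) detach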

Pasting some anonymized info from "show sess"; please note that it was
dumped at 30/Apr/2019:06:53:54 (before the dump, traffic was switched to
another server), but some connections are from 29/Apr/2019 and persist in
that strange state.

CC: Olivier :)

socat /var/run/haproxy/haproxy1.sock - <<< "show sess all"
0x56bb4f0: [29/Apr/2019:12:31:37.186578] id=14500574 proto=tcpv4
source=C.C.C.C:56567
  flags=0x44e, conn_retries=2, srv_conn=0x24db060, pend_pos=(nil) waiting=0
  frontend=virtual-server_front (id=45 mode=http), listener=? (id=1)
addr=V.V.V.V:443
  backend=V.V.V.V:443_back (id=46 mode=http) addr=S.S.S.S:57654
  server=slot_9_3 (id=78) addr=B.B.B.B:31160
  task=0x6ec4070 (state=0x00 nice=0 calls=2 exp= tmask=0x1
age=18h22m)
  txn=0x57eadf0 flags=0x88003000 meth=1 status=-1 req.st=MSG_DONE rsp.st
=MSG_RPBEFORE
  req.f=0x0c blen=0 chnk=0 next=0
  rsp.f=0x00 blen=0 chnk=0 next=0
  si[0]=0x56bb788 (state=EST flags=0x48008 endp0=CS:0x6f373d0 exp=
et=0x000 sub=0)
  si[1]=0x56bb7c8 (state=CON flags=0x111 endp1=CS:0x53a9250 exp=
et=0x008 sub=2)
  co0=0x533d6e0 ctrl=tcpv4 xprt=SSL mux=H2 data=STRM
target=LISTENER:0x23eb040
  flags=0x80243300 fd=660 fd.state=22 fd.cache=0 updt=0 fd.tmask=0x1
  cs=0x6f373d0 csf=0x00106a00 ctx=0x70f1e80
  co1=0x91263b0 ctrl=tcpv4 xprt=RAW mux=PASS data=STRM
target=SERVER:0x24db060
  flags=0x00403370 fd=51 fd.state=22 fd.cache=0 updt=0 fd.tmask=0x1
  cs=0x53a9250 csf=0x0200 ctx=(nil)
  req=0x56bb500 (f=0x4cc0 an=0x8000 pipe=0 tofwd=0 total=1610)
  an_exp= rex= wex=
  buf=0x56bb508 data=0x58ba660 o=1831 p=1831 req.next=0 i=0 size=16384
  res=0x56bb560 (f=0x8000 an=0x0 pipe=0 tofwd=0 total=0)
  an_exp= rex= wex=
  buf=0x56bb568 data=(nil) o=0 p=0 rsp.next=0 i=0 size=0
0x639a150: [29/Apr/2019:12:31:53.231220] id=14501038 proto=tcpv4
source=C2.C2.C2.C2:30107
  flags=0x44e, conn_retries=2, srv_conn=0x24cdad0, pend_pos=(nil) waiting=0
  frontend=virtual-server_front (id=45 mode=http), listener=? (id=1)
addr=V.V.V.V:443
  backend=V.V.V.V:443_back (id=46 mode=http) addr=S2.S2.S2.S2:8760
  server=slot_9_2 (id=61) addr=B.B.B.B:31160
  task=0x6625ab0 (state=0x00 nice=0 calls=2 exp= tmask=0x1
age=18h22m)
  txn=0x5af9a10 flags=0x88003000 meth=1 status=-1 req.st=MSG_DONE rsp.st
=MSG_RPBEFORE
  req.f=0x0c blen=0 chnk=0 next=0
  rsp.f=0x00 blen=0 chnk=0 next=0
  si[0]=0x639a3e8 (state=EST flags=0x48008 endp0=CS:0x64d33f0 exp=
et=0x000 sub=0)
  si[1]=0x639a428 (state=CON flags=0x111 endp1=CS:0x5d681c0 exp=
et=0x008 sub=2)
  co0=0x6f23720 ctrl=tcpv4 xprt=SSL mux=H2 data=STRM
target=LISTENER:0x23eb040
  flags=0x80243300 fd=1282 fd.state=22 fd.cache=0 updt=0 fd.tmask=0x1
  cs=0x64d33f0 csf=0x00106a00 ctx=0x5cc0890
  co1=0x6039c60 ctrl=tcpv4 xprt=RAW mux=PASS data=STRM
target=SERVER:0x24cdad0
  flags=0x00403370 fd=577 fd.state=22 fd.cache=0 updt=0 fd.tmask=0x1
  cs=0x5d681c0 csf=0x0200 ctx=0x5d681e0
  req=0x639a160 (f=0x4cc0 an=0x8000 pipe=0 tofwd=0 total=1137)
  an_exp= rex= wex=
  buf=0x639a168 data=0x5408ef0 o=1349 p=1349 req.next=0 i=0 size=16384
  res=0x639a1c0 (f=0x8000 an=0x0 pipe=0 tofwd=0 total=0)
  an_exp= rex= wex=
  buf=0x639a1c8 data=(nil) o=0 p=0 rsp.next=0 i=0 size=0
0x882c0f0: [29/Apr/2019:12:31:46.503263] id=14503967 proto=tcpv4 source=
5.173.79.240:7434
  flags=0x44e, conn_retries=2, srv_conn=0x24cdad0, pend_pos=(nil) waiting=0
  frontend=virtual-server_front (id=45 mode=http), listener=? (id=1)
addr=V.V.V.V:443
  backend=V.V.V.V:443_back (id=46 mode=http) addr=S2.S2.S2.S2:24688
  server=slot_9_2 (id=61) addr=B.B.B.B:31160
  task=0x7c13f50 (state=0x00 nice=0 calls=2 exp= tmask=0x1
age=18h22m)
  txn=0x5978210 flags=0x88003000 meth=1 status=-1 req.st=MSG_DONE rsp.st
=MSG_RPBEFORE
  req.f=0x0c blen=0 chnk=0 next=0
  rsp.f=0x00 blen=0 chnk=0 next=0
  si[0]=0x882c388 (state=EST flags=0x48008 endp0=CS:0x7901a20 exp=
et=0x000 sub=0)
  si[1]=0x882c3c8 (state=CON flags=0x111 endp1=CS:0x6ca3a50 exp=
et=0x008 sub=2)
  co0=0x67684b0 ctrl=tcpv4 xprt=SSL mux=H2 data=STRM
target=LISTENER:0x23eb040
  flags=0x80243300 fd=970 fd.state=22 fd.cache=0 updt=0 fd.tmask=0x1
  cs=0x7901a20 csf=0x00106a00 ctx=0x7ac8650
  co1=0x74cba90 ctrl=tcpv4 xprt=RAW mux=PASS data=STRM
target=SERVER:0x24cdad0
  flags=0x00403370 fd=2185 fd.state=22 fd.cache=0 updt=0 fd.tmask