Re: RST_STREAM immediately after GET request in h2
On 28/11/2024 at 13:18, Andreas Bergander wrote:
> Hi Willy!
>
> So, we deployed the shiny new 3.1.0 version of HAProxy and added the logging
> you suggested.
> [...]
> Does this help at all? Something else we should log or try?

Hi Andreas,

There is no error or shutdown reported at the connection level. This eliminates
a recently fixed bug involving an early close caused by a strange behavior of
epoll. However, I can see that a RST_STREAM was received on the client side:

  h2s.flags = H2_SF_HEADERS_RCVD | H2_SF_RST_RCVD | H2_SF_ES_RCVD

So the request was in fact aborted by the client, and this abort was forwarded
to the server. You may be interested in logging fs.aborted / fs.rst_code and
bs.aborted / bs.rst_code to get more information and a confirmation too.

Regards,

--
Christopher Faulet
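For reference, wiring Christopher's suggestion into the log-format used in this
thread could look like the sketch below. This is only a sketch: the surrounding
fields are copied from Andreas's format, and fs.aborted / fs.rst_code /
bs.aborted / bs.rst_code are the sample fetches named in the reply (check the
documentation of your exact version for availability):

  log-format "Client: %ci:%cp | Time: [%t] | Frontend: %f | Backend: %b | Server: %s | Frontend Error: %[fc_err_name] | Backend Error: %[bc_err_name] | FS Aborted: %[fs.aborted] | FS RST Code: %[fs.rst_code] | BS Aborted: %[bs.aborted] | BS RST Code: %[bs.rst_code] | Frontend Debug: %[fs.debug_str] | Backend Debug: %[bs.debug_str] | Request: \"%r\""

For a client-initiated abort like the one analysed above, fs.aborted would be
expected to report 1 and fs.rst_code the RST_STREAM error code received from
the client (for example 8, i.e. CANCEL), though the exact values depend on the
situation.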
Re: RST_STREAM immediately after GET request in h2
Hi Willy!

So, we deployed the shiny new 3.1.0 version of HAProxy and added the logging
you suggested. We use this log format:

  log-format "Client: %ci:%cp | Time: [%t] | Frontend: %f | Backend: %b | Server: %s | Frontend Error: %[fc_err_name] | Backend Error: %[bc_err_name] | Frontend Debug: %[fs.debug_str] | Backend Debug: %[bs.debug_str] | Request: \"%r\""

And here are some of the resulting entries for requests where the stream was
closed by an RST_STREAM from HAProxy:

  Client: 193.0.241.14:10505 | Time: [28/Nov/2024:11:30:49.331] | Frontend: ft_web | Backend: bk_appsrv | Server: server1 | Frontend Error: - | Backend Error: - | Frontend Debug: h2s.id=15 .st=CLO .flg=0x4005 .rxwin=0 .rxbuf.c=0 .t=0@(nil)+0/0 .h=0@(nil)+0/0 .sc=0x7f776011cd60(.flg=0x00074086 .app=0x7f7760086670) .sd=0x7f77600daad0(.flg=0x5041d601) .subs=(nil) h2c.st0=FRH .err=0 .maxid=27 .lastid=-1 .flg=0x410200 .nbst=3 .nbsc=6 .nbrcv=0 .glitches=0 .fctl_cnt=0 .send_cnt=0 .tree_cnt=6 .orph_cnt=0 .sub=1 .dsi=27 .dbuf=0@(nil)+0/0 .mbuf=[1..1|32],h=[0@(nil)+0/0],t=[0@(nil)+0/0] .task=0x7f7760416280 .exp= conn.flg=0xa300 | Backend Debug: h2s.id=5 .st=CLO .flg=0x300a .rxwin=1048576 .rxbuf.c=0 .t=0@(nil)+0/0 .h=0@(nil)+0/0 .sc=0x7f7760053180(.flg=0x0003d413 .app=0x7f7760086670) .sd=0x7f77600a2420(.flg=0x50420a01) .subs=(nil) h2c.st0=FRH .err=0 .maxid=7 .lastid=-1 .flg=0x18e00 .nbst=1 .nbsc=2 .nbrcv=0 .glitches=0 .fctl_cnt=0 .send_cnt=0 .tree_cnt=2 .orph_cnt=0 .sub=1 .dsi=3 .dbuf=0@(nil)+0/0 .mbuf=[1..1|32],h=[13@0x7f77603634a0+0/16384],t=[13@0x7f77603634a0+0/16384] .task=0x7f77601cf220 .exp= conn.flg=0x000300 | Request: "GET https://www.mydomain.com/lib/select2/js/select2.min.js HTTP/2.0"

  Client: 176.87.9.172:11004 | Time: [28/Nov/2024:11:40:51.466] | Frontend: ft_web | Backend: bk_appsrv | Server: server1 | Frontend Error: - | Backend Error: - | Frontend Debug: h2s.id=7 .st=CLO .flg=0x4005 .rxwin=0 .rxbuf.c=0 .t=0@(nil)+0/0 .h=0@(nil)+0/0 .sc=0x5611a99ff9d0(.flg=0x00074086 .app=0x5611a9755b30) .sd=0x5611a9a22e40(.flg=0x5041d601) .subs=(nil) h2c.st0=FRH .err=0 .maxid=11 .lastid=-1 .flg=0x0600 .nbst=1 .nbsc=2 .nbrcv=0 .glitches=0 .fctl_cnt=0 .send_cnt=0 .tree_cnt=2 .orph_cnt=0 .sub=1 .dsi=11 .dbuf=0@(nil)+0/0 .mbuf=[1..1|32],h=[0@(nil)+0/0],t=[0@(nil)+0/0] .task=0x5611a9a01b60 .exp= conn.flg=0xa300 | Backend Debug: h2s.id=83 .st=CLO .flg=0x300a .rxwin=1048576 .rxbuf.c=0 .t=0@(nil)+0/0 .h=0@(nil)+0/0 .sc=0x5611a97ba600(.flg=0x0003d413 .app=0x5611a9755b30) .sd=0x5611a9870280(.flg=0x50420a01) .subs=(nil) h2c.st0=FRH .err=0 .maxid=85 .lastid=-1 .flg=0x18e00 .nbst=0 .nbsc=1 .nbrcv=0 .glitches=0 .fctl_cnt=0 .send_cnt=0 .tree_cnt=1 .orph_cnt=0 .sub=1 .dsi=81 .dbuf=0@(nil)+0/0 .mbuf=[1..1|32],h=[13@0x5611a97e3f30+0/16384],t=[13@0x5611a97e3f30+0/16384] .task=0x5611a9a40cc0 .exp= conn.flg=0x000300 | Request: "GET https://www.mydomain.com/fetch HTTP/2.0"

  Client: 192.54.145.137:50920 | Time: [28/Nov/2024:11:45:36.627] | Frontend: ft_web | Backend: bk_appsrv | Server: server1 | Frontend Error: - | Backend Error: - | Frontend Debug: h2s.id=27 .st=CLO .flg=0x4005 .rxwin=0 .rxbuf.c=0 .t=0@(nil)+0/0 .h=0@(nil)+0/0 .sc=0x7f777419c520(.flg=0x00074086 .app=0x7f777404dc70) .sd=0x7f77740ae7b0(.flg=0x5041d601) .subs=(nil) h2c.st0=FRH .err=0 .maxid=27 .lastid=-1 .flg=0x0200 .nbst=0 .nbsc=1 .nbrcv=0 .glitches=0 .fctl_cnt=0 .send_cnt=0 .tree_cnt=1 .orph_cnt=0 .sub=1 .dsi=27 .dbuf=0@(nil)+0/0 .mbuf=[1..1|32],h=[0@(nil)+0/0],t=[0@(nil)+0/0] .task=0x5611a9824980 .exp= conn.flg=0xa300 | Backend Debug: h2s.id=35 .st=CLO .flg=0x300a .rxwin=1048576 .rxbuf.c=0 .t=0@(nil)+0/0 .h=0@(nil)+0/0 .sc=0x7f777418b320(.flg=0x0003d413 .app=0x7f777404dc70) .sd=0x7f77740441b0(.flg=0x50420a01) .subs=(nil) h2c.st0=FRH .err=0 .maxid=35 .lastid=-1 .flg=0x18e00 .nbst=0 .nbsc=1 .nbrcv=0 .glitches=0 .fctl_cnt=0 .send_cnt=0 .tree_cnt=1 .orph_cnt=0 .sub=1 .dsi=33 .dbuf=0@(nil)+0/0 .mbuf=[1..1|32],h=[13@0x7f77742024b0+0/16384],t=[13@0x7f77742024b0+0/16384] .task=0x7f77740c2b20 .exp= conn.flg=0x000300 | Request: "GET https://www.mydomain.com/lib/polyfills.js HTTP/2.0"

Does this help at all? Something else we should log or try?

/Andreas

From: Willy Tarreau
Sent: Wednesday, November 13, 2024 14:31
To: Andreas Bergander
Cc: haproxy@formilux.org
Subject: Re: RST_STREAM immediately after GET request in h2

[...]
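A side note for reading the "Frontend Debug" part of these entries: the h2s
".flg" value is the raw stream flag bitmask from HAProxy's H2 mux (the H2_SF_*
defines in include/haproxy/mux_h2-t.h). A minimal decode of the value logged
here, with bit values assumed from the source tree and matching the analysis in
Christopher's reply above:

  .flg=0x4005 = 0x4000 | 0x0004 | 0x0001
              = H2_SF_HEADERS_RCVD | H2_SF_RST_RCVD | H2_SF_ES_RCVD

In other words, on the client-facing stream the request headers and
end-of-stream were received, followed by a RST_STREAM from the client.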
Re: RST_STREAM immediately after GET request in h2
Hi Andreas,

On Wed, Nov 13, 2024 at 10:25:15AM +, Andreas Bergander wrote:
> Hi Willy!
>
> Thanks for the great pointers and explanations! We do not have the
> abortonclose option set; the only options we use are for access logging
> (httpslog).

OK!

> These are our timeout settings:
>
>     timeout connect 5s
>     timeout client 20s
>     timeout client-fin 20s
>     timeout server 1200s
>     timeout tunnel 1h
>     timeout check 5s
>     timeout http-keep-alive 10s
>     timeout http-request 200s
>
> Currently we can only observe the issue in a production environment with
> plenty of traffic, so logging might be problematic to turn on there. I'll see
> if we can trigger the issue in a test system with logging.

Then one thing that might be doable, *if you're willing to give 3.1-dev a try*,
would be to add just this to your existing logs:

  fc_err=%[fc_err_name] bc_err=%[bc_err_name] fs_dbg=[%[fs.debug_str]] bs_dbg=[%[bs.debug_str]]

These will complete your existing traffic logs with extra info indicating how
the connection ended on each side. fc/bc_err_name will be a single short word;
fs/bs_dbg will be larger, around 50 characters each. If that's too much, maybe
keeping just the backend side at first could reveal what's happening, but we
don't know what we're looking for so it's hard to say (at least, if you have to
keep only one side, the backend is the more important one). If that's still too
much, you can append ",when(error)" to strip these outputs when there's no
error:

  fc_err=%[fc_err_name] bc_err=%[bc_err_name] fs_dbg=[%[fs.debug_str,when(error)]] bs_dbg=[%[bs.debug_str,when(error)]]

From that point, the next time you spot issues in your logs, these entries
should help a lot.

If you can't give 3.1-dev a try (which I perfectly understand, but I preferred
to suggest it), this will have to wait for us to manage to backport these
changes to 3.0.

Cheers,
Willy
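If the frontend currently relies on "option httpslog" rather than an explicit
log-format, one way to append the fields suggested above without retyping the
whole format is to reference the built-in format variable, roughly as below.
This is a sketch: HAPROXY_HTTPS_LOG_FMT is assumed to be available in the
version in use (recent releases expose the default log formats through such
variables); otherwise the full format has to be spelled out by hand:

  log-format "${HAPROXY_HTTPS_LOG_FMT} fc_err=%[fc_err_name] bc_err=%[bc_err_name] fs_dbg=[%[fs.debug_str,when(error)]] bs_dbg=[%[bs.debug_str,when(error)]]"

The ",when(error)" converters keep the two debug strings empty on clean
transactions, as suggested, so the extra log volume only shows up on the
requests of interest.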
Re: RST_STREAM immediately after GET request in h2
Hi Willy!

Thanks for the great pointers and explanations! We do not have the abortonclose
option set; the only options we use are for access logging (httpslog).

These are our timeout settings:

  timeout connect 5s
  timeout client 20s
  timeout client-fin 20s
  timeout server 1200s
  timeout tunnel 1h
  timeout check 5s
  timeout http-keep-alive 10s
  timeout http-request 200s

Currently we can only observe the issue in a production environment with plenty
of traffic, so logging might be problematic to turn on there. I'll see if we
can trigger the issue in a test system with logging.

/Andreas

From: Willy Tarreau
Sent: Monday, November 11, 2024 11:12
To: Andreas Bergander
Cc: haproxy@formilux.org
Subject: Re: RST_STREAM immediately after GET request in h2

[...]
Re: RST_STREAM immediately after GET request in h2
Hi Andreas,

On Mon, Nov 11, 2024 at 08:49:16AM +, Andreas Bergander wrote:
> After enabling h2 between haproxy and our backend servers we sometimes notice
> that haproxy sends an RST_STREAM immediately after the HEADERS packet with
> the GET request. I've logged the packets, and in this case less than 3 ms
> after haproxy sends the GET, haproxy then sends an RST_STREAM.
>
>   Time (seconds)   source       destination   Protocol   Packet
>   69.632322        haproxy  ->  backend       HTTP2      HEADERS[25]: GET /image.svg
>   69.635099        haproxy  ->  backend       HTTP2      RST_STREAM[25]
>
> What could be the cause of that? The stream gets closed before the backend
> can write the response. At first I suspected this was due to e.g. the end
> user closing their browser or similar. But it occurs a bit too often for
> that, and we have also received reports of missing images from end users. We
> are using haproxy 3.0.5. Any ideas what I can look into, or is this normal?

That's interesting. We're currently chasing an issue that a few users observe,
by which some H1 requests on the backend are sometimes immediately aborted with
a FIN, and for now we don't know what causes this (and we've searched a lot).
What you're observing here could very likely be the H2 variant of the same
issue.

> I can also see a bunch of WINDOW_UPDATE packets sent very frequently by
> haproxy, maybe that is expected and unrelated?
>
>   8.915556 haproxy -> backend HTTP2 WINDOW_UPDATE[127], WINDOW_UPDATE[0]
>   8.915594 haproxy -> backend HTTP2 WINDOW_UPDATE[127], WINDOW_UPDATE[0]
>   8.916445 haproxy -> backend HTTP2 WINDOW_UPDATE[127], WINDOW_UPDATE[0]
>   8.916487 haproxy -> backend HTTP2 WINDOW_UPDATE[127], WINDOW_UPDATE[0]
>   8.917168 haproxy -> backend HTTP2 WINDOW_UPDATE[127], WINDOW_UPDATE[0]
>   8.917220 haproxy -> backend HTTP2 WINDOW_UPDATE[127], WINDOW_UPDATE[0]
>   8.918075 haproxy -> backend HTTP2 WINDOW_UPDATE[127], WINDOW_UPDATE[0]

These can be perfectly normal and are quite common on the backend side.
WINDOW_UPDATE frames allow the receiver to refill the sender's budget so that
it can send more DATA frames. So as data get delivered to haproxy, it has to
send back the equivalent amount of refill. HAProxy keeps count of the received
data per stream and per connection and sends a single aggregate WINDOW_UPDATE
frame for each of them, once per processed buffer (a buffer is 16kB by
default). If the sender is sending large amounts of data in 16kB frames, in the
worst case you could see two WU frames per DATA frame (one for the connection
and one for the stream). But these frames are very small (13 bytes each), so
that's not a problem.

> This is the config we use for the backend:
>
>   backend bk_appsrv
>       mode http
>       balance roundrobin
>       option httpchk
>       http-check connect proto h2
>       http-check send meth GET uri /status ver HTTP/2
>       http-check expect rstatus 200
>       cookie shtick insert indirect nocache secure
>       server server1 server1:80 cookie appbfy1 check maxconn 512 proto h2
>       server server2 server2:80 cookie appbfy2 check maxconn 512 proto h2

Thanks for sharing this. Do you have anything specific in the frontend, such as
maybe "option abortonclose" or similar? And what are your timeouts? We're still
trying to figure out what conditions could provoke this; we were leaning
towards socket errors for H1, but your report could change a part of the
analysis.

If you don't have too much traffic, it could be useful to enable H2 traces to a
ring buffer, combined with network captures, so that comparing what the H2 mux
sees with what the network sees helps eliminate some possibilities.
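For the network-capture half of that suggestion, a capture limited to the
backend side is usually enough to compare against the mux traces. A possible
starting point, assuming the servers from the config quoted above listen on
port 80 (interface, filter and paths will need adjusting to the environment):

  tcpdump -i any -s 0 -w /tmp/backend-h2.pcap '(host server1 or host server2) and port 80'

Since the backend traffic here is cleartext h2, the resulting pcap can be
decoded directly (for example in Wireshark with the HTTP2 dissector) and lined
up against the trace timestamps.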
Traces can be enabled in the config by adding first a "ring" section and second
another "global" section with traces in it. Note that "trace" is marked
experimental in 3.0 since it's ugly to declare it in the global section, and
was simplified in 3.1 with a new dedicated "traces" section.

  global
      ...

  ring buf1
      size 1073741824   # 1 GB
      format timed
      backing-file /tmp/h2-trace.bin

  global
      expose-experimental-directives
      trace h2 sink buf1
      trace h2 level developer
      trace h2 verbosity complete
      trace h2 start now

Note that each request will produce around 1kB of traces on average, so the
volume can be high. The file is a circular ring buffer which automatically
loops back and keeps 1GB of contiguous traces. You can increase the size if
needed. Also please be aware that each reload of haproxy will rotate the trace,
replacing the file with the same name suffixed by ".bak". This way, once you
capture the problem on the network, you can simply reload, and the old trace
(.bak) will contain the event and can safely be moved somewhere else (a second
reload will destroy it).

You can use dev/haring/haring on the file to read its contents. It will emit
them in text format, in chronological order. This allows to spot the relevant
request based on timing and URL. We can help you with this, but keep in mind
that some decoded requests will appear and may disclose some private
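To illustrate the haring step above, reading back such a ring file could look
roughly like this (a sketch: haring lives under dev/haring/ in the haproxy
source tree and is built from there; the grep pattern is just an example based
on the URL from the capture quoted above):

  ./dev/haring/haring /tmp/h2-trace.bin > /tmp/h2-trace.txt
  grep -n 'image.svg' /tmp/h2-trace.txt    # locate the aborted request by URL, then read around it

The ".bak" file left behind by a reload can be read the same way, which is
handy when rotating the trace right after reproducing the problem.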