stable-bot: WARNING: 34 bug fixes in queue for next release

2019-05-11 Thread stable-bot
Hi,

This is a friendly bot that watches fixes pending for the next haproxy-stable 
release!  One such e-mail is sent periodically once patches are waiting in the 
last maintenance branch, and an ideal release date is computed based on the 
severity of these fixes and their merge date.  Responses to this mail must be 
sent to the mailing list.

Last release 1.9.7 was issued on 2019/04/25.  There are currently 34 patches in
the queue, broken down as follows:
- 1 MAJOR, first one merged on 2019/04/30
- 21 MEDIUM, first one merged on 2019/04/29
- 12 MINOR, first one merged on 2019/04/29

Thus the computed ideal release date for 1.9.8 would be 2019/05/14, which is in 
one week or less.
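
As an aside, the "computed ideal release date" can be reconstructed from the
data in this mail. The per-severity delays are not stated here, so the values
in the sketch below are assumptions (two weeks for MAJOR is at least
consistent with the dates above: 2019/04/30 + 14 days = 2019/05/14); this is
an illustration, not the bot's actual code:

    #include <stdio.h>
    #include <time.h>

    /* Sketch: the ideal release date is the earliest "due date" over all
     * pending fixes, where each fix is due delay(severity) days after the
     * date the first fix of that severity was merged. Delays are assumed.
     */
    enum sev { MAJOR, MEDIUM, MINOR };
    static const int delay_days[] = { 14, 28, 56 }; /* assumptions */

    int main(void)
    {
        struct { enum sev s; struct tm merged; } first[] = {
            { MAJOR,  { .tm_year = 119, .tm_mon = 3, .tm_mday = 30,
                        .tm_hour = 12, .tm_isdst = -1 } }, /* 2019/04/30 */
            { MEDIUM, { .tm_year = 119, .tm_mon = 3, .tm_mday = 29,
                        .tm_hour = 12, .tm_isdst = -1 } }, /* 2019/04/29 */
            { MINOR,  { .tm_year = 119, .tm_mon = 3, .tm_mday = 29,
                        .tm_hour = 12, .tm_isdst = -1 } }, /* 2019/04/29 */
        };
        time_t best = 0;
        char buf[16];

        for (unsigned i = 0; i < sizeof(first) / sizeof(first[0]); i++) {
            struct tm tm = first[i].merged;
            time_t due = mktime(&tm) + (time_t)delay_days[first[i].s] * 86400;
            if (!best || due < best)
                best = due;
        }
        strftime(buf, sizeof(buf), "%Y/%m/%d", localtime(&best));
        printf("ideal release date: %s\n", buf); /* prints 2019/05/14 */
        return 0;
    }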

The current list of patches in the queue is:
- MAJOR   : map/acl: real fix segfault during show map/acl on CLI
- MEDIUM  : checks: make sure the warmup task takes the server lock
- MEDIUM  : h2: Make sure we set send_list to NULL in h2_detach().
- MEDIUM  : mux-h2: properly deal with too large headers frames
- MEDIUM  : h2: Revamp the way send subscriptions works.
- MEDIUM  : streams: Don't add CF_WRITE_ERROR if early data were rejected.
- MEDIUM  : h2/htx: never leave a trailers block alone with no EOM block
- MEDIUM  : connections: Make sure we remove CO_FL_SESS_IDLE on disown.
- MEDIUM  : ssl: Use the early_data API the right way.
- MEDIUM  : h2: Don't check send_wait to know if we're in the send_list.
- MEDIUM  : channels: Don't forget to reset output in channel_erase().
- MEDIUM  : servers: fix typo "src" instead of "srv"
- MEDIUM  : http: Use pointer to the begining of input to parse message headers
- MEDIUM  : spoe: arg len encoded in previous frag frame but len changed
- MEDIUM  : ssl: Don't attempt to use early data with libressl.
- MEDIUM  : pattern: fix memory leak in regex pattern functions
- MEDIUM  : spoe: Be sure the sample is found before setting its context
- MEDIUM  : h2/htx: always fail on too large trailers
- MEDIUM  : port_range: Make the ring buffer lock-free.
- MEDIUM  : contrib/modsecurity: If host header is NULL, don't try to strdup it
- MEDIUM  : listener: Fix how unlimited number of consecutive accepts is handled
- MEDIUM  : mux-h2/htx: never wait for EOM when processing trailers
- MINOR   : logs/threads: properly split the log area upon startup
- MINOR   : mux-h1: Fix the parsing of trailers
- MINOR   : mux-h2: fix the condition to close a cs-less h2s on the backend
- MINOR   : mworker/ssl: close OpenSSL FDs on reload
- MINOR   : http: Call stream_inc_be_http_req_ctr() only one time per request
- MINOR   : log: properly free memory on logformat parse error and deinit()
- MINOR   : mux-h2: rely on trailers output not input to turn them to empty data
- MINOR   : stream: Attach the read side on the response as soon as possible
- MINOR   : checks: free memory allocated for tasklets
- MINOR   : htx: Never transfer more than expected in htx_xfer_blks()
- MINOR   : haproxy: fix rule->file memory leak
- MINOR   : activity: always initialize the profiling variable

---
The haproxy stable-bot is freely provided by HAProxy Technologies to help 
improve the quality of each HAProxy release.  If you have any issue with these 
emails or if you want to suggest some improvements, please post them on the 
list so that the solutions suiting the most users can be found.



stable-bot: WARNING: 6 bug fixes in queue for next release

2019-05-11 Thread stable-bot
Hi,

This is a friendly bot that watches fixes pending for the next haproxy-stable 
release!  One such e-mail is sent periodically once patches are waiting in the 
last maintenance branch, and an ideal release date is computed based on the 
severity of these fixes and their merge date.  Responses to this mail must be 
sent to the mailing list.

Last release 1.8.20 was issued on 2019/04/25.  There are currently 6 patches in
the queue, broken down as follows:
- 1 MAJOR, first one merged on 2019/04/30
- 4 MEDIUM, first one merged on 2019/04/29
- 1 MINOR, first one merged on 2019/04/29

Thus the computed ideal release date for 1.8.21 would be 2019/05/14, which is 
in one week or less.

The current list of patches in the queue is:
- MAJOR   : map/acl: real fix segfault during show map/acl on CLI
- MEDIUM  : listener: Fix how unlimited number of consecutive accepts is handled
- MEDIUM  : contrib/modsecurity: If host header is NULL, don't try to strdup it
- MEDIUM  : spoe: arg len encoded in previous frag frame but len changed
- MEDIUM  : port_range: Make the ring buffer lock-free.
- MINOR   : http: Call stream_inc_be_http_req_ctr() only one time per request

---
The haproxy stable-bot is freely provided by HAProxy Technologies to help 
improve the quality of each HAProxy release.  If you have any issue with these 
emails or if you want to suggest some improvements, please post them on the 
list so that the solutions suiting the most users can be found.



Re: [PATCH] BUILD: common: Add __ha_cas_dw fallback for single threaded builds

2019-05-11 Thread Willy Tarreau
On Fri, May 10, 2019 at 11:52:31AM +0200, Willy Tarreau wrote:
> > Actually I think there's an additional change needed in my patch. By
> > passing the parameters to HA_ATOMIC_CAS we end up attempting to
> > dereference a void *. So this needs a cast to a proper type. For
> > what it's worth I'll send a v2 that does this.
> 
> OK, but since it's already merged, please send an incremental patch.

Don't waste your time; I've now fixed it for good.

Willy
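
For readers following the thread, here is a minimal standalone sketch of what
such a single-threaded fallback can look like. It only illustrates the casting
point discussed above (the void * parameters must be converted to a concrete
pointer type before being dereferenced); it is not the patch that was actually
merged:

    #include <stdio.h>

    /* Double-word CAS fallback for single-threaded builds: with threads
     * disabled there is no concurrency, so plain loads and stores suffice.
     * The void * arguments are cast to long * before any dereference.
     */
    static int __ha_cas_dw(void *target, void *compare, const void *set)
    {
        long *tgt = (long *)target;
        long *cmp = (long *)compare;
        const long *new = (const long *)set;

        if (tgt[0] == cmp[0] && tgt[1] == cmp[1]) {
            tgt[0] = new[0];
            tgt[1] = new[1];
            return 1;
        }
        /* like a real CAS, report back the value actually observed */
        cmp[0] = tgt[0];
        cmp[1] = tgt[1];
        return 0;
    }

    int main(void)
    {
        long val[2] = { 1, 2 }, exp[2] = { 1, 2 }, set[2] = { 3, 4 };

        printf("ok=%d val={%ld,%ld}\n", __ha_cas_dw(val, exp, set), val[0], val[1]);
        return 0;
    }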



Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-11 Thread Maciej Zdeb
Patch applied, fingers crossed, testing! :-)

Thanks!

On Sat, May 11, 2019 at 14:58, Willy Tarreau wrote:

> On Sat, May 11, 2019 at 11:01:42AM +0200, Willy Tarreau wrote:
> > On Sat, May 11, 2019 at 10:52:35AM +0200, Willy Tarreau wrote:
> > > I certainly made a few reasoning mistakes above but I don't see anything
> > > in the code preventing this case from happening.
> > >
> > > Thus I'd like you to try the attached patch which is supposed to prevent
> > > this scenario from happening. At least I've verified that it doesn't
> > > break the h2spec test suite.
> >
> > While trying to check if it still applied to the latest 1.9 I figured
> > that it corresponds to what Olivier had also found and fixed in his
> > latest patch :-/  The positive point is that my analysis was correct.
> >
> > So I'm afraid that if it still fails with his fix, we'll need another
> > core :-(
>
> Actually not, Olivier's fix is incomplete regarding the scenario I
> proposed:
> - in h2s_frt_make_resp_data() we can set H2_SF_BLK_SFCTL and remove the
>   element from the list
> - then in h2_shutr() and h2_shutw(), we check if the list is empty before
>   subscribing the element, which is true after the case above
> - then in h2c_update_all_ws() we still have H2_SF_BLK_SFCTL with the item
>   in the send_list, thus LIST_ADDQ() adds it a second time.
>
> Thus the first part of the patch I sent is still required, I'm attaching it
> again, rebased on top of Olivier's patch and simplified so that we don't
> detach then re-attach.
>
> I'm still keeping hope ;-)
>
> Willy
>


Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-11 Thread Willy Tarreau
On Sat, May 11, 2019 at 11:01:42AM +0200, Willy Tarreau wrote:
> On Sat, May 11, 2019 at 10:52:35AM +0200, Willy Tarreau wrote:
> > I certainly made a few reasoning mistakes above but I don't see anything
> > in the code preventing this case from happening.
> > 
> > Thus I'd like you to try the attached patch which is supposed to prevent
> > this scenario from happening. At least I've verified that it doesn't
> > break the h2spec test suite.
> 
> While trying to check if it still applied to the latest 1.9 I figured
> that it corresponds to what Olivier had also found and fixed in his
> latest patch :-/  The positive point is that my analysis was correct.
> 
> So I'm afraid that if it still fails with his fix, we'll need another
> core :-(

Actually not, Olivier's fix is incomplete regarding the scenario I proposed:
- in h2s_frt_make_resp_data() we can set H2_SF_BLK_SFCTL and remove the
  element from the list
- then in h2_shutr() and h2_shutw(), we check if the list is empty before
  subscribing the element, which is true after the case above
- then in h2c_update_all_ws() we still have H2_SF_BLK_SFCTL with the item
  in the send_list, thus LIST_ADDQ() adds it a second time.

Thus the first part of the patch I sent is still required, I'm attaching it
again, rebased on top of Olivier's patch and simplified so that we don't
detach then re-attach.

I'm still keeping hope ;-)

Willy
diff --git a/src/mux_h2.c b/src/mux_h2.c
index d201921..94e4e5e 100644
--- a/src/mux_h2.c
+++ b/src/mux_h2.c
@@ -1484,9 +1484,8 @@ static void h2c_update_all_ws(struct h2c *h2c, int diff)
 
 	if (h2s->mws > 0 && (h2s->flags & H2_SF_BLK_SFCTL)) {
 		h2s->flags &= ~H2_SF_BLK_SFCTL;
-		if (h2s->send_wait)
+		if (h2s->send_wait && LIST_ISEMPTY(&h2s->list))
 			LIST_ADDQ(&h2c->send_list, &h2s->list);
-
 	}
 
 	node = eb32_next(node);
@@ -1791,9 +1790,8 @@ static int h2c_handle_window_update(struct h2c *h2c, struct h2s *h2s)
 		h2s->mws += inc;
 		if (h2s->mws > 0 && (h2s->flags & H2_SF_BLK_SFCTL)) {
 			h2s->flags &= ~H2_SF_BLK_SFCTL;
-			if (h2s->send_wait)
+			if (h2s->send_wait && LIST_ISEMPTY(&h2s->list))
 				LIST_ADDQ(&h2c->send_list, &h2s->list);
-
 		}
 	}
 	else {


Fwd: Paid Guest Post Inquiry - Content Collaboration Opportunity

2019-05-11 Thread Levente seo
Hello,

I would like to inquire about publishing a guest post on your site - unique
& objective content, not self-promotional and of course relevant to your
site's audience.

We'd be willing to pay for it.

Please let me know how we can proceed.

Looking forward to working with you,

Levente


Guide Lines for Link Publishing - April 2019.docx
Description: MS-Word 2007 document


Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-11 Thread Maciej Zdeb
> What I find very strange is why you're possibly the only one seeing this
> (and maybe also @serimin on github issue #94). If we could figure what
> makes your case specific it could help narrow the issue down. I'm seeing
> that you have a very simple Lua service to respond to health checks, so
> I've been thinking that maybe we do have some remaining bugs when Lua is
> accessed over H2 (e.g. incorrect length or whatever), but it's not the
> case on your connection since there are something like 17 streams so we
> can rule out the hypothesis of a health check connection, and thus that
> Lua was used.
>

Correct, Lua applets are used for health checking, and only for a specific
list of clients (distinguished by source IP). Health checks are made over
HTTP 1.0 (not even 1.1), so they should not affect H2 in HAProxy. This
particular HAProxy instance is used for serving images, so the responses are
quite large and so is the traffic. I'm using HAProxy 1.9.7 on other machines
without that problem, so the traffic pattern or the config must matter.

The difference from the other instances of HAProxy 1.9.7 I'm using is the
consistent hashing (hash-type consistent, balance hdr(...) and a server id
specified in the config for each server), and also the inter, fastinter,
downinter and slowstart settings.
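
For illustration, here is a hypothetical backend combining the settings
described above. The backend name, header, addresses and timing values are
invented, not taken from the real configuration:

    backend images
        balance hdr(X-Client-Id)
        hash-type consistent
        server img1 192.0.2.10:8080 id 1 check inter 2s fastinter 500ms downinter 5s slowstart 30s
        server img2 192.0.2.11:8080 id 2 check inter 2s fastinter 500ms downinter 5s slowstart 30s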


Re: HAProxy 1.9.6 unresponsive

2019-05-11 Thread Willy Tarreau
Hi Patrick,

On Fri, May 10, 2019 at 09:17:25AM -0400, Patrick Hemmer wrote:
> So I see a few updates on some of the other 100% CPU usage threads, and that
> some fixes have been pushed. Are any of those in relation to this issue? Or
> is this one still outstanding?

Apparently we've pulled on a long piece of string and uncovered a series of
such bugs. It's likely that different people have been affected by different
bugs. We still have the issue Maciej is experiencing, which I'd really like
to nail down, given that the last occurrence doesn't seem to make sense: the
code looks correct after Olivier's fix.

Thanks,
Willy



Re: [PATCH] new contrib proposal / exec Python & Lua scripts

2019-05-11 Thread Willy Tarreau
Hi Thierry,

I just stumbled upon the patch series below, which you sent a while ago. I
see that you didn't receive any feedback on it, but I see no reason not to
merge it, as it must still be valid given that it's outside of the core. Do
you have any objection to it getting merged? Or maybe you even have a newer
version? This could be a nice bootstrap for people who want to try to
create new agents.

Thanks,
Willy

On Sun, Feb 25, 2018 at 10:00:01PM +0100, Thierry Fournier wrote:
> Hi,
> 
> Some say that SPOE is a great method for some things, but in practice it
> is not really accessible because it requires C development.
> 
> I wrote a server which can bind SPOP messages to Python and/or Lua
> functions. The function does the necessary processing and returns
> variables using a SPOP ack.
> 
> The patches are in attachment. Below, an example of python script.
> 
> Thierry
> 
> 
>import spoa
>import ipaddress
>from pprint import pprint
> 
>def check_client_ip(args):
> 
> pprint(args)
> # This display:
> # [{'name': '', 'value': True},
> #  {'name': '', 'value': 1234L},
> #  {'name': '', 'value': IPv4Address(u'127.0.0.1')},
> #  {'name': '', 'value': IPv6Address(u'::55')},
> #  {'name': '', 'value': '127.0.0.1:10001'}]
> 
> spoa.set_var_null("null", spoa.scope_txn)
> spoa.set_var_boolean("boolean", spoa.scope_txn, True)
> spoa.set_var_int32("int32", spoa.scope_txn, 1234)
> spoa.set_var_uint32("uint32", spoa.scope_txn, 1234)
> spoa.set_var_int64("int64", spoa.scope_txn, 1234)
> spoa.set_var_uint64("uint64", spoa.scope_txn, 1234)
> spoa.set_var_ipv4("ipv4", spoa.scope_txn, 
> ipaddress.IPv4Address(u"127.0.0.1"))
> spoa.set_var_ipv6("ipv6", spoa.scope_txn, 
> ipaddress.IPv6Address(u"1::f"))
> spoa.set_var_str("str", spoa.scope_txn, "1::f")
> spoa.set_var_bin("bin", spoa.scope_txn, "1:\x01:\x02f\x00\x00")
>   # HAProxy display:
>   # [debug converter] type: any <>
>   # [debug converter] type: bool <1>
>   # [debug converter] type: sint <1234>
>   # [debug converter] type: sint <1234>
>   # [debug converter] type: sint <1234>
>   # [debug converter] type: sint <1234>
>   # [debug converter] type: ipv4 <127.0.0.1>
>   # [debug converter] type: ipv6 <1::f>
>   # [debug converter] type: str <1::f>
>   # [debug converter] type: bin <1:.:.f>
> 
> return
> 
> 

> From 0794044c73b7361560ebeb205d733f978bcd78af Mon Sep 17 00:00:00 2001
> From: Thierry FOURNIER 
> Date: Fri, 23 Feb 2018 11:40:03 +0100
> Subject: [PATCH 01/14] MINOR: spoa-server: Clone the v1.7 spoa-example project
> 
> This is a working base.
> ---
>  contrib/spoa_server/Makefile |   24 +
>  contrib/spoa_server/README   |   88 
>  contrib/spoa_server/spoa.c   | 1152 ++
>  3 files changed, 1264 insertions(+)
>  create mode 100644 contrib/spoa_server/Makefile
>  create mode 100644 contrib/spoa_server/README
>  create mode 100644 contrib/spoa_server/spoa.c
> 
> diff --git a/contrib/spoa_server/Makefile b/contrib/spoa_server/Makefile
> new file mode 100644
> index 000..e6b7c53
> --- /dev/null
> +++ b/contrib/spoa_server/Makefile
> @@ -0,0 +1,24 @@
> +DESTDIR =
> +PREFIX  = /usr/local
> +BINDIR  = $(PREFIX)/bin
> +
> +CC = gcc
> +LD = $(CC)
> +
> +CFLAGS  = -g -O2 -Wall -Werror -pthread
> +LDFLAGS = -lpthread
> +
> +OBJS = spoa.o
> +
> +
> +spoa: $(OBJS)
> + $(LD) $(LDFLAGS) -o $@ $^
> +
> +install: spoa
> + install spoa $(DESTDIR)$(BINDIR)
> +
> +clean:
> + rm -f spoa $(OBJS)
> +
> +%.o: %.c
> + $(CC) $(CFLAGS) -c -o $@ $<
> diff --git a/contrib/spoa_server/README b/contrib/spoa_server/README
> new file mode 100644
> index 000..7e376ee
> --- /dev/null
> +++ b/contrib/spoa_server/README
> @@ -0,0 +1,88 @@
> +A Random IP reputation service acting as a Stream Processing Offload Agent
> +--
> +
> +This is a very simple service that implements a "random" IP reputation
> +service. It will return random scores for all checked IP addresses. It only
> +shows you how to implement an IP reputation service, or similar services,
> +using the SPOE.
> +
> +
> +  Start the service
> +-
> +
> +After you have compiled it, to start the service, you just need to use "spoa"
> +binary:
> +
> +$> ./spoa -h
> +Usage: ./spoa [-h] [-d] [-p <port>] [-n <num-workers>]
> +-h                Print this message
> +-d                Enable the debug mode
> +-p <port>         Specify the port to listen on (default: 12345)
> +-n <num-workers>  Specify the number of workers (default: 5)
> +
> +Note: A worker is a thread.
> +
> +
> +  Configure a SPOE to use the service
> +---
> +
> +All information about SPOE configuration can be found in "doc/SPOE.txt".
> +Here is the configuration template to use for your SPOE:
> +
> +

Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-11 Thread Willy Tarreau
On Sat, May 11, 2019 at 10:52:35AM +0200, Willy Tarreau wrote:
> I certainly made a few reasoning mistakes above but I don't see anything
> in the code preventing this case from happening.
> 
> Thus I'd like you to try the attached patch which is supposed to prevent
> this scenario from happening. At least I've verified that it doesn't
> break the h2spec test suite.

While trying to check if it still applied to the latest 1.9 I figured
that it corresponds to what Olivier had also found and fixed in his
latest patch :-/  The positive point is that my analysis was correct.

So I'm afraid that if it still fails with his fix, we'll need another
core :-(

Thanks,
Willy



Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-11 Thread Willy Tarreau
On Sat, May 11, 2019 at 09:56:18AM +0200, Willy Tarreau wrote:
> I'm back to auditing the code to figure out how we can free an h2s without
> first detaching it from the lists. I hope to have yet another patch to
> propose to you.

So I'm seeing something in the code which bothers me. Since I'm not at
ease with these parts I may say stupid things, but overall the idea is
the following:

- everywhere before releasing an h2s we remove it from its lists, so it
  cannot exist in a pool while still belonging to a list; that's fine.
  => it means the element we're looping on is still live in the connection.

- at every place we're adding an element to the list, we take care of
  removing it first, or checking that it was not attached to a list. Every
  single place except one:

  static void h2c_update_all_ws(struct h2c *h2c, int diff)
  (...)
	if (h2s->mws > 0 && (h2s->flags & H2_SF_BLK_SFCTL)) {
		h2s->flags &= ~H2_SF_BLK_SFCTL;
		if (h2s->send_wait)
			LIST_ADDQ(&h2c->send_list, &h2s->list);

	}

  Instead the check is made on send_wait being non-null.

- in h2s_frt_make_resp_data() we test this send_wait pointer before deciding
  to remove the element from the list in case we're blocking on flow control:

	if (size <= 0) {
		h2s->flags |= H2_SF_BLK_SFCTL;
		if (h2s->send_wait) {
			LIST_DEL(&h2s->list);
			LIST_INIT(&h2s->list);
		}
		goto end;
	}

  So if for any reason send_wait is not set, nobody removes the element
  from the list.

- this function h2s_frt_make_resp_data() is called from h2_snd_buf(), which
  starts by resetting h2s->send_wait, and which deletes the stream from the
  list only when it managed to send some data.

So what I'm wondering is if something like the following scenario may happen:

1) a stream wants to send data, it calls h2_snd_buf() a first time. The
   output buffer is full, the call fails

2) the caller (si_cs_send()) detects the error and calls h2_subscribe() to
   attach the stream to the send_list

3) some room is made in the buffer, the list is walked over and this stream
   is woken up.

4) h2_snd_buf() is called again to try to send, send_wait is immediately reset

5) h2s_frt_make_resp_data() detects that the stream's flow control credit
   is exhausted and declines. It sets H2_SF_BLK_SFCTL but doesn't delete
   the element from the send list since send_wait is NULL

6) another stream still has credit and fills the connection's buffer with
   its data.

7) the server the first stream is associated to delivers more data while the
   stream is still waiting for a window update, and attempts to send again
   via h2_snd_buf()

8) h2_snd_buf() doesn't detect that the stream is blocked because the check
   on sending_list doesn't match, so it tries again and sets send_wait to
   NULL again, then tries to send. It fails again because the connection's
   buffer is full

9) si_cs_send() above calls h2_subscribe() again to set send_wait and try
   to add the stream to the send_list. But since the stream has the SFCTL
   flag it's not added again, and it simply stays in the send_list where it
   already was.

10) a window update finally arrives for this stream (or an initial window
size update for all streams). h2c_handle_window_update() is called, sees
the stream has the SFCTL flag, clears that flag, and appends the stream to
the end of the send_list, where it already was.

11) h2_process_mux() scans the list, finds the first reference to the list
element (where it was first added), and wakes it up. When going to the
next one however, it directly jumps to the one that was added past the
second addition, ignoring the orphans in the list that sit between the
first place of the element and its last one.

12) h2_snd_buf() is called again for this element, then deletes it from the
list, which has the consequence of unlinking the element from its last
predecessor and successor, but not from its first ones. Once the element
is deleted, it's initialized and its next points to itself (an empty
list head).

13) h2_process_mux() scans the list again to process more streams, and
reaches the first predecessor of this element, then visits the next
which is this element again, looping onto itself.
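
To make the suspected failure mode concrete, here is a standalone sketch using
a simplified re-implementation of the LIST_* macros (modelled on HAProxy's
list.h, but not the actual source). It shows that a second LIST_ADDQ of an
element already in the list, followed by a single LIST_DEL/LIST_INIT, leaves a
stale predecessor whose next pointer spins on the element forever, as in steps
9 to 13 above:

    #include <stdio.h>

    /* simplified circular doubly-linked list, in the style of HAProxy's */
    struct list { struct list *n, *p; };

    #define LIST_INIT(l)    do { (l)->n = (l)->p = (l); } while (0)
    #define LIST_ADDQ(h, e) do { (e)->p = (h)->p; (e)->p->n = (e); \
                                 (h)->p = (e); (e)->n = (h); } while (0)
    #define LIST_DEL(e)     do { (e)->n->p = (e)->p; (e)->p->n = (e)->n; } while (0)

    int main(void)
    {
        struct list head, a, x;
        struct list *cur;
        int steps = 0;

        LIST_INIT(&head);
        LIST_ADDQ(&head, &x);   /* head -> x; head.n now points to x */
        LIST_ADDQ(&head, &a);   /* head -> x -> a */
        LIST_ADDQ(&head, &x);   /* double add: x is relinked after a, but
                                 * head.n still points to x from the 1st add */

        LIST_DEL(&x);           /* unlinks x from its *last* neighbors only */
        LIST_INIT(&x);          /* now x.n == x, yet head.n still points to x */

        /* walk the list as h2_process_mux() would: it never reaches head */
        for (cur = head.n; cur != &head && steps < 5; cur = cur->n)
            printf("step %d: visiting %p\n", ++steps, (void *)cur);
        printf("gave up after %d steps: cur->n == cur, infinite loop\n", steps);
        return 0;
    }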

I certainly made a few reasoning mistakes above but I don't see anything
in the code preventing this case from happening.

Thus I'd like you to try the attached patch which is supposed to prevent
this scenario from happening. At least I've verified that it doesn't
break the h2spec test suite.

Thanks,
Willy
diff --git a/src/mux_h2.c b/src/mux_h2.c
index 67a297a..e2b1533 100644
--- a/src/mux_h2.c
+++ b/src/mux_h2.c
@@ -1484,9 +1484,10 @@ static void h2c_update_all_ws(struct h2c *h2c, int diff)
 
if (h2s->mws > 0 && (h2s->flags & H2_SF_BLK_SFCTL)) {

Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-11 Thread Willy Tarreau
Hi Maciej,

On Fri, May 10, 2019 at 06:45:21PM +0200, Maciej Zdeb wrote:
> Olivier, it's still looping, but differently:
> 
> 2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
> (gdb) n
> 2610    if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
> (gdb)
(...)
> (gdb) p *h2s
> $1 = {cs = 0x2f84190, sess = 0x819580 , h2c = 0x2f841a0, h1m
> = {state = 48, flags = 0, curr_len = 38317, body_len = 103852, next = 413,
> err_pos = -1, err_state = 0}, by_id = {node = {branches = {b = {0x34c0260,
>   0x321d330}}, node_p = 0x0, leaf_p = 0x0, bit = 1, pfx = 47005},
> key = 3}, id = 3, flags = 28675, mws = 1017461, errcode = H2_ERR_NO_ERROR,
> st = H2_SS_CLOSED, status = 200, body_len = 0, rxbuf = {size = 0, area =
> 0x0, data = 0,
> head = 0}, wait_event = {task = 0x2cd0ed0, handle = 0x3, events = 0},
> recv_wait = 0x0, send_wait = 0x321d390, list = {n = 0x321d3b8, p =
> 0x321d3b8}, sending_list = {n = 0x3174cf8, p = 0x3174cf8}}
(...)

In fact it's exactly the same. I've analyzed the core you sent me (thanks
a lot for this by the way), and it explains very well why it's looping,
though I don't yet understand how we entered this situation. It also
explains how it can randomly crash instead of looping.

What happens is that the item you're inspecting above has an empty
list element which points to itself. Thus it doesn't belong to a list
and its next is itself. But it's still attached to its predecessor. In
your core file it was very visible as the h2c list's head points to the
element and the list tail's tail points to it as well. So what we're
seeing is an element that was *not* unlinked from a list before being
removed / recycled. Once it was reused, it was initialized with an
empty list header and the next time the connection's list was visited,
it caused the loop.

What I find very strange is why you're possibly the only one seeing this
(and maybe also @serimin on github issue #94). If we could figure what
makes your case specific it could help narrow the issue down. I'm seeing
that you have a very simple Lua service to respond to health checks, so
I've been thinking that maybe we do have some remaining bugs when Lua is
accessed over H2 (e.g. incorrect length or whatever), but it's not the
case on your connection since there are something like 17 streams so we
can rule out the hypothesis of a health check connection, and thus that
Lua was used.

I'm back to auditing the code to figure out how we can free an h2s without
first detaching it from the lists. I hope to have yet another patch to
propose to you.

Thanks again for your traces, they've been amazingly useful!
Willy