Re: [PATCH v2] MINOR: config: rhttp: Don't require SSL when attach-srv name parsing
On Thu, May 23, 2024 at 03:58:45PM +0100, William Manley wrote: > On Thu, May 23, 2024, at 3:52 PM, William Manley wrote: > > On Thu, May 23, 2024, at 3:45 PM, Amaury Denoyelle wrote: > > [...] > > Thank you for your quick testing. I have manage to reproduce what I > > think could be a similar issue. I have updated my patch, can you please > > try the following new one ? > Yes, that fixes it. > Let me know whenever you want something tested, the way I've got it set up > it's pretty easy for me to do so. Perfect thank you very much for your help. I will merge the fix before the new upcoming 3.0 release. For now, no more rhttp evolution is planned until this release. > I can also report that I no longer need to avoid `nbthread 1` in the config > on the node. Presumably thanks to ceebb09744df367ad84586a341d9336f84f72bce > "rhttp: fix preconnect on single-thread". Indeed. I completely forgot this issue and re-stumbled onto it while implementing the latest rhttp features. -- Amaury Denoyelle
Re: [PATCH v2] MINOR: config: rhttp: Don't require SSL when attach-srv name parsing
On Thu, May 23, 2024 at 02:47:15PM +0100, William Manley wrote: > On Thu, May 23, 2024, at 2:08 PM, Amaury Denoyelle wrote: > > On Thu, May 23, 2024 at 11:55:13AM +0100, William Manley wrote: > > > On Thu, May 23, 2024, at 11:34 AM, William Manley wrote: > > > > On Thu, May 23, 2024, at 10:08 AM, Amaury Denoyelle wrote: > > > > > On Wed, May 22, 2024 at 04:58:44PM +0100, William Manley wrote: > > > > > > On Wed, May 22, 2024, at 1:06 PM, Amaury Denoyelle wrote: > > > > > > > FYI, I just merged a series of fix to improve reverse HTTP. It is > > > > > > > now > > > > > > > possible to use PROXY protocol on preconnect stage. Also, you > > > > > > > have the > > > > > > > availability to use PROXY v2 TLV to differentiate connections. > > > > > > > Note > > > > > > > however that PROXY unique-id is not supported for now due to > > > > > > > internal > > > > > > > API limitations. > > > > > > > > If you can do not hesitate to test this and report us if it's > > > > > > > > sufficient > > > > > > > for you. > > > > > > I've just tried this out and there's something about these changes > > > > > > that are causing my tests to fail. > > > > > > It seems to be triggered by "MEDIUM: rhttp: create session for > > > > > > active preconnect" > > > > > > Tested versions: > > > > > > eb89a7da33ae30da3ed61570aa1597987b59dff3 OK > > > > > > ceebb09744df367ad84586a341d9336f84f72bce OK > > > > > > 45b80aed70a597614e31b748328570785099dfec OK > > > > > > 12c40c25a9520fe3365950184fe724a1f4e91d03 BAD > > > > > > 60496e884e5220b9330a1d8b3a1218f7988c879a BAD > > > > > > 4a968d9d274a24e5d00bd3c03dd22fe2563b13af BAD > > > > > > I'll investigate further tomorrow. > > > > > > > Hum can you describe what sort of tests you are running ? > > > > > > > > [...] > > > > > > > > I've attached the configs used during the test. You'll notice that > > > > each block uses a different IP address from 127.0.0.*. This makes the > > > > TCP flow graph in wireshark meaningful. 
I've attached a packet capture > > > > of the test failing. I'd also attach a packet capture from it > > > > succeeding, but it's a bit big. I'll look into them and try to figure > > > > out where it's going wrong. > > > I've compared the good trace and the bad trace and I think I can see > > > where this is going wrong. My RHTTP block on the node looks like: > > > # Requests from the portal to the node arrive at this frontend over rhttp: > > > listen flask_reverse_proxy > > > mode http > > > log global > > > bind rhttp@rhttp_backend/srv > > > # We dynamically route to the listening socket based on the hostname, this > > > # way we can add new domains without a node deploy: > > > http-request capture req.hdr(X-Remote-IP) len 15 > > > http-request capture req.hdr(X-Remote-Port) len 5 > > > http-request set-dst hdr(X-Remote-IP) > > > http-request set-dst-port hdr(X-Remote-Port) > > > http-response add-header Vary X-Remote-IP,X-Remote-Port > > > server srv 0.0.0.0:0 source 127.0.0.8 > > > It seems like with the active preconnect commit (12c40c25) it ignores > > > set-dst, so instead of connecting to the IP requested (in this case > > > 127.0.0.7) it attempts to connect to 127.0.0.8. > > > Interesting. Could you please try the attached patch and tell me if this > > solves your issue ? Many thanks, > It doesn't fix it. It fails in the same way. Thank you for your quick testing. I have managed to reproduce what I think could be a similar issue. I have updated my patch, can you please try the following new one ?
-- Amaury Denoyelle From 28785a6594915b7cf91a29a5240d052dbc2523eb Mon Sep 17 00:00:00 2001 From: Amaury Denoyelle Date: Thu, 23 May 2024 15:06:52 +0200 Subject: [PATCH] BUG/MINOR: connection: fix session init on reverse --- src/connection.c | 3 +++ src/proto_rhttp.c | 2 +- 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/src/connection.c b/src/connection.c index d186c2f01..cdb1c7aaa 100644 --- a/src/connection.c +++ b/src/connection.c @@ -2785,6 +2785,7 @@ int conn_reverse(struct connection *conn) session_unown_conn(sess, conn); sess->origin = NULL;
Re: [PATCH v2] MINOR: config: rhttp: Don't require SSL when attach-srv name parsing
On Thu, May 23, 2024 at 11:55:13AM +0100, William Manley wrote: > On Thu, May 23, 2024, at 11:34 AM, William Manley wrote: > > On Thu, May 23, 2024, at 10:08 AM, Amaury Denoyelle wrote: > > > On Wed, May 22, 2024 at 04:58:44PM +0100, William Manley wrote: > > > > On Wed, May 22, 2024, at 1:06 PM, Amaury Denoyelle wrote: > > > > > FYI, I just merged a series of fix to improve reverse HTTP. It is now > > > > > possible to use PROXY protocol on preconnect stage. Also, you have the > > > > > availability to use PROXY v2 TLV to differentiate connections. Note > > > > > however that PROXY unique-id is not supported for now due to internal > > > > > API limitations. > > > > > > If you can do not hesitate to test this and report us if it's > > > > > > sufficient > > > > > for you. > > > > I've just tried this out and there's something about these changes that > > > > are causing my tests to fail. > > > > It seems to be triggered by "MEDIUM: rhttp: create session for active > > > > preconnect" > > > > Tested versions: > > > > eb89a7da33ae30da3ed61570aa1597987b59dff3 OK > > > > ceebb09744df367ad84586a341d9336f84f72bce OK > > > > 45b80aed70a597614e31b748328570785099dfec OK > > > > 12c40c25a9520fe3365950184fe724a1f4e91d03 BAD > > > > 60496e884e5220b9330a1d8b3a1218f7988c879a BAD > > > > 4a968d9d274a24e5d00bd3c03dd22fe2563b13af BAD > > > > I'll investigate further tomorrow. > > > > > Hum can you describe what sort of tests you are running ? > > > > [...] > > > > I've attached the configs used during the test. You'll notice that each > > block uses a different IP address from 127.0.0.*. This makes the TCP flow > > graph in wireshark meaningful. I've attached a packet capture of the test > > failing. I'd also attach a packet capture from it succeeding, but it's a > > bit big. I'll look into them and try to figure out where it's going wrong. > I've compared the good trace and the bad trace and I think I can see where > this is going wrong. 
My RHTTP block on the node looks like: > # Requests from the portal to the node arrive at this frontend over > rhttp: > listen flask_reverse_proxy > mode http > log global > bind rhttp@rhttp_backend/srv > # We dynamically route to the listening socket based on the > hostname, this > # way we can add new domains without a node deploy: > http-request capture req.hdr(X-Remote-IP) len 15 > http-request capture req.hdr(X-Remote-Port) len 5 > http-request set-dst hdr(X-Remote-IP) > http-request set-dst-port hdr(X-Remote-Port) > http-response add-header Vary X-Remote-IP,X-Remote-Port > server srv 0.0.0.0:0 source 127.0.0.8 > It seems like with the active preconnect commit (12c40c25) it ignores > set-dst, so instead of connecting to the IP requested (in this case > 127.0.0.7) it attempts to connect to 127.0.0.8. Interesting. Could you please try the attached patch and tell me if this solves your issue ? Many thanks, -- Amaury Denoyelle From f32cd9a542be2caee223d3902011199b472b361d Mon Sep 17 00:00:00 2001 From: Amaury Denoyelle Date: Thu, 23 May 2024 15:06:52 +0200 Subject: [PATCH] BUG/MINOR: connection: swap session addresses on reverse --- src/connection.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/src/connection.c b/src/connection.c index 97af8f65e..2df2fb700 100644 --- a/src/connection.c +++ b/src/connection.c @@ -2785,6 +2785,7 @@ int conn_reverse(struct connection *conn) session_unown_conn(sess, conn); sess->origin = NULL; session_free(sess); + sess = NULL; conn_set_owner(conn, NULL, NULL); conn->flags |= CO_FL_REVERSED; @@ -2802,6 +2803,8 @@ int conn_reverse(struct connection *conn) /* Invert source and destination addresses if already set. */ SWAP(conn->src, conn->dst); + if (sess) + SWAP(sess->src, sess->dst); conn->reverse.target = NULL; ha_free(&conn->reverse.name.area); -- 2.45.1
Re: [PATCH v2] MINOR: config: rhttp: Don't require SSL when attach-srv name parsing
On Wed, May 22, 2024 at 04:58:44PM +0100, William Manley wrote: > On Wed, May 22, 2024, at 1:06 PM, Amaury Denoyelle wrote: > > FYI, I just merged a series of fix to improve reverse HTTP. It is now > > possible to use PROXY protocol on preconnect stage. Also, you have the > > availability to use PROXY v2 TLV to differentiate connections. Note > > however that PROXY unique-id is not supported for now due to internal > > API limitations. > > > If you can do not hesitate to test this and report us if it's sufficient > > for you. > I've just tried this out and there's something about these changes that are > causing my tests to fail. > It seems to be triggered by "MEDIUM: rhttp: create session for active > preconnect" > Tested versions: > eb89a7da33ae30da3ed61570aa1597987b59dff3 OK > ceebb09744df367ad84586a341d9336f84f72bce OK > 45b80aed70a597614e31b748328570785099dfec OK > 12c40c25a9520fe3365950184fe724a1f4e91d03 BAD > 60496e884e5220b9330a1d8b3a1218f7988c879a BAD > 4a968d9d274a24e5d00bd3c03dd22fe2563b13af BAD > I'll investigate further tomorrow. Hum can you describe what sort of tests you are running ? -- Amaury Denoyelle
Re: [PATCH v2] MINOR: config: rhttp: Don't require SSL when attach-srv name parsing
On Tue, May 14, 2024 at 04:48:16PM +0200, Amaury Denoyelle wrote: > On Wed, May 08, 2024 at 11:43:11AM +0100, William Manley wrote: > > An attach-srv config line usually looks like this: > > tcp-request session attach-srv be/srv name ssl_c_s_dn(CN) > > while a rhttp server line usually looks like this: > > server srv rhttp@ sni req.hdr(host) > > The server sni argument is used as a key for looking up connection in the > > connection pool. The attach-srv name argument is used as a key for > > inserting connections into the pool. For it to work correctly they must > > match. There was a check that either both the attach-srv and server > > provide that key or neither does. > > It also checked that SSL and SNI was activated on the server. This is too > > strict. This patch removes that requirement. Now you can pass arbitrary > > expressions as the name expression. > > With this patch we also produce a more helpful and specific error message. > > I'm doing this as I want to use `fc_pp_unique_id` as the name. > > Arguably it would be easier to understand if instead of using `name` and > > `sni` for `attach-srv` and `server` rules it used the same term in both > > places - like "conn-pool-key" or something. That would make it clear that > > the two must match. But it's too late to change that now. > > [...] > Many thanks for your new patch. I have just merged it with only small > adaptation in the commit message. Note that however I tried to use PROXY > protocol with reverse HTTP to test it, but it appears that for now it is > impossible due to several haproxy internal limitations. > The main issue is that send-proxy and other related options on a server > line used for reverse HTTP are simply ignored, thus you cannot connect > to a bind with accept-proxy. I managed to fix this, but this is not > enough as PROXY protocol in LOCAL mode seems to not work completely as > expected either... FYI, I just merged a series of fixes to improve reverse HTTP.
It is now possible to use the PROXY protocol at the preconnect stage. Also, you now have the ability to use PROXY v2 TLVs to differentiate connections. Note however that PROXY unique-id is not supported for now due to internal API limitations. If you can, do not hesitate to test this and report back whether it's sufficient for you. Regards, -- Amaury Denoyelle
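To make the TLV idea concrete, a rough configuration sketch follows. This is my own illustration, not taken from the merged series: the TLV id 0xE1, the names, and the exact keyword placement (`set-proxy-v2-tlv-fmt` on the server line, `fc_pp_tlv` as the name expression) are assumptions to be checked against the documentation:

```
# Edge side: accept PROXY v2 on the bind receiving preconnects and
# key the pooled connections by a custom TLV (0xE1 is arbitrary)
frontend rhttp_in
    bind :8443 accept-proxy
    tcp-request session attach-srv be/srv name fc_pp_tlv(0xE1)

# Node side: send PROXY v2 on the preconnect, carrying the TLV
backend rhttp_out
    server srv rhttp@ send-proxy-v2 set-proxy-v2-tlv-fmt(0xE1) node-1
```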
Re: [PATCH v2] MINOR: config: rhttp: Don't require SSL when attach-srv name parsing
On Wed, May 08, 2024 at 11:43:11AM +0100, William Manley wrote: > An attach-srv config line usually looks like this: > tcp-request session attach-srv be/srv name ssl_c_s_dn(CN) > while a rhttp server line usually looks like this: > server srv rhttp@ sni req.hdr(host) > The server sni argument is used as a key for looking up connection in the > connection pool. The attach-srv name argument is used as a key for > inserting connections into the pool. For it to work correctly they must > match. There was a check that either both the attach-srv and server > provide that key or neither does. > It also checked that SSL and SNI was activated on the server. This is too > strict. This patch removes that requirement. Now you can pass arbitrary > expressions as the name expression. > With this patch we also produce a more helpful and specific error message. > I'm doing this as I want to use `fc_pp_unique_id` as the name. > Arguably it would be easier to understand if instead of using `name` and > `sni` for `attach-srv` and `server` rules it used the same term in both > places - like "conn-pool-key" or something. That would make it clear that > the two must match. But it's too late to change that now. > [...] Many thanks for your new patch. I have just merged it with only a small adaptation in the commit message. Note however that I tried to use the PROXY protocol with reverse HTTP to test it, but it appears that for now this is impossible due to several haproxy internal limitations. The main issue is that send-proxy and other related options on a server line used for reverse HTTP are simply ignored, so you cannot connect to a bind with accept-proxy. I managed to fix this, but it is not enough, as the PROXY protocol in LOCAL mode does not seem to work completely as expected either... -- Amaury Denoyelle
Re: some QUIC questions
On Mon, May 06, 2024 at 08:16:34PM +0200, Björn Jacke wrote: > On 06.05.24 15:34, Shawn Heisey wrote: > > On 5/6/24 06:02, Björn Jacke wrote: > > > frontend ft_443 > > > bind :::443 ssl crt /ssl/combined.pem > > > bind quic6@:443 ssl crt /ssl/combined.pem alpn h3 > > > option tcp-smart-accept > > > http-after-response add-header alt-svc 'h3=":443"; ma=600; > > > persistent=1' > > > > > > > frontend ft_quic_test > > > mode tcp > > > bind quic6@:443 ssl crt /ssl/combined.pem > > > use_backend local_smb > > > > > this results in this config check error thoug: > > > > > [ALERT] (3611777) : config : frontend 'ft_quic_test' : MUX > > > protocol 'quic' is not usable for 'bind quic6@:443' at > > > [/etc/haproxy/ haproxy.cfg:73]. > > > > > So a setup like this is not supported by HAProxy's QUIC > > > implementation currently, right? Is QUIC in HAProxy HTTP3 only for > > > now?\ > > > The alpn on the first config snippet only includes h3, not quic. Here > > are alpn and npn settings that allow some of the quic protocol > > variations as well as h3 itself: > for the http frontend sniplet h3 as only alpn protocol was intended. It > turned out to be a firewall causing making haproxy "ignore" the incoming > quic traffic, sorry for not finding that earlier. > Continuing the test with connection migration on network changes showed that > connection migration is not working. I'm not sure though if none of the > browsers do really support this or if this not being supported on the > haproxy server side. Does any of the QUIC experts here have some insights on > that? Indeed, haproxy currently does not implement connection migration. For now, haproxy will silently ignore datagrams received for a known connection from another network endpoint. We have started to talk about it so it's definitely on our roadmap. But first we have to update our documentation to reflect the current state. > > The second one is a tcp frontend ... 
I feel pretty sure that h3/quic > > requires http in the frontend, not tcp. > but for any non-http protocol using QUIC as transport layer a http type > frontend is obviously not the right choice. > So let me ask the question differently: is QUIC support in haproxy currently > only targeting at http3 support or is generic QUIC transport also on the > agenda? From the docs I can't find much about non-http related QUIC support. For the moment, QUIC in haproxy is only usable in HTTP mode with either HTTP/3 or HTTP/0.9. The latter is reserved for interop testing and ensures we can support multiple application protocols. As such, it could be possible to support non-http protocols without too much work. The biggest constraint for now is the lack of a testing client. But if you have a working setup for this, do not hesitate to share it with us; it would be extremely useful. -- Amaury Denoyelle
Re: [PATCH] MINOR: config: rhttp: Downgrade error on attach-srv name parsing
Hi William ! Sorry for the delay. We discussed this issue again this morning and here is my answer on your patch. It is definitely legitimate to want to be able to use reverse HTTP without SSL on the server line. However, the way that haproxy currently uses idle connections requires that at minimum the SNI parameter be set to match the name parameter of the corresponding attach-srv rule. If this is not the case, the connection will never be reused. But in fact, when rereading the haproxy code this morning, we found that it is valid to have the SNI parameter set even if SSL is not active, and this will produce the desired effect. We are definitely okay with merging an adjusted version of your patch for the moment, see my remarks below. However, using SNI on a server line without SSL is somewhat tricky. Thus we plan to add a new keyword to replace it when SSL is not used, with the same effect. Once this is done, you should update your configuration to use it. On Fri, Apr 12, 2024 at 02:29:30PM +0100, William Manley wrote: > An attach-srv config line usually looks like this: > tcp-request session attach-srv be/srv name ssl_c_s_dn(CN) > The name is a key that is used when looking up connections in the > connection pool. Without this patch you'd get an error if you passed > anything other than "ssl_c_s_dn(CN)" as the name expression. Now you can > pass arbitrary expressions and it will just warn you if you aren't > producing a configuration that is RFC compliant. > I'm doing this as I want to use `fc_pp_unique_id` as the name. I'm not 100% okay with your description here. The current code condition does not check that "ssl_c_s_dn(CN)" is used as the expression, but rather that if name is defined for an attach-srv rule, the targeted server must have both SSL and SNI activated. I think this paragraph should be reworded. 
> --- > src/tcp_act.c | 3 +-- > 1 file changed, 1 insertion(+), 2 deletions(-) > diff --git a/src/tcp_act.c b/src/tcp_act.c > index a88fab4af..4d2a56c67 100644 > --- a/src/tcp_act.c > +++ b/src/tcp_act.c > @@ -522,8 +522,7 @@ static int tcp_check_attach_srv(struct act_rule *rule, > struct proxy *px, char ** > > if ((rule->arg.attach_srv.name && (!srv->use_ssl || !srv->sni_expr)) || > (!rule->arg.attach_srv.name && srv->use_ssl && srv->sni_expr)) { > - memprintf(err, "attach-srv rule: connection will never be used; > either specify name argument in conjunction with defined SSL SNI on targeted > server or none of these"); > - return 0; > + ha_warning("attach-srv rule: connection may never be used; > usually name argument is defined SSL SNI on targeted server or none of > these"); > } > > rule->arg.attach_srv.srv = srv; > -- > 2.34.1 > I think an alternative patch may be more desirable here. We could update the condition with the following statement without removing the fatal error : > if ((rule->arg.attach_srv.name && !srv->sni_expr) || > (!rule->arg.attach_srv.name && srv->sni_expr)) { This reflects the current mandatory usage of reverse-http : if name is used on attach-srv, the sni keyword must be specified on the server line. Let me know your thoughts. If you're okay, can you please adjust your patch ? If not, do not hesitate to tell me if there is something you disagree with. Thanks, -- Amaury Denoyelle
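To make the name/sni pairing concrete, a configuration along the lines being discussed might look as follows. This is only a sketch under the thread's assumptions: `fc_pp_unique_id` requires the PROXY protocol on the incoming connection, and (per the caveat above) using `sni` without `ssl` is the tricky part the planned keyword is meant to replace:

```
# The attach-srv name expression is the key used to insert incoming
# preconnect connections into the pool of server be/srv...
frontend rhttp_in
    bind :8443 accept-proxy
    tcp-request session attach-srv be/srv name fc_pp_unique_id

# ...and the server sni expression is the lookup key; the two
# expressions must match for a pooled connection to ever be reused
backend be
    mode http
    server srv rhttp@ sni fc_pp_unique_id
```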
Re: [PATCH] MINOR: config: rhttp: Downgrade error on attach-srv name parsing
On Fri, Apr 12, 2024 at 03:37:56PM +0200, Willy Tarreau wrote: > Hi! > On Fri, Apr 12, 2024 at 02:29:30PM +0100, William Manley wrote: > > An attach-srv config line usually looks like this: > > > tcp-request session attach-srv be/srv name ssl_c_s_dn(CN) > > > The name is a key that is used when looking up connections in the > > connection pool. Without this patch you'd get an error if you passed > > anything other than "ssl_c_s_dn(CN)" as the name expression. Now you can > > pass arbitrary expressions and it will just warn you if you aren't > > producing a configuration that is RFC compliant. > > > I'm doing this as I want to use `fc_pp_unique_id` as the name. > > --- > > src/tcp_act.c | 3 +-- > > 1 file changed, 1 insertion(+), 2 deletions(-) > > > diff --git a/src/tcp_act.c b/src/tcp_act.c > > index a88fab4af..4d2a56c67 100644 > > --- a/src/tcp_act.c > > +++ b/src/tcp_act.c > > @@ -522,8 +522,7 @@ static int tcp_check_attach_srv(struct act_rule *rule, > > struct proxy *px, char ** > > > > if ((rule->arg.attach_srv.name && (!srv->use_ssl || !srv->sni_expr)) || > > (!rule->arg.attach_srv.name && srv->use_ssl && srv->sni_expr)) { > > - memprintf(err, "attach-srv rule: connection will never be used; > > either specify name argument in conjunction with defined SSL SNI on > > targeted server or none of these"); > > - return 0; > > + ha_warning("attach-srv rule: connection may never be used; > > usually name argument is defined SSL SNI on targeted server or none of > > these"); > Well, I consider that any valid (and useful) configuration must be > writable without a warning. So if you have a valid use case with a > different expression, here you still have no way to express it without > the warning. In this case I'd suggest to use ha_diag_warning() instead, > it will only report this when starting with -dD (config diagnostic mode). I have a doubt though: will this kind of configuration really work ? 
I thought that for the moment, if the name parameter is specified, it is mandatory to use a server with SSL+SNI. -- Amaury Denoyelle
Re: [PATCH] BUG/MINOR: server: fix persistence cookie for dynamic servers
On Wed, Mar 27, 2024 at 02:34:25PM +, Damien Claisse wrote: > When adding a server dynamically, we observe that when a backend has a > dynamic persistence cookie, the new server has no cookie as we receive > the following HTTP header: > set-cookie: test-cookie=; Expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/ > Whereas we were expecting to receive something like the following, which > is what we receive for a server added in the config file: > set-cookie: test-cookie=abcdef1234567890; path=/ > After investigating code path, srv_set_dyncookie() is never called when > adding a server through CLI, it is only called when parsing config file > or using "set server bkd1/srv1 addr". > To fix this, call srv_set_dyncookie() inside cli_parse_add_server(). > This patch must be backported up to 2.4. > --- > src/server.c | 5 + > 1 file changed, 5 insertions(+) > diff --git a/src/server.c b/src/server.c > index 555cae82c..a93798f03 100644 > --- a/src/server.c > +++ b/src/server.c > @@ -5732,6 +5732,11 @@ static int cli_parse_add_server(char **args, char > *payload, struct appctx *appct >*/ > srv->rid = (srv_id_reuse_cnt) ? (srv_id_reuse_cnt / 2) : 0; > > + /* generate new server's dynamic cookie if enabled on backend */ > + if (be->ck_opts & PR_CK_DYNAMIC) { > + srv_set_dyncookie(srv); > + } > + > /* adding server cannot fail when we reach this: >* publishing EVENT_HDL_SUB_SERVER_ADD >*/ > -- > 2.34.1 > Thank you very much. This was merged in our development tree. In the meantime, I also enabled "cookie" keyword for dynamic servers as nothing prevented it. -- Amaury Denoyelle
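As a usage sketch of what this patch fixes (backend name, cookie key, and addresses are illustrative): with dynamic cookies enabled on a backend, a server added over the runtime CLI should now receive a computed cookie just like a server declared in the configuration file:

```
# haproxy.cfg
backend bkd1
    cookie test-cookie insert dynamic
    dynamic-cookie-key my-secret-key

# at runtime, via the stats socket:
#   add server bkd1/srv1 192.0.2.10:80
#   enable server bkd1/srv1
# responses routed to srv1 should then carry a non-empty cookie:
#   set-cookie: test-cookie=<hash>; path=/
```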
Re: [PATCH] BUG/MINOR: server: fix persistence cookie for dynamic servers
On Fri, Mar 22, 2024 at 09:45:59AM +, Damien Claisse wrote: > Hi Amaury! > Sorry for the HTML message, I have to use a *** well-known enterprise MUA :( > Le 22/03/2024 09:09, « Amaury Denoyelle » a écrit : >> This patch raises several interrogations. First, dynamic servers are >> currently intended to not support cookies, hence why the keyword is >> disabled for them. It has been done as a convenience but maybe it would >> be a good time to review it carefully and see if whole cookie support >> can be enabled. > Indeed, there could be an opportunity to revisit this. What we observed is > that, even with dynamic servers, calling “set server bkd1/srv1 addr a.b.c.d” > would re-add the dynamic cookie for this server, and calling “enable > dynamic-cookie backend bkd1” would re-compute cookie for all servers in the > backend. It is expected as in these calls code path there is a call to > srv_set_dyncookie(). So there currently is at least a partial support for > dynamic cookies on dynamic servers, even if it’s not expected :) >> Second, I'm unsure srv_set_dyncookie() should be called on >> _srv_parse_init(). This function is also called for configuration file >> servers. In particular, I do not know how we should handled duplicate >> cookie values in this case. > Not sure we really create duplicates here as we basically reset the same > cookie on the server, not on another one in the backend, at least I didn’t > observe such warnings in my logs while testing this patch yet. But I agree > that in the context of haproxy startup there would be 2 calls to > srv_set_dyncookie() per server which is useless and could create issues. > Maybe I could move that at the end of cli_parse_add_server()? Feel free to > suggest any better option. Okay, I had the time to review dynamic cookie handling. For me it's fine to use srv_set_dyncookie() for dynamic servers. I think however its new invocation should be placed near the end of cli_parse_add_server(). 
Maybe with an extra check (be->ck_opts & PR_CK_DYNAMIC) to be consistent with the invocation in check_config_validity(). This location will prevent a duplicate invocation of srv_set_dyncookie() for static servers, as for them the function must be called during post-parsing to ensure the backend key has been parsed. Could you please adjust your patch ? Also, as it is a bugfix, you can specify that it must be backported up to 2.4. In the meantime, I will continue code inspection to determine if the cookie keyword can also be enabled for dynamic servers. Regards, -- Amaury Denoyelle
Re: [PATCH] BUG/MINOR: server: fix persistence cookie for dynamic servers
On Thu, Mar 21, 2024 at 02:37:06PM +, Damien Claisse wrote: > When adding a server dynamically, we observe that when a backend has a > dynamic persistence cookie, the new server has no cookie as we receive > the following HTTP header: > set-cookie: test-cookie=; Expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/ > Whereas we were expecting to receive something like the following, which > is what we receive for a server added in the config file: > set-cookie: test-cookie=abcdef1234567890; path=/ > After investigating code path, it seems srv_set_dyncookie() is never > called when adding a server through CLI, it is only called when parsing > config file or using "set server bkd1/srv1 addr". > To fix this, add a call to srv_set_dyncookie() inside _srv_parse_init() > which is used on the dynamic server initialization path. > --- > src/server.c | 4 +++- > 1 file changed, 3 insertions(+), 1 deletion(-) > diff --git a/src/server.c b/src/server.c > index e22b33806..c954a8de3 100644 > --- a/src/server.c > +++ b/src/server.c > @@ -3306,7 +3306,9 @@ static int _srv_parse_init(struct server **srv, char > **args, int *cur_arg, > /* >* we don't need to lock the server here, because >* we are in the process of initializing. > - * > + */ > srv_set_dyncookie(newsrv); > + /* >* Note that the server is not attached into the proxy tree if >* this is a dynamic server. >*/ > -- > 2.34.1 > Thanks for your contribution ! This patch raises several questions. First, dynamic servers are currently intended to not support cookies, hence why the keyword is disabled for them. It has been done as a convenience, but maybe it would be a good time to review it carefully and see if whole cookie support can be enabled. Second, I'm unsure srv_set_dyncookie() should be called in _srv_parse_init(). This function is also called for configuration file servers. In particular, I do not know how we should handle duplicate cookie values in this case. -- Amaury Denoyelle
Re: http/3 flow control equivalent
On Thu, Feb 22, 2024 at 12:47:04PM +1100, Miles Hampson wrote: > Hi, > I have noticed that transferring large files with http/2 to a backend > server through HAProxy 2.9 (and earlier) over a network link with a bit of > latency can be extremely slow unless the HTTP/2 Flow Control window size is > increased quite a bit (i.e. 4x the default works well in our situation). > We are now trying out http/3 and hit the same issue. I don't think there is > any connection migration, this is just from a test server. There don't seem > to be any tune.h3 settings in the config manual, are there any connection > settings that I might be able to adjust to improve this situation? > I haven't investigated much so far because I don't have any understanding > of how QUIC stream flow control works yet, so the only thing I have tried > is increasing the QUIC receive buffer (this is on an Ubuntu 22.04 server) Alas, for the moment HTTP/3 uploads are indeed limited by the flow control, which is equivalent to tune.bufsize. To remove this limitation, I wish to implement a multi-buffer receive per stream, but for now this is not yet planned. -- Amaury Denoyelle
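For reference, a hedged sketch of the tunables touched on in this exchange (the values are illustrative, not recommendations; as stated above, HTTP/3 has no dedicated window setting at this point, its per-stream credit follows the buffer size):

```
global
    # HTTP/2: raise the per-stream receive window on frontends
    # (roughly 4x the 65535 default helped in the reporter's case)
    tune.h2.fe.initial-window-size 262144
    # HTTP/3: the per-stream flow-control credit is currently
    # tied to the general buffer size
    tune.bufsize 65536
```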
Re: unsupported protocol family 2 for address 'quic4@0.0.0.0:4
On Wed, Nov 08, 2023 at 04:42:00PM +0100, Christoph Kukulies wrote: > Christoph Kukulies > k...@kukulies.org > > [...] > which leads to haproxy failing on startup: > Nov 8 16:38:28 mail haproxy[101582]: [ALERT](101582) : parsing > [/etc/haproxy/haproxy.cfg:26] : 'bind' : unsupported protocol family 2 for > address 'quic4@0.0.0.0:443' > So what can I do about it other than trusting upon the scripts and configs > to work? Or ask here for help? This particular error message "unsupported protocol family" was removed in haproxy 2.6. It looks like you did not run the correct binary, as this does not correspond to the version reported by -vv. Have you made sure to run the binary directly on the command line using an absolute path ? -- Amaury Denoyelle
Re: Stalled H3/QUIC Connections with Packet Loss
On Fri, Sep 22, 2023 at 03:30:58PM +0200, Luke Seelenbinder wrote: > If it's any help, here's `show quic full` for a stalled connection: > [...] Tristan was right: as we can see here, fd=-1, meaning that there is probably a permission issue with bind() on connection sockets. You should have a look at the new setcap directive to fix it. Thanks to Tristan's previous report, we know there is a real performance issue when using QUIC listener sockets. We have to investigate this. You have probably encountered an occurrence of it. In the meantime, it's important to ensure you run with connection-dedicated sockets instead. The socket fallback was initially implemented silently, as on some platforms it may not be supported at all. We plan to change this soon to log a message on the first permission error to improve the current situation. Hope this helps, -- Amaury Denoyelle
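A minimal sketch of the two directives referred to above (a deployment-specific illustration; the capability name assumes the usual privileged-port case, so adjust to your setup):

```
global
    # allow the worker, after privileges are dropped, to bind()
    # per-connection QUIC sockets on the privileged port
    setcap cap_net_bind_service
    # request one socket per connection (the default), rather than
    # relying on the silent fallback to the shared listener socket
    tune.quic.socket-owner connection
```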
Re: [PATCH] DOC: Attempt to fix dconv parsing error for tune.h2.fe.initial-window-size
On Tue, Jun 13, 2023 at 03:18:19PM +0200, Tim Düsterhus, WoltLab GmbH wrote: > Hi > please find the patch attached. > This email address is not subscribed to the list, please keep it in Cc > when replying. Thanks Tim, I applied all of your patches. On a side note, I noticed in the rendered doc output that keywords in the "See also" section are not highlighted with a link, maybe because quotes are missing around them. Regards, -- Amaury Denoyelle
Re: [PATCH] DOC: quic: fix misspelled tune.quic.socket-owner
On Tue, Jun 06, 2023 at 02:54:19PM +0200, Tim Düsterhus wrote: > Hi Artur, > On 6/6/23 14:42, Artur wrote: > > DOC: quic: fix misspelled tune.quic.socket-owner > > > Commit 511ddd5 introduced tune.quic.socket-owner parameter > > related to QUIC socket behaviour. > > However it was misspelled in configuration.txt in 'bind' section as > > tune.quic.conn-owner. > > > I'm not a committer, but a regular contributor. I had a look: The patch > looks pretty good, but the commit message is lacking a body, which is > required as per: > https://github.com/haproxy/haproxy/blob/a475448161b406b0b81f5b551336417b05426492/CONTRIBUTING#L562-L567 > As you already found the commit that introduced the issue, it would be a > good opportunity to add a reference to the message body, something like: > The typo was introduced in commit 511ddd5785266c149dfa593582512239480e1688 > ("MINOR: quic: define config option for socket per conn") and needs to be > backported together with that commit (2.8 and possible 2.7). > Best regards In fact, I already merged the patch. The commit message was already provided as the email body so I integrated it directly into the patch. As for backport information, most of the time we skip documentation patches when they are only for spelling fixes. However as this one is directly to correct a referenced keyword we may choose to backport it. -- Amaury Denoyelle
Re: [PATCH] DOC: quic: fix misspelled tune.quic.socket-owner
On Tue, Jun 06, 2023 at 02:42:37PM +0200, Artur wrote: > DOC: quic: fix misspelled tune.quic.socket-owner > Commit 511ddd5 introduced tune.quic.socket-owner parameter > related to QUIC socket behaviour. > However it was misspelled in configuration.txt in 'bind' section as > tune.quic.conn-owner. Merged, thank you very much ! Do not hesitate to give us feedback if you test QUIC support :) -- Amaury Denoyelle
Re: tune.quic.socket-owner misspelled in configuration.txt (bind section)
On Mon, Jun 05, 2023 at 07:21:37PM +0200, Artur wrote: > Hello, > In the following commit tune.quic.socket-owner parameter is introduced. > However, in configuration.txt, line 4629, it's misspelled as > tune.quic.conn-owner. > https://github.com/haproxy/haproxy/commit/511ddd5785266c149dfa593582512239480e1688 > I can fill a "bug" report on github if necessary. Indeed, good catch. If you have the time, can you provide us with a proper patch for this? There is some guidance in the CONTRIBUTING file. If not, I will submit the change myself. -- Amaury Denoyelle
[ANNOUNCE] haproxy-2.7.8
Hi, HAProxy 2.7.8 was released on 2023/05/02. It added 1 new commit after version 2.7.7. This version is released just a few days after the previous one with a single fix only. Its purpose is to solve an issue in the QUIC submodule. This bug has been present only since 2.7.7 and never affected the 2.8 development tree. It causes the haproxy process to crash on each new connection accepted by a QUIC listener. This means that all those who use QUIC with 2.7.7 should upgrade to 2.7.8. Otherwise, there is no need to upgrade to this new version. This issue was introduced along with the mechanism to balance QUIC connections across threads on accept. This feature relied on a commit which itself was not backported, thus causing the unexpected behavior. Despite our care, maintaining stable branches and backporting specific commits is still a tedious process. In most cases, our regression tests help us to detect the most glaring issues. However, for QUIC we are still at a loss here due to the young ecosystem and the lack of testing tools. We're still investigating how to improve this. If anyone is interested, do not hesitate to participate. Anyway, thanks to our community who spotted this issue and helped to quickly resolve it. 
# Please find the usual URLs below : Site index : https://www.haproxy.org/ Documentation: https://docs.haproxy.org/ Wiki : https://github.com/haproxy/wiki/wiki Discourse: https://discourse.haproxy.org/ Slack channel: https://slack.haproxy.org/ Issue tracker: https://github.com/haproxy/haproxy/issues Sources : https://www.haproxy.org/download/2.7/src/ Git repository : https://git.haproxy.org/git/haproxy-2.7.git/ Git Web browsing : https://git.haproxy.org/?p=haproxy-2.7.git Changelog: https://www.haproxy.org/download/2.7/src/CHANGELOG Dataplane API: https://github.com/haproxytech/dataplaneapi/releases/latest Pending bugs : https://www.haproxy.org/l/pending-bugs Reviewed bugs: https://www.haproxy.org/l/reviewed-bugs Code reports : https://www.haproxy.org/l/code-reports Latest builds: https://www.haproxy.org/l/dev-packages --- Complete changelog : Willy Tarreau (1): MINOR: listener: remove the now useless LI_F_QUIC_LISTENER flag -- Amaury Denoyelle
[ANNOUNCE] haproxy-2.6.12
Hi, HAProxy 2.6.12 was released on 2023/03/28. It added 19 new commits after version 2.6.11. Here is a detailed summary of the improvements: In 2.6.10, important fixes were introduced for concurrent FD access across threads, which could have caused a crash. However, this patch was incomplete for the kqueue and event ports pollers, used in BSD and SunOS respectively. This has been corrected by Willy for the current release. Christopher fixed an issue affecting the H1 multiplexer. If the response was fully transferred before the whole request was read, there was a risk that the channel would be left open without any further processing. In the end, this caused the stream to enter a spinning loop which triggered an assertion failure crash. A similar problem was also encountered with the stats applet when large GET HTTP requests were issued to it: the stream was left in a blocked state with memory resources consumed for nothing. A bunch of QUIC fixes were made. Two github issues were opened recently about an assertion failure in the sending code after several hours of running. A series of patches was deployed to solve them, notably with a fix to the connection-level flow-control calculation. Also, probing on PTO expiration has been corrected, which should improve transfers under packet loss conditions. Finally, if a connection is preemptively closed by haproxy for any reason, the CONNECTION_CLOSE error code should now be more precise. Willy has fixed several crashes and leaks related to potential memory allocation failures. The code is now more robust and should better resist memory pressure. Thanks to everyone who contributed to this release. 
# Please find the usual URLs below : Site index : https://www.haproxy.org/ Documentation: https://docs.haproxy.org/ Wiki : https://github.com/haproxy/wiki/wiki Discourse: https://discourse.haproxy.org/ Slack channel: https://slack.haproxy.org/ Issue tracker: https://github.com/haproxy/haproxy/issues Sources : https://www.haproxy.org/download/2.6/src/ Git repository : https://git.haproxy.org/git/haproxy-2.6.git/ Git Web browsing : https://git.haproxy.org/?p=haproxy-2.6.git Changelog: https://www.haproxy.org/download/2.6/src/CHANGELOG Dataplane API: https://github.com/haproxytech/dataplaneapi/releases/latest Pending bugs : https://www.haproxy.org/l/pending-bugs Reviewed bugs: https://www.haproxy.org/l/reviewed-bugs Code reports : https://www.haproxy.org/l/code-reports Latest builds: https://www.haproxy.org/l/dev-packages --- Complete changelog : Amaury Denoyelle (5): BUG/MINOR: quic: wake up MUX on probing only for 01RTT BUG/MINOR: trace: fix hardcoded level for TRACE_PRINTF BUG/MEDIUM: mux-quic: release data from conn flow-control on qcs reset BUG/MINOR: h3: properly handle incomplete remote uni stream type BUG/MINOR: mux-quic: prevent CC status to be erased by shutdown Aurelien DARRAGON (2): DOC: config: set-var() dconv rendering issues BUG/MINOR: applet/new: fix sedesc freeing logic Christopher Faulet (2): BUG/MEDIUM: stats: Consume the request except when parsing the POST payload BUG/MEDIUM: mux-h1: Wakeup H1C on shutw if there is no I/O subscription Frédéric Lécaille (1): BUG/MINOR: quic: Missing STREAM frame type updated Willy Tarreau (9): BUG/MAJOR: poller: drop FD's tgid when masks don't match OPTIM: mux-h1: limit first read size to avoid wrapping BUG/MEDIUM: stream: do not try to free a failed stream-conn BUG/MEDIUM: mux-h2: do not try to free an unallocated h2s->sd BUG/MEDIUM: mux-h2: erase h2c->wait_event.tasklet on error path BUG/MEDIUM: stconn: don't set the type before allocation succeeds BUG/MINOR: stconn: fix sedesc memory leak on stream allocation 
failure BUG/MEDIUM: mux-h1: properly destroy a partially allocated h1s BUG/MEDIUM: applet: only set appctx->sedesc on successful allocation --- -- Amaury Denoyelle
[ANNOUNCE] haproxy-2.7.6
Hi, HAProxy 2.7.6 was released on 2023/03/28. It added 39 new commits after version 2.7.5. Here is a short summary of the improvements: Aurélien has extended the internal listener API to better handle the resume operation. One noticeable effect is that listeners bound to an ABNS abstract namespace socket can now survive a reload without crashing haproxy. It also properly ensures that the 'no-quic' configuration keyword is respected on reload so that disabled QUIC listeners are not started. Christopher fixed an issue affecting the H1 multiplexer. If the response was fully transferred before the whole request was read, there was a risk that the channel would be left open without any further processing. In the end, this caused the stream to enter a spinning loop which triggered an assertion failure crash. A similar problem was also encountered with the stats applet when large GET HTTP requests were issued to it: the stream was left in a blocked state with memory resources consumed for nothing. As usual, several QUIC improvements were introduced. Two github issues were opened recently about an assertion failure in the sending code after several hours of running. A series of patches was deployed to solve them, notably with a fix to the connection-level flow-control calculation. Also, probing on PTO expiration has been corrected, which should improve transfers under packet loss conditions. Finally, if a connection is preemptively closed by haproxy for any reason, the CONNECTION_CLOSE error code should now be more precise. Willy has fixed several crashes and leaks related to potential memory allocation failures. The code is now more robust and should better resist memory pressure. Thanks to everyone who contributed to this release. 
# Please find the usual URLs below : Site index : https://www.haproxy.org/ Documentation: https://docs.haproxy.org/ Wiki : https://github.com/haproxy/wiki/wiki Discourse: https://discourse.haproxy.org/ Slack channel: https://slack.haproxy.org/ Issue tracker: https://github.com/haproxy/haproxy/issues Sources : https://www.haproxy.org/download/2.7/src/ Git repository : https://git.haproxy.org/git/haproxy-2.7.git/ Git Web browsing : https://git.haproxy.org/?p=haproxy-2.7.git Changelog: https://www.haproxy.org/download/2.7/src/CHANGELOG Dataplane API: https://github.com/haproxytech/dataplaneapi/releases/latest Pending bugs : https://www.haproxy.org/l/pending-bugs Reviewed bugs: https://www.haproxy.org/l/reviewed-bugs Code reports : https://www.haproxy.org/l/code-reports Latest builds: https://www.haproxy.org/l/dev-packages --- Complete changelog : Amaury Denoyelle (13): BUG/MINOR: quic: wake up MUX on probing only for 01RTT BUG/MINOR: quic: ignore congestion window on probing for MUX wakeup BUG/MINOR: trace: fix hardcoded level for TRACE_PRINTF BUG/MEDIUM: mux-quic: release data from conn flow-control on qcs reset MINOR: mux-quic: complete traces for qcs emission MINOR: mux-quic: adjust trace level for MAX_DATA/MAX_STREAM_DATA recv MINOR: mux-quic: add flow-control info to minimal trace level BUG/MINOR: h3: properly handle incomplete remote uni stream type BUG/MINOR: mux-quic: prevent CC status to be erased by shutdown MINOR: mux-quic: interrupt qcc_recv*() operations if CC scheduled MINOR: mux-quic: ensure CONNECTION_CLOSE is scheduled once per conn MINOR: mux-quic: close on qcs allocation failure MINOR: mux-quic: close on frame alloc failure Aurelien DARRAGON (14): MINOR: proto_uxst: add resume method MINOR: listener/api: add lli hint to listener functions MINOR: listener: add relax_listener() function MINOR: listener: workaround for closing a tiny race between resume_listener() and stopping MINOR: listener: make sure we don't pause/resume bypassed listeners 
BUG/MEDIUM: listener: fix pause_listener() suspend return value handling BUG/MINOR: listener: fix resume_listener() resume return value handling BUG/MEDIUM: resume from LI_ASSIGNED in default_resume_listener() MINOR: listener: pause_listener() becomes suspend_listener() BUG/MEDIUM: listener/proxy: fix listeners notify for proxy resume MEDIUM: proto_ux: properly suspend named UNIX listeners MINOR: proto_ux: ability to dump ABNS names in error messages DOC: config: set-var() dconv rendering issues BUG/MINOR: applet/new: fix sedesc freeing logic Christopher Faulet (2): BUG/MEDIUM: stats: Consume the request except when parsing the POST payload BUG/MEDIUM: mux-h1: Wakeup H1C on shutw if there is no I/O subscription Frédéric Lécaille (3): MINOR: quic: Stop stressing the acknowledgments process (RX ACK frames) BUG/MINOR: quic: Dysfunctional
Re: tcp mode: client port selection
On Fri, Mar 03, 2023 at 09:35:45AM +0100, Jack Bauer wrote: > Am Do., 2. März 2023 um 17:52 Uhr schrieb Amaury Denoyelle < > adenoye...@haproxy.com>: > > > > It seems you do not use 'option redispatch' in your configuration so a > > retry will never be conducted on another server. Therefore, your problem > > is probably not related to haproxy retries. > > > From the documentation ( > http://docs.haproxy.org/2.7/configuration.html#4-option%20redispatch) one > could or should conclude, that option redispatch is only working in HTTP > mode. I confirm that it also works for proxies in TCP mode and that the documentation is confusing. > Even if it is also working in TCP mode and we are not using it in the > configuration, haproxy makes connections with the same client ip port to > another target server. > Can anyone say sth. about client port allocation in haproxy? Is it done > manually in some cases? Or is that a task that is completely done by the OS? To my knowledge, haproxy does not explicitly select the port when connecting to a backend server unless a specific "source" statement is used, so this should be the responsibility of the OS. Have you checked that your ephemeral port range is big enough? $ sysctl net.ipv4.ip_local_port_range net.ipv4.ip_local_reserved_ports -- Amaury Denoyelle
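For completeness, the "source" statement mentioned above is the one case where haproxy itself picks the client-side address and port for outgoing connections; a hedged sketch (addresses and the port range are hypothetical):

```
backend be_servers
    # bind outgoing connections to this address and port range
    # instead of letting the OS pick an ephemeral port
    source 192.168.0.10:10240-20240
    server srv1 10.0.0.1:80
```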
Re: tcp mode: client port selection
On Thu, Mar 02, 2023 at 02:39:25PM +0100, Jack Bauer wrote: > Hi, > [...] > Therefore it might be necessary to understand how haproxy chooses the > client-ip-port when creating new tcp connections to backend servers. > Especially how haproxy behaves when a connection attempt fails and a retry > is triggered because this retry might change the backend server. > Will the retry use the same client port? If yes, is this intended? > It seems you do not use 'option redispatch' in your configuration so a retry will never be conducted on another server. Therefore, your problem is probably not related to haproxy retries. -- Amaury Denoyelle
Re: Miscellaneous Crashes on 2.7.1
On Sat, Jan 07, 2023 at 02:22:01PM +0100, Willy Tarreau wrote: > Hi Luke, > On Sat, Jan 07, 2023 at 01:44:30PM +0100, Luke Seelenbinder wrote: > > Hi list, > > > We've been running 2.7.1 on a subset of our edge servers with QUIC + > > > HTTP/3 > > enabled, and we're seeing routine, but infrequent (~daily), crashes (mix of > > SIGABRT / SIGSEGV). I have coredumps and there doesn't seem to be any common > > thread across crashes / machines, but it's possible I'm missing something. > > Two of the coredumps show the following backtrace: > > > Program terminated with signal SIGSEGV, Segmentation fault. > > #0 0x55b0fe319ce7 in qc_release_frm (qc=0x55b101236570, > > frm=0x7fd8201fbbf0 ) at src/quic_conn.c:1569 > > 1569 pn = f->pkt->pn_node.key; > > > Program terminated with signal SIGSEGV, Segmentation fault. > > #0 qc_release_frm (qc=0x5652aa588fc0, frm=0x5652aa2537d0) at > > src/quic_conn.c:1564 > > 1564 list_for_each_entry_safe(f, tmp, >reflist, ref) { > > > which seem similar enough to possibly share a common cause. The other > > crashes occur in quictls (sigabrt), htx.h (sigsegv), and ebtree.h (sigsegv). > > > > Are there known fixes from 2.8-dev or internal trackers that could be > > related? I can dig deeper, but for now I'll probably disable quic since that > > seems to be the most likely culprit. > I'm seeing the following patch for QUIC which was fixed right after > 2.7.1 was emitted and which suggest potential crashes: > 15337fd80 ("BUG/MEDIUM: mux-quic: fix double delete from qcc.opening_list") > So you might possibly be hitting that bug, indeed. If you're interested > in giving 2.8-dev1 a try, it would confirm whether you're facing this > exact issue. But at the moment we're not aware of any remaining crash- > inducing bugs in 2.8-dev, so if it would still fail for you it would > indicate a new unknown bug. Luke, the crashes you reported are quite identical to the ones I had before I introduced the fix. 
Indeed, you should try 2.8-dev1 if you can and let us know whether it solves the issue. Thanks for your help, -- Amaury Denoyelle
Re: Followup on openssl 3.0 note seen in another thread
On Thu, Dec 15, 2022 at 09:20:01AM +0100, Amaury Denoyelle wrote: > On Thu, Dec 15, 2022 at 09:03:18AM +0100, Amaury Denoyelle wrote: > > On Thu, Dec 15, 2022 at 08:58:16AM +0100, Amaury Denoyelle wrote: > > > On Wed, Dec 14, 2022 at 11:20:44PM -0700, Shawn Heisey wrote: > > > > On 12/14/22 21:23, Илья Шипицин wrote: > > > > > Can you try to bisect? > > > > I had made some incorrect assumptions about what's needed to use > > > > bisect. With a little bit of research I figured it out and it was a > > > > LOT easier than I had imagined. > > > > > I suspect that it won't help, browsers tend to remember things in > > > > > their own way > > > > One thing I have learned in my testing is that doing shift-reload on > > > > the page means it will never switch to h3. So I use shift-reload > > > > followed by a couple of regular reloads as a way of resetting what > > > > the browser remembers. That seems to work. > > > > The bisect process only took a few runs to find the problem commit: > > > > 3ca4223c5e1f18a19dc93b0b09ffdbd295554d46 is the first bad commit > > > > commit 3ca4223c5e1f18a19dc93b0b09ffdbd295554d46 > > > > Author: Amaury Denoyelle > > > > Date: Wed Dec 7 14:31:42 2022 +0100 > > > > BUG/MEDIUM: h3: reject request with invalid header name > > > > [...] > > > I seem to be able to reach your website with H3 currently. Did you > > > revert to an older version ? Regarding this commit, it rejects requests > > > with invalid headers (with uppercase or non-HTTP tokens in the field > > > name). Have you tried with several browsers and with command-line > > > clients ? > > > I will look on my side to see if I missed something. > > With a local instance of nextcloud I am able to reproduce a bug linked > > to this commit with caused the deactivation of H3. I'm investigating on > > it... > The issue seems to be triggered by request with a cookie header. Can you > please apply the following patch on top of the master branch and confirm > me if this resolves your issue ? 
Thanks. > [...] I'm definitely sure about the fix, so I merged my patch. If you can, please give the new master branch a try and tell me if your issue is resolved. Thank you for your help on this issue, I really appreciate it! -- Amaury Denoyelle
Re: Followup on openssl 3.0 note seen in another thread
On Thu, Dec 15, 2022 at 09:03:18AM +0100, Amaury Denoyelle wrote: > On Thu, Dec 15, 2022 at 08:58:16AM +0100, Amaury Denoyelle wrote: > > On Wed, Dec 14, 2022 at 11:20:44PM -0700, Shawn Heisey wrote: > > > On 12/14/22 21:23, Илья Шипицин wrote: > > > > Can you try to bisect? > > > I had made some incorrect assumptions about what's needed to use > > > bisect. With a little bit of research I figured it out and it was a > > > LOT easier than I had imagined. > > > > I suspect that it won't help, browsers tend to remember things in > > > > their own way > > > One thing I have learned in my testing is that doing shift-reload on > > > the page means it will never switch to h3. So I use shift-reload > > > followed by a couple of regular reloads as a way of resetting what > > > the browser remembers. That seems to work. > > > The bisect process only took a few runs to find the problem commit: > > > 3ca4223c5e1f18a19dc93b0b09ffdbd295554d46 is the first bad commit > > > commit 3ca4223c5e1f18a19dc93b0b09ffdbd295554d46 > > > Author: Amaury Denoyelle > > > Date: Wed Dec 7 14:31:42 2022 +0100 > > > BUG/MEDIUM: h3: reject request with invalid header name > > > [...] > > I seem to be able to reach your website with H3 currently. Did you > > revert to an older version ? Regarding this commit, it rejects requests > > with invalid headers (with uppercase or non-HTTP tokens in the field > > name). Have you tried with several browsers and with command-line > > clients ? > > I will look on my side to see if I missed something. > With a local instance of nextcloud I am able to reproduce a bug linked > to this commit with caused the deactivation of H3. I'm investigating on > it... The issue seems to be triggered by requests with a cookie header. Can you please apply the following patch on top of the master branch and confirm whether it resolves your issue? Thanks. 
-- Amaury Denoyelle >From 603a919c8b0cea75516571c27e427960e85fae72 Mon Sep 17 00:00:00 2001 From: Amaury Denoyelle Date: Thu, 15 Dec 2022 09:18:25 +0100 Subject: [PATCH] BUG/MEDIUM: h3: fix cookie header parsing --- src/h3.c | 1 + 1 file changed, 1 insertion(+) diff --git a/src/h3.c b/src/h3.c index d24b3de5f..10d19e2cd 100644 --- a/src/h3.c +++ b/src/h3.c @@ -544,6 +544,7 @@ static ssize_t h3_headers_to_htx(struct qcs *qcs, const struct buffer *buf, if (isteq(list[hdr_idx].n, ist("cookie"))) { http_cookie_register(list, hdr_idx, , _cookie); + ++hdr_idx; continue; } else if (isteq(list[hdr_idx].n, ist("content-length"))) { -- 2.39.0
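The one-line fix above increments hdr_idx before the continue in the cookie branch. As an illustration only (hypothetical names, not the actual h3.c logic), here is a Python sketch of a manually-advanced header loop where forgetting that increment would re-process the same cookie entry forever:

```python
def parse_headers(headers):
    """Walk a header list with a manual index, as a C while-loop might.
    The cookie branch must advance the index itself before `continue`;
    skipping that step (the bug fixed above) would loop on the same entry."""
    cookies, others = [], []
    hdr_idx = 0
    while hdr_idx < len(headers):
        name, value = headers[hdr_idx]
        if name == "cookie":
            cookies.append(value)
            hdr_idx += 1  # the increment the patch adds before `continue`
            continue
        others.append((name, value))
        hdr_idx += 1
    return cookies, others

result = parse_headers([("accept", "*/*"), ("cookie", "a=1"), ("host", "x")])
# → (['a=1'], [('accept', '*/*'), ('host', 'x')])
```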
Re: Followup on openssl 3.0 note seen in another thread
On Thu, Dec 15, 2022 at 08:58:16AM +0100, Amaury Denoyelle wrote: > On Wed, Dec 14, 2022 at 11:20:44PM -0700, Shawn Heisey wrote: > > On 12/14/22 21:23, Илья Шипицин wrote: > > > Can you try to bisect? > > I had made some incorrect assumptions about what's needed to use > > bisect. With a little bit of research I figured it out and it was a > > LOT easier than I had imagined. > > > I suspect that it won't help, browsers tend to remember things in > > > their own way > > One thing I have learned in my testing is that doing shift-reload on > > the page means it will never switch to h3. So I use shift-reload > > followed by a couple of regular reloads as a way of resetting what > > the browser remembers. That seems to work. > > The bisect process only took a few runs to find the problem commit: > > 3ca4223c5e1f18a19dc93b0b09ffdbd295554d46 is the first bad commit > > commit 3ca4223c5e1f18a19dc93b0b09ffdbd295554d46 > > Author: Amaury Denoyelle > > Date: Wed Dec 7 14:31:42 2022 +0100 > > BUG/MEDIUM: h3: reject request with invalid header name > > [...] > I seem to be able to reach your website with H3 currently. Did you > revert to an older version ? Regarding this commit, it rejects requests > with invalid headers (with uppercase or non-HTTP tokens in the field > name). Have you tried with several browsers and with command-line > clients ? > I will look on my side to see if I missed something. With a local instance of nextcloud I am able to reproduce a bug linked to this commit which caused the deactivation of H3. I'm investigating it... -- Amaury Denoyelle
Re: Followup on openssl 3.0 note seen in another thread
On Wed, Dec 14, 2022 at 11:20:44PM -0700, Shawn Heisey wrote: > On 12/14/22 21:23, Илья Шипицин wrote: > > Can you try to bisect? > I had made some incorrect assumptions about what's needed to use > bisect. With a little bit of research I figured it out and it was a > LOT easier than I had imagined. > > I suspect that it won't help, browsers tend to remember things in > > their own way > One thing I have learned in my testing is that doing shift-reload on > the page means it will never switch to h3. So I use shift-reload > followed by a couple of regular reloads as a way of resetting what > the browser remembers. That seems to work. > The bisect process only took a few runs to find the problem commit: > 3ca4223c5e1f18a19dc93b0b09ffdbd295554d46 is the first bad commit > commit 3ca4223c5e1f18a19dc93b0b09ffdbd295554d46 > Author: Amaury Denoyelle > Date: Wed Dec 7 14:31:42 2022 +0100 > BUG/MEDIUM: h3: reject request with invalid header name > [...] I seem to be able to reach your website with H3 currently. Did you revert to an older version ? Regarding this commit, it rejects requests with invalid headers (with uppercase or non-HTTP tokens in the field name). Have you tried with several browsers and with command-line clients ? I will look on my side to see if I missed something. -- Amaury Denoyelle
Re: [PATCH] fix spelling "choosen" --> "chosen"
On Sat, Nov 19, 2022 at 06:25:39PM +0500, Илья Шипицин wrote: > Hello, > can we settle it before 2.7 ? > пн, 7 нояб. 2022 г. в 11:50, Willy Tarreau : > > On Wed, Nov 02, 2022 at 10:43:49AM +0100, William Lallemand wrote: > > > > > - if (!tp->choosen) > > > > > + if (!tp->chosen) > > > > > return; > > > > > > > > > > - chunk_appendf(b, "\n\tversion_information:(choosen=0x%08x", > > tp->choosen); > > > > > + chunk_appendf(b, "\n\tversion_information:(chosen=0x%08x", > > tp->coosen); > > > > > > > > > I don't think it will even compile this way. > > > > I confirm it doesn't. I was about to fix it myself but I'd firt like > > to get a confirmation that it's OK to change this. > > > > Willy > > Hello, I'm testing the patch; there were issues where the renaming was done wrong: coosen instead of chosen : - chunk_appendf(b, "\n\tversion_information:(choosen=0x%08x", tp->choosen); + chunk_appendf(b, "\n\tversion_information:(chosen=0x%08x", tp->coosen); chooen instead of chosen : -/* Encode version information transport parameters with as choosen +/* Encode version information transport parameters with as chosen With this adjusted, it compiled. I'm definitely OK with spelling patches, but you should at least try to compile them before submitting because it's pretty easy to miss errors in this kind of fix. Anyway, thanks for your help. -- Amaury Denoyelle
stderr output variations after startup
Hi everyone, We have just discovered that there is a discrepancy in the stderr output produced by haproxy since version 2.5. This is caused by the command 'add server' and the introduction of the notion of a prefix in stderr output. When CLI commands are issued after haproxy startup, they may produce output on stderr. For example with 'enable server': $ echo "enable server be/srv1" | socat - /usr/local/run/haproxy.stats * "[WARNING] (4752) : Server be/srv1 is UP/READY (leaving forced maintenance)." We see that the format here is the following : "[loglevel] (pid) : <msg>" Note that this concerns output on stderr. On the CLI, the output is different, with the following message : "New server registered." With dynamic servers support, we have noted that it would be useful to print internal log messages both on stderr and on the CLI output. This is useful for example to notify about a malformed server keyword. With this, we also have updated stderr output to include a new prefix which specifies the origin of the message. When using 'add server' on the CLI, the prefix is set to "CLI". This has the side effect of cluttering the output of all following commands with the same "CLI" prefix. What all this means is that stderr output won't be the same before and after using 'add server'. $ echo "add server be/srv1 127.0.0.1:20080" | socat - /usr/local/run/haproxy.stats * "[NOTICE] (4752) : CLI : 'server be/srv1' : New server registered." $ echo "enable server be/srv1" | socat - /usr/local/run/haproxy.stats * "[WARNING] (4752) : CLI : Server be/srv1 is UP/READY (leaving forced maintenance)." So this means the stderr format has been changed to : "[loglevel] (pid) : <prefix> : <msg>" Note that the output prefix is thread-local. Depending on the thread which treated the command, the prefix might be set or not. So all this reveals that stderr output has not been predictable since 2.5. 
Before taking further action, we would like to hear our users' opinions on this, in particular on the following points: 1. Do you monitor haproxy stderr output with scripts which might be broken if we change the output format? 2. Do you find the notion of origin as a prefix on stderr output useful? For now, only "CLI" is used, but it could be extended easily, for example to differentiate between CLI and Lua interactions. 3. Should we aim to always print stderr logs on the CLI output as well? This could be useful to return more details on the cause of a success or error. Ideally, we should decide on this before the next 2.7 release. Thanks everyone for your feedback, -- Amaury Denoyelle
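For anyone scripting against these messages today, the two shapes quoted above can be matched with one pattern; a minimal Python sketch (the regex is an assumption based solely on the examples in this email, not on haproxy source):

```python
import re

# Matches both stderr shapes shown above:
#   "[LEVEL] (pid) : msg"           (before 'add server' is used)
#   "[LEVEL] (pid) : CLI : msg"     (after 'add server' sets the prefix)
LINE_RE = re.compile(
    r"\[(?P<level>[A-Z]+)\] \((?P<pid>\d+)\) : "
    r"(?:(?P<origin>CLI) : )?(?P<msg>.*)$"
)

def parse_stderr_line(line):
    """Return a dict with level/pid/origin/msg, or None if unmatched."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

old = parse_stderr_line(
    "[WARNING] (4752) : Server be/srv1 is UP/READY (leaving forced maintenance).")
new = parse_stderr_line(
    "[WARNING] (4752) : CLI : Server be/srv1 is UP/READY (leaving forced maintenance).")
# old["origin"] is None while new["origin"] == "CLI"; msg is identical in both
```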
Re: [ANNOUNCE] haproxy-2.7-dev3
On Tue, Aug 09, 2022 at 03:14:34PM -0600, Shawn Heisey wrote: > On 8/7/22 10:07, Willy Tarreau wrote: > > Given that QUIC users in 2.6 tend to be a bit forced to update to 2.7-dev > > to improve reporting, we're starting to think about updating the QUIC stack > > in 2.6 to match 2.7 once we're done with the pending issues. > I am not at all familiar with the code, and I probably never will be ... at > this time and for the foreseeable future, I have no need to be familiar with > code for TLS and low-level networking, and it would be a huge time > investment to become familiar with it. Perfectly happy to let others be the > experts. > Updating the whole QUIC stack in 2.6 to what's in 2.7-dev sounds to me like > a good idea. Some of my web pages are mostly unusable when doing http/3 > with 2.6 on Firefox. Chrome works a lot better, but I have seen some > problems there too. I switched to 2.7 from git about the time dev2 was > released. So far everything that I have tried with 2.7 works very well on > both browsers. Indeed, we tried to backport all QUIC bug fixes, but this is not enough for the moment, and we continue to improve features and refactor the code in 2.7-dev, so it is difficult to maintain a clean QUIC stack on 2.6. Hopefully, it should not be too hard to align the two versions once we have fixed some remaining issues. > It will be very interesting to see how things shake out when the openssl > team eventually gets around to tackling QUIC. I won't be able to follow a > lot of it, but I will take an occasional look. > I love working with haproxy. Can't thank Willy et al enough for all the > effort involved. For my own websites, I just use it as a reverse proxy > handling TLS, and some minor filtering/fixup, not a load balancer, but if I > ever get enough spare hardware for redundancy, that might change. At a > previous $DAYJOB I did set it up as a load balancer. I think I got up to > 1.5.18 by the time that job ended. > I like this little web page I made. 
> A tiny bit of php code. > https://http3test.elyograg.org/ Thank you very much for your kind words and your first reports on QUIC, which have greatly helped us. Let's hope we keep refining QUIC support even more in the near future. -- Amaury Denoyelle
[ANNOUNCE] haproxy-2.6.1
g: http://www.haproxy.org/download/2.6/src/CHANGELOG
Pending bugs : http://www.haproxy.org/l/pending-bugs
Reviewed bugs: http://www.haproxy.org/l/reviewed-bugs
Code reports : http://www.haproxy.org/l/code-reports
Latest builds: http://www.haproxy.org/l/dev-packages

---
Complete changelog :

Amaury Denoyelle (10):
    BUG/MINOR: h3: fix frame type definition
    BUG/MEDIUM: mux-quic: fix flow control connection Tx level
    BUG/MINOR: mux-quic: fix memleak on frames rejected by transport
    BUG/MEDIUM: mux-quic: fix segfault on flow-control frame cleanup
    BUG/MINOR: qpack: support header litteral name decoding
    MINOR: qpack: add comments and remove a useless trace
    BUG/MINOR: h3/qpack: deal with too many headers
    BUG/BUILD: h3: fix wrong label name
    BUG/MINOR: quic: purge conn Rx packet list on release
    BUG/MINOR: quic: free rejected Rx packets

Christopher Faulet (36):
    BUG/MINOR: ssl_ckch: Free error msg if commit changes on a cert entry fails
    BUG/MINOR: ssl_ckch: Free error msg if commit changes on a CA/CRL entry fails
    BUG/MEDIUM: ssl_ckch: Don't delete a cert entry if it is being modified
    BUG/MEDIUM: ssl_ckch: Don't delete CA/CRL entry if it is being modified
    BUG/MINOR: ssl_ckch: Don't duplicate path when replacing a cert entry
    BUG/MINOR: ssl_ckch: Don't duplicate path when replacing a CA/CRL entry
    BUG/MEDIUM: ssl_ckch: Rework 'commit ssl cert' to handle full buffer cases
    BUG/MEDIUM: ssl_ckch: Rework 'commit ssl ca-file' to handle full buffer cases
    BUG/MEDIUM: ssl/crt-list: Rework 'add ssl crt-list' to handle full buffer cases
    BUG/MEDIUM: httpclient: Don't remove HTX header blocks before duplicating them
    BUG/MEDIUM: httpclient: Rework CLI I/O handler to handle full buffer cases
    MEDIUM: http-ana: Always report rewrite failures as PRXCOND in logs
    MEDIUM: httpclient: Don't close CLI applet at the end of a response
    REGTESTS: abortonclose: Add a barrier to not mix up log messages
    REGTESTS: http_request_buffer: Increase client timeout to wait "slow" clients
    BUG/MINOR: ssl_ckch: Use right type for old entry in show_crlfile_ctx
    BUG/MINOR: ssl_ckch: Dump CRL transaction only once if show command yield
    BUG/MINOR: ssl_ckch: Dump CA transaction only once if show command yield
    BUG/MINOR: ssl_ckch: Dump cert transaction only once if show command yield
    BUG/MINOR: ssl_ckch: Init right field when parsing "commit ssl crl-file" cmd
    BUG/MINOR: ssl_ckch: Fix possible uninitialized value in show_cert I/O handler
    BUG/MINOR: ssl_ckch: Fix possible uninitialized value in show_cafile I/O handler
    BUG/MINOR: ssl_ckch: Fix possible uninitialized value in show_crlfile I/O handler
    REGTESTS: http_abortonclose: Extend supported versions
    REGTESTS: restrict_req_hdr_names: Extend supported versions
    BUG/MEDIUM: mailers: Set the object type for check attached to an email alert
    BUG/MINOR: trace: Test server existence for health-checks to get proxy
    BUG/MINOR: checks: Properly handle email alerts in trace messages
    REGTESTS: healthcheckmail: Update the test to be functionnal again
    REGTESTS: healthcheckmail: Relax health-check failure condition
    BUG/MINOR: tcp-rules: Make action call final on read error and delay expiration
    BUG/MEDIUM: stconn: Don't wakeup applet for send if it won't consume data
    BUG/MEDIUM: cli: Notify cli applet won't consume data during request processing
    BUG/MEDIUM: stream: Properly handle destructive client connection upgrades
    MINOR: stream: Rely on stconn flags to abort stream destructive upgrade
    BUG/MINOR: log: Properly test connection retries to fix dontlog-normal option

Frédéric Lécaille (5):
    BUG/MINOR: quic: Stop hardcoding Retry packet Version field
    BUG/MINOR: quic: Wrong PTO calculation
    BUG/MINOR: quic: Unexpected half open connection counter wrapping
    BUG/MINOR: quic_stats: Duplicate "quic_streams_data_blocked_bidi" field name
    BUG/MINOR: quic: Acknowledgement must be forced during handshake

William Lallemand (3):
    BUG/MEDIUM: ssl/cli: crash when crt inserted into a crt-list
    BUG/MEDIUM: mworker: use default maxconn in wait mode
    REGTESTS: ssl: add the same cert for client/server

Willy Tarreau (5):
    BUILD: compiler: implement unreachable for older compilers too
    BUG/MINOR: cli/stats: add missing trailing LF after JSON outputs
    BUG/MINOR: server: do not enable DNS resolution on disabled proxies
    BUG/MINOR: cli/stats: add missing trailing LF after "show info json"
    BUG/MINOR: task: fix thread assignment in tasklet_kill()
---

--
Amaury Denoyelle
Re: haproxy 2.6.0 and quic
On Fri, Jun 03, 2022 at 07:08:43AM -0600, Shawn Heisey wrote:
> [...]
> A word of warning that you would probably also get from the devs here:
> HTTP3/QUIC support is still new and not entirely working. I have it
> configured and it only works correctly for VERY simple websites. Any
> complex webapp I try it on will fail in some way, but if I disable HTTP3 and
> use HTTP2, it works.

Hi,

I just wanted to let you and other QUIC enthusiasts know that I found a defect in the haproxy QPACK implementation which prevented some headers from being decoded. The fix has been merged and it helped greatly in my tests with a nextcloud instance. Of course, I still have some other issues and unexpected behaviors, but if you have the time, do not hesitate to test the development version and give us your thoughts. As an alternative, we will probably soon emit a 2.6.1 with the first batch of QUIC issues resolved so far.

Regards,

--
Amaury Denoyelle
Re: [PATCH] CLEANUP: Re-apply xalloc_size.cocci (2)
On Wed, Jun 01, 2022 at 09:58:37PM +0200, Tim Duesterhus wrote:
> This reapplies the xalloc_size.cocci patch across the whole `src/` tree.
> see 16cc16dd8235e7eb6c38b7abd210bd1e1d96b1d9
> see 63ee0e4c01b94aee5fc6c6dd98cfc4480ae5ea46
> ---
>  src/ncbuf.c      | 2 +-
>  src/proto_quic.c | 2 +-
>  src/quic_sock.c  | 3 ++-
>  3 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/src/ncbuf.c b/src/ncbuf.c
> index 1944cfe34..adb32b57a 100644
> --- a/src/ncbuf.c
> +++ b/src/ncbuf.c
> @@ -726,7 +726,7 @@ struct rand_off {
>  static struct rand_off *ncb_generate_rand_off(const struct ncbuf *buf)
>  {
>  	struct rand_off *roff;
> -	roff = calloc(1, sizeof(struct rand_off));
> +	roff = calloc(1, sizeof(*roff));
>  	BUG_ON(!roff);
>
>  	roff->off = rand() % (ncb_size(buf));
>
> diff --git a/src/proto_quic.c b/src/proto_quic.c
> index 55aa4b50f..ab1bef18f 100644
> --- a/src/proto_quic.c
> +++ b/src/proto_quic.c
> @@ -703,7 +703,7 @@ static int quic_alloc_dghdlrs(void)
>  {
>  	int i;
>
> -	quic_dghdlrs = calloc(global.nbthread, sizeof(struct quic_dghdlr));
> +	quic_dghdlrs = calloc(global.nbthread, sizeof(*quic_dghdlrs));
>  	if (!quic_dghdlrs) {
>  		ha_alert("Failed to allocate the quic datagram handlers.\n");
>  		return 0;
>
> diff --git a/src/quic_sock.c b/src/quic_sock.c
> index 6207af703..a391006af 100644
> --- a/src/quic_sock.c
> +++ b/src/quic_sock.c
> @@ -466,7 +466,8 @@ static int quic_alloc_accept_queues(void)
>  {
>  	int i;
>
> -	quic_accept_queues = calloc(global.nbthread, sizeof(struct quic_accept_queue));
> +	quic_accept_queues = calloc(global.nbthread,
> +	                            sizeof(*quic_accept_queues));
>  	if (!quic_accept_queues) {
>  		ha_alert("Failed to allocate the quic accept queues.\n");
>  		return 0;
> --
> 2.36.1

Fine for me, thanks,

--
Amaury Denoyelle
Re: Thoughts on QUIC/HTTP3
On Sun, May 29, 2022 at 12:37:14PM -0600, Shawn Heisey wrote:
> On 4/29/2022 10:10 AM, Shawn Heisey wrote:
> > I did a build and install this morning, a bunch of quic-related changes
> > in that. Now everything seems to be working on my paste site. Large
> > pastes work, and I can reload the page a ton of times without it hanging
> > until browser restart.
> I have found that the roundcube package doesn't work completely right with
> quic/http3. It has proven to be an excellent way of revealing problems. The
> roundcube package is a php webmail app that utilizes a database and an imap
> server. It's using many of the features that are possible with modern
> browsers.
> When I first started this http3 journey, I couldn't even log in to roundcube
> because any POST request was failing ... but that problem got fixed. Now I
> find that when quickly jumping between folders in my big mailbox, the site
> stops responding. If I switch back to http/2, everything works. If it
> would help, I can give one of the devs full access to use my roundcube
> install and experiment with the config, see if maybe I am doing something
> wrong.

Hi Shawn,

Thanks for continuing your journey on HTTP/3 :)

On our side, we have finished our work on improving POST; it should be fairly complete now. We have also improved other aspects and completed the documentation for new users. We are now using nextcloud, another big webapp, through haproxy/QUIC as a test, and we have already fixed some bugs which may also help with your roundcube installation.

However, despite our efforts, we know there is still a grey area in our QUIC stack: we do not properly manage connection closing, and it's not completely clear how we should handle all the cases. Sadly, this requires some reflection time and it won't be solved before the imminent 2.6 release. For the moment, you can try to increase or decrease the haproxy client timeout on your side to see if this changes your situation.

Regards,

--
Amaury Denoyelle
Re: Thoughts on QUIC/HTTP3
Hi,

Sorry for not having answered your last mails; I was busy with some new improvements for QUIC on the haproxy side. Most notably, we improved transfer performance slightly by being able to use multiple buffers per stream. Please find my answers to your remarks below.

On Sat, Apr 23, 2022 at 11:13:01PM -0600, Shawn Heisey wrote:
> After seeing http/3 (orange lightning bolt with the HTTP Version Indicator
> extension) talking to a lot of websites, I had thought the standard was
> further along than it is. I see that the openssl team is discussing it, and
> plans to fully embrace it, but hasn't actually started putting QUIC code in
> openssl yet, and it may be quite some time before something usable shows up
> even in their master branch.

I would not put too much faith in it for the near future. The OpenSSL team seems to have put aside a simple QUIC API integration in favor of a brand new full QUIC stack, which should take quite some time. So for now, manually rebuilding your SSL library seems the only way to go.

> It's been fun fiddling with it using haproxy with quictls, and I hope I can
> provide useful information to stabilize the implementation.
> I'd like to say thank you to Willy and all the other people who make haproxy
> one of the best things in my problem-solving arsenal. It handles the
> internet side of all my web deployments. I haven't yet put other services
> behind it. At a previous $DAYJOB I had been testing FTP load-balancing,
> which I did get working, but didn't actually get to the deployment stage.
> At the moment, I am experiencing two problems with http3. The second
> problem might actually just be another instance of the first problem.
> First problem: If I do enough fiddling with an HTTP3 page, in either
> Firefox or Chrome, eventually that page will stop loading and the only way
> I've found to fix it is to completely close the browser and reopen it.
> Restarting haproxy or Apache doesn't seem to help.
We have already experienced this kind of problem, but we thought we had fixed it. It seems the connection closure is not always properly handled on the haproxy side, which leaves the client with no notification that it should open a new connection. It may help to increase the client timeout to be greater than the default QUIC idle timeout, which is hardcoded to 10s on the haproxy side. For haproxy.org, we use a value of 1m and it seems to be working. Please tell me whether this makes your problem disappear or not.

> Second problem: If I try pasting a REALLY large block of text into my paste
> website at the following URL while I have it configured to use HTTP/3, it
> won't work. The page never loads. I can't tell if this is a separate
> problem from the one above, or just another occurrence of it that triggers
> more readily because there is more data transferred. The reason I think it
> might be actually the first problem is that if I open another tab, I can't
> get to the website ... but if I close the browser and reopen it, then I can
> get to the website again.
> https://paste.elyograg.org/
> If I remove the paste website from the http3 ACL so it doesn't send the
> alt-svc header, then everything works once I can convince the browser to
> stop using HTTP/3.

We did not have enough time to work on POST, so there are still bugs in this part. I recently fixed two bugs which may be enough to fix your situation with the latest 2.6-dev7. However, I still have issues when I use large payloads.

Thanks for your kind compliment on haproxy reliability. We hope one day we will reach this level for QUIC, but for now this objective is still far off.

--
Amaury Denoyelle
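The timeout workaround described above can be expressed directly in the configuration. This fragment is only an illustration of "client timeout greater than the 10s QUIC idle timeout"; the section placement and the 1m value mirror what the mail reports for haproxy.org:

```
defaults
    mode http
    # Keep this above the QUIC idle timeout, which is hardcoded
    # to 10s on the haproxy side at the time of this thread.
    timeout client 1m
```

If the client timeout fires first, haproxy may drop the connection without the QUIC-level notification the browser expects, which matches the "page hangs until browser restart" symptom.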
Re: HTTP/3 -- POST requests not working
On Thu, Apr 14, 2022 at 07:29:20AM -0600, Shawn Heisey wrote:
> On 4/14/22 03:27, Amaury Denoyelle wrote:
> > So to summary, this option should be activated if you only have browsers
> > as client and the traffic is big enough to saturate haproxy queues.
> > I hope this will clarify your thoughts,
> Thanks for that detail. For these setups, I really doubt that there will
> ever be that many requests. Unless I get hit by a DDOS attack. :)
> I am having further problems with http/3. Randomly in the webmail it will
> get a 403 error when accessing a URL for the site. Checking the log
> history, looks like it's on POST requests, which were already problematic.
> I guess removing abortonclose doesn't completely fix POST requests.
> Apr 14 07:11:15 - haproxy[2357875] 199.192.164.74:54388
> [14/Apr/2022:07:11:15.227] web~ be-apache-81/bilbo 2/0/76/-1/85 403 1371 - -
> LHVN 1/1/0/0/0 0/0 {webmail.elyograg.org} "POST
> /mail/?_task=mail&_action=refresh HTTP/3.0"

Hum, this is strange. Do you have a way to reproduce it easily? Otherwise, please know that as QUIC is still experimental, we have not had the time to test various config options with it. Indeed, most of the time we use extra simple config files with only the timeout configuration provided. If you encounter a recurring bug, I advise you to switch to a simple config file and check whether the issue is still there. This will allow us to understand the nature of the problem more precisely.

> And something that has occasionally happened with anything I've enabled
> http/3 on ... firefox will get into a mode where accessing an http/3 enabled
> site will completely hang, and the only way to get that website working
> again is to close the browser, reopen it, and try again. I've also seen
> this in Chrome, so it doesn't seem to be browser-specific. I do most of my
> testing in Firefox, because on the work machine Chrome has a proxy, which
> won't do http/3.
Hum, we have already encountered this issue: it happened because we did not send a CONNECTION_CLOSE on connection closing. Most cases now seem to be fixed, but maybe there are still cases where the connection dies without notifying the client. Do you observe this frequently?

> So for now I've only enabled http3 on a select subset of websites. These
> problems make the webmail not work right. I hope any information I gather
> will speed up development on QUIC so it can move out of experimental status.
> Ah, git pull has revealed a lot of changes, some of which are on quic code.
> I will get that built and installed.

As I already said, QUIC in haproxy is still experimental and we expect a lot of issues to appear :) so please use it with care and always have a backup solution. We frequently commit changes related to QUIC, but the todo-list is still big and I do not think the latest changes will improve the issues you reported. Still, it is always useful to have users ready to test it and report bugs; in fact, we have already fixed some thanks to you. Just know that some bugs may take some time to be addressed on our side.

Thanks,

--
Amaury Denoyelle
Re: HTTP/3 -- POST requests not working
On Wed, Apr 13, 2022 at 08:16:06AM -0600, Shawn Heisey wrote:
> On 4/13/22 02:42, Amaury Denoyelle wrote:
> > Ok this seems related to 'option abortonclose'. Without this, I do not
> > have a 400 error. Can you confirm me this behavior on your side please ?
> If I remove that, it works. I can have my webmail served via http/3 and
> login still works, which it didn't before because all POST requests were
> getting a 400.
> What am I losing or gaining by removing abortonclose? I read the docs on
> it, and wasn't able to work out whether I want it or not. Will this failure
> when configuring abortonclose be considered a bug?

Currently, this option is buggy when used with a QUIC listener. This is indeed a bug and will be fixed in a future patch.

> The reasons for that option being in the config are lost to the antiquities
> of time. Most of my config is stolen from a setup I once built at an old
> ${DAYJOB} that was actually doing load balancing. For my personal setups, I
> mainly use haproxy for SSL offloading and making multiple private-side web
> servers available through a single public IP address.

By default, connections are kept open on the haproxy side even if the client has closed its end. This is required because it is impossible for haproxy to distinguish a full close from a close of the sending channel only. In the latter case, haproxy can still deliver the response and then fully close the connection. This conforms to the HTTP specification and is useful when dealing with client tools such as netcat which rely heavily on this behavior.

If all your clients are browsers and the traffic is high enough to fill the haproxy queue, it is useful to activate abortonclose: when a user hits the "STOP" button of the browser, the connection is immediately closed on haproxy, which frees some resources to handle other connections. To summarize, this option should be activated if you only have browsers as clients and the traffic is big enough to saturate the haproxy queues.
I hope this will clarify your thoughts,

--
Amaury Denoyelle
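As a minimal sketch of where the option discussed above lives (proxy and server names are placeholders), abortonclose is set per proxy or in the defaults section:

```
backend be-web
    # Abort the pending request as soon as the client closes,
    # freeing the connection slot instead of keeping it open
    # to deliver a response the client will never read.
    option abortonclose
    server web1 127.0.0.1:8080
```

Without this option, haproxy keeps the transaction alive on a half-close, which is the standard-conformant behavior described in the mail above.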
Re: HTTP/3 -- POST requests not working
On Wed, Apr 13, 2022 at 10:36:00AM +0200, Amaury Denoyelle wrote:
> On Tue, Apr 12, 2022 at 10:30:05AM -0600, Shawn Heisey wrote:
> > On 4/12/22 09:45, Amaury Denoyelle wrote:
> > > After much analysis of the code, it may be useful to have a run with the
> > > stream traces as well :
> > > $ trace stream sink buf0; trace stream level developer; trace stream
> > > verbosity clean; trace stream start now
> > All 3 traces enabled, this time it should only be for one connection, I
> > think the last one there were two:
> > https://paste.elyograg.org/view/a06556ef
> > > You can also provide us your haproxy config file, just remove any
> > > sensitive info if you have any.
> > https://paste.elyograg.org/view/3f884d8f
> > The acl lines with redacted info are not being matched for the connections
> > that don't work, so it should have no connection to this problem.
> I'm able to reproduce your issue with your config file. I can now
> investigate the problem. I will let you know about any update, thanks.

Ok, this seems related to 'option abortonclose'. Without it, I do not get a 400 error. Can you confirm this behavior on your side, please?

--
Amaury Denoyelle
Re: HTTP/3 -- POST requests not working
On Tue, Apr 12, 2022 at 10:30:05AM -0600, Shawn Heisey wrote: > On 4/12/22 09:45, Amaury Denoyelle wrote: > > After much analysis of the code, it may be useful to have a run with the > > stream traces as well : > > $ trace stream sink buf0; trace stream level developer; trace stream > > verbosity clean; trace stream start now > All 3 traces enabled, this time it should only be for one connection, I > think the last one there were two: > https://paste.elyograg.org/view/a06556ef > > You can also provide us your haproxy config file, just remove any > > sensitive info if you have any. > https://paste.elyograg.org/view/3f884d8f > The acl lines with redacted info are not being matched for the connections > that don't work, so it should have no connection to this problem. I'm able to reproduce your issue with your config file. I can now investigate the problem. I will let you know about any update, thanks. -- Amaury Denoyelle
Re: HTTP/3 -- POST requests not working
On Tue, Apr 12, 2022 at 05:26:41PM +0200, Amaury Denoyelle wrote: > On Tue, Apr 12, 2022 at 08:01:59AM -0600, Shawn Heisey wrote: > > On 4/12/22 02:22, Amaury Denoyelle wrote: > > > then you can display the traces with the following command : > > > $ show events buf0 > > > > For the h3 layer, the trace mechanism is not currently implemented. You > > > should instead recompile your haproxy binary with the DEBUG options : > > > -DDEBUG_H3 -DDEBUG_QPACK > > > and watch for the stderr output for your process. > > I got some traces. URL below for accessing it. There's a LOT of data. > > https://paste.elyograg.org/view/8678c875 > > I didn't do the H3 debug yet, if you still need it after looking at the > > traces, let me know. You'll also need to tell me how to make those debugs > > active when I build haproxy. > We analyzed the output of the traces but there is nothing of particular > interest here. What may be useful is to activate the H3 traces. > However, I notice that the -dev5 does not compile with H3 traces > activated. So first, you have to apply the attached patch before > recompiling. To do this, save the attachment and execute the following > command in your source folder : > $ patch -p1 < 0001-BUG-MINOR-h3-fix-build-with-DEBUG_H3.patch > Alternatively, you can also use the latest master with contains the > mentionned patch. However, I think it's safer to stay on the -dev5 tag > for the moment. > Once the patch is applied, recompile your haproxy binary with DEBUG_H3 > and DEBUG_QPACK flags. Here is a make example, complete it with your > usual set of flags : > $ make TARGET=linux-glibc USE_QUIC=1 DEBUG="-DDEBUG_H3 -DDEBUG_QPACK" ... > You can now restart your process and watch the stderr output and report > it to us. 
> Again, thank you very much for your help, Hi again, After much analysis of the code, it may be useful to have a run with the stream traces as well : $ trace stream sink buf0; trace stream level developer; trace stream verbosity clean; trace stream start now You can also provide us your haproxy config file, just remove any sensitive info if you have any. Thanks, -- Amaury Denoyelle
Re: HTTP/3 -- POST requests not working
On Tue, Apr 12, 2022 at 08:01:59AM -0600, Shawn Heisey wrote:
> On 4/12/22 02:22, Amaury Denoyelle wrote:
> > then you can display the traces with the following command :
> > $ show events buf0
> > For the h3 layer, the trace mechanism is not currently implemented. You
> > should instead recompile your haproxy binary with the DEBUG options :
> > -DDEBUG_H3 -DDEBUG_QPACK
> > and watch for the stderr output for your process.
> I got some traces. URL below for accessing it. There's a LOT of data.
> https://paste.elyograg.org/view/8678c875
> I didn't do the H3 debug yet, if you still need it after looking at the
> traces, let me know. You'll also need to tell me how to make those debugs
> active when I build haproxy.

We analyzed the output of the traces but there is nothing of particular interest here. What may be useful is to activate the H3 traces.

However, I noticed that -dev5 does not compile with H3 traces activated. So first, you have to apply the attached patch before recompiling. To do this, save the attachment and execute the following command in your source folder:
$ patch -p1 < 0001-BUG-MINOR-h3-fix-build-with-DEBUG_H3.patch
Alternatively, you can also use the latest master, which contains the mentioned patch. However, I think it's safer to stay on the -dev5 tag for the moment.

Once the patch is applied, recompile your haproxy binary with the DEBUG_H3 and DEBUG_QPACK flags. Here is a make example; complete it with your usual set of flags:
$ make TARGET=linux-glibc USE_QUIC=1 DEBUG="-DDEBUG_H3 -DDEBUG_QPACK" ...
You can now restart your process, watch the stderr output and report it to us.

Again, thank you very much for your help,

--
Amaury Denoyelle

From bb970422546c0db08b244d5e5a12a795a605dd64 Mon Sep 17 00:00:00 2001
From: Amaury Denoyelle
Date: Tue, 12 Apr 2022 16:40:52 +0200
Subject: [PATCH] BUG/MINOR: h3: fix build with DEBUG_H3

qcs by_id field has been replaced by a new field named "id". Adjust the
h3_debug_printf traces. This is the case since the introduction of the
qc_stream_desc type.
---
 src/h3.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/h3.c b/src/h3.c
index e39c54135..2d450d8b8 100644
--- a/src/h3.c
+++ b/src/h3.c
@@ -231,7 +231,7 @@ static int h3_decode_qcs(struct qcs *qcs, int fin, void *ctx)
 	struct buffer *rxbuf = &qcs->rx.buf;
 	int ret;

-	h3_debug_printf(stderr, "%s: STREAM ID: %llu\n", __func__, qcs->by_id.key);
+	h3_debug_printf(stderr, "%s: STREAM ID: %lu\n", __func__, qcs->id);

 	if (!b_data(rxbuf))
 		return 0;
@@ -333,7 +333,7 @@ static int h3_control_recv(struct h3_uqs *h3_uqs, void *ctx)
 	struct buffer *rxbuf = &h3_uqs->qcs->rx.buf;
 	struct h3 *h3 = ctx;

-	h3_debug_printf(stderr, "%s STREAM ID: %llu\n", __func__, h3_uqs->qcs->by_id.key);
+	h3_debug_printf(stderr, "%s STREAM ID: %lu\n", __func__, h3_uqs->qcs->id);

 	if (!b_data(rxbuf))
 		return 1;
--
2.35.1
Re: HTTP/3 -- POST requests not working
On Tue, Apr 12, 2022 at 08:14:25AM +0200, Willy Tarreau wrote:
> Hi Shawn,
> [...]
> On Mon, Apr 11, 2022 at 01:05:21PM -0600, Shawn Heisey wrote:
> > An astute observer will notice that the backend says 81 while the packet
> > capture shows port 82. This is because I changed the port number on the
> > server to point to nginx instead of apache, but did not rename the backend.
> > My use of nginx is temporary, only using it to eliminate Apache as the
> > problem. I know how to configure Apache; it would take me forever to fully
> > switch to nginx.
> I think that the QUIC team will soon ask you to enable traces on the QUIC
> layer, but I'll let them follow up ;-)

Hi Shawn,

First, thank you for your interest in QUIC in haproxy :) I have quickly tested POST on my side but did not see any issue. To help diagnose your problem, you can try to activate the traces on the stats socket.

For the transport traces:
$ trace quic sink buf0; trace quic level developer; trace quic verbosity clean; trace quic start now

And for the mux:
$ trace qmux sink buf0; trace qmux level developer; trace qmux verbosity minimal; trace qmux start now

Then you can display the traces with the following command:
$ show events buf0

For the h3 layer, the trace mechanism is not currently implemented. You should instead recompile your haproxy binary with the DEBUG options:
-DDEBUG_H3 -DDEBUG_QPACK
and watch the stderr output of your process.

Thanks again for your help,

Regards,

--
Amaury Denoyelle
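The trace commands above are typed on haproxy's CLI, reachable through the stats socket. One common way to reach it is socat; the socket path below is only an example and assumes a "stats socket" line has been declared in the global section with admin level:

```
# haproxy.cfg (global section), illustrative path:
#     stats socket /var/run/haproxy.sock mode 600 level admin

$ echo "trace quic sink buf0; trace quic level developer; \
trace quic verbosity clean; trace quic start now" | \
    socat stdio /var/run/haproxy.sock

$ echo "show events buf0" | socat stdio /var/run/haproxy.sock
```

Several CLI commands can be chained on one line with semicolons, as the trace examples in the mail already do.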
Re: [ANNOUNCE] haproxy-2.6-dev4
On Sun, Mar 27, 2022 at 12:09:22AM +0500, Илья Шипицин wrote:
> Sat, 26 Mar 2022 at 22:23, Ionel GARDAIS wrote:
> > Thanks Willy for these updates.
> > While skimming the result on the interop website, I was surprised that
> > haproxy is always more than 50% slower than its competitor.
> > Is it because you've enable lots of traces as part of your debugging
> > process for the runs ?
> looks like this dockerfile is used
> https://github.com/haproxytech/haproxy-qns/blob/master/Dockerfile

Hi,

Thanks for your interest in the haproxy QUIC implementation :) Indeed, the performance results reported by the interop test suite for haproxy look miserable. To be honest, I have not yet looked at how these tests are implemented: we left this part aside while we dealt with plenty of functional issues. We also enabled a lot of traces and debug options to be able to quickly understand bugs, so this may have an impact on the results of the performance tests.

For now, we are focused on deploying QUIC on haproxy.org to inspect its behavior with real-life browsers. When this is done, the next objective will be to try to improve the results of these performance tests.

--
Amaury Denoyelle
Re: [PATCH] BUG/MEDIUM: server: avoid changing healthcheck ctx with set server ssl
On Tue, Jan 11, 2022 at 06:55:10PM +0100, Christopher Faulet wrote: > Le 1/10/22 à 23:19, Willy Tarreau a écrit : > > At this point I'm starting to think that we should probably avoid as > > much as possible to use implicit settings for whatever is dynamic. > > Originally a lot of settings were implicit because we don't want to > > have huge config lines to enforce lots of obvious settings. On the CLI > > it's less of a problem and for example if "ssl" only deals with the > > traffic without manipulating the checks, I'm personally not shocked, > > because these are normally sent by bots and we don't care if they > > have to set a few more settings to avoid multiple implicit changes > > that may not always be desired. This is where the CLI (or any future > > API) might differ a bit from the config, and an agent writing some > > config might have to explicitly state certain things like "no-check-ssl" > > for example to stay safe and avoid such implicit dependencies. > > > I agree with Willy on this point. Especially because, depending the order of > commands, the result can be different because of implicit changes and it is > pretty hard to predict how it will behave in all cases. > For instance, to fix William's bug about "set server ssl" command, in a way > or another, we must stop to dynamically update the health-check if its port > or its address is explicitly specified. With this change, the result of > following set of commands will be different: > $ set server BK/SRV ssl on > $ set server BK/SRV check-port XXX > ==> SSL is enabled for the server and the health-check > $ set server BK/SRV check-port XXX > $ set server BK/SRV ssl on > ==> because the check's port was updated first, the SSL is only enabled for > the server. 
> > Note that such a brainstorming doesn't apply to your fix and should
> > not hold it from being merged in any way, I'm just speaking in the
> > general sense, as I don't feel comfortable with keep all these special
> > cases in a newly introduced API, that are only there for historical
> > reasons.
> Agree. But, if possible, a warning may be added in the documentation to warn
> about implicit changes.
> About the fix, I checked the code, and, at first glance, there is no reason
> to change "srv->check.use_ssl" value when the health-check uses the server
> settings. Thus, we may fix "set server ssl" command this way:
> diff --git a/src/check.c b/src/check.c
> index f0ae81504..8cf8a1c5b 100644
> --- a/src/check.c
> +++ b/src/check.c
> @@ -1565,10 +1565,8 @@ int init_srv_check(struct server *srv)
>  	 * default, unless one is specified.
>  	 */
>  	if (!srv->check.port && !is_addr(&srv->check.addr)) {
> -		if (!srv->check.use_ssl && srv->use_ssl != -1) {
> -			srv->check.use_ssl = srv->use_ssl;
> -			srv->check.xprt    = srv->xprt;
> -		}
> +		if (!srv->check.use_ssl && srv->use_ssl != -1)
> +			srv->check.xprt = srv->xprt;
>  		else if (srv->check.use_ssl == 1)
>  			srv->check.xprt = xprt_get(XPRT_SSL);
>  		srv->check.send_proxy |= (srv->pp_opts);
> diff --git a/src/server.c b/src/server.c
> index 2061153bc..a18836a71 100644
> --- a/src/server.c
> +++ b/src/server.c
> @@ -2113,10 +2113,11 @@ void srv_set_ssl(struct server *s, int use_ssl)
>  		return;
>  	s->use_ssl = use_ssl;
> -	if (s->use_ssl)
> -		s->xprt = xprt_get(XPRT_SSL);
> -	else
> -		s->xprt = s->check.xprt = s->agent.xprt = xprt_get(XPRT_RAW);
> +	s->xprt = (s->use_ssl == 1) ? xprt_get(XPRT_SSL) : xprt_get(XPRT_RAW);
> +
> +	if ((s->check.state & CHK_ST_CONFIGURED) && !s->check.use_ssl &&
> +	    !s->check.port && !is_addr(&s->check.addr))
> +		s->check.xprt = s->xprt;
>  }
>  #endif /* USE_OPENSSL */
> This may be done with the following 3 steps:
>  * First, stop to change "srv->check.use_ssl" value
>  * Then, stop to change the agent settings in srv_set_ssl() because there
>    is no ssl support for agent-check.
>  * Finally, fix the bug identified by William, adding the according
>    documentation.
> Note I don't know if all this stuff works properly with the server-state
> file...
> --
> Christopher Faulet

To add more context to this discussion, I looked at dynamic servers. Currently, the behavior is identical to the configuration one, as init_srv_check() is used in the "add server" CLI handler. This is really problematic because "no-check-ssl" is not available for dynamic servers, so if I understand correctly it's not possible to add a dynamic SSL server with checks on the same port without SSL on the checks. Christopher's patch for init_srv_check() is probably a good idea, but if it is not taken, it should probably at least be applied for servers with the SRV_F_DYNAMIC flag.

--
Amaury Denoyelle
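To illustrate the limitation described above (backend, server name and address are hypothetical, and the set of keywords accepted by "add server" depends on the haproxy version), a dynamic server added over the CLI has no way to opt its health check out of SSL:

```
$ echo "add server be1/srv1 192.0.2.10:443 ssl check" | \
    socat stdio /var/run/haproxy.sock
# With no "no-check-ssl" keyword available here, the check
# implicitly inherits the server's SSL setting via
# init_srv_check(), which is the behavior being discussed.
```

This is why the suggestion is to at least restrict the implicit check-SSL inheritance for servers carrying the SRV_F_DYNAMIC flag.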
Re: [PATCH] MEDIUM numa supports for FreeBSD
On Tue, Dec 14, 2021 at 02:12:28AM +, David CARLIER wrote: > ping :) > On Mon, 6 Dec 2021 at 11:07, David CARLIER wrote: > > > > Hi > > > > Here is a little patch for proper NUMA topology detection on FreeBSD. > > > > Thanks. > > > > Regards. I have merged your patch, thanks for your contribution. Note that I have split it in two to facilitate a possible revert or cherry-picking of the various numa_detect_topology() implementations. I also used the BUG_ON macro, which is the idiomatic way to assert in haproxy. Sorry for the delay, and thanks again, -- Amaury Denoyelle
Re: [EXTERNAL] Re: [PATCH] MEDIUM numa supports for FreeBSD
On Tue, Dec 14, 2021 at 05:16:15AM +0100, Willy TARREAU wrote: > On Tue, Dec 14, 2021 at 02:12:28AM +, David CARLIER wrote: > > ping :) > sorry for the delay David, we'll check today. > Willy I can handle this, as I have already implemented the Linux part. I'll look at it as soon as possible. -- Amaury Denoyelle
[ANNOUNCE] haproxy-2.0.26
Hi, HAProxy 2.0.26 was released on 2021/12/03. It added 68 new commits after version 2.0.25. This version contains a lot of bug fixes. One of the main areas to benefit from them is the muxers and streams infrastructure. Adjustments were made to prevent rare occurrences of blocked transfers, improper connection closing or premature aborts. Even though most users won't notice the difference, this is obviously a critical path of the haproxy architecture, which is why it's always important to upgrade to the latest stable in your branch. The resolvers are another section which received improvements. Race conditions were fixed and the code should be more reliable. Sadly, the current architecture has shown its limits and won't perform in the most optimal way. Users making heavy use of the resolvers should not hesitate to have a look at 2.5, which has breaking changes and big progress on performance. In the SSL area, resumption on the backend side was not functional with SNI on TLS1.3. This is now fixed. Also, the error reporting has been improved and will now return a proper description if a failure occurs with strict-sni. A very subtle bug was fixed in the Lua code for the sleep() function. Most of the time, it will run fine. However, due to an erroneous time comparison, there was a risk of freezing the entire haproxy process when using it. This is extremely rare though, as it can only occur while the time is wrapping, which happens for 1ms every 49 days. The "block" statement in the proxy configuration was broken since the previous release due to an incorrect refactoring and has now been restored. Note however that this keyword is deprecated and users are encouraged to use an alternative like the "http-request deny" rule. Some cleanups were made for samples. In some fetches such as strcmp or secure_memcmp, variables of an improper type could cause a crash. A check has now been added to properly handle this case. Thanks to everyone for this release. Enjoy ! 
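As a reminder for those migrating away from the deprecated keyword mentioned above, the replacement is a one-line change in the proxy section (the ACL name below is a made-up example):

```
frontend fe_main
    bind :80
    acl forbidden path_beg /admin
    # Deprecated form, restored by this release but scheduled for removal:
    #   block if forbidden
    # Preferred modern equivalent:
    http-request deny if forbidden
```

The "http-request deny" rule also allows choosing the status code and response body, which "block" never did.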
Please find the usual URLs below : Site index : http://www.haproxy.org/ Discourse: http://discourse.haproxy.org/ Slack channel: https://slack.haproxy.org/ Issue tracker: https://github.com/haproxy/haproxy/issues Wiki : https://github.com/haproxy/wiki/wiki Sources : http://www.haproxy.org/download/2.0/src/ Git repository : http://git.haproxy.org/git/haproxy-2.0.git/ Git Web browsing : http://git.haproxy.org/?p=haproxy-2.0.git Changelog: http://www.haproxy.org/download/2.0/src/CHANGELOG Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/ --- Complete changelog : Amaury Denoyelle (1): BUG/MINOR: server: allow 'enable health' only if check configured Christopher Faulet (31): BUG/MEDIUM: stream-int: Don't block SI on a channel policy if EOI is reached Revert "REGTESTS: mark http_abortonclose as broken" BUG/MEDIUM: mux-h1: Adjust conditions to ask more space in the channel buffer BUG/MEDIUM: stream-int: Notify stream that the mux wants more room to xfer data BUG/MEDIUM: stream: Stop waiting for more data if SI is blocked on RXBLK_ROOM BUG/MINOR: mux-h1/mux-fcgi: Sanitize TE header to only send "trailers" BUG/MINOR: tcp-rules: Stop content rules eval on read error and end-of-input BUG/MINOR: stream: Don't release a stream if FLT_END is still registered BUG/MEDIUM: http-ana: Reset channels analysers when returning an error BUG/MINOR: filters: Always set FLT_END analyser when CF_FLT_ANALYZE flag is set BUG/MINOR: filters: Set right FLT_END analyser depending on channel BUG/MEDIUM: filters: Fix a typo when a filter is attached blocking the release BUG/MEDIUM: http-ana: Clear request analyzers when applying redirect rule BUG/MEDIUM: mux_h2: Handle others remaining read0 cases on partial frames BUG/MEDIUM: stream: Keep FLT_END analyzers if a stream detects a channel error BUG/MINOR: mux-h1: Save shutdown mode if the shutdown is delayed BUG/MEDIUM: mux-h1: Perform a connection shutdown when the h1c is released BUG/MEDIUM: http-ana: Drain request data waiting the tarpit 
timeout expiration DOC: config: Fix alphabetical order of fc_* samples MINOR: stream: Improve dump of bogus streams BUG/MINOR: tcpcheck: Improve LDAP response parsing to fix LDAP check MINOR: htx: Add an HTX flag to know when a message is fragmented MINOR: htx: Add a function to know if the free space wraps BUG/MEDIUM: stream-int: Defrag HTX message in si_cs_recv() if necessary BUG/MEDIUM: mux-h1: Fix H1C_F_ST_SILENT_SHUT value DOC: config: Fix typo in ssl_fc_unique_id description BUG/MINOR: http-ana: Apply stop to the current section for http-response rules BUG/MEDIUM: conn-stream: Don't reset CS flags on close BUG/MINOR: mux-h2: Fix H2_CF_DEM_SHORT_READ value BUG/MINOR: sti
Re: [ANNOUNCE] haproxy-2.4.9
On Thu, Nov 25, 2021 at 11:42:01AM +0300, Dmitry Sivachenko wrote: > On 24 Nov 2021, at 12:57, Christopher Faulet wrote: > > > > Hi, > > > HAProxy 2.4.9 was released on 2021/11/23. It added 36 new commits > > after version 2.4.8. > > > Hello, > version 2.4.9 fails to build with OpenSSL turned off: > src/server.c:207:51: error: no member named 'ssl_ctx' in 'struct server' > if (srv->mux_proto || srv->use_ssl != 1 || !srv->ssl_ctx.alpn_str) { > ~~~ ^ > src/server.c:241:37: error: no member named 'ssl_ctx' in 'struct server' > const struct ist alpn = ist2(srv->ssl_ctx.alpn_str, > ~~~ ^ > src/server.c:242:37: error: no member named 'ssl_ctx' in 'struct server' > srv->ssl_ctx.alpn_len); > ~~~ ^ > Version 2.4.8 builds fine. > > Dmitry, the patches that Willy provided you should fix the issue. Now, do you need a 2.4.10 to be emitted early with them, or is it possible for you to keep the patches in your tree so we can have a more substantial list of changes for a new version ? -- Amaury Denoyelle
Re: [ANNOUNCE] haproxy-2.4.9
On Thu, Nov 25, 2021 at 11:42:01AM +0300, Dmitry Sivachenko wrote: > On 24 Nov 2021, at 12:57, Christopher Faulet wrote: > > > > Hi, > > > HAProxy 2.4.9 was released on 2021/11/23. It added 36 new commits > > after version 2.4.8. > > > Hello, > version 2.4.9 fails to build with OpenSSL turned off: > src/server.c:207:51: error: no member named 'ssl_ctx' in 'struct server' > if (srv->mux_proto || srv->use_ssl != 1 || !srv->ssl_ctx.alpn_str) { > ~~~ ^ > src/server.c:241:37: error: no member named 'ssl_ctx' in 'struct server' > const struct ist alpn = ist2(srv->ssl_ctx.alpn_str, > ~~~ ^ > src/server.c:242:37: error: no member named 'ssl_ctx' in 'struct server' > srv->ssl_ctx.alpn_len); > ~~~ ^ > Version 2.4.8 builds fine. > > Thanks for your report. One of my commits to properly handle websockets on the server side introduced this issue. I'm working on a fix. -- Amaury Denoyelle
Re: [PATCH 1/2] BUILD: SSL: add quictls build to scripts/build-ssl.sh
On Thu, Nov 18, 2021 at 06:27:56PM +0500, Ilya Shipitsin wrote: > script/build-ssl.sh is used mostly in CI, let us introduce QUIC > OpenSSL fork support > --- > scripts/build-ssl.sh | 23 +++ > 1 file changed, 23 insertions(+) > diff --git a/scripts/build-ssl.sh b/scripts/build-ssl.sh > index e1d89a0eb..d143cec55 100755 > --- a/scripts/build-ssl.sh > +++ b/scripts/build-ssl.sh > @@ -86,6 +86,17 @@ download_boringssl () { > fi > } > > +download_quictls () { > +if [ ! -d "download-cache/quictls" ]; then > +git clone --depth=1 https://github.com/quictls/openssl > download-cache/quictls > +else > + ( > +cd download-cache/quictls > +git pull > + ) > +fi > +} > + > if [ ! -z ${LIBRESSL_VERSION+x} ]; then > download_libressl > build_libressl > @@ -121,3 +132,15 @@ if [ ! -z ${BORINGSSL+x} ]; then > ) > fi > > +if [ ! -z ${QUICTLS+x} ]; then > +( > + > +download_quictls > +cd download-cache/quictls > + > +./config shared --prefix="${HOME}/opt" --openssldir="${HOME}/opt" > --libdir=lib -DPURIFY > +make -j$(nproc) build_sw > +make install_sw > + > +) > +fi > -- > 2.29.2.windows.2 > Thank you for your patches. However, we should probably fix the GCC error reproduced in your repository before merging them. This will avoid leaving a red test indicator indefinitely. -- Amaury Denoyelle
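A side note for readers on the `[ ! -z ${QUICTLS+x} ]` guard used in the patch above: the `${VAR+x}` expansion distinguishes an unset variable from one set to an empty string, so defining `QUICTLS=` (even empty) in the CI environment is enough to trigger the quictls build. A minimal illustration:

```shell
#!/bin/sh
# ${VAR+x} expands to "x" when VAR is set (even to ""), and to "" when unset.
unset QUICTLS
if [ -z "${QUICTLS+x}" ]; then
    echo "QUICTLS is unset"
fi

QUICTLS=""
if [ ! -z "${QUICTLS+x}" ]; then
    echo "QUICTLS is set (even though empty)"
fi
```

This is why the script keys on variable presence rather than on a particular value.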
Re: [EXTERNAL] [PATCH] BUILD/MINOR: cpuset Freebsd build fix
On Sat, Oct 30, 2021 at 12:25:13PM +0100, David CARLIER wrote: > Hi here a patch to address freebsd build failure. > Kind regards. Thanks, applied. -- Amaury Denoyelle
Re: [EXTERNAL] [PATCH] BUILD/MINOR: healthcheck netbsd build fix
On Wed, Oct 20, 2021 at 07:03:43PM +0100, David CARLIER wrote: > Hi > here a little patch for netbsd system. > Kind regards. Thanks, I'm currently trying to fix other issues with NetBSD9.2, so I'll merge your patch with my work. -- Amaury Denoyelle
Re: [EXTERNAL] testing websockets
On Wed, Oct 20, 2021 at 09:59:59AM +0500, Илья Шипицин wrote: > Hello, > I've found a way to test websockets automatically. > this approach is able to catch https://github.com/haproxy/haproxy/issues/737 > > the idea is to place haproxy between browser and kestrel in SignalR tests ( > https://github.com/dotnet/aspnetcore/tree/main/src/SignalR ) > test takes 3 hours on my low end laptop. > I collected all required parts in > https://github.com/chipitsine/haproxy-signalr-tests > also, I checked current maintained branches, no issues (last issue was > fixed in issue 737). Hi, I did not look precisely at your testsuite, but note that we now have some websocket-related VTC regtests. Of course, these only test the tunnel establishment without websocket data exchange, but it's still a good start. Also, this will soon be extended with a new test to better handle h2 websockets. Regards, -- Amaury Denoyelle
Re: haproxy hung with CPU usage at 100% Heeeelp, please!!!
Oriol Magrane wrote: > Amaury, > Your patch seems to have killed the bug: our newly recompiled haproxy is > running for more than 24h now without glitches (while the unpatched > versions never got past the 4th hour). > I'll let it run for 24 more hours at full load just in case, but things are > looking good and I don't expect to have to come back here crying. > So thank you very much for your help! Great news ! I will merge the fix soon in the dev branch and it should hit the next series of stable releases. Thank you again for your report and your feedback. -- Amaury Denoyelle
Re: haproxy hung with CPU usage at 100% Heeeelp, please!!!
0\000\000\000\000P\000\000\000\000\000\000\000\210\334\377\377\377\377\377\377\000\000\000\000\000\000\000\000\003\000\000\000\060", > '\000' , "[\000\000\000n\000\000\000|\000\000\000w", > '\000' , > "\360\006\236\f\335\177\000\000@\274\235\f\335\177\000\000 > \000\000" > pidfd = > ... so the server HA_SPIN_LOCK() at line 177 of queue.c seems to be the > culprit. > At this point, we don't know what we can do in order to get rid of this > problem. We tried with the new haproxy 2.x series and the issue is still > there; the only difference is the haproxy process doesn't hang, but it gets > restarted instead (although I don't know if the restart is systemd's > action). > Any idea about what we could do? > Thank you very much in advance! > Oriol > P.S. I don't attach logs or system config to keep this first post simple, > but if anyone wants to give them a look I'll be happy to share; so just > ask, please Thanks for your report. A deadlock may indeed arise when using 'set maxconn server' if the function process_srv_queue is called, which is not always the case. I had a quick look and the issue seems to be present in the latest version as well. I will continue to inspect the problem in detail. As for the restart in 2.x, it is probably due to the internal haproxy watchdog, which was added in the 2.0 version and is specifically designed to detect this type of issue. -- Amaury Denoyelle
[ANNOUNCE] haproxy-1.8.30
Hi, HAProxy 1.8.30 was released on 2021/04/12. It added 8 new commits after version 1.8.29. The main point of this release is the fix of the regression on the frequency counters introduced in the latest release. Thanks to user feedback, the bug was quickly spotted and the fix validated in higher versions before being backported to the 1.8 tree. Here is the list of the detailed changes. - As stated, the bug on the frequency counters is fixed. Since the previous release, the time period was not properly calculated, so the counters were not properly updated. Willy was able to fix the issue thanks to quick user reports. - The hdr_ip sample fetch is now stricter. It will reject a field if there is some garbage after a valid IPv4 address. This ensures that, for example, an invalid x-forwarded-for header field is not accepted, which makes it easier to detect a non-conformant HTTP proxy in a network. - The silent-drop action was not functional for IPv6 connections if the haproxy process is executed without admin capabilities. It now properly sets the IPv6 header field hop-limit to 1, as explained in the documentation. Thanks to everyone for this release. Enjoy ! 
Please find the usual URLs below : Site index : http://www.haproxy.org/ Discourse: http://discourse.haproxy.org/ Slack channel: https://slack.haproxy.org/ Issue tracker: https://github.com/haproxy/haproxy/issues Wiki : https://github.com/haproxy/wiki/wiki Sources : http://www.haproxy.org/download/1.8/src/ Git repository : http://git.haproxy.org/git/haproxy-1.8.git/ Git Web browsing : http://git.haproxy.org/?p=haproxy-1.8.git Changelog: http://www.haproxy.org/download/1.8/src/CHANGELOG Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/ Amaury Denoyelle --- Complete changelog : Willy Tarreau (8): MINOR: time: also provide a global, monotonic global_now_ms timer BUG/MEDIUM: freq_ctr/threads: use the global_now_ms variable BUG/MEDIUM: time: make sure to always initialize the global tick MINOR: tools: make url2ipv4 return the exact number of bytes parsed BUG/MINOR: http_fetch: make hdr_ip() reject trailing characters BUG/MINOR: tcp: fix silent-drop workaround for IPv6 BUILD: tcp: use IPPROTO_IPV6 instead of SOL_IPV6 on FreeBSD/MacOS BUG/MINOR: http_fetch: make hdr_ip() resistant to empty fields ---
[ANNOUNCE] haproxy-2.0.22
Hi, HAProxy 2.0.22 was released on 2021/04/12. It added 23 new commits after version 2.0.21. Most notably, this release fixes a regression affecting the frequency counters. Thanks to quick user feedback, the bug was quickly spotted and the fix validated in higher versions before being backported to the 2.0 tree. There were also important fixes in the DNS module due to recent changes. Here is the list of all the changes. - Since the previous release, the time period of frequency counters was not properly calculated, and so the counters were not correctly updated. Willy was able to fix the issue quickly thanks to quick user reports. - As mentioned, a bunch of fixes were made on the DNS resolvers. Servers managed through the SRV records' additional sections were not properly handled, causing servers in MAINT status to never be activated when becoming available. Also, the handling of resolution errors is now safer. - The hdr_ip sample fetch is now stricter. It will reject a field if there is some garbage after a valid IPv4 address. This ensures that, for example, an invalid x-forwarded-for header field is not accepted, which makes it easier to detect a non-conformant HTTP proxy in a network. - The silent-drop action was not functional for IPv6 connections if the haproxy process is executed without admin capabilities. It now properly sets the IPv6 header field hop-limit to 1, as explained in the documentation. - The Lua debugging has been slightly improved by Christopher with the implementation of an internal function to display the backtrace in case of failure. This allows outputting the backtrace even if a memory allocation failure is the cause of the bug. - The closing of an H1 connection is now idempotent. This prevents a rare occurrence of a crash when closing an already closed H1 connection. - On the HTML stats page, a DOWN backend in transition to the UP state was displayed with the wrong color, making it indistinguishable from backends going DOWN. 
- The unix-bind prefix was incorrectly prepended to the UNIX socket path of the master CLI socket. - A deadlock was fixed for processes built with DEBUG_UAF when using thread isolation. This option is normally only activated for debugging purposes to detect use-after-free problems. Thanks to everyone for this release. Enjoy ! Please find the usual URLs below : Site index : http://www.haproxy.org/ Discourse: http://discourse.haproxy.org/ Slack channel: https://slack.haproxy.org/ Issue tracker: https://github.com/haproxy/haproxy/issues Wiki : https://github.com/haproxy/wiki/wiki Sources : http://www.haproxy.org/download/2.0/src/ Git repository : http://git.haproxy.org/git/haproxy-2.0.git/ Git Web browsing : http://git.haproxy.org/?p=haproxy-2.0.git Changelog: http://www.haproxy.org/download/2.0/src/CHANGELOG Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/ Amaury Denoyelle --- Complete changelog : Baptiste Assmann (1): BUG/MAJOR: dns: disabled servers through SRV records never recover Christopher Faulet (10): MINOR: lua: Slightly improve function dumping the lua traceback BUG/MEDIUM: debug/lua: Use internal hlua function to dump the lua traceback BUG/MEDIUM: lua: Always init the lua stack before referencing the context BUG/MEDIUM: thread: Fix a deadlock if an isolated thread is marked as harmless BUG/MINOR: resolvers: Unlink DNS resolution to set RMAINT on SRV resolution MINOR: resolvers: Use a function to remove answers attached to a resolution MINOR: resolvers: Purge answer items when a SRV resolution triggers an error MINOR: resolvers: Add function to change the srv status based on SRV resolution MINOR: resolvers: Directly call srvrq_update_srv_state() when possible BUG/MEDIUM: resolvers: Don't release resolution from a requester callbacks Eric Salama (1): MINOR/BUG: mworker/cli: do not use the unix_bind prefix for the master CLI socket Florian Apolloner (1): BUG/MINOR: stats: Apply proper styles in HTML status page. 
Jerome Magnin (1): BUG/MAJOR: dns: fix null pointer dereference in snr_update_srv_status Willy Tarreau (9): MINOR: time: also provide a global, monotonic global_now_ms timer BUG/MEDIUM: freq_ctr/threads: use the global_now_ms variable BUG/MEDIUM: time: make sure to always initialize the global tick MINOR: tools: make url2ipv4 return the exact number of bytes parsed BUG/MINOR: http_fetch: make hdr_ip() reject trailing characters BUG/MEDIUM: mux-h1: make h1_shutw_conn() idempotent BUG/MINOR: tcp: fix silent-drop workaround for IPv6 BUILD: tcp: use IPPROTO_IPV6 instead of SOL_IPV6 on FreeBSD/MacOS BUG/MINOR: http_fetch: make hdr_ip() resistant to empty fields ---
Re: Request for new 1.8.x release due to freq counter bug
"Robin H. Johnson" wrote: > Hi, > Wondering if you could make a new 1.8.x release to get the fix for the > freq counter bug into more systems? > dde80e111 BUG/MEDIUM: time: make sure to always initialize the global tick > 491b86ed0 BUG/MEDIUM: freq_ctr/threads: use the global_now_ms variable > It's a problem for anybody using counters for ratelimiting purposes, who > can't yet upgrade to 2.x series due to other issues encountered there > (the 100% cpu bug even on the latest 2.3.9, that I have logs I'm hoping > to post when they are approved by $work) Hi, I will try to emit new releases next week for 2.0 and 1.8 which are impacted by the bug you mentionned. -- Amaury Denoyelle
[ANNOUNCE] haproxy-1.8.29
Hi, HAProxy 1.8.29 was released on 2021/03/19. It added 40 new commits after version 1.8.28. Once again, this is mainly a release with a collection of fixes which improve the overall quality and stability of haproxy. Here is a list of the most notable changes : - William D. and Christopher gave special care to the server-state file. Some bugs related to reloading servers from the state file were corrected. Most notably, the RMAINT dynamic state is now ignored if present in the state file. This prevents an error from being reported and the server from being left un-updated. Also, it is now possible in the configuration file to use the server-state directive without a filename. In this case, a file with the backend name will automatically be used as the state file. In fact, this feature was already documented but not functional. - The SPOE module received fixes. First, a crash caused by dangling references on released applets is now prevented. It could have happened on various unusual occasions, one of them being during a reload. Second, after receiving an incomplete frame, the SPOE stream processing was stuck. Now the stream is properly woken up to process the remaining data. Finally, some adjustments have been made to process streams more smoothly when using SPOE with multithreading, especially with a low maxconn on the server. - The DNS responses are matched with case-insensitive functions. This prevents resolution failures in the uncommon case of uppercase characters in the names of DNS queries. - It is now possible to use IPv6 connections for servers with the special address 0.0.0.0 to reuse the client source address as a transparent proxy. - A bug was fixed by Willy in the configuration parser. The queue and tarpit timeouts, if not defined, were taken from the last defaults section instead of the defaults section preceding the proxy definition. This might have resulted in incorrect timeout values. 
This bug is 13 years old so I think it deserves a special note here :) - The processing of the originalto option has been adjusted. Now the destination address is correctly tested against the except statement. Please be careful because the doc was also wrong on this point, mentioning the source instead of the destination address. It has been edited, so this feature is now consistent. - There was an issue with the set-dst tcp rule. The destination TCP port was not correctly set for an IPv4 address. However, its impact was limited on UNIX sockets. - Christopher wrote a fix for the filters. This ensures that the filters will be called on the response after the request, even if the analysis of the request is finished before the start of the response filtering. - The sample fetch functions are safer and properly handle an empty string buffer value. Before this, it caused a memory corruption problem by reading bytes outside of the sample value. - Data races could happen on hard-stop or shutdown-sessions. These instructions are now only executed when no other threads are running, to enforce thread safety. - Willy has fixed a data race on the frequency counters, which caused non-monotonic updates of them. On the same occasion, an optimization has been backported. The counters are now incremented atomically without a lock. This reduces the contention on them, which should improve performance as they are often used as proxy counters. - The sticky counter of the sessions was also subject to a non-thread-safe incrementation and could be slightly off over time. An atomic operation is now used to guarantee a correct value. - A possible crash was averted by William when manipulating listener counters with an applet as a client (peers, SPOE or Lua). In this case, the listener instance is null, thus it cannot be dereferenced. However, these instructions are limited to specific scopes, so most of the time it won't happen for an applet. Thanks to everyone for this release. 
Enjoy ! Please find the usual URLs below : Site index : http://www.haproxy.org/ Discourse: http://discourse.haproxy.org/ Slack channel: https://slack.haproxy.org/ Issue tracker: https://github.com/haproxy/haproxy/issues Wiki : https://github.com/haproxy/wiki/wiki Sources : http://www.haproxy.org/download/1.8/src/ Git repository : http://git.haproxy.org/git/haproxy-1.8.git/ Git Web browsing : http://git.haproxy.org/?p=haproxy-1.8.git Changelog: http://www.haproxy.org/download/1.8/src/CHANGELOG Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/ --- Complete changelog : Amaury Denoyelle (2): BUG/MINOR: config: fix leak on proxy.conn_src.bind_hdr_name CLEANUP: remove unused src/cfgparse-listen.c Bertrand Jacquin (1): BUILD/MINOR: lua: define _GNU_SOURCE for LLONG_MAX Christopher Faulet (20): BUG/MINOR: stick-table: Always call smp_fetch_src
Re: Proxy protocol unique id and idle connection
Tim Düsterhus wrote: > Amaury, > Am 21.01.21 um 16:40 schrieb (Amaury Denoyelle): > > I have a question for you on the case of the proxy protocol. One of > > these special parameters to identify a connection is the content of the > > proxy protocol block. However, this breaks the following reg-tests : > > - proxy_protocol_send_unique_id.vtc (from commit > > cf6e0c8a836da211946fa637020e776286093633) > > What happens in the reg-test with my modification is that on the second > > client request, a new proxy protocol packet for connection comparison is > > generated with the header of the second request as unique id. It is > > identified that the previous connection cannot be reused (different > > unique id in proxy protocol), and thus a new connection is established. > > However, on the reg-tests, it is expected that the first connection is > > reused and thus no proxy protocol resend. > > What is your opinion on this ? On my side, I think the behavior of my > > implementation is less surprising, but if it is a real use-case, I do > > not want to break it without knowing it. > Sending unique IDs via PROXY v2 was implemented for TCP proxying. The > documentation already states that: > It can > lead to unexpected results in "mode http", because the > generated unique ID is also used for the first HTTP request > within a Keep-Alive connection. > So: > 1. No one should send unique IDs via PROXY v2 for HTTP traffic. There's > headers for that. > 2. It didn't make sense to actually prevent users from doing that. > 3. Preventing sharing for such connections does not actually *break* > anything, instead it just reduces performance. > 4. With the reduced performance we are back to (1): You really shouldn't > do this. > 5. The test was just to make sure it behaves somewhat sanely (i.e. does > not send entirely wrong information) for HTTP traffic. My intention was > not to test that Keep-Alive is possible. 
> So: Please go ahead with your modification, change the test to make sure > keep-alive does not happen and adjust the documentation as necessary. Hi Tim, Thanks for your quick answer. I will continue my work and adjust the test and the doc. -- Amaury Denoyelle
Proxy protocol unique id and idle connection
Hi Tim, I'm currently doing a major rework of the idle connection management on the server side of haproxy. The goal is to be able to reuse connections that are private today, such as the ones with SNI or proxy protocol. Every server connection will be differentiated by a set of parameters, so that it can be reused only if the new request shares the same parameters. I have a question for you on the case of the proxy protocol. One of these special parameters to identify a connection is the content of the proxy protocol block. However, this breaks the following reg-test : - proxy_protocol_send_unique_id.vtc (from commit cf6e0c8a836da211946fa637020e776286093633) What happens in the reg-test with my modification is that on the second client request, a new proxy protocol packet for connection comparison is generated with the header of the second request as unique id. It is determined that the previous connection cannot be reused (different unique id in the proxy protocol), and thus a new connection is established. However, in the reg-test, it is expected that the first connection is reused and thus that no proxy protocol is resent. What is your opinion on this ? On my side, I think the behavior of my implementation is less surprising, but if this is a real use-case, I do not want to break it without knowing it. Let me know if I'm not clear; I'm putting the list in copy in case someone has an opinion on the subject. Thank you for your help, -- Amaury Denoyelle
[ANNOUNCE] haproxy-1.8.28
Hi, HAProxy 1.8.28 was released on 2021/01/13. It added 33 new commits after version 1.8.27. This new release is relatively light and contains only a small set of fixes. Of particular interest are the following ones: - A race condition affecting the leastconn load-balancer algorithm has been fixed by Christopher. A crash could occur due to a division by zero with the weight parameter. Thanks to Peter Statham for his help on the issue. - The parser for haproxy command-line arguments was behaving incorrectly when using an argument value starting with a dash. On reload, the process was restarted with a wrong set of arguments. In fact, this should have been fixed in the earlier version but the backported commit was incomplete in the 1.8 branch. - Olivier has fixed a problem affecting users of ARM-based Macs, whose assembler takes the semi-colon as a comment start instead of an instruction delimiter, causing the CAS to fail and loop forever. - Maciej has fixed the sample fetches to retrieve message cookies when called without a cookie name. This case was never properly handled, contrary to what the doc says. - A set of minor patches from Thierry were applied to the Lua code. In particular, new warnings will be reported when trying to register an action, converter, sample fetch, cli or applet with a name already in use. - A small fix was written by Christopher on SPOE. The connection is no longer closed when receiving an ACK even if no frame is awaiting an acknowledgment. - The time parser is now more restrictive on invalid values. Please update your configuration if you spot new warnings about this. - The extra cookie attributes defined in the default section were not taken into account before this release. Now they are properly applied on each proxy instance. - A crash could happen on startup when a disabled backend with specific server parameters was used in the configuration. 
And as usual there is a bunch of cleanup/typo and doc commits, not visible at first sight, but they are always welcome as they improve the overall quality of the haproxy software. Thanks to everyone for this release. Enjoy ! Please find the usual URLs below : Site index : http://www.haproxy.org/ Discourse: http://discourse.haproxy.org/ Slack channel: https://slack.haproxy.org/ Issue tracker: https://github.com/haproxy/haproxy/issues Wiki : https://github.com/haproxy/wiki/wiki Sources : http://www.haproxy.org/download/1.8/src/ Git repository : http://git.haproxy.org/git/haproxy-1.8.git/ Git Web browsing : http://git.haproxy.org/?p=haproxy-1.8.git Changelog: http://www.haproxy.org/download/1.8/src/CHANGELOG Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/ Amaury Denoyelle --- Complete changelog : Amaury Denoyelle (2): BUG/MINOR: config: copy extra cookie attributes from dfl proxy BUG/MINOR: srv: do not init address if backend is disabled Christian Ruppert (1): BUILD: hpack: hpack-tbl-t.h uses VAR_ARRAY but does not include compiler.h Christopher Faulet (7): BUG/MINOR: http-fetch: Fix calls w/o parentheses of the cookie sample fetches MINOR: spoe: Don't close connection in sync mode on processing timeout DOC: config: Move req.hdrs and req.hdrs_bin in L7 samples fetches section BUG/MINOR: tools: make parse_time_err() more strict on the timer validity BUG/MINOR: tools: Reject size format not starting by a digit BUG/MEDIUM: lb-leastconn: Reposition a server using the right eweight CLEANUP: lua: Remove declaration of an inexistant function David Carlier (1): DOC: email change of the DeviceAtlas maintainer Maciej Zdeb (1): BUG/MINOR: http-fetch: Extract cookie value even when no cookie name Olivier Houchard (1): MINOR: atomic: don't use ; to separate instruction on aarch64. 
Phil Scherer (1):
      DOC/MINOR: Fix formatting in Management Guide

Thayne McCombs (2):
      DOC: fix some spelling issues over multiple files
      SCRIPTS: announce-release: fix typo in help message

Thierry Fournier (7):
      BUG/MINOR: lua: lua-load doesn't check its parameters
      BUG/MINOR: lua: Post init register function are not executed beyond the first one
      BUG/MINOR: lua: Some lua init operation are processed unsafe
      MINOR: actions: Export actions lookup functions
      MINOR: actions: add a function returning a service pointer from its name
      MINOR: cli: add a function to look up a CLI service description
      BUG/MINOR: lua: warn when registering action, conv, sf, cli or applet multiple times

Tim Duesterhus (1):
      BUG/MINOR: cfgparse: Fail if the strdup() for `rule->be.name` for `use_backend` fails

William Lallemand (1):
      BUG/MEDIUM: mworker: fix again copy_argv()

Willy Tarreau (8):
      CLEANUP: stream: remove an obsolete debugging test
      BUILD: Makefile: have "make clean" destroy .o/.a/.s in co
Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).
Mark Brookes wrote: > Sorry to drag up an old thread. But do we have an eta for a new > release of version 1.8 that contains the fix? I noticed that the 2.x > versions have been updated, and wanted to make sure that 1.8 has not > been left out by mistake. I have just updated the backports for haproxy 1.8 and will try to make a release with your fix soon. -- Amaury Denoyelle
Re: [PR] Fixed null pointer dereference in srv_cleanup_connections()
PR Bot wrote: > Dear list! > Author: Peter Skarpetis > Number of patches: 1 > This is an automated relay of the Github pull request: > Fixed null pointer dereference in srv_cleanup_connections() > Patch title(s): > Fixed null pointer dereference in srv_cleanup_connections() > Link: > https://github.com/haproxy/haproxy/pull/1031 > Edit locally: > wget https://github.com/haproxy/haproxy/pull/1031.patch && vi 1031.patch > Apply locally: > curl https://github.com/haproxy/haproxy/pull/1031.patch | git am - > Description: > haproxy_srv_cleanup_connections_crash.cfg causes a null pointer > dereference in srv_cleanup_connections. The configuration file works in > the 2.0.19 branch but crashes in all subsequent versions including the > dev branch. I did not track down the cause, I just added the null > pointer check to stop the crashing. > The crash can be reproduced > with the following command: > ./haproxy -c -f > haproxy_srv_cleanup_connections_crash.cfg > haproxy_srv_cleanup_connections_crash.cfg can be grabbed from the > gist below: > https://gist.github.com/peterska/769f41562f6b045df59df2294b2c20f0#file-haproxy_srv_cleanup_connections_crash-cfg > The configuration file has been edited enough to cause the crash while > removing all references to certificates and CA authorities. It is not > a production config file. > Instructions: > This github pull request will be closed automatically; patch should be > reviewed on the haproxy mailing list (haproxy@formilux.org). Everyone is > invited to comment, even the patch's author. Please keep the author and > list CCed in replies. Please note that in absence of any response this > pull request will be lost. I can reproduce your issue on haproxy master. However, I am not sure why the idle_conns list is NULL in this case, so I will investigate it before validating your patch. -- Amaury Denoyelle
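The patch under discussion boils down to guarding against a NULL list head before walking it. A minimal sketch of that pattern follows; the struct layout and all names here are made up for illustration, not HAProxy's real idle-connection structures:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative singly-linked idle-connection list; NOT HAProxy's
 * actual types, just a sketch of the guard the patch adds. */
struct conn {
    struct conn *next;
    int fd;
};

/* Walk the list and "clean up" (here: just count) every entry.
 * A NULL head, e.g. a list that was never allocated for this server,
 * is treated as empty instead of being dereferenced, which is the
 * essence of the proposed fix. */
static int cleanup_connections(struct conn *head)
{
    int cleaned = 0;

    if (!head)  /* the NULL-pointer check added by the patch */
        return 0;

    for (struct conn *c = head; c; c = c->next)
        cleaned++;
    return cleaned;
}

/* Two demo entries so the function can be exercised. */
static struct conn demo_tail = { NULL, 4 };
static struct conn demo_head = { &demo_tail, 3 };
```

Such a guard papers over the crash; as noted in the reply, the real question is why the list head is NULL in the first place.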
[ANNOUNCE] haproxy-1.8.27
Hi,

HAProxy 1.8.27 was released on 2020/11/06. It added 44 new commits after version 1.8.26. All 1.8 users are encouraged to upgrade as it contains several bug fixes.

This release contains some fixes also present in higher versions. Most notably a fix in the h2 multiplexer, a thread-safety bug in the load-balancer algorithms, a design issue in SPOE and the skipping of disabled proxies for filters. It is now also possible to update the server-state file without crashing haproxy. You can read the detailed reports for them in the 2.2.5 announcement.

In addition, the following changes have been made :

The h2 multiplexer is more robust thanks to Christopher and Willy. First, it is now able to parse incomplete chunk formatting. An issue had also been raised about a certain combination of frame type and flags which haproxy wrongly interpreted as invalid; this is now fixed.

Some improvements to SSL have been made by William. Notably, a better algorithm to choose a certificate when using wildcards, with respect to the supported encryption algorithm.

The lua engine now prevents loading a map at runtime, which should never have been permitted. There are also additional checks on arguments when doing IP address manipulation.

A nasty bug has been fixed by Willy, triggered when using multiple processes with expose-fd. In short, the "disable frontend" command had the side effect of pausing the socket file descriptors of other processes. Now this case is properly handled and the other listeners are not impacted.

There is also a list of smaller fixes. Again, look at the 2.2.5 announcement that summarizes them for more info.

Thanks to everyone for this release. Enjoy !
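For the wildcard certificate selection mentioned above, the matching rule itself can be sketched as follows. This is only an illustration of one-label wildcard matching (the name wildcard_match is invented); HAProxy's actual selection walks its SNI lookup trees and additionally considers the supported encryption algorithms:

```c
#include <assert.h>
#include <string.h>
#include <strings.h>

/* Return 1 if 'pattern' matches 'host'. A leading "*." covers exactly
 * one DNS label, so "*.example.com" matches "www.example.com" but not
 * "example.com" or "a.b.example.com". Illustrative sketch only. */
static int wildcard_match(const char *pattern, const char *host)
{
    const char *dot;

    if (strncmp(pattern, "*.", 2) != 0)
        return strcasecmp(pattern, host) == 0;  /* exact, case-insensitive */

    dot = strchr(host, '.');        /* end of the host's first label */
    if (!dot || dot == host)
        return 0;                   /* no label, or empty label */
    return strcasecmp(pattern + 1, dot) == 0;  /* compare ".example.com" */
}
```

The subtlety the fix addresses is that several entries (wildcard and exact) may match the same SNI, and the lookup must consider all of them before settling on a certificate.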
Please find the usual URLs below :
   Site index       : http://www.haproxy.org/
   Discourse        : http://discourse.haproxy.org/
   Slack channel    : https://slack.haproxy.org/
   Issue tracker    : https://github.com/haproxy/haproxy/issues
   Wiki             : https://github.com/haproxy/wiki/wiki
   Sources          : http://www.haproxy.org/download/1.8/src/
   Git repository   : http://git.haproxy.org/git/haproxy-1.8.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-1.8.git
   Changelog        : http://www.haproxy.org/download/1.8/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

---
Complete changelog :
Amaury Denoyelle (6):
      BUG/MINOR: config: Fix memory leak on config parse listen
      MINOR: counters: fix a typo in comment
      BUG/MINOR: stats: fix validity of the json schema
      BUG/MINOR: server: fix srv downtime calcul on starting
      BUG/MINOR: server: fix down_time report for stats
      BUG/MINOR: lua: initialize sample before using it

Christopher Faulet (13):
      BUG/MEDIUM: mux-h2: Don't fail if nothing is parsed for a legacy chunk response
      BUG/MEDIUM: map/lua: Return an error if a map is loaded during runtime
      BUG/MINOR: lua: Check argument type to convert it to IPv4/IPv6 arg validation
      BUG/MINOR: lua: Check argument type to convert it to IP mask in arg validation
      BUG/MEDIUM: pattern: Renew the pattern expression revision when it is pruned
      MINOR: hlua: Display debug messages on stderr only in debug mode
      BUG/MEDIUM: spoe: Unset variable instead of set it if no data provided
      BUG/MEDIUM: lb: Always lock the server when calling server_{take,drop}_conn
      BUG/MAJOR: mux-h2: Don't try to send data if we know it is no longer possible
      BUG/MEDIUM: filters: Don't try to init filters for disabled proxies
      BUG/MINOR: server: Set server without addr but with dns in RMAINT on startup
      MINOR: server: Copy configuration file and line for server templates
      BUG/MINOR: filters: Skip disabled proxies during startup only

Dragan Dosen (1):
      BUG/MEDIUM: pattern: fix memory leak in regex pattern functions

Lukas Tribus (1):
      BUG/MINOR: dns: ignore trailing dot

Remi Tricot-Le Breton (1):
      BUG/MINOR: cache: Inverted variables in http_calc_maxage function

Tim Duesterhus (2):
      MINOR: Commit .gitattributes
      CLEANUP: Update .gitignore

William Dauchy (1):
      DOC: agent-check: fix typo in "fail" word expected reply

William Lallemand (5):
      BUG/MINOR: startup: haproxy -s cause 100% cpu
      BUG/MEDIUM: ssl: check OCSP calloc in ssl_sock_load_ocsp()
      BUG/MEDIUM: ssl: does not look for all SNIs before chosing a certificate
      BUG/MINOR: ssl: verifyhost is case sensitive
      DOC: ssl: crt-list negative filters are only a hint

Willy Tarreau (14):
      BUG/MINOR: stats: use strncmp() instead of memcmp() on health states
      BUG/MINOR: reload: do not fail when no socket is sent
      BUG/MINOR: threads: work around a libgcc_s issue with chrooting
      BUILD: thread: limit the libgcc_s workaround to glibc only
      BUILD: threads: better workaround for late loading of libgcc_s
      BUG/MEDIUM: h2: report frame bits only for handled types
      BUG/MEDIUM: listeners: do not pause foreign listeners
Re: Easy but useful feature missing, anyone interested ?
Willy Tarreau wrote: > Hi all, > while revisiting pending issues, I've come across this one, about > the impossibility for an environment variable to produce multiple > words in the configuration: > https://github.com/haproxy/haproxy/issues/165 > It can be trivially addressed by adding support for ${VAR[*]} to the > config language. This addition is relatively simple to do, roughly > speaking, simply add the code in parse_line() in the "if" block dealing > with "$", detect the presence of '[', terminate the variable name here, > raise a flag, then once the variable is resolved, increase arg for each > space found. > We've missed it for 2.2 already, it would be nice if someone interested > in this feature could have a look at it before we release 2.3. I'm OK > with merging it slightly late given that the side effects are quickly > tested. FYI, I have looked into the issue and seem to have a working implementation now. I will probably be able to produce a clean patch soon. -- Amaury Denoyelle
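The word-splitting step Willy describes, once the variable is resolved, amounts to producing one argument per blank-delimited word. A standalone sketch of just that splitting behaviour, with an invented helper name (HAProxy's real parse_line() has its own tokenizer state and quoting rules, so this is only an illustration):

```c
#include <assert.h>

/* Count space-separated words in a resolved ${VAR[*]} value, i.e. how
 * many configuration arguments the expansion would produce. Runs of
 * blanks separate words; leading/trailing blanks yield no empty words.
 * Purely illustrative: not HAProxy's actual parsing code. */
static int count_words(const char *value)
{
    int n = 0;

    while (*value) {
        while (*value == ' ')           /* skip blanks between words */
            value++;
        if (!*value)
            break;
        n++;                            /* one more argument */
        while (*value && *value != ' ') /* consume the word */
            value++;
    }
    return n;
}
```

So with `SRVLIST="srv1:80 srv2:80 srv3:80"`, a hypothetical `${SRVLIST[*]}` would expand to three arguments rather than one quoted string.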
Re: Easy but useful feature missing, anyone interested ?
Willy Tarreau wrote: > Hi all, > while revisiting pending issues, I've come across this one, about > the impossibility for an environment variable to produce multiple > words in the configuration: > https://github.com/haproxy/haproxy/issues/165 > It can be trivially addressed by adding support for ${VAR[*]} to the > config language. This addition is relatively simple to do, roughly > speaking, simply add the code in parse_line() in the "if" block dealing > with "$", detect the presence of '[', terminate the variable name here, > raise a flag, then once the variable is resolved, increase arg for each > space found. > We've missed it for 2.2 already, it would be nice if someone interested > in this feature could have a look at it before we release 2.3. I'm OK > with merging it slightly late given that the side effects are quickly > tested. > In the same vein, I've long been saying that we're missing an ifdef > mechanism, but if we had $(if,cond,true,false) like makefiles, we could > already do a lot, particularly in the regtests. There's quite more work > here (define expressions for the condition, internal variable names for > version and builtin options, recursive resolving of variable names), but > it's likely that the same person(s) might be interested. > If someone's interested in having a look at that, please let me know. > There's no shame in trying and not succeeding, don't worry :-) I'm always interested in adding simple features, but at the moment I would also like to make progress on my stats patch series, so I'm not sure I will have the time for this :/ -- Amaury Denoyelle
Statistics improvement - usage for the json schema
Hi,

I'm currently working on extending the statistics capabilities of HAProxy. My first goal is to design a small API to be able to quickly add new counters.

I have a question about the cli command "show schema json". I am not a json expert and do not know precisely what a json schema is for. However, it seems that the currently generated schema is not valid, at least according to this website : https://www.jsonschemavalidator.net/

Is the json schema useful to anyone, or can I safely ignore/remove it ?

Thanks for your feedback,

-- Amaury Denoyelle