Re: [PATCH] MINOR: Call deinit_and_exit(0) for `haproxy -vv`
On Wed, Apr 27, 2022 at 12:08:11AM +0200, Tim Duesterhus wrote:
> It appears that it is safe to perform a clean deinit at this point, so
> let's do this to exercise the deinit paths some more.

OK let's try. If there were any issue with this, we could easily revert it
without impact anyway.

Applied, thank you!
Willy
[PATCH] MINOR: Call deinit_and_exit(0) for `haproxy -vv`
It appears that it is safe to perform a clean deinit at this point, so let's
do this to exercise the deinit paths some more.

Running `valgrind --leak-check=full --show-leak-kinds=all ./haproxy -vv` with
this change reports:

    ==261864== HEAP SUMMARY:
    ==261864==     in use at exit: 344 bytes in 11 blocks
    ==261864==   total heap usage: 1,178 allocs, 1,167 frees, 1,102,089 bytes allocated
    ==261864==
    ==261864== 24 bytes in 1 blocks are still reachable in loss record 1 of 2
    ==261864==    at 0x483DD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
    ==261864==    by 0x324BA6: hap_register_pre_check (init.c:92)
    ==261864==    by 0x155824: main (haproxy.c:3024)
    ==261864==
    ==261864== 320 bytes in 10 blocks are still reachable in loss record 2 of 2
    ==261864==    at 0x483DD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
    ==261864==    by 0x26E54E: cfg_register_postparser (cfgparse.c:4238)
    ==261864==    by 0x155824: main (haproxy.c:3024)
    ==261864==
    ==261864== LEAK SUMMARY:
    ==261864==    definitely lost: 0 bytes in 0 blocks
    ==261864==    indirectly lost: 0 bytes in 0 blocks
    ==261864==      possibly lost: 0 bytes in 0 blocks
    ==261864==    still reachable: 344 bytes in 11 blocks
    ==261864==         suppressed: 0 bytes in 0 blocks

which is looking pretty good.
---
 src/haproxy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/haproxy.c b/src/haproxy.c
index 6fbe85bd3..b43997b6c 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -1608,7 +1608,7 @@ static void init_args(int argc, char **argv)
 				display_version();
 				if (flag[1] == 'v')  /* -vv */
 					display_build_opts();
-				exit(0);
+				deinit_and_exit(0);
 			}
 #if defined(USE_EPOLL)
 			else if (*flag == 'd' && flag[1] == 'e')
--
2.36.0
Re: [PATCH] CLEANUP: Destroy `http_err_chunks` members during deinit
Hi Tim,

On Tue, Apr 26, 2022 at 11:35:07PM +0200, Tim Duesterhus wrote:
> To make the deinit function a proper inverse of the init function we need to
> free the `http_err_chunks`:
>
> ==252081== 311,296 bytes in 19 blocks are still reachable in loss record 50 of 50
> ==252081==    at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==252081==    by 0x2727EE: http_str_to_htx (http_htx.c:914)
> ==252081==    by 0x272E60: http_htx_init (http_htx.c:1059)
> ==252081==    by 0x26AC87: check_config_validity (cfgparse.c:4170)
> ==252081==    by 0x155DFE: init (haproxy.c:2120)
> ==252081==    by 0x155DFE: main (haproxy.c:3037)

Indeed. At first I was worried that there could be static buffers in use
there like in the past, but no, that's indeed always initialized and
allocated by http_str_to_htx(), so that's both safe and needed.

Applied, thank you!
Willy
Re: [PATCH] BUG/MINOR: Fix memory leak in resolvers_deinit()
On Tue, Apr 26, 2022 at 11:28:47PM +0200, Tim Duesterhus wrote:
> A config like the following:
>
>     global
>         stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
>
>     resolvers unbound
>         nameserver unbound 127.0.0.1:53
>
> will report the following leak when running a configuration check:
>
> ==241882== 6,991 (6,952 direct, 39 indirect) bytes in 1 blocks are definitely lost in loss record 8 of 13
> ==241882==    at 0x483DD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==241882==    by 0x25938D: cfg_parse_resolvers (resolvers.c:3193)
> ==241882==    by 0x26A1E8: readcfgfile (cfgparse.c:2171)
> ==241882==    by 0x156D72: init (haproxy.c:2016)
> ==241882==    by 0x156D72: main (haproxy.c:3037)
>
> because the `.px` member of `struct resolvers` is not freed.
>
> The offending allocation was introduced in
> c943799c865c04281454a7a54fd6c45c2b4d7e09, which is a reorganization that
> happened during development of 2.4.x. This fix can likely be backported
> without issue to 2.4+ and is likely not needed for earlier versions, as
> the leak happens during deinit only.

Looks good, now merged, thanks Tim!
Willy
[PATCH] CLEANUP: Destroy `http_err_chunks` members during deinit
To make the deinit function a proper inverse of the init function we need to
free the `http_err_chunks`:

    ==252081== 311,296 bytes in 19 blocks are still reachable in loss record 50 of 50
    ==252081==    at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
    ==252081==    by 0x2727EE: http_str_to_htx (http_htx.c:914)
    ==252081==    by 0x272E60: http_htx_init (http_htx.c:1059)
    ==252081==    by 0x26AC87: check_config_validity (cfgparse.c:4170)
    ==252081==    by 0x155DFE: init (haproxy.c:2120)
    ==252081==    by 0x155DFE: main (haproxy.c:3037)
---
 src/http_htx.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/http_htx.c b/src/http_htx.c
index d9584abae..ea4c25f1a 100644
--- a/src/http_htx.c
+++ b/src/http_htx.c
@@ -1112,6 +1112,9 @@ static void http_htx_deinit(void)
 		LIST_DELETE(&http_rep->list);
 		release_http_reply(http_rep);
 	}
+
+	for (rc = 0; rc < HTTP_ERR_SIZE; rc++)
+		chunk_destroy(&http_err_chunks[rc]);
 }

 REGISTER_CONFIG_POSTPARSER("http_htx", http_htx_init);
--
2.36.0
[PATCH] BUG/MINOR: Fix memory leak in resolvers_deinit()
A config like the following:

    global
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners

    resolvers unbound
        nameserver unbound 127.0.0.1:53

will report the following leak when running a configuration check:

    ==241882== 6,991 (6,952 direct, 39 indirect) bytes in 1 blocks are definitely lost in loss record 8 of 13
    ==241882==    at 0x483DD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
    ==241882==    by 0x25938D: cfg_parse_resolvers (resolvers.c:3193)
    ==241882==    by 0x26A1E8: readcfgfile (cfgparse.c:2171)
    ==241882==    by 0x156D72: init (haproxy.c:2016)
    ==241882==    by 0x156D72: main (haproxy.c:3037)

because the `.px` member of `struct resolvers` is not freed.

The offending allocation was introduced in
c943799c865c04281454a7a54fd6c45c2b4d7e09, which is a reorganization that
happened during development of 2.4.x. This fix can likely be backported
without issue to 2.4+ and is likely not needed for earlier versions, as the
leak happens during deinit only.
---
 src/resolvers.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/resolvers.c b/src/resolvers.c
index 0b7faf93d..3179073b5 100644
--- a/src/resolvers.c
+++ b/src/resolvers.c
@@ -2448,6 +2448,7 @@ static void resolvers_deinit(void)
 			abort_resolution(res);
 		}

+		free_proxy(resolvers->px);
 		free(resolvers->id);
 		free((char *)resolvers->conf.file);
 		task_destroy(resolvers->t);
--
2.36.0
Re: Stupid question about nbthread and maxconn
Hello,

> > Let's say we have the following setup.
> >
> > ```
> > maxconn 2
> > nbthread 4
> > ```
> >
> > My understanding is that HAProxy will accept 2 concurrent connections,
> > right? Even when I increase nbthread, HAProxy will *NOT* accept more
> > than 2 concurrent connections, right?

Yes.

> > What confuses me is "maximum per-process" in the maxconn docu part: will
> > every thread handle the maxconn, or is this for the whole HAProxy
> > instance?

Per-process limits apply to processes; they do not apply to threads.
Maxconn is per process, it is NOT per thread, so all threads share the
single limit. This is one of the issues that multithreading solves
compared to multiple processes.

Lukas
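To make the arithmetic concrete, here is a minimal (hypothetical)
configuration fragment illustrating the shared limit:

```
global
    maxconn 2     # process-wide: at most 2 concurrent connections in total
    nbthread 4    # 4 threads, all sharing that single limit of 2

# With this setup, 2 * 4 = 2, not 8: nbthread spreads the work across
# CPUs, it does not multiply maxconn.
```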
Re: Stupid question about nbthread and maxconn
Hi.

Anyone any idea about the question below?

Regards
Alex

On Sat, 23 Apr 2022 11:05:36 +0200 Aleksandar Lazic wrote:
> Hi.
>
> I'm not sure if I understand the doc properly.
>
> https://docs.haproxy.org/2.2/configuration.html#nbthread
> ```
> This setting is only available when support for threads was built in. It
> makes haproxy run on <number> threads. This is exclusive with "nbproc". While
> "nbproc" historically used to be the only way to use multiple processors, it
> also involved a number of shortcomings related to the lack of synchronization
> between processes (health-checks, peers, stick-tables, stats, ...) which do
> not affect threads. As such, any modern configuration is strongly encouraged
> to migrate away from "nbproc" to "nbthread". "nbthread" also works when
> HAProxy is started in foreground. On some platforms supporting CPU affinity,
> when nbproc is not used, the default "nbthread" value is automatically set to
> the number of CPUs the process is bound to upon startup. This means that the
> thread count can easily be adjusted from the calling process using commands
> like "taskset" or "cpuset". Otherwise, this value defaults to 1. The default
> value is reported in the output of "haproxy -vv". See also "nbproc".
> ```
>
> https://docs.haproxy.org/2.2/configuration.html#3.2-maxconn
> ```
> Sets the maximum per-process number of concurrent connections to <number>. It
> is equivalent to the command-line argument "-n". Proxies will stop accepting
> connections when this limit is reached. The "ulimit-n" parameter is
> automatically adjusted according to this value. See also "ulimit-n". Note:
> the "select" poller cannot reliably use more than 1024 file descriptors on
> some platforms. If your platform only supports select and reports "select
> FAILED" on startup, you need to reduce maxconn until it works (slightly
> below 500 in general). If this value is not set, it will automatically be
> calculated based on the current file descriptors limit reported by the
> "ulimit -n" command, possibly reduced to a lower value if a memory limit
> is enforced, based on the buffer size, memory allocated to compression, SSL
> cache size, and use or not of SSL and the associated maxsslconn (which can
> also be automatic).
> ```
>
> Let's say we have the following setup.
>
> ```
> maxconn 2
> nbthread 4
> ```
>
> My understanding is that HAProxy will accept 2 concurrent connections,
> right? Even when I increase nbthread, HAProxy will *NOT* accept more than
> 2 concurrent connections, right?
>
> Increasing nbthread will "only" mean that performance will be better on,
> let's say, a 32 CPU machine, especially for the upcoming 2.6 :-)
>
> https://docs.microsoft.com/en-us/azure/virtual-machines/dv3-dsv3-series#dsv3-series
> => Standard_D32s_v3: 32 CPU, 128G RAM
>
> What confuses me is "maximum per-process" in the maxconn docu part: will
> every thread handle the maxconn, or is this for the whole HAProxy instance?
>
> More mathematically :-O
> 2 * 4 = 8
> or
> 2 * 4 = 2
>
> Regards
> Alex
Re: Set environment variables
On Tue, 26 Apr 2022 15:03:51 +0200 Valerio Pachera wrote:
> Hi, I have several backend configurations that make use of a custom script:
>
> external-check command 'custom-script.sh'
>
> The script uses environment variables such as $HAPROXY_PROXY_NAME.
> I would like to be able to set an environment variable in the backend
> declaration, before running the external check.
> This environment variable would change the behavior of custom-script.sh.
>
> Is it possible to declare environment variables in haproxy 1.9 or later?
>
> What I need is to make custom-script.sh aware of whether SSL is used or
> not. If there's another way to achieve that, please tell me.

Well, you can put it in the name of the server, as I don't see any other
option for adding extra variables to the external check.

https://git.haproxy.org/?p=haproxy.git;a=blob;f=src/extcheck.c;hb=e50aabe443125eb94e3e7823c387125ca7e0c302#l81
```
 81 const struct extcheck_env extcheck_envs[EXTCHK_SIZE] = {
 82 	[EXTCHK_PATH]                   = { "PATH",                   EXTCHK_SIZE_EVAL_INIT },
 83 	[EXTCHK_HAPROXY_PROXY_NAME]     = { "HAPROXY_PROXY_NAME",     EXTCHK_SIZE_EVAL_INIT },
 84 	[EXTCHK_HAPROXY_PROXY_ID]       = { "HAPROXY_PROXY_ID",       EXTCHK_SIZE_EVAL_INIT },
 85 	[EXTCHK_HAPROXY_PROXY_ADDR]     = { "HAPROXY_PROXY_ADDR",     EXTCHK_SIZE_EVAL_INIT },
 86 	[EXTCHK_HAPROXY_PROXY_PORT]     = { "HAPROXY_PROXY_PORT",     EXTCHK_SIZE_EVAL_INIT },
 87 	[EXTCHK_HAPROXY_SERVER_NAME]    = { "HAPROXY_SERVER_NAME",    EXTCHK_SIZE_EVAL_INIT },
 88 	[EXTCHK_HAPROXY_SERVER_ID]      = { "HAPROXY_SERVER_ID",      EXTCHK_SIZE_EVAL_INIT },
 89 	[EXTCHK_HAPROXY_SERVER_ADDR]    = { "HAPROXY_SERVER_ADDR",    EXTCHK_SIZE_ADDR },
 90 	[EXTCHK_HAPROXY_SERVER_PORT]    = { "HAPROXY_SERVER_PORT",    EXTCHK_SIZE_UINT },
 91 	[EXTCHK_HAPROXY_SERVER_MAXCONN] = { "HAPROXY_SERVER_MAXCONN", EXTCHK_SIZE_EVAL_INIT },
 92 	[EXTCHK_HAPROXY_SERVER_CURCONN] = { "HAPROXY_SERVER_CURCONN", EXTCHK_SIZE_ULONG },
 93 };
```

Hth
Alex
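As a sketch of that workaround: the HAPROXY_* variable names come from the
extcheck.c listing above, while the script itself and the "-ssl" naming
convention are hypothetical — encode the hint in the server's name (e.g.
`server web1-ssl ...`) and derive the behavior from HAPROXY_SERVER_NAME:

```shell
#!/bin/sh
# Hypothetical external-check script. HAProxy exports the HAPROXY_*
# variables listed in extcheck.c before invoking the command; the "-ssl"
# suffix is our own convention, not something HAProxy defines.
scheme_for_server() {
    case "$1" in
        *-ssl) echo https ;;   # server name ends in -ssl -> check over TLS
        *)     echo http  ;;   # anything else -> plain HTTP
    esac
}

scheme=$(scheme_for_server "${HAPROXY_SERVER_NAME:-web1}")
echo "would check ${scheme}://${HAPROXY_SERVER_ADDR:-127.0.0.1}:${HAPROXY_SERVER_PORT:-80}"
```

The downside, as noted, is that the hint lives in the server name rather
than in a dedicated variable, so the naming convention must be applied
consistently across the configuration.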
Set environment variables
Hi, I have several backend configurations that make use of a custom script:

    external-check command 'custom-script.sh'

The script uses environment variables such as $HAPROXY_PROXY_NAME.
I would like to be able to set an environment variable in the backend
declaration, before running the external check. This environment variable
would change the behavior of custom-script.sh.

Is it possible to declare environment variables in haproxy 1.9 or later?

What I need is to make custom-script.sh aware of whether SSL is used or not.
If there's another way to achieve that, please tell me.

Thank you.
Learning from Spam (was: Re: Social media marketing Plans from Scratch haproxy.org)
Hi,

On Tue, 26 Apr 2022 03:32:16 -0700 Ivana Paul wrote:
> Hello haproxy.org
> [SPAM content]

A new idea for a spam "learning platform" :-) I had never heard of "SMO
services" before, and now I know what it is: Social Media Optimization
(SMO) services.

Regards
Alex
Social media marketing Plans from Scratch haproxy.org
Hello haproxy.org,

Hope you know how engaging Social Media Platforms like Facebook, Instagram,
Twitter, etc. are these days. We help you find the right audience and
provide exposure for your services while managing a consistent plan of
communication for your business. You reach the right people when an SMO
company like ours showcases your business.

Are you interested in SMO services? If yes, send me your website URL along
with the target location and the strategies to discuss further.

Thanks, Regards
*Ivana Paul*
[ANNOUNCE] haproxy-2.5.6
Hi,

HAProxy 2.5.6 was released on 2022/04/26. It added 86 new commits after
version 2.5.5.

As usual, several bugs were fixed in this release:

* An internal issue leading to truncated messages. When data were mixed
  with an error report, connection errors could be handled too early by
  the stream-interface. Now connection errors are only considered by the
  stream-interface during connection establishment. After that, it relies
  on the conn-stream to be notified of any error.

* An issue in the pass-through multiplexer, exposed by the previous fix,
  that could lead to a loop at 100% CPU. The connection error was not
  properly reported to the conn-stream on the sending path.

* An issue in the idle connections management code. It is extremely hard
  to hit, but it could randomly crash the process under high contention on
  the server side due to a missing lock.

* An issue with the FCGI multiplexer when the response is compressed. The
  FCGI application was rewriting the response headers by modifying HTX
  flags, while the compression filter was doing so by modifying the HTTP
  message flags. Thus some modifications performed on one side were not
  detected by the other, leading to invalid responses. Now the flags of
  both structures are systematically updated.

* An issue with responses to HEAD requests sent to FCGI servers. A
  "Content-Length: 0" header was erroneously added to bodyless responses
  when it should not be. Indeed, if the expected payload size is not
  specified by the server, HAProxy must not add this header because it
  cannot know it. In addition, still in the FCGI multiplexer, the parsing
  of headers and trailers was fixed to properly handle parsing errors.

* Two issues in the H1 multiplexer. First, a connection error was reported
  too early, when there were still pending data for the stream. Because of
  this bug, the last pending data could be truncated. Now the connection
  error is reported only if there is no pending data. The second issue is
  a problem with full buffer detection during trailers parsing. Because of
  this bug, it was possible to block message parsing until the timeout
  expired.

* A design issue with the HTX. When the EOM HTX block was replaced by a
  flag, we tried hard to make sure the flag was always set with the last
  HTX block. It works pretty well for all messages received from a client
  or a server, but for internal messages it was not always true,
  especially for messages produced by applets. Some workarounds were found
  to fix this design issue on stable versions, but a more elegant solution
  must be found for 2.6. The Prometheus exporter, the stats applet and Lua
  HTTP applets were concerned.

* Some issues in the H2 multiplexer. First, the GOAWAY frame is no longer
  sent if SETTINGS were not sent. Then, as announced, the "timeout
  http-keep-alive" and "timeout http-request" settings are now respected
  and work as documented, so that it will finally be possible to force
  such connections to be closed when no request comes, even if they're
  seeing control traffic such as PING frames. This can typically happen in
  some server-to-server communications whereby the client application
  makes use of PING frames to make sure the connection is still alive.

* Issues with captures defined in defaults sections. Since 2.5, it is
  possible to declare TCP/HTTP rules in defaults sections. However,
  captures were not properly working. It is still pretty tricky to use
  captures, but it doesn't crash anymore.

* Several issues in the HTTP client. An end callback was added to prevent
  Lua code from getting stuck, the response message is now properly
  consumed, and the Host header is used to generate an SNI expression,
  mandatory for SSL connections.

* A crash when HAProxy is compiled without PCRE/PCRE2 support and tries to
  replace part of the URI while the path is invalid or not specified.

* An issue with the url_enc() converter: it was able to crush HTTP
  headers. It is now fixed.
* Expired entries were displayed in "show cache" output. These entries are
  now evicted instead of being listed.

In addition to these fixes, some improvements were backported:

* The server queue management was made much more scalable with threads.
  Until now, dequeuing would wake up the next pending entry, which could
  run on a different thread, resulting in a lot of entries in the shared
  run queue when many threads were running. This caused a lot of
  contention on the scheduler's lock, slowing down dequeuing and in turn
  adding contention on the queue's lock, to the point that a few users
  were seeing similar performance with N threads as with a single thread
  when queues were highly solicited. A small change was made both in the
  scheduler and in the dequeuing code to bypass this locking and
  completely address this issue.

* The automatic frontend connection closing mechanism on reload