Re: [PATCH] BUILD: make TMPDIR global variable in travis-ci in order to show reg-tests errors
On Fri, May 10, 2019 at 03:42:17PM +0500, Ilya Shipitsin wrote:
> this patch will reveal osx reg-tests errors (after osx build is repaired)
>
> Fri, 10 May 2019 at 15:38, :
> > From: Ilya Shipitsin
> >
> > v2, rebased to master
(...)

Thanks Ilya, I've applied it. However, please in the future, do not rush
your patches: take the time to re-read them and to write correct commit
messages. It's really taking quite some time on my side to systematically
recompose commit messages by assembling sentences picked from multiple
e-mails.

A good way to avoid forgetting parts of the description is to keep the
"what, why, how" principle in mind: the "what" is the subject, the "why"
and the "how" are in the commit message. Usually commits written this way
are correct at the first iteration and are merged very quickly. Please
have a look at section 11 of CONTRIBUTING to get an idea; it requires a
little effort at first but is easy to get used to and is useful for any
project.

Thanks!
Willy
Re: haproxy stopped balancing after about 2 weeks
On Fri, May 10, 2019 at 01:42:17PM -0600, ericr wrote:
> resending my entire message because I didn't get subscribed in time...

Didn't you get the response I've already sent?

Willy
Re: [PATCH] BUG/MINOR: vars: Fix memory leak in vars_check_arg
On Fri, May 10, 2019 at 05:50:50PM +0200, Tim Duesterhus wrote:
> vars_check_arg previously leaked the string containing the variable
> name:
(...)

Thanks Tim! I'm going to apply a minor change:

> diff --git a/src/vars.c b/src/vars.c
> index 477a14632..d32310270 100644
> --- a/src/vars.c
> +++ b/src/vars.c
> @@ -510,6 +510,7 @@ int vars_check_arg(struct arg *arg, char **err)
>  	                          err);
>  	if (!name)
>  		return 0;
> +	free(arg->data.str.area);

Here I'll add "arg->data.str.area = NULL". It significantly simplifies
debugging sessions to avoid leaving pointers to freed areas in various
structs.

Thanks!
Willy
haproxy stopped balancing after about 2 weeks
resending my entire message because I didn't get subscribed in time...

A couple of weeks ago I installed haproxy on our server running FreeBSD
11.0-RELEASE-p16 (yes, I know it's an old version of the OS; I'm going
to upgrade it as soon as I solve my haproxy problem). Haproxy is
supposed to load balance between 2 web servers running apache.

haproxy ran fine and balanced well for about 2 weeks, and then it
stopped sending client connections to the second web server. It still
works fine for the first server. Once haproxy stopped balancing, it has
never used the second server, even after a restart/reboot; why the
problem persists across reboots is a mystery. It still does health
checks to both servers just fine, and reports L7OK/200 at every check
for both servers.

I've tried using both roundrobin and leastconn, with no luck. I've
restarted haproxy several times and rebooted the server it's running
on, and the behavior doesn't change. I'm out of ideas; does anyone have
suggestions for fixing this (or improving my config in general)?

Here's my config file:

# global holds defaults, global variables, etc.
global
    daemon
    user haproxy
    group haproxy
    log /dev/log local0
    stats socket /var/run/haproxy/admin.sock user haproxy group haproxy mode 660 level admin
    # https://www.haproxy.com/blog/multithreading-in-haproxy/
    maxconn 2048     # max connections we handle at once
    nbproc 1         # number of haproxy processes to start
    nbthread 4       # max threads, 1 per CPU core
    # cpu map = number of cpu cores
    cpu-map all 0-3
    ssl-default-bind-ciphers "EECDH+ECDSA+AESGCM ECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4"
    ssl-default-bind-options ssl-min-ver TLSv1.2

defaults
    timeout connect 30s
    timeout client 600s
    timeout server 30s
    log global
    mode http
    stats enable
    stats uri /haproxy?stats
    stats realm Statistics
    stats auth REMOVED
    stats refresh 10s

# frontend holds info about the public face of the site
frontend vi-gate2.docbasedirect.com
    bind XXX.XX.XX.XXX:80
    bind XXX.XX.XX.XXX:443 ssl crt "/usr/local/etc/2019-www-prod-SSL.crt"
    http-request redirect scheme https if !{ ssl_fc }
    default_backend web_servers
    option httplog

# info about backend servers
backend web_servers
    balance leastconn
    cookie phpsessid insert indirect nocache
    option httpchk HEAD /
    default-server check maxconn 2048
    server vi-www3 10.3.3.10:8080 cookie phpsessid inter 120s
    server vi-www4 10.3.3.11:8080 cookie phpsessid inter 120s
    email-alert mailers vi-mailer
    email-alert from REMOVED
    email-alert to REMOVED

mailers vi-mailer
    mailer localhost 127.0.0.1:25
    mailer vi-backup2 10.3.3.100:25

Version info:

haproxy -vv
HA-Proxy version 1.9.6 2019/03/29 - https://haproxy.org/
Build options :
  TARGET  = freebsd
  CPU     = generic
  CC      = cc
  CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-ignored-qualifiers -Wno-missing-field-initializers -Wno-implicit-fallthrough -Wtype-limits -Wshift-negative-value -Wnull-dereference -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_CPU_AFFINITY=1 USE_ACCEPT4=1 USE_REGPARM=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2o-freebsd 27 Mar 2018
Running on OpenSSL version : OpenSSL 1.0.2j-freebsd 26 Sep 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
Built with zlib version : 1.2.11
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.43 2019-02-23
Running on PCRE version : 8.43 2019-02-23
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
     kqueue : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTTP       side=FE
              h2 : mode=HTX        side=FE|BE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
	[SPOE] spoe
	[COMP] compression
	[CACHE] cache
	[TRACE] trace

Thanks!
--
ericr
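One general point worth knowing about the `cookie ... insert indirect` persistence used in the config above (a hedged illustration, not a diagnosis of this specific outage, and with hypothetical server names): the value given after `cookie` on each `server` line is what identifies that server in the cookie sent back to the client, so each server normally needs a distinct value. If two servers share the same value, as both `server` lines above do, every returning client that presents the cookie maps to the same (first matching) server. The usual shape is:

```
backend web_servers
    balance leastconn
    # SRVID is the cookie name; the per-server values must be unique,
    # otherwise all persisted clients resolve to one server
    cookie SRVID insert indirect nocache
    server web1 10.0.0.1:8080 cookie web1 check
    server web2 10.0.0.2:8080 cookie web2 check
```

This only affects requests carrying the persistence cookie; fresh clients are still balanced by the configured algorithm.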
Re: [1.9 HEAD] HAProxy using 100% CPU
Olivier, it's still looping, but differently:

2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb) n
2610            if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2613            if (!LIST_ISEMPTY(&h2s->sending_list))
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610            if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610            if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2613            if (!LIST_ISEMPTY(&h2s->sending_list))
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610            if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610            if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2613            if (!LIST_ISEMPTY(&h2s->sending_list))
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610            if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610            if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2613            if (!LIST_ISEMPTY(&h2s->sending_list))
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610            if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610            if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2613            if (!LIST_ISEMPTY(&h2s->sending_list))
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610            if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610            if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2613            if (!LIST_ISEMPTY(&h2s->sending_list))
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610            if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610            if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2613            if (!LIST_ISEMPTY(&h2s->sending_list))
(gdb) gcore
warning: target file /proc/12265/cmdline contained unexpected null characters
warning: Memory read failed for corefile section, 12288 bytes at 0x7fff17ff3000.
Saved corefile core.12265
(gdb) p *h2s
$1 = {cs = 0x2f84190, sess = 0x819580 , h2c = 0x2f841a0, h1m = {state = 48,
  flags = 0, curr_len = 38317, body_len = 103852, next = 413, err_pos = -1,
  err_state = 0}, by_id = {node = {branches = {b = {0x34c0260, 0x321d330}},
  node_p = 0x0, leaf_p = 0x0, bit = 1, pfx = 47005}, key = 3}, id = 3,
  flags = 28675, mws = 1017461, errcode = H2_ERR_NO_ERROR, st = H2_SS_CLOSED,
  status = 200, body_len = 0, rxbuf = {size = 0, area = 0x0, data = 0, head = 0},
  wait_event = {task = 0x2cd0ed0, handle = 0x3, events = 0}, recv_wait = 0x0,
  send_wait = 0x321d390, list = {n = 0x321d3b8, p = 0x321d3b8},
  sending_list = {n = 0x3174cf8, p = 0x3174cf8}}
(gdb) p *h2s_back
$2 = {cs = 0x2f84190, sess = 0x819580 , h2c = 0x2f841a0, h1m = {state = 48,
  flags = 0, curr_len = 38317, body_len = 103852, next = 413, err_pos = -1,
  err_state = 0}, by_id = {node = {branches = {b = {0x34c0260, 0x321d330}},
  node_p = 0x0, leaf_p = 0x0, bit = 1, pfx = 47005}, key = 3}, id = 3,
  flags = 28675, mws = 1017461, errcode = H2_ERR_NO_ERROR, st = H2_SS_CLOSED,
  status = 200, body_len = 0, rxbuf = {size = 0, area = 0x0, data = 0, head = 0},
  wait_event = {task = 0x2cd0ed0, handle = 0x3, events = 0}, recv_wait = 0x0,
  send_wait = 0x321d390, list = {n = 0x321d3b8, p = 0x321d3b8},
  sending_list = {n = 0x3174cf8, p = 0x3174cf8}}
(gdb) p *h2c
$3 = {conn = 0x2d10700, st0 = H2_CS_FRAME_H, errcode = H2_ERR_NO_ERROR,
  flags = 0, streams_limit = 100, max_id = 9, rcvd_c = 0, rcvd_s = 0,
  ddht = 0x2f311a0, dbuf = {size = 0, area = 0x0, data = 0, hea
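For readers following along: the loop being single-stepped here is haproxy's safe iterator over an intrusive circular doubly-linked list, and the bug hunt revolves around nodes whose next/prev pointers no longer agree with the list they appear in. A minimal sketch of the pattern (simplified function names, not the real haproxy macros) shows why LIST_DEL_INIT re-points a removed node at itself:

```c
#include <assert.h>

/* Simplified model of haproxy's struct list: a circular doubly-linked
 * list where the head and the nodes share the same type. */
struct list {
    struct list *n;  /* next */
    struct list *p;  /* prev */
};

/* An empty list, or a detached node, points at itself. */
static void list_init(struct list *l)
{
    l->n = l->p = l;
}

static int list_isempty(const struct list *l)
{
    return l->n == l;
}

/* Append a node at the tail of the list. */
static void list_add_tail(struct list *head, struct list *l)
{
    l->p = head->p;
    l->n = head;
    head->p->n = l;
    head->p = l;
}

/* Unlink a node and re-initialize it, as LIST_DEL_INIT does: the node
 * becomes self-linked, so a stale reference to it cannot lead an
 * iterator back into the list and make it spin forever. */
static void list_del_init(struct list *l)
{
    l->p->n = l->n;
    l->n->p = l->p;
    list_init(l);
}
```

The `h2s` printed above has `list = {n = 0x321d3b8, p = 0x321d3b8}`, i.e. a self-linked node, yet the iterator keeps visiting it, which is exactly the kind of inconsistency this invariant is meant to rule out.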
[PATCH] BUG/MINOR: vars: Fix memory leak in vars_check_arg
vars_check_arg previously leaked the string containing the variable
name:

Consider this config:

    frontend fe1
        mode http
        bind :8080
        http-request set-header X %[var(txn.host)]

Starting HAProxy and immediately stopping it by sending a SIGINT makes
Valgrind report this leak:

==7795== 9 bytes in 1 blocks are definitely lost in loss record 15 of 71
==7795==    at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==7795==    by 0x4AA2AD: my_strndup (standard.c:2227)
==7795==    by 0x51FCC5: make_arg_list (arg.c:146)
==7795==    by 0x4CF095: sample_parse_expr (sample.c:897)
==7795==    by 0x4BA7D7: add_sample_to_logformat_list (log.c:495)
==7795==    by 0x4BBB62: parse_logformat_string (log.c:688)
==7795==    by 0x4E70A9: parse_http_req_cond (http_rules.c:239)
==7795==    by 0x41CD7B: cfg_parse_listen (cfgparse-listen.c:1466)
==7795==    by 0x480383: readcfgfile (cfgparse.c:2089)
==7795==    by 0x47A081: init (haproxy.c:1581)
==7795==    by 0x4049F2: main (haproxy.c:2591)

This leak can be detected even in HAProxy 1.6, this patch thus should be
backported to all supported branches.
---
 src/vars.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/vars.c b/src/vars.c
index 477a14632..d32310270 100644
--- a/src/vars.c
+++ b/src/vars.c
@@ -510,6 +510,7 @@ int vars_check_arg(struct arg *arg, char **err)
 	                          err);
 	if (!name)
 		return 0;
+	free(arg->data.str.area);
 
 	/* Use the global variable name pointer. */
 	arg->type = ARGT_VAR;
--
2.21.0
Re: [1.9 HEAD] HAProxy using 100% CPU
I've just sent some additional data to Willy. :) Sure, I'll test your patch! pt., 10 maj 2019 o 15:11 Olivier Houchard napisał(a): > Hi Maciej, > > On Thu, May 09, 2019 at 07:25:54PM +0200, Maciej Zdeb wrote: > > Hi again, > > > > I have bad news, HAProxy 1.9.7-35b44da still looping :/ > > > > gdb session: > > h2_process_mux (h2c=0x1432420) at src/mux_h2.c:2609 > > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, > list) { > > (gdb) n > > 2610if (h2c->st0 >= H2_CS_ERROR || h2c->flags & > > H2_CF_MUX_BLOCK_ANY) > > (gdb) > > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, > list) { > > (gdb) > > 2610if (h2c->st0 >= H2_CS_ERROR || h2c->flags & > > H2_CF_MUX_BLOCK_ANY) > > (gdb) > > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, > list) { > > (gdb) > > 2613if (!LIST_ISEMPTY(&h2s->sending_list)) > > (gdb) > > 2619if (!h2s->send_wait) { > > (gdb) > > 2620LIST_DEL_INIT(&h2s->list); > > (gdb) > > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, > list) { > > (gdb) > > 2610if (h2c->st0 >= H2_CS_ERROR || h2c->flags & > > H2_CF_MUX_BLOCK_ANY) > > (gdb) > > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, > list) { > > (gdb) > > 2610if (h2c->st0 >= H2_CS_ERROR || h2c->flags & > > H2_CF_MUX_BLOCK_ANY) > > (gdb) > > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, > list) { > > (gdb) > > 2613if (!LIST_ISEMPTY(&h2s->sending_list)) > > (gdb) > > 2619if (!h2s->send_wait) { > > (gdb) > > 2620LIST_DEL_INIT(&h2s->list); > > (gdb) > > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, > list) { > > (gdb) > > 2610if (h2c->st0 >= H2_CS_ERROR || h2c->flags & > > H2_CF_MUX_BLOCK_ANY) > > (gdb) > > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, > list) { > > (gdb) > > 2610if (h2c->st0 >= H2_CS_ERROR || h2c->flags & > > H2_CF_MUX_BLOCK_ANY) > > (gdb) > > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, > list) { > > (gdb) > > 2613if (!LIST_ISEMPTY(&h2s->sending_list)) > > (gdb) > > 
2619if (!h2s->send_wait) { > > (gdb) > > 2620LIST_DEL_INIT(&h2s->list); > > (gdb) > > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, > list) { > > (gdb) p *h2s > > $1 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420, > h1m > > = {state = H1_MSG_DONE, flags = 29, curr_len = 0, body_len = 111976, > next = > > 411, err_pos = -1, err_state = 0}, by_id = {node = {branches = {b = > > {0x13dcf50, > > 0x15b3120}}, node_p = 0x125ab90, leaf_p = 0x15b3121, bit = 1, > pfx > > = 0}, key = 11}, id = 11, flags = 28675, mws = 977198, errcode = > > H2_ERR_NO_ERROR, st = H2_SS_CLOSED, status = 200, body_len = 0, rxbuf = > > {size = 0, area = 0x0, > > data = 0, head = 0}, wait_event = {task = 0x15077a0, handle = 0x0, > > events = 0}, recv_wait = 0x0, send_wait = 0x0, list = {n = 0x15b31a8, p = > > 0x15b31a8}, sending_list = {n = 0x15b31b8, p = 0x15b31b8}} > > (gdb) p *h2s_back > > $2 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420, > h1m > > = {state = H1_MSG_DONE, flags = 29, curr_len = 0, body_len = 111976, > next = > > 411, err_pos = -1, err_state = 0}, by_id = {node = {branches = {b = > > {0x13dcf50, > > 0x15b3120}}, node_p = 0x125ab90, leaf_p = 0x15b3121, bit = 1, > pfx > > = 0}, key = 11}, id = 11, flags = 28675, mws = 977198, errcode = > > H2_ERR_NO_ERROR, st = H2_SS_CLOSED, status = 200, body_len = 0, rxbuf = > > {size = 0, area = 0x0, > > data = 0, head = 0}, wait_event = {task = 0x15077a0, handle = 0x0, > > events = 0}, recv_wait = 0x0, send_wait = 0x0, list = {n = 0x15b31a8, p = > > 0x15b31a8}, sending_list = {n = 0x15b31b8, p = 0x15b31b8}} > > (gdb) p *h2c > > $3 = {conn = 0x17e3310, st0 = H2_CS_FRAME_H, errcode = H2_ERR_NO_ERROR, > > flags = 0, streams_limit = 100, max_id = 13, rcvd_c = 0, rcvd_s = 0, > ddht = > > 0x1e99a40, dbuf = {size = 0, area = 0x0, data = 0, head = 0}, dsi = 13, > dfl > > = 4, > > dft = 8 '\b', dff = 0 '\000', dpl = 0 '\000', last_sid = -1, mbuf = > {size > > = 16384, area = 0x1e573a0 "", data = 13700, head = 0}, msi = -1, 
mfl = 0, > > mft = 0 '\000', mff = 0 '\000', miw = 65535, mws = 10159243, mfs = 16384, > > timeout = 2, shut_timeout = 2, nb_streams = 2, nb_cs = 3, > > nb_reserved = 0, stream_cnt = 7, proxy = 0xb85fc0, task = 0x126aa30, > > streams_by_id = {b = {0x125ab91, 0x0}}, send_list = {n = 0x15b31a8, p = > > 0x125ac18}, fctl_list = { > > n = 0x14324f8, p = 0x14324f8}, sending_list = {n = 0x1432508, p = > > 0x1432508}, buf_wait = {target = 0x0, wakeup_cb = 0x0, list = {n = > > 0x1432528, p = 0x1432528}}, wait_event = {task = 0x1420fa0, handle = 0x0, > > events = 1}} > > (gdb) p list > > $4 = (int *) 0x0 > > (gdb) n > > 2610if (h2c->st0 >= H2_CS_ERROR || h2c->flags & > >
Re: HAProxy 1.9.6 unresponsive
*From:* Willy Tarreau [mailto:w...@1wt.eu]
*Sent:* Tuesday, May 7, 2019, 14:46 EDT
*To:* Patrick Hemmer
*Cc:* haproxy@formilux.org
*Subject:* HAProxy 1.9.6 unresponsive

> Hi Patrick,
>
> On Tue, May 07, 2019 at 02:01:33PM -0400, Patrick Hemmer wrote:
> > Just in case it's useful, we had the issue recur today. However I
> > gleaned a little more information from this recurrence. Provided below
> > are several outputs from a gdb `bt full`. The important bit is that in
> > the captures, the last frame which doesn't change between each capture
> > is the `si_cs_send` function. The last stack capture provided has the
> > shortest stack depth of all the captures, and is inside `h2_snd_buf`.
>
> Thank you. At first glance this remains similar. Christopher and I have
> been studying these issues intensely these days because they have deep
> roots in some design choices and tradeoffs we've had to make and that
> we're relying on. We've come to conclusions about some long term changes
> to address the causes, and some fixes for 1.9 that now appear valid.
> We're still carefully reviewing our changes before pushing them. Then I
> think we'll emit 1.9.8 anyway since it will already fix quite a number
> of issues addressed since 1.9.7, so for you it will probably be easier
> to try again.

So I see a few updates on some of the other 100% CPU usage threads, and
that some fixes have been pushed. Are any of those in relation to this
issue? Or is this one still outstanding?

Thanks
-Patrick
Re: [1.9 HEAD] HAProxy using 100% CPU
Hi Maciej, On Thu, May 09, 2019 at 07:25:54PM +0200, Maciej Zdeb wrote: > Hi again, > > I have bad news, HAProxy 1.9.7-35b44da still looping :/ > > gdb session: > h2_process_mux (h2c=0x1432420) at src/mux_h2.c:2609 > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) { > (gdb) n > 2610if (h2c->st0 >= H2_CS_ERROR || h2c->flags & > H2_CF_MUX_BLOCK_ANY) > (gdb) > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) { > (gdb) > 2610if (h2c->st0 >= H2_CS_ERROR || h2c->flags & > H2_CF_MUX_BLOCK_ANY) > (gdb) > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) { > (gdb) > 2613if (!LIST_ISEMPTY(&h2s->sending_list)) > (gdb) > 2619if (!h2s->send_wait) { > (gdb) > 2620LIST_DEL_INIT(&h2s->list); > (gdb) > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) { > (gdb) > 2610if (h2c->st0 >= H2_CS_ERROR || h2c->flags & > H2_CF_MUX_BLOCK_ANY) > (gdb) > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) { > (gdb) > 2610if (h2c->st0 >= H2_CS_ERROR || h2c->flags & > H2_CF_MUX_BLOCK_ANY) > (gdb) > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) { > (gdb) > 2613if (!LIST_ISEMPTY(&h2s->sending_list)) > (gdb) > 2619if (!h2s->send_wait) { > (gdb) > 2620LIST_DEL_INIT(&h2s->list); > (gdb) > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) { > (gdb) > 2610if (h2c->st0 >= H2_CS_ERROR || h2c->flags & > H2_CF_MUX_BLOCK_ANY) > (gdb) > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) { > (gdb) > 2610if (h2c->st0 >= H2_CS_ERROR || h2c->flags & > H2_CF_MUX_BLOCK_ANY) > (gdb) > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) { > (gdb) > 2613if (!LIST_ISEMPTY(&h2s->sending_list)) > (gdb) > 2619if (!h2s->send_wait) { > (gdb) > 2620LIST_DEL_INIT(&h2s->list); > (gdb) > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) { > (gdb) p *h2s > $1 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420, h1m > = {state = H1_MSG_DONE, flags = 29, 
curr_len = 0, body_len = 111976, next = > 411, err_pos = -1, err_state = 0}, by_id = {node = {branches = {b = > {0x13dcf50, > 0x15b3120}}, node_p = 0x125ab90, leaf_p = 0x15b3121, bit = 1, pfx > = 0}, key = 11}, id = 11, flags = 28675, mws = 977198, errcode = > H2_ERR_NO_ERROR, st = H2_SS_CLOSED, status = 200, body_len = 0, rxbuf = > {size = 0, area = 0x0, > data = 0, head = 0}, wait_event = {task = 0x15077a0, handle = 0x0, > events = 0}, recv_wait = 0x0, send_wait = 0x0, list = {n = 0x15b31a8, p = > 0x15b31a8}, sending_list = {n = 0x15b31b8, p = 0x15b31b8}} > (gdb) p *h2s_back > $2 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420, h1m > = {state = H1_MSG_DONE, flags = 29, curr_len = 0, body_len = 111976, next = > 411, err_pos = -1, err_state = 0}, by_id = {node = {branches = {b = > {0x13dcf50, > 0x15b3120}}, node_p = 0x125ab90, leaf_p = 0x15b3121, bit = 1, pfx > = 0}, key = 11}, id = 11, flags = 28675, mws = 977198, errcode = > H2_ERR_NO_ERROR, st = H2_SS_CLOSED, status = 200, body_len = 0, rxbuf = > {size = 0, area = 0x0, > data = 0, head = 0}, wait_event = {task = 0x15077a0, handle = 0x0, > events = 0}, recv_wait = 0x0, send_wait = 0x0, list = {n = 0x15b31a8, p = > 0x15b31a8}, sending_list = {n = 0x15b31b8, p = 0x15b31b8}} > (gdb) p *h2c > $3 = {conn = 0x17e3310, st0 = H2_CS_FRAME_H, errcode = H2_ERR_NO_ERROR, > flags = 0, streams_limit = 100, max_id = 13, rcvd_c = 0, rcvd_s = 0, ddht = > 0x1e99a40, dbuf = {size = 0, area = 0x0, data = 0, head = 0}, dsi = 13, dfl > = 4, > dft = 8 '\b', dff = 0 '\000', dpl = 0 '\000', last_sid = -1, mbuf = {size > = 16384, area = 0x1e573a0 "", data = 13700, head = 0}, msi = -1, mfl = 0, > mft = 0 '\000', mff = 0 '\000', miw = 65535, mws = 10159243, mfs = 16384, > timeout = 2, shut_timeout = 2, nb_streams = 2, nb_cs = 3, > nb_reserved = 0, stream_cnt = 7, proxy = 0xb85fc0, task = 0x126aa30, > streams_by_id = {b = {0x125ab91, 0x0}}, send_list = {n = 0x15b31a8, p = > 0x125ac18}, fctl_list = { > n = 0x14324f8, p = 0x14324f8}, 
sending_list = {n = 0x1432508, p = > 0x1432508}, buf_wait = {target = 0x0, wakeup_cb = 0x0, list = {n = > 0x1432528, p = 0x1432528}}, wait_event = {task = 0x1420fa0, handle = 0x0, > events = 1}} > (gdb) p list > $4 = (int *) 0x0 > (gdb) n > 2610if (h2c->st0 >= H2_CS_ERROR || h2c->flags & > H2_CF_MUX_BLOCK_ANY) > (gdb) n > 2609list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) { > (gdb) p *h2s > $5 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420, h1m > = {state = H1_MSG_DONE, flags = 29, curr_len = 0, body_len = 111976, next = > 411, err_pos = -1, err_state = 0}, by_id = {node = {branches = {b = > {0x13dcf50, > 0x15b3120}}, node_p = 0x12
Re: [PATCH] BUILD: make TMPDIR global variable in travis-ci in order to show reg-tests errors
please ignore, I will send "v2" soon

Fri, 10 May 2019 at 15:32, :

> From: Ilya Shipitsin
>
> ---
>  .travis.yml | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/.travis.yml b/.travis.yml
> index f9a13586..530d1682 100644
> --- a/.travis.yml
> +++ b/.travis.yml
> @@ -8,6 +8,7 @@ env:
>    - FLAGS="USE_ZLIB=1 USE_PCRE=1 USE_LUA=1 USE_OPENSSL=1"
>    - SSL_LIB=${HOME}/opt/lib
>    - SSL_INC=${HOME}/opt/include
> +  - TMPDIR=/tmp
>
>  addons:
>    apt:
> @@ -44,6 +45,9 @@ matrix:
>    - os: linux
>      compiler: gcc
>      env: TARGET=linux2628 LIBRESSL_VERSION=2.7.5
> +  - os: linux
> +    compiler: gcc
> +    env: TARGET=linux2628 BORINGSSL=yes
>    - os: linux
>      compiler: clang
>      env: TARGET=linux2628 FLAGS=
> @@ -64,11 +68,11 @@ script:
>    - ./haproxy -vv
>    - if [ "${TRAVIS_OS_NAME}" = "linux" ]; then ldd haproxy; fi
>    - if [ "${TRAVIS_OS_NAME}" = "osx" ]; then otool -L haproxy; fi
> -  - env TMPDIR=/tmp VTEST_PROGRAM=../vtest/vtest make reg-tests
> +  - env VTEST_PROGRAM=../vtest/vtest make reg-tests
>
>  after_failure:
>    - |
> -    for folder in ${TMPDIR:-/tmp}/*regtest*/vtc.*; do
> +    for folder in ${TMPDIR}/*regtest*/vtc.*; do
>        cat $folder/INFO
>        cat $folder/LOG
>      done
> --
> 2.20.1
[PATCH] BUILD: make TMPDIR global variable in travis-ci in order to show reg-tests errors
From: Ilya Shipitsin

v2, rebased to master

---
 .travis.yml | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index c00725d8..530d1682 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -8,6 +8,7 @@ env:
   - FLAGS="USE_ZLIB=1 USE_PCRE=1 USE_LUA=1 USE_OPENSSL=1"
   - SSL_LIB=${HOME}/opt/lib
   - SSL_INC=${HOME}/opt/include
+  - TMPDIR=/tmp
 
 addons:
   apt:
@@ -67,11 +68,11 @@ script:
   - ./haproxy -vv
   - if [ "${TRAVIS_OS_NAME}" = "linux" ]; then ldd haproxy; fi
   - if [ "${TRAVIS_OS_NAME}" = "osx" ]; then otool -L haproxy; fi
-  - env TMPDIR=/tmp VTEST_PROGRAM=../vtest/vtest make reg-tests
+  - env VTEST_PROGRAM=../vtest/vtest make reg-tests
 
 after_failure:
   - |
-    for folder in ${TMPDIR:-/tmp}/*regtest*/vtc.*; do
+    for folder in ${TMPDIR}/*regtest*/vtc.*; do
       cat $folder/INFO
       cat $folder/LOG
     done
--
2.20.1
Re: [PATCH] BUILD: make TMPDIR global variable in travis-ci in order to show reg-tests errors
this patch will reveal osx reg-tests errors (after osx build is repaired)

Fri, 10 May 2019 at 15:38, :

> From: Ilya Shipitsin
>
> v2, rebased to master
>
> ---
>  .travis.yml | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/.travis.yml b/.travis.yml
> index c00725d8..530d1682 100644
> --- a/.travis.yml
> +++ b/.travis.yml
> @@ -8,6 +8,7 @@ env:
>    - FLAGS="USE_ZLIB=1 USE_PCRE=1 USE_LUA=1 USE_OPENSSL=1"
>    - SSL_LIB=${HOME}/opt/lib
>    - SSL_INC=${HOME}/opt/include
> +  - TMPDIR=/tmp
>
>  addons:
>    apt:
> @@ -67,11 +68,11 @@ script:
>    - ./haproxy -vv
>    - if [ "${TRAVIS_OS_NAME}" = "linux" ]; then ldd haproxy; fi
>    - if [ "${TRAVIS_OS_NAME}" = "osx" ]; then otool -L haproxy; fi
> -  - env TMPDIR=/tmp VTEST_PROGRAM=../vtest/vtest make reg-tests
> +  - env VTEST_PROGRAM=../vtest/vtest make reg-tests
>
>  after_failure:
>    - |
> -    for folder in ${TMPDIR:-/tmp}/*regtest*/vtc.*; do
> +    for folder in ${TMPDIR}/*regtest*/vtc.*; do
>        cat $folder/INFO
>        cat $folder/LOG
>      done
> --
> 2.20.1
[PATCH] BUILD: make TMPDIR global variable in travis-ci in order to show reg-tests errors
From: Ilya Shipitsin

---
 .travis.yml | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index f9a13586..530d1682 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -8,6 +8,7 @@ env:
   - FLAGS="USE_ZLIB=1 USE_PCRE=1 USE_LUA=1 USE_OPENSSL=1"
   - SSL_LIB=${HOME}/opt/lib
   - SSL_INC=${HOME}/opt/include
+  - TMPDIR=/tmp
 
 addons:
   apt:
@@ -44,6 +45,9 @@ matrix:
   - os: linux
     compiler: gcc
     env: TARGET=linux2628 LIBRESSL_VERSION=2.7.5
+  - os: linux
+    compiler: gcc
+    env: TARGET=linux2628 BORINGSSL=yes
   - os: linux
     compiler: clang
     env: TARGET=linux2628 FLAGS=
@@ -64,11 +68,11 @@ script:
   - ./haproxy -vv
   - if [ "${TRAVIS_OS_NAME}" = "linux" ]; then ldd haproxy; fi
   - if [ "${TRAVIS_OS_NAME}" = "osx" ]; then otool -L haproxy; fi
-  - env TMPDIR=/tmp VTEST_PROGRAM=../vtest/vtest make reg-tests
+  - env VTEST_PROGRAM=../vtest/vtest make reg-tests
 
 after_failure:
   - |
-    for folder in ${TMPDIR:-/tmp}/*regtest*/vtc.*; do
+    for folder in ${TMPDIR}/*regtest*/vtc.*; do
      cat $folder/INFO
      cat $folder/LOG
    done
--
2.20.1
Re: [1.9 HEAD] HAProxy using 100% CPU
I'm getting old... I failed to remember to dump core :( And already
killed the process. Sorry, but the issue must reoccur and I can't say
how long it may take. As soon as I get a core dump I'll return.

On Fri, 10 May 2019 at 10:35, Willy Tarreau wrote:

> Hi Maciej,
>
> On Thu, May 09, 2019 at 07:25:54PM +0200, Maciej Zdeb wrote:
> > Hi again,
> >
> > I have bad news, HAProxy 1.9.7-35b44da still looping :/
>
> Well, it's getting really annoying. Something's definitely wrong in
> this list and I can't figure what.
>
> > 2609    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
> > (gdb) p *h2s
> > $1 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420, h1m
>                                  ^^^
> Seeing things like the above make me doubt about the list's integrity,
> thus it could again hold an element that was already reused somewhere
> else. Could it be possible for you to share your unstripped executable,
> a core dump and your config? (not to the list! just send a private link
> to Olivier or me).
>
> At this point either we find what's happening or we'll have to issue
> 1.9.8 with this bug still alive, which doesn't make me feel comfortable
> to say the least :-/
>
> Willy
Re: [PATCHv2] BUILD: common: Add __ha_cas_dw fallback for single threaded builds
osx build is broken

https://travis-ci.com/haproxy/haproxy/jobs/199157750

seems to be related

On Fri, 10 May 2019 at 14:45, Chris Packham wrote:

> __ha_cas_dw() is used in fd_rm_from_fd_list() and when built without
> USE_THREADS=1 the linker fails to find __ha_cas_dw(). Add a definition
> of __ha_cas_dw() for the #ifndef USE_THREADS case.
>
> Signed-off-by: Chris Packham
> ---
> Changes in v2:
> - cast to int * to avoid dereferencing void *
>
>  include/common/hathreads.h | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/include/common/hathreads.h b/include/common/hathreads.h
> index cae6eabe..7314293a 100644
> --- a/include/common/hathreads.h
> +++ b/include/common/hathreads.h
> @@ -140,6 +140,11 @@ static inline void __ha_barrier_full(void)
>  {
>  }
>
> +static inline int __ha_cas_dw(void *target, void *compare, void *set)
> +{
> +	return HA_ATOMIC_CAS((int *)target, (int *)compare, (int *)set);
> +}
> +
>  static inline void thread_harmless_now()
>  {
>  }
> --
> 2.21.0
Re: [PATCH] BUILD: common: Add __ha_cas_dw fallback for single threaded builds
On Fri, May 10, 2019 at 09:38:08AM +, Chris Packham wrote:
> On 10/05/19 8:57 PM, Willy Tarreau wrote:
> > On Thu, May 09, 2019 at 05:07:40PM +1200, Chris Packham wrote:
> >> __ha_cas_dw() is used in fd_rm_from_fd_list() and when built without
> >> USE_THREADS=1 the linker fails to find __ha_cas_dw(). Add a definition
> >> of __ha_cas_dw() for the #ifndef USE_THREADS case.
> >
> > Just found your patch, I think it's indeed OK to fall back to
> > HA_ATOMIC_CAS in this case since we won't use atomic instructions.
> > I'd like that we do a bit of tidying in this area so that it's
> > clearer which functions are always atomic and which ones possibly
> > are not, but for now that's OK. I've merged it now.
>
> Actually I think there's an additional change needed in my patch. By
> passing the parameters to HA_ATOMIC_CAS we end up attempting to
> dereference a void *. So this needs a cast to a proper type. For
> what it's worth I'll send a v2 that does this.

OK, but since it's already merged, please send an incremental patch.

Thanks,
Willy
Re: cygwin compilation error
Hello!

On Wed, May 08, 2019 at 10:13:38PM +, Zakharychev, Bob wrote:
> I wouldn't bother even trying to add support for BoringSSL - they
> themselves discourage people from doing so in their mission statement:
>
> "Although BoringSSL is an open source project, it is not intended for
> general use, as OpenSSL is. We don't recommend that third parties
> depend upon it. Doing so is likely to be frustrating because there are
> no guarantees of API or ABI stability.
>
> Programs ship their own copies of BoringSSL when they use it and we
> update everything as needed when deciding to make API changes. This
> allows us to mostly avoid compromises in the name of compatibility. It
> works for us, but it may not work for you."

These are pretty valid points! Actually I'd say that we know some people
do use BoringSSL with haproxy and provide regular fixes for it, so the
maintenance cost for others remains low. If it starts to break here and
there, or to trigger false alarms on the CI, then it will be time to
1) remove it from the CI, and maybe 2) stop supporting it. But as long
as it works it's an inexpensive indicator of the probability of
forthcoming user reports ;-)

Thanks,
Willy
Re: [PATCH] BUILD: common: Add __ha_cas_dw fallback for single threaded builds
On 10/05/19 8:57 PM, Willy Tarreau wrote: > On Thu, May 09, 2019 at 05:07:40PM +1200, Chris Packham wrote: >> __ha_cas_dw() is used in fd_rm_from_fd_list() and when built without >> USE_THREADS=1 the linker fails to find __ha_cas_dw(). Add a definition >> of __ha_cas_dw() for the #ifndef USE_THREADS case. > > Just found your patch, I think it's indeed OK to fall back to > HA_ATOMIC_CAS in this case since we won't use atomic instructions. > I'd like that we do a bit of tidying in this area so that it's > clearer which functions are always atomic and which ones possibly > are not, but for now that's OK. I've merged it now. Actually I think there's an additional change needed in my patch. By passing the parameters to HA_ATOMIC_CAS we end up attempting to dereference a void *. So this needs a cast to a proper type. For what it's worth I'll send a v2 that does this.
[PATCHv2] BUILD: common: Add __ha_cas_dw fallback for single threaded builds
__ha_cas_dw() is used in fd_rm_from_fd_list() and when built without
USE_THREADS=1 the linker fails to find __ha_cas_dw(). Add a definition
of __ha_cas_dw() for the #ifndef USE_THREADS case.

Signed-off-by: Chris Packham
---
Changes in v2:
- cast to int * to avoid dereferencing void *

 include/common/hathreads.h | 5 +
 1 file changed, 5 insertions(+)

diff --git a/include/common/hathreads.h b/include/common/hathreads.h
index cae6eabe..7314293a 100644
--- a/include/common/hathreads.h
+++ b/include/common/hathreads.h
@@ -140,6 +140,11 @@ static inline void __ha_barrier_full(void)
 {
 }
 
+static inline int __ha_cas_dw(void *target, void *compare, void *set)
+{
+	return HA_ATOMIC_CAS((int *)target, (int *)compare, (int *)set);
+}
+
 static inline void thread_harmless_now()
 {
 }
-- 
2.21.0
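For readers following the cast discussion: a CAS built on the GCC/Clang builtins must know the operand size, so a plain void * cannot be compared or swapped directly. The sketch below is a hypothetical standalone model of the fallback in the v2 patch, not HAProxy's real HA_ATOMIC_CAS macro.

```c
#include <assert.h>

/* Hypothetical model of the v2 fallback: the void * arguments are cast
 * to int * before the builtin CAS, because the compiler cannot
 * dereference a void * of unknown size. Returns non-zero on success. */
static inline int cas_fallback(void *target, void *compare, void *set)
{
	return __sync_bool_compare_and_swap((int *)target,
	                                    *(int *)compare,
	                                    *(int *)set);
}
```

Dropping the casts and passing the void * pointers straight to the builtin is exactly what fails to compile, which is what the v2 patch fixes.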
Re: Fwd: Very odd behavior with 'cookie' only working intermittently
On Fri, May 10, 2019 at 10:53:07AM +0200, Willy Tarreau wrote: > On Wed, Jun 26, 2013 at 12:10:27PM -0400, Chris Patti wrote: (...) Just noticed that a thread sorting issue on my side brought this very old thread back and that this post is probably not interesting anymore to the initial requester! Next time I'll check the date before responding. Willy
Re: [PATCH] BUILD: common: Add __ha_cas_dw fallback for single threaded builds
On Thu, May 09, 2019 at 05:07:40PM +1200, Chris Packham wrote: > __ha_cas_dw() is used in fd_rm_from_fd_list() and when built without > USE_THREADS=1 the linker fails to find __ha_cas_dw(). Add a definition > of __ha_cas_dw() for the #ifndef USE_THREADS case. Just found your patch, I think it's indeed OK to fall back to HA_ATOMIC_CAS in this case since we won't use atomic instructions. I'd like that we do a bit of tidying in this area so that it's clearer which functions are always atomic and which ones possibly are not, but for now that's OK. I've merged it now. Thanks! Willy
Re: Fwd: Very odd behavior with 'cookie' only working intermittently
On Wed, Jun 26, 2013 at 12:10:27PM -0400, Chris Patti wrote: > Thank you *VERY* much for this tidbit Nenad. > > With the early version of HAProxy we're using (v1.3.18) the actual syntax > is: > > option httpclose > > This worked perfectly, session affinity started performing as expected. > > (Just wanted to record this for posterity) Good catch indeed. Also this is not needed anymore starting from 1.4 (which supports keep-alive). And your version contains 73 known bugs which were later fixed in 1.3.28. Note that the 1.3 branch is not maintained anymore, but if you have a particular reason not to upgrade, please at least have a look there : http://www.haproxy.org/bugs/bugs-1.3.18.html Willy
Re: [PATCH] BUILD: add BoringSSL to travis-ci build matrix
merged, thank you Ilya. Willy
Re: CI question related to openssl matrix
Hi Ilya, On Thu, May 09, 2019 at 12:19:45AM +0500, Ilya Shipitsin wrote: > Hello, > > does haproxy have some issues when it is built using openssl-1.1.0 and > running with openssl-1.1.1, for example ? I don't know. I'd say that openssl guarantees ABI compatibility for low numbers (as you could theoretically build with 1.0.1 and run on 1.0.2) but I'm not totally sure it's *really* expected to work. > should we consider such situations in travis-ci openssl matrix ? I don't think so. We already emit a warning in -vv saying "versions differ" when this happens and I don't think anyone reasonable will run their production with this. It can still be convenient for people who need a quick and dirty proxy to serve as a swiss army knife but it's not what we need to check for. Thanks, Willy
Re: [1.9 HEAD] HAProxy using 100% CPU
Hi Maciej,

On Thu, May 09, 2019 at 07:25:54PM +0200, Maciej Zdeb wrote:
> Hi again,
>
> I have bad news, HAProxy 1.9.7-35b44da still looping :/

Well, it's getting really annoying. Something's definitely wrong in this
list and I can't figure what.

> 2609	list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
> (gdb) p *h2s
> $1 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420, h1m
                        ^^^

Seeing things like the above makes me doubt about the list's integrity,
thus it could again hold an element that was already reused somewhere else.

Could it be possible for you to share your unstripped executable, a core
dump and your config ? (not to the list! just send a private link to
Olivier or me).

At this point either we find what's happening or we'll have to issue 1.9.8
with this bug still alive, which doesn't make me feel comfortable to say
the least :-/

Willy
Re: haproxy stopped balancing after about 2 weeks
Hello,

On Thu, May 09, 2019 at 11:42:54AM -0600, ericr wrote:
> A couple of weeks ago I installed haproxy on our server running FreeBSD
> 11.0-RELEASE-p16. (yes, I know it's an old version of the OS, I'm going
> to upgrade it as soon as I solve my haproxy problem.)

Can you tell us what exact version you're running ? Please send the output
of "haproxy -vv".

> Haproxy is supposed to load balance between 2 web servers running apache.
> haproxy ran fine and balanced well for about 2 weeks, and then it stopped
> sending client connections to the second web server.

But it still works for the first one ?

> It still does health checks to both servers just fine, and reports
> L7OK/200 at every check for both servers. I've tried using both
> roundrobin and leastconn, with no luck. I've restarted haproxy several
> times, and rebooted the server it's running on, and the behavior doesn't
> change.

Did you notice if it's always after the exact same amount of time ? Or
maybe after a certain number of requests ? We could have imagined a bug
with one LB algo but if it does it regardless of the algo this rules it
out.

Oh wait a minute :

> # info about backend servers
> backend web_servers
>     balance leastconn
>     cookie phpsessid insert indirect nocache
>     option httpchk HEAD /
>
>     default-server check maxconn 2048
>
>     server vi-www3 10.3.3.10:8080 cookie phpsessid inter 120s
>     server vi-www4 10.3.3.11:8080 cookie phpsessid inter 120s

So for both servers you're setting a response cookie "phpsessid=phpsessid"
which has the effect that all your visitors will come back with this
cookie and that the first server which matches this value will take it,
hence the first server.

First, I recommend against naming your stickiness cookies "phpsessid" as
it makes one think about the application's cookie which it is not. Second,
you need to use different cookie values here, for example "cookie w3" and
"cookie w4" for your two respective servers.
Last recommendation: I don't know if it's on purpose that you check your
servers only once every two minutes, but it's extremely slow and will take
a very long time to detect a failure. Unless you're facing a specific
limitation, you should significantly shorten this interval to just a few
seconds.

Regards,
Willy
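Putting the two recommendations together, the corrected backend could look roughly like this (a sketch built from the thread's own server names; the cookie name "SRVID", the values "w3"/"w4" and the 2s interval are illustrative choices, not the only valid ones):

```
backend web_servers
    balance leastconn
    cookie SRVID insert indirect nocache
    option httpchk HEAD /

    default-server check maxconn 2048

    server vi-www3 10.3.3.10:8080 cookie w3 inter 2s
    server vi-www4 10.3.3.11:8080 cookie w4 inter 2s
```

With distinct cookie values, returning visitors carry "SRVID=w3" or "SRVID=w4" and are routed back to the server that first served them instead of all matching the first server.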
Re: Link error building haproxy-1.9.7
On Thu, May 09, 2019 at 08:59:44PM +, Chris Packham wrote:
> >> haproxy-1.9.7/src/fd.c:267: undefined reference to `__ha_cas_dw'
(...)
> >> collect2: error: ld returned 1 exit status
> >> Makefile:994: recipe for target 'haproxy' failed
> >> make: *** [haproxy] Error 1
> >>
> >> Eyeballing the code I think it's because USE_THREAD is not defined and
> >> __ha_cas_dw is only defined when USE_THREAD is defined
(...)
> Here's the full make invocation (MUA wrapped unfortunately)
>
> make -j32 -l16 CC=arm-unknown-linux-gnueabihf-gcc
> LD=arm-unknown-linux-gnueabihf-gcc
> DESTDIR=output/armv7/haproxy/new/install PREFIX=/usr CFLAGS="-O2 -g2
> -mtune=cortex-a9 -march=armv7-a -mabi=aapcs-linux
> --sysroot=output/armv7/haproxy/staging"
> LDFLAGS=--sysroot=output/armv7/haproxy/staging USE_OPENSSL=1
> SSL_INC=output/armv7/haproxy/staging/usr/include
> SSL_LIB=output/armv7/haproxy/staging/usr/lib TARGET=linux26

Oh you're absolutely right. I build my arm versions with threads by default
and I didn't notice this one. I can obviously reproduce it as well.

The problem doesn't happen on x86_64 because it has the macro HA_CAS_IS_8B
defined and it can fall back to the regular CAS macro which is implemented
in this case. We should have a higher level HA_CAS_DW function that
supports absence of threads and use this one instead. I'll double-check
with Olivier.

Thanks,
Willy
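The higher-level wrapper described here could look roughly like the following (a hypothetical sketch of the idea, not the actual commit): with threads disabled there is no concurrency, so a double-word CAS can be emulated with a plain compare-and-assign over two adjacent words, no atomic instruction needed.

```c
/* Hypothetical sketch of a thread-less HA_CAS_DW fallback: compares and
 * swaps two adjacent longs non-atomically, which is safe only because no
 * other thread can run. On mismatch, *compare is refreshed with the
 * current value, mimicking common CAS conventions. Not HAProxy's code. */
static inline int ha_cas_dw(void *target, void *compare, const void *set)
{
	long *t = (long *)target, *c = (long *)compare;
	const long *s = (const long *)set;

	if (t[0] == c[0] && t[1] == c[1]) {
		t[0] = s[0];
		t[1] = s[1];
		return 1;
	}
	c[0] = t[0];
	c[1] = t[1];
	return 0;
}
```

The threaded build would instead map such a wrapper onto the architecture's real double-word CAS instruction.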