Re: [PATCH] BUILD: make TMPDIR global variable in travis-ci in order to show reg-tests errors

2019-05-10 Thread Willy Tarreau
On Fri, May 10, 2019 at 03:42:17PM +0500, Илья Шипицин wrote:
> this patch will reveal osx reg-tests errors (after osx build is repaired)
> 
> пт, 10 мая 2019 г. в 15:38, :
> 
> > From: Ilya Shipitsin 
> >
> > v2, rebased to master
(...)

Thanks Ilya, I've applied it. However, please in the future, do not rush
your patches, take the time to re-read them and to write correct commit
messages. It's really taking quite some time on my side to systematically
recompose commit messages by assembling sentences picked from multiple
e-mails.

A good way to avoid forgetting parts of the description is to keep the
"what, why, how" principle in mind. The "what" is the subject, the "why"
and the "how" are in the commit message. Usually commits written this
way are correct at the first iteration and are merged very quickly.
Please have a look at section 11 of CONTRIBUTING to get an idea, it
requires a little effort first but is easy to get used to and is useful
for any project.

Thanks!
Willy



Re: [PATCH] BUG/MINOR: vars: Fix memory leak in vars_check_arg

2019-05-10 Thread Willy Tarreau
On Fri, May 10, 2019 at 05:50:50PM +0200, Tim Duesterhus wrote:
> vars_check_arg previously leaked the string containing the variable
> name:
(...)

Thanks Tim! I'm going to apply a minor change:

> diff --git a/src/vars.c b/src/vars.c
> index 477a14632..d32310270 100644
> --- a/src/vars.c
> +++ b/src/vars.c
> @@ -510,6 +510,7 @@ int vars_check_arg(struct arg *arg, char **err)
>err);
>   if (!name)
>   return 0;
> + free(arg->data.str.area);

Here I'll add "arg->data.str.area = NULL". It significantly simplifies debugging
sessions to avoid leaving pointers to freed areas in various structs.

Thanks!
Willy



Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-10 Thread Maciej Zdeb
Olivier, it's still looping, but differently:

2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb) n
2610                    if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2613                    if (!LIST_ISEMPTY(&h2s->sending_list))
(gdb)
[... the same 2609 / 2610 / 2613 cycle repeats on every step ...]
(gdb) gcore
warning: target file /proc/12265/cmdline contained unexpected null
characters
warning: Memory read failed for corefile section, 12288 bytes at
0x7fff17ff3000.
Saved corefile core.12265
(gdb) p *h2s
$1 = {cs = 0x2f84190, sess = 0x819580 , h2c = 0x2f841a0, h1m
= {state = 48, flags = 0, curr_len = 38317, body_len = 103852, next = 413,
err_pos = -1, err_state = 0}, by_id = {node = {branches = {b = {0x34c0260,
  0x321d330}}, node_p = 0x0, leaf_p = 0x0, bit = 1, pfx = 47005},
key = 3}, id = 3, flags = 28675, mws = 1017461, errcode = H2_ERR_NO_ERROR,
st = H2_SS_CLOSED, status = 200, body_len = 0, rxbuf = {size = 0, area =
0x0, data = 0,
head = 0}, wait_event = {task = 0x2cd0ed0, handle = 0x3, events = 0},
recv_wait = 0x0, send_wait = 0x321d390, list = {n = 0x321d3b8, p =
0x321d3b8}, sending_list = {n = 0x3174cf8, p = 0x3174cf8}}
(gdb) p *h2s_back
$2 = {cs = 0x2f84190, sess = 0x819580 , h2c = 0x2f841a0, h1m
= {state = 48, flags = 0, curr_len = 38317, body_len = 103852, next = 413,
err_pos = -1, err_state = 0}, by_id = {node = {branches = {b = {0x34c0260,
  0x321d330}}, node_p = 0x0, leaf_p = 0x0, bit = 1, pfx = 47005},
key = 3}, id = 3, flags = 28675, mws = 1017461, errcode = H2_ERR_NO_ERROR,
st = H2_SS_CLOSED, status = 200, body_len = 0, rxbuf = {size = 0, area =
0x0, data = 0,
head = 0}, wait_event = {task = 0x2cd0ed0, handle = 0x3, events = 0},
recv_wait = 0x0, send_wait = 0x321d390, list = {n = 0x321d3b8, p =
0x321d3b8}, sending_list = {n = 0x3174cf8, p = 0x3174cf8}}
(gdb) p *h2c
$3 = {conn = 0x2d10700, st0 = H2_CS_FRAME_H, errcode = H2_ERR_NO_ERROR,
flags = 0, streams_limit = 100, max_id = 9, rcvd_c = 0, rcvd_s = 0, ddht =
0x2f311a0, dbuf = {size = 0, area = 0x0, data = 0, head = 0}, dsi = 3, dfl
= 4,
  dft = 8 '\b', dff = 0 '\000', dpl = 0 '\000', last_sid = -1, mbuf = {size
= 16384, area = 0x34f0a10 "", 

[PATCH] BUG/MINOR: vars: Fix memory leak in vars_check_arg

2019-05-10 Thread Tim Duesterhus
vars_check_arg previously leaked the string containing the variable
name:

Consider this config:

frontend fe1
mode http
bind :8080
http-request set-header X %[var(txn.host)]

Starting HAProxy and immediately stopping it by sending a SIGINT makes
Valgrind report this leak:

==7795== 9 bytes in 1 blocks are definitely lost in loss record 15 of 71
==7795==at 0x4C2DB8F: malloc (in 
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==7795==by 0x4AA2AD: my_strndup (standard.c:2227)
==7795==by 0x51FCC5: make_arg_list (arg.c:146)
==7795==by 0x4CF095: sample_parse_expr (sample.c:897)
==7795==by 0x4BA7D7: add_sample_to_logformat_list (log.c:495)
==7795==by 0x4BBB62: parse_logformat_string (log.c:688)
==7795==by 0x4E70A9: parse_http_req_cond (http_rules.c:239)
==7795==by 0x41CD7B: cfg_parse_listen (cfgparse-listen.c:1466)
==7795==by 0x480383: readcfgfile (cfgparse.c:2089)
==7795==by 0x47A081: init (haproxy.c:1581)
==7795==by 0x4049F2: main (haproxy.c:2591)

This leak can be detected even in HAProxy 1.6, this patch thus should
be backported to all supported branches.
---
 src/vars.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/vars.c b/src/vars.c
index 477a14632..d32310270 100644
--- a/src/vars.c
+++ b/src/vars.c
@@ -510,6 +510,7 @@ int vars_check_arg(struct arg *arg, char **err)
 err);
if (!name)
return 0;
+   free(arg->data.str.area);
 
/* Use the global variable name pointer. */
arg->type = ARGT_VAR;
-- 
2.21.0




Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-10 Thread Maciej Zdeb
I've just sent some additional data to Willy. :)

Sure, I'll test your patch!

pt., 10 maj 2019 o 15:11 Olivier Houchard 
napisał(a):

> Hi Maciej,
>
> On Thu, May 09, 2019 at 07:25:54PM +0200, Maciej Zdeb wrote:
> > Hi again,
> >
> > I have bad news, HAProxy 1.9.7-35b44da still looping :/
> >
> > gdb session:
> > h2_process_mux (h2c=0x1432420) at src/mux_h2.c:2609
> > 2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
> > (gdb) n
> > 2610                    if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
> > (gdb)
> > 2613                    if (!LIST_ISEMPTY(&h2s->sending_list))
> > (gdb)
> > 2619                    if (!h2s->send_wait) {
> > (gdb)
> > 2620                            LIST_DEL_INIT(&h2s->list);
> > (gdb)
> > [... the same 2609 / 2610 / 2613 / 2619 / 2620 cycle repeats ...]
> > 2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
> > (gdb) p *h2s
> > $1 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420,
> h1m
> > = {state = H1_MSG_DONE, flags = 29, curr_len = 0, body_len = 111976,
> next =
> > 411, err_pos = -1, err_state = 0}, by_id = {node = {branches = {b =
> > {0x13dcf50,
> >   0x15b3120}}, node_p = 0x125ab90, leaf_p = 0x15b3121, bit = 1,
> pfx
> > = 0}, key = 11}, id = 11, flags = 28675, mws = 977198, errcode =
> > H2_ERR_NO_ERROR, st = H2_SS_CLOSED, status = 200, body_len = 0, rxbuf =
> > {size = 0, area = 0x0,
> > data = 0, head = 0}, wait_event = {task = 0x15077a0, handle = 0x0,
> > events = 0}, recv_wait = 0x0, send_wait = 0x0, list = {n = 0x15b31a8, p =
> > 0x15b31a8}, sending_list = {n = 0x15b31b8, p = 0x15b31b8}}
> > (gdb) p *h2s_back
> > $2 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420,
> h1m
> > = {state = H1_MSG_DONE, flags = 29, curr_len = 0, body_len = 111976,
> next =
> > 411, err_pos = -1, err_state = 0}, by_id = {node = {branches = {b =
> > {0x13dcf50,
> >   0x15b3120}}, node_p = 0x125ab90, leaf_p = 0x15b3121, bit = 1,
> pfx
> > = 0}, key = 11}, id = 11, flags = 28675, mws = 977198, errcode =
> > H2_ERR_NO_ERROR, st = H2_SS_CLOSED, status = 200, body_len = 0, rxbuf =
> > {size = 0, area = 0x0,
> > data = 0, head = 0}, wait_event = {task = 0x15077a0, handle = 0x0,
> > events = 0}, recv_wait = 0x0, send_wait = 0x0, list = {n = 0x15b31a8, p =
> > 0x15b31a8}, sending_list = {n = 0x15b31b8, p = 0x15b31b8}}
> > (gdb) p *h2c
> > $3 = {conn = 0x17e3310, st0 = H2_CS_FRAME_H, errcode = H2_ERR_NO_ERROR,
> > flags = 0, streams_limit = 100, max_id = 13, rcvd_c = 0, rcvd_s = 0,
> ddht =
> > 0x1e99a40, dbuf = {size = 0, area = 0x0, data = 0, head = 0}, dsi = 13,
> dfl
> > = 4,
> >   dft = 8 '\b', dff = 0 '\000', dpl = 0 '\000', last_sid = -1, mbuf =
> {size
> > = 16384, area = 0x1e573a0 "", data = 13700, head = 0}, msi = -1, mfl = 0,
> > mft = 0 '\000', mff = 0 '\000', miw = 65535, mws = 10159243, mfs = 16384,
> >   timeout = 2, shut_timeout = 2, nb_streams = 2, nb_cs = 3,
> > nb_reserved = 0, stream_cnt = 7, proxy = 0xb85fc0, task = 0x126aa30,
> > streams_by_id = {b = {0x125ab91, 0x0}}, send_list = {n = 0x15b31a8, p =
> > 0x125ac18}, fctl_list = {
> > n = 0x14324f8, p = 0x14324f8}, sending_list = {n = 0x1432508, p =
> > 0x1432508}, buf_wait = {target = 0x0, wakeup_cb = 0x0, list = {n =
> > 0x1432528, p = 0x1432528}}, wait_event = {task = 0x1420fa0, handle = 0x0,
> > events = 1}}
> > (gdb) p list
> > $4 = (int *) 0x0
> > (gdb) n
> > 2610                    if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
> > (gdb) n
> > 2609                    list_for_each_entry_safe(h2s, 

Re: HAProxy 1.9.6 unresponsive

2019-05-10 Thread Patrick Hemmer




*From:* Willy Tarreau [mailto:w...@1wt.eu]
*Sent:* Tuesday, May 7, 2019, 14:46 EDT
*To:* Patrick Hemmer 
*Cc:* haproxy@formilux.org
*Subject:* HAProxy 1.9.6 unresponsive


Hi Patrick,

On Tue, May 07, 2019 at 02:01:33PM -0400, Patrick Hemmer wrote:

Just in case it's useful, we had the issue recur today. However I gleaned a
little more information from this recurrence. Provided below are several
outputs from a gdb `bt full`. The important bit is that in the captures, the
last frame which doesn't change between each capture is the `si_cs_send`
function. The last stack capture provided has the shortest stack depth of
all the captures, and is inside `h2_snd_buf`.

Thank you. At first glance this remains similar. Christopher and I have
been studying these issues intensely these days because they have deep
roots in some design choices and tradeoffs we've had to make and that
we're relying on, and we've come to conclusions about some long term
changes to address the causes, and some fixes for 1.9 that now appear
valid. We're still carefully reviewing our changes before pushing them.
Then I think we'll emit 1.9.8 anyway since it will already fix quite a
number of issues addressed since 1.9.7, so for you it will probably be
easier to try again.
  
So I see a few updates on some of the other 100% CPU usage threads, and 
that some fixes have been pushed. Are any of those in relation to this 
issue? Or is this one still outstanding?


Thanks

-Patrick



Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-10 Thread Olivier Houchard
Hi Maciej,

On Thu, May 09, 2019 at 07:25:54PM +0200, Maciej Zdeb wrote:
> Hi again,
> 
> I have bad news, HAProxy 1.9.7-35b44da still looping :/
> 
> gdb session:
> h2_process_mux (h2c=0x1432420) at src/mux_h2.c:2609
> 2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
> (gdb) n
> 2610                    if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
> (gdb)
> 2613                    if (!LIST_ISEMPTY(&h2s->sending_list))
> (gdb)
> 2619                    if (!h2s->send_wait) {
> (gdb)
> 2620                            LIST_DEL_INIT(&h2s->list);
> (gdb)
> [... the same 2609 / 2610 / 2613 / 2619 / 2620 cycle repeats ...]
> 2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
> (gdb) p *h2s
> $1 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420, h1m
> = {state = H1_MSG_DONE, flags = 29, curr_len = 0, body_len = 111976, next =
> 411, err_pos = -1, err_state = 0}, by_id = {node = {branches = {b =
> {0x13dcf50,
>   0x15b3120}}, node_p = 0x125ab90, leaf_p = 0x15b3121, bit = 1, pfx
> = 0}, key = 11}, id = 11, flags = 28675, mws = 977198, errcode =
> H2_ERR_NO_ERROR, st = H2_SS_CLOSED, status = 200, body_len = 0, rxbuf =
> {size = 0, area = 0x0,
> data = 0, head = 0}, wait_event = {task = 0x15077a0, handle = 0x0,
> events = 0}, recv_wait = 0x0, send_wait = 0x0, list = {n = 0x15b31a8, p =
> 0x15b31a8}, sending_list = {n = 0x15b31b8, p = 0x15b31b8}}
> (gdb) p *h2s_back
> $2 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420, h1m
> = {state = H1_MSG_DONE, flags = 29, curr_len = 0, body_len = 111976, next =
> 411, err_pos = -1, err_state = 0}, by_id = {node = {branches = {b =
> {0x13dcf50,
>   0x15b3120}}, node_p = 0x125ab90, leaf_p = 0x15b3121, bit = 1, pfx
> = 0}, key = 11}, id = 11, flags = 28675, mws = 977198, errcode =
> H2_ERR_NO_ERROR, st = H2_SS_CLOSED, status = 200, body_len = 0, rxbuf =
> {size = 0, area = 0x0,
> data = 0, head = 0}, wait_event = {task = 0x15077a0, handle = 0x0,
> events = 0}, recv_wait = 0x0, send_wait = 0x0, list = {n = 0x15b31a8, p =
> 0x15b31a8}, sending_list = {n = 0x15b31b8, p = 0x15b31b8}}
> (gdb) p *h2c
> $3 = {conn = 0x17e3310, st0 = H2_CS_FRAME_H, errcode = H2_ERR_NO_ERROR,
> flags = 0, streams_limit = 100, max_id = 13, rcvd_c = 0, rcvd_s = 0, ddht =
> 0x1e99a40, dbuf = {size = 0, area = 0x0, data = 0, head = 0}, dsi = 13, dfl
> = 4,
>   dft = 8 '\b', dff = 0 '\000', dpl = 0 '\000', last_sid = -1, mbuf = {size
> = 16384, area = 0x1e573a0 "", data = 13700, head = 0}, msi = -1, mfl = 0,
> mft = 0 '\000', mff = 0 '\000', miw = 65535, mws = 10159243, mfs = 16384,
>   timeout = 2, shut_timeout = 2, nb_streams = 2, nb_cs = 3,
> nb_reserved = 0, stream_cnt = 7, proxy = 0xb85fc0, task = 0x126aa30,
> streams_by_id = {b = {0x125ab91, 0x0}}, send_list = {n = 0x15b31a8, p =
> 0x125ac18}, fctl_list = {
> n = 0x14324f8, p = 0x14324f8}, sending_list = {n = 0x1432508, p =
> 0x1432508}, buf_wait = {target = 0x0, wakeup_cb = 0x0, list = {n =
> 0x1432528, p = 0x1432528}}, wait_event = {task = 0x1420fa0, handle = 0x0,
> events = 1}}
> (gdb) p list
> $4 = (int *) 0x0
> (gdb) n
> 2610                    if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
> (gdb) n
> 2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
> (gdb) p *h2s
> $5 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420, h1m
> = {state = H1_MSG_DONE, flags = 29, curr_len = 0, body_len = 111976, next =
> 411, err_pos = -1, err_state = 0}, by_id = {node = {branches = {b =
> {0x13dcf50,
>   0x15b3120}}, node_p = 0x125ab90, leaf_p = 0x15b3121, bit = 1, pfx
> = 0}, key = 11}, id = 11, flags = 28675, 

Re: [PATCH] BUILD: make TMPDIR global variable in travis-ci in order to show reg-tests errors

2019-05-10 Thread Илья Шипицин
please ignore, I will send "v2" soon

пт, 10 мая 2019 г. в 15:32, :

> From: Ilya Shipitsin 
>
> ---
>  .travis.yml | 8 ++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/.travis.yml b/.travis.yml
> index f9a13586..530d1682 100644
> --- a/.travis.yml
> +++ b/.travis.yml
> @@ -8,6 +8,7 @@ env:
>  - FLAGS="USE_ZLIB=1 USE_PCRE=1 USE_LUA=1 USE_OPENSSL=1"
>  - SSL_LIB=${HOME}/opt/lib
>  - SSL_INC=${HOME}/opt/include
> +- TMPDIR=/tmp
>
>  addons:
>apt:
> @@ -44,6 +45,9 @@ matrix:
>- os: linux
>  compiler: gcc
>  env: TARGET=linux2628 LIBRESSL_VERSION=2.7.5
> +  - os: linux
> +compiler: gcc
> +env: TARGET=linux2628 BORINGSSL=yes
>- os: linux
>  compiler: clang
>  env: TARGET=linux2628 FLAGS=
> @@ -64,11 +68,11 @@ script:
>- ./haproxy -vv
>- if [ "${TRAVIS_OS_NAME}" = "linux" ]; then ldd haproxy; fi
>- if [ "${TRAVIS_OS_NAME}" = "osx" ]; then otool -L haproxy; fi
> -  - env TMPDIR=/tmp VTEST_PROGRAM=../vtest/vtest make reg-tests
> +  - env VTEST_PROGRAM=../vtest/vtest make reg-tests
>
>  after_failure:
>- |
> -for folder in ${TMPDIR:-/tmp}/*regtest*/vtc.*; do
> +for folder in ${TMPDIR}/*regtest*/vtc.*; do
>cat $folder/INFO
>cat $folder/LOG
>  done
> --
> 2.20.1
>
>


[PATCH] BUILD: make TMPDIR global variable in travis-ci in order to show reg-tests errors

2019-05-10 Thread chipitsine
From: Ilya Shipitsin 

v2, rebased to master

---
 .travis.yml | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index c00725d8..530d1682 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -8,6 +8,7 @@ env:
 - FLAGS="USE_ZLIB=1 USE_PCRE=1 USE_LUA=1 USE_OPENSSL=1"
 - SSL_LIB=${HOME}/opt/lib
 - SSL_INC=${HOME}/opt/include
+- TMPDIR=/tmp
 
 addons:
   apt:
@@ -67,11 +68,11 @@ script:
   - ./haproxy -vv
   - if [ "${TRAVIS_OS_NAME}" = "linux" ]; then ldd haproxy; fi
   - if [ "${TRAVIS_OS_NAME}" = "osx" ]; then otool -L haproxy; fi
-  - env TMPDIR=/tmp VTEST_PROGRAM=../vtest/vtest make reg-tests
+  - env VTEST_PROGRAM=../vtest/vtest make reg-tests
 
 after_failure:
   - |
-for folder in ${TMPDIR:-/tmp}/*regtest*/vtc.*; do
+for folder in ${TMPDIR}/*regtest*/vtc.*; do
   cat $folder/INFO
   cat $folder/LOG
 done
-- 
2.20.1
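The diff drops the inline `TMPDIR=/tmp` from the reg-tests line and the `:-/tmp` fallback from the cleanup loop, since the variable is now exported globally. For reference, the two expansion forms differ as follows (plain POSIX shell):

```shell
# ${VAR:-default} substitutes the default when VAR is unset or empty;
# a bare ${VAR} simply expands to the empty string in that case.
unset TMPDIR
echo "with fallback: ${TMPDIR:-/tmp}"    # prints /tmp
echo "bare:          [${TMPDIR}]"        # prints []
TMPDIR=/var/tmp
echo "with fallback: ${TMPDIR:-/tmp}"    # prints /var/tmp
```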




Re: [PATCH] BUILD: make TMPDIR global variable in travis-ci in order to show reg-tests errors

2019-05-10 Thread Илья Шипицин
this patch will reveal osx reg-tests errors (after osx build is repaired)

пт, 10 мая 2019 г. в 15:38, :

> From: Ilya Shipitsin 
>
> v2, rebased to master
>
> ---
>  .travis.yml | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/.travis.yml b/.travis.yml
> index c00725d8..530d1682 100644
> --- a/.travis.yml
> +++ b/.travis.yml
> @@ -8,6 +8,7 @@ env:
>  - FLAGS="USE_ZLIB=1 USE_PCRE=1 USE_LUA=1 USE_OPENSSL=1"
>  - SSL_LIB=${HOME}/opt/lib
>  - SSL_INC=${HOME}/opt/include
> +- TMPDIR=/tmp
>
>  addons:
>apt:
> @@ -67,11 +68,11 @@ script:
>- ./haproxy -vv
>- if [ "${TRAVIS_OS_NAME}" = "linux" ]; then ldd haproxy; fi
>- if [ "${TRAVIS_OS_NAME}" = "osx" ]; then otool -L haproxy; fi
> -  - env TMPDIR=/tmp VTEST_PROGRAM=../vtest/vtest make reg-tests
> +  - env VTEST_PROGRAM=../vtest/vtest make reg-tests
>
>  after_failure:
>- |
> -for folder in ${TMPDIR:-/tmp}/*regtest*/vtc.*; do
> +for folder in ${TMPDIR}/*regtest*/vtc.*; do
>cat $folder/INFO
>cat $folder/LOG
>  done
> --
> 2.20.1
>
>


[PATCH] BUILD: make TMPDIR global variable in travis-ci in order to show reg-tests errors

2019-05-10 Thread chipitsine
From: Ilya Shipitsin 

---
 .travis.yml | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index f9a13586..530d1682 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -8,6 +8,7 @@ env:
 - FLAGS="USE_ZLIB=1 USE_PCRE=1 USE_LUA=1 USE_OPENSSL=1"
 - SSL_LIB=${HOME}/opt/lib
 - SSL_INC=${HOME}/opt/include
+- TMPDIR=/tmp
 
 addons:
   apt:
@@ -44,6 +45,9 @@ matrix:
   - os: linux
 compiler: gcc
 env: TARGET=linux2628 LIBRESSL_VERSION=2.7.5
+  - os: linux
+compiler: gcc
+env: TARGET=linux2628 BORINGSSL=yes
   - os: linux
 compiler: clang
 env: TARGET=linux2628 FLAGS=
@@ -64,11 +68,11 @@ script:
   - ./haproxy -vv
   - if [ "${TRAVIS_OS_NAME}" = "linux" ]; then ldd haproxy; fi
   - if [ "${TRAVIS_OS_NAME}" = "osx" ]; then otool -L haproxy; fi
-  - env TMPDIR=/tmp VTEST_PROGRAM=../vtest/vtest make reg-tests
+  - env VTEST_PROGRAM=../vtest/vtest make reg-tests
 
 after_failure:
   - |
-for folder in ${TMPDIR:-/tmp}/*regtest*/vtc.*; do
+for folder in ${TMPDIR}/*regtest*/vtc.*; do
   cat $folder/INFO
   cat $folder/LOG
 done
-- 
2.20.1




Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-10 Thread Maciej Zdeb
I'm getting old... I failed to remember to dump a core :( and I already
killed the process. Sorry, the issue will have to reoccur, and I can't say
how long that may take.

As soon as I get core dump I'll return.

pt., 10.05.2019, 10:35 użytkownik Willy Tarreau  napisał:

> Hi Maciej,
>
> On Thu, May 09, 2019 at 07:25:54PM +0200, Maciej Zdeb wrote:
> > Hi again,
> >
> > I have bad news, HAProxy 1.9.7-35b44da still looping :/
>
> Well, it's getting really annoying. Something's definitely wrong in
> this list and I can't figure what.
>
> > 2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
> > (gdb) p *h2s
> > $1 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420,
> h1m
> ^^^
>
> Seeing things like the above make me doubt about the list's integrity, thus
> it could again hold an element that was already reused somewhere else.
> Could
> it be possible for you to share your unstripped executable, a core dump and
> your config ? (not to the list! just send a private link to Olivier or me).
>
> At this point either we find what's happening or we'll have to issue 1.9.8
> with this bug still alive, which doesn't make me feel comfortable to say
> the least :-/
>
> Willy
>


Re: [PATCHv2] BUILD: common: Add __ha_cas_dw fallback for single threaded builds

2019-05-10 Thread Илья Шипицин
osx build is broken

https://travis-ci.com/haproxy/haproxy/jobs/199157750

seems to be related

пт, 10 мая 2019 г. в 14:45, Chris Packham :

> __ha_cas_dw() is used in fd_rm_from_fd_list() and when built without
> USE_THREADS=1 the linker fails to find __ha_cas_dw(). Add a definition
> of __ha_cas_dw() for the #ifndef USE_THREADS case.
>
> Signed-off-by: Chris Packham 
> ---
> Changes in v2:
> - cast to int * to avoid dereferencing void *
>
>  include/common/hathreads.h | 5 +
>  1 file changed, 5 insertions(+)
>
> diff --git a/include/common/hathreads.h b/include/common/hathreads.h
> index cae6eabe..7314293a 100644
> --- a/include/common/hathreads.h
> +++ b/include/common/hathreads.h
> @@ -140,6 +140,11 @@ static inline void __ha_barrier_full(void)
>  {
>  }
>
> +static inline int __ha_cas_dw(void *target, void *compare, void *set)
> +{
> +   return HA_ATOMIC_CAS((int *)target, (int *)compare, (int *)set);
> +}
> +
>  static inline void thread_harmless_now()
>  {
>  }
> --
> 2.21.0
>
>
>


Re: [PATCH] BUILD: common: Add __ha_cas_dw fallback for single threaded builds

2019-05-10 Thread Willy Tarreau
On Fri, May 10, 2019 at 09:38:08AM +, Chris Packham wrote:
> On 10/05/19 8:57 PM, Willy Tarreau wrote:
> > On Thu, May 09, 2019 at 05:07:40PM +1200, Chris Packham wrote:
> >> __ha_cas_dw() is used in fd_rm_from_fd_list() and when built without
> >> USE_THREADS=1 the linker fails to find __ha_cas_dw(). Add a definition
> >> of __ha_cas_dw() for the #ifndef USE_THREADS case.
> > 
> > Just found your patch, I think it's indeed OK to fall back to
> > HA_ATOMIC_CAS in this case since we won't use atomic instructions.
> > I'd like that we do a bit of tidying in this area so that it's
> > clearer which functions are always atomic and which ones possibly
> > are not, but for now that's OK. I've merged it now.
> 
> Actually I think there's an additional change needed in my patch. By 
> passing the parameters to HA_ATOMIC_CAS we end up attempting to 
> dereference a void *. So this needs a cast to a proper type. For 
> what it's worth I'll send a v2 that does this.

OK, but since it's already merged, please send an incremental patch.

Thanks,
Willy



Re: cygwin compilation error

2019-05-10 Thread Willy Tarreau
Hello!

On Wed, May 08, 2019 at 10:13:38PM +, Zakharychev, Bob wrote:
> I wouldn't bother even trying to add support for BoringSSL - they themselves
> discourage people from doing so in their mission statement:
> 
> "Although BoringSSL is an open source project, it is not intended for general
> use, as OpenSSL is. We don't recommend that third parties depend upon it.
> Doing so is likely to be frustrating because there are no guarantees of API
> or ABI stability.
> 
> Programs ship their own copies of BoringSSL when they use it and we update
> everything as needed when deciding to make API changes. This allows us to
> mostly avoid compromises in the name of compatibility. It works for us, but
> it may not work for you."

These are pretty valid points! Actually I'd say that we know some people
do use BoringSSL with haproxy and provide regular fixes for it, so the
maintenance cost for others remains low. If it starts to break here and
there, or to trigger false alarms on the CI, then it will be time to 1)
remove it from the CI, and maybe 2) stop supporting it. But as long as
it works it's an inexpensive indicator of the probability of forthcoming
user reports ;-)

Thanks,
Willy



Re: [PATCH] BUILD: common: Add __ha_cas_dw fallback for single threaded builds

2019-05-10 Thread Chris Packham
On 10/05/19 8:57 PM, Willy Tarreau wrote:
> On Thu, May 09, 2019 at 05:07:40PM +1200, Chris Packham wrote:
>> __ha_cas_dw() is used in fd_rm_from_fd_list() and when built without
>> USE_THREADS=1 the linker fails to find __ha_cas_dw(). Add a definition
>> of __ha_cas_dw() for the #ifndef USE_THREADS case.
> 
> Just found your patch, I think it's indeed OK to fall back to
> HA_ATOMIC_CAS in this case since we won't use atomic instructions.
> I'd like that we do a bit of tidying in this area so that it's
> clearer which functions are always atomic and which ones possibly
> are not, but for now that's OK. I've merged it now.

Actually I think there's an additional change needed in my patch. By
passing the parameters to HA_ATOMIC_CAS we end up attempting to
dereference a void *. So this needs a cast to a proper type. For
what it's worth I'll send a v2 that does this.



[PATCHv2] BUILD: common: Add __ha_cas_dw fallback for single threaded builds

2019-05-10 Thread Chris Packham
__ha_cas_dw() is used in fd_rm_from_fd_list() and when built without
USE_THREADS=1 the linker fails to find __ha_cas_dw(). Add a definition
of __ha_cas_dw() for the #ifndef USE_THREADS case.

Signed-off-by: Chris Packham 
---
Changes in v2:
- cast to int * to avoid dereferencing void *

 include/common/hathreads.h | 5 +
 1 file changed, 5 insertions(+)

diff --git a/include/common/hathreads.h b/include/common/hathreads.h
index cae6eabe..7314293a 100644
--- a/include/common/hathreads.h
+++ b/include/common/hathreads.h
@@ -140,6 +140,11 @@ static inline void __ha_barrier_full(void)
 {
 }
 
+static inline int __ha_cas_dw(void *target, void *compare, void *set)
+{
+   return HA_ATOMIC_CAS((int *)target, (int *)compare, (int *)set);
+}
+
 static inline void thread_harmless_now()
 {
 }
-- 
2.21.0




Re: Fwd: Very odd behavior with 'cookie' only working intermittently

2019-05-10 Thread Willy Tarreau
On Fri, May 10, 2019 at 10:53:07AM +0200, Willy Tarreau wrote:
> On Wed, Jun 26, 2013 at 12:10:27PM -0400, Chris Patti wrote:
(...)

Just noticed that a thread sorting issue on my side brought this very old
thread back and that this post is probably not interesting anymore to the
initial requester! Next time I'll check the date before responding.

Willy



Re: [PATCH] BUILD: common: Add __ha_cas_dw fallback for single threaded builds

2019-05-10 Thread Willy Tarreau
On Thu, May 09, 2019 at 05:07:40PM +1200, Chris Packham wrote:
> __ha_cas_dw() is used in fd_rm_from_fd_list() and when built without
> USE_THREADS=1 the linker fails to find __ha_cas_dw(). Add a definition
> of __ha_cas_dw() for the #ifndef USE_THREADS case.

Just found your patch, I think it's indeed OK to fall back to
HA_ATOMIC_CAS in this case since we won't use atomic instructions.
I'd like that we do a bit of tidying in this area so that it's
clearer which functions are always atomic and which ones possibly
are not, but for now that's OK. I've merged it now.

Thanks!
Willy



Re: Fwd: Very odd behavior with 'cookie' only working intermittently

2019-05-10 Thread Willy Tarreau
On Wed, Jun 26, 2013 at 12:10:27PM -0400, Chris Patti wrote:
> Thank you *VERY* much for this tidbit Nenad.
> 
> With the early version of HAProxy we're using (v1.3.18) the actual syntax
> is:
> 
> option httpclose
> 
> This worked perfectly, session afinity started performing as expected.
> 
> (Just wanted to record this for posterity)

Good catch indeed. Also this is not needed anymore starting from 1.4
(which supports keep-alive). And your version contains 73 known bugs
which were later fixed in 1.3.28. Note that the 1.3 branch is not
maintained anymore, but if you have a particular reason not to upgrade,
please at least have a look there :

   http://www.haproxy.org/bugs/bugs-1.3.18.html

Willy



Re: [PATCH] BUILD: add BoringSSL to travis-ci build matrix

2019-05-10 Thread Willy Tarreau
merged, thank you Ilya.

Willy



Re: CI question related to openssl matrix

2019-05-10 Thread Willy Tarreau
Hi Ilya,

On Thu, May 09, 2019 at 12:19:45AM +0500, Ilya Shipitsin wrote:
> Hello,
> 
> does haproxy have some issues when it is built using openssl-1.1.0 and
> running with openssl-1.1.1, for example ?

I don't know. I'd say that openssl guarantees ABI compatibility for low
numbers (as you could theoretically build with 1.0.1 and run on 1.0.2) but
I'm not totally sure it's *really* expected to work.

> should we consider such situations in travis-ci openssl matrix ?

I don't think so. We already emit a warning in -vv saying "versions differ"
when this happens and I don't think anyone reasonable will run their
production with this. It can still be convenient for people who need a
quick and dirty proxy to serve as a swiss army knife but it's not what
we need to check for.

Thanks,
Willy



Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-10 Thread Willy Tarreau
Hi Maciej,

On Thu, May 09, 2019 at 07:25:54PM +0200, Maciej Zdeb wrote:
> Hi again,
> 
> I have bad news, HAProxy 1.9.7-35b44da still looping :/

Well, it's getting really annoying. Something's definitely wrong in
this list and I can't figure what.

> 2609        list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
> (gdb) p *h2s
> $1 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420, h1m
^^^

Seeing things like the above make me doubt about the list's integrity, thus
it could again hold an element that was already reused somewhere else. Could
it be possible for you to share your unstripped executable, a core dump and
your config ? (not to the list! just send a private link to Olivier or me).

At this point either we find what's happening or we'll have to issue 1.9.8
with this bug still alive, which doesn't make me feel comfortable to say 
the least :-/

Willy



Re: haproxy stopped balancing after about 2 weeks

2019-05-10 Thread Willy Tarreau
Hello,

On Thu, May 09, 2019 at 11:42:54AM -0600, ericr wrote:
> A couple of weeks ago I installed haproxy on our server running FreeBSD
> 11.0-RELEASE-p16. (yes, I know it's an old version of the OS, I'm going to
> upgrade it as soon as I solve my haproxy problem.)

Can you tell us what exact version you're running ? Please send the output
of "haproxy -vv".

> Haproxy is supposed to load balance between 2 web servers running apache.
> haproxy ran fine and balanced well for about 2 weeks, and then it stopped
> sending client connections to the second web server.

But it still works for the first one ?

> It still does health checks to both servers just fine, and reports L7OK/200
> at every check for both servers. I've tried using both roundrobin and
> leastconn, with no luck.  I've restarted haproxy several times, and
> rebooted the server it's running on, and it the behavior doesn't change.

Did you notice if it's always after the exact same amount of time ? Or
maybe after a certain number of requests ? We could have imagined a bug
with one LB algo but if it does it regardless of the algo this rules it
out.

Oh wait a minute :

> # info about backend servers
> backend web_servers
> balance leastconn
> cookie phpsessid insert indirect nocache
> option httpchk HEAD /
> 
> default-server check maxconn 2048
> 
> server vi-www3 10.3.3.10:8080 cookie phpsessid inter 120s
> server vi-www4 10.3.3.11:8080 cookie phpsessid inter 120s

So for both servers you're setting a response cookie "phpsessid=phpsessid"
which has the effect that all your visitors will come back with this cookie
and that the first server which matches this value will take it, hence the
first server. First, I recommend against naming your stickiness cookies
"phpsessid" as it makes one think about the application's cookie which it
is not. Second, you need to use different cookie values here, for example
"cookie w3" and "cookie w4" for your two respective servers.

Last recommendation, I don't know if it's on purpose that you check your
servers only once every two minutes, because it's extremely slow and will
take a very long time to detect a failure. Unless you're facing a specific
limitation, you should significantly shorten this interval to just a few
seconds.
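Putting both recommendations together, a corrected backend could look like this (a sketch only; the cookie name SRVID, the per-server values w3/w4 and the 5s interval are illustrative choices, not the only valid ones):

```
backend web_servers
    balance leastconn
    # dedicated stickiness cookie, with a distinct value per server
    cookie SRVID insert indirect nocache
    option httpchk HEAD /

    default-server check inter 5s maxconn 2048

    server vi-www3 10.3.3.10:8080 cookie w3
    server vi-www4 10.3.3.11:8080 cookie w4
```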

Regards,
Willy



Re: Link error building haproxy-1.9.7

2019-05-10 Thread Willy Tarreau
On Thu, May 09, 2019 at 08:59:44PM +, Chris Packham wrote:
> >>haproxy-1.9.7/src/fd.c:267: undefined reference to `__ha_cas_dw'
(...)
> >>collect2: error: ld returned 1 exit status
> >>Makefile:994: recipe for target 'haproxy' failed
> >>make: *** [haproxy] Error 1
> >>
> >> Eyeballing the code I think it's because USE_THREAD is not defined and
> >> __ha_cas_dw is only defined when USE_THREAD is defined
(...)

> Here's the full make invocation (MUA wrapped unfortunately)
> 
> make -j32 -l16 CC=arm-unknown-linux-gnueabihf-gcc 
> LD=arm-unknown-linux-gnueabihf-gcc 
> DESTDIR=output/armv7/haproxy/new/install PREFIX=/usr CFLAGS="-O2 -g2 
> -mtune=cortex-a9 -march=armv7-a -mabi=aapcs-linux 
> --sysroot=output/armv7/haproxy/staging" 
> LDFLAGS=--sysroot=output/armv7/haproxy/staging USE_OPENSSL=1 
> SSL_INC=output/armv7/haproxy/staging/usr/include 
> SSL_LIB=output/armv7/haproxy/staging/usr/lib TARGET=linux26

Oh you're absolutely right. I build my arm versions with threads by
default and I didn't notice this one. I can obviously reproduce it
as well. The problem doesn't happen on x86_64 because it has the
macro HA_CAS_IS_8B defined and it can fall back to the regular CAS
macro which is implemented in this case. We should have a higher
level HA_CAS_DW function that supports absence of threads and use
this one instead. I'll double-check with Olivier.

Thanks,
Willy



Fwd: Paid Guest Post Inquiry - Content Collaboration Opportunity

2019-05-10 Thread Levente seo
Hello,

I would like to inquire about publishing a guest post on your site - unique
& objective content, not self-promotional and of course relevant to your
site's audience.

We'd be willing to pay for it.

Please let me know how can we proceed.

Looking forward to working with you,

Levente

