[PATCH] BUG/MEDIUM: init: Initialize idle_orphan_conns for first server in server-template

2019-01-08 Thread cripy
Hi,

I found a segfault when using server-template within 1.9.x and 2.0-dev.
This seems to be related to "http-reuse", as when I set it to "never" it does
not crash anymore.

It appears that idle_orphan_conns is not being properly initialized for the
first server within the server-template.  I was able to confirm this by
creating a small server-template with 4 servers and setting all of the
addresses except for the first one.  This did not result in a crash.  As soon
as I set the first address and traffic was sent to it, it resulted in a crash.

I found that server_template_init() establishes everything fine for all
servers (setting the id from the prefix with srv_set_id_from_prefix(), etc.)
and then at the bottom of the function you can see it calls
srv_set_id_from_prefix() again to establish the id for the first server --
however, the first server doesn't get any of the logic to initialize
idle_orphan_conns.

My initial fix added the idle_orphan_conns initialization code to the
bottom of server_template_init() (right below the srv_set_id_from_prefix()
call which sets the id specifically for the first server slot) -- however
this seemed like it might be too messy.

I believe a better option is to remove the check for !srv->tmpl_info.prefix
within server_finalize_init().  Patch attached.

Feel free to correct me if I am wrong on this assumption.
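For illustration, here is a small self-contained C sketch of the bug
pattern (made-up names and structures, not the actual haproxy source): a
template initializer that only finalizes the cloned slots leaves the first
slot's per-thread idle list array NULL, so the equivalent of the
LIST_ADDQ() call shown in the backtrace below dereferences a NULL pointer.
With the proposed fix, the initialization also runs for slot 0 (by
dropping the !srv->tmpl_info.prefix check), leaving no NULL array behind.

#include <stdio.h>
#include <stdlib.h>

#define NB_THREADS 4

struct list { struct list *n, *p; };

struct server {
    struct list *idle_orphan_conns;   /* one list head per thread */
};

/* allocate and self-link one idle list head per thread */
static void init_idle_conns(struct server *srv)
{
    srv->idle_orphan_conns = calloc(NB_THREADS, sizeof(struct list));
    for (int tid = 0; tid < NB_THREADS; tid++) {
        srv->idle_orphan_conns[tid].n = &srv->idle_orphan_conns[tid];
        srv->idle_orphan_conns[tid].p = &srv->idle_orphan_conns[tid];
    }
}

int main(void)
{
    struct server tmpl[2] = {{0}};

    /* buggy template setup: only the cloned slots (1..N) are
     * finalized, the first slot is skipped */
    for (int i = 1; i < 2; i++)
        init_idle_conns(&tmpl[i]);

    /* tmpl[0].idle_orphan_conns is still NULL here; appending a
     * connection to slot 0 would dereference NULL and segfault,
     * just like the LIST_ADDQ() in the backtrace */
    printf("slot 0: %p, slot 1: %p\n",
           (void *)tmpl[0].idle_orphan_conns,
           (void *)tmpl[1].idle_orphan_conns);
    return 0;
}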

Here is the config which results in a crash:

listen fe_main
    mode http
    bind *:80
    timeout server 5ms
    timeout client 5ms
    timeout connect 5ms
    server-template srv 2 10.1.0.1:80

(Should segfault after the first request)

HA-Proxy version 2.0-dev0-251a6b7 2019/01/08 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
-fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_OPENSSL=1

Backtrace:
[New LWP 14046]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `./haproxy -f crash.cfg -d'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x004f82fe in srv_add_to_idle_list (conn=0x2331320,
srv=0x22aeb60) at include/proto/server.h:244
244     LIST_ADDQ(&srv->idle_orphan_conns[tid], &conn->list);
(gdb) bt
#0  0x004f82fe in srv_add_to_idle_list (conn=0x2331320,
srv=0x22aeb60) at include/proto/server.h:244
#1  session_free (sess=0x2330970) at src/session.c:90
#2  0x0050dca3 in mux_pt_destroy (ctx=0x2330920) at src/mux_pt.c:38
#3  0x00446bdb in cs_destroy (cs=0x2331230) at
include/proto/connection.h:708
#4  si_release_endpoint (si=si@entry=0x2330cd8) at
include/proto/stream_interface.h:170
#5  0x0044c9ec in stream_free (s=0x2330a40) at src/stream.c:446
#6  process_stream (t=t@entry=0x2330e30, context=0x2330a40,
state=<optimized out>) at src/stream.c:2610
#7  0x00509955 in process_runnable_tasks () at src/task.c:432
#8  0x0048b485 in run_poll_loop () at src/haproxy.c:2619
#9  run_thread_poll_loop (data=data@entry=0x23267d0) at src/haproxy.c:2684
#10 0x0040aa0c in main (argc=<optimized out>, argv=0x7fffd8018e48)
at src/haproxy.c:3313

(gdb) frame 0
#0  0x004f82fe in srv_add_to_idle_list (conn=0x2331320,
srv=0x22aeb60) at include/proto/server.h:244
244     LIST_ADDQ(&srv->idle_orphan_conns[tid], &conn->list);

(gdb) print &srv->idle_orphan_conns[tid]
$1 = (struct list *) 0x0

(gdb) print &conn->list
$2 = (struct list *) 0x2331370


0001-BUG-MEDIUM-init-Initialize-idle_orphan_conns-for-fir.patch
Description: Binary data


Re: haproxy issue tracker discussion

2019-01-08 Thread Willy Tarreau
On Tue, Jan 08, 2019 at 07:18:07PM +0100, Tim Düsterhus wrote:
> Willy,
> 
> Am 08.01.19 um 18:30 schrieb Willy Tarreau:
> > I totally agree. This is the tool I'm missing the most currently. I'm
> > not aware of a *good* and manageable issue tracker. Having a status for
> > a bug per branch most likely eliminates most of them...
> 
> I'm not sure this is required. The bugfixes naturally land in the
> current development repository and have the affected branches in their
> commit message. They naturally "trickle down" to the maintained
> branches. So if the issue is marked as fixed the fix will *eventually*
> appear in the stable branch of the reporter.

Except that the "naturally" part here is manually performed by someone,
and an issue tracker is nothing more than an organized todo list, which
*is* useful to remind that you missed some backports. It regularly happens
to us, like when the safety of some fixes is not certain and we prefer to
let them run for a while in the most recent versions before backporting
them to older branches. This is exactly where an issue tracker is needed,
to remind us that these fixes are still needed in older branches.

If the issue tracker only tracks issues related to the most recent branch,
it will only solve the problem for this branch. For example, Veiko Kukk
reported in November that compression in 1.7.11 was broken again. How do
I know this ? Just because I've added an entry for this in my TODO file.
This bug is apparently a failed backport, so it requires that the original
bug is reopened and that any backport attempt to an older version is paused.
Without having cross-branches indications, you can't reliably do this. There
is a significant risk that the faulty fix gets backported to 1.6 before
anyone qualifies the issue.

This can possibly be dealt with using labels, I'm not sure how convenient
it will be.

> In my experience non-issues (a.k.a. RTFM) are usually easy to detect
> when reading through the explanation.

As long as there is manpower to clear them up it's OK, but this means
that some of us will count on you guys for this. With e-mails I don't
even need to do anything to stop reading a faulty bug report. The mail
is marked read once I glance over it, and I don't see it anymore. I
can also mark a whole thread as read when I see that any of the people
I trust here on the list start to respond. When reading an issue tracker,
the same issues pop up until I've done something with them. This makes a
huge difference. This is why I've always been highly concerned with the
risk of pollution.

> >> What you are describing basically is an unmaintained issue tracker,
> >> which is of course useless.
> >>
> >> But keeping people from filing new issues is the wrong approach to
> >> this, imho. Proper templating, triaging and correct labeling of the
> >> issues is the difference between a useless and a useful issue tracker
> >> from my point of view.
> > 
> > I do have strong doubts but I'm open to trying :-)
> 
> I have to agree with Lukas here. Proper triaging is key here. And this
> is no different to email. You have to read email to decide what to do.

Yes but you don't have to act on them to stop seeing them ;-)

> And you have to read the issue to decide what to do.

Most of the time many of us will conclude they have nothing to do because
they're not the best people for this. When you have 10 people responsible
for different areas, on average 10% of the issues will be of interest to
them, and I want to be sure that we will not end up with developers looking
at the issue tracker and constantly seeing stuff that's not for them without
an easy way to definitely skip it and focus on their stuff. Again with
e-mail it's easy because without doing anything you don't see the same
e-mail again. Here if you have to label or ack all the stuff that's not
for you just in order not to see it again, it's inefficient.

But it can likely be improved with proper triaging.

> Labels in the issue tracker are like folders in your mailbox and the
> issue tracker itself is like a public mailbox.

That's exactly the problem. With e-mails, there's one state per reader,
here there's a single state that everyone has to share.

> I'd throw my hat into the ring as well. I maintain a few Open Source
> projects myself (though not of the size and importance of haproxy) and
> actually use the GitHub issue tracker.

Thanks. From what I've been used to seeing on github, very very few
projects care about maintenance. Most of them are rolling releases. It
actually took me a very long time to find one project with multiple
maintenance branches to see how they dealt with issues, and the few I
found by then had disabled issues, which could already have been a hint
about its suitability for the task. Just a few examples :

   Apache  : https://github.com/apache/httpd
   Squid   : https://github.com/squid-cache/squid
   Nginx   : https://github.com/nginx/nginx
   Linux   : https://github.com/torvalds/linux

Re: coredump in h2_process_mux with 1.9.0-8223050

2019-01-08 Thread Willy Tarreau
On Wed, Jan 09, 2019 at 02:09:47AM +0100, Tim Düsterhus wrote:
> Pieter,
> 
> Am 08.01.19 um 23:37 schrieb PiBa-NL:
> > Got a coredump of 1.9.0-8223050 today, see below. Would this be 'likely'
> > the same one with the 'PRIORITY' that 1.9.1 fixes?
> 
> Without knowing much about the mux code: This is highly unlikely to be
> related. In my tests the bug lead to an immediate crash when receiving
> the bogusrequest. In your case it appears to me that the crash happened
> while sending the response.

For me it looks like this one, the first one fixed in 1.9 :

af97229 ("BUG/MEDIUM: h2: Don't forget to quit the sending_list if 
SUB_CALL_UNSUBSCRIBE.")

> Generally I believe it's best to send those kind of reports to Willy in
> private. If this is actually something that can be triggered by an
> unauthenticated client it does not need to be exposed for the whole
> world to see :-)

Well, as long as there's no reproducing data, public reports often increase
the likelihood of the "me too" effect, which often helps narrow down an issue.

> > Anyhow i updated my system to 2.0-dev0-251a6b7 for the moment, lets see
> > if something strange happens again. Might take a few days though, IF it
> > still occurs..

I'm confident you will not see this one anymore. Interestingly, 1.9 was
much more difficult to develop than 1.8 but looks much less chaotic after
the release. I think that the cleaner architecture and the regtests have
significantly reduced the opportunities for introducing stupid bugs, this
is great.

Thanks,
Willy



Re: coredump in h2_process_mux with 1.9.0-8223050

2019-01-08 Thread Tim Düsterhus
Pieter,

Am 08.01.19 um 23:37 schrieb PiBa-NL:
> Got a coredump of 1.9.0-8223050 today, see below. Would this be 'likely'
> the same one with the 'PRIORITY' that 1.9.1 fixes?

Without knowing much about the mux code: This is highly unlikely to be
related. In my tests the bug led to an immediate crash when receiving
the bogus request. In your case it appears to me that the crash happened
while sending the response.

Generally I believe it's best to send those kind of reports to Willy in
private. If this is actually something that can be triggered by an
unauthenticated client it does not need to be exposed for the whole
world to see :-)

> I don't have any idea what the exact circumstance request/response was..

What might be of interest is the configuration: Are you using HTX,
Compression, Lua or something like that?

Best regards
Tim Düsterhus

> Anyhow i updated my system to 2.0-dev0-251a6b7 for the moment, lets see
> if something strange happens again. Might take a few days though, IF it
> still occurs..
> 
> Regards,
> 
> PiBa-NL (Pieter)
> 
> Core was generated by `/usr/local/sbin/haproxy -f
> /var/etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x004b91c7 in h2_process_mux (h2c=0x802657480) at
> src/mux_h2.c:2434
> 2434    src/mux_h2.c: No such file or directory.
> (gdb) bt full
> #0  0x004b91c7 in h2_process_mux (h2c=0x802657480) at
> src/mux_h2.c:2434
>     h2s = 0x80262c7a0
>     h2s_back = 0x80262ca40
> #1  0x004b844d in h2_send (h2c=0x802657480) at src/mux_h2.c:2560
>     flags = 0
>     conn = 0x8026dc300
>     done = 0
>     sent = 1
> #2  0x004b8a49 in h2_process (h2c=0x802657480) at src/mux_h2.c:2640
>     conn = 0x8026dc300
> #3  0x004b32e1 in h2_wake (conn=0x8026dc300) at src/mux_h2.c:2715
>     h2c = 0x802657480
> #4  0x005c8158 in conn_fd_handler (fd=7) at src/connection.c:190
>     conn = 0x8026dc300
>     flags = 0
>     io_available = 0
> #5  0x005e3c7c in fdlist_process_cached_events (fdlist=0x9448f0
> ) at src/fd.c:441
>     fd = 7
>     old_fd = 7
>     e = 117
> #6  0x005e377c in fd_process_cached_events () at src/fd.c:459
> No locals.
> #7  0x00514296 in run_poll_loop () at src/haproxy.c:2655
>     next = 762362654
>     exp = 762362654
> #8  0x00510b78 in run_thread_poll_loop (data=0x802615970) at
> src/haproxy.c:2684
>     start_lock = 0
>     ptif = 0x92ed10 
>     ptdf = 0x0
> #9  0x0050d1a6 in main (argc=6, argv=0x7fffec60) at
> src/haproxy.c:3313
>     tids = 0x802615970
>     threads = 0x802615998
>     i = 1
>     old_sig = {__bits = {0, 0, 0, 0}}
>     blocked_sig = {__bits = {4227856759, 4294967295, 4294967295,
> 4294967295}}
>     err = 0
>     retry = 200
>     limit = {rlim_cur = 2040, rlim_max = 2040}
>     errmsg =
> "\000\354\377\377\377\177\000\000\230\354\377\377\377\177\000\000`\354\377\377\377\177\000\000\006\000\000\000\000\000\000\000\f\373\353\230\373\032\351~\240\270\223\000\000\000\000\000X\354\377\377\377\177\000\000\230\354\377\377\377\177\000\000`\354\377\377\377\177\000\000\006\000\000\000\000\000\000\000\000\354\377\377\377\177\000\000\302z\000\002\b\000\000\000\001\000\000"
> 
>     pidfd = 17
> (gdb)
> 
> 



coredump in h2_process_mux with 1.9.0-8223050

2019-01-08 Thread PiBa-NL

Hi List, Willy,

Got a coredump of 1.9.0-8223050 today, see below. Would this be 'likely' 
the same one with the 'PRIORITY' that 1.9.1 fixes?
I don't have any idea what the exact circumstance request/response was.. 
Anyhow i updated my system to 2.0-dev0-251a6b7 for the moment, lets see 
if something strange happens again. Might take a few days though, IF it 
still occurs..


Regards,

PiBa-NL (Pieter)

Core was generated by `/usr/local/sbin/haproxy -f 
/var/etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid'.

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x004b91c7 in h2_process_mux (h2c=0x802657480) at 
src/mux_h2.c:2434

2434    src/mux_h2.c: No such file or directory.
(gdb) bt full
#0  0x004b91c7 in h2_process_mux (h2c=0x802657480) at 
src/mux_h2.c:2434

    h2s = 0x80262c7a0
    h2s_back = 0x80262ca40
#1  0x004b844d in h2_send (h2c=0x802657480) at src/mux_h2.c:2560
    flags = 0
    conn = 0x8026dc300
    done = 0
    sent = 1
#2  0x004b8a49 in h2_process (h2c=0x802657480) at src/mux_h2.c:2640
    conn = 0x8026dc300
#3  0x004b32e1 in h2_wake (conn=0x8026dc300) at src/mux_h2.c:2715
    h2c = 0x802657480
#4  0x005c8158 in conn_fd_handler (fd=7) at src/connection.c:190
    conn = 0x8026dc300
    flags = 0
    io_available = 0
#5  0x005e3c7c in fdlist_process_cached_events (fdlist=0x9448f0 
) at src/fd.c:441

    fd = 7
    old_fd = 7
    e = 117
#6  0x005e377c in fd_process_cached_events () at src/fd.c:459
No locals.
#7  0x00514296 in run_poll_loop () at src/haproxy.c:2655
    next = 762362654
    exp = 762362654
#8  0x00510b78 in run_thread_poll_loop (data=0x802615970) at 
src/haproxy.c:2684

    start_lock = 0
    ptif = 0x92ed10 
    ptdf = 0x0
#9  0x0050d1a6 in main (argc=6, argv=0x7fffec60) at 
src/haproxy.c:3313

    tids = 0x802615970
    threads = 0x802615998
    i = 1
    old_sig = {__bits = {0, 0, 0, 0}}
    blocked_sig = {__bits = {4227856759, 4294967295, 4294967295, 
4294967295}}

    err = 0
    retry = 200
    limit = {rlim_cur = 2040, rlim_max = 2040}
    errmsg = 
"\000\354\377\377\377\177\000\000\230\354\377\377\377\177\000\000`\354\377\377\377\177\000\000\006\000\000\000\000\000\000\000\f\373\353\230\373\032\351~\240\270\223\000\000\000\000\000X\354\377\377\377\177\000\000\230\354\377\377\377\177\000\000`\354\377\377\377\177\000\000\006\000\000\000\000\000\000\000\000\354\377\377\377\177\000\000\302z\000\002\b\000\000\000\001\000\000"

    pidfd = 17
(gdb)




Re: [PATCH] REGTEST: filters: add compression test

2019-01-08 Thread PiBa-NL

Hi Frederic,

Op 7-1-2019 om 10:13 schreef Frederic Lecaille:

On 12/23/18 11:38 PM, PiBa-NL wrote:
As requested hereby the regtest send for inclusion into the git 
repository.

It is OK like that.

Note that your patch does not add reg-tests/filters/common.pem which could 
be a symlink to ../ssl/common.pem.
Also note that since Christopher's commit 8f16148, we add such a line 
where possible:

    ${no-htx} option http-use-htx
We should also rename your test files to reg-tests/filters/h0.*
Thank you.
Fred.


Together with the changes you have already supplied me off-list, I've 
also added a "--max-time 15" to the curl request; that should be 
sufficient for most systems to complete the 3-second testcase, and 
allows the shell command to complete without varnishtest killing it 
after a timeout and not showing any of the curl output..


One last question: currently it is being added to a new folder, 
reg-tests/filters/ , perhaps it should be in reg-tests/compression/ ?
If you agree that needs changing I guess that can be done upon 
committing it?


Note that the test fails on my FreeBSD system when using HTX with 
'2.0-dev0-251a6b7 2019/01/08'; I'm not aware it ever worked (I didn't 
test it with HTX before..).
 top  15.2 shell_out|curl: (28) Operation timed out after 15036 
milliseconds with 187718 bytes received


Log attached.. Would it help to log it with the complete "filter trace 
name BEFORE / filter compression / filter trace name AFTER" ? Or are 
there other details I could try to gather?


Regards,

PiBa-NL (Pieter)

From 793e770b399157a1549a2655612a29845b165dd6 Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Sun, 23 Dec 2018 21:21:51 +0100
Subject: [PATCH] REGTEST: filters: add compression test

This test checks that data transferred with compression is correctly received 
at different download speeds
---
 reg-tests/filters/common.pem |  1 +
 reg-tests/filters/s0.lua | 19 +++++++++++++++++++
 reg-tests/filters/s0.vtc | 59 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 79 insertions(+)
 create mode 120000 reg-tests/filters/common.pem
 create mode 100644 reg-tests/filters/s0.lua
 create mode 100644 reg-tests/filters/s0.vtc

diff --git a/reg-tests/filters/common.pem b/reg-tests/filters/common.pem
new file mode 120000
index 00000000..a4433d56
--- /dev/null
+++ b/reg-tests/filters/common.pem
@@ -0,0 +1 @@
+../ssl/common.pem
\ No newline at end of file
diff --git a/reg-tests/filters/s0.lua b/reg-tests/filters/s0.lua
new file mode 100644
index 00000000..2cc874b9
--- /dev/null
+++ b/reg-tests/filters/s0.lua
@@ -0,0 +1,19 @@
+
+local data = "abcdefghijklmnopqrstuvwxyz"
+local responseblob = ""
+for i = 1,10000 do
+  responseblob = responseblob .. "\r\n" .. i .. data:sub(1, math.floor(i % 27))
+end
+
+http01applet = function(applet)
+  local response = responseblob
+  applet:set_status(200)
+  applet:add_header("Content-Type", "application/javascript")
+  applet:add_header("Content-Length", string.len(response)*10)
+  applet:start_response()
+  for i = 1,10 do
+applet:send(response)
+  end
+end
+
+core.register_service("fileloader-http01", "http", http01applet)
diff --git a/reg-tests/filters/s0.vtc b/reg-tests/filters/s0.vtc
new file mode 100644
index 00000000..231344a6
--- /dev/null
+++ b/reg-tests/filters/s0.vtc
@@ -0,0 +1,59 @@
+# Checks that compression doesn't cause corruption..
+
+varnishtest "Compression validation"
+#REQUIRE_VERSION=1.6
+
+feature ignore_unknown_macro
+
+haproxy h1 -conf {
+global
+#  log stdout format short daemon
+   lua-load ${testdir}/s0.lua
+
+defaults
+   mode http
+   log global
+   ${no-htx} option http-use-htx
+   option  httplog
+
+frontend main-https
+   bind "fd@${fe1}" ssl crt ${testdir}/common.pem
+   compression algo gzip
+   compression type text/html text/plain application/json application/javascript
+   compression offload
+   use_backend TestBack  if  TRUE
+
+backend TestBack
+   server  LocalSrv ${h1_fe2_addr}:${h1_fe2_port}
+
+listen fileloader
+   mode http
+   bind "fd@${fe2}"
+   http-request use-service lua.fileloader-http01
+} -start
+
+shell {
+HOST=${h1_fe1_addr}
+if [ "${h1_fe1_addr}" = "::1" ] ; then
+HOST="\[::1\]"
+fi
+
+md5=$(which md5 || which md5sum)
+
+if [ -z $md5 ] ; then
+echo "MD5 checksum utility not found"
+exit 1
+fi
+
+expectchecksum="4d9c62aa5370b8d5f84f17ec2e78f483"
+
+for opt in "" "--limit-rate 300K" "--limit-rate 500K" ; do
+checksum=$(curl --max-time 15 --compressed -k "https://$HOST:${h1_fe1_port}" $opt | $md5 | cut -d ' ' -f1)
+if [ "$checksum" != "$expectchecksum" ] ; then
+  echo "Expecting checksum $expectchecksum"
+  echo "Received checksum: $checksum"
+  exit 1;
+fi
+done
+
+} -run
-- 
2.11.0


Re: regtests - with option http-use-htx

2019-01-08 Thread Frederic Lecaille

On 1/8/19 9:05 PM, PiBa-NL wrote:

Hi Frederic,

Op 8-1-2019 om 16:27 schreef Frederic Lecaille:

On 12/15/18 4:52 PM, PiBa-NL wrote:

Hi List, Willy,

Trying to run some existing regtests with added option: option 
http-use-htx


Using: HA-Proxy version 1.9-dev10-c11ec4a 2018/12/15

I get the below issues sofar:

 based on /reg-tests/connection/b0.vtc
Takes 8 seconds to pass, in a slightly modified manner (1.1 > 2.0 
expectation for syslog). This surely needs a closer look?

#    top  TEST ./htx-test/connection-b0.vtc passed (8.490)

 based on /reg-tests/stick-table/b1.vtc
Difference here is the use=1 vs use=0 , maybe that is better, but 
then the 'old' expectation seems wrong, and the bug is the case 
without htx?


Note that the server s1 never responds.

Furthermore, c1 client is run with -run argument.
This means that we wait for its termination before accessing the CLI.
Then we check that there is no consistency issue with the stick-table:

if the entry has expired we get only this line:

    table: http1, type: ip, size:1024, used:0

if not we get these two lines:

    table: http1, type: ip, size:1024, used:1
    .*    use=0 ...

here used=1 means there is still an entry in the stick-table, and 
use=0 means it is not currently in use (I guess this is because the 
client has closed its connection).


I do not reproduce your issue with this script both on Linux and 
FreeBSD 11 both with or without htx.
Did you try with the 'old' development version (1.9-dev10-c11ec4a 
2018/12/15)? I think in the current version it's already fixed, see my 
own test results also below.


No. I have not tested with the previous dev versions.
I wanted to clarify the logic behind the test. I agree perhaps there are
not enough comments in this script.

 h1    0.0 CLI recv|0x8026612c0: key=127.0.0.1 use=1 exp=0 gpt0=0 
gpc0=0 gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 
http_req_rate(1)=1 http_err_cnt=0 http_err_rate(1)=0

 h1    0.0 CLI recv|
 h1    0.0 CLI expect failed ~ "table: http1, type: ip, 
size:1024, used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* 
gpt0=0 gpc0=0 gpc0_rate\(1\)=0 conn_rate\(1\)=1 
http_req_cnt=1 http_req_rate\(1\)=1 http_err_cnt=0 
http_err_rate\(1\)=0)\n$"


Regards,

PiBa-NL (Pieter)



I tried again today with both 2.0-dev0-251a6b7 and 1.9.0-8223050 and 
1.9-dev10-c11ec4a :


HA-Proxy version 2.0-dev0-251a6b7 2019/01/08 - https://haproxy.org/
## Without HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.146)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.127)
0 tests failed, 0 tests skipped, 2 tests passed
## With HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.147)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.127)
0 tests failed, 0 tests skipped, 2 tests passed


HA-Proxy version 1.9.0-8223050 2018/12/19 - https://haproxy.org/
## Without HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.150)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.128)
0 tests failed, 0 tests skipped, 2 tests passed
## With HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.148)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.127)
0 tests failed, 0 tests skipped, 2 tests passed



Ok.



HA-Proxy version 1.9-dev10-c11ec4a 2018/12/15
Copyright 2000-2018 Willy Tarreau 
## Without HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.146)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.127)
0 tests failed, 0 tests skipped, 2 tests passed
## With HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (8.646)
*    top   0.0 TEST ./PB-TEST/2018/stick-table-b1.vtc starting
 h1    0.0 CLI recv|# table: http1, type: ip, size:1024, used:1
 h1    0.0 CLI recv|0x80262a200: key=127.0.0.1 use=1 exp=0 gpt0=0 
gpc0=0 gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 
http_req_rate(1)=1 http_err_cnt=0 http_err_rate(1)=0

 h1    0.0 CLI recv|
 h1    0.0 CLI expect failed ~ "table: http1, type: ip, size:1024, 
used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* gpt0=0 gpc0=0 
gpc0_rate\(1\)=0 conn_rate\(1\)=1 http_req_cnt=1 
http_req_rate\(1\)=1 http_err_cnt=0 http_err_rate\(1\)=0)\n$"

*    top   0.0 RESETTING after ./PB-TEST/2018/stick-table-b1.vtc
**   h1    0.0 Reset and free h1 haproxy 92940
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc FAILED (0.127) exit=2
1 tests failed, 0 tests skipped, 1 tests passed

With the 'old' 1.9-dev10 version and with HTX I can still reproduce the 
"passed (8.646)" and "use=1".. But both 1.9.0 and 2.0-dev don't show 
that behavior. I have not 'bisected' further, but I don't think there is 
anything to do a.t.m. regarding this old (already fixed) issue.


Good news.

Thanks Pieter.


Regards,

PiBa-NL (Pieter)




Regards.

Fred



Re: regtests - with option http-use-htx

2019-01-08 Thread PiBa-NL

Hi Frederic,

Op 8-1-2019 om 16:27 schreef Frederic Lecaille:

On 12/15/18 4:52 PM, PiBa-NL wrote:

Hi List, Willy,

Trying to run some existing regtests with added option: option 
http-use-htx


Using: HA-Proxy version 1.9-dev10-c11ec4a 2018/12/15

I get the below issues sofar:

 based on /reg-tests/connection/b0.vtc
Takes 8 seconds to pass, in a slightly modified manner (1.1 > 2.0 
expectation for syslog). This surely needs a closer look?

#    top  TEST ./htx-test/connection-b0.vtc passed (8.490)

 based on /reg-tests/stick-table/b1.vtc
Difference here is the use=1 vs use=0 , maybe that is better, but 
then the 'old' expectation seems wrong, and the bug is the case 
without htx?


Note that the server s1 never responds.

Furthermore, c1 client is run with -run argument.
This means that we wait for its termination before accessing the CLI.
Then we check that there is no consistency issue with the stick-table:

if the entry has expired we get only this line:

    table: http1, type: ip, size:1024, used:0

if not we get these two lines:

    table: http1, type: ip, size:1024, used:1
    .*    use=0 ...

here used=1 means there is still an entry in the stick-table, and 
use=0 means it is not currently in use (I guess this is because the 
client has closed its connection).


I do not reproduce your issue with this script both on Linux and 
FreeBSD 11 both with or without htx.
Did you try with the 'old' development version (1.9-dev10-c11ec4a 
2018/12/15)? I think in the current version it's already fixed, see my 
own test results also below.
 h1    0.0 CLI recv|0x8026612c0: key=127.0.0.1 use=1 exp=0 gpt0=0 
gpc0=0 gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 
http_req_rate(1)=1 http_err_cnt=0 http_err_rate(1)=0

 h1    0.0 CLI recv|
 h1    0.0 CLI expect failed ~ "table: http1, type: ip, 
size:1024, used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* 
gpt0=0 gpc0=0 gpc0_rate\(1\)=0 conn_rate\(1\)=1 
http_req_cnt=1 http_req_rate\(1\)=1 http_err_cnt=0 
http_err_rate\(1\)=0)\n$"


Regards,

PiBa-NL (Pieter)



I tried again today with both 2.0-dev0-251a6b7 and 1.9.0-8223050 and  
1.9-dev10-c11ec4a :


HA-Proxy version 2.0-dev0-251a6b7 2019/01/08 - https://haproxy.org/
## Without HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.146)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.127)
0 tests failed, 0 tests skipped, 2 tests passed
## With HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.147)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.127)
0 tests failed, 0 tests skipped, 2 tests passed


HA-Proxy version 1.9.0-8223050 2018/12/19 - https://haproxy.org/
## Without HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.150)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.128)
0 tests failed, 0 tests skipped, 2 tests passed
## With HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.148)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.127)
0 tests failed, 0 tests skipped, 2 tests passed


HA-Proxy version 1.9-dev10-c11ec4a 2018/12/15
Copyright 2000-2018 Willy Tarreau 
## Without HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.146)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.127)
0 tests failed, 0 tests skipped, 2 tests passed
## With HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (8.646)
*    top   0.0 TEST ./PB-TEST/2018/stick-table-b1.vtc starting
 h1    0.0 CLI recv|# table: http1, type: ip, size:1024, used:1
 h1    0.0 CLI recv|0x80262a200: key=127.0.0.1 use=1 exp=0 gpt0=0 
gpc0=0 gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 
http_req_rate(1)=1 http_err_cnt=0 http_err_rate(1)=0

 h1    0.0 CLI recv|
 h1    0.0 CLI expect failed ~ "table: http1, type: ip, size:1024, 
used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* gpt0=0 gpc0=0 
gpc0_rate\(1\)=0 conn_rate\(1\)=1 http_req_cnt=1 
http_req_rate\(1\)=1 http_err_cnt=0 http_err_rate\(1\)=0)\n$"

*    top   0.0 RESETTING after ./PB-TEST/2018/stick-table-b1.vtc
**   h1    0.0 Reset and free h1 haproxy 92940
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc FAILED (0.127) exit=2
1 tests failed, 0 tests skipped, 1 tests passed

With the 'old' 1.9-dev10 version and with HTX I can still reproduce the 
"passed (8.646)" and "use=1".. But both 1.9.0 and 2.0-dev don't show 
that behavior. I have not 'bisected' further, but I don't think there is 
anything to do a.t.m. regarding this old (already fixed) issue.


Regards,

PiBa-NL (Pieter)




Re: haproxy issue tracker discussion

2019-01-08 Thread Tim Düsterhus
Willy,

Am 08.01.19 um 18:30 schrieb Willy Tarreau:
> I totally agree. This is the tool I'm missing the most currently. I'm
> not aware of a *good* and manageable issue tracker. Having a status for
> a bug per branch most likely eliminates most of them...

I'm not sure this is required. The bugfixes naturally land in the
current development repository and have the affected branches in their
commit message. They naturally "trickle down" to the maintained
branches. So if the issue is marked as fixed the fix will *eventually*
appear in the stable branch of the reporter.

The "first responder" will try to reproduce the issue both in the
current dev version (to see if it appears in bleeding edge) as well as the
most recent version of the branch the reporter uses (in case it was
accidentally fixed during a refactor). If possible the first responder
creates a reg-test showing the issue.

> I'm really not convinced it would change anything. We've put prominently
> on the previous repo an indication that it was not the official project
> and that issues / PRs were not accepted, despite this your automatic bot
> was needed to close them. My goal is contributors efficiency. Several of
> us are on the critical path and every minute they waste slows someone
> else down. So if it becomes more annoying for everyone to create a new
> issue just for the purpose of discouraging wanderers from trying to
> create them, it will only have a globally negative effect.

In my experience non-issues (a.k.a. RTFM) are usually easy to detect
when reading through the explanation.

>>
>> What you are describing basically is an unmaintained issue tracker,
>> which is of course useless.
>>
>> But keeping people from filing new issues is the wrong approach to
>> this, imho. Proper templating, triaging and correct labeling of the
>> issues is the difference between a useless and a useful issue tracker
>> from my point of view.
> 
> I do have strong doubts but I'm open to trying :-)

I have to agree with Lukas here. Proper triaging is key here. And this
is no different to email. You have to read email to decide what to do.
And you have to read the issue to decide what to do.

Labels in the issue tracker are like folders in your mailbox and the
issue tracker itself is like a public mailbox.

>> I'm in favor of the former, because I believe triage will be required
>> for both and I don't think the amount of bogus issues will be
>> unmanageable. I'd also volunteer to triage incoming issues - not that
>> it's much different from what's currently done on discourse anyway - as
>> Tim already said.
> 
> Oh yes! I just want to be sure we don't burn you out! This is also why
> I'm open to trying what those doing the work propose. If I'm imposing
> a painful or inefficient process for those doing the work, it will not
> work either.

I'd throw my hat into the ring as well. I maintain a few Open Source
projects myself (though not of the size and importance of haproxy) and
actually use the GitHub issue tracker.

>> Regarding what tool to use, in the "open to everyone" case I'd opt for
>> github simply because it has the lowest barrier of entry for everyone,
>> as well as me being mildly familiar with it personally. Pre-existing
>> github labels would have to be replaced by useful ones and a issue
>> template would have to be created.

I would suggest using GitHub's issue tracker as well. It works fairly
well in my experience.

> I'd say that we managed to close the issues there when we discovered
> them. It took a huge amount of efforts, sure, but eventually it was
> done. So if we figure later that it was a wrong choice, we know it can
> be done again. In this regard I think it's the lowest cost to *try*.
> 
> However I'd like that we spend some time thinking about what can be
> done to properly route the bugs to the various branches. Bugs in dev
> are useless, we want them to be fixed in stable branches. So we have

See the very top of my email.

> Thanks a lot for these links, they are very informative. This utility
> doesn't "yet" support cross-branches, but it's said that bugseverywhere
> does. But bugseverywhere claims that its value is in keeping a hard
> relation between bug's state and the fix in the tree, which precisely
> is the wrong way to do it for me : I strongly prefer that the bug tracker
> is never fully up to date regarding the resolved status for a branch than
> having it believe the branch is fixed because it doesn't know that this
> branch requires an extra patch. An apparently unresolved bug will be
> noticed by the next person trying to resolve it. It's how it really
> works with most developers and bug trackers in real life. I had too
> quick a look at the other ones for now.

Please don't opt for some obscure system. The biggest value of a public
issue tracker to me is that people are actually able to research
whether their issue is already known / a duplicate, and possible
workarounds. The mail archive is not really accessible.


Re: haproxy issue tracker discussion

2019-01-08 Thread Willy Tarreau
On Sun, Jan 06, 2019 at 07:41:08PM +0300, Alexey Elymanov wrote:
> Ansible, for example (https://github.com/ansible/ansible/issues), uses some
> advanced automation and templates to manage their enormous issue stream.
> Issues are checked against conforming rules/tests/codestyle checks or, for
> example, automatically closed due to inactivity.
> I believe most if not all solutions they use are open source.

Thank you for this link Alexey, it definitely brings some value to this
discussion!

Willy



Re: haproxy issue tracker discussion

2019-01-08 Thread Willy Tarreau
Hi guys,

sorry for the long delay, it was not the best week for me to restart
all of this discussion, but now it's OK, I'm catching up!

On Sun, Jan 06, 2019 at 05:29:43PM +0100, Lukas Tribus wrote:
> Hello everyone,
> 
> 
> as per Tim's suggestion I'm restarting the discussion about the issue
> tracker, started in "haproxy 1.9 status update" (2018-05-25),
> Message-ID 20180525161044.ga6...@1wt.eu:
> https://www.mail-archive.com/haproxy@formilux.org/msg30139.html
> 
> 
> > It would be nice to show what's pending or being worked on, and
> > to sometimes add extra info regarding a given task.
> 
> Yes, we are in need of an issue tracker, not only to handle open bugs,
> but even more so to handle feature/change requests that often need
> more time. Those do get lost on the mailing list, even when everyone
> already agreed it's needed.

I totally agree. This is the tool I'm missing the most currently. I'm
not aware of a *good* and manageable issue tracker. Having a status for
a bug per branch most likely eliminates most of them...

> > The problem we've faced in the past with GitHub's issue tracker
> > is that it's impossible to restrict the creation of new issues to
> > participants. Given that haproxy is a complex component, the boundary
> > between human error, misunderstanding and bug is extremely thin.
> 
> It is, but I don't like restricting the creation of new issues to 
> participants.
> 
> Issue templates need to be clear that the issue tracker is not a
> support forum.

I'm really not convinced it would change anything. We've put prominently
on the previous repo an indication that it was not the official project
and that issues / PRs were not accepted, despite this your automatic bot
was needed to close them. My goal is contributors' efficiency. Several of
us are on the critical path and every minute they waste slows someone
else down. So if it becomes more annoying for everyone to create a new
issue just for the purpose of discouraging wanderers from trying to
create them, it will only have a globally negative effect.

Based on your experience dealing with all these reports on discourse,
what is the approximate ratio between valid and invalid reports ?

> Triaging new issues will be needed anyway, and that
> also includes closing misdirected support request.

Sure, and any support / issue reporting chain requires this since you
only know what it was once the issue is fixed (and sometimes even after
it's fixed you discover new impacts).

> To be clear: I think developers should not receive an email
> notifications for new or untriaged issues - only when specifically
> assigned to the issue or if they commented previously on it. Instead,
> maybe an automated weekly report or something like that (similar to
> what we have now for the stable-queue) could go out to the mailing
> list with a summary of the open issues, grouped by it's labels.

Maybe. But right now issues reported here are dealt with because all
of us presume that "someone will surely look at this one", and if we
see a long unresponded e-mail, we try to assign more time to it (and
sometimes we miss it, of course). And what is nice is that actually a
large number of people respond, some with links to a similar previous
report, others suggesting to adapt the config to narrow the issue down,
etc. So there is a huge value in having the community participate to
issue resolution and not just some developers. The problem with batches
is that when you get 20 new bugs at once that you hadn't had the time
to look at previously, well, you don't know where to start and you
simply prefer to pretend you didn't see them. So possibly having
the triaged issues forwarded here in real time with a visible tag
would maintain the same level of participation with a higher accuracy
and less losses.

> > It resulted in the issue tracker being filled with wrong bugs, 100% of
> > which were in fact requests for help. It makes the utility totally
> > useless for development and bug fixing as it requires more time to
> > maintain in a clean state than it takes to put issues in a mailbox.
> 
> What you are describing basically is an unmaintained issue tracker,
> which is of course useless.
> 
> But keeping people from filing new issues is the wrong approach to
> this, imho. Proper templating, triaging and correct labeling of the
> issues is the difference between a useless and a useful issue tracker
> from my point of view.

I do have strong doubts but I'm open to trying :-)

> So I guess we ultimately have to decide between:
> 
>   - an issue tracker open to everyone, however requiring some
> volunteers to triage incoming bugs (and close invalid ones)
>   - an issue tracker that is open to "previous participants", with the
> expectation to require less manpower for triage
> 
> 
> I'm in favor of the former, because I believe triage will be required
> for both and I don't think the amount of bogus issues will be
> unmanageable. I'd also volunteer to triage in

Re: regtests - with option http-use-htx

2019-01-08 Thread Frederic Lecaille

On 12/15/18 4:52 PM, PiBa-NL wrote:

Hi List, Willy,

Trying to run some existing regtests with added option: option http-use-htx

Using: HA-Proxy version 1.9-dev10-c11ec4a 2018/12/15

I get the below issues sofar:

 based on /reg-tests/connection/b0.vtc
Takes 8 seconds to pass, in a slightly modified manner (1.1 > 2.0 
expectation for syslog). This surely needs a closer look?

#    top  TEST ./htx-test/connection-b0.vtc passed (8.490)

 based on /reg-tests/stick-table/b1.vtc
Difference here is the use=1 vs use=0 , maybe that is better, but then 
the 'old' expectation seems wrong, and the bug is the case without htx?


Note that the server s1 never responds.

Furthermore, c1 client is run with -run argument.
This means that we wait for its termination before accessing the CLI.
Then we check that there is no consistency issue with the stick-table:

if the entry has expired we get only this line:

table: http1, type: ip, size:1024, used:0

if not we get these two lines:

table: http1, type: ip, size:1024, used:1
.*use=0 ...

here used=1 means there is still an entry in the stick-table, and use=0 
means it is not currently in use (I guess this is because the client has 
closed its connection).


I do not reproduce your issue with this script both on Linux and FreeBSD 
11 both with or without htx.




 h1    0.0 CLI recv|0x8026612c0: key=127.0.0.1 use=1 exp=0 gpt0=0 
gpc0=0 gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 
http_req_rate(1)=1 http_err_cnt=0 http_err_rate(1)=0

 h1    0.0 CLI recv|
 h1    0.0 CLI expect failed ~ "table: http1, type: ip, size:1024, 
used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* gpt0=0 gpc0=0 
gpc0_rate\(1\)=0 conn_rate\(1\)=1 http_req_cnt=1 
http_req_rate\(1\)=1 http_err_cnt=0 http_err_rate\(1\)=0)\n$"


Regards,

PiBa-NL (Pieter)






Re: State of 0-RTT TLS resumption with OpenSSL

2019-01-08 Thread Willy Tarreau
On Tue, Jan 08, 2019 at 03:27:58PM +0100, Olivier Houchard wrote:
> On Tue, Jan 08, 2019 at 03:00:32PM +0100, Janusz Dziemidowicz wrote:
> > pt., 4 sty 2019 o 11:59 Olivier Houchard  
> > napisał(a):
> > However, I believe in general this is a bit more complicated. RFC 8446
> > described this in detail in section 8:
> > https://tools.ietf.org/html/rfc8446#section-8
> > My understanding is that RFC highly recommends anti-replay with 0-RTT.
> > It seems that s_server implements single use tickets, which is exactly
> > what is in section 8.1. The above patch disables anti-replay
> > completely in haproxy, which might warrant some updates to
> > documentation about allow-0rtt option?
> > 
> 
> Hi Janusz,
> 
> Yes indeed, I thought I documented it better than that, but obviously I
> didn't :)
> The allow-0rtt option was added before OpenSSL added anti-replay protection,
> and I'm pretty sure the RFC wasn't talking about it yet, it was mostly saying
> it was a security concern, so it was designed with "only allow it for what
> would be safe to replay", and the documentation should certainly reflect that.
> I will make it explicit.
> 
> Thanks a lot !

To clear some of the confusion on this subject, when 0-RTT was designed
in TLS 1.3, it was sort of a backport from QUIC, and it was decided that
the replay protection was up to the application layer (here "application"
is not in the sense of the web application you're using, but the HTTP layer
on top of TLS). And RFC8470 (https://tools.ietf.org/html/rfc8470), which
haproxy implements, was produced exactly to address this specific area.

Thus an HTTP implementation which properly implements RFC8470 does have
TLS anti-replay covered by the application layer.
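For illustration, a minimal configuration sketch of how this fits
together (the bind address, certificate path and ACL policy below are
placeholders, not taken from this thread): early data is accepted on the
listener, and anything that is not safe to replay is held until the
handshake completes:

listen fe_tls13
    mode http
    # accept TLS 1.3 early data on this listener
    bind :443 ssl crt /etc/haproxy/site.pem allow-0rtt alpn h2,http/1.1
    # hold non-idempotent requests until the full handshake is done,
    # so replayed early data cannot trigger unsafe side effects
    http-request wait-for-handshake if ! METH_GET
    server app1 192.0.2.10:80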

Hoping this helps,
Willy



Re: haproxy reload terminated with master/worker

2019-01-08 Thread Emmanuel Hocdet


> Le 8 janv. 2019 à 15:02, William Lallemand  a écrit :
> 
> On Tue, Jan 08, 2019 at 02:03:22PM +0100, Tim Düsterhus wrote:
>> Emmanuel,
>> 
>> Am 08.01.19 um 13:53 schrieb Emmanuel Hocdet:
>>> Without master/worker, haproxy reload works with active waiting (haproxy 
>>> exec).
>>> With master/worker, kill -USR2 returns immediately: is there a way to know 
>>> when the reload is finished?
>>> 
>> 
>> Are you using systemd with -Ws? haproxy informs systemd when a reload
>> starts and when it is finished using the sd_notify protocol:
>> 
>> https://www.freedesktop.org/software/systemd/man/sd_notify.html#RELOADING=1
>> 
>> Best regards
>> Tim Düsterhus
>> 
> 
> Hi guys,
> 
> Indeed you can't do that without systemd, haproxy sends a READY=1 variable with
> sd_notify once the processes are ready.
> 
> Unfortunately sd_notify has some limitations, there is no sd_notify variable
> which permits notifying that a reload failed, but you could read the errors in
> the service logs.
> 
> I have some plans for the master worker in the future, to have a synchronous
> state of the reload on the CLI, but this is not possible with the current
> architecture. We will probably need to stop reexecuting the process to achieve
> that and do a reload over the master CLI with a return code.
> 
> -- 
> William Lallemand

Thanks for quick responses.
I do not use systemd, and even so, it would be a shame to depend on systemd 
for that.

Yes, a return code when the reload fails is one of the use cases for an 
actively-waiting reload (reading the errors in the service logs is not 
really script-friendly).

Reload over the master CLI with a return code seems to be a good solution.
Waiting for the feature ;-)

++
Manu





Re: State of 0-RTT TLS resumption with OpenSSL

2019-01-08 Thread Olivier Houchard
On Tue, Jan 08, 2019 at 03:00:32PM +0100, Janusz Dziemidowicz wrote:
> pt., 4 sty 2019 o 11:59 Olivier Houchard  napisał(a):
> > I understand the concern.
> > I checked and both nghttp2 and nginx disable the replay protection. The idea
> > is you're supposed to allow early data only on harmless requests anyway, ie
> > ones that could be replayed with no consequence.
> 
> Sorry for the late reply, I was pondering the problem ;) I'm pretty ok
> with this patch, especially since others seem to do the same. And my
> use case is DNS-over-TLS, which has no problems with replays anyway ;)
> 
> However, I believe in general this is a bit more complicated. RFC 8446
> described this in detail in section 8:
> https://tools.ietf.org/html/rfc8446#section-8
> My understanding is that RFC highly recommends anti-replay with 0-RTT.
> It seems that s_server implements single use tickets, which is exactly
> what is in section 8.1. The above patch disables anti-replay
> completely in haproxy, which might warrant some updates to
> documentation about allow-0rtt option?
> 

Hi Janusz,

Yes indeed, I thought I documented it better than that, but obviously I
didn't :)
The allow-0rtt option was added before OpenSSL added anti-replay protection,
and I'm pretty sure the RFC wasn't talking about it yet, it was mostly saying
it was a security concern, so it was designed with "only allow it for what
would be safe to replay", and the documentation should certainly reflect that.
I will make it explicit.

Thanks a lot !

Olivier



[ANNOUNCE] haproxy-1.8.17

2019-01-08 Thread Willy Tarreau
Hi,

HAProxy 1.8.17 was released on 2019/01/08. It added 12 new commits
after version 1.8.16.

One of them fixes a security issue discovered by Tim Düsterhus
(CVE-2018-20615) :

   BUG/CRITICAL: mux-h2: re-check the frame length when PRIORITY is used

An incorrect frame length check is performed on HEADERS frame having the
PRIORITY flag, possibly resulting in a read-past-bound which can cause a
crash depending how the frame is crafted. All 1.9 and 1.8 versions are
affected. As a result, all HTTP/2 users must either upgrade or temporarily
disable HTTP/2 by commenting the "npn h2" and "alpn h2" statements on their
related "bind" lines.

Another issue which is very hard to trigger in 1.8 is a lack of timeout
for certain tasks when running from applets. Since 1.8's cache only
supports small objects, it cannot trigger it. In theory extra large stats
page could trigger them but the fact that output contents almost never
end on a buffer boundary makes this very unlikely as well. So I guess
nobody has ever faced it before we had support for large objects in the
1.9 cache.

The rest is pretty minor.

Please do not forget to update!

Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Discourse: http://discourse.haproxy.org/
   Sources  : http://www.haproxy.org/download/1.8/src/
   Git repository   : http://git.haproxy.org/git/haproxy-1.8.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-1.8.git
   Changelog: http://www.haproxy.org/download/1.8/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy
---
Complete changelog :
Christopher Faulet (2):
  BUG/MAJOR: stream-int: Update the stream expiration date in 
stream_int_notify()
  BUG/MINOR: lua: Return an error if a legacy HTTP applet doesn't send 
anything

Olivier Houchard (1):
  BUG/MEDIUM: server: Also copy "check-sni" for server templates.

Thierry FOURNIER (2):
  BUG/MINOR: lua: bad args are returned for Lua actions
  BUG/MEDIUM: lua: dead lock when Lua tasks are trigerred

Willy Tarreau (7):
  MINOR: mux-h2: only increase the connection window with the first update
  BUG/MEDIUM: mux-h2: mark that we have too many CS once we have more than 
the max
  MINOR: lb: allow redispatch when using consistent hash
  MINOR: stream/cli: fix the location of the waiting flag in "show sess all"
  MINOR: stream/cli: report more info about the HTTP messages on "show sess 
all"
  BUG/MEDIUM: cli: make "show sess" really thread-safe
  BUG/CRITICAL: mux-h2: re-check the frame length when PRIORITY is used

---



[ANNOUNCE] haproxy-1.9.1

2019-01-08 Thread Willy Tarreau
Hi,

HAProxy 1.9.1 was released on 2019/01/08. It added 90 new commits
after version 1.9.0.

One of them fixes a security issue discovered by Tim Düsterhus
(CVE-2018-20615) :

   BUG/CRITICAL: mux-h2: re-check the frame length when PRIORITY is used

An incorrect frame length check is performed on HEADERS frame having the
PRIORITY flag, possibly resulting in a read-past-bound which can cause a
crash depending how the frame is crafted. All 1.9 and 1.8 versions are
affected. As a result, all HTTP/2 users must either upgrade or temporarily
disable HTTP/2 by commenting the "npn h2" and "alpn h2" statements on their
related "bind" lines.

This version also collects a number of significant bug fixes that were
reported since the release, among which :
  - risk of crashes when using HTTP reuse with more than 5 servers for
a given session ;
  - occasional zombie connections when objects retrieved from the cache
were compressed during delivery ;
  - some chunked-encoding inconsistencies between H1 on one side and H2
on the other one in HTX mode ;
  - a few other HTX issues I honestly don't remember in details
  - a small number of lost event issues affecting the H1 and H2 muxes,
possibly resulting in occasional timeouts and/or zombie connections

Lukas' update to redispatch connection failures when using consistent
hash was merged as well; even though it was not really a bug, it was at
least a counter-intuitive behaviour.

An annoying limitation was also reported and addressed : health checks
currently cannot use the H2 mux to send HTTP requests to H2 servers, but
since the ALPN string is set per server, it wasn't possible to force
these checks to at least rely on HTTPS instead. A new "check-alpn"
directive was added to allow specifying the ALPN string to advertise for
checks, addressing this.
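As an illustration, a sketch of what such a server line could look like
(the address and ALPN strings are placeholders): traffic still negotiates
h2 while the health check explicitly advertises HTTP/1.1:

    backend be_h2
        server srv1 192.0.2.10:443 ssl verify none alpn h2 check check-alpn http/1.1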

A number of updates were merged to the regression testing suite since it
helps us a lot to reproduce bugs and improve reliability.

What's nice is that www.haproxy.org has been running on this code since
the release with only very minor glitches (a few tens of zombie connections
a week due to the compression+cache issue etc) and doesn't show any sign of
trouble anymore after these fixes.

I intend to issue 1.9.2 soon (possibly next week) with a small bunch of
additional minor fixes that I didn't want to mix with this version. In
addition I managed to implement the long-missing support for H2
CONTINUATION frames and trailers which are sufficiently low risk to be
backported. Thanks to these, h2spec now reports zero error, and gRPC
works out of the box through HAProxy :-)  Thus unless someone steps up
with a good objection to these being backported into 1.9, we'll do it.

Anyway, please don't forget to update!

Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Discourse: http://discourse.haproxy.org/
   Slack channel: https://slack.haproxy.org/
   Sources  : http://www.haproxy.org/download/1.9/src/
   Git repository   : http://git.haproxy.org/git/haproxy-1.9.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-1.9.git
   Changelog: http://www.haproxy.org/download/1.9/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy
---
Complete changelog :
Alex Zorin (1):
  MINOR: payload: add sample fetch for TLS ALPN

Christopher Faulet (36):
  BUG/MAJOR: stream-int: Update the stream expiration date in 
stream_int_notify()
  BUG/MINOR: compression/htx: Don't compress responses with unknown body 
length
  BUG/MINOR: compression/htx: Don't add the last block of data if it is 
empty
  MINOR: channel: Add the function channel_add_input
  MINOR: stats/htx: Call channel_add_input instead of updating channel 
state by hand
  BUG/MEDIUM: cache: Be sure to end the forwarding when XFER length is 
unknown
  BUG/MAJOR: htx: Return the good block address after a defrag
  BUG/MEDIUM: mux-h1: Add a task to handle connection timeouts
  BUG/MEDIUM: proto-htx: Set SI_FL_NOHALF on server side when request is 
done
  REGTEST: Require the option LUA to run lua tests
  REGTEST: script: Process script arguments before everything else
  REGTEST: script: Evaluate the varnishtest command to allow quoted 
parameters
  REGTEST: script: Add the option --clean to remove previous log direcotries
  REGTEST: script: Add the option --debug to show logs on standard ouput
  REGTEST: script: Add the option --keep-logs to keep all log directories
  REGTEST: script: Add the option --use-htx to enable the HTX in regtests
  REGTEST: script: Print only errors in the results report
  REGTEST: Add option to use HTX prefixed by the macro 'no-htx'
  REGTEST: script: Add support of alternatives in requited options list
  REGTEST: Add a basic test for the compression
  BUG/MINOR: cache/htx: Be sure to count partial trailers
  MINOR: stream/htx: Add info about the HTX structs in "show

Re: haproxy reload terminated with master/worker

2019-01-08 Thread William Lallemand
On Tue, Jan 08, 2019 at 02:03:22PM +0100, Tim Düsterhus wrote:
> Emmanuel,
> 
> Am 08.01.19 um 13:53 schrieb Emmanuel Hocdet:
> > Without master/worker, haproxy reload works with active waiting (haproxy 
> > exec).
> > With master/worker, kill -USR2 returns immediately: is there a way to know 
> > when the reload is finished?
> > 
> 
> Are you using systemd with -Ws? haproxy informs systemd when a reload
> starts and when it is finished using the sd_notify protocol:
> 
> https://www.freedesktop.org/software/systemd/man/sd_notify.html#RELOADING=1
> 
> Best regards
> Tim Düsterhus
> 

Hi guys,

Indeed you can't do that without systemd, haproxy sends a READY=1 variable with
sd_notify once the processes are ready.

Unfortunately sd_notify has some limitations, there is no sd_notify variable
which permits notifying that a reload failed, but you could read the errors in
the service logs.

I have some plans for the master worker in the future, to have a synchronous
state of the reload on the CLI, but this is not possible with the current
architecture. We will probably need to stop reexecuting the process to achieve
that and do a reload over the master CLI with a return code.

-- 
William Lallemand



Re: State of 0-RTT TLS resumption with OpenSSL

2019-01-08 Thread Janusz Dziemidowicz
On Fri, Jan 4, 2019 at 11:59, Olivier Houchard wrote:
> I understand the concern.
> I checked and both nghttp2 and nginx disable the replay protection. The idea
> is that you're supposed to allow early data only on harmless requests anyway,
> i.e. ones that could be replayed with no consequence.

Sorry for the late reply, I was pondering the problem ;) I'm pretty OK with
this patch, especially since others seem to do the same. And my use case is
DNS-over-TLS, which has no problems with replays anyway ;)

However, I believe that in general this is a bit more complicated. RFC 8446
describes this in detail in section 8:
https://tools.ietf.org/html/rfc8446#section-8
My understanding is that the RFC highly recommends anti-replay with 0-RTT.
It seems that s_server implements single-use tickets, which is exactly what
is described in section 8.1. The above patch disables anti-replay completely
in haproxy, which might warrant some updates to the documentation about the
allow-0rtt option.
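
If the documentation is updated, here is a sketch of what it could suggest,
using the existing "http-request wait-for-handshake" action (bind line, paths
and ACL are illustrative):

    frontend fe_tls
        bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1 allow-0rtt
        # Early data can be replayed: accept it only for requests which are
        # supposed to be idempotent, and make all others wait for the full
        # handshake to complete.
        http-request wait-for-handshake if ! METH_GET

That would at least keep the replay window limited to requests that should be
safe to replay anyway.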

-- 
Janusz Dziemidowicz



Re: haproxy reload terminated with master/worker

2019-01-08 Thread Tim Düsterhus
Emmanuel,

On 08.01.19 at 13:53, Emmanuel Hocdet wrote:
> Without master/worker, a haproxy reload works with active waiting (haproxy
> exec).
> With master/worker, kill -USR2 returns immediately: is there a way to know
> when the reload is finished?
> 

Are you using systemd with -Ws? haproxy informs systemd when a reload
starts and when it is finished using the sd_notify protocol:

https://www.freedesktop.org/software/systemd/man/sd_notify.html#RELOADING=1

Best regards
Tim Düsterhus



Important update in about one hour

2019-01-08 Thread Willy Tarreau
Hi all,

Tim found a possible remote crash in the H2 code which requires a quick
release for 1.8 and 1.9. I've already backported the patch, and I'm preparing
the new releases (1.9.1 and 1.8.17) that will be issued in around one hour
(leaving some time for the US to wake up). Distros were already notified so
that there will be a coordinated release of the fix.

If for any reason you can't upgrade, you'll simply have to temporarily
disable H2 in your config by removing "npn h2 alpn h2" from your bind
lines.
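
For example, on a typical TLS bind line (certificate path illustrative):

    bind :443 ssl crt /etc/haproxy/site.pem npn h2 alpn h2,http/1.1   # H2 offered
    bind :443 ssl crt /etc/haproxy/site.pem alpn http/1.1             # H2 disabled

Clients will then silently fall back to HTTP/1.x until you can upgrade.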

More info soon.
Willy



haproxy reload terminated with master/worker

2019-01-08 Thread Emmanuel Hocdet
Hi,

Without master/worker, a haproxy reload works with active waiting (haproxy
exec).
With master/worker, kill -USR2 returns immediately: is there a way to know
when the reload is finished?

++
Manu




[PATCH 0/1] Be more verbose when reg tests fail.

2019-01-08 Thread flecaille
From: Frédéric Lécaille 

When a test fails, it may be useful to collect additional information coming
from varnishtest, especially when the latter aborts; this is what this patch
does.

For instance, without this patch, reg-tests/mailers/k_healthcheckmail.vtc
does not produce relevant information.

$ VARNISHTEST_PROGRAM=<...> make reg-tests \
    reg-tests/mailers/k_healthcheckmail.vtc

## test results in: ...
  

With this patch, the following lines are added to the test results output:

*diag  0.0 Assert error in vtc_log_emit(), vtc_log.c line 157:
*diag  0.0   Condition(vtclog_left > l) not true. (errno=0 Success)


In this case varnishtest aborts because its logging buffer is full.
We can make it use a 32MB buffer as follows:

$ VARNISHTEST_PROGRAM=<...> make reg-tests \
    reg-tests/mailers/k_healthcheckmail.vtc -- \
    --varnishtestparams "-b$((32<<20))"

with more relevant results:

## test results in: ...
 c27.1 EXPECT resp.http.mailsreceived (11858) == "16" failed

Frédéric Lécaille (1):
  REGTEST: Add some information to test results.

 scripts/run-regtests.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
2.11.0



Re: [PATCH 1/1] REGTEST: Add some information to test results.

2019-01-08 Thread Christopher Faulet

On 08/01/2019 at 11:30, flecai...@haproxy.com wrote:

From: Frédéric Lécaille 

When the reg tests fail, it may be useful to display additional information
coming from varnishtest, especially when the latter aborts.
In such cases, the test output may be made of lines prefixed by the "*diag"
string.
---
  scripts/run-regtests.sh | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/run-regtests.sh b/scripts/run-regtests.sh
index 80353de9..e648fa0a 100755
--- a/scripts/run-regtests.sh
+++ b/scripts/run-regtests.sh
@@ -454,7 +454,7 @@ if [ -d "${TESTDIR}" ]; then
  cat <<- EOF | tee -a "$TESTDIR/failedtests.log"
  $(echo "## $(cat "$i/INFO") ##")
  $(echo "## test results in: \"$i\"")
-$(grep -- ^ "$i/LOG")
+$(grep -E -- "^(|\*diag)" "$i/LOG")
  EOF
done' sh {} +
  fi



Thanks, now merged!

--
Christopher Faulet



Re: compression in defaults happens twice with 1.9.0

2019-01-08 Thread Christopher Faulet

On 07/01/2019 at 22:08, PiBa-NL wrote:

Hi Christopher,

On 7-1-2019 at 16:32, Christopher Faulet wrote:

On 06/01/2019 at 16:22, PiBa-NL wrote:

Hi List,

Using both 1.9.0 and 2.0-dev0-909b9d8, compression happens twice when it is
configured in the defaults section.
This was noticed by user walle303 on IRC.

This seems like a bug to me, as 1.8.14 does not show this behavior. Attached
is a little regtest that reproduces the issue.

Can someone take a look, thanks in advance.



Hi Pieter,

Here is the patch that should fix this issue. Could you confirm, please?


Works for me. Thanks!



Thanks, now merged!

--
Christopher Faulet



[PATCH 1/1] REGTEST: Add some information to test results.

2019-01-08 Thread flecaille
From: Frédéric Lécaille 

When the reg tests fail, it may be useful to display additional information
coming from varnishtest, especially when the latter aborts.
In such cases, the test output may be made of lines prefixed by the "*diag"
string.
---
 scripts/run-regtests.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/run-regtests.sh b/scripts/run-regtests.sh
index 80353de9..e648fa0a 100755
--- a/scripts/run-regtests.sh
+++ b/scripts/run-regtests.sh
@@ -454,7 +454,7 @@ if [ -d "${TESTDIR}" ]; then
 cat <<- EOF | tee -a "$TESTDIR/failedtests.log"
 $(echo "## $(cat "$i/INFO") ##")
 $(echo "## test results in: \"$i\"")
-$(grep -- ^ "$i/LOG")
+$(grep -E -- "^(|\*diag)" "$i/LOG")
 EOF
   done' sh {} +
 fi
-- 
2.11.0



Re: [PATCH] ssl certificates load speedup and dedup (pem/ctx)

2019-01-08 Thread Emmanuel Hocdet


Hi Emeric,

> On 7 Jan 2019 at 18:11, Emeric Brun wrote:
> 
> Hi Manu,
> 
> On 1/7/19 5:59 PM, Emmanuel Hocdet wrote:
>> It's better with patches…
>> 
>>> On 7 Jan 2019 at 17:57, Emmanuel Hocdet wrote:
>>> 
>>> Hi,
>>> 
>>> Following the first patch series (included).
>>> The goal is to deduplicate common certificates in memory and in shared pem 
>>> files.
>>> 
>>> PATCH 7/8 is only for boringssl (directive to dedup certificate in memory 
>>> for ctx)
>>> Last patch should be the more interesting:
>>> [PATCH 8/8] MINOR: ssl: add "issuer-path" directive.
>>> 
>>> Certificates loaded with "crt" and "crt-list" commonly share the same
>>> intermediate certificate in their PEM files. "issuer-path" is a global
>>> directive to share intermediate certificates from a directory. If the
>>> certificate chain is not included in a certificate's PEM file, haproxy
>>> will complete the chain if the issuer matches the first certificate of
>>> the chain stored via the "issuer-path" directive. Such chains will be
>>> shared in SSL shared memory.
>>> . The "issuer-path" directive can be set several times.
>>> . Only the sha1 key identifier is supported (rfc5280 4.2.1.2. (1))
>>> 
>>> If you want to test it, the patch series can be apply to haproxy-dev or 
>>> haproxy-1.9.
>>> 
>>> Feedbacks are welcome :)
>>> 
>>> ++
>>> Manu
>>> 
>> 
>> 
> 
> We have to double-check this patch proposal because we have a pending
> feature in the roadmap which could heavily collide with it: loading a
> certificate only once per fs entry.
> 
> For us it is a mandatory feature to allow a clean "hot" update of
> certificates. (The key to identify a certificate to update will be the path
> on the fs, or at least the base path.)
> 
> Emeric

Interesting. I have some questions about this feature, so I will wait. With
fs entries only, it can conflict with crt-list.

I think it should not heavily collide. Some conflicts with code refactoring
for sure, but the issuer-sharing feature should be complementary.
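
For anyone willing to try the series, a minimal sketch of the intended usage,
assuming the patches are applied (paths illustrative):

    global
        # directory holding the shared intermediate certificates
        issuer-path /etc/haproxy/issuers

    frontend fe_tls
        # site.pem contains only the leaf certificate and its key; when the
        # chain is missing, haproxy completes it from /etc/haproxy/issuers if
        # the issuer matches, and the completed chain is shared in memory
        bind :443 ssl crt /etc/haproxy/ssl/site.pem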

++
Manu






[PATCH 1/1] REGTEST: "capture (request|response)" regtest.

2019-01-08 Thread flecaille
From: Frédéric Lécaille 

---
 reg-tests/http-capture/h0.vtc | 92 +++
 1 file changed, 92 insertions(+)
 create mode 100644 reg-tests/http-capture/h0.vtc

diff --git a/reg-tests/http-capture/h0.vtc b/reg-tests/http-capture/h0.vtc
new file mode 100644
index ..9f6ec8c5
--- /dev/null
+++ b/reg-tests/http-capture/h0.vtc
@@ -0,0 +1,92 @@
+varnishtest "Tests for 'capture (request|response) header'"
+feature ignore_unknown_macro
+
+# This script checks that the last occurrence of the "fooresp" and "fooreq"
+# headers is correctly captured and added to the logs.
+# Note that varnishtest does not support more than MAX_HDR headers.
+
+syslog S -level info {
+recv
+expect ~ "[^:\\[ ]\\[${h_pid}\\]: .* .* fe be/srv .* 200 17641 - -  .* .* {HPhx8n59qjjNBLjP} {htb56qDdCcbRVTfS} \"GET / HTTP/1\\.1\""
+} -start
+
+server s {
+rxreq
+txresp -hdr "fooresp: HnFDGJ6KvhSG5QjX" -hdr "fooresp: 8dp7vBMQjTMkVwtG" \
+   -hdr "fooresp: NTpxWmvsNKGxvH6K" -hdr "fooresp: sPKNNJ5VRBDz9qXP" \
+   -hdr "fooresp: HnFDGJ6KvhSG5QjX" -hdr "fooresp: 8dp7vBMQjTMkVwtG" \
+   -hdr "fooresp: VSNnccbGkvfM9JK9" -hdr "fooresp: 9D5cjwtK3LCxmg4F" \
+   -hdr "fooresp: dsbxGqlBPRWGP3vX" -hdr "fooresp: xf6VK6GXlgdj5mwc" \
+   -hdr "fooresp: 8jzM3clRKtdL2WWb" -hdr "fooresp: v7ZHrTPjDR6lm6Bg" \
+   -hdr "fooresp: FQT6th9whMqQ7Z6C" -hdr "fooresp: KM22HH6lRBw6SHQT" \
+   -hdr "fooresp: PmRHphHXmTV9kZNS" -hdr "fooresp: CkGRbTJrD5nSVpFk" \
+   -hdr "fooresp: KQ9mzmMHpmmZ2SXP" -hdr "fooresp: W5FqfFDN6dqBxjK7" \
+   -hdr "fooresp: bvcxNPK4gpnTvn3z" -hdr "fooresp: BSXRLSWMsgQN54cC" \
+   -hdr "fooresp: ZX9ttTnlbXtJK55d" -hdr "fooresp: KH9StjMHF73NqzL8" \
+   -hdr "fooresp: W2q2m6MvMLcnXsX7" -hdr "fooresp: wtrjnJgFzHDvMg5r" \
+   -hdr "fooresp: Vpk2c2DsbWf2Gtwh" -hdr "fooresp: sCcW2qpRhFHHRDpH" \
+   -hdr "fooresp: P4mltXtvxLsnPcNS" -hdr "fooresp: TXgdSKNMmsJ8x9zq" \
+   -hdr "fooresp: n5t8pdZgnGFXZDd3" -hdr "fooresp: pD84GCtkWZqWbCM9" \
+   -hdr "fooresp: wx2FPxsGqSRjNVws" -hdr "fooresp: TXmtBCqPTVGFc3NK" \
+   -hdr "fooresp: 4DrFTLxpcPk2n3Zv" -hdr "fooresp: vrcFr9MWpqJWhK4h" \
+   -hdr "fooresp: HMsCHMZnHT3q8qD2" -hdr "fooresp: HsCXQGTxDpsMf4z6" \
+   -hdr "fooresp: 9rb2vjvvd2SzCQVT" -hdr "fooresp: qn5C2fZTWHVp7NkC" \
+   -hdr "fooresp: ZVd5ltcngZFHXfvr" -hdr "fooresp: j6BZVdV8fkz5tgjR" \
+   -hdr "fooresp: 6qfVwfHqdfntQjmP" -hdr "fooresp: RRr9nTnwjG6d2x7X" \
+   -hdr "fooresp: RJXtWtdJRTss6JgZ" -hdr "fooresp: zzHZWm6bqXDN9k47" \
+   -hdr "fooresp: htb56qDdCcbRVTfS" \
+   -bodylen 16384
+} -start
+
+haproxy h -conf {
+defaults
+mode http
+${no-htx} option http-use-htx
+timeout client 1s
+timeout server 1s
+timeout connect 1s
+
+backend be
+server srv ${s_addr}:${s_port}
+
+frontend fe
+option httplog
+log ${S_addr}:${S_port} local0 debug err
+capture request header fooreq len 25
+capture response header fooresp len 25
+
+bind "fd@${fe}"
+use_backend be
+} -start
+
+client c1 -connect ${h_fe_sock} {
+ txreq -hdr "fooreq: c8Ck8sx8qfXk5pSS" -hdr "fooreq: TGNXbG2DF3TmLWK3" \
+   -hdr "fooreq: mBxq9Cgr8GN6hkt6" -hdr "fooreq: MHZ6VBCPgs564KfR" \
+   -hdr "fooreq: BCCwX2kL9BSMCqvt" -hdr "fooreq: 8rXw87xVTphpRQb7" \
+   -hdr "fooreq: gJ3Tp9kXQlqLC8Qp" -hdr "fooreq: dFnLs6wpMl2M5N7c" \
+   -hdr "fooreq: r3f9WgQ8Brqw37Kj" -hdr "fooreq: dbJzSSdCqV3ZVtXK" \
+   -hdr "fooreq: 5HxHd6g4n2Rj2CNG" -hdr "fooreq: HNqQSNfkt6q4zK26" \
+   -hdr "fooreq: rzqNcfskPR7vW4jG" -hdr "fooreq: 9c7txWhsdrwmkR6d" \
+   -hdr "fooreq: 3v8Nztg9l9vLSKJm" -hdr "fooreq: lh4WDxMX577h4z3l" \
+   -hdr "fooreq: mFtHj5SKDvfcGzfq" -hdr "fooreq: PZ5B5wRM9D7GLm7W" \
+   -hdr "fooreq: fFpN4zCkLTxzp5Dz" -hdr "fooreq: J5XMdfCCHmmwkr2f" \
+   -hdr "fooreq: KqssZ3SkZnZJF8mz" -hdr "fooreq: HrGgsnBnslKN7Msz" \
+   -hdr "fooreq: d8TQltZ39xFZBNx2" -hdr "fooreq: mwDt2k2tvqM8x5kQ" \
+   -hdr "fooreq: 7Qh6tM7s7z3P8XCl" -hdr "fooreq: S3mTVbbPhJbLR7n2" \
+   -hdr "fooreq: zr7hMDvrrwfvpmTT" -hdr "fooreq: lV9TnZX2CtSnr4k8" \
+   -hdr "fooreq: bMdJx8pVDG2nWFNg" -hdr "fooreq: FkGvB2cBwNrB3cm4" \
+   -hdr "fooreq: 5ckNn3m6m8r2CXLF" -hdr "fooreq: sk4pJGTSZ5HMPJP5" \
+   -hdr "fooreq: HgVgQ73zhLwX6Wzq" -hdr "fooreq: T5k2QbFKvCVJlz4c" \
+   -hdr "fooreq: SKcNPw8CXGKhtxNP" -hdr "fooreq: n9fFrcR2kRQJrCpZ" \
+   -hdr "fooreq: hrJ2MXCdcSCDhQ6n" -hdr "fooreq: 9xsWQ8srzLDvG9F4" \
+   -hdr "fooreq: GNcP9NBTFJkg4hbk" -hdr "fooreq: Vg8B8MNwz4T7q5Tj" \
+   -hdr "fooreq: XXns3qPCzZmt9j4G" -hdr "fooreq: hD7TnP43bcPHm5g2" \
+   -hdr "fooreq: wZbxVq7MwmfBSqb5" -hdr "fooreq: HPhx8n59qjjNBLjP" \
+   -bod

[PATCH 0/1] A basic reg test for HTTP header captures

2019-01-08 Thread flecaille
From: Frédéric Lécaille 

Hi ML,

Here is a basic test to check that it is the last occurrence of the
request/response headers which is sent to the logs.

Fred.

Frédéric Lécaille (1):
  REGTEST: "capture (request|response)" regtest.

 reg-tests/http-capture/h0.vtc | 92 +++
 1 file changed, 92 insertions(+)
 create mode 100644 reg-tests/http-capture/h0.vtc

-- 
2.11.0



Re: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

2019-01-08 Thread Willy Tarreau
On Tue, Jan 08, 2019 at 09:31:22AM +0100, Frederic Lecaille wrote:
> Indeed this script could work with a short mailer timeout before the
> af4021e6 commit. Another git bisect shows that 53216e7d introduced the
> email bombing issue.
> 
> Note that 33a09a5f refers to the 53216e7d commit.
> 
> I am not sure this can help.

Sure it helps, at least to know whom to ping :-)

Willy



Re: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

2019-01-08 Thread Frederic Lecaille

On 1/7/19 9:24 PM, PiBa-NL wrote:

Hi Willy,
On 7-1-2019 at 15:25, Willy Tarreau wrote:

Hi Pieter,

On Sun, Jan 06, 2019 at 04:38:21PM +0100, PiBa-NL wrote:

The 23654 mails received for a failed server is a bit much..

I agree. I really don't know much about how the mails work, to be honest, as
I have never used them. I remember that we reused a part of the tcp-check
infrastructure because by then it offered a convenient way to proceed with
send/expect sequences. Maybe there's something excessive in the sequence
there, such as a certain status code being expected at the end while the
mail succeeds, I don't know.

Given that this apparently has always been broken,
For one part it has always been broken (needing the short mailer timeout to
send all expected mails); for the other part, at least until 1.8.14 it used
to NOT send thousands of mails, so that would be a regression in the current
1.9 version that should get fixed on a shorter term.


Indeed this script could work with a short mailer timeout before the
af4021e6 commit. Another git bisect shows that 53216e7d introduced the
email bombing issue.

Note that 33a09a5f refers to the 53216e7d commit.

I am not sure this can help.
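
For context, the setup the script exercises looks roughly like this sketch
(addresses, ports and timeout are illustrative; k_healthcheckmail.vtc has the
authoritative version):

    mailers mymailers
        mailer smtp1 127.0.0.1:1025
        # deliberately short, so the test does not have to wait long
        timeout mail 200ms

    backend be1
        email-alert mailers mymailers
        email-alert from from@localhost
        email-alert to to@localhost
        email-alert level alert
        server srv1 127.0.0.1:8080 check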

Fred.