Re: haproxy issue tracker discussion

2019-01-11 Thread Willy Tarreau
Hi Lukas,

On Sat, Jan 12, 2019 at 02:53:45AM +0100, Lukas Tribus wrote:
> Hi Tim, Willy,
> 
> apologies for not responding sooner; I always have to force myself into
> policy/organizational discussions when I could instead be reading stack
> traces or straces :)

You really don't need to apologize :-)

> > So in the end we can live with simply "affects-1.8" etc and remove these
> > flags once the fix(es) is/are backported.
> 
> I like that. If it has that branch specific affected label, we know
> there is still something to do. We can add and remove a label multiple
> times if required (when having backporting issues).
> 
> So for example a new issue is opened and handled in the following sequence?
> 
> - OP opens a new issue
> - we assign generic default labels, for example: needs-triage
> - first human responder may ask for further information, adds
> status: gathering feedback
> - OP responds with additional information
> - triaging occurs, removing status: gathering feedback, needs-triage;
> adding bug, dev-affected, 1.8-affected, 1.7-affected, bisected
> - assigning for example a MAINTAINER to the bug (would be useful if
> MAINTAINERS had github accounts and we document those in the file)
> - the bug is fixed in -dev and marked for backport, removing dev-affected
> - backported to 1.8, removing 1.8-affected
> - backported to 1.7, removing 1.7-affected and closing issue (all
> backports completed)
> - OP reports the bug is still present in 1.7 (post fix)
> - we re-open the issue and add 1.7-affected again

This last point is exactly what I was thinking as well and I like this.
I consider issues a cleaner and more reliable todo-list. It follows
exactly the process we're currently living with. In my todo notes it's
sometimes written "backport Foo's fix to 1.7". And such an entry can be
added at any point in time, even while backporting to 1.8, because the
bug was only diagnosed as affecting 1.8 and during the backport it turned
out that it in fact also affects another branch.

Another good point is that it can allow us to emit an emergency fix
for a previously closed branch if we think it comes with a low cost
and is worth it. Then it's trivial to reopen the issue for this
unplanned operation.

> - after a new fix for 1.7 has been committed, remove the label and
> close the issue again

Yep.

> Just to get a feel for it, I'm playing with a trial repo here:
> https://github.com/lukastribus/hap-issue-trial/
> 
> I added some labels, stole the template from another project (restic)
> with slight modifications, and talked to myself over some issues. Also
> tried referencing issues from commits and vice-versa.

It's a good idea to have a label for the suggested severity level. It
doesn't prevent the developer from changing it later (especially increasing
it), but it definitely helps. It even allows lowering it when only the last
backports are pending and the issue is less important there (this is quite
common, e.g. something that crashes in 1.9 but only returns an error in 1.8).

> I don't like how github automatically closes issues when commits refer
> to an issue prepended with some specific words like Fix or resolves:
> https://help.github.com/articles/closing-issues-using-keywords/

It's always been one of my concerns as well. In some projects, contributors
provide fixes with their internal bug IDs, and this can automatically close
random issues when IDs collide. As usual it shows that the tool focuses
more on ease of use and limited effort than accuracy and fine-grained
control.

> The following issue was closed ... :
> https://github.com/lukastribus/hap-issue-trial/issues/3
> 
> from another repository, just because I referenced the issue
> prepending the word "Fix":
> https://github.com/lukastribus/hap-issue-trial-1.8/commit/91a0776fec856766c64b8f3a34a796718c2368c1

This one is less of a problem because the likelihood that someone writes
"fixes haproxy/haproxy/#4" in a commit message is particularly low, unless
they do it on purpose to annoy us of course.

> As our intention, I believe, is to keep the issue open until all
> affected branches are fixed, this github feature is a little
> inconvenient. But I guess we can just refer to the issue by prepending
> it with "issue" or "bug", so GH doesn't see "Fix". Still it feels like
> we are working against the system.

As often, yes. I'm wondering whether, based on the previous work you did
on the pull requests using the API, it would be possible to:
  - reopen closed issues that still have a "*-affected" or "needs triage" label
  - close issues that have neither a "*-affected" nor a "needs triage" label

In this case it would mean we just need to use the labels and not care
about the state at all.
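As a rough sketch of this label-driven approach (purely hypothetical tooling; the label spellings and the API wrapper are assumptions, not anything agreed on the list), the decision logic could be kept separate from the GitHub API call:

```python
# Hypothetical sketch: derive an issue's open/closed state from its
# labels, as proposed above. Label names ("*-affected", "needs triage")
# and the api_patch wrapper are assumptions for illustration only.

def desired_state(labels):
    """Return 'open' if any '*-affected' or 'needs triage' label remains,
    'closed' otherwise."""
    for name in labels:
        if name.endswith("-affected") or name == "needs triage":
            return "open"
    return "closed"

def sync_issue(issue, api_patch):
    """Re-open or close `issue` (a dict shaped like a GitHub API issue)
    when its state disagrees with its labels. `api_patch(number, state)`
    would wrap PATCH /repos/{owner}/{repo}/issues/{number}."""
    labels = [l["name"] for l in issue["labels"]]
    want = desired_state(labels)
    if issue["state"] != want:
        api_patch(issue["number"], want)
    return want
```

Run periodically over all issues, such a pass would also undo any accidental auto-close triggered by a "Fix ..." commit message, since the labels alone decide the state.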

> So we'd have to define:
> 
> - a rough consensus of the process (like the sequence above)
> - the actual set of labels

Your proposal below looks reasonably good; we can probably refine it
later.

> - an issue and a feature request template

For this one I have no idea since I don't know much about how it works.

> I like the type and status labels 

Re: haproxy issue tracker discussion

2019-01-11 Thread Lukas Tribus
Hi Tim, Willy,



apologies for not responding sooner; I always have to force myself into
policy/organizational discussions when I could instead be reading stack
traces or straces :)


>> When should the binary "issue open" / "issue closed" property be
>> toggled? When the issue is fixed in Dev? When the issue is fixed in the
>> lowest affected version?
>
> [...]
> Conclusion : the affected status is only temporary and enough to go
> once the backport is done. This simply means we don't need a "fixed-1.9"
> or whatever, we just have to remove the "affected" label exactly as it
> would have been if the issue had been reported the day after.
>
> So in the end we can live with simply "affects-1.8" etc and remove these
> flags once the fix(es) is/are backported.

I like that. If it has that branch specific affected label, we know
there is still something to do. We can add and remove a label multiple
times if required (when having backporting issues).

So for example a new issue is opened and handled in the following sequence?

- OP opens a new issue
- we assign generic default labels, for example: needs-triage
- first human responder may ask for further information, adds
status: gathering feedback
- OP responds with additional information
- triaging occurs, removing status: gathering feedback, needs-triage;
adding bug, dev-affected, 1.8-affected, 1.7-affected, bisected
- assigning for example a MAINTAINER to the bug (would be useful if
MAINTAINERS had github accounts and we document those in the file)
- the bug is fixed in -dev and marked for backport, removing dev-affected
- backported to 1.8, removing 1.8-affected
- backported to 1.7, removing 1.7-affected and closing issue (all
backports completed)
- OP reports the bug is still present in 1.7 (post fix)
- we re-open the issue and add 1.7-affected again
- after a new fix for 1.7 has been committed, remove the label and
close the issue again
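The sequence above can be replayed as a quick sanity check that the issue only ends up closed once every affected label is gone (a sketch only; it uses the "affected/*" spellings from the label proposal further down, which are not settled):

```python
# Sketch: replay the label lifecycle as ('add'|'remove', label) events
# and derive the open/closed state purely from the remaining labels.
# The "affected/" prefix and "needs-triage" name are assumed spellings.

def replay(events):
    """Apply label add/remove events in order; return (labels, state)."""
    labels = set()
    for action, label in events:
        if action == "add":
            labels.add(label)
        else:
            labels.discard(label)
    affected = {l for l in labels if l.startswith("affected/")}
    state = "open" if affected or "needs-triage" in labels else "closed"
    return labels, state
```

Re-adding "affected/1.7" after the OP reports the bug is still present flips the derived state back to open, matching the re-open step in the sequence.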


Just to get a feel for it, I'm playing with a trial repo here:
https://github.com/lukastribus/hap-issue-trial/

I added some labels, stole the template from another project (restic) with
slight modifications, and talked to myself over some issues. Also tried
referencing issues from commits and vice-versa.

I don't like how github automatically closes issues when commits refer
to an issue prepended with some specific words like Fix or resolves:
https://help.github.com/articles/closing-issues-using-keywords/

The following issue was closed ... :
https://github.com/lukastribus/hap-issue-trial/issues/3

from another repository, just because I referenced the issue
prepending the word "Fix":
https://github.com/lukastribus/hap-issue-trial-1.8/commit/91a0776fec856766c64b8f3a34a796718c2368c1


As our intention, I believe, is to keep the issue open until all
affected branches are fixed, this github feature is a little
inconvenient. But I guess we can just refer to the issue by prepending
it with "issue" or "bug", so GH doesn't see "Fix". Still it feels like
we are working against the system.



So we'd have to define:

- a rough consensus of the process (like the sequence above)
- the actual set of labels
- an issue and a feature request template


I like the type and status labels of netbox:
https://github.com/digitalocean/netbox/labels


How about something like this (status: and type: should be unique):

help wanted
needs-triage
bisected

affected/dev
affected/1.9
affected/1.8
affected/1.7
affected/1.6

status: accepted
status: blocked
status: duplicate
status: gathering feedback
status: rejected
status: revisions needed
status: pending-backport
status: done

type: bug
type: documentation
type: change request
type: housekeeping
type: major feature
type: minor feature

and maybe some technical labels like "subsystem: xyz". Maybe
needs-triage should be "status: needs-triage".


Regards,
Lukas



Re: Lots of mail from email alert on 1.9.x

2019-01-11 Thread PiBa-NL

Hi Olivier,

Op 11-1-2019 om 19:17 schreef Olivier Houchard:

Ok so erm, I'd be lying if I claimed I enjoy working on the check code, or
that I understand it fully. However, after talking with Willy and Christopher,
I think I may have come up with an acceptable solution, and the attached patch
should fix it (at least by getting haproxy to segfault, but it shouldn't
mailbomb you anymore).
Pieter, I'd be very interested to know if it still works with your setup.
It's a different way of trying to fix what you tried to fix with
1714b9f28694d750d446917672dd59c46e16afd7
I'd like to be sure I didn't break it for you again :)

Regards,

Olivier

(Slightly modified patches; I think there was a potential race condition
when running with multiple threads.)

Olivier


Thanks for this 'change in behavior' ;). Indeed the mailbomb is fixed,
and it seems the expected mails get generated and delivered, but a
segfault also happens on occasion. Not with the regtest as it was, but
with a few minor modifications (adding an unreachable mailserver and
giving it a little more time seems to be the most reliable reproduction
a.t.m.) it will crash consistently after 11 seconds. So I guess the
patch needs a bit more tweaking.


Regards,
PiBa-NL (Pieter)

Core was generated by `haproxy -d -f /tmp/vtc.37274.4b8a1a3a/h1/cfg'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00500955 in chk_report_conn_err (check=0x802616a10, 
errno_bck=0, expired=1) at src/checks.c:689

689 dns_trigger_resolution(check->server->dns_requester);
(gdb) bt full
#0  0x00500955 in chk_report_conn_err (check=0x802616a10, 
errno_bck=0, expired=1) at src/checks.c:689

    cs = 0x8027de0c0
    conn = 0x802683180
    err_msg = 0x80266d0c0 " at step 1 of tcp-check (connect)"
    chk = 0x80097b848
    step = 1
    comment = 0x0
#1  0x005065a5 in process_chk_conn (t=0x802656640, 
context=0x802616a10, state=513) at src/checks.c:2261

    check = 0x802616a10
    proxy = 0x8026c3000
    cs = 0x8027de0c0
    conn = 0x802683180
    rv = 0
    ret = 0
    expired = 1
#2  0x0050596e in process_chk (t=0x802656640, 
context=0x802616a10, state=513) at src/checks.c:2330

    check = 0x802616a10
#3  0x004fe0a2 in process_email_alert (t=0x802656640, 
context=0x802616a10, state=513) at src/checks.c:3210

    check = 0x802616a10
    q = 0x802616a00
    alert = 0x7fffe340
#4  0x005f2523 in process_runnable_tasks () at src/task.c:435
    t = 0x802656640
    state = 513
    ctx = 0x802616a10
    process = 0x4fdeb0 
    t = 0x8026566e0
    max_processed = 200
#5  0x005163a2 in run_poll_loop () at src/haproxy.c:2619
    next = 1062130135
    exp = 1062129684
#6  0x00512ff8 in run_thread_poll_loop (data=0x8026310f0) at 
src/haproxy.c:2684

    start_lock = 0
    ptif = 0x935d40 
---Type <return> to continue, or q <return> to quit---
    ptdf = 0x0
#7  0x0050f626 in main (argc=4, argv=0x7fffead8) at 
src/haproxy.c:3313

    tids = 0x8026310f0
    threads = 0x8026310f8
    i = 1
    old_sig = {__bits = {0, 0, 0, 0}}
    blocked_sig = {__bits = {4227856759, 4294967295, 4294967295, 
4294967295}}

    err = 0
    retry = 200
    limit = {rlim_cur = 4046, rlim_max = 4046}
    errmsg = 
"\000\352\377\377\377\177\000\000\000\353\377\377\377\177\000\000\330\352\377\377\377\177\000\000\004\000\000\000\000\000\000\00 
0\b\250\037\315})5:`)\224\000\000\000\000\000\320\352\377\377\377\177\000\000\000\353\377\377\377\177\000\000\330\352\377\377\377\177\000\000\004 
\000\000\000\000\000\00
 reg-tests/mailers/k_healthcheckmail.vtc | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/reg-tests/mailers/k_healthcheckmail.vtc 
b/reg-tests/mailers/k_healthcheckmail.vtc
index d3af3589..820191c8 100644
--- a/reg-tests/mailers/k_healthcheckmail.vtc
+++ b/reg-tests/mailers/k_healthcheckmail.vtc
@@ -48,6 +48,7 @@ defaults
 #  timeout mail 20s
 #  timeout mail 200ms
   mailer smtp1 ${h1_femail_addr}:${h1_femail_port}
+  mailer smtp2 ipv4@192.0.2.100:1025
 
 } -start
 
@@ -62,7 +63,7 @@ client c1 -connect ${h1_luahttpservice_sock} {
 
 delay 2
 server s2 -repeat 5 -start
-delay 5
+delay 10
 
 client c2 -connect ${h1_luahttpservice_sock} {
 timeout 2


Re: Lots of mail from email alert on 1.9.x

2019-01-11 Thread Olivier Houchard
On Fri, Jan 11, 2019 at 06:53:11PM +0100, Olivier Houchard wrote:
> Hi,
> 
> On Fri, Jan 11, 2019 at 10:36:04AM +0100, Johan Hendriks wrote:
> > Thanks all.
> > No rush on my side as it is a test machine, at least we do know when a
> > backend server fails.
> > With all this mail the mail server are giving alarms too :D
> > 
> > I disable the mail feature for now.
> > 
> > Thanks again Pieter, Willy and Oliver for all your work.
> > 
> > 
> > Op 10-01-19 om 20:05 schreef PiBa-NL:
> > > Hi Johan, Olivier, Willy,
> > >
> > > Op 10-1-2019 om 17:00 schreef Johan Hendriks:
> > >> I just updated to 1.9.1 on my test system.
> > >>
> > >> We noticed that when a server fails we now get tons of mail, and with
> > >> tons we mean a lot.
> > >>
> > >> After a client backend server fails we usually get 1 mail on 1.8.x now
> > >> with 1.9.1 within 1 minute we have the following.
> > >>
> > >> mailq | grep -B2 l...@testdomain.nl | grep '^[A-F0-9]' | awk '{print
> > >> $1}' | sed 's/*//' | postsuper -d -
> > >> postsuper: Deleted: 19929 messages
> > >>
> > >> My setting from the backend part is as follows.
> > >>
> > >>  email-alert mailers alert-mailers
> > >>  email-alert from l...@testdomain.nl
> > >>  email-alert to not...@testdomain.nl
> > >>  server webserver09 11.22.33.44:80 check
> > >>
> > >> Has something changed in 1.9.x (it was on 1.9.0 also)
> > >>
> > >> regards
> > >> Johan Hendriks
> > >>
> > >>
> > > Its a 'known issue' see:
> > > https://www.mail-archive.com/haproxy@formilux.org/msg32290.html
> > > a 'regtest' is added in that mail thread also to aid developers in
> > > reproducing the issue and validating a possible fix.
> > >
> > > @Olivier, Willy, may i assume this mailbomb feature is 'planned' to
> > > get fixed in 1.9.2 ? (perhaps a bugtracker with a 'target version'
> > > would be nice ;) ?)
> > >
> > > Regards,
> > > PiBa-NL (Pieter)
> > 
> 
> Ok so erm, I'd be lying if I claimed I enjoy working on the check code, or
> that I understand it fully. However, after talking with Willy and Christopher,
> I think I may have come up with an acceptable solution, and the attached patch
> should fix it (at least by getting haproxy to segfault, but it shouldn't
> mailbomb you anymore).
> Pieter, I'd be very interested to know if it still works with your setup.
> It's a different way of trying to fix what you tried to fix with
> 1714b9f28694d750d446917672dd59c46e16afd7
> I'd like to be sure I didn't break it for you again :)
> 
> Regards,
> 
> Olivier

(Slightly modified patches; I think there was a potential race condition
when running with multiple threads.)

Olivier
>From 7e7b41cac480029c5fd93338dd86a875fee0b5a7 Mon Sep 17 00:00:00 2001
From: Olivier Houchard 
Date: Fri, 11 Jan 2019 18:17:17 +0100
Subject: [PATCH 1/2] MINOR: checks: Store the proxy in checks.

Instead of assuming we have a server, store the proxy directly in struct
check, and use it instead of s->server.
This should be a no-op for now, but will be useful later when we change
mail checks to avoid having a server.

This should be backported to 1.9.
---
 include/types/checks.h |  1 +
 src/checks.c   | 32 +---
 src/server.c   |  1 +
 3 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/include/types/checks.h b/include/types/checks.h
index 6346fe33..f89abcba 100644
--- a/include/types/checks.h
+++ b/include/types/checks.h
@@ -180,6 +180,7 @@ struct check {
int send_string_len;/* length of agent command 
string */
char *send_string;  /* optionally send a string 
when connecting to the agent */
struct server *server;  /* back-pointer to server */
+   struct proxy *proxy;/* proxy to be used */
char **argv;/* the arguments to use if 
running a process-based check */
char **envp;/* the environment to use if 
running a process-based check */
struct pid_list *curpid;/* entry in pid_list used for 
current process-based test, or -1 if not in test */
diff --git a/src/checks.c b/src/checks.c
index 4baaf9fc..edb61b4f 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -2124,7 +2124,7 @@ static struct task *process_chk_proc(struct task *t, void 
*context, unsigned sho
 static struct task *process_chk_conn(struct task *t, void *context, unsigned 
short state)
 {
struct check *check = context;
-   struct server *s = check->server;
+   struct proxy *proxy = check->proxy;
struct conn_stream *cs = check->cs;
struct connection *conn = cs_conn(cs);
int rv;
@@ -2142,7 +2142,7 @@ static struct task *process_chk_conn(struct task *t, void 
*context, unsigned sho
 * is disabled.
 */
if (((check->state & (CHK_ST_ENABLED | CHK_ST_PAUSED)) != 
CHK_ST_ENABLED) ||
-   s->proxy->state == PR_STSTOPPED)
+ 

Re: Lots of mail from email alert on 1.9.x

2019-01-11 Thread Olivier Houchard
Hi,

On Fri, Jan 11, 2019 at 10:36:04AM +0100, Johan Hendriks wrote:
> Thanks all.
> No rush on my side as it is a test machine, at least we do know when a
> backend server fails.
> With all this mail the mail server are giving alarms too :D
> 
> I disable the mail feature for now.
> 
> Thanks again Pieter, Willy and Oliver for all your work.
> 
> 
> Op 10-01-19 om 20:05 schreef PiBa-NL:
> > Hi Johan, Olivier, Willy,
> >
> > Op 10-1-2019 om 17:00 schreef Johan Hendriks:
> >> I just updated to 1.9.1 on my test system.
> >>
> >> We noticed that when a server fails we now get tons of mail, and with
> >> tons we mean a lot.
> >>
> >> After a client backend server fails we usually get 1 mail on 1.8.x now
> >> with 1.9.1 within 1 minute we have the following.
> >>
> >> mailq | grep -B2 l...@testdomain.nl | grep '^[A-F0-9]' | awk '{print
> >> $1}' | sed 's/*//' | postsuper -d -
> >> postsuper: Deleted: 19929 messages
> >>
> >> My setting from the backend part is as follows.
> >>
> >>  email-alert mailers alert-mailers
> >>  email-alert from l...@testdomain.nl
> >>  email-alert to not...@testdomain.nl
> >>  server webserver09 11.22.33.44:80 check
> >>
> >> Has something changed in 1.9.x (it was on 1.9.0 also)
> >>
> >> regards
> >> Johan Hendriks
> >>
> >>
> > Its a 'known issue' see:
> > https://www.mail-archive.com/haproxy@formilux.org/msg32290.html
> > a 'regtest' is added in that mail thread also to aid developers in
> > reproducing the issue and validating a possible fix.
> >
> > @Olivier, Willy, may i assume this mailbomb feature is 'planned' to
> > get fixed in 1.9.2 ? (perhaps a bugtracker with a 'target version'
> > would be nice ;) ?)
> >
> > Regards,
> > PiBa-NL (Pieter)
> 

Ok so erm, I'd be lying if I claimed I enjoy working on the check code, or
that I understand it fully. However, after talking with Willy and Christopher,
I think I may have come up with an acceptable solution, and the attached patch
should fix it (at least by getting haproxy to segfault, but it shouldn't
mailbomb you anymore).
Pieter, I'd be very interested to know if it still works with your setup.
It's a different way of trying to fix what you tried to fix with
1714b9f28694d750d446917672dd59c46e16afd7
I'd like to be sure I didn't break it for you again :)

Regards,

Olivier
>From 7e7b41cac480029c5fd93338dd86a875fee0b5a7 Mon Sep 17 00:00:00 2001
From: Olivier Houchard 
Date: Fri, 11 Jan 2019 18:17:17 +0100
Subject: [PATCH 1/2] MINOR: checks: Store the proxy in checks.

Instead of assuming we have a server, store the proxy directly in struct
check, and use it instead of s->server.
This should be a no-op for now, but will be useful later when we change
mail checks to avoid having a server.

This should be backported to 1.9.
---
 include/types/checks.h |  1 +
 src/checks.c   | 32 +---
 src/server.c   |  1 +
 3 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/include/types/checks.h b/include/types/checks.h
index 6346fe33..f89abcba 100644
--- a/include/types/checks.h
+++ b/include/types/checks.h
@@ -180,6 +180,7 @@ struct check {
int send_string_len;/* length of agent command 
string */
char *send_string;  /* optionally send a string 
when connecting to the agent */
struct server *server;  /* back-pointer to server */
+   struct proxy *proxy;/* proxy to be used */
char **argv;/* the arguments to use if 
running a process-based check */
char **envp;/* the environment to use if 
running a process-based check */
struct pid_list *curpid;/* entry in pid_list used for 
current process-based test, or -1 if not in test */
diff --git a/src/checks.c b/src/checks.c
index 4baaf9fc..edb61b4f 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -2124,7 +2124,7 @@ static struct task *process_chk_proc(struct task *t, void 
*context, unsigned sho
 static struct task *process_chk_conn(struct task *t, void *context, unsigned 
short state)
 {
struct check *check = context;
-   struct server *s = check->server;
+   struct proxy *proxy = check->proxy;
struct conn_stream *cs = check->cs;
struct connection *conn = cs_conn(cs);
int rv;
@@ -2142,7 +2142,7 @@ static struct task *process_chk_conn(struct task *t, void 
*context, unsigned sho
 * is disabled.
 */
if (((check->state & (CHK_ST_ENABLED | CHK_ST_PAUSED)) != 
CHK_ST_ENABLED) ||
-   s->proxy->state == PR_STSTOPPED)
+   proxy->state == PR_STSTOPPED)
goto reschedule;
 
/* we'll initiate a new check */
@@ -2167,8 +2167,8 @@ static struct task *process_chk_conn(struct task *t, void 
*context, unsigned sho
 */
t->expire = tick_add(now_ms, 

[PATCH 01/10] MINOR: cfgparse: Extract some code to be re-used.

2019-01-11 Thread flecaille
From: Frédéric Lécaille 

Create an init_peers_frontend() function to allocate and initialize
the frontend of "peers" sections (->peers_fe) so that it can be reused later.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 34 ++
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 7c316df0..6fde7c9f 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -483,6 +483,31 @@ void init_default_instance()
defproxy.load_server_state_from_file = PR_SRV_STATE_FILE_UNSPEC;
 }
 
+/* Allocate and initialize the frontend of a "peers" section found in
+ * file  at line  with  as ID.
+ * Return 0 if succeeded, -1 if not.
+ */
+static int init_peers_frontend(const char *file, int linenum,
+   const char *id, struct peers *peers)
+{
+   struct proxy *p;
+
+   p = calloc(1, sizeof *p);
+   if (!p) {
+   ha_alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+   return -1;
+   }
+
+   init_new_proxy(p);
+   p->parent = peers;
+   p->id = strdup(id);
+   p->conf.args.file = p->conf.file = strdup(file);
+   p->conf.args.line = p->conf.line = linenum;
+   peers_setup_frontend(p);
+   peers->peers_fe = p;
+
+   return 0;
+}
 
 /*
  * Parse a line in a ,  or  section.
@@ -625,19 +650,12 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
cfg_peers->local = newpeer;
 
if (!curpeers->peers_fe) {
-   if ((curpeers->peers_fe  = calloc(1, 
sizeof(struct proxy))) == NULL) {
+   if (init_peers_frontend(file, linenum, args[1], 
curpeers) != 0) {
ha_alert("parsing [%s:%d] : out of 
memory.\n", file, linenum);
err_code |= ERR_ALERT | ERR_ABORT;
goto out;
}
 
-   init_new_proxy(curpeers->peers_fe);
-   curpeers->peers_fe->parent = curpeers;
-   curpeers->peers_fe->id = strdup(args[1]);
-   curpeers->peers_fe->conf.args.file = 
curpeers->peers_fe->conf.file = strdup(file);
-   curpeers->peers_fe->conf.args.line = 
curpeers->peers_fe->conf.line = linenum;
-   peers_setup_frontend(curpeers->peers_fe);
-
bind_conf = bind_conf_alloc(curpeers->peers_fe, 
file, linenum, args[2], xprt_get(XPRT_RAW));
 
if (!str2listener(args[2], curpeers->peers_fe, 
bind_conf, file, linenum, )) {
-- 
2.11.0



[PATCH 06/10] MINOR: cfgparse: Simplication.

2019-01-11 Thread flecaille
From: Frédéric Lécaille 

Make init_peers_frontend() callable without having to check whether
there is something to do or not.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 22a3da72..715faaef 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -492,6 +492,10 @@ static int init_peers_frontend(const char *file, int 
linenum,
 {
struct proxy *p;
 
+   if (peers->peers_fe)
+   /* Nothing to do */
+   return 0;
+
p = calloc(1, sizeof *p);
if (!p) {
ha_alert("parsing [%s:%d] : out of memory.\n", file, linenum);
@@ -659,8 +663,7 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
/* Current is local peer, it define a frontend */
newpeer->local = 1;
 
-   if (!curpeers->peers_fe &&
-   init_peers_frontend(file, linenum, args[1], curpeers) != 0) 
{
+   if (init_peers_frontend(file, linenum, args[1], curpeers) != 0) 
{
ha_alert("parsing [%s:%d] : out of memory.\n", 
file, linenum);
err_code |= ERR_ALERT | ERR_ABORT;
goto out;
-- 
2.11.0



[PATCH 05/10] MINOR: cfgparse: Rework peers frontend init.

2019-01-11 Thread flecaille
From: Frédéric Lécaille 

Even if it is not already the case, we assume that the frontend of a
"peers" section may have already been initialized outside of a "peer"
line, so we separate its initialization from its binding initialization.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 50 --
 1 file changed, 24 insertions(+), 26 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index e3e96b51..22a3da72 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -659,39 +659,37 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
/* Current is local peer, it define a frontend */
newpeer->local = 1;
 
-   if (!curpeers->peers_fe) {
-   if (init_peers_frontend(file, linenum, args[1], 
curpeers) != 0) {
+   if (!curpeers->peers_fe &&
+   init_peers_frontend(file, linenum, args[1], curpeers) != 0) 
{
ha_alert("parsing [%s:%d] : out of memory.\n", 
file, linenum);
err_code |= ERR_ALERT | ERR_ABORT;
goto out;
-   }
-
-   bind_conf = bind_conf_alloc(curpeers->peers_fe, file, 
linenum, args[2], xprt_get(XPRT_RAW));
+   }
 
-   if (!str2listener(args[2], curpeers->peers_fe, 
bind_conf, file, linenum, )) {
-   if (errmsg && *errmsg) {
-   indent_msg(, 2);
-   ha_alert("parsing [%s:%d] : '%s %s' : 
%s\n", file, linenum, args[0], args[1], errmsg);
-   }
-   else
-   ha_alert("parsing [%s:%d] : '%s %s' : 
error encountered while parsing listening address %s.\n",
-file, linenum, args[0], 
args[1], args[2]);
-   err_code |= ERR_FATAL;
-   goto out;
-   }
+   bind_conf = bind_conf_alloc(curpeers->peers_fe, file, linenum, 
args[2], xprt_get(XPRT_RAW));
 
-   list_for_each_entry(l, _conf->listeners, by_bind) {
-   l->maxaccept = 1;
-   l->maxconn = curpeers->peers_fe->maxconn;
-   l->backlog = curpeers->peers_fe->backlog;
-   l->accept = session_accept_fd;
-   l->analysers |=  curpeers->peers_fe->fe_req_ana;
-   l->default_target = 
curpeers->peers_fe->default_target;
-   l->options |= LI_O_UNLIMITED; /* don't make the 
peers subject to global limits */
-   global.maxsock += l->maxconn;
+   if (!str2listener(args[2], curpeers->peers_fe, bind_conf, file, 
linenum, )) {
+   if (errmsg && *errmsg) {
+   indent_msg(, 2);
+   ha_alert("parsing [%s:%d] : '%s %s' : %s\n", 
file, linenum, args[0], args[1], errmsg);
}
-   cfg_peers->local = newpeer;
+   else
+   ha_alert("parsing [%s:%d] : '%s %s' : error 
encountered while parsing listening address %s.\n",
+file, linenum, args[0], args[1], 
args[2]);
+   err_code |= ERR_FATAL;
+   goto out;
}
+   list_for_each_entry(l, _conf->listeners, by_bind) {
+   l->maxaccept = 1;
+   l->maxconn = curpeers->peers_fe->maxconn;
+   l->backlog = curpeers->peers_fe->backlog;
+   l->accept = session_accept_fd;
+   l->analysers |=  curpeers->peers_fe->fe_req_ana;
+   l->default_target = curpeers->peers_fe->default_target;
+   l->options |= LI_O_UNLIMITED; /* don't make the peers 
subject to global limits */
+   global.maxsock += l->maxconn;
+   }
+   cfg_peers->local = newpeer;
} /* neither "peer" nor "peers" */
else if (!strcmp(args[0], "disabled")) {  /* disables this peers 
section */
curpeers->state = PR_STSTOPPED;
-- 
2.11.0



[PATCH 10/10] DOC: peers: SSL/TLS documentation for "peers"

2019-01-11 Thread flecaille
From: Frédéric Lécaille 

---
 doc/configuration.txt | 23 ++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 888515fb..d55e4bd3 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1928,6 +1928,12 @@ peers 
   Creates a new peer list with name . It is an independent section,
   which is referenced by one or more stick-tables.
 
+bind [param*]
+  Defines the binding parameters of the local peer of this "peers" section.
+  To avoid some redundancy, and as the  and  parameters
+  are already provided on "peer" (or "server") lines, they are not supported
+  by "bind" keyword in "peers" sections.
+
 disabled
   Disables a peers section. It disables both listening and any synchronization
   related to this section. This is provided to disable synchronization of stick
@@ -1936,7 +1942,7 @@ disabled
 enable
   This re-enables a disabled peers section which was previously disabled.
 
-peer  :
+peer  : [param*]
   Defines a peer inside a peers section.
   If  is set to the local peer name (by default hostname, or forced
   using "-L" command line option), haproxy will listen for incoming remote peer
@@ -1955,6 +1961,15 @@ peer  :
   You may want to reference some environment variables in the address
   parameter, see section 2.3 about environment variables.
 
+  Note: "peer" keyword may transparently be replaced by "server" keyword (see
+  "server" keyword explanation below).
+
+server  : [param*]
+  As previously mentionned, "peer" keyword may be replaced by "server" keyword
+  with a support for all "server" parameters found in 5.2 paragraph.
+  Some of these parameters are irrelevant for "peers" sections.
+
+
   Example:
 peers mypeers
 peer haproxy1 192.168.0.1:1024
@@ -1970,6 +1985,12 @@ peer  :
 server srv1 192.168.0.30:80
 server srv2 192.168.0.31:80
 
+   Example:
+ peers mypeers
+ bind ssl crt mycerts/pem
+ default-server ssl verify none
+ server hostA  127.0.0.10:1
+ server hostB  127.0.0.11:10001
 
 3.6. Mailers
 
-- 
2.11.0



[PATCH 04/10] MINOR: cfgparse: Useless frontend initialization in "peers" sections.

2019-01-11 Thread flecaille
From: Frédéric Lécaille 

Use the ->local "peers" struct member to flag that a "peers" section
frontend has been initialized. This makes it possible to initialize the
frontend of "peers" sections on lines other than "peer" lines.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 17 +
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 04e36e8c..e3e96b51 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -648,9 +648,16 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
/* We are done. */
goto out;
 
+   if (cfg_peers->local) {
+   ha_alert("parsing [%s:%d] : '%s %s' : local peer name 
already referenced at %s:%d.\n",
+file, linenum, args[0], args[1],
+curpeers->peers_fe->conf.file, 
curpeers->peers_fe->conf.line);
+   err_code |= ERR_FATAL;
+   goto out;
+   }
+
/* Current is local peer, it define a frontend */
newpeer->local = 1;
-   cfg_peers->local = newpeer;
 
if (!curpeers->peers_fe) {
if (init_peers_frontend(file, linenum, args[1], 
curpeers) != 0) {
@@ -683,13 +690,7 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
l->options |= LI_O_UNLIMITED; /* don't make the 
peers subject to global limits */
global.maxsock += l->maxconn;
}
-   }
-   else {
-   ha_alert("parsing [%s:%d] : '%s %s' : local peer name 
already referenced at %s:%d.\n",
-file, linenum, args[0], args[1],
-curpeers->peers_fe->conf.file, 
curpeers->peers_fe->conf.line);
-   err_code |= ERR_FATAL;
-   goto out;
+   cfg_peers->local = newpeer;
}
} /* neither "peer" nor "peers" */
else if (!strcmp(args[0], "disabled")) {  /* disables this peers 
section */
-- 
2.11.0



[PATCH 07/10] MINOR: cfgparse: Make "peer" lines be parsed as "server" lines.

2019-01-11 Thread flecaille
From: Frédéric Lécaille 

With this patch "default-server" lines are supported in "peers" sections
to set up the default settings of peers, which are from now on configured
when parsing both "peer" and "server" lines.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 88 +++---
 src/peers.c|  2 +-
 src/server.c   |  3 +-
 3 files changed, 32 insertions(+), 61 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 715faaef..6d199c97 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -486,15 +486,18 @@ void init_default_instance()
 /* Allocate and initialize the frontend of a "peers" section found in
  * file <file> at line <linenum> with <id> as ID.
  * Return 0 if succeeded, -1 if not.
+ * Note that this function may be called from "default-server"
+ * or "peer" lines.
  */
 static int init_peers_frontend(const char *file, int linenum,
const char *id, struct peers *peers)
 {
struct proxy *p;
 
-   if (peers->peers_fe)
-   /* Nothing to do */
-   return 0;
+   if (peers->peers_fe) {
+   p = peers->peers_fe;
+   goto out;
+   }
 
p = calloc(1, sizeof *p);
if (!p) {
@@ -503,12 +506,16 @@ static int init_peers_frontend(const char *file, int 
linenum,
}
 
init_new_proxy(p);
+   peers_setup_frontend(p);
p->parent = peers;
-   p->id = strdup(id);
+   /* Finally store this frontend. */
+   peers->peers_fe = p;
+
+ out:
+   if (id && !p->id)
+   p->id = strdup(id);
p->conf.args.file = p->conf.file = strdup(file);
p->conf.args.line = p->conf.line = linenum;
-   peers_setup_frontend(p);
-   peers->peers_fe = p;
 
return 0;
 }
@@ -533,7 +540,14 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
int err_code = 0;
char *errmsg = NULL;
 
-   if (strcmp(args[0], "peers") == 0) { /* new peers section */
+   if (strcmp(args[0], "default-server") == 0) {
+   if (init_peers_frontend(file, linenum, NULL, curpeers) != 0) {
+   err_code |= ERR_ALERT | ERR_ABORT;
+   goto out;
+   }
+   err_code |= parse_server(file, linenum, args, 
curpeers->peers_fe, NULL);
+   }
+   else if (strcmp(args[0], "peers") == 0) { /* new peers section */
if (!*args[1]) {
ha_alert("parsing [%s:%d] : missing name for peers 
section.\n", file, linenum);
err_code |= ERR_ALERT | ERR_ABORT;
@@ -577,26 +591,8 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
curpeers->id = strdup(args[1]);
curpeers->state = PR_STNEW;
}
-   else if (strcmp(args[0], "peer") == 0) { /* peer definition */
-   struct sockaddr_storage *sk;
-   int port1, port2;
-   struct protocol *proto;
-
-   if (!*args[2]) {
-   ha_alert("parsing [%s:%d] : '%s' expects <name> and 
<addr>[:<port>] as arguments.\n",
-file, linenum, args[0]);
-   err_code |= ERR_ALERT | ERR_FATAL;
-   goto out;
-   }
-
-   err = invalid_char(args[1]);
-   if (err) {
-   ha_alert("parsing [%s:%d] : character '%c' is not 
permitted in server name '%s'.\n",
-file, linenum, *err, args[1]);
-   err_code |= ERR_ALERT | ERR_FATAL;
-   goto out;
-   }
-
+   else if (strcmp(args[0], "peer") == 0 ||
+strcmp(args[0], "server") == 0) { /* peer or server definition 
*/
if ((newpeer = calloc(1, sizeof(*newpeer))) == NULL) {
ha_alert("parsing [%s:%d] : out of memory.\n", file, 
linenum);
err_code |= ERR_ALERT | ERR_ABORT;
@@ -613,37 +609,17 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
newpeer->last_change = now.tv_sec;
newpeer->id = strdup(args[1]);
 
-   sk = str2sa_range(args[2], NULL, &port1, &port2, &errmsg, NULL, 
NULL, 1);
-   if (!sk) {
-   ha_alert("parsing [%s:%d] : '%s %s' : %s\n", file, 
linenum, args[0], args[1], errmsg);
-   err_code |= ERR_ALERT | ERR_FATAL;
-   goto out;
-   }
-
-   proto = protocol_by_family(sk->ss_family);
-   if (!proto || !proto->connect) {
-   ha_alert("parsing [%s:%d] : '%s %s' : connect() not 
supported for this address family.\n",
-file, linenum, args[0], args[1]);
-   err_code |= ERR_ALERT | ERR_FATAL;
-   goto out;
-   }
-
-   if (port1 != port2) {
- 

[PATCH 03/10] CLEANUP: cfgparse: Code reindentation.

2019-01-11 Thread flecaille
From: Frédéric Lécaille 

May help the series of patches to be reviewed.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 72 +-
 1 file changed, 36 insertions(+), 36 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 6670a861..04e36e8c 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -648,49 +648,49 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
/* We are done. */
goto out;
 
-   /* Current is local peer, it define a frontend */
-   newpeer->local = 1;
-   cfg_peers->local = newpeer;
-
-   if (!curpeers->peers_fe) {
-   if (init_peers_frontend(file, linenum, args[1], 
curpeers) != 0) {
-   ha_alert("parsing [%s:%d] : out of 
memory.\n", file, linenum);
-   err_code |= ERR_ALERT | ERR_ABORT;
-   goto out;
-   }
+   /* Current is local peer, it define a frontend */
+   newpeer->local = 1;
+   cfg_peers->local = newpeer;
 
-   bind_conf = bind_conf_alloc(curpeers->peers_fe, 
file, linenum, args[2], xprt_get(XPRT_RAW));
+   if (!curpeers->peers_fe) {
+   if (init_peers_frontend(file, linenum, args[1], 
curpeers) != 0) {
+   ha_alert("parsing [%s:%d] : out of memory.\n", 
file, linenum);
+   err_code |= ERR_ALERT | ERR_ABORT;
+   goto out;
+   }
 
-   if (!str2listener(args[2], curpeers->peers_fe, 
bind_conf, file, linenum, &errmsg)) {
-   if (errmsg && *errmsg) {
-   indent_msg(&errmsg, 2);
-   ha_alert("parsing [%s:%d] : '%s 
%s' : %s\n", file, linenum, args[0], args[1], errmsg);
-   }
-   else
-   ha_alert("parsing [%s:%d] : '%s 
%s' : error encountered while parsing listening address %s.\n",
-file, linenum, 
args[0], args[1], args[2]);
-   err_code |= ERR_FATAL;
-   goto out;
-   }
+   bind_conf = bind_conf_alloc(curpeers->peers_fe, file, 
linenum, args[2], xprt_get(XPRT_RAW));
 
-   list_for_each_entry(l, &bind_conf->listeners, 
by_bind) {
-   l->maxaccept = 1;
-   l->maxconn = 
curpeers->peers_fe->maxconn;
-   l->backlog = 
curpeers->peers_fe->backlog;
-   l->accept = session_accept_fd;
-   l->analysers |=  
curpeers->peers_fe->fe_req_ana;
-   l->default_target = 
curpeers->peers_fe->default_target;
-   l->options |= LI_O_UNLIMITED; /* don't 
make the peers subject to global limits */
-   global.maxsock += l->maxconn;
+   if (!str2listener(args[2], curpeers->peers_fe, 
bind_conf, file, linenum, &errmsg)) {
+   if (errmsg && *errmsg) {
+   indent_msg(&errmsg, 2);
+   ha_alert("parsing [%s:%d] : '%s %s' : 
%s\n", file, linenum, args[0], args[1], errmsg);
}
-   }
-   else {
-   ha_alert("parsing [%s:%d] : '%s %s' : local 
peer name already referenced at %s:%d.\n",
-file, linenum, args[0], args[1],
-curpeers->peers_fe->conf.file, 
curpeers->peers_fe->conf.line);
+   else
+   ha_alert("parsing [%s:%d] : '%s %s' : 
error encountered while parsing listening address %s.\n",
+file, linenum, args[0], 
args[1], args[2]);
err_code |= ERR_FATAL;
goto out;
}
+
+   list_for_each_entry(l, &bind_conf->listeners, by_bind) {
+   l->maxaccept = 1;
+   l->maxconn = curpeers->peers_fe->maxconn;
+   l->backlog = curpeers->peers_fe->backlog;
+   l->accept = session_accept_fd;
+   

[PATCH 08/10] MINOR: peers: Make outgoing connection to SSL/TLS peers work.

2019-01-11 Thread flecaille
From: Frédéric Lécaille 

This patch adds a pointer to a struct server to the peer structure, which
is initialized after having parsed a remote "peer" line.

After having parsed all peers sections, we run ->prepare_srv to initialize
all the SSL/TLS stuff of the remote peer (or server).

What remains to be done to completely support the peer protocol over SSL/TLS:
make the "bind" keyword be supported in "peers" sections so that SSL/TLS
incoming connections to local peers work.

May be backported to 1.5 and newer.
---
 include/proto/peers.h | 26 ++
 include/types/peers.h |  1 +
 src/cfgparse.c| 13 +++--
 src/peers.c   |  5 +++--
 4 files changed, 41 insertions(+), 4 deletions(-)

diff --git a/include/proto/peers.h b/include/proto/peers.h
index 9d4aaff2..ce4feaa4 100644
--- a/include/proto/peers.h
+++ b/include/proto/peers.h
@@ -25,9 +25,35 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
+#if defined(USE_OPENSSL)
+static inline enum obj_type *peer_session_target(struct peer *p, struct stream 
*s)
+{
+   if (p->srv->use_ssl)
+   return &p->srv->obj_type;
+   else
+   return &s->be->obj_type;
+}
+
+static inline struct xprt_ops *peer_xprt(struct peer *p)
+{
+   return p->srv->use_ssl ? xprt_get(XPRT_SSL) : xprt_get(XPRT_RAW);
+}
+#else
+static inline enum obj_type *peer_session_target(struct peer *p, struct stream 
*s)
+{
+   return &s->be->obj_type;
+}
+
+static inline struct xprt_ops *peer_xprt(struct peer *p)
+{
+   return xprt_get(XPRT_RAW);
+}
+#endif
+
 int peers_init_sync(struct peers *peers);
 void peers_register_table(struct peers *, struct stktable *table);
 void peers_setup_frontend(struct proxy *fe);
diff --git a/include/types/peers.h b/include/types/peers.h
index 58c8c4ee..5200d56b 100644
--- a/include/types/peers.h
+++ b/include/types/peers.h
@@ -67,6 +67,7 @@ struct peer {
struct shared_table *remote_table;
struct shared_table *last_local_table;
struct shared_table *tables;
+   struct server *srv;
__decl_hathreads(HA_SPINLOCK_T lock); /* lock used to handle this peer 
section */
struct peer *next;/* next peer in the list */
 };
diff --git a/src/cfgparse.c b/src/cfgparse.c
index 6d199c97..f6f25143 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -514,6 +514,7 @@ static int init_peers_frontend(const char *file, int 
linenum,
  out:
if (id && !p->id)
p->id = strdup(id);
+   free(p->conf.file);
p->conf.args.file = p->conf.file = strdup(file);
p->conf.args.line = p->conf.line = linenum;
 
@@ -624,9 +625,10 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
newpeer->sock_init_arg = NULL;
HA_SPIN_INIT(&newpeer->lock);
 
-   if (strcmp(newpeer->id, localpeer) != 0)
-   /* We are done. */
+   if (strcmp(newpeer->id, localpeer) != 0) {
+   newpeer->srv = curpeers->peers_fe->srv;
goto out;
+   }
 
if (cfg_peers->local) {
ha_alert("parsing [%s:%d] : '%s %s' : local peer name 
already referenced at %s:%d.\n",
@@ -3634,6 +3636,13 @@ out_uri_auth_compat:
curpeers->peers_fe = NULL;
}
else {
+   p = curpeers->remote;
+   while (p) {
+   if (p->srv && p->srv->use_ssl &&
+   xprt_get(XPRT_SSL) && 
xprt_get(XPRT_SSL)->prepare_srv)
+   cfgerr += 
xprt_get(XPRT_SSL)->prepare_srv(p->srv);
+   p = p->next;
+   }
if (!peers_init_sync(curpeers)) {
ha_alert("Peers section '%s': out of 
memory, giving up on peers.\n",
 curpeers->id);
diff --git a/src/peers.c b/src/peers.c
index e580f2ca..d4d3859e 100644
--- a/src/peers.c
+++ b/src/peers.c
@@ -39,6 +39,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1996,10 +1997,10 @@ static struct appctx *peer_session_create(struct peers 
*peers, struct peer *peer
if (unlikely((cs = cs_new(conn)) == NULL))
goto out_free_conn;
 
-   conn->target = s->target = &s->be->obj_type;
+   conn->target = s->target = peer_session_target(peer, s);
memcpy(&conn->addr.to, &peer->addr, sizeof(conn->addr.to));
 
-   conn_prepare(conn, peer->proto, peer->xprt);
+   conn_prepare(conn, peer->proto, peer_xprt(peer));
conn_install_mux(conn, &mux_pt_ops, cs, s->be, NULL);
si_attach_cs(&s->si[1], cs);
 
-- 
2.11.0



[PATCH 09/10] MINOR: cfgparse: SSL/TLS binding in "peers" sections.

2019-01-11 Thread flecaille
From: Frédéric Lécaille 

This patch makes "bind" work in "peers" sections. All "bind" settings
are supported, except the ip:port parameters, which are provided on the
"peer" (or "server") line matching the local peer.
After having parsed the configuration files ->prepare_bind_conf is run
to initialize all SSL/TLS stuff for the local peer.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 95 ++
 1 file changed, 90 insertions(+), 5 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index f6f25143..ef69b37d 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -521,6 +521,31 @@ static int init_peers_frontend(const char *file, int 
linenum,
return 0;
 }
 
+/* Only change ->file, ->line and ->arg struct bind_conf member values
+ * if already present.
+ */
+static struct bind_conf *bind_conf_uniq_alloc(struct proxy *p,
+  const char *file, int line,
+  const char *arg, struct xprt_ops 
*xprt)
+{
+   struct bind_conf *bind_conf;
+
+   if (!LIST_ISEMPTY(&p->conf.bind)) {
+   bind_conf = LIST_ELEM((&p->conf.bind)->n, typeof(bind_conf), 
by_fe);
+   free(bind_conf->file);
+   bind_conf->file = strdup(file);
+   bind_conf->line = line;
+   if (arg) {
+   free(bind_conf->arg);
+   bind_conf->arg = strdup(arg);
+   }
+   }
+   else {
+   bind_conf = bind_conf_alloc(p, file, line, arg, xprt);
+   }
+
+   return bind_conf;
+}
 /*
  * Parse a line in a ,  or  section.
  * Returns the error code, 0 if OK, or any combination of :
@@ -541,7 +566,56 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
int err_code = 0;
char *errmsg = NULL;
 
-   if (strcmp(args[0], "default-server") == 0) {
+   if (strcmp(args[0], "bind") == 0) {
+   int cur_arg;
+   static int kws_dumped;
+   struct bind_conf *bind_conf;
+   struct bind_kw *kw;
+   char *kws;
+
+   cur_arg = 1;
+
+   if (init_peers_frontend(file, linenum, NULL, curpeers) != 0) {
+   err_code |= ERR_ALERT | ERR_ABORT;
+   goto out;
+   }
+
+   bind_conf = bind_conf_uniq_alloc(curpeers->peers_fe, file, 
linenum,
+NULL, xprt_get(XPRT_RAW));
+   while (*args[cur_arg] && (kw = bind_find_kw(args[cur_arg]))) {
+   int ret;
+
+   ret = kw->parse(args, cur_arg, curpeers->peers_fe, 
bind_conf, &errmsg);
+   err_code |= ret;
+   if (ret) {
+   if (errmsg && *errmsg) {
+   indent_msg(&errmsg, 2);
+   ha_alert("parsing [%s:%d] : %s\n", 
file, linenum, errmsg);
+   }
+   else
+   ha_alert("parsing [%s:%d]: error 
encountered while processing '%s'\n",
+file, linenum, args[cur_arg]);
+   if (ret & ERR_FATAL)
+   goto out;
+   }
+   cur_arg += 1 + kw->skip;
+   }
+   kws = NULL;
+   if (!kws_dumped) {
+   kws_dumped = 1;
+   bind_dump_kws();
+   indent_msg(&kws, 4);
+   }
+   if (*args[cur_arg] != 0) {
+   ha_alert("parsing [%s:%d] : unknown keyword '%s' in 
'%s' section.%s%s\n",
+file, linenum, args[cur_arg], cursection,
+kws ? " Registered keywords :" : "", kws ? 
kws: "");
+   free(kws);
+   err_code |= ERR_ALERT | ERR_FATAL;
+   goto out;
+   }
+   }
+   else if (strcmp(args[0], "default-server") == 0) {
if (init_peers_frontend(file, linenum, NULL, curpeers) != 0) {
err_code |= ERR_ALERT | ERR_ABORT;
goto out;
@@ -641,7 +715,7 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
/* Current is local peer, it define a frontend */
newpeer->local = 1;
 
-   bind_conf = bind_conf_alloc(curpeers->peers_fe, file, linenum, 
args[2], xprt_get(XPRT_RAW));
+   bind_conf = bind_conf_uniq_alloc(curpeers->peers_fe, file, 
linenum, args[2], xprt_get(XPRT_RAW));
 
if (!str2listener(args[2], curpeers->peers_fe, bind_conf, file, 
linenum, )) {
if (errmsg && *errmsg) {
@@ -3638,9 +3712,20 @@ 

[PATCH 02/10] CLEANUP: cfgparse: Return asap from cfg_parse_peers().

2019-01-11 Thread flecaille
From: Frédéric Lécaille 

Avoid useless code indentation.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 6fde7c9f..6670a861 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -644,7 +644,10 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
newpeer->sock_init_arg = NULL;
HA_SPIN_INIT(&newpeer->lock);
 
-   if (strcmp(newpeer->id, localpeer) == 0) {
+   if (strcmp(newpeer->id, localpeer) != 0)
+   /* We are done. */
+   goto out;
+
/* Current is local peer, it define a frontend */
newpeer->local = 1;
cfg_peers->local = newpeer;
@@ -688,7 +691,6 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
err_code |= ERR_FATAL;
goto out;
}
-   }
} /* neither "peer" nor "peers" */
else if (!strcmp(args[0], "disabled")) {  /* disables this peers 
section */
curpeers->state = PR_STSTOPPED;
-- 
2.11.0



[PATCH 00/10] Peers SSL/TLS support

2019-01-11 Thread flecaille
From: Frédéric Lécaille 

Hi ML,

With this series of patches we add SSL/TLS support to haproxy peers.

Regards,
Fred.

Frédéric Lécaille (10):
  MINOR: cfgparse: Extract some code to be re-used.
  CLEANUP: cfgparse: Return asap from cfg_parse_peers().
  CLEANUP: cfgparse: Code reindentation.
  MINOR: cfgparse: Useless frontend initialization in "peers" sections.
  MINOR: cfgparse: Rework peers frontend init.
  MINOR: cfgparse: Simplication.
  MINOR: cfgparse: Make "peer" lines be parsed as "server" lines.
  MINOR: peers: Make outgoing connection to SSL/TLS peers work.
  MINOR: cfgparse: SSL/TLS binding in "peers" sections.
  DOC: peers: SSL/TLS documentation for "peers"

 doc/configuration.txt |  23 -
 include/proto/peers.h |  26 +
 include/types/peers.h |   1 +
 src/cfgparse.c| 272 +-
 src/peers.c   |   7 +-
 src/server.c  |   3 +-
 6 files changed, 234 insertions(+), 98 deletions(-)
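For orientation, here is a hedged sketch of the kind of configuration this series enables, adapted from the example added by the doc patch; addresses, ports and the certificate path are purely illustrative:

```
peers mypeers
    bind ssl crt mycerts/pem
    default-server ssl verify none
    server hostA 127.0.0.10:10000
    server hostB 127.0.0.11:10001
```

The "bind" line carries the SSL settings for the local peer's listener, while "default-server" applies the SSL client settings to the outgoing connections towards the remote peers.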

-- 
2.11.0



Keep client side open on FIN till backend responds

2019-01-11 Thread Assen Totin
Hi, everybody!

I'm facing an issue with a somewhat weird/broken client and I'm looking for
some advice on it.

The client opens a HTTPS session and sends its request (POST data in my
case). The HTTPS part is fine. The POST data is small, so it fits into a
single TCP packet, which arrives with ACK,PSH,FIN flags set. HAProxy
diligently responds with FIN,ACK and closes the client connection, not
sending anything to the backend server. It also logs the request as CR--
which seems proper.

There is nothing specific in my HAProxy setup in this case, default config
with a single frontend/backend. I'm on 1.8 series.

I'm aware that sending a FIN on a TLS connection violates the TLS RFC, but
I suspect that such a client would do the same on a non-TLS connection too.
It brings me to the question, is there a way to tell HAProxy not to close a
client connection upon receipt of FIN on the client side if there is an
unserved HTTP request, but rather keep the client connection half-open
until the backend responds? The issue I'm facing is likely exacerbated by
the fact that client sends FIN together with the HTTP request, so there is
no backend connection yet when HAProxy decides to close the client
connection.

Any ideas will be much appreciated. I'm open to testing config
changes/patches if needed (or even writing one with some guidance).

Well,

Assen Totin


Re: Lots of mail from email alert on 1.9.x

2019-01-11 Thread Johan Hendriks
Thanks all.
No rush on my side as it is a test machine, at least we do know when a
backend server fails.
With all this mail the mail servers are giving alarms too :D

I disabled the mail feature for now.

Thanks again Pieter, Willy and Oliver for all your work.


Op 10-01-19 om 20:05 schreef PiBa-NL:
> Hi Johan, Olivier, Willy,
>
> Op 10-1-2019 om 17:00 schreef Johan Hendriks:
>> I just updated to 1.9.1 on my test system.
>>
>> We noticed that when a server fails we now get tons of mail, and with
>> tons we mean a lot.
>>
>> After a client backend server fails we usually get 1 mail on 1.8.x now
>> with 1.9.1 within 1 minute we have the following.
>>
>> mailq | grep -B2 l...@testdomain.nl | grep '^[A-F0-9]' | awk '{print
>> $1}' | sed 's/*//' | postsuper -d -
>> postsuper: Deleted: 19929 messages
>>
>> My setting from the backend part is as follows.
>>
>>  email-alert mailers alert-mailers
>>  email-alert from l...@testdomain.nl
>>  email-alert to not...@testdomain.nl
>>  server webserver09 11.22.33.44:80 check
>>
>> Has something changed in 1.9.x (it was on 1.9.0 also)
>>
>> regards
>> Johan Hendriks
>>
>>
> Its a 'known issue' see:
> https://www.mail-archive.com/haproxy@formilux.org/msg32290.html
> a 'regtest' is added in that mail thread also to aid developers in
> reproducing the issue and validating a possible fix.
>
> @Olivier, Willy, may i assume this mailbomb feature is 'planned' to
> get fixed in 1.9.2 ? (perhaps a bugtracker with a 'target version'
> would be nice ;) ?)
>
> Regards,
> PiBa-NL (Pieter)



[PATCH 2/2] REGTEST: Adapt reg test doc files to vtest.

2019-01-11 Thread flecaille
From: Frédéric Lécaille 

This is a first patch to switch from varnishtest to the new standalone
varnish cache reg testing program: vtest.

More information may be found here:

https://github.com/vtest/VTest
https://varnish-cache.org/docs/trunk/reference/varnishtest.html
https://varnish-cache.org/docs/trunk/reference/vtc.html
---
 doc/regression-testing.txt | 78 +-
 reg-tests/README   | 69 
 2 files changed, 78 insertions(+), 69 deletions(-)

diff --git a/doc/regression-testing.txt b/doc/regression-testing.txt
index 2a96b0cd..77ebaecb 100644
--- a/doc/regression-testing.txt
+++ b/doc/regression-testing.txt
@@ -1,74 +1,82 @@
-   +-+
-   | HAProxy regression testing with varnishtest |
-   +-+
+   +---+
+   | HAProxy regression testing with vtest |
+   +---+
 
 
 The information found in this file are a short starting guide to help you to
 write VTC (Varnish Test Case) scripts (or VTC files) for haproxy regression 
testing.
 Such VTC files are currently used to test Varnish cache application developed 
by
 Poul-Henning Kamp. A very big thanks you to him for having helped you to add
-our haproxy C modules to varnishtest tool.
+our haproxy C modules to vtest tool. Note that vtest was formerly developed for
+varnish cache reg testing and was named varnishtest. vtest is an haproxy 
specific
+version of varnishtest program which reuses the non varnish cache specific 
code.
 
 A lot of general information about how to write VTC files may be found in 
'man/vtc.7'
-manual. It is *highly* recommended to read this manual before asking. This
-documentation only deals with the varnishtest support for haproxy.
+manual of varnish cache sources directory or directly on the web here:
 
+https://varnish-cache.org/docs/trunk/reference/vtc.html
 
-varnishtest installation
+It is *highly* recommended to read this manual before asking on the haproxy ML. 
This
+documentation only deals with the vtest support for haproxy.
+
+
+vtest installation
 
 
-To use varnishtest you will have to download and compile the recent Varnish 
cache
-sources found at https://github.com/varnishcache/varnish-cache.
+To use vtest you will have to download and compile the recent vtest
+sources found at https://github.com/vtest/VTest.
 
-To compile Varnish cache :
+To compile vtest:
 
-$ ./autogen.sh
-$ ./configure
-$ make
+$ cd VTest
+$ make vtest
 
-The varnishtest sources may be found in 'bin/varnishtest' directory.
-'bin/varnishtest/tests' is plenty of VTC files for Varnish cache. After having
-compiled these sources, the varnishtest executable location is
-'bin/varnishtest/varnishtest'.
+Note that varnishtest may also be compiled, but not without the varnish cache
+sources already compiled:
 
-varnishtest is able to search for the haproxy executable file it is supposed to
-launch in the PATH environment variable. To force the executable to be used by
-varnishtest, the HAPROXY_PROGRAM environment variable for varnishtest may be
-typically set as follows:
+$ VARNISH_SRC=<...> make varnishtest
 
- $ HAPROXY_PROGRAM=~/srcs/haproxy/haproxy varnishtest ...
+After having compiled these sources, the vtest executable location is at the
+root of the vtest sources directory.
 
 
-varnistest exectution
+vtest execution
 -
 
-varnishtest program comes with interesting options. The most interesting are:
+vtest is able to search for the haproxy executable file it is supposed to
+launch thanks to the PATH environment variable. To force the executable to be 
used by
+vtest, the HAPROXY_PROGRAM environment variable for vtest may be
+typically set as follows:
+
+ $ HAPROXY_PROGRAM=~/srcs/haproxy/haproxy vtest ...
+
+vtest program comes with interesting options. The most interesting are:
 
 -t  Timeout in seconds to abort the test if some launched program
--v  By default, varnishtest does not dump the outputs of processus it 
launched
+-v  By default, vtest does not dump the outputs of processes it launched
 when the test passes. With this option the outputs are dumped even
 when the test passes.
 -L  to always keep the temporary VTC directories.
 -l  to keep the temporary VTC directories only when the test fails.
 
-About haproxy when launched by varnishtest, -d option is enabled by default.
+About haproxy, when launched by vtest, -d option is enabled by default.
 
 
 How to write VTC files
 --
 
-A VTC file must start with a "varnishtest" command line followed by a 
descriptive
-line enclosed by double quotes. This is not specific to the VTC files for 
haproxy.
+A VTC file must start with a "varnishtest" or "vtest" command 

[PATCH 0/2] Switch to vtest.

2019-01-11 Thread flecaille
From: Frédéric Lécaille 

Hi ML,

With these patches, haproxy switches to the new varnish cache reg testing tool
named vtest, formerly known as varnishtest.

From the user point of view, there are not many differences compared to the
usage of varnishtest. Before, we started the reg testing process as follows:

   $ VARNISHTEST_PROGRAM=<...> make reg-tests

now we have to run:

   $ VTEST_PROGRAM=<...> make reg-tests

More information may be found here: https://github.com/vtest/VTest

Fred.


Frédéric Lécaille (2):
  REGTEST: Adapt reg test doc files to vtest.
  REGTEST: Switch to vtest.

 doc/regression-testing.txt | 78 +-
 reg-tests/README   | 67 ++-
 scripts/run-regtests.sh| 28 -
 3 files changed, 96 insertions(+), 77 deletions(-)

-- 
2.11.0



[PATCH 1/2] REGTEST: Switch to vtest.

2019-01-11 Thread flecaille
From: Frédéric Lécaille 

This patch replaces the usage of the former varnish cache reg
testing program, named varnishtest, with the new standalone one: vtest.
---
 Makefile| 10 +-
 scripts/run-regtests.sh | 28 ++--
 2 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/Makefile b/Makefile
index 6e77b073..10be7cc3 100644
--- a/Makefile
+++ b/Makefile
@@ -101,7 +101,7 @@
 #   SUBVERS: add a sub-version (eg: platform, model, ...).
 #   VERDATE: force haproxy's release date.
 #
-#   VARNISHTEST_PROGRAM : location of the varnishtest program to run reg-tests.
+#   VTEST_PROGRAM : location of the vtest program to run reg-tests.
 
 # verbosity: pass V=1 for verbose shell invocation
 V = 0
@@ -1113,10 +1113,10 @@ reg-tests:
 
 reg-tests-help:
@echo
-   @echo "To launch the reg tests for haproxy, first export to your 
environment VARNISHTEST_PROGRAM variable to point to your varnishtest program:"
-   @echo "$$ export VARNISHTEST_PROGRAM=/opt/local/bin/varnishtest"
+   @echo "To launch the reg tests for haproxy, first export to your 
environment VTEST_PROGRAM variable to point to your vtest program:"
+   @echo "$$ export VTEST_PROGRAM=/opt/local/bin/vtest"
@echo "or"
-   @echo "$$ setenv VARNISHTEST_PROGRAM /opt/local/bin/varnishtest"
+   @echo "$$ setenv VTEST_PROGRAM /opt/local/bin/vtest"
@echo
@echo "The same thing may be done to set your haproxy program with 
HAPROXY_PROGRAM but with ./haproxy as default value."
@echo
@@ -1124,7 +1124,7 @@ reg-tests-help:
@echo "$$ make reg-tests"
@echo
@echo "You can also set the programs to be used on the command line:"
-   @echo "$$ VARNISHTEST_PROGRAM=<...> HAPROXY_PROGRAM=<...> make 
reg-tests"
+   @echo "$$ VTEST_PROGRAM=<...> HAPROXY_PROGRAM=<...> make reg-tests"
@echo
@echo "To run tests with specific levels:"
@echo "$$ LEVEL=1,3,4   make reg-tests  #list of levels"
diff --git a/scripts/run-regtests.sh b/scripts/run-regtests.sh
index e648fa0a..8469163b 100755
--- a/scripts/run-regtests.sh
+++ b/scripts/run-regtests.sh
@@ -13,11 +13,11 @@ _help()
 run-regtests.sh ./tests1 ./tests2
 
   Parameters:
---j , To run varnishtest with multiple jobs / threads for a faster 
overall result
+--j , To run vtest with multiple jobs / threads for a faster overall 
result
   run-regtests.sh ./fasttest --j 16
 
 --v, to run verbose
-  run-regtests.sh --v, disables the default varnishtest 'quiet' parameter
+  run-regtests.sh --v, disables the default vtest 'quiet' parameter
 
 --debug to show test logs on standard ouput (implies --v)
   run-regtests.sh --debug
@@ -25,8 +25,8 @@ _help()
 --keep-logs to keep all log directories (by default kept if test fails)
   run-regtests.sh --keep-logs
 
---varnishtestparams , passes custom ARGS to varnishtest
-  run-regtests.sh --varnishtestparams "-n 10"
+--vtestparams , passes custom ARGS to vtest
+  run-regtests.sh --vtestparams "-n 10"
 
 --clean to cleanup previous reg-tests log directories and exit
   run-regtests.sh --clean
@@ -54,12 +54,12 @@ _help()
 #REQUIRE_VERSION=0.0
 #REQUIRE_VERSION_BELOW=99.9
 
-  Configure environment variables to set the haproxy and varnishtest binaries 
to use
+  Configure environment variables to set the haproxy and vtest binaries to use
 setenv HAPROXY_PROGRAM /usr/local/sbin/haproxy
-setenv VARNISHTEST_PROGRAM /usr/local/bin/varnishtest
+setenv VTEST_PROGRAM /usr/local/bin/vtest
   or
 export HAPROXY_PROGRAM=/usr/local/sbin/haproxy
-export VARNISHTEST_PROGRAM=/usr/local/bin/varnishtest
+export VTEST_PROGRAM=/usr/local/bin/vtest
 EOF
   exit 0
 }
@@ -248,8 +248,8 @@ _process() {
   jobcount="$2"
   shift
   ;;
---varnishtestparams)
-  varnishtestparams="$2"
+--vtestparams)
+  vtestparams="$2"
   shift
   ;;
 --v)
@@ -294,7 +294,7 @@ _version() {
 
 
 HAPROXY_PROGRAM="${HAPROXY_PROGRAM:-${PWD}/haproxy}"
-VARNISHTEST_PROGRAM="${VARNISHTEST_PROGRAM:-varnishtest}"
+VTEST_PROGRAM="${VTEST_PROGRAM:-vtest}"
 TESTDIR="${TMPDIR:-/tmp}"
 REGTESTS=""
 
@@ -315,8 +315,8 @@ if ! [ -x "$(command -v $HAPROXY_PROGRAM)" ]; then
   echo "haproxy not found in path, please specify HAPROXY_PROGRAM environment 
variable"
   preparefailed=1
 fi
-if ! [ -x "$(command -v $VARNISHTEST_PROGRAM)" ]; then
-  echo "varnishtest not found in path, please specify VARNISHTEST_PROGRAM 
environment variable"
+if ! [ -x "$(command -v $VTEST_PROGRAM)" ]; then
+  echo "vtest not found in path, please specify VTEST_PROGRAM environment 
variable"
   preparefailed=1
 fi
 if [ $preparefailed ]; then
@@ -424,14 +424,14 @@ else
   done
 fi
 
-echo "## Starting varnishtest 
##"
+echo "## 

Re: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

2019-01-11 Thread Frederic Lecaille

On 1/11/19 12:35 AM, Cyril Bonté wrote:

Hi all,

Le 08/01/2019 à 10:06, Willy Tarreau a écrit :

On Tue, Jan 08, 2019 at 09:31:22AM +0100, Frederic Lecaille wrote:
Indeed this script could work with a short mailer timeout before 
af4021e6
commit. Another git bisect shows that 53216e7d introduced the email 
bombing

issue.

Note that 33a09a5f refers to 53216e7d commit.

I am not sure this can help.


Sure it helps, at least to know whom to ping :-)


Well, from what I've seen with a small test, I've a different conclusion 
about the commit which introduced the issue. It looks to have been 
introduced earlier with commit 0108bb3e4 "MEDIUM: mailers: Init alerts 
during conf parsing and refactor their processing"

http://git.haproxy.org/?p=haproxy.git;a=commit;h=0108bb3e4


You are right.
The feature was broken by af4021e6 and fixed by 53216e7d, which made
the bug occur again.

Thanks Cyril.

Fred.