[PATCH 00/12] Peers SSL/TSL support

2019-01-16 Thread flecaille
From: Frédéric Lécaille 

Hi ML, Willy,

Here is a new series of patches for this feature with Willy's remarks
taken into account. It was easy to break something ;) but I think
this series does not break the current usage of "peers" sections.

I preferred to work from the previous series without rebasing anything
except for the doc patch.

So, the first nine patches (#1 up to #9 included) are the same as in the
first series. Then come the last three patches, with the doc patch as
the last one.

I will check how to test all this ASAP. I think we have to modify
vtest/varnishtest to resolve macros created by different haproxy
processes. The expansion must be done at the very last moment: when all
the macros have been created.

Fred.

Frédéric Lécaille (12):
  MINOR: cfgparse: Extract some code to be re-used.
  CLEANUP: cfgparse: Return asap from cfg_parse_peers().
  CLEANUP: cfgparse: Code reindentation.
  MINOR: cfgparse: Useless frontend initialization in "peers" sections.
  MINOR: cfgparse: Rework peers frontend init.
  MINOR: cfgparse: Simplification.
  MINOR: cfgparse: Make "peer" lines be parsed as "server" lines.
  MINOR: peers: Make outgoing connection to SSL/TLS peers work.
  MINOR: cfgparse: SSL/TLS binding in "peers" sections.
  MINOR: cfgparse: peers: Be less confusing.
  MINOR: peers: Less confusing peer binding parsing.
  DOC: peers: SSL/TLS documentation for "peers"

 doc/configuration.txt  |  40 -
 include/proto/peers.h  |  26 
 include/proto/server.h |   2 +-
 include/types/peers.h  |   1 +
 src/cfgparse-listen.c  |   2 +-
 src/cfgparse.c | 405 -
 src/peers.c|   7 +-
 src/server.c   |  13 +-
 8 files changed, 380 insertions(+), 116 deletions(-)

-- 
2.11.0



[PATCH 02/12] CLEANUP: cfgparse: Return asap from cfg_parse_peers().

2019-01-16 Thread flecaille
From: Frédéric Lécaille 

Avoid useless code indentation.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 6fde7c9f..6670a861 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -644,7 +644,10 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
newpeer->sock_init_arg = NULL;
HA_SPIN_INIT(&newpeer->lock);
 
-   if (strcmp(newpeer->id, localpeer) == 0) {
+   if (strcmp(newpeer->id, localpeer) != 0)
+   /* We are done. */
+   goto out;
+
/* Current is local peer, it define a frontend */
newpeer->local = 1;
cfg_peers->local = newpeer;
@@ -688,7 +691,6 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
err_code |= ERR_FATAL;
goto out;
}
-   }
} /* neither "peer" nor "peers" */
else if (!strcmp(args[0], "disabled")) {  /* disables this peers 
section */
curpeers->state = PR_STSTOPPED;
-- 
2.11.0
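
The early-exit style this patch applies can be shown standalone; the sketch below uses hypothetical names (parse_peer_id) rather than the patched HAProxy function:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Standalone sketch (hypothetical names, not the HAProxy function itself)
 * of the early-exit style applied by this patch: bailing out with
 * "goto out" for the remote-peer case removes one indentation level
 * from the local-peer setup that follows. */
static int parse_peer_id(const char *id, const char *localpeer)
{
    int err = 0;

    if (strcmp(id, localpeer) != 0)
        /* Remote peer: we are done. */
        goto out;

    /* Local-peer setup now stays at the first indentation level. */
    printf("%s is the local peer\n", id);
 out:
    return err;
}
```

The single `out:` label also keeps a unique exit point for error-code handling, which is the idiom the surrounding cfg_parse_peers() code already uses.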



[PATCH 10/12] MINOR: cfgparse: peers: Be less confusing.

2019-01-16 Thread flecaille
From: Frédéric Lécaille 

Make the "bind" line also parse the local peer bind address.
Add a "default-bind" option to parse the binding options, except the bind
address.
Prevent "bind" lines from being mixed with "peer" lines to help in
handling the migration.
---
 src/cfgparse.c | 153 +
 1 file changed, 122 insertions(+), 31 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index ef69b37d..2a6a4839 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -516,7 +516,8 @@ static int init_peers_frontend(const char *file, int 
linenum,
p->id = strdup(id);
free(p->conf.file);
p->conf.args.file = p->conf.file = strdup(file);
-   p->conf.args.line = p->conf.line = linenum;
+   if (linenum != -1)
+   p->conf.args.line = p->conf.line = linenum;
 
return 0;
 }
@@ -546,6 +547,44 @@ static struct bind_conf *bind_conf_uniq_alloc(struct proxy 
*p,
 
return bind_conf;
 }
+
+/*
+ * Allocate a new struct peer parsed at line <linenum> in file <file>
+ * to be added to <peers>.
+ * Returns the new allocated structure if succeeded, NULL if not.
+ */
+static struct peer *cfg_peers_add_peer(struct peers *peers,
+   const char *file, int linenum,
+   const char *id, int local)
+{
+   struct peer *p;
+
+   p = calloc(1, sizeof *p);
+   if (!p) {
+   ha_alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+   return NULL;
+   }
+
+   /* the peers are linked backwards first */
+   peers->count++;
+   p->next = peers->remote;
+   peers->remote = p;
+   p->conf.file = strdup(file);
+   p->conf.line = linenum;
+   p->last_change = now.tv_sec;
+   p->xprt  = xprt_get(XPRT_RAW);
+   p->sock_init_arg = NULL;
+   HA_SPIN_INIT(&p->lock);
+   if (id)
+   p->id = strdup(id);
+   if (local) {
+   p->local = 1;
+   peers->local = p;
+   }
+
+   return p;
+}
+
 /*
 * Parse a line in a <listen>, <frontend> or <backend> section.
  * Returns the error code, 0 if OK, or any combination of :
@@ -565,8 +604,9 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
struct listener *l;
int err_code = 0;
char *errmsg = NULL;
+   static int bind_line, peer_line;
 
-   if (strcmp(args[0], "bind") == 0) {
+   if (strcmp(args[0], "bind") == 0 || strcmp(args[0], "default-bind") == 
0) {
int cur_arg;
static int kws_dumped;
struct bind_conf *bind_conf;
@@ -582,6 +622,44 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
 
bind_conf = bind_conf_uniq_alloc(curpeers->peers_fe, file, 
linenum,
 NULL, xprt_get(XPRT_RAW));
+   if (*args[0] == 'b') {
+   struct sockaddr_storage *sk;
+   int port, port1, port2;
+
+   if (peer_line) {
+   ha_alert("parsing [%s:%d] : mixing \"peer\" and 
\"bind\" line is forbidden\n", file, linenum);
+   err_code |= ERR_ALERT | ERR_FATAL;
+   goto out;
+   }
+
+   sk = str2sa_range(args[1], &port, &port1, &port2, &errmsg, NULL, NULL, 0);
+   if (!sk) {
+   ha_alert("parsing [%s:%d]: '%s %s': %s\n",
+file, linenum, args[0], 
args[1], errmsg);
+   err_code |= ERR_ALERT | ERR_FATAL;
+   goto out;
+   }
+
+   bind_line = 1;
+   if (cfg_peers->local) {
+   newpeer = cfg_peers->local;
+   }
+   else {
+   /* This peer is local.
+* Note that we do not set the peer ID. This 
latter is initialized
+* when parsing "peer" or "server" line.
+*/
+   newpeer = cfg_peers_add_peer(curpeers, file, 
linenum, NULL, 1);
+   if (!newpeer) {
+   err_code |= ERR_ALERT | ERR_ABORT;
+   goto out;
+   }
+   }
+   newpeer->addr = *sk;
+   newpeer->proto = 
protocol_by_family(newpeer->addr.ss_family);
+   cur_arg++;
+   }
+
while (*args[cur_arg] && (kw = bind_find_kw(args[cur_arg]))) {
int ret;
 
@@ -616,13 +694,15 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
}
}
else if 
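
The mutual-exclusion check this patch introduces with the static bind_line/peer_line flags can be sketched standalone; names below are hypothetical, and the sketch assumes (as the commit message suggests) that both mixing directions are rejected:

```c
#include <assert.h>

/* Hypothetical standalone sketch of the static-flag scheme used to
 * forbid mixing "bind" and "peer" lines in one "peers" section. */
static int bind_line, peer_line;

/* Returns 0 if the keyword style is accepted, -1 if it conflicts with
 * the style already used in the section. */
static int peers_check_style(char kw0)
{
    if (kw0 == 'b') {           /* "bind" or "default-bind" */
        if (peer_line)
            return -1;          /* "peer" lines already used */
        bind_line = 1;
    } else {                    /* "peer" */
        if (bind_line)
            return -1;          /* "bind" lines already used */
        peer_line = 1;
    }
    return 0;
}
```

Static function-scope-like flags are enough here because a configuration file is parsed once per process, exactly as in the patch.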

[PATCH 09/12] MINOR: cfgparse: SSL/TLS binding in "peers" sections.

2019-01-16 Thread flecaille
From: Frédéric Lécaille 

This patch makes "bind" work in "peers" sections. All "bind" settings
are supported, except the ip:port parameters, which are provided on the
"peer" (or "server") line matching the local peer.
After the configuration files have been parsed, ->prepare_bind_conf is
run to initialize all the SSL/TLS stuff for the local peer.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 95 ++
 1 file changed, 90 insertions(+), 5 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index f6f25143..ef69b37d 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -521,6 +521,31 @@ static int init_peers_frontend(const char *file, int 
linenum,
return 0;
 }
 
+/* Only change ->file, ->line and ->arg struct bind_conf member values
+ * if already present.
+ */
+static struct bind_conf *bind_conf_uniq_alloc(struct proxy *p,
+  const char *file, int line,
+  const char *arg, struct xprt_ops 
*xprt)
+{
+   struct bind_conf *bind_conf;
+
+   if (!LIST_ISEMPTY(>conf.bind)) {
+   bind_conf = LIST_ELEM((&p->conf.bind)->n, typeof(bind_conf), by_fe);
+   free(bind_conf->file);
+   bind_conf->file = strdup(file);
+   bind_conf->line = line;
+   if (arg) {
+   free(bind_conf->arg);
+   bind_conf->arg = strdup(arg);
+   }
+   }
+   else {
+   bind_conf = bind_conf_alloc(p, file, line, arg, xprt);
+   }
+
+   return bind_conf;
+}
 /*
  * Parse a line in a <listen>, <frontend> or <backend> section.
  * Returns the error code, 0 if OK, or any combination of :
@@ -541,7 +566,56 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
int err_code = 0;
char *errmsg = NULL;
 
-   if (strcmp(args[0], "default-server") == 0) {
+   if (strcmp(args[0], "bind") == 0) {
+   int cur_arg;
+   static int kws_dumped;
+   struct bind_conf *bind_conf;
+   struct bind_kw *kw;
+   char *kws;
+
+   cur_arg = 1;
+
+   if (init_peers_frontend(file, linenum, NULL, curpeers) != 0) {
+   err_code |= ERR_ALERT | ERR_ABORT;
+   goto out;
+   }
+
+   bind_conf = bind_conf_uniq_alloc(curpeers->peers_fe, file, 
linenum,
+NULL, xprt_get(XPRT_RAW));
+   while (*args[cur_arg] && (kw = bind_find_kw(args[cur_arg]))) {
+   int ret;
+
+   ret = kw->parse(args, cur_arg, curpeers->peers_fe, bind_conf, &errmsg);
+   err_code |= ret;
+   if (ret) {
+   if (errmsg && *errmsg) {
+   indent_msg(&errmsg, 2);
+   ha_alert("parsing [%s:%d] : %s\n", 
file, linenum, errmsg);
+   }
+   else
+   ha_alert("parsing [%s:%d]: error 
encountered while processing '%s'\n",
+file, linenum, args[cur_arg]);
+   if (ret & ERR_FATAL)
+   goto out;
+   }
+   cur_arg += 1 + kw->skip;
+   }
+   kws = NULL;
+   if (!kws_dumped) {
+   kws_dumped = 1;
+   bind_dump_kws(&kws);
+   indent_msg(&kws, 4);
+   }
+   if (*args[cur_arg] != 0) {
+   ha_alert("parsing [%s:%d] : unknown keyword '%s' in 
'%s' section.%s%s\n",
+file, linenum, args[cur_arg], cursection,
+kws ? " Registered keywords :" : "", kws ? 
kws: "");
+   free(kws);
+   err_code |= ERR_ALERT | ERR_FATAL;
+   goto out;
+   }
+   }
+   else if (strcmp(args[0], "default-server") == 0) {
if (init_peers_frontend(file, linenum, NULL, curpeers) != 0) {
err_code |= ERR_ALERT | ERR_ABORT;
goto out;
@@ -641,7 +715,7 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
/* Current is local peer, it define a frontend */
newpeer->local = 1;
 
-   bind_conf = bind_conf_alloc(curpeers->peers_fe, file, linenum, 
args[2], xprt_get(XPRT_RAW));
+   bind_conf = bind_conf_uniq_alloc(curpeers->peers_fe, file, 
linenum, args[2], xprt_get(XPRT_RAW));
 
if (!str2listener(args[2], curpeers->peers_fe, bind_conf, file, linenum, &errmsg)) {
if (errmsg && *errmsg) {
@@ -3638,9 +3712,20 @@ 

[PATCH 05/12] MINOR: cfgparse: Rework peers frontend init.

2019-01-16 Thread flecaille
From: Frédéric Lécaille 

Even if this is not already the case, we suppose that the frontend of a
"peers" section may have been initialized outside of a "peer" line, so
we separate the frontend initialization from the binding initialization.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 50 --
 1 file changed, 24 insertions(+), 26 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index e3e96b51..22a3da72 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -659,39 +659,37 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
/* Current is local peer, it define a frontend */
newpeer->local = 1;
 
-   if (!curpeers->peers_fe) {
-   if (init_peers_frontend(file, linenum, args[1], 
curpeers) != 0) {
+   if (!curpeers->peers_fe &&
+   init_peers_frontend(file, linenum, args[1], curpeers) != 0) 
{
ha_alert("parsing [%s:%d] : out of memory.\n", 
file, linenum);
err_code |= ERR_ALERT | ERR_ABORT;
goto out;
-   }
-
-   bind_conf = bind_conf_alloc(curpeers->peers_fe, file, 
linenum, args[2], xprt_get(XPRT_RAW));
+   }
 
-   if (!str2listener(args[2], curpeers->peers_fe, bind_conf, file, linenum, &errmsg)) {
-   if (errmsg && *errmsg) {
-   indent_msg(&errmsg, 2);
-   ha_alert("parsing [%s:%d] : '%s %s' : 
%s\n", file, linenum, args[0], args[1], errmsg);
-   }
-   else
-   ha_alert("parsing [%s:%d] : '%s %s' : 
error encountered while parsing listening address %s.\n",
-file, linenum, args[0], 
args[1], args[2]);
-   err_code |= ERR_FATAL;
-   goto out;
-   }
+   bind_conf = bind_conf_alloc(curpeers->peers_fe, file, linenum, 
args[2], xprt_get(XPRT_RAW));
 
-   list_for_each_entry(l, &bind_conf->listeners, by_bind) {
-   l->maxaccept = 1;
-   l->maxconn = curpeers->peers_fe->maxconn;
-   l->backlog = curpeers->peers_fe->backlog;
-   l->accept = session_accept_fd;
-   l->analysers |=  curpeers->peers_fe->fe_req_ana;
-   l->default_target = 
curpeers->peers_fe->default_target;
-   l->options |= LI_O_UNLIMITED; /* don't make the 
peers subject to global limits */
-   global.maxsock += l->maxconn;
+   if (!str2listener(args[2], curpeers->peers_fe, bind_conf, file, linenum, &errmsg)) {
+   if (errmsg && *errmsg) {
+   indent_msg(&errmsg, 2);
+   ha_alert("parsing [%s:%d] : '%s %s' : %s\n", 
file, linenum, args[0], args[1], errmsg);
}
-   cfg_peers->local = newpeer;
+   else
+   ha_alert("parsing [%s:%d] : '%s %s' : error 
encountered while parsing listening address %s.\n",
+file, linenum, args[0], args[1], 
args[2]);
+   err_code |= ERR_FATAL;
+   goto out;
}
+   list_for_each_entry(l, &bind_conf->listeners, by_bind) {
+   l->maxaccept = 1;
+   l->maxconn = curpeers->peers_fe->maxconn;
+   l->backlog = curpeers->peers_fe->backlog;
+   l->accept = session_accept_fd;
+   l->analysers |=  curpeers->peers_fe->fe_req_ana;
+   l->default_target = curpeers->peers_fe->default_target;
+   l->options |= LI_O_UNLIMITED; /* don't make the peers 
subject to global limits */
+   global.maxsock += l->maxconn;
+   }
+   cfg_peers->local = newpeer;
} /* neither "peer" nor "peers" */
else if (!strcmp(args[0], "disabled")) {  /* disables this peers 
section */
curpeers->state = PR_STSTOPPED;
-- 
2.11.0



[PATCH 12/12] DOC: peers: SSL/TLS documentation for "peers"

2019-01-16 Thread flecaille
From: Frédéric Lécaille 

---
 doc/configuration.txt | 40 +++-
 1 file changed, 39 insertions(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 888515fb..960f1948 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1928,15 +1928,34 @@ peers <peersect>
   Creates a new peer list with name <peersect>. It is an independent section,
   which is referenced by one or more stick-tables.
 
+bind [<address>]:<port_range> [, ...] [param*]
+  Defines the binding parameters of the local peer of this "peers" section.
+  Such lines are not supported with "peer" line in the same "peers" section.
+
 disabled
   Disables a peers section. It disables both listening and any synchronization
   related to this section. This is provided to disable synchronization of stick
   tables without having to comment out all "peers" references.
 
+default-bind [param*]
+  Defines the binding parameters for the local peer, except its address.
+
+default-server [param*]
+  Change default options for a server in a "peers" section.
+
+  Arguments:
+    <param*>  is a list of parameters for this server. The "default-server"
+              keyword accepts an important number of options and has a complete
+              section dedicated to it. Please refer to section 5 for more
+              details.
+
+
+  See also: "server" and section 5 about server options
+
 enable
   This re-enables a disabled peers section which was previously disabled.
 
-peer <peername> <ip>:<port>
+peer <peername> <ip>:<port> [param*]
   Defines a peer inside a peers section.
   If <peername> is set to the local peer name (by default hostname, or forced
   using "-L" command line option), haproxy will listen for incoming remote peer
@@ -1955,7 +1974,20 @@ peer <peername> <ip>:<port>
   You may want to reference some environment variables in the address
   parameter, see section 2.3 about environment variables.
 
+  Note: "peer" keyword may transparently be replaced by "server" keyword (see
+  "server" keyword explanation below).
+
+server <peername> [<ip>:<port>] [param*]
+  As previously mentioned, the "peer" keyword may be replaced by the "server"
+  keyword, with support for all "server" parameters found in section 5.2.
+  If the underlying peer is local, the <ip>:<port> parameters must not be
+  present. These parameters must be provided on a "bind" line (see "bind"
+  keyword of this "peers" section).
+  Some of these parameters are irrelevant for "peers" sections.
+
+
   Example:
+# The old way.
 peers mypeers
 peer haproxy1 192.168.0.1:1024
 peer haproxy2 192.168.0.2:1024
@@ -1970,6 +2002,12 @@ peer  :
 server srv1 192.168.0.30:80
 server srv2 192.168.0.31:80
 
+   Example:
+ peers mypeers
+ bind 127.0.0.11:10001 ssl crt mycerts/pem
+ default-server ssl verify none
+ server hostA  127.0.0.10:10000
+ server hostB  #local peer
 
 3.6. Mailers
 
-- 
2.11.0



[PATCH 03/12] CLEANUP: cfgparse: Code reindentation.

2019-01-16 Thread flecaille
From: Frédéric Lécaille 

This should make the series of patches easier to review.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 72 +-
 1 file changed, 36 insertions(+), 36 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 6670a861..04e36e8c 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -648,49 +648,49 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
/* We are done. */
goto out;
 
-   /* Current is local peer, it define a frontend */
-   newpeer->local = 1;
-   cfg_peers->local = newpeer;
-
-   if (!curpeers->peers_fe) {
-   if (init_peers_frontend(file, linenum, args[1], 
curpeers) != 0) {
-   ha_alert("parsing [%s:%d] : out of 
memory.\n", file, linenum);
-   err_code |= ERR_ALERT | ERR_ABORT;
-   goto out;
-   }
+   /* Current is local peer, it define a frontend */
+   newpeer->local = 1;
+   cfg_peers->local = newpeer;
 
-   bind_conf = bind_conf_alloc(curpeers->peers_fe, 
file, linenum, args[2], xprt_get(XPRT_RAW));
+   if (!curpeers->peers_fe) {
+   if (init_peers_frontend(file, linenum, args[1], 
curpeers) != 0) {
+   ha_alert("parsing [%s:%d] : out of memory.\n", 
file, linenum);
+   err_code |= ERR_ALERT | ERR_ABORT;
+   goto out;
+   }
 
-   if (!str2listener(args[2], curpeers->peers_fe, bind_conf, file, linenum, &errmsg)) {
-   if (errmsg && *errmsg) {
-   indent_msg(&errmsg, 2);
-   ha_alert("parsing [%s:%d] : '%s 
%s' : %s\n", file, linenum, args[0], args[1], errmsg);
-   }
-   else
-   ha_alert("parsing [%s:%d] : '%s 
%s' : error encountered while parsing listening address %s.\n",
-file, linenum, 
args[0], args[1], args[2]);
-   err_code |= ERR_FATAL;
-   goto out;
-   }
+   bind_conf = bind_conf_alloc(curpeers->peers_fe, file, 
linenum, args[2], xprt_get(XPRT_RAW));
 
-   list_for_each_entry(l, &bind_conf->listeners, by_bind) {
-   l->maxaccept = 1;
-   l->maxconn = 
curpeers->peers_fe->maxconn;
-   l->backlog = 
curpeers->peers_fe->backlog;
-   l->accept = session_accept_fd;
-   l->analysers |=  
curpeers->peers_fe->fe_req_ana;
-   l->default_target = 
curpeers->peers_fe->default_target;
-   l->options |= LI_O_UNLIMITED; /* don't 
make the peers subject to global limits */
-   global.maxsock += l->maxconn;
+   if (!str2listener(args[2], curpeers->peers_fe, bind_conf, file, linenum, &errmsg)) {
+   if (errmsg && *errmsg) {
+   indent_msg(&errmsg, 2);
+   ha_alert("parsing [%s:%d] : '%s %s' : 
%s\n", file, linenum, args[0], args[1], errmsg);
}
-   }
-   else {
-   ha_alert("parsing [%s:%d] : '%s %s' : local 
peer name already referenced at %s:%d.\n",
-file, linenum, args[0], args[1],
-curpeers->peers_fe->conf.file, 
curpeers->peers_fe->conf.line);
+   else
+   ha_alert("parsing [%s:%d] : '%s %s' : 
error encountered while parsing listening address %s.\n",
+file, linenum, args[0], 
args[1], args[2]);
err_code |= ERR_FATAL;
goto out;
}
+
+   list_for_each_entry(l, &bind_conf->listeners, by_bind) {
+   l->maxaccept = 1;
+   l->maxconn = curpeers->peers_fe->maxconn;
+   l->backlog = curpeers->peers_fe->backlog;
+   l->accept = session_accept_fd;
+   

[PATCH 06/12] MINOR: cfgparse: Simplification.

2019-01-16 Thread flecaille
From: Frédéric Lécaille 

Make init_peers_frontend() callable without having to check whether
there is something to do or not.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 22a3da72..715faaef 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -492,6 +492,10 @@ static int init_peers_frontend(const char *file, int 
linenum,
 {
struct proxy *p;
 
+   if (peers->peers_fe)
+   /* Nothing to do */
+   return 0;
+
p = calloc(1, sizeof *p);
if (!p) {
ha_alert("parsing [%s:%d] : out of memory.\n", file, linenum);
@@ -659,8 +663,7 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
/* Current is local peer, it define a frontend */
newpeer->local = 1;
 
-   if (!curpeers->peers_fe &&
-   init_peers_frontend(file, linenum, args[1], curpeers) != 0) 
{
+   if (init_peers_frontend(file, linenum, args[1], curpeers) != 0) 
{
ha_alert("parsing [%s:%d] : out of memory.\n", 
file, linenum);
err_code |= ERR_ALERT | ERR_ABORT;
goto out;
-- 
2.11.0
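
The simplification can be illustrated standalone: moving the "already initialized" test into the init routine itself lets every caller invoke it unconditionally. The sketch below uses hypothetical names (peers_sketch, init_frontend), not the HAProxy structures:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical standalone sketch of the simplification: the init
 * routine tests for prior initialization itself, so call sites no
 * longer need their own "if (!peers->peers_fe)" guard. */
struct peers_sketch {
    void *peers_fe;             /* stands in for the frontend proxy */
};

static int init_frontend(struct peers_sketch *peers)
{
    if (peers->peers_fe)
        return 0;               /* nothing to do, already initialized */

    peers->peers_fe = calloc(1, 16);
    return peers->peers_fe ? 0 : -1;
}
```

Making the function idempotent is what allows later patches to call it from "bind", "default-server" and "peer" lines alike, whichever comes first.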



[PATCH 04/12] MINOR: cfgparse: Useless frontend initialization in "peers" sections.

2019-01-16 Thread flecaille
From: Frédéric Lécaille 

Use the ->local "peers" struct member to flag that a "peers" section
frontend has been initialized. This makes it possible to initialize the
frontend of "peers" sections on lines other than "peer" lines.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 17 +
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 04e36e8c..e3e96b51 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -648,9 +648,16 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
/* We are done. */
goto out;
 
+   if (cfg_peers->local) {
+   ha_alert("parsing [%s:%d] : '%s %s' : local peer name 
already referenced at %s:%d.\n",
+file, linenum, args[0], args[1],
+curpeers->peers_fe->conf.file, 
curpeers->peers_fe->conf.line);
+   err_code |= ERR_FATAL;
+   goto out;
+   }
+
/* Current is local peer, it define a frontend */
newpeer->local = 1;
-   cfg_peers->local = newpeer;
 
if (!curpeers->peers_fe) {
if (init_peers_frontend(file, linenum, args[1], 
curpeers) != 0) {
@@ -683,13 +690,7 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
l->options |= LI_O_UNLIMITED; /* don't make the 
peers subject to global limits */
global.maxsock += l->maxconn;
}
-   }
-   else {
-   ha_alert("parsing [%s:%d] : '%s %s' : local peer name 
already referenced at %s:%d.\n",
-file, linenum, args[0], args[1],
-curpeers->peers_fe->conf.file, 
curpeers->peers_fe->conf.line);
-   err_code |= ERR_FATAL;
-   goto out;
+   cfg_peers->local = newpeer;
}
} /* neither "peer" nor "peers" */
else if (!strcmp(args[0], "disabled")) {  /* disables this peers 
section */
-- 
2.11.0



[PATCH 11/12] MINOR: peers: Less confusing peer binding parsing.

2019-01-16 Thread flecaille
From: Frédéric Lécaille 

With this patch the "server" lines no longer parse the bind address
for local peers.
We no longer use list_for_each_entry() to set the "peers" section
listener parameters because there is only one listener per "peers"
section.
---
 include/proto/server.h |  2 +-
 src/cfgparse-listen.c  |  2 +-
 src/cfgparse.c | 58 --
 src/server.c   | 10 ++---
 4 files changed, 46 insertions(+), 26 deletions(-)

diff --git a/include/proto/server.h b/include/proto/server.h
index b3a9b877..436ffb5d 100644
--- a/include/proto/server.h
+++ b/include/proto/server.h
@@ -39,7 +39,7 @@
 int srv_downtime(const struct server *s);
 int srv_lastsession(const struct server *s);
 int srv_getinter(const struct check *check);
-int parse_server(const char *file, int linenum, char **args, struct proxy 
*curproxy, struct proxy *defproxy);
+int parse_server(const char *file, int linenum, char **args, struct proxy 
*curproxy, struct proxy *defproxy, int parse_addr);
 int update_server_addr(struct server *s, void *ip, int ip_sin_family, const 
char *updater);
 const char *update_server_addr_port(struct server *s, const char *addr, const 
char *port, char *updater);
 struct server *server_find_by_id(struct proxy *bk, int id);
diff --git a/src/cfgparse-listen.c b/src/cfgparse-listen.c
index aa2d8608..fdbdfd34 100644
--- a/src/cfgparse-listen.c
+++ b/src/cfgparse-listen.c
@@ -677,7 +677,7 @@ int cfg_parse_listen(const char *file, int linenum, char 
**args, int kwm)
if (!strcmp(args[0], "server") ||
!strcmp(args[0], "default-server") ||
!strcmp(args[0], "server-template")) {
-   err_code |= parse_server(file, linenum, args, curproxy, 
);
+   err_code |= parse_server(file, linenum, args, curproxy, 
, 1);
if (err_code & ERR_FATAL)
goto out;
}
diff --git a/src/cfgparse.c b/src/cfgparse.c
index 2a6a4839..4d0a9ade 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -623,8 +623,7 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
bind_conf = bind_conf_uniq_alloc(curpeers->peers_fe, file, 
linenum,
 NULL, xprt_get(XPRT_RAW));
if (*args[0] == 'b') {
-   struct sockaddr_storage *sk;
-   int port, port1, port2;
+   struct listener *l;
 
if (peer_line) {
ha_alert("parsing [%s:%d] : mixing \"peer\" and 
\"bind\" line is forbidden\n", file, linenum);
@@ -632,13 +631,26 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
goto out;
}
 
-   sk = str2sa_range(args[1], &port, &port1, &port2, &errmsg, NULL, NULL, 0);
-   if (!sk) {
-   ha_alert("parsing [%s:%d]: '%s %s': %s\n",
-file, linenum, args[0], 
args[1], errmsg);
-   err_code |= ERR_ALERT | ERR_FATAL;
+   if (!str2listener(args[1], curpeers->peers_fe, bind_conf, file, linenum, &errmsg)) {
+   if (errmsg && *errmsg) {
+   indent_msg(&errmsg, 2);
+   ha_alert("parsing [%s:%d] : '%s %s' : %s\n", file, linenum, args[0], args[1], errmsg);
+   }
+   else
+   ha_alert("parsing [%s:%d] : '%s %s' : 
error encountered while parsing listening address %s.\n",
+file, linenum, 
args[0], args[1], args[2]);
+   err_code |= ERR_FATAL;
goto out;
}
+   l = LIST_ELEM(bind_conf->listeners.n, typeof(l), 
by_bind);
+   l->maxaccept = 1;
+   l->maxconn = curpeers->peers_fe->maxconn;
+   l->backlog = curpeers->peers_fe->backlog;
+   l->accept = session_accept_fd;
+   l->analysers |=  curpeers->peers_fe->fe_req_ana;
+   l->default_target = curpeers->peers_fe->default_target;
+   l->options |= LI_O_UNLIMITED; /* don't make the peers 
subject to global limits */
+   global.maxsock += l->maxconn;
 
bind_line = 1;
if (cfg_peers->local) {
@@ -655,7 +667,7 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
goto out;
}
}
-   newpeer->addr = *sk;
+   newpeer->addr = l->addr;

[PATCH 08/12] MINOR: peers: Make outgoing connection to SSL/TLS peers work.

2019-01-16 Thread flecaille
From: Frédéric Lécaille 

This patch adds a pointer to a struct server to the peer structure,
which is initialized after having parsed a remote "peer" line.

After having parsed all the peers sections, we run ->prepare_srv to
initialize all the SSL/TLS stuff of the remote peers (or servers).

Remaining thing to do to completely support the peer protocol over
SSL/TLS: make the "bind" keyword be supported in "peers" sections so
that SSL/TLS incoming connections to local peers work.

May be backported to 1.5 and newer.
---
 include/proto/peers.h | 26 ++
 include/types/peers.h |  1 +
 src/cfgparse.c| 13 +++--
 src/peers.c   |  5 +++--
 4 files changed, 41 insertions(+), 4 deletions(-)

diff --git a/include/proto/peers.h b/include/proto/peers.h
index 9d4aaff2..ce4feaa4 100644
--- a/include/proto/peers.h
+++ b/include/proto/peers.h
@@ -25,9 +25,35 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
+#if defined(USE_OPENSSL)
+static inline enum obj_type *peer_session_target(struct peer *p, struct stream 
*s)
+{
+   if (p->srv->use_ssl)
+   return &p->srv->obj_type;
+   else
+   return &s->be->obj_type;
+}
+
+static inline struct xprt_ops *peer_xprt(struct peer *p)
+{
+   return p->srv->use_ssl ? xprt_get(XPRT_SSL) : xprt_get(XPRT_RAW);
+}
+#else
+static inline enum obj_type *peer_session_target(struct peer *p, struct stream 
*s)
+{
+   return &s->be->obj_type;
+}
+
+static inline struct xprt_ops *peer_xprt(struct peer *p)
+{
+   return xprt_get(XPRT_RAW);
+}
+#endif
+
 int peers_init_sync(struct peers *peers);
 void peers_register_table(struct peers *, struct stktable *table);
 void peers_setup_frontend(struct proxy *fe);
diff --git a/include/types/peers.h b/include/types/peers.h
index 58c8c4ee..5200d56b 100644
--- a/include/types/peers.h
+++ b/include/types/peers.h
@@ -67,6 +67,7 @@ struct peer {
struct shared_table *remote_table;
struct shared_table *last_local_table;
struct shared_table *tables;
+   struct server *srv;
__decl_hathreads(HA_SPINLOCK_T lock); /* lock used to handle this peer 
section */
struct peer *next;/* next peer in the list */
 };
diff --git a/src/cfgparse.c b/src/cfgparse.c
index 6d199c97..f6f25143 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -514,6 +514,7 @@ static int init_peers_frontend(const char *file, int 
linenum,
  out:
if (id && !p->id)
p->id = strdup(id);
+   free(p->conf.file);
p->conf.args.file = p->conf.file = strdup(file);
p->conf.args.line = p->conf.line = linenum;
 
@@ -624,9 +625,10 @@ int cfg_parse_peers(const char *file, int linenum, char 
**args, int kwm)
newpeer->sock_init_arg = NULL;
HA_SPIN_INIT(&newpeer->lock);
 
-   if (strcmp(newpeer->id, localpeer) != 0)
-   /* We are done. */
+   if (strcmp(newpeer->id, localpeer) != 0) {
+   newpeer->srv = curpeers->peers_fe->srv;
goto out;
+   }
 
if (cfg_peers->local) {
ha_alert("parsing [%s:%d] : '%s %s' : local peer name 
already referenced at %s:%d.\n",
@@ -3634,6 +3636,13 @@ out_uri_auth_compat:
curpeers->peers_fe = NULL;
}
else {
+   p = curpeers->remote;
+   while (p) {
+   if (p->srv && p->srv->use_ssl &&
+   xprt_get(XPRT_SSL) && 
xprt_get(XPRT_SSL)->prepare_srv)
+   cfgerr += 
xprt_get(XPRT_SSL)->prepare_srv(p->srv);
+   p = p->next;
+   }
if (!peers_init_sync(curpeers)) {
ha_alert("Peers section '%s': out of 
memory, giving up on peers.\n",
 curpeers->id);
diff --git a/src/peers.c b/src/peers.c
index e580f2ca..d4d3859e 100644
--- a/src/peers.c
+++ b/src/peers.c
@@ -39,6 +39,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1996,10 +1997,10 @@ static struct appctx *peer_session_create(struct peers 
*peers, struct peer *peer
if (unlikely((cs = cs_new(conn)) == NULL))
goto out_free_conn;
 
-   conn->target = s->target = &s->be->obj_type;
+   conn->target = s->target = peer_session_target(peer, s);
    memcpy(&conn->addr.to, &peer->addr, sizeof(conn->addr.to));

-   conn_prepare(conn, peer->proto, peer->xprt);
+   conn_prepare(conn, peer->proto, peer_xprt(peer));
    conn_install_mux(conn, &mux_pt_ops, cs, s->be, NULL);
    si_attach_cs(&s->si[1], cs);
 
-- 
2.11.0
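
The compile-time selection done by peer_xprt()/peer_session_target() can be sketched standalone; the names below (SKETCH_USE_SSL, peer_xprt_sketch) are hypothetical stand-ins for the HAProxy build flag and helpers:

```c
#include <assert.h>

/* Hypothetical standalone sketch of the peer_xprt() idea: under a
 * build flag (standing in for HAProxy's USE_OPENSSL), the transport
 * chosen for an outgoing peer connection depends on the per-server
 * "ssl" setting; without SSL support the raw transport is always used. */
#define SKETCH_USE_SSL 1        /* stand-in for USE_OPENSSL */

enum xprt_kind { XPRT_RAW_SKETCH, XPRT_SSL_SKETCH };

struct srv_sketch { int use_ssl; };

static inline enum xprt_kind peer_xprt_sketch(const struct srv_sketch *srv)
{
#if SKETCH_USE_SSL
    return srv->use_ssl ? XPRT_SSL_SKETCH : XPRT_RAW_SKETCH;
#else
    (void)srv;                  /* no SSL compiled in: always raw */
    return XPRT_RAW_SKETCH;
#endif
}
```

Keeping the #if inside small static inline helpers, as the patch does, avoids sprinkling USE_OPENSSL conditionals through the connection-setup path in peers.c.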



[PATCH 01/12] MINOR: cfgparse: Extract some code to be re-used.

2019-01-16 Thread flecaille
From: Frédéric Lécaille 

Create an init_peers_frontend() function to allocate and initialize
the frontend of "peers" sections (->peers_fe) so that it can be reused
later.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 34 ++
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 7c316df0..6fde7c9f 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -483,6 +483,31 @@ void init_default_instance()
defproxy.load_server_state_from_file = PR_SRV_STATE_FILE_UNSPEC;
 }
 
+/* Allocate and initialize the frontend of a "peers" section found in
+ * file <file> at line <linenum> with <id> as ID.
+ * Return 0 if succeeded, -1 if not.
+ */
+static int init_peers_frontend(const char *file, int linenum,
+   const char *id, struct peers *peers)
+{
+   struct proxy *p;
+
+   p = calloc(1, sizeof *p);
+   if (!p) {
+   ha_alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+   return -1;
+   }
+
+   init_new_proxy(p);
+   p->parent = peers;
+   p->id = strdup(id);
+   p->conf.args.file = p->conf.file = strdup(file);
+   p->conf.args.line = p->conf.line = linenum;
+   peers_setup_frontend(p);
+   peers->peers_fe = p;
+
+   return 0;
+}
 
 /*
  * Parse a line in a <listen>, <frontend> or <backend> section.
@@ -625,19 +650,12 @@ int cfg_parse_peers(const char *file, int linenum, char **args, int kwm)
 			cfg_peers->local = newpeer;
 
 			if (!curpeers->peers_fe) {
-				if ((curpeers->peers_fe = calloc(1, sizeof(struct proxy))) == NULL) {
+				if (init_peers_frontend(file, linenum, args[1], curpeers) != 0) {
 					ha_alert("parsing [%s:%d] : out of memory.\n", file, linenum);
 					err_code |= ERR_ALERT | ERR_ABORT;
 					goto out;
 				}
 
-				init_new_proxy(curpeers->peers_fe);
-				curpeers->peers_fe->parent = curpeers;
-				curpeers->peers_fe->id = strdup(args[1]);
-				curpeers->peers_fe->conf.args.file = curpeers->peers_fe->conf.file = strdup(file);
-				curpeers->peers_fe->conf.args.line = curpeers->peers_fe->conf.line = linenum;
-				peers_setup_frontend(curpeers->peers_fe);
-
 				bind_conf = bind_conf_alloc(curpeers->peers_fe, file, linenum, args[2], xprt_get(XPRT_RAW));
 
 				if (!str2listener(args[2], curpeers->peers_fe, bind_conf, file, linenum, &errmsg)) {
-- 
2.11.0



[PATCH 07/12] MINOR: cfgparse: Make "peer" lines be parsed as "server" lines.

2019-01-16 Thread flecaille
From: Frédéric Lécaille 

With this patch "default-server" lines are supported in "peers" sections
to set up the default settings of peers, which are from now on set up
when parsing both "peer" and "server" lines.

May be backported to 1.5 and newer.
---
 src/cfgparse.c | 88 +++---
 src/peers.c|  2 +-
 src/server.c   |  3 +-
 3 files changed, 32 insertions(+), 61 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 715faaef..6d199c97 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -486,15 +486,18 @@ void init_default_instance()
 /* Allocate and initialize the frontend of a "peers" section found in
  * file <file> at line <linenum> with <id> as ID.
  * Return 0 if succeeded, -1 if not.
+ * Note that this function may be called from "default-server"
+ * or "peer" lines.
  */
 static int init_peers_frontend(const char *file, int linenum,
                                const char *id, struct peers *peers)
 {
 	struct proxy *p;
 
-	if (peers->peers_fe)
-		/* Nothing to do */
-		return 0;
+	if (peers->peers_fe) {
+		p = peers->peers_fe;
+		goto out;
+	}
 
 	p = calloc(1, sizeof *p);
 	if (!p) {
@@ -503,12 +506,16 @@ static int init_peers_frontend(const char *file, int linenum,
 	}
 
 	init_new_proxy(p);
+	peers_setup_frontend(p);
 	p->parent = peers;
-	p->id = strdup(id);
+	/* Finally store this frontend. */
+	peers->peers_fe = p;
+
+ out:
+	if (id && !p->id)
+		p->id = strdup(id);
 	p->conf.args.file = p->conf.file = strdup(file);
 	p->conf.args.line = p->conf.line = linenum;
-	peers_setup_frontend(p);
-	peers->peers_fe = p;
 
 	return 0;
 }
@@ -533,7 +540,14 @@ int cfg_parse_peers(const char *file, int linenum, char **args, int kwm)
 	int err_code = 0;
 	char *errmsg = NULL;
 
-	if (strcmp(args[0], "peers") == 0) { /* new peers section */
+	if (strcmp(args[0], "default-server") == 0) {
+		if (init_peers_frontend(file, linenum, NULL, curpeers) != 0) {
+			err_code |= ERR_ALERT | ERR_ABORT;
+			goto out;
+		}
+		err_code |= parse_server(file, linenum, args, curpeers->peers_fe, NULL);
+	}
+	else if (strcmp(args[0], "peers") == 0) { /* new peers section */
 		if (!*args[1]) {
 			ha_alert("parsing [%s:%d] : missing name for peers section.\n", file, linenum);
 			err_code |= ERR_ALERT | ERR_ABORT;
@@ -577,26 +591,8 @@ int cfg_parse_peers(const char *file, int linenum, char **args, int kwm)
 		curpeers->id = strdup(args[1]);
 		curpeers->state = PR_STNEW;
 	}
-	else if (strcmp(args[0], "peer") == 0) { /* peer definition */
-		struct sockaddr_storage *sk;
-		int port1, port2;
-		struct protocol *proto;
-
-		if (!*args[2]) {
-			ha_alert("parsing [%s:%d] : '%s' expects <name> and <addr>[:<port>] as arguments.\n",
-				 file, linenum, args[0]);
-			err_code |= ERR_ALERT | ERR_FATAL;
-			goto out;
-		}
-
-		err = invalid_char(args[1]);
-		if (err) {
-			ha_alert("parsing [%s:%d] : character '%c' is not permitted in server name '%s'.\n",
-				 file, linenum, *err, args[1]);
-			err_code |= ERR_ALERT | ERR_FATAL;
-			goto out;
-		}
-
+	else if (strcmp(args[0], "peer") == 0 ||
+	         strcmp(args[0], "server") == 0) { /* peer or server definition */
 		if ((newpeer = calloc(1, sizeof(*newpeer))) == NULL) {
 			ha_alert("parsing [%s:%d] : out of memory.\n", file, linenum);
 			err_code |= ERR_ALERT | ERR_ABORT;
@@ -613,37 +609,17 @@ int cfg_parse_peers(const char *file, int linenum, char **args, int kwm)
 		newpeer->last_change = now.tv_sec;
 		newpeer->id = strdup(args[1]);
 
-		sk = str2sa_range(args[2], NULL, &port1, &port2, &errmsg, NULL, NULL, 1);
-		if (!sk) {
-			ha_alert("parsing [%s:%d] : '%s %s' : %s\n", file, linenum, args[0], args[1], errmsg);
-			err_code |= ERR_ALERT | ERR_FATAL;
-			goto out;
-		}
-
-		proto = protocol_by_family(sk->ss_family);
-		if (!proto || !proto->connect) {
-			ha_alert("parsing [%s:%d] : '%s %s' : connect() not supported for this address family.\n",
-				 file, linenum, args[0], args[1]);
-			err_code |= ERR_ALERT | ERR_FATAL;
-			goto out;
-		}
-
-		if (port1 != port2) {
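For illustration, once this series is applied a "peers" section could mix the
new "default-server" and "server"/"peer" keywords like this (names, addresses
and certificate paths below are made up, not taken from the patches):

```
peers mypeers
    default-server ssl crt mycerts/pem/client.pem verify none
    peer haproxy1 192.168.0.1:1024
    server haproxy2 192.168.0.2:1024
```

Here both the "peer" and "server" lines would inherit the SSL settings from
the "default-server" line, since both are now parsed by parse_server().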

Re: stats webpage crash, htx and scope filter, [PATCH] REGTEST is included

2019-01-16 Thread Christopher Faulet

Le 15/01/2019 à 21:07, PiBa-NL a écrit :

Hi Christopher,

Op 15-1-2019 om 10:48 schreef Christopher Faulet:

Le 14/01/2019 à 21:53, PiBa-NL a écrit :

Hi Christopher,

Op 14-1-2019 om 11:17 schreef Christopher Faulet:

Le 12/01/2019 à 23:23, PiBa-NL a écrit :

Hi List,

I've configured haproxy with htx and when i try to filter the stats
webpage.
Sending this request: "GET /?;csv;scope=b1" to '2.0-dev0-762475e
2019/01/10' it will crash with the trace below.
1.9.0 and 1.9.1 are also affected.

Can someone take a look? Thanks in advance.

A regtest is attached that reproduces the behavior, and which i think
could be included into the haproxy repository.



Pieter,

Here is the patch that should fix this issue. This was "just" an
oversight when the stats applet has been adapted to support the HTX.

If it's ok for you, I'll also merge your regtest.

Thanks


It seems the patch did not change/fix the crash.? Below looks pretty
much the same as previously. Did i fail to apply the patch properly.? It
seems to have 'applied' properly checking a few lines of the touched
code manually. As for the regtest, yes please merge that if its okay
as-is, perhaps after the fix is also ready :).



Hi Pieter,

Sorry, I made my patch too quickly. It seemed ok, but obviously not...
This new one should do the trick.


Well.. 'something' changed, still crashing though.. but at a different
place.



Rah ! I'll probably need some rest. I've done my tests without the HTX 
enabled... It's a bit embarrassing and not really responsible.


Anyway, here is a new patch, again. Willy, I hope it will be good for 
the release 1.9.2.


--
Christopher Faulet
From cacd3205bbe5c1a0bf123631178b61e1f6e9ffc1 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Mon, 14 Jan 2019 11:07:34 +0100
Subject: [PATCH] BUG/MEDIUM: stats: Get the right scope pointer depending on
 whether HTX is used or not

For HTX streams, the scope pointer is relative to the URI in the start-line. But
for streams using the legacy HTTP representation, the scope pointer is relative
to the beginning of output data in the channel's buffer. So we must be careful
to use the right one depending on whether HTX is used or not.

Because the start-line is used to get the scope pointer, it is important to keep
it after the parsing of post parameters. So now, instead of removing blocks once
read in the function stats_process_http_post(), we just move to the next one,
leaving them in the HTX message.

Thanks to Pieter (PiBa-NL) for reporting this bug.

This patch must be backported to 1.9.
---
 src/proto_htx.c |  4 ++--
 src/stats.c | 62 ++---
 2 files changed, 45 insertions(+), 21 deletions(-)

diff --git a/src/proto_htx.c b/src/proto_htx.c
index 236bfd04d..9fa820653 100644
--- a/src/proto_htx.c
+++ b/src/proto_htx.c
@@ -4887,8 +4887,8 @@ static int htx_handle_stats(struct stream *s, struct channel *req)
 
 			h += strlen(STAT_SCOPE_INPUT_NAME) + 1;
 			h2 = h;
-			appctx->ctx.stats.scope_str = h2 - s->txn->uri;
-			while (h <= end) {
+			appctx->ctx.stats.scope_str = h2 - HTX_SL_REQ_UPTR(sl);
+			while (h < end) {
 if (*h == ';' || *h == '&' || *h == ' ')
 	break;
 itx++;
diff --git a/src/stats.c b/src/stats.c
index ebd95d3f0..a7c12e120 100644
--- a/src/stats.c
+++ b/src/stats.c
@@ -55,6 +55,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -257,6 +258,23 @@ static int stats_putchk(struct channel *chn, struct htx *htx, struct buffer *chk
 	return 1;
 }
 
+static const char *stats_scope_ptr(struct appctx *appctx, struct stream_interface *si)
+{
+	const char *p;
+
+	if (IS_HTX_STRM(si_strm(si))) {
+		struct channel *req = si_oc(si);
		struct htx *htx = htxbuf(&req->buf);
+		struct ist uri = htx_sl_req_uri(http_find_stline(htx));
+
+		p = uri.ptr;
+	}
+	else
+		p = co_head(si_oc(si));
+
+	return p + appctx->ctx.stats.scope_str;
+}
+
 /*
  * http_stats_io_handler()
  * -> stats_dump_stat_to_buffer() // same as above, but used for CSV or HTML
@@ -1912,8 +1930,10 @@ static void stats_dump_html_px_hdr(struct stream_interface *si, struct proxy *px
 		/* scope_txt = search pattern + search query, appctx->ctx.stats.scope_len is always <= STAT_SCOPE_TXT_MAXLEN */
 		scope_txt[0] = 0;
 		if (appctx->ctx.stats.scope_len) {
+			const char *scope_ptr = stats_scope_ptr(appctx, si);
+
 			strcpy(scope_txt, STAT_SCOPE_PATTERN);
-			memcpy(scope_txt + strlen(STAT_SCOPE_PATTERN), co_head(si_oc(si)) + appctx->ctx.stats.scope_str, appctx->ctx.stats.scope_len);
+			memcpy(scope_txt + strlen(STAT_SCOPE_PATTERN), scope_ptr, appctx->ctx.stats.scope_len);
 			scope_txt[strlen(STAT_SCOPE_PATTERN) + appctx->ctx.stats.scope_len] = 0;
 		}
 
@@ -2075,9 +2095,12 @@ int stats_dump_proxy_to_buffer(struct stream_interface *si, struct htx *htx,
 		/* if the user has requested a limited output and the proxy
 		 * name does not match, skip it.
 		 */
-		if (appctx->ctx.stats.scope_len &&
-		strnistr(px->id, strlen(px->id), co_head(si_oc(si)) + 

[PATCH] Buffer API changes for 51d.c

2019-01-16 Thread Ben Shillito
Hi Willy,

It appears that 51d.c still uses some elements of the now deprecated buffer 
API, so I have attached a patch which updates the usage to the new buffer API.

This can also be backported to 1.9 where the new API was introduced.

Thanks,

Ben Shillito
Developer
O: +44 1183 287152
E: b...@51degrees.com


This email and any attachments are confidential and may also be privileged. If 
you are not the named recipient, please notify the sender immediately and do 
not disclose, use, store or copy the information contained herein. This is an 
email from 51Degrees.mobi Limited, 5 Charlotte Close, Reading. RG47BY. T: +44 
118 328 7152; E: i...@51degrees.com; 51Degrees.mobi Limited t/as 51Degrees.


0001-BUG-51d-Changes-to-the-buffer-API-in-1.9-were-not-ap.patch
Description:  0001-BUG-51d-Changes-to-the-buffer-API-in-1.9-were-not-ap.patch


Re: stats webpage crash, htx and scope filter, [PATCH] REGTEST is included

2019-01-16 Thread Willy Tarreau
On Wed, Jan 16, 2019 at 02:28:56PM +0100, Christopher Faulet wrote:
> Rah ! I'll probably need some rest. I've done my tests without the HTX
> enabled... It's a bit embarrassing and not really responsible.

Let's say it's due to uncaught -EKIDSAROUND :-)

> Anyway, here is a new patch, again. Willy, I hope it will be good for the
> release 1.9.2.

OK so I've merged it now, thank you!

Willy



Re: Browser downloads failing because of h2c_send_goaway_error (1.8.17 + 1.9.1)

2019-01-16 Thread Willy Tarreau
Hi,

On Wed, Jan 16, 2019 at 10:32:00AM +0300, Wert wrote:
> How to reproduce:
> 1. Start browser-download (content-disposition: attachment) of some big file 
> through H2
> * Tested with 1Gb file and several Chrome-versions (67-)
> 2. Make reload
> 3. The process holding this connection would stay, transfer everything successfully 
> and even log a code-200 full-size transfer without any 
> termination flags
> 4. Chrome would show "Network error" at the end of transfer
> 
> Versions:
> 1.8.17 - it happens in all cases
> #c4d56e5 - looks like it happens only in some conditions, but just because 
> there is another bug that makes old processes stay much longer than with 
> 1.8.17
> 
> Fix:
> For 1.8.17 I just disabled condition "if (sess && unlikely(sess->fe->state == 
> PR_STSTOPPED))" leading to h2c_send_goaway_error() in mux_h2.c:2330
> It is not the correct way and probably breaks something else, but it fixes this

I understand what's happening. And it is caused by the H2 state machine
which does not have provisions for something equivalent to the LAST_ACK
TCP state. A stream is closed as soon as the END_STREAM flag has been
seen in both directions. Thus during such a reload, we end up with a
connection which doesn't have any stream anymore but still bytes in
flight for the last stream(s). Since we have a reload in progress, we
know the process is waiting for old connections to leave, thus we close
this idle and unused connection. The problem is that the client is still
sending WINDOW_UPDATES for the data it receives, and that these ones will
cause a TCP RST to be emitted. Depending on the packet ordering on the
path and on the remote TCP stack, this can lead to truncated responses,
or even in a client reporting an error after it correctly gets everything.

I'd like to study the possibility to introduce an equivalent of the LAST_ACK
state for streams. In theory it should not be very difficult but in practice
it's tricky because frames are not really ACKed, the client sends us more
budget to send more data, so we never exactly know when all data were received
on the other end, though it serves as a hint. Ideally we could consider that
we can watch the window increments only and compare them to the amount of
data sent.

I think it's something I'll bring to the IETF HTTP working group, because
it could actually serve to address another issue related to priorities
which are sometimes incorrectly re-attached when streams disappear.

For the short term however I don't have any solution to this, and tricks
such as the one above are indeed risky, you may face connections which
never die :-/

Best regards,
Willy



RE: [PATCH] Buffer API changes for 51d.c

2019-01-16 Thread Ben Shillito
Hi Willy,

Great, thanks for the quick turnaround.

Regards,

Ben Shillito
Developer
O: +44 1183 287152
E: b...@51degrees.com
T: @51Degrees

-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: 16 January 2019 16:27
To: Ben Shillito 
Cc: haproxy@formilux.org
Subject: Re: [PATCH] Buffer API changes for 51d.c

Hi Ben,

On Wed, Jan 16, 2019 at 11:43:03AM +, Ben Shillito wrote:
> Hi Willy,
>
> It appears that 51d.c still uses some elements of the now deprecated 
> buffer API, so I have attached a patch which updates the usage to the new 
> buffer API.
>
> This can also be backported to 1.9 where the new API was introduced.

Great, now merged, and still in time for 1.9.2.

Thank you!
Willy



Re: HTTP version log format for http2

2019-01-16 Thread Amin Shayan
Hi Willy,

Thanks for the workaround and explanation. Works as expected.

Sincerely,
Amin Shayan


On Wed, Jan 16, 2019 at 6:15 PM Willy Tarreau  wrote:

> Hi Amin,
>
> On Mon, Dec 31, 2018 at 05:23:17PM +0100, Amin Shayan wrote:
> > Hi guys,
> >
> > I'm trying to get clients request http version and it seems %HV which is
> > the last field of %r works fine for http/0.9,1.0,1.1. However I get
> > http/1.1 on logs for http2 requests.
> >
> > Using HAProxy 1.8.16, Is there still below limitation?
> >
> >   - no trivial way to report HTTP/2 in the logs. I'm using a sample
> > fetch function reporting the on-wire format as 1 or 2 for now. I
> > considered replacing "HTTP/1.1" with "HTTP/2.0" in the logs but
> > that's inaccurate since we really process "1.1" so it might be
> > confusing to those dealing with regex which don't seem to match,
> > and in addition "HTTP/2.0" is not the correct version string, the
> > correct one is "HTTP/2". But writing this without the dot and the
> > minor version is going to break some log processing tools. Thus I
> > was thinking about having some optional fields that are supposed
> > to be easy to use. Note that we had the same issue with SSL long
> > ago, ending with "~" after the frontend's name in the logs...
> > Better avoid this for H2. Ideas are welcome.
>
> Yes that's true for 1.8. However there is a sample fetch function called
> "fc_http_major" which returns the major protocol number on the frontend
> connection (1 for 0.9-1.1, 2 for 2.0). You could thus add something like
> this to your log-format :
>
>... HTTP/%[fc_http_major]
>
> Hoping this helps,
> Willy
>


Re: Not enough timeout for socket transfer (completely broken after reload)

2019-01-16 Thread Willy Tarreau
Hi,

On Wed, Jan 16, 2019 at 11:24:58AM +0300, Wert wrote:
> Problem:
> Sometimes in multi-process configuration might appear error "Failed to get 
> the number of sockets to be transferred !" during reload.
> Than new instance would silently fail like 50-80% of new connections.
> 
> 
> Reason:
> There is a hard-coded timeout in get_old_sockets() - 1 second.
> It is used when receiving sockets from the old master in the new master during reload 
> (https://github.com/haproxy/haproxy/blob/v1.8.0/src/haproxy.c#L977)
> 
> In case of "very low CPU-power + a lot of FDs" (or just temporary latency 
> problems that are possible on a VPS, during overload and in many other rare but 
> easily imaginable cases)
> - the 1 second is exceeded and there is no workaround for this case
> 
> 
> Fix:
> Some smarter error processing would be the right way.
> But I don't see problems that might be related to a bigger timeout.
> So, I just clumsily changed this timeout to 9 sec and it works 100% OK on dozens 
> of heavy-loaded production servers.

Actually I'm quite shocked to see that there is no response for one full
second. It's unrelated to the number of FDs, it means the socket remained
silent for one second, since it's a socket timeout! This means it could
happen with any other socket, and one second is an eternity in the network
world. I see how increasing the value may hide deeper problems, but frankly
I'd also try to figure how it is possible that the socket remains
unresponsive for this long.

I think that's something we should probably make configurable in the
global section anyway, since there's no good value. The larger the value,
the higher the risk of trouble upon startup when trying to communicate
with a sick or dead process. 
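A tunable along these lines could hypothetically look like the following in the
global section. The keyword name is invented purely for illustration; it does
not exist at the time of writing:

```
global
    # maximum time to wait for the old process to hand over its
    # listening sockets during a seamless reload (hypothetical keyword,
    # replacing the hard-coded 1s in get_old_sockets())
    sock-xfer-timeout 5s
```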

Thanks,
Willy



[ANNOUNCE] haproxy-1.9.2

2019-01-16 Thread Willy Tarreau
Hi,

HAProxy 1.9.2 was released on 2019/01/16. It added 58 new commits
after version 1.9.1.

It addresses a number of lower importance pending issues that were not
yet merged into 1.9.1, one bug in the cache and fixes some long-standing
limitations that were affecting H2.

The highest severity issue but the hardest to trigger as well is the
one affecting the cache, as it's possible to corrupt the shared memory
segment when using some asymmetric caching rules, and crash the process.
There is a workaround though, which consists in always making sure an
"http-request cache-use" action is always performed before an
"http-response cache-store" action (i.e.  the conditions must match).
This bug already affects 1.8 and nobody noticed so I'm not worried :-)

The rest is of lower importance, mostly annoyances. One issue was
causing the mailers to spam the server in loops. Another one affected
idle server connections (I don't remember the details after seeing
several of them to be honest), apparently the stats page could crash
when using HTX, and there were still a few cases where stale HTTP/1
connections would never leave in HTX (after certain situations of client
timeout). The 0-RTT feature was broken when openssl 1.1.1 was released
due to the anti-replay protection being enabled by default there (which
makes sense since not everyone uses it with HTTP and proper support),
this is now fixed.

We had been observing a slowly growing number of orphaned connections
on haproxy.org last week (several per hour); since the recent fixes we
can confirm that it's perfectly clean now.

There's a small improvement regarding the encryption of TLS tickets. We
used to support 128 bits only and it looks like the default setting
changed 2 years ago without us noticing. Some users were asking for 256
bit support, so that was implemented and backported. It will work
transparently as the key size is determined automatically. We don't
think it would make sense at this point to backport this to 1.8, but if
there is compelling demand for this Emeric knows how to do it.
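For reference, this is driven by the existing "tls-ticket-keys" bind keyword;
the key size is inferred from the decoded length of each base64 line in the key
file (48 bytes for a 128-bit entry, 80 bytes for a 256-bit one, if memory
serves). A minimal sketch with placeholder paths:

```
# keys.txt: one base64-encoded key per line, e.g. generated with
#   openssl rand -base64 80    (256-bit entry)
#   openssl rand -base64 48    (128-bit entry)
frontend fe_tls
    bind :443 ssl crt /path/to/cert.pem tls-ticket-keys /path/to/keys.txt
```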

Regarding the long-standing limitations affecting H2, some of you
probably remember that haproxy used not to support CONTINUATION frames,
which was causing an issue with one very old version of chromium, and
that it didn't support trailers, making it incompatible with gRPC (which
may also use CONTINUATION). This has constantly resulted in h2spec to
return 6 failed tests. These limitations could be addressed in 2.0-dev
relatively easily thanks to the much better new architecture, and I
considered it was right to backport these patches so that we don't have
to work around them anymore. I'd say that while from a developer's
perspective these limitations were not bugs ("works as designed"), from
the user's perspective they definitely were.

I could try this with the gRPC helloworld tests (which by the way support
H2 in clear text):

   haproxy$ cat h2grpc.cfg
   defaults
mode http
timeout client 5s
timeout server 5s
timeout connect 1s

   listen grpc
log stdout format raw local0
option httplog
option http-use-htx
bind :50052 proto h2
server srv1 127.0.0.1:50051 proto h2
   haproxy$ ./haproxy -d -f h2grpc.cfg

   grpc$ go run examples/helloworld/greeter_server/main.go &
   grpc$ go run examples/helloworld/greeter_client/main.go haproxy 
   2019/01/04 11:11:40 Received: haproxy
   2019/01/04 11:11:40 Greeting: Hello haproxy

   (...)haproxy$ ./haproxy -d -f h2grpc.cfg
   :grpc.accept(0008)=000b from [127.0.0.1:37538] ALPN=  
   :grpc.clireq[000b:]: POST /helloworld.Greeter/SayHello 
HTTP/2.0
   :grpc.clihdr[000b:]: content-type: application/grpc 
   :grpc.clihdr[000b:]: user-agent: grpc-go/1.18.0-dev   
   :grpc.clihdr[000b:]: te: trailers
   :grpc.clihdr[000b:]: grpc-timeout: 994982u
   :grpc.clihdr[000b:]: host: localhost:50052
   :grpc.srvrep[000b:000c]: HTTP/2.0 200
   :grpc.srvhdr[000b:000c]: content-type: application/grpc
   :grpc.srvcls[000b:000c]
   :grpc.clicls[000b:000c]
   :grpc.closed[000b:000c]
   127.0.0.1:37538 [04/Jan/2019:11:11:40.705] grpc grpc/srv1 0/0/0/1/1 200 116 
- -  1/1/0/0/0 0/0 "POST /helloworld.Greeter/SayHello HTTP/2.0"

In the past we'd get an error from the client saying that the response
came without trailers. So now this limitation is expected to be just bad
old memories.

Last, some might have followed the updates around varnishtest. It
evolved into an autonomous project called VTest, but it used to be very
difficult to build due to remaining intimate dependencies with Varnish.
Poul-Henning and Fred have addressed this and now it's trivial to
build and works like a charm. Given that varnishtest was still affected
by a few issues causing crashes on certain tests, it was about time to
complete the switch. Thus 

Re: How to replicate RedirectMatch (apache reverse proxy) in Haproxy

2019-01-16 Thread Aleksandar Lazic
Hi.

Am 16.01.2019 um 16:35 schrieb mirko stefanelli:
> Hi to all,
> 
> we are trying to move from Apache reverse proxy to Haproxy, you can see below
> a part of the Apache httpd.conf file:
> 
> 
>  ServerName dipendenti.xxx.xxx.it
>  ErrorLog logs/intranet_ssl_error_log
>  TransferLog logs/intranet_ssl_access_log
>  LogLevel info
>  ProxyRequests Off
>  ProxyPreserveHost On
>  ProxyPass / http://intranet.xx.xxx/
>  ProxyPassReverse / http://intranet.xxx.xxx/
>  RedirectMatch ^/$ https://dipendenti.xxx.xxx.it  /
> 
>  SSLEngine on
>  SSLProxyEngine On
>  SSLProtocol all -SSLv2
>  SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
> 
>  SSLCertificateFile /etc/pki/tls/certs/STAR_xt.crt
>  SSLCertificateKeyFile /etc/pki/tls/private/.pem
>  SSLCertificateChainFile /etc/pki/tls/certs/STAR_xxx_ca-bundle.crt
>  BrowserMatch "MSIE [2-5]" \
>              nokeepalive ssl-unclean-shutdown \
>              downgrade-1.0 force-response-1.0
> 
> 
> As you can see here we use RedirectMatch to force responses to HTTPS.
> 
> Here part of conf on HAproxy:
> 
> in frontend part:
> 
> bind *:443 ssl crt /etc/haproxy/ssl/ #here are stored each certificates
> 
> acl acl_dipendenti hdr_dom(host) -i dipendenti.xxx.xxx.it
> 
> use_backend dipendenti if acl_dipendenti
> 
> in backend part:
> 
> backend dipendenti
>         log 127.0.0.1:514 local6 debug
>         stick-table type ip size 20k peers mypeers
>         server intranet 10.xxx.xxx.xxx:80 check
> 
> When we start the service we connect to https://dipendenti.xxx.xxx.it, but
> during navigation it seems that haproxy responses change from HTTPS to HTTP.
> 
> Can you suggest some ideas to investigate this behavior?

Maybe you get a startpoint on this blog post.

https://www.haproxy.com/blog/howto-write-apache-proxypass-rules-in-haproxy/
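In the meantime, a rough equivalent of the RedirectMatch above is an
"http-request redirect" rule on a plain HTTP frontend. This is only a sketch
reusing the names from the posted config; the frontend name is made up and the
rule would need adjusting to your setup:

```
frontend fe_http
    bind *:80
    # force HTTPS like the Apache RedirectMatch did
    http-request redirect scheme https code 301 if { hdr_dom(host) -i dipendenti.xxx.xxx.it }
```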

> Regards,
> Mirko.

Regards
Aleks



How to replicate RedirectMatch (apache reverse proxy) in Haproxy

2019-01-16 Thread mirko stefanelli
Hi to all,

we are trying to move from Apache reverse proxy to Haproxy, you can see
below a part of the Apache httpd.conf file:


 ServerName dipendenti.xxx.xxx.it
 ErrorLog logs/intranet_ssl_error_log
 TransferLog logs/intranet_ssl_access_log
 LogLevel info
 ProxyRequests Off
 ProxyPreserveHost On
 ProxyPass / http://intranet.xx.xxx/
 ProxyPassReverse / http://intranet.xxx.xxx/
 RedirectMatch ^/$ https://dipendenti.xxx.xxx.it  /

 SSLEngine on
 SSLProxyEngine On
 SSLProtocol all -SSLv2
 SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5

 SSLCertificateFile /etc/pki/tls/certs/STAR_xt.crt
 SSLCertificateKeyFile /etc/pki/tls/private/.pem
 SSLCertificateChainFile /etc/pki/tls/certs/STAR_xxx_ca-bundle.crt
 BrowserMatch "MSIE [2-5]" \
 nokeepalive ssl-unclean-shutdown \
 downgrade-1.0 force-response-1.0


As you can see here we use RedirectMatch to force responses to HTTPS.

Here part of conf on HAproxy:

in frontend part:

bind *:443 ssl crt /etc/haproxy/ssl/ #here are stored each certificates

acl acl_dipendenti hdr_dom(host) -i dipendenti.xxx.xxx.it

use_backend dipendenti if acl_dipendenti

in backend part:

backend dipendenti
log 127.0.0.1:514 local6 debug
stick-table type ip size 20k peers mypeers
server intranet 10.xxx.xxx.xxx:80 check

When we start the service we connect to https://dipendenti.xxx.xxx.it,
but during navigation it seems that haproxy responses change from HTTPS to HTTP.

Can you suggest some ideas to investigate this behavior?

Regards,
Mirko.


Re: [PATCH] Buffer API changes for 51d.c

2019-01-16 Thread Willy Tarreau
Hi Ben,

On Wed, Jan 16, 2019 at 11:43:03AM +, Ben Shillito wrote:
> Hi Willy,
> 
> It appears that 51d.c still uses some elements of the now deprecated 
> buffer API, so I have attached a patch which updates the usage to the new 
> buffer API.
> 
> This can also be backported to 1.9 where the new API was introduced.

Great, now merged, and still in time for 1.9.2.

Thank you!
Willy



Re: HTTP version log format for http2

2019-01-16 Thread Willy Tarreau
Hi Amin,

On Mon, Dec 31, 2018 at 05:23:17PM +0100, Amin Shayan wrote:
> Hi guys,
> 
> I'm trying to get clients request http version and it seems %HV which is
> the last field of %r works fine for http/0.9,1.0,1.1. However I get
> http/1.1 on logs for http2 requests.
> 
> Using HAProxy 1.8.16, Is there still below limitation?
> 
>   - no trivial way to report HTTP/2 in the logs. I'm using a sample
> fetch function reporting the on-wire format as 1 or 2 for now. I
> considered replacing "HTTP/1.1" with "HTTP/2.0" in the logs but
> that's inaccurate since we really process "1.1" so it might be
> confusing to those dealing with regex which don't seem to match,
> and in addition "HTTP/2.0" is not the correct version string, the
> correct one is "HTTP/2". But writing this without the dot and the
> minor version is going to break some log processing tools. Thus I
> was thinking about having some optional fields that are supposed
> to be easy to use. Note that we had the same issue with SSL long
> ago, ending with "~" after the frontend's name in the logs...
> Better avoid this for H2. Ideas are welcome.

Yes that's true for 1.8. However there is a sample fetch function called
"fc_http_major" which returns the major protocol number on the frontend
connection (1 for 0.9-1.1, 2 for 2.0). You could thus add something like
this to your log-format :

   ... HTTP/%[fc_http_major]

Hoping this helps,
Willy



Re: stats webpage crash, htx and scope filter, [PATCH] REGTEST is included

2019-01-16 Thread Willy Tarreau
Hi Pieter,

On Wed, Jan 16, 2019 at 08:44:58PM +0100, PiBa-NL wrote:
> Hi Willy, Christopher,
> Op 16-1-2019 om 17:32 schreef Willy Tarreau:
> > On Wed, Jan 16, 2019 at 02:28:56PM +0100, Christopher Faulet wrote:
> > > here is a new patch, again. Willy, I hope it will be good for the
> > > release 1.9.2.
> This one works :).

Great, thank you!

> Op 14-1-2019 om 11:17 schreef Christopher Faulet:
> > If it's ok for you, I'll also merge your regtest.
> 
> Can you add the regtest as well into the git repo?

I'm not sure which one it is, I'll let Christopher take care of it.

Thanks,
Willy



Re: [ANNOUNCE] haproxy-1.9.2

2019-01-16 Thread Aleksandar Lazic
Hi.

Am 16.01.2019 um 19:02 schrieb Willy Tarreau:
> Hi,
> 
> HAProxy 1.9.2 was released on 2019/01/16. It added 58 new commits
> after version 1.9.1.
> 
> It addresses a number of lower importance pending issues that were not
> yet merged into 1.9.1, one bug in the cache and fixes some long-standing
> limitations that were affecting H2.
> 
> The highest severity issue but the hardest to trigger as well is the
> one affecting the cache, as it's possible to corrupt the shared memory
> segment when using some asymmetric caching rules, and crash the process.
> There is a workaround though, which consists in always making sure an
> "http-request cache-use" action is always performed before an
> "http-response cache-store" action (i.e.  the conditions must match).
> This bug already affects 1.8 and nobody noticed so I'm not worried :-)
> 
> The rest is of lower importance but mostly annoyance. One issue was
> causing the mailers to spam the server in loops. Another one affected
> idle server connections (I don't remember the details after seeing
> several of them to be honest), apparently the stats page could crash
> when using HTX, and there were still a few cases where stale HTTP/1
> connections would never leave in HTX (after certain situations of client
> timeout). The 0-RTT feature was broken when openssl 1.1.1 was released
> due to the anti-replay protection being enabled by default there (which
> makes sense since not everyone uses it with HTTP and proper support),
> this is now fixed.
> 
> While we have been observing a slowly growing amount of orphaned connections
> on haproxy.org last week (several per hour), and since the recent fixes we
> could confirm that it's perfectly clean now.
> 
> There's a small improvement regarding the encryption of TLS tickets. We
> used to support 128 bits only and it looks like the default setting
> changed 2 years ago without us noticing. Some users were asking for 256
> bit support, so that was implemented and backported. It will work
> transparently as the key size is determined automatically. We don't
> think it would make sense at this point to backport this to 1.8, but if
> there is compelling demand for this Emeric knows how to do it.
> 
> Regarding the long-standing limitations affecting H2, some of you
> probably remember that haproxy used not to support CONTINUATION frames,
> which was causing an issue with one very old version of chromium, and
> that it didn't support trailers, making it incompatible with gRPC (which
> may also use CONTINUATION). This has constantly resulted in h2spec to
> return 6 failed tests. These limitations could be addressed in 2.0-dev
> relatively easily thanks to the much better new architecture, and I
> considered it was right to backport these patches so that we don't have
> to work around them anymore. I'd say that while from a developer's
> perspective these limitations were not bugs ("works as designed"), from
> the user's perspective they definitely were.
> 
> I could try this with the gRPC helloworld tests (which by the way support
> H2 in clear text) :
> 
>haproxy$ cat h2grpc.cfg
>defaults
> mode http
> timeout client 5s
> timeout server 5s
> timeout connect 1s
> 
>listen grpc
> log stdout format raw local0
> option httplog
> option http-use-htx
> bind :50052 proto h2
> server srv1 127.0.0.1:50051 proto h2
>haproxy$ ./haproxy -d -f h2grpc.cfg
> 
>grpc$ go run examples/helloworld/greeter_server/main.go &
>grpc$ go run examples/helloworld/greeter_client/main.go haproxy 
>2019/01/04 11:11:40 Received: haproxy
>2019/01/04 11:11:40 Greeting: Hello haproxy
> 
>(...)haproxy$ ./haproxy -d -f h2grpc.cfg
>:grpc.accept(0008)=000b from [127.0.0.1:37538] ALPN=  
>:grpc.clireq[000b:]: POST /helloworld.Greeter/SayHello HTTP/2.0
>:grpc.clihdr[000b:]: content-type: application/grpc 
>:grpc.clihdr[000b:]: user-agent: grpc-go/1.18.0-dev   
>:grpc.clihdr[000b:]: te: trailers
>:grpc.clihdr[000b:]: grpc-timeout: 994982u
>:grpc.clihdr[000b:]: host: localhost:50052
>:grpc.srvrep[000b:000c]: HTTP/2.0 200
>:grpc.srvhdr[000b:000c]: content-type: application/grpc
>:grpc.srvcls[000b:000c]
>:grpc.clicls[000b:000c]
>:grpc.closed[000b:000c]
>127.0.0.1:37538 [04/Jan/2019:11:11:40.705] grpc grpc/srv1 0/0/0/1/1 200 116 - -  1/1/0/0/0 0/0 "POST /helloworld.Greeter/SayHello HTTP/2.0"
> 
> In the past we'd get an error from the client saying that the response
> came without trailers. So now this limitation is expected to be just bad
> old memories.

That's great ;-) ;-)

For service routing, the standard haproxy content routing options (path,
header, ...) are possible, right?

If someone wants to route based on gRPC content, he can use Lua with body
content, right? For example this library: https://github.com/Neopallium/lua-pb

Re: stats webpage crash, htx and scope filter, [PATCH] REGTEST is included

2019-01-16 Thread PiBa-NL

Hi Willy, Christopher,
Op 16-1-2019 om 17:32 schreef Willy Tarreau:

On Wed, Jan 16, 2019 at 02:28:56PM +0100, Christopher Faulet wrote:

here is a new patch, again. Willy, I hope it will be good for the
release 1.9.2.

This one works :).

OK so I've merged it now, thank you!
Willy


Op 14-1-2019 om 11:17 schreef Christopher Faulet:

If it's ok for you, I'll also merge your regtest.


Can you add the regtest as well into the git repo?

Regards,
PiBa-NL (Pieter)




Re: [ANNOUNCE] haproxy-1.9.2

2019-01-16 Thread Willy Tarreau
Hi Aleks,

On Wed, Jan 16, 2019 at 11:52:12PM +0100, Aleksandar Lazic wrote:
> For service routing, the standard haproxy content routing options (path,
> header, ...) are possible, right?

Yes absolutely.

> If someone wants to route based on gRPC content, he can use Lua with body
> content, right?
> 
> For example this library https://github.com/Neopallium/lua-pb

Very likely, yes. If you want to inspect the body you simply have to
enable "option http-buffer-request" so that haproxy waits for the body
before executing rules. From there, indeed you can pass whatever Lua
code on req.body. I don't know if there would be any value in trying
to implement some protobuf converters to decode certain things natively.
What I don't know is if the contents can be deserialized even without
compiling the proto files.
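The flow described here — buffer the request, then run Lua against req.body — could look roughly like the sketch below. This is haproxy-embedded Lua, so it only runs inside haproxy (loaded with "lua-load" and invoked via "http-request lua.check_body"); the file name, action name, and the idea of matching on the gRPC method string are illustrative assumptions, not anything from this thread:

```
-- /etc/haproxy/body_route.lua (sketch; requires "option http-buffer-request"
-- in the frontend so the body is available when the rule executes)
core.register_action("check_body", { "http-req" }, function(txn)
    -- txn.f:req_body() invokes the "req.body" sample fetch
    local body = txn.f:req_body()
    if body and body:find("SayHello", 1, true) then
        -- mark the transaction; a "use_backend ... if { var(txn.grpc_route) ... }"
        -- rule can then route on it
        txn:set_var("txn.grpc_route", "greeter")
    end
end)
```

Note that this matches raw bytes; actually deserializing protobuf without the compiled .proto files is exactly the open question raised above.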

> > That's about all. With each major release we feel like version dot-2
> > works pretty well. This one is no exception. We'll see in 6 months if
> > it was wise :-)
> 
> So you would say I can use it in production with htx ;-)

As long as you're still a bit careful, yes, definitely. haproxy.org has
been running it in production since 1.9-dev9 or so. Since 1.9.0 was
released, we've had one crash a few times (fixed in 1.9.1) and two
massive slowdowns due to non-expiring connections reaching the frontend's
maxconn limit (fixed in 1.9.2).

> and the docker image is also updated ;-)
> 
> https://hub.docker.com/r/me2digital/haproxy19

Thanks.

> Now that we have a separate protocol handling layer (HTX), how difficult is
> it to add `mode fast-cgi` like `mode http`?

We'd like to have this for 2.0. But it wouldn't be "mode fast-cgi" but
rather "proto fast-cgi" on the server lines to replace the htx-to-h1 mux
with an htx-to-fcgi one, because fast-cgi is another representation of
HTTP. The "mode http" setting is what enables all HTTP processing
(http-request rules, cookie parsing etc). Thus you definitely want to
have it enabled.
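If it lands as Willy describes — an htx-to-fcgi mux selected on the server line, with "mode http" kept for all HTTP processing — the configuration might look something like this. This is purely hypothetical syntax extrapolated from the paragraph above; no such keyword exists in 1.9:

```
backend php
    mode http                          # HTTP rules, cookies, etc. still apply
    option http-use-htx
    # hypothetical: replace the htx-to-h1 mux with an htx-to-fcgi one
    server fpm1 127.0.0.1:9000 proto fast-cgi
```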

> I ask because PHP does not have a production-ready HTTP implementation, but
> it has a robust FastCGI process manager (php-fpm). There are several possible
> solutions to add HTTP to PHP (nginx+php-fpm, uwsgi+php-fpm, uwsgi+embedded
> PHP) but all these solutions require an additional hop.
> 
> My wish is to have such a flow.
> 
> haproxy -> *.php  => php-fpm
> -> *.static-files => nginx,h2o

It's *exactly* what I've been wanting for a long time as well. Mind you
that Thierry implemented some experimental fast-cgi code many years ago
in 1.3! By then we were facing some strong architectural limitations,
but now I think we should have everything ready thanks to the muxes.

> I have taken a look at the FCGI protocol but sadly I'm not a good enough
> programmer for that task. I can offer to test the implementation.

That's good to know, thanks!

Cheers,
Willy



Re: Seamless reloads: file descriptors utilization in LUA

2019-01-16 Thread Wert
> CC'ing Thierry: as this has come on this discourse, can we have your
> opinion about the FD's in LUA and howto best handle ulimit?


> Apologies for the duplicate mail.


> Thanks,
> Lukas

1. FD
I don't know your architecture too well. From the user side I just see no
reason to keep FDs that were created in Lua.
For cases where I make a redis connection, open a GEO-DB file or some socket
to send on, there is no reason to keep such FDs for the new instance.
It might be an option, or the default policy, to completely stop transferring
Lua FDs to the new master.

If that is difficult, there could probably be some way to check and clean them
manually with the CLI.

2. Ulimit
It is impossible to know how many FDs Lua would use, even after fixing the
infinite growth.
I use "ulimit-n 1000". Of course it looks like a dirty thing, but I can't
imagine a case where it does real harm, while a low limit can do a lot.
If there is some harm, you could at least adjust the current auto-calculated
limit with a "+100", "*2" or similar modifier, and it would cover many real
cases.

---
Wert




Not enough timeout for socket transfer (completely broken after reload)

2019-01-16 Thread Wert
Problem:
Sometimes, in a multi-process configuration, the error "Failed to get the
number of sockets to be transferred !" may appear during reload.
Then the new instance silently fails around 50-80% of new connections.


Reason:
There is a hard-coded timeout of 1 second in get_old_sockets().
It is used when the new master receives the sockets from the old master during
reload (https://github.com/haproxy/haproxy/blob/v1.8.0/src/haproxy.c#L977)

In the case of "very low CPU power + a lot of FDs" (or just temporary latency
problems, which are possible on a VPS, during overload, and in many other rare
but easily imaginable cases), the 1 second is exceeded and there is no
workaround for this case.


Fix:
Some smart error handling would be the right way.
But I don't see any problems that a bigger timeout might cause.
So I just clumsily changed this timeout to 9 seconds, and it works 100% OK on
dozens of heavily loaded production servers.

---
Wert




Re: Get client IP

2019-01-16 Thread Aleksandar Lazic
Hi.

Am 16.01.2019 um 06:43 schrieb Vũ Xuân Học:
> Dear,
> 
> I fixed it. I use { src x.x.x.x ... } in use_backend and it worked.
> 
> Many thanks,

Great ;-).

How about the original issue with SSL, what is the solution now?

Best regards
Aleks

> -Original Message-
> From: Vũ Xuân Học  
> Sent: Wednesday, January 16, 2019 10:37 AM
> To: 'Aleksandar Lazic' ; 'haproxy@formilux.org' 
> ; 'PiBa-NL' 
> Subject: RE: Get client IP
> 
> Hi,
> 
> I have another problem. I want to allow only some IPs to access my website.
> Please show me how to allow some IPs by domain name.
> 
> I try with: tcp-request connection reject if { hdr(host) crmone.thaison.vn }
> !{ src x.x.x.x x.x.x.y } but it does not work. I get this error message:
>
>   keyword 'hdr' which is incompatible with 'frontend
> tcp-request connection rule'
> 
> I tried some other keywords but was not successful.
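The error makes sense: at the "tcp-request connection" stage nothing has been read yet, so no Host header (and in TLS passthrough there never is a readable one). What can work is waiting for the TLS ClientHello and matching the SNI at the content stage instead. A sketch, reusing the placeholder addresses from the mail — this is a suggestion based on the standard req.ssl_sni fetch, not something tested against this exact setup:

```
frontend ivan
    bind 192.168.0.4:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    # reject this SNI unless the source is on the allow-list
    tcp-request content reject if { req.ssl_sni -i crmone.thaison.vn } !{ src x.x.x.x x.x.x.y }
```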
> 
> 
> 
> 
> 
> -Original Message-
> From: Aleksandar Lazic 
> Sent: Monday, January 14, 2019 5:20 PM
> To: Vũ Xuân Học ; haproxy@formilux.org; 'PiBa-NL' 
> 
> Subject: Re: Get client IP
> 
> Hi.
> 
> Am 14.01.2019 um 03:11 schrieb Vũ Xuân Học:
>> Hi,
>>
>>  
>>
>> I don’t know how to use SSL in http mode. I have many sites with many
>> certificates.
>>
>> As you see:
>>
>> …
>>
>> bind 192.168.0.4:443   (I NAT port 443 from firewall to HAProxy IP
>> 192.168.0.4)
>>
>> …
>>
>> # Define hosts
>>
>> acl host_1 req.ssl_sni -i ebh.vn
>>
>> acl host_2 req.ssl_sni hdr_end(host) -i einvoice.com.vn
>>
>> … (many acl like above)
>>
>>
>> use_backend eBH if host_1
>>
>>use_backend einvoice443 if host_2
> 
> You can use maps for this.
> https://www.haproxy.com/blog/introduction-to-haproxy-maps/
> 
> The openshift router have a complex but usable solution. Don't get confused 
> with the golang template stuff in there.
> 
> https://github.com/openshift/router/blob/master/images/router/haproxy/conf/haproxy-config.template#L180
> 
> https://github.com/openshift/router/blob/master/images/router/haproxy/conf/haproxy-config.template#L198
> 
> Regards
> Aleks
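A map-based version of the SNI routing quoted above might look like the sketch below; the map path and backend names are illustrative. Each map line is "sni-value backend-name", so adding a site becomes a map edit instead of a new acl/use_backend pair:

```
frontend ivan
    bind 192.168.0.4:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend %[req.ssl_sni,lower,map(/etc/haproxy/sni.map,default_be)]

# /etc/haproxy/sni.map (one entry per site):
#   ebh.vn            eBH
#   einvoice.com.vn   einvoice443
```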
> 
>> *From:* Aleksandar Lazic 
>> *Sent:* Monday, January 14, 2019 8:45 AM
>> *To:* haproxy@formilux.org; Vũ Xuân Học ; 'PiBa-NL'
>> 
>> *Subject:* RE: Get client IP
>>
>>  
>>
>> Hi.
>>
>> As you use IIS, I strongly suggest terminating HTTPS on haproxy
>> and using mode http instead of tcp.
>>
>> Here is a blog post about basic setup of haproxy with ssl
>>
>> https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-o
>> f-stunnel-stud-nginx-or-pound/
>>
>> I assume that haproxy has the client IP, as the setup works in the http
>> config.
>>
>> Best regards
>> Aleks
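The suggestion above — terminate TLS on haproxy and switch to mode http so that option forwardfor can inject the client IP — could be sketched like this. The certificate path and backend address are placeholders; in mode http the SNI-based tcp routing is replaced by normal Host-based rules:

```
frontend https_in
    bind 192.168.0.4:443 ssl crt /etc/haproxy/certs/   # terminate TLS here
    mode http
    option forwardfor              # adds X-Forwarded-For for IIS to log
    http-request set-header X-Forwarded-Proto https
    default_backend iis

backend iis
    mode http
    server web1 192.168.0.10:80
```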
>>
>> --
>> --
>>
>> *From:* "Vũ Xuân Học" mailto:ho...@thaison.vn>>
>> *Sent:* 14 January 2019 02:17:23 CET
>> *To:* 'PiBa-NL' > >, 'Aleksandar Lazic'
>> mailto:al-hapr...@none.at>>, haproxy@formilux.org 
>> 
>> *Subject:* RE: Get client IP
>>
>>  
>>
>> Thanks for your help
>>
>>  
>>
>> I try config HAProxy with accept-proxy like this:
>>
>> frontend ivan
>>
>>  
>>
>> bind 192.168.0.4:443 accept-proxy
>>
>> mode tcp
>>
>> option tcplog
>>
>>  
>>
>> #option forwardfor
>>
>>  
>>
>> reqadd X-Forwarded-Proto:\ https
>>
>>  
>>
>> then my website cannot be accessed.
>>
>> I use IIS as the webserver and I don’t know how to accept the proxy
>> protocol; I only know how to configure X-Forwarded-For like this:
>>
>> http://www.loadbalancer.org/blog/iis-and-x-forwarded-for-header/
>>
>>  
>>
>>  
>>
>> *From:* PiBa-NL mailto:piba.nl@gmail.com>>
>> *Sent:* Sunday, January 13, 2019 10:06 PM
>> *To:* Aleksandar Lazic > >; Vũ Xuân Học > >; haproxy@formilux.org 
>> 
>> *Subject:* Re: Get client IP
>>
>>  
>>
>> Hi,
>>
>> Op 13-1-2019 om 13:11 schreef Aleksandar Lazic:
>>
>> Hi.
>>
>>  
>>
>> Am 13.01.2019 um 12:17 schrieb Vũ Xuân Học:
>>
>> Hi,
>>
>>  
>>
>> Please help me to solve this problem.
>>
>>  
>>
>> I use HAProxy version 1.5.18, SSL transparent mode and I can 
>> not get client IP
>>
>> in my .net mvc website. With mode http, I can use option 
>> forwardfor to catch
>>
>> client ip but with tcp mode, my web read X_Forwarded_For is null.
>>
>>  
>>
>>  
>>
>>  
>>
>> My diagram:
>>
>>  
>>
>> Client => Firewall => HAProxy => Web
>>
>>  
>>
>>  
>>
>>  
>>
>> I read the HAProxy documentation and tried to use send-proxy. But when
>> using send-proxy, I can not
>>
>> access my web.
>>
>>  
>>
>> This is my config:
>>
>>  
>>
>> frontend test2233
>>
>>  
>>
>> bind *:2233
>>
>>  
>>
>> option forwardfor
>>
>>