[PATCH] DOC: Fix typo in req.ssl_alpn example (commit 4afdd138424ab...)

2019-01-02 Thread Jarno Huuskonen
Also link to ssl_fc_alpn.
---
 doc/configuration.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index dc1f222..03a567d 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -15472,13 +15472,13 @@ req.ssl_alpn : string
   request buffer and not to the contents deciphered via an SSL data layer, so
   this will not work with "bind" lines having the "ssl" option. This is useful
   in ACL to make a routing decision based upon the ALPN preferences of a TLS
-  client, like in the example below.
+  client, like in the example below. See also "ssl_fc_alpn".
 
   Examples :
  # Wait for a client hello for at most 5 seconds
  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }
- use_backend bk_acme if { req_ssl.alpn acme-tls/1 }
+ use_backend bk_acme if { req.ssl_alpn acme-tls/1 }
  default_backend bk_default
 
 req.ssl_ec_ext : boolean
-- 
1.8.3.1




Re: [PATCH] MINOR: lb: allow redispatch when using constant hash

2019-01-02 Thread Willy Tarreau
On Wed, Jan 02, 2019 at 02:48:31PM +0100, Lukas Tribus wrote:
> From: Willy Tarreau 
> 
> Redispatch traditionally only worked for cookie based persistence.
> 
> Adding redispatch support for constant hash based persistence - also
> update docs.
> 
> Reported by Oskar Stenman on discourse:
> https://discourse.haproxy.org/t/balance-uri-consistent-hashing-redispatch-3-not-redispatching/3344
> 
> Should be backported to 1.8.
(...)

Ah cool, thank you Lukas, now merged!

Willy



Re: htx with compression issue, "Gunzip error: Body lacks gzip magics"

2019-01-02 Thread Willy TARREAU
Hi guys,

On Wed, Jan 02, 2019 at 07:42:37PM +0100, PiBa-NL wrote:
> The patch fixes the reg-test for me as well; I guess it's good to go :).

Great, thanks for letting me know, now merged!

Willy



Re: htx with compression issue, "Gunzip error: Body lacks gzip magics"

2019-01-02 Thread PiBa-NL

Hi Christopher, Willy,

Op 2-1-2019 om 15:37 schreef Christopher Faulet:
> Le 29/12/2018 à 01:29, PiBa-NL a écrit :
> > compression with htx, and a slightly delayed body content it will
> > prefix some rubbish and corrupt the gzip header..
>
> Hi Pieter,
>
> In fact, it is not a bug related to the compression, but a pure HTX
> one, about the defragmentation when we need space to store data. Here
> is a patch; it fixes the problem for me.

Okay, so the compression somehow 'triggers' this defragmentation to
happen; are there simpler ways to make that happen 'on demand'?

> Willy, if it is ok for you, I can merge it in upstream and backport it
> in 1.9.
>
> --
> Christopher Faulet

The patch fixes the reg-test for me as well; I guess it's good to go :).
Thanks.


Regards,
PiBa-NL (Pieter)




Re: State of 0-RTT TLS resumption with OpenSSL

2019-01-02 Thread Olivier Houchard
Hi Janusz,

On Sun, Dec 30, 2018 at 05:38:26PM +0100, Janusz Dziemidowicz wrote:
> Hi,
> I've been trying to get 0-RTT resumption working with haproxy 1.8.16
> and OpenSSL 1.1.1a.
> No matter what I put in configuration file, testing with openssl
> s_client always results in:
> Max Early Data: 0
> 
> OK, let's look at ssl_sock.c
> The only thing that seems to try to enable 0-RTT is this:
> #ifdef OPENSSL_IS_BORINGSSL
> if (allow_early)
> SSL_set_early_data_enabled(ssl, 1);
> #else
> if (!allow_early)
> SSL_set_max_early_data(ssl, 0);
> #endif
> 
> But I fail to see how this is supposed to work. OpenSSL has 0-RTT
> disabled by default. To enable this one must call
> SSL_set_max_early_data with the amount of bytes it is willing to read.
> The above simply does... nothing.
> 
> Is it supposed to work at all or do I miss something? ;)
> 

You're right indeed. 0RTT was added with a development version of OpenSSL 1.1.1,
which had a default value for max early data of 16384, but it was changed to
0 in the meantime.
Does the attached patch work for you ?

Thanks !

Olivier
>From cdb864da7cebb97800aef2e114bae6f0d0f96814 Mon Sep 17 00:00:00 2001
From: Olivier Houchard 
Date: Wed, 2 Jan 2019 18:46:41 +0100
Subject: [PATCH] MEDIUM: ssl: Call SSL_CTX_set_max_early_data() to enable
 0RTT.

When we want to enable early data on a listener, explicitly call
SSL_CTX_set_max_early_data(), as the default is now 0.

This should be backported to 1.8.
---
 src/ssl_sock.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 282b85dd..c24de955 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -3869,6 +3869,8 @@ ssl_sock_initial_ctx(struct bind_conf *bind_conf)
SSL_CTX_set_select_certificate_cb(ctx, ssl_sock_switchctx_cbk);
SSL_CTX_set_tlsext_servername_callback(ctx, ssl_sock_switchctx_err_cbk);
 #elif (OPENSSL_VERSION_NUMBER >= 0x10101000L)
+   if (bind_conf->ssl_conf.early_data)
+   SSL_CTX_set_max_early_data(ctx, global.tune.bufsize - global.tune.maxrewrite);
SSL_CTX_set_client_hello_cb(ctx, ssl_sock_switchctx_cbk, NULL);
SSL_CTX_set_tlsext_servername_callback(ctx, ssl_sock_switchctx_err_cbk);
 #else
-- 
2.14.4
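For reference, once the patch is applied, early data still has to be requested on the bind line; a minimal config sketch (certificate path and proxy names are placeholders):

```
frontend fe_tls
    # "allow-0rtt" opts this listener into TLS 1.3 early data; with the
    # patch above it now translates into a non-zero max_early_data
    bind :443 ssl crt /etc/haproxy/cert.pem allow-0rtt alpn h2,http/1.1
    default_backend be_app
```

With a saved session, `openssl s_client` should then report a non-zero "Max Early Data" on reconnect instead of the 0 seen in Janusz's test.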



Re: htx with compression issue, "Gunzip error: Body lacks gzip magics"

2019-01-02 Thread Willy TARREAU
On Wed, Jan 02, 2019 at 03:37:54PM +0100, Christopher Faulet wrote:
> In fact, it is not a bug related to the compression, but a pure HTX one,
> about the defragmentation when we need space to store data. Here is a patch;
> it fixes the problem for me.
> 
> Willy, if it is ok for you, I can merge it in upstream and backport it in
> 1.9.

I'm always OK, especially for bugs I don't understand :-)

Willy



Re: htx with compression issue, "Gunzip error: Body lacks gzip magics"

2019-01-02 Thread Christopher Faulet

Le 29/12/2018 à 01:29, PiBa-NL a écrit :

Hi List,

When using compression with htx and a slightly delayed body content, it 
will prefix some rubbish and corrupt the gzip header..


Below is the output I get with the attached test. Removing http-use-htx 
'fixes' the test.


This happens with both 1.9.0 and today's commit a2dbeb2; not sure if this 
ever worked before..


 c1    0.1 len|1A\r
 c1    0.1 
chunk|\222\7\0\0\0\377\377\213\10\0\0\0\0\0\4\3JLJN\1\0\0\0\377\377

 c1    0.1 len|0\r
 c1    0.1 bodylen = 26
**   c1    0.1 === expect resp.status == 200
 c1    0.1 EXPECT resp.status (200) == "200" match
**   c1    0.1 === expect resp.http.content-encoding == "gzip"
 c1    0.1 EXPECT resp.http.content-encoding (gzip) == "gzip" match
**   c1    0.1 === gunzip
 c1    0.1 Gunzip error: Body lacks gzip magics

Can someone take a look? Thanks in advance.



Hi Pieter,

In fact, it is not a bug related to the compression, but a pure HTX one, 
about the defragmentation when we need space to store data. Here is a 
patch; it fixes the problem for me.


Willy, if it is ok for you, I can merge it in upstream and backport it 
in 1.9.


--
Christopher Faulet
>From a6e9d6be951b8724921d1582a2cddea81b5b6a6a Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Wed, 2 Jan 2019 11:23:44 +0100
Subject: [PATCH] BUG/MAJOR: htx: Return the good block address after a defrag

When an HTX structure is defragmented, it is possible to retrieve the new block
corresponding to an old one. This is useful to do a defrag during a loop on
blocks, to be sure to continue looping on the good block. But, instead of
returning the address of the new block in the HTX structure, the one in the
temporary structure used to do the defrag was returned, leading to unexpected
behaviours.

This patch must be backported to 1.9.
---
 src/htx.c | 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/src/htx.c b/src/htx.c
index bda293b43..83243e060 100644
--- a/src/htx.c
+++ b/src/htx.c
@@ -26,13 +26,15 @@ struct htx_blk *htx_defrag(struct htx *htx, struct htx_blk *blk)
 	struct buffer *chunk = get_trash_chunk();
 	struct htx *tmp = htxbuf(chunk);
 	struct htx_blk *newblk, *oldblk;
-	uint32_t new, old;
+	uint32_t new, old, blkpos;
 	uint32_t addr, blksz;
 	int32_t sl_off = -1;
 
 	if (!htx->used)
 		return NULL;
 
+	blkpos = -1;
+
 	new  = 0;
 	addr = 0;
 	tmp->size = htx->size;
@@ -54,13 +56,14 @@ struct htx_blk *htx_defrag(struct htx *htx, struct htx_blk *blk)
 		if (htx->sl_off == oldblk->addr)
 			sl_off = addr;
 
+		/* if  is defined, set its new position */
+		if (blk != NULL && blk == oldblk)
+			blkpos = new;
+
 		memcpy((void *)tmp->blocks + addr, htx_get_blk_ptr(htx, oldblk), blksz);
 		new++;
 		addr += blksz;
 
-		/* if  is defined, set its new location */
-		if (blk != NULL && blk == oldblk)
-			blk = newblk;
 	} while (new < htx->used);
 
 	htx->sl_off = sl_off;
@@ -68,7 +71,7 @@ struct htx_blk *htx_defrag(struct htx *htx, struct htx_blk *blk)
 	htx->front = htx->tail = new - 1;
 	memcpy((void *)htx->blocks, (void *)tmp->blocks, htx->size);
 
-	return blk;
+	return ((blkpos == -1) ? NULL : htx_get_blk(htx, blkpos));
 }
 
 /* Reserves a new block in the HTTP message  with a content of 
-- 
2.19.2



Re: Setting a unique header per server in a backend

2019-01-02 Thread Sachin Shetty
Thank you, Willy, for the prompt response.

We have a lot of servers, 100s of them, but we are generating the configs
using scripts, so this will logically work for us; it just makes the
config long and complex. I will try it out.

Thanks
Sachin

On Wed, Jan 2, 2019 at 7:43 PM Willy Tarreau  wrote:

> Hi Sachin,
>
> On Wed, Jan 02, 2019 at 07:33:03PM +0530, Sachin Shetty wrote:
> > Hi Willy,
> >
> > It seems the http-send-name-header directive is not sent with
> health-check
> > and I need it in the health-check as well :)
>
> Indeed it's not supported there because the health checks are independent
> of the traffic and could even be sent somewhere else. Also the request is
> forged per backend and the same request is sent to all servers in the farm.
>
> > is there a way to make it work with health-check as well?
>
> There is a solution; it's not pretty, and it depends on the number of servers
> you're dealing with in your farm. It consists in replacing health checks
> with trackers and manually configuring your health checks in separate
> backends, one per server. For example :
>
>backend my_prod_backend
> server s1 1.1.1.1:80 track chk_s1/srv
> server s2 1.1.1.2:80 track chk_s2/srv
> server s3 1.1.1.3:80 track chk_s3/srv
>
>backend chk_s1
> option httpchk GET /foo "HTTP/1.0\r\nHost: blah\r\nsrv: s1"
> server srv 1.1.1.1:80 check
>
>backend chk_s2
> option httpchk GET /foo "HTTP/1.0\r\nHost: blah\r\nsrv: s2"
> server srv 1.1.1.1:80 check
>
>backend chk_s3
> option httpchk GET /foo "HTTP/1.0\r\nHost: blah\r\nsrv: s3"
> server srv 1.1.1.1:80 check
>
> As you can see, the check is performed by these chk_* backends, and
> reflected in the prod backend thanks to the "track" directive. I know
> it's not pretty but it provides a lot of flexibility, including the
> ability to have different checks per server.
>
> We definitely need to revamp the whole check subsystem to bring more
> flexibility...
>
> Cheers,
> Willy
>


Re: Setting a unique header per server in a backend

2019-01-02 Thread Willy Tarreau
Hi Sachin,

On Wed, Jan 02, 2019 at 07:33:03PM +0530, Sachin Shetty wrote:
> Hi Willy,
> 
> It seems the http-send-name-header directive is not sent with health-check
> and I need it in the health-check as well :)

Indeed it's not supported there because the health checks are independent
of the traffic and could even be sent somewhere else. Also the request is
forged per backend and the same request is sent to all servers in the farm.

> is there a way to make it work with health-check as well?

There is a solution; it's not pretty, and it depends on the number of servers
you're dealing with in your farm. It consists in replacing health checks
with trackers and manually configuring your health checks in separate
backends, one per server. For example :

   backend my_prod_backend
server s1 1.1.1.1:80 track chk_s1/srv
server s2 1.1.1.2:80 track chk_s2/srv
server s3 1.1.1.3:80 track chk_s3/srv

   backend chk_s1
option httpchk GET /foo "HTTP/1.0\r\nHost: blah\r\nsrv: s1"
server srv 1.1.1.1:80 check

   backend chk_s2
option httpchk GET /foo "HTTP/1.0\r\nHost: blah\r\nsrv: s2"
server srv 1.1.1.1:80 check

   backend chk_s3
option httpchk GET /foo "HTTP/1.0\r\nHost: blah\r\nsrv: s3"
server srv 1.1.1.1:80 check

As you can see, the check is performed by these chk_* backends, and
reflected in the prod backend thanks to the "track" directive. I know
it's not pretty but it provides a lot of flexibility, including the
ability to have different checks per server.

We definitely need to revamp the whole check subsystem to bring more
flexibility...

Cheers,
Willy



Re: Setting a unique header per server in a backend

2019-01-02 Thread Sachin Shetty
Hi Willy,

It seems the http-send-name-header directive is not sent with health-check
and I need it in the health-check as well :)

is there a way to make it work with health-check as well?

Thanks
Sachin



On Tue, Dec 18, 2018 at 5:18 PM Sachin Shetty  wrote:

> Thank you, Willy. http-send-name-header works for my use case.
>
> @Norman - Yes, we are looking at replacing the usage of X- headers.
>
> Thanks
> Sachin
>
> On Mon, Dec 17, 2018 at 2:18 AM Norman Branitsky <
> norman.branit...@micropact.com> wrote:
>
>> Don't forget the "X-" header prefix is deprecated:
>> https://tools.ietf.org/html/rfc6648
>>
>> Norman Branitsky
>>
>> On Dec 16, 2018, at 03:50, Willy Tarreau  wrote:
>>
>> Hi Sachin,
>>
>> On Sat, Dec 15, 2018 at 10:32:21PM +0530, Sachin Shetty wrote:
>>
>> Hi,
>>
>>
>> We have a tricky requirement to set a different header value in the
>> request
>>
>> based on which server in a backend is picked.
>>
>>
>> backend pod0
>>
>>...
>>
>>server server1 server1:6180  check
>>
>>server server2 server2:6180  check
>>
>>server server3 server3:6180  check
>>
>>
>> so when request is forwarded to server1 - I want to inject an header
>>
>> "X-Some-Header: Server1",  "X-Some-Header: Server2"  for server 2 and so
>>
>> on.
>>
>>
>> You have this with "http-send-name-header", you need to pass it the
>> header field name and it will fill the value with the server's name.
>> It will even support redispatch by rewinding the stream and rewriting
>> the value (which made it very tricky and infamous for quite some time).
>>
>> Is it possible to register some Lua action that would inject the header
>>
>> based on the server selected before the request is forwarded to the
>> server?
>>
>>
>> In fact except for the directive above it's not possible to perform
>> changes after the server has been selected, because the server is
>> selected when trying to connect, which happens after the contents are
>> being forwarded, thus you can't perform any processing anymore. There
>> is quite some ugly code to support http-send-name-header and it cannot
>> be generalized at all. Just to give you an idea, think that a hash-based
>> LB algo (balance uri, balance hdr) could decide to use some contents
>> you're about to modify... So the contents have to be fixed before the
>> server is chosen.
>>
>> Cheers,
>> Willy
>>
>>
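For completeness, the directive discussed above is a single backend-level line; a config sketch based on Sachin's example (server names, ports and the header name are taken from his mail):

```
backend pod0
    # fills the named header with the name of the server finally picked,
    # even across a redispatch (traffic only -- health checks are not affected)
    http-send-name-header X-Some-Header
    server server1 server1:6180 check
    server server2 server2:6180 check
    server server3 server3:6180 check
```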


Re: Seamless reloads: file descriptors utilization in LUA

2019-01-02 Thread Lukas Tribus
Hello,


On Wed, 2 Jan 2019 at 14:54, Lukas Tribus  wrote:
>
> Hello,
>
> On Sun, 15 Jul 2018 at 07:19, Wert  wrote:
> >
> > Hello,
> >
> > 1. When in Lua
> > - I open some socket and leave it unclosed (even a UDP-sender socket)
> > - Or open some files (for example, I use a Lua maxmind lib that opens a
> > GEO-DB file)
> >
> > They are never destroyed. With each reload the amount of used descriptors
> > grows and finally reaches the limits.
> > According to "lsof", all sockets and descriptors belong to the master
> > process and all new worker processes.
> >
> > There should be some way to destroy them during reload, or to really use
> > the advantages of such transfers.
> >
> > Tested with Haproxy 1.8.12
> >
> > 2. Since haproxy has Lua, users could need file descriptors that are
> > impossible to count.
> > Is there any real reason to keep the "auto-calculated" ulimit-n option
> > with very low values, based just on connection limits?
> >
> > Of course, it is easy to set (for those who read the docs very carefully
> > =)), but some extra value could cover a few more cases "out of the box",
> > also making FD-related bugs a bit less critical.
> >
> > At least some warning in the docs for this option would be useful.

CC'ing Thierry: as this has come on this discourse, can we have your
opinion about the FD's in LUA and howto best handle ulimit?


Apologies for the duplicate mail.


Thanks,
Lukas



Re: Seamless reloads: file descriptors utilization in LUA

2019-01-02 Thread Lukas Tribus
Hello,

On Sun, 15 Jul 2018 at 07:19, Wert  wrote:
>
> Hello,
>
> 1. When in Lua
> - I open some socket and leave it unclosed (even a UDP-sender socket)
> - Or open some files (for example, I use a Lua maxmind lib that opens a
> GEO-DB file)
>
> They are never destroyed. With each reload the amount of used descriptors
> grows and finally reaches the limits.
> According to "lsof", all sockets and descriptors belong to the master
> process and all new worker processes.
>
> There should be some way to destroy them during reload, or to really use
> the advantages of such transfers.
>
> Tested with Haproxy 1.8.12
>
> 2. Since haproxy has Lua, users could need file descriptors that are
> impossible to count.
> Is there any real reason to keep the "auto-calculated" ulimit-n option
> with very low values, based just on connection limits?
>
> Of course, it is easy to set (for those who read the docs very carefully
> =)), but some extra value could cover a few more cases "out of the box",
> also making FD-related bugs a bit less critical.
>
> At least some warning in the docs for this option would be useful.
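Until the auto-computed limit accounts for Lua-opened descriptors, it can be pinned manually with the global "ulimit-n" keyword; a config sketch (the value is an arbitrary example, size it to your workload):

```
global
    # override haproxy's auto-computed fd limit (normally derived from
    # maxconn) to leave headroom for descriptors opened from Lua
    ulimit-n 65536
```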



[PATCH] MINOR: lb: allow redispatch when using constant hash

2019-01-02 Thread Lukas Tribus
From: Willy Tarreau 

Redispatch traditionally only worked for cookie based persistence.

Adding redispatch support for constant hash based persistence - also
update docs.

Reported by Oskar Stenman on discourse:
https://discourse.haproxy.org/t/balance-uri-consistent-hashing-redispatch-3-not-redispatching/3344

Should be backported to 1.8.
---
 doc/configuration.txt|  4 ++--
 include/proto/lb_chash.h |  2 +-
 src/backend.c| 44 ++--
 src/lb_chash.c   |  5 +++--
 4 files changed, 28 insertions(+), 27 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index dc1f222..25c155b 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -6764,8 +6764,8 @@ no option redispatch
   definitely stick to it because they cannot flush the cookie, so they will not
   be able to access the service anymore.
 
-  Specifying "option redispatch" will allow the proxy to break their
-  persistence and redistribute them to a working server.
+  Specifying "option redispatch" will allow the proxy to break cookie or
+  constant hash based persistence and redistribute them to a working server.
 
   It also allows to retry connections to another server in case of multiple
   connection failures. Of course, it requires having "retries" set to a nonzero
diff --git a/include/proto/lb_chash.h b/include/proto/lb_chash.h
index a0ebf69..679dff3 100644
--- a/include/proto/lb_chash.h
+++ b/include/proto/lb_chash.h
@@ -28,7 +28,7 @@
 
 void chash_init_server_tree(struct proxy *p);
 struct server *chash_get_next_server(struct proxy *p, struct server *srvtoavoid);
-struct server *chash_get_server_hash(struct proxy *p, unsigned int hash);
+struct server *chash_get_server_hash(struct proxy *p, unsigned int hash, const struct server *avoid);
 
 #endif /* _PROTO_LB_CHASH_H */
 
diff --git a/src/backend.c b/src/backend.c
index 3c1620b..c92e761 100644
--- a/src/backend.c
+++ b/src/backend.c
@@ -165,7 +165,7 @@ void update_backend_weight(struct proxy *px)
  * If any server is found, it will be returned. If no valid server is found,
  * NULL is returned.
  */
-static struct server *get_server_sh(struct proxy *px, const char *addr, int len)
+static struct server *get_server_sh(struct proxy *px, const char *addr, int len, const struct server *avoid)
 {
 	unsigned int h, l;
 
@@ -186,7 +186,7 @@ static struct server *get_server_sh(struct proxy *px, const char *addr, int len)
 	h = full_hash(h);
  hash_done:
 	if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
-		return chash_get_server_hash(px, h);
+		return chash_get_server_hash(px, h, avoid);
 	else
 		return map_get_server_hash(px, h);
 }
@@ -203,7 +203,7 @@ static struct server *get_server_sh(struct proxy *px, const char *addr, int len)
  * algorithm out of a tens because it gave him the best results.
  *
  */
-static struct server *get_server_uh(struct proxy *px, char *uri, int uri_len)
+static struct server *get_server_uh(struct proxy *px, char *uri, int uri_len, const struct server *avoid)
 {
 	unsigned int hash = 0;
 	int c;
@@ -239,7 +239,7 @@ static struct server *get_server_uh(struct proxy *px, char *uri, int uri_len)
 	hash = full_hash(hash);
  hash_done:
 	if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
-		return chash_get_server_hash(px, hash);
+		return chash_get_server_hash(px, hash, avoid);
 	else
 		return map_get_server_hash(px, hash);
 }
@@ -253,7 +253,7 @@ static struct server *get_server_uh(struct proxy *px, char *uri, int uri_len)
  * is returned. If any server is found, it will be returned. If no valid server
  * is found, NULL is returned.
  */
-static struct server *get_server_ph(struct proxy *px, const char *uri, int uri_len)
+static struct server *get_server_ph(struct proxy *px, const char *uri, int uri_len, const struct server *avoid)
 {
 	unsigned int hash = 0;
 	const char *start, *end;
@@ -296,7 +296,7 @@ static struct server *get_server_ph(struct proxy *px, const char *uri, int uri_l
 	hash = full_hash(hash);
 
 	if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
-		return chash_get_server_hash(px, hash);
+		return chash_get_server_hash(px, hash, avoid);
 	else
 		return map_get_server_hash(px, hash);
 }
@@ -315,7 +315,7 @@ static struct server *get_server_ph(struct proxy *px, const char *uri, int uri_l
 /*
  * this does the same as the previous server_ph, but check the body contents
  */
-static struct server *get_server_ph_post(struct stream *s)
+static struct server *get_server_ph_post(struct stream *s, const struct server *avoid)
 {
 	unsigned int hash = 0;
 	struct http_txn *txn  = s->txn;
@@ 

[RFC PATCH] couple of reg-tests

2019-01-02 Thread Jarno Huuskonen
Hello,

I started playing with reg-tests and came up with a couple of reg-tests.
Is there a better subdirectory for these than http-rules ? Maybe
map/b0.vtc and converter/h* ?

I'm attaching the tests for comments.

-Jarno

-- 
Jarno Huuskonen
>From e75f2ef8b461caa164e81e2d39630e3b2e8791f4 Mon Sep 17 00:00:00 2001
From: Jarno Huuskonen 
Date: Thu, 27 Dec 2018 11:58:13 +0200
Subject: [PATCH 1/4] REGTESTS: test case for map_regm commit 271022150d

Minimal test case for map_regm commit 271022150d7961b9aa39dbfd88e0c6a4bc48c3ee.
Config and test are adapted from Daniel Schneller's example
(https://www.mail-archive.com/haproxy@formilux.org/msg30523.html).
---
 reg-tests/http-rules/b0.map |  1 +
 reg-tests/http-rules/b0.vtc | 77 +
 2 files changed, 78 insertions(+)
 create mode 100644 reg-tests/http-rules/b0.map
 create mode 100644 reg-tests/http-rules/b0.vtc

diff --git a/reg-tests/http-rules/b0.map b/reg-tests/http-rules/b0.map
new file mode 100644
index 000..08ffcfb
--- /dev/null
+++ b/reg-tests/http-rules/b0.map
@@ -0,0 +1 @@
+^(.*)\.(.*)$ \1_AND_\2
diff --git a/reg-tests/http-rules/b0.vtc b/reg-tests/http-rules/b0.vtc
new file mode 100644
index 000..bdc3b34
--- /dev/null
+++ b/reg-tests/http-rules/b0.vtc
@@ -0,0 +1,77 @@
+#commit 271022150d7961b9aa39dbfd88e0c6a4bc48c3ee
+#BUG/MINOR: map: fix map_regm with backref
+#
+#Due to a cascade of get_trash_chunk calls the sample is
+#corrupted when we want to read it.
+#
+#The fix consist to use a temporary chunk to copy the sample
+#value and use it.
+
+varnishtest "map_regm get_trash_chunk test"
+feature ignore_unknown_macro
+
+#REQUIRE_VERSION=1.6
+syslog S1 -level notice {
+recv
+expect ~ "[^:\\[ ]\\[${h1_pid}\\]: Proxy (fe|be)1 started."
+recv
+expect ~ "[^:\\[ ]\\[${h1_pid}\\]: Proxy (fe|be)1 started."
+recv info
+# not expecting ${h1_pid} with master-worker
+expect ~ "[^:\\[ ]\\[[[:digit:]]+\\]: .* fe1 be1/s1 [[:digit:]]+/[[:digit:]]+/[[:digit:]]+/[[:digit:]]+/[[:digit:]]+ 200 [[:digit:]]+ - -  .* \"GET / HTTP/(1|2)(\\.1)?\""
+} -start
+
+server s1 {
+   rxreq
+   expect req.method == "GET"
+   expect req.http.x-mapped-from-header == example_AND_org
+   expect req.http.x-mapped-from-var == example_AND_org
+   txresp
+
+   rxreq
+   expect req.method == "GET"
+   expect req.http.x-mapped-from-header == www.example_AND_org
+   expect req.http.x-mapped-from-var == www.example_AND_org
+   txresp
+} -start
+
+haproxy h1 -conf {
+  global
+log ${S1_addr}:${S1_port} local0 debug err
+
+  defaults
+mode http
+${no-htx} option http-use-htx
+log global
+option httplog
+timeout connect 15ms
+timeout client  20ms
+timeout server  20ms
+
+  frontend fe1
+bind "fd@${fe1}"
+# Remove port from Host header
+http-request replace-value Host '(.*):.*' '\1'
+# Store host header in variable
+http-request set-var(txn.host) req.hdr(Host)
+# This works correctly
+http-request set-header X-Mapped-From-Header %[req.hdr(Host),map_regm(${testdir}/b0.map,"unknown")]
+# This breaks before commit 271022150d7961b9aa39dbfd88e0c6a4bc48c3ee
+http-request set-header X-Mapped-From-Var %[var(txn.host),map_regm(${testdir}/b0.map,"unknown")]
+
+default_backend be1
+
+backend be1
+server s1 ${s1_addr}:${s1_port}
+} -start
+
+client c1 -connect ${h1_fe1_sock} {
+txreq -hdr "Host: example.org:8443"
+rxresp
+expect resp.status == 200
+
+txreq -hdr "Host: www.example.org"
+rxresp
+expect resp.status == 200
+} -run
+
-- 
1.8.3.1

>From cd8c246769267bfcf69acef29104cef86ace4032 Mon Sep 17 00:00:00 2001
From: Jarno Huuskonen 
Date: Tue, 1 Jan 2019 13:39:52 +0200
Subject: [PATCH 2/4] REGTESTS: Basic tests for using maps to redirect requests
 / select backend

---
 reg-tests/http-rules/h3-be.map |   4 +
 reg-tests/http-rules/h3.map|   3 +
 reg-tests/http-rules/h3.vtc| 174 +
 3 files changed, 181 insertions(+)
 create mode 100644 reg-tests/http-rules/h3-be.map
 create mode 100644 reg-tests/http-rules/h3.map
 create mode 100644 reg-tests/http-rules/h3.vtc

diff --git a/reg-tests/http-rules/h3-be.map b/reg-tests/http-rules/h3-be.map
new file mode 100644
index 000..c8822fc
--- /dev/null
+++ b/reg-tests/http-rules/h3-be.map
@@ -0,0 +1,4 @@
+# These entries are used for use_backend rules
+test1.example.com  test1_be
+test1.example.invalid  test1_be
+test2.example.com  test2_be
diff --git a/reg-tests/http-rules/h3.map b/reg-tests/http-rules/h3.map
new file mode 100644
index 000..a0cc02d
--- /dev/null
+++ b/reg-tests/http-rules/h3.map
@@ -0,0 +1,3 @@
+# These entries are used for http-request redirect rules
+example.org https://www.example.org
+subdomain.example.org https://www.subdomain.example.org
diff --git