Re: [PATCH] JWT payloads break b64dec convertor

2021-04-13 Thread Moemen MHEDHBI

On 13/04/2021 11:39, Willy Tarreau wrote:

>> You can find attached the patches 0001-bis and 0002-bis modifying the
>> existing functions (introducing an url flag) to see how it looks.
>> This solution may be cleaner (no chunk allocation and we don't loop
>> twice over the input string) but has the drawback of being intrusive
>> to the rest of the code and less clear, imo, regarding how the url
>> variant differs from standard base64.
> 
> I agree they're not pretty due to the change of logic around the padding,
> thanks for having tested! But then how about having just *your* functions
> without relying on the other ones ? Now that you've extended the existing
> function, you can declare it yours, remove all the configurable stuff and
> keep the simplified version as the one you need. I'm sure it will be the
> best tradeoff overall.
>

Yes, that makes sense to me too; the attached patch deals with it as
suggested.


>> diff --git a/src/base64.c b/src/base64.c
>> index 53e4d65b2..f2768d980 100644
>> --- a/src/base64.c
>> +++ b/src/base64.c
>> @@ -1,5 +1,5 @@
>>  /*
>> - * ASCII <-> Base64 conversion as described in RFC1421.
>> + * ASCII <-> Base64 conversion as described in RFC4648.
>>   *
>>   * Copyright 2006-2010 Willy Tarreau 
>>   * Copyright 2009-2010 Krzysztof Piotr Oledzki 
>> @@ -17,50 +17,70 @@
>>  #include 
>>  #include 
>>  
>> -#define B64BASE '#' /* arbitrary chosen base value */
>> -#define B64CMIN '+'
>> -#define B64CMAX 'z'
>> -#define B64PADV 64  /* Base64 chosen special pad value */
>> +#define B64BASE  '#'   /* arbitrary chosen base value */
>> +#define B64CMIN  '+'
>> +#define UB64CMIN '-'
>> +#define B64CMAX  'z'
>> +#define B64PADV  64   /* Base64 chosen special pad value */
> 
> Please do not needlessly reindent code parts for no reason. It seems that
> only the "-" was added there, the rest shouldn't change.

The reason is that I was following doc/coding-style, where alignment
should use spaces, but since the existing block was aligned with tabs, I
thought about fixing that instead of repeating the issue. I understand
though that in such a case it is better to do this in a separate commit,
so I have stuck with the tab alignment.

> By the way, contrib/ was moved to dev/ during your changes, so if you
> keep this comment please update it.

Done.

On 13/04/2021 08:19, Jarno Huuskonen wrote:
> Could you add a cross reference from b64dec/base64 to ub64dec/ub64enc in
> configuration.txt.

Done, thanks.


-- 
Moemen
>From b526416364b98afaa2d2b421fbf27f80bc4e8732 Mon Sep 17 00:00:00 2001
From: Moemen MHEDHBI 
Date: Fri, 2 Apr 2021 01:05:07 +0200
Subject: [PATCH 2/2] CLEANUP: align samples list in sample.c

---
 src/sample.c | 54 ++--
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/src/sample.c b/src/sample.c
index 04635a91f..7337ba06a 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -4129,33 +4129,33 @@ INITCALL1(STG_REGISTER, sample_register_fetches, &smp_kws);
 
 /* Note: must not be declared <const> as its list will be overwritten */
 static struct sample_conv_kw_list sample_conv_kws = {ILH, {
-	{ "debug",  sample_conv_debug, ARG2(0,STR,STR), smp_check_debug, SMP_T_ANY,  SMP_T_ANY },
-	{ "b64dec", sample_conv_base642bin,0,NULL, SMP_T_STR,  SMP_T_BIN  },
-	{ "base64", sample_conv_bin2base64,0,NULL, SMP_T_BIN,  SMP_T_STR  },
-	{ "ub64dec", sample_conv_base64url2bin,0,NULL, SMP_T_STR,  SMP_T_BIN  },
-	{ "ub64enc", sample_conv_bin2base64url,0,NULL, SMP_T_BIN,  SMP_T_STR  },
-	{ "upper",  sample_conv_str2upper, 0,NULL, SMP_T_STR,  SMP_T_STR  },
-	{ "lower",  sample_conv_str2lower, 0,NULL, SMP_T_STR,  SMP_T_STR  },
-	{ "length", sample_conv_length,0,NULL, SMP_T_STR,  SMP_T_SINT },
-	{ "hex",sample_conv_bin2hex,   0,NULL, SMP_T_BIN,  SMP_T_STR  },
-	{ "hex2i",  sample_conv_hex2int,   0,NULL, SMP_T_STR,  SMP_T_SINT },
-	{ "ipmask", sample_conv_ipmask,ARG2(1,MSK4,MSK6), NULL, SMP_T_ADDR, SMP_T_IPV4 },
-	{ "ltime",  sample_conv_ltime, ARG2(1,STR,SINT), NULL, SMP_T_SINT, SMP_T_STR },
-	{ "utime",  sample_conv_utime, ARG2(1,STR,SINT), NULL, SMP_T_SINT, SMP_T_STR },
-	{ "crc32",  sample_conv_crc32, ARG1(0,SINT), NULL, SMP_T_BIN,  SMP_T_SINT  },
-	{ "crc32c", sample_conv_crc32c,ARG1(0,SINT), NULL, SMP_T_BIN,  SMP_T_SINT  },
-	{ "djb2",   sample_conv_djb2,  ARG1(0,SINT), NULL, SMP_T_BIN,  SMP_T_SINT  },
-	{ "sdbm",   sample_conv_sdbm,  ARG1(0,SINT), NULL, SMP_T_BIN,  SMP_

Re: [PATCH] MINOR: sample: add json_string

2021-04-12 Thread Moemen MHEDHBI



On 08/04/2021 21:55, Aleksandar Lazic wrote:
> Hi.
> 
> Attached the patch to add the json_string sample.
> 
> In combination with the JWT patch, a pre-validation of a bearer token
> part is possible.
> 
> I have something like this in mind.
> 
> http-request set-var(sess.json) req.hdr(Authorization),word(2,.),ub64dec,json_string('$.iss')
> http-request deny unless { var(sess.json) -m str 'kubernetes/serviceaccount' }
> 
> Regards
> Aleks

Hi,
I have also thought about something similar.
However I am not sure using a third-party library is encouraged, because
it may make the code less portable. Also, using a third-party library by
directly importing its code may be hard to maintain later.
In the end I am wondering if it would not be easier to handle JSON
parsing via a Lua module, for example.

-- 
Moemen



Re: [PATCH] JWT payloads break b64dec convertor

2021-04-12 Thread Moemen MHEDHBI


On 12/04/2021 23:13, Aleksandar Lazic wrote:
> Hi Moemen,
> 
> any chance to get this feature before 2.4 is released?
> 
> Regards
> Aleks
> 

Hi Aleksandar,
I have updated the patch (attached) so it can be reviewed and eventually
merged.
I know this is going to be useful for what you are trying to do with the
json converter, so I will try to be more active on this.


On 06/04/2021 09:13, Willy Tarreau wrote:

>> in such a case should we rather use dynamic allocation?
>
> No, there are two possible approaches. One of them is to use a trash
> buffer using get_trash_chunk(). The trash buffers are "large enough"
> for anything that comes from outside. A second, cleaner solution
> simply consists in not using a temporary buffer but doing the conversion
> on the fly. Indeed, looking closer, what the function does is to first
> replace a few chars on the whole chain to then call the base64 conversion
> function. So it doubles the work on the string and one side effect of
> this double work is that you need a temporary storage.

The url variant is not only about a different alphabet that needs to be
translated: it is also a non-padding variant. So the straightforward
algorithm for decoding it is to re-add the padding to the input encoded
in the url variant and then use the standard base64 decoder. Since a
valid base64 length modulo 4 is never 1, at most two '=' characters are
ever needed.
Even doing this on the fly requires extending the input by up to two
bytes. Unless I am missing something, in that case an on-the-fly
conversion will result in an out-of-bounds array access. That's why I
had copied the input into an "inlen+2" string.

In the end I have updated the patch to handle extending the input in the
decoding function via get_trash_chunk, to make sure a buffer of size
input+2 is available.
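
For illustration, here is a minimal sketch of that approach; the names
and the scratch-buffer contract are hypothetical, only base64dec() is
the existing decoder declared in haproxy/base64.h, and 'scratch' stands
for the ilen+2 sized trash area mentioned above:

#include <stddef.h>

int base64dec(const char *in, size_t ilen, char *out, size_t olen); /* haproxy/base64.h */

/* Sketch only: translate the URL-safe alphabet back to the standard
 * one into 'scratch' (at least ilen + 2 bytes), re-append the stripped
 * '=' padding, then reuse the standard decoder.
 */
int b64url2bin_sketch(const char *in, size_t ilen, char *scratch,
                      char *out, size_t olen)
{
	size_t i;

	if (ilen % 4 == 1)
		return -1;  /* impossible length for valid base64url */
	for (i = 0; i < ilen; i++) {
		if (in[i] == '-')
			scratch[i] = '+';
		else if (in[i] == '_')
			scratch[i] = '/';
		else
			scratch[i] = in[i];
	}
	while (i % 4)
		scratch[i++] = '=';  /* at most two pad bytes: why ilen+2 suffices */
	return base64dec(scratch, i, out, olen);
}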

You can find attached the patches 0001 and 0002 for this implementation.


> Other approaches would consist in either reimplementing the functions
> with a different alphabet, or modifying the existing ones to take an
> extra argument for the conversion table, and make one set of functions
> making use of the current table and another set making use of your new
> table.
>
> Willy
>

You can find attached the patches 0001-bis and 0002-bis modifying the
existing functions (introducing an url flag) to see how it looks.
This solution may be cleaner (no chunk allocation and we don't loop
twice over the input string) but has the drawback of being intrusive
to the rest of the code and less clear, imo, regarding how the url
variant differs from standard base64.
Feel free to pick the one that looks better, otherwise I can continue
with a different implementation if need be.
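
For the archive, a rough sketch (hypothetical names, not the code from
the bis patches) of what such a single-pass decoder with an url flag can
look like; it shows why that variant needs neither a second loop nor an
enlarged copy of the input:

#include <stddef.h>

/* Illustrative only: translate the two alphabet differences while
 * reading, and treat a missing '=' tail as implicit padding, so the
 * input is walked exactly once and never copied.
 */
int base64dec_flag(const char *in, size_t ilen, char *out, size_t olen, int url)
{
	unsigned int acc = 0;  /* pending bits; never more than 13 */
	int nbits = 0;
	size_t i, pos = 0;

	for (i = 0; i < ilen; i++) {
		char c = in[i];
		int v;

		if (url && c == '-')
			c = '+';
		else if (url && c == '_')
			c = '/';

		if (c >= 'A' && c <= 'Z')
			v = c - 'A';
		else if (c >= 'a' && c <= 'z')
			v = c - 'a' + 26;
		else if (c >= '0' && c <= '9')
			v = c - '0' + 52;
		else if (c == '+')
			v = 62;
		else if (c == '/')
			v = 63;
		else if (c == '=' && !url)
			break;       /* explicit padding: stop decoding */
		else
			return -1;   /* invalid character */

		acc = ((acc << 6) | v) & 0x3fff;
		nbits += 6;
		if (nbits >= 8) {
			nbits -= 8;
			if (pos >= olen)
				return -1;
			out[pos++] = (acc >> nbits) & 0xff;
		}
	}
	return pos;  /* an unpadded tail simply yields fewer output bytes */
}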
-- 
Moemen
>From cf0a43dab4f5f88ddf5e5e736127132721b7f18e Mon Sep 17 00:00:00 2001
From: Moemen MHEDHBI 
Date: Thu, 1 Apr 2021 20:53:59 +0200
Subject: [PATCH 1/2] MINOR: sample: add ub64dec and ub64enc converters

ub64dec and ub64enc are the base64url equivalents of the b64dec and
base64 converters. base64url encoding is the "URL and Filename Safe
Alphabet" variant of base64 encoding. It is also used in the JWT (JSON
Web Token) standard.
The RFC1421 mention in the base64.c file is deprecated, so it was
replaced with RFC4648, to which the existing converters, base64/b64dec,
still apply.

Example:
  HAProxy:
http-request return content-type text/plain lf-string %[req.hdr(Authorization),word(2,.),ub64dec]
  Client:
TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoiZm9vIiwia2V5IjoiY2hhZTZBaFhhaTZlIn0.5VsVj7mdxVvo1wP5c0dVHnr-S_khnIdFkThqvwukmdg
$ curl -H "Authorization: Bearer ${TOKEN}" http://haproxy.local
{"user":"foo","key":"chae6AhXai6e"}
---
 doc/configuration.txt| 12 ++
 include/haproxy/base64.h |  2 +
 reg-tests/sample_fetches/ubase64.vtc | 45 +++
 src/base64.c | 64 +++-
 src/sample.c | 38 +
 5 files changed, 160 insertions(+), 1 deletion(-)
 create mode 100644 reg-tests/sample_fetches/ubase64.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 7048fb63e..c7fe416e5 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -16393,6 +16393,18 @@ table_trackers(<table>)
   connections there are from a given address for example. See also the
   sc_trackers sample fetch keyword.
 
+ub64dec
+  This converter is the base64url variant of b64dec converter. base64url
+	encoding is the "URL and Filename Safe Alphabet" variant of base64 encoding.
+	It is also the encoding used in JWT (JSON Web Token) standard.
+
+	Example:
+	  # Decoding a JWT payload:
+	  http-request set-var(txn.token_payload) req.hdr(Authorization),word(2,.),ub64dec
+
+ub64enc
+  This converter is the base64url variant of base64 converter.
+
 upper
   Convert a string sample to upper case. This can only be placed after a string
   sample fetch function or after a transformation keyword returning a string

Re: [PATCH] JWT payloads break b64dec convertor

2021-04-05 Thread Moemen MHEDHBI
Thanks Willy and Tim for your feedback.

You can find attached the updated patches with fixed coding style (now
set correctly in my editor), an updated commit message, the doc entry in
sorted order, size_t instead of int in both enc/dec, and a corresponding
reg-test.

The only part that is unclear:
On 02/04/2021 15:04, Tim Düsterhus wrote:
>> +int base64urldec(const char *in, size_t ilen, char *out, size_t olen) {
>> +char conv[ilen+2];
>
> This looks like a remotely triggerable stack overflow.

You mean in case ilen is too big? In such a case, should we rather use
dynamic allocation?
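
For context, a hedged illustration (not code from the patch) of what
makes a request-sized VLA remotely triggerable:

#include <stddef.h>
#include <string.h>

/* Hypothetical example: 'ilen' derives from client-supplied data, so a
 * variable-length array sized from it lets the client choose the stack
 * reservation. A large enough value moves the stack pointer past the
 * guard page, and the copy then writes into whatever memory lies
 * beyond it.
 */
void decode_untrusted(const char *in, size_t ilen)
{
	char conv[ilen + 2];    /* attacker-sized stack allocation */

	memcpy(conv, in, ilen); /* may land far outside the stack */
}

Bounding ilen, or taking the storage from the heap or a trash chunk
instead, removes that control over the stack.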

-- 
Moemen MHEDHBI
>From bae8d3890be6d2f5a58697bf7b8e9f01f4589d3b Mon Sep 17 00:00:00 2001
From: Moemen MHEDHBI 
Date: Thu, 1 Apr 2021 20:53:59 +0200
Subject: [PATCH 1/2] MINOR: sample: add ub64dec and ub64enc converters

ub64dec and ub64enc are the base64url equivalents of the b64dec and
base64 converters. base64url encoding is the "URL and Filename Safe
Alphabet" variant of base64 encoding. It is also used in the JWT (JSON
Web Token) standard.
The RFC1421 mention in the base64.c file is deprecated, so it was
replaced with RFC4648, to which the existing converters, base64/b64dec,
still apply.

Example:
  HAProxy:
http-request return content-type text/plain lf-string %[req.hdr(Authorization),word(2,.),ub64dec]
  Client:
TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoiZm9vIiwia2V5IjoiY2hhZTZBaFhhaTZlIn0.5VsVj7mdxVvo1wP5c0dVHnr-S_khnIdFkThqvwukmdg
$ curl -H "Authorization: Bearer ${TOKEN}" http://haproxy.local
{"user":"foo","key":"chae6AhXai6e"}
---
 doc/configuration.txt| 12 ++
 include/haproxy/base64.h |  2 +
 reg-tests/sample_fetches/ubase64.vtc | 24 +++
 src/base64.c | 59 +++-
 src/sample.c | 38 ++
 5 files changed, 134 insertions(+), 1 deletion(-)
 create mode 100644 reg-tests/sample_fetches/ubase64.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 7048fb63e..c7fe416e5 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -16393,6 +16393,18 @@ table_trackers(<table>)
   connections there are from a given address for example. See also the
   sc_trackers sample fetch keyword.
 
+ub64dec
+  This converter is the base64url variant of b64dec converter. base64url
+	encoding is the "URL and Filename Safe Alphabet" variant of base64 encoding.
+	It is also the encoding used in JWT (JSON Web Token) standard.
+
+	Example:
+	  # Decoding a JWT payload:
+	  http-request set-var(txn.token_payload) req.hdr(Authorization),word(2,.),ub64dec
+
+ub64enc
+  This converter is the base64url variant of base64 converter.
+
 upper
   Convert a string sample to upper case. This can only be placed after a string
   sample fetch function or after a transformation keyword returning a string
diff --git a/include/haproxy/base64.h b/include/haproxy/base64.h
index 1756bc058..532c46a44 100644
--- a/include/haproxy/base64.h
+++ b/include/haproxy/base64.h
@@ -17,7 +17,9 @@
 #include 
 
 int a2base64(char *in, int ilen, char *out, int olen);
+int a2base64url(char *in, size_t ilen, char *out, size_t olen);
 int base64dec(const char *in, size_t ilen, char *out, size_t olen);
+int base64urldec(const char *in, size_t ilen, char *out, size_t olen);
 const char *s30tob64(int in, char *out);
 int b64tos30(const char *in);
 
diff --git a/reg-tests/sample_fetches/ubase64.vtc b/reg-tests/sample_fetches/ubase64.vtc
new file mode 100644
index 0..a273321d2
--- /dev/null
+++ b/reg-tests/sample_fetches/ubase64.vtc
@@ -0,0 +1,24 @@
+varnishtest "ub64dec sample fetch test"
+
+#REQUIRE_VERSION=2.4
+
+feature ignore_unknown_macro
+
+haproxy h1 -conf {
+defaults
+mode http
+timeout connect 1s
+timeout client  1s
+timeout server  1s
+
+frontend fe
+bind "fd@${fe}"
+http-request return content-type text/plain hdr encode %[hdr(input),ub64enc] lf-string %[req.hdr(Authorization),word(2,.),ub64dec]
+} -start
+
+client c1 -connect ${h1_fe_sock} {
+txreq -url "/" -hdr "input: biduule" -hdr "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoiZm9vIiwia2V5IjoiY2hhZTZBaFhhaTZlIn0.5VsVj7mdxVvo1wP5c0dVHnr-S_khnIdFkThqvwukmdg"
+rxresp
+expect resp.http.encode == "YmlkdXVsZQ"
+expect resp.body == "{\"user\":\"foo\",\"key\":\"chae6AhXai6e\"}"
+} -run
diff --git a/src/base64.c b/src/base64.c
index 53e4d65b2..c53c8b076 100644
--- a/src/base64.c
+++ b/src/base64.c
@@ -1,5 +1,5 @@
 /*
- * ASCII <-> Base64 conversion as described in RFC1421.
+ * ASCII <-> Base64 conversion as described in RFC4648.
  *
  * Copyright 2006-2010 Willy Tarreau 
  * Copyright 2009-2010 Krzysztof Piotr Oledzki 
@@ -138,6 +138,63 @@ int base64dec(const char *in, size_t ilen, char

Re: [PATCH] JWT payloads break b64dec convertor

2021-04-01 Thread Moemen MHEDHBI
> On Mon, May 28, 2018 at 01:43:41PM +0100, Jonathan Matthews wrote:
>> Improvements and suggestions welcome; flames and horror -> /dev/null ;-)
>
> Would anyone be interested in adding two new converters for this,
> working exactly like base64/b64dec but with the URL-compatible
> base64 encoding instead ? We could call them :
>
>  u64dec
>  u64enc
>
> Note that it can be a nice and useful exercise for a first-time
> contribution, don't be ashamed guys!
>
> Willy


Hi,
I have come across the same use case as Jonathan, so I gave it a try
and implemented the converters for the base64url variant.

- Regarding the converter names, I have just prefixed with "u" and used
ubase64/ub64dec. Let me know if the names are not appropriate, or if you
would rather add an argument to the existing converters.

- The RFC1421 mention in base64.c is deprecated, so I replaced it with
RFC4648, to which the base64/b64dec converters seem to still apply.

- I am not sure if the samples list in sample.c should be
formatted/aligned after the converters added in this patch; it seemed to
be already not completely aligned anyway. I have done the aligning in a
separate patch, so you can squash it or drop it at your convenience.

Testing Example:
 haproxy.cfg:
   http-request return content-type text/plain lf-string %[req.hdr(Authorization),word(2,.),ub64dec]

 client:
   TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoiZm9vIiwia2V5IjoiY2hhZTZBaFhhaTZlIn0.5VsVj7mdxVvo1wP5c0dVHnr-S_khnIdFkThqvwukmdg
   $ curl -H "Authorization: Bearer ${TOKEN}" 127.0.0.1:8080
   {"user":"foo","key":"chae6AhXai6e"}


-- 
Moemen MHEDHBI
>From e599ada315d01513e21f11cdff176cff1639b25c Mon Sep 17 00:00:00 2001
From: Moemen MHEDHBI 
Date: Thu, 1 Apr 2021 20:53:59 +0200
Subject: [PATCH 1/2] MINOR: sample: add ub64dec and ubase64 converters

ub64dec and ubase64 are the base64url equivalents of the b64dec and
base64 converters. base64url encoding is the "URL and Filename Safe
Alphabet" variant of base64 encoding. It is also used in the JWT (JSON
Web Token) standard.
---
 doc/configuration.txt| 11 
 include/haproxy/base64.h |  2 ++
 src/base64.c | 54 +++-
 src/sample.c | 38 
 4 files changed, 104 insertions(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 7048fb63e..10098adef 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -15494,6 +15494,17 @@ base64
   transfer binary content in a way that can be reliably transferred (e.g.
   an SSL ID can be copied in a header).
 
+ub64dec
+  This converter is the base64url variant of b64dec converter. base64url
+	encoding is the "URL and Filename Safe Alphabet" variant of base64 encoding.
+	It is also the encoding used in JWT (JSON Web Token) standard.
+
+	Example:
+	  http-request set-var(txn.token_payload) req.hdr(Authorization),word(2,.),ub64dec
+
+ubase64
+  This converter is the base64url variant of base64 converter.
+
 bool
   Returns a boolean TRUE if the input value of type signed integer is
   non-null, otherwise returns FALSE. Used in conjunction with and(), it can be
diff --git a/include/haproxy/base64.h b/include/haproxy/base64.h
index 1756bc058..aea4e7a73 100644
--- a/include/haproxy/base64.h
+++ b/include/haproxy/base64.h
@@ -18,6 +18,8 @@
 
 int a2base64(char *in, int ilen, char *out, int olen);
 int base64dec(const char *in, size_t ilen, char *out, size_t olen);
+int a2base64url(char *in, int ilen, char *out, int olen);
+int base64urldec(const char *in, size_t ilen, char *out, size_t olen);
 const char *s30tob64(int in, char *out);
 int b64tos30(const char *in);
 
diff --git a/src/base64.c b/src/base64.c
index 53e4d65b2..38902523f 100644
--- a/src/base64.c
+++ b/src/base64.c
@@ -1,5 +1,5 @@
 /*
- * ASCII <-> Base64 conversion as described in RFC1421.
+ * ASCII <-> Base64 conversion as described in RFC4648.
  *
  * Copyright 2006-2010 Willy Tarreau 
  * Copyright 2009-2010 Krzysztof Piotr Oledzki 
@@ -138,6 +138,58 @@ int base64dec(const char *in, size_t ilen, char *out, size_t olen) {
 	return convlen;
 }
 
+/* url variant of a2base64 */
+int a2base64url(char *in, int ilen, char *out, int olen){
+	int convlen;
+	convlen = a2base64(in,ilen,out,olen);
+	while (out[convlen-1]=='='){
+		convlen--;
+		out[convlen]='\0';
+	}
+	for(int i=0;i<convlen;i++){
+		if (out[i]=='+')
+			out[i]='-';
+		else if (out[i]=='/')
+			out[i]='_';
+	}
+	return convlen;
+}
[...]
+static int sample_conv_base64url2bin(const struct arg *arg_p, struct sample *smp, void *private)
+{
+	struct buffer *trash = get_trash_chunk();
+	int bin_len;
+
+	trash->data = 0;
+	bin_len = base64urldec(smp->data.u.str.area, smp->data.u.str.data,
+			trash->area, trash->size);
+	if (bin_len < 0)
+		return 0;
+
+	trash->data = bin_len;
+	smp->data.u.str = *trash;
+	smp->data.type = SMP_T_BIN;
+	smp->flags &= ~SMP_F_CONST;
+	return 1;
+}
+
 static int sample_conv_bin2base64(const struct arg *arg_p, struct sample *smp, void *private)
 {
 	struct buffer *trash = get_trash_chunk();
@@ -1585,6 +1603,24 @@ static int sample_conv_bin2base64(const struct 

Re: [RFC] Add weights to kubernetes-ingress

2020-03-20 Thread Moemen MHEDHBI
Hey Willy

On 20/03/2020 12:02, Willy Tarreau wrote:
> Hi Moemen,
>
> On Thu, Mar 19, 2020 at 06:47:42PM +0100, Moemen MHEDHBI wrote:
>> This ML is the right place to contribute to the HAProxy software, but
>> for the ingress controller better do this by creating an issue in the
>> github project.
>>
>> It isn't your fault anyway; we have updated the contribute/discussion
>> sections of the README in order to provide clear information.
> Please note that even though a number of us don't really understand the
> details in all of this, I'm pretty sure that another large part is very
> much interested in following such proposals, and there's more real users
> exposure here than in an issue tracker (which is also why we automatically
> forward the pull requests here). So as long as these ones do not represent
> 80% of the traffic here, I think it's fine and beneficial to let
> contributors share their thoughts here, even if the controller technically
> is a distinct project. But I could be wrong and if some people disagree
> with me, do not hesitate to blame me.
>
> Cheers
> Willy

Since we handle contributions via github pull requests (for tracking,
referencing and searching purposes), I thought in the first place that
we had better avoid having the same discussion scattered across
different places.

I think that if we have all the ingress controller discussion here, it
will probably bring noise, HAProxy being the main subject of the ML.
On the other hand, preventing any controller-related discussion here is
counterproductive, so in my opinion any features/contributions to the
controller that also deal closely with HAProxy are probably better
discussed here.

Regarding what David is talking about, it is mostly related to
Kubernetes internals (pods, annotations, Endpoints, etc), which is why I
suggested we talk about it on github, where other similar discussions
take place. But I may be wrong, and on this ML there may be people who
have something to say about this post; in that case they should of
course feel free to answer here.

-- 
Moemen MHEDHBI





Re: [RFC] Add weights to kubernetes-ingress

2020-03-19 Thread Moemen MHEDHBI
Hi David,

On 18/03/2020 18:21, David Spitzer-Dulagan wrote:
>
> Hi,
>
> If this isn't the right place please let me know - after reading the
> "contributing" page that was linked in the readme of
> kubernetes-ingress I thought that this mailing list might be my best
> bet to ask for your input.
>
Thanks for your interest in the project.

This ML is the right place to contribute to the HAProxy software, but
for the ingress controller better do this by creating an issue in the
github project.

It isn't your fault anyway; we have updated the contribute/discussion
sections of the README in order to provide clear information.

> I wanted to know if the following feature would be something you'd be
> interested in seeing in the official kubernetes-ingress controller:
> The ability to add an annotation to a pod that would translate into a
> weight in the backend wherever that pod is used.
>
We'd be interested to know more about the use case.

In a standard use case, pods are ephemeral and are not expected to
carry specific information. Thus they are easily replaceable, scalable,
etc.

So typical questions:
- What kind of pods are these? Static pods, or pods created via a
Service?
- How are they configured (the annotation part in this case)?

So don't hesitate to give us more context about this in the github issue.


> Our company currently has the need for setting weights for specific
> pods and there is no way that I could find to achieve that with the
> current kubernetes-ingress controller.
>
> The implementation would be as follows:
>
>   * Cache all Pod objects and subscribe to changes on all namespaces
> similar to the other observed k8s objects in the controller
>   * Extend EndpointIP in types.go to include the targetRef property of
> the k8s object
>   * handleEndpointIP in controller.go would be extended to look up the
> pod from the cache via the targetRef property and set the weight
> if found
>   * Changes to the weight in the Pod would trigger a reload as well
>
> I've started working on a patch for kubernetes-ingress to implement
> that. I'd love your thoughts on this.
>
> David
>
>
The general idea makes sense, here are some notes:

- Caching pod objects, tracking them and syncing the corresponding
configuration is a significant amount of work which for now seems to be
useful only for an edge case. That's why we would probably prefer having
a CLI arg to activate this.
- No need for a reload, since we can set a server weight via the Runtime
API (for example, we already do this to change a server address via
c.NativeAPI.Runtime.SetServerAddr)

My coworkers will probably have other thoughts to add once the github
issue is created.

-- 
Moemen MHEDHBI



Re: Question about httplog and backend prot

2019-05-24 Thread Moemen MHEDHBI


On 19/05/2019 00:28, Aleksandar Lazic wrote:
> Hi.
>
> I have the following setup
>
> ```
> frontend public_ssl
>
> bind :::443 v4v6
>
> option tcplog
>
> tcp-request inspect-delay 5s
> tcp-request content capture req.ssl_sni len 25
> tcp-request content accept if { req.ssl_hello_type 1 }
>   
> # https://www.haproxy.com/blog/introduction-to-haproxy-maps/
> use_backend %[req.ssl_sni,lower,map(/usr/local/etc/haproxy/tcp-domain2backend-map.txt)]
>
> default_backend be_sni
>
> backend be_sni
>   server fe_sni 127.0.0.1:10444 weight 10 send-proxy-v2-ssl-cn
>
> frontend https-in
>
> # terminate ssl
> bind 127.0.0.1:10444 accept-proxy ssl strict-sni alpn h2,http/1.1 crt /usr/local/etc/haproxy-certs
>
> mode http
> option forwardfor
> option httplog
> option http-use-htx
> option http-ignore-probes
>
> # https://www.haproxy.com/blog/introduction-to-haproxy-maps/
> use_backend %[req.hdr(host),lower,map(/usr/local/etc/haproxy/http-domain2backend-map.txt)]
>
> #-
> #  backends
> #-
>
> backend nextcloud-backend
> mode http
> option http-use-htx
> option httpchk GET / HTTP/1.1\r\nHost:\ cloud.Domain.com
> server short-cloud 127.0.0.1:81 check
> ```
>
> I know that the backend can't handle h2.
> The log line looks like this.
>
> ```
> :::Client-IP:4552 [18/May/2019:18:21:33.886] https-in~
> nextcloud-backend/short-cloud 0/0/0/53/53 200 691 - -  21/3/0/0/0 0/0 "GET
> /ocs/v2.php/apps/notifications/api/v2/notifications HTTP/2.0"
> ```
>
> What variable can I use for the log to see which protocol is used for the
> backend, as with htx the frontend can have a different http proto than the
> backend?
>
> I haven't seen any variable in the custom log fields which reflects the 
> backend
> protocol.
> https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#8.2.4
>
> Best regards
> Aleks

Hey Aleksandar,

Is "bc_http_major" what you are looking for?
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.3.3-bc_http_major

But in this case it will always be HTTP/1, since there is no alpn
directive on the server line.

Unless you're looking for the http protocol on the frontend side, which
can be fetched with fc_http_major.

Anyway you can use the following log-line to see both:

    log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r proto_frontend:%[fc_http_major] proto_backend:%[bc_http_major]"

-- 
Moemen 





Re: Allowing more codes for `errorfile` (like 404) (that can be later re-used with `http-request deny deny_status 404`)

2019-02-10 Thread Moemen MHEDHBI
Hi Ciprian,

On 09/02/2019 11:11, Ciprian Dorin Craciun wrote:
> First of all I understand that the `errorfile` (and related
> `errorloc`) are for HAProxy's own generated errors.
>
> However given how powerful the ACL system is, and the availability of
> `http-request deny deny_status `, one can leverage all this and
> implement a powerful WAF.
>
> For example last week I tried to "hide" some URL's and pretend they
> are 404's directly from HAProxy (as it seems that the backend server I
> was using doesn't support this feature...)  However when I tried to
> use `deny_status 404` it actually replied with a 403 (and the custom
> page for that error).  Now this is not a big issue, however it might
> give an "attacker" a hint that "something" is at that URL...
>
> 
>
> Therefore I took a look at the `errorfile` related code and how it is
> currently implemented:
>
> 
> https://github.com/haproxy/haproxy/blob/61ea7dc005bc490ed3e5298ade3d932926fdb9f7/include/common/http.h#L82-L96
> 
> https://github.com/haproxy/haproxy/blob/61ea7dc005bc490ed3e5298ade3d932926fdb9f7/src/http.c#L218-L231
> 
> https://github.com/haproxy/haproxy/blob/61ea7dc005bc490ed3e5298ade3d932926fdb9f7/src/http.c#L233
> 
> https://github.com/haproxy/haproxy/blob/06f5b6435ba99b7a6a034d27b56192e16249f6f0/src/http_rules.c#L106-L111
> 
> https://github.com/haproxy/haproxy/blob/21c741a665f4c7b354e961267824c81ec44f503f/include/types/proxy.h#L408
>
> At a first glance it implements a sort of "associative-array"
> structure, of fixed `HTTP_ERR_SIZE`, with the actual codes being
> statically mapped to indices via `http_err_codes`.
>
> There is a global default array `http_err_msgs` (plus the
> `http_err_chunks` counterpart) and a per-proxy embedded array
> `proxy->errmsg`, both pre-allocated of size `HTTP_ERR_SIZE`.
>
> Therefore adding support for a new status code is quite trivial, just
> create the relevant entries in `http_err_codes` and `http_err_msgs`
> and a new `HTTP_ERR_*` entry.  The downside is minimal, just a slight
> increase of `sizeof (struct buffer)` bytes (32) globally and one for
> each proxy, and almost no run-time impact in terms of CPU.
>
> Thus I would suggest adding at least the following:
> * 404 -- not-found -- with an obvious use-case;
> * 402 -- payment-required -- which might be used as a "soft" 403
> alternative, which suggests that although you are "authenticated" and
> issued a valid request, you are not-allowed because of some
> "accounting" reason;
> * 406 -- not-acceptable -- although used for content negotiation, it
> could be "abused" as a page indicating to the user that his "browser"
> (or agent) is not "compatible" with the generated content;
> * 409/410 -- conflict/gone -- alternatives for 403/404 if the user has
> need for them;
> * 451 -- unavailable-for-legal-reasons -- perhaps useful in these GDPR-days?
> * 501 -- not-implemented -- useful perhaps to "hide" some
> not-yet-published API endpoints, or other backend-related
> "blacklisting";
>
> 
>
> However my proposal would be to allow "user-defined" error codes, the
> main use-case being WAF/CDN custom status codes
> (https://support.cloudflare.com/hc/en-us/sections/200820298-Error-Pages).
>
> Such a change shouldn't be that involved, although for efficiency the
> `proxy->errmsg` should be transformed from an embedded array to an
> array pointer of variable size.
>
> Would such a feature be accepted by the HAProxy developers?

I think the reason for making HAProxy capable of returning only a fixed
number of HTTP status codes is to avoid confusion with status codes
returned by web servers.

For example, it is not the role of a reverse proxy to fetch a web
resource, so returning "404 Not found" won't make much sense and will
make debugging harder when trying to identify where the 404 originated.

That being said, it is still very useful to be able to make a reverse
proxy send any desired response. You can do that by hard-coding the
response you want to send in the errorfile. Example:

errorfile 400 /tmp/400
http-request deny deny_status 400 if { path_beg /test }

Then in your /tmp/400:

HTTP/1.1 404 Not Found
Content-Type: text/html
Content-Length: 128
Connection: Closed


<html>
<head><title>404 Not Found</title></head>
<body>
<h1>404 Not Found</h1>
</body>
</html>

This way you will be able to send a 404 to the client but you will see
400 in the logs.


> 
>
> Moreover, dare I say, this feature could be "abused" to serve a few
> "static files" (like `favicon.ico` or `robots.txt`) directly from
> HAProxy without requiring Lua.  In fact the most viewed topic on
> HAProxy's forum is exactly about this:
>
>   
> https://discourse.haproxy.org/t/how-do-i-serve-a-single-static-file-from-haproxy/32
>
> Ciprian.
>
HAProxy provides a cache, which was designed for caching small objects
(favicons, css...), so this may be what you are looking for.


-- 
Moemen MHEDHBI


[PATCH] MINOR: sample: add ssl_sni_check converter

2018-12-23 Thread Moemen MHEDHBI
Hi,

The attached patch adds the ssl_sni_check converter which returns true
if the sample input string matches a loaded certificate's CN/SAN.

This can be useful to check for example if a host header matches a
loaded certificate CN/SAN before doing a redirect:

frontend fe_main
  bind 127.0.0.1:80
  bind 127.0.0.1:443 ssl crt /etc/haproxy/ssl/
  http-request redirect scheme https if !{ ssl_fc } { hdr(host),ssl_sni_check() }


This converter may be even more useful once certificates can be
added/removed at runtime.

++

-- 
Moemen MHEDHBI
>From 14ed628ab9badbb06c45bab324eb00f998de49af Mon Sep 17 00:00:00 2001
From: Moemen MHEDHBI 
Date: Sun, 23 Dec 2018 20:50:04 +0100
Subject: [PATCH] MINOR: sample: add ssl_sni_check converter

This adds the ssl_sni_check converter. The converter returns
true if the sample input string matches a loaded certificate's CN/SAN.
Lookup can be done through certificates of a specified bind line (by
<name>), otherwise the search will include all bind lines of the current
proxy.
---
 doc/configuration.txt |  6 ++
 src/ssl_sock.c| 43 +++
 2 files changed, 49 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 6ca63d64a..0be043e73 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -13651,6 +13651,12 @@ sha1
   Converts a binary input sample to a SHA1 digest. The result is a binary
   sample with length of 20 bytes.
 
+ssl_sni_check(<name>)
+  Returns true if the sample input string matches a loaded certificate's CN/SAN.
+  Otherwise false is returned. When <name> is provided, the lookup is done only
+  through the certificates of the bind line named <name>; if not, all bind
+  lines of the current frontend will be searched.
+
 strcmp(<var>)
   Compares the contents of <var> with the input value of type string. Returns
   the result as a signed integer compatible with strcmp(3): 0 if both strings
diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 282b85ddd..b24d78978 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -7276,6 +7276,41 @@ smp_fetch_ssl_c_verify(const struct arg *args, struct sample *smp, const char *k
 	return 1;
 }
 
+/* boolean, returns true if input string matches a loaded certificate's CN/SAN. */
+/* The lookup is done only for the bind named <name> if the param is provided. */
+static int smp_conv_ssl_sni_check(const struct arg *args, struct sample *smp, void *private)
+{
+	struct proxy *px = smp->px;
+	struct listener *l;
+	struct ebmb_node *node = NULL;
+	char *wildp = NULL;
+	int i;
+
+	for (i = 0; i < trash.size && i < smp->data.u.str.data; i++) {
+		trash.area[i] = tolower(smp->data.u.str.area[i]);
+		if (!wildp && (trash.area[i] == '.'))
+			wildp = &trash.area[i];
+	}
+	trash.area[i] = 0;
+
+	list_for_each_entry(l, &px->conf.listeners, by_fe) {
+		if ( args->type == ARGT_STR && l->name && (strcmp(args->data.str.area, l->name) != 0))
+			continue;
+		/* lookup in full qualified names */
+		node = ebst_lookup(&l->bind_conf->sni_ctx, trash.area);
+		/* lookup in wildcards names */
+		if (!node && wildp)
+			node = ebst_lookup(&l->bind_conf->sni_w_ctx, wildp);
+		if (node != NULL)
+			break;
+	}
+
+	smp->data.type = SMP_T_BOOL;
+	smp->data.u.sint = !!node;
+	smp->flags = SMP_F_VOL_TEST;
+	return 1;
+}
+
 /* parse the "ca-file" bind keyword */
 static int ssl_bind_parse_ca_file(char **args, int cur_arg, struct proxy *px, struct ssl_bind_conf *conf, char **err)
 {
@@ -9047,6 +9082,14 @@ static struct sample_fetch_kw_list sample_fetch_keywords = {ILH, {
 
 INITCALL1(STG_REGISTER, sample_register_fetches, &sample_fetch_keywords);
 
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct sample_conv_kw_list sample_conv_kws = {ILH, {
+	{ "ssl_sni_check", smp_conv_ssl_sni_check, ARG1(0,STR), NULL, SMP_T_STR, SMP_T_BOOL },
+	{ /* END */ },
+}};
+
+INITCALL1(STG_REGISTER, sample_register_convs, &sample_conv_kws);
+
 /* Note: must not be declared <const> as its list will be overwritten.
  * Please take care of keeping this list alphabetically sorted.
  */
-- 
2.19.2



Re: OCSP stapling with multiple domains

2018-11-28 Thread Moemen MHEDHBI
@list: sorry for the incorrect subject in my previous answer. At some
point the subject changed when the email was saved and encrypted in
Drafts.

On 28/11/2018 18:59, Moemen MHEDHBI wrote:
> Hi Igor,
>
> On 11/27/18 12:48 AM, Igor Cicimov wrote:
>> Hi Moemen,
>>
>> On Tue, Nov 27, 2018 at 1:24 AM Moemen MHEDHBI  wrote:
>>> On 11/14/18 1:34 AM, Igor Cicimov wrote:
>>>
>>> On Sun, Nov 11, 2018 at 2:48 PM Igor Cicimov 
>>>  wrote:
>>>> Hi,
>>>>
>>>> # haproxy -v
>>>> HA-Proxy version 1.8.14-1ppa1~xenial 2018/09/23
>>>> Copyright 2000-2018 Willy Tarreau 
>>>>
>>>> I noticed that in case of multiple domains and OCSP setup:
>>>>
>>>> # ls -1 /etc/haproxy/ssl.d/*.ocsp
>>>> /etc/haproxy/ssl.d/star_domain2_com.crt.ocsp
>>>> /etc/haproxy/ssl.d/star_domain_com.crt.ocsp
>>>> /etc/haproxy/ssl.d/star_domain3_com.crt.ocsp
>>>> /etc/haproxy/ssl.d/star_domain4_com.crt.ocsp
>>>>
>>>> I get OCSP response from haproxy only for one of the domains
>>>> domain.com. Tested via:
>>>>
>>>> $ echo | openssl s_client -connect domain[234].com:443 -tlsextdebug
>>>> -status -servername domain[234].com
>>>>
>>>> Is this expected?
>>> Any comments/ideas regarding this? Further noticed that OCSP code probably 
>>> does not check the certificates SANs and matches only based on the CN in 
>>> the subject since the calls to whatever.domain.tld get stapled but to 
>>> domain.tld do not.
>>>
>>> Hi Igor,
>>>
>>> Testing OCSP on multiple certificates with different domains (based on the 
>>> CN) works correctly for me. (a.domain.com, b.domain.com, c.domain.com)
>>>
>>> Are you using multiple certs with same CN but different SANs ?
>> The certificates belong to completely separate domains, so not
>> subdomains of the same domain like in your case. They are also
>> wildcard certs so here is the layout:
>>
>> # ls -1 /etc/haproxy/ssl.d/
>> star_domain1_com.crt
>> star_domain1_com.crt.ocsp
>> star_domain2_com.crt
>> star_domain2_com.crt.ocsp
>> star_domain3_com.crt
>> star_domain3_com.crt.ocsp
>>
>> # for i in `ls -1 /etc/haproxy/ssl.d/*.crt`; do openssl x509 -noout
>> -subject -in $i; done
>> subject= /C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain1.com
>> subject= /C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain2.com
>> subject= /C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain3.com
>>
>> The SAN only contains the certificates domain and nothing else, for
>> example for domain3.com:
>>
>> X509v3 Subject Alternative Name:
>> DNS:*.domain3.com, DNS:domain3.com
>>
>> The haproxy bind line in the frontend looks like:
>>
>>  bind *:443 ssl crt /etc/haproxy/ssl.d/ ...
>>
>> And here is the output of the daily cronjob that updates the OCSP for 
>> haproxy:
>>
>> Date: Mon, 26 Nov 2018 05:00:01 + (GMT)
>>
>> /etc/haproxy/ssl.d/star_domain1_com.crt: good
>> This Update: Nov 25 17:39:11 2018 GMT
>> Next Update: Dec  2 16:54:11 2018 GMT
>> OCSP Response updated!
>> /etc/haproxy/ssl.d/star_domain2_com.crt: good
>> This Update: Nov 24 20:49:57 2018 GMT
>> Next Update: Dec  1 20:04:57 2018 GMT
>> OCSP Response updated!
>> /etc/haproxy/ssl.d/star_domain3_com.crt: good
>> This Update: Nov 25 14:09:00 2018 GMT
>> Next Update: Dec  2 13:24:00 2018 GMT
>> OCSP Response updated!
>>
>> I can confirm this is working as intended on other serves I have with
>> 1.7.11 and 1.8.14, so it must be something specific to this one that I
>> struggle to understand (to be even more confusing it is all being
>> setup by Ansible in same way as everywhere else).
>>
>> Under what circumstances would a setup like this not work in terms of
>> OCSP? Example:
>>
>> $ echo | openssl s_client -connect server:443 -tlsextdebug -status
>> -servername domain1.com | grep -E 'OCSP|domain1'
>> depth=0 C = AU, ST = New South Wales, L = Sydney, O = My Company, CN =
>> *.domain1.com
>> verify return:1
>> DONE
>> OCSP response: no response sent
>>  0 s:/C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain1.com
>> subject=/C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain1.com
>>
>> Thanks for your input by the way, very much appreciated.
>
> If I am understanding this correctly when 

Encrypted Message

2018-11-28 Thread Moemen MHEDHBI
Hi Igor,

On 11/27/18 12:48 AM, Igor Cicimov wrote:
> Hi Moemen,
>
> On Tue, Nov 27, 2018 at 1:24 AM Moemen MHEDHBI  wrote:
>> On 11/14/18 1:34 AM, Igor Cicimov wrote:
>>
>> On Sun, Nov 11, 2018 at 2:48 PM Igor Cicimov 
>>  wrote:
>>> Hi,
>>>
>>> # haproxy -v
>>> HA-Proxy version 1.8.14-1ppa1~xenial 2018/09/23
>>> Copyright 2000-2018 Willy Tarreau 
>>>
>>> I noticed that in case of multiple domains and OCSP setup:
>>>
>>> # ls -1 /etc/haproxy/ssl.d/*.ocsp
>>> /etc/haproxy/ssl.d/star_domain2_com.crt.ocsp
>>> /etc/haproxy/ssl.d/star_domain_com.crt.ocsp
>>> /etc/haproxy/ssl.d/star_domain3_com.crt.ocsp
>>> /etc/haproxy/ssl.d/star_domain4_com.crt.ocsp
>>>
>>> I get OCSP response from haproxy only for one of the domains
>>> domain.com. Tested via:
>>>
>>> $ echo | openssl s_client -connect domain[234].com:443 -tlsextdebug
>>> -status -servername domain[234].com
>>>
>>> Is this expected?
>> Any comments/ideas regarding this? Further noticed that OCSP code probably 
>> does not check the certificates SANs and matches only based on the CN in the 
>> subject since the calls to whatever.domain.tld get stapled but to domain.tld 
>> do not.
>>
>> Hi Igor,
>>
>> Testing OCSP on multiple certificates with different domains (based on the 
>> CN) works correctly for me. (a.domain.com, b.domain.com, c.domain.com)
>>
>> Are you using multiple certs with same CN but different SANs ?
> The certificates belong to completely separate domains, so not
> subdomains of the same domain like in your case. They are also
> wildcard certs so here is the layout:
>
> # ls -1 /etc/haproxy/ssl.d/
> star_domain1_com.crt
> star_domain1_com.crt.ocsp
> star_domain2_com.crt
> star_domain2_com.crt.ocsp
> star_domain3_com.crt
> star_domain3_com.crt.ocsp
>
> # for i in `ls -1 /etc/haproxy/ssl.d/*.crt`; do openssl x509 -noout
> -subject -in $i; done
> subject= /C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain1.com
> subject= /C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain2.com
> subject= /C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain3.com
>
> The SAN only contains the certificates domain and nothing else, for
> example for domain3.com:
>
> X509v3 Subject Alternative Name:
> DNS:*.domain3.com, DNS:domain3.com
>
> The haproxy bind line in the frontend looks like:
>
>  bind *:443 ssl crt /etc/haproxy/ssl.d/ ...
>
> And here is the output of the daily cronjob that updates the OCSP for haproxy:
>
> Date: Mon, 26 Nov 2018 05:00:01 + (GMT)
>
> /etc/haproxy/ssl.d/star_domain1_com.crt: good
> This Update: Nov 25 17:39:11 2018 GMT
> Next Update: Dec  2 16:54:11 2018 GMT
> OCSP Response updated!
> /etc/haproxy/ssl.d/star_domain2_com.crt: good
> This Update: Nov 24 20:49:57 2018 GMT
> Next Update: Dec  1 20:04:57 2018 GMT
> OCSP Response updated!
> /etc/haproxy/ssl.d/star_domain3_com.crt: good
> This Update: Nov 25 14:09:00 2018 GMT
> Next Update: Dec  2 13:24:00 2018 GMT
> OCSP Response updated!
>
> I can confirm this is working as intended on other serves I have with
> 1.7.11 and 1.8.14, so it must be something specific to this one that I
> struggle to understand (to be even more confusing it is all being
> setup by Ansible in same way as everywhere else).
>
> Under what circumstances would a setup like this not work in terms of
> OCSP? Example:
>
> $ echo | openssl s_client -connect server:443 -tlsextdebug -status
> -servername domain1.com | grep -E 'OCSP|domain1'
> depth=0 C = AU, ST = New South Wales, L = Sydney, O = My Company, CN =
> *.domain1.com
> verify return:1
> DONE
> OCSP response: no response sent
>  0 s:/C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain1.com
> subject=/C=AU/ST=New South Wales/L=Sydney/O=My Company/CN=*.domain1.com
>
> Thanks for your input by the way, very much appreciated.


If I am understanding this correctly, when you use the naked domain
'domain1.com' you don't get an OCSP response (despite the domain being
mentioned in the SAN extension).

Is this the case for all the domains or only one of them? I am asking
because you mentioned multiple domains.
I was testing the same config with HA-Proxy version 1.8.14 2018/09/20
without being able to reproduce this.

$ echo quit | openssl s_client -connect localhost:443 -servername
'example.org' -status | egrep 'OCSP|example'
OCSP response:
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    OCSP Nonce:
subject=/C=FR/ST=PARIS/O=M

Re: haproxy segfaults when clearing the input buffer via LUA

2018-11-26 Thread Moemen MHEDHBI


On 11/20/18 2:25 PM, Christopher Faulet wrote:
> Le 17/11/2018 à 20:42, Willy Tarreau a écrit :
>> Hi Moemen,
>>
>> On Wed, Nov 14, 2018 at 04:07:42PM +0100, Moemen MHEDHBI wrote:
>>> Hi,
>>>
>>> I was playing with LUA, to configure a traffic mirroring behavior.
>>> Basically I wanted HAProxy to send the http response of a request to a
>>> 3rd party before sending the response to the client.
>>>
>>> So this is the stripped down version of the script to reproduce the
>>> segfault with haproxy from the master branch:
>>>
>>> function mirror(txn)
>>>  local in_len = txn.res:get_in_len()
>>>  while in_len > 0 do
>>>  response = txn.res:dup()
>>>  -- sending response to 3rd party.
>>>  txn.res:forward(in_len)
>>>  core.yield()
>>>  in_len = txn.res:get_in_len()
>>>  end
>>> end
>>> core.register_action("mirror", { "http-res" }, mirror)
>>>
>>> Then I use this script via "http-response lua.mirror"
>>>
>>>
>>> I think problem here is that when I forward the response from the input
>>> buffer to the output buffer and hand processing back to HAProxy, the
>>> latter will try to send an invalid http request.
>>>
>>> The request is invalid because HAProxy did not have the opportunity to
>>> check the response and make sure there are valid headers because the
>>> input buffer is empty after the core.yield().
>>>
>>> So I was expecting an error and HAProxy telling me that this is an
>>> invalid request but not a segfault.
>>
>> I can't tell for sure, but I totally agree it should never segfault,
>> so at the very least we're missing a test. However I suspect there
>> is a problem with the presence of the forward() call in your script,
>> because by doing this you're totally bypassing the HTTP engine, so
>> your script was called in an http context, it discretely stole the
>> contents under the blanket, and went back to the http engine saying
>> "I did nothing, it's not me!". The rest of the code continues to
>> process the HTTP contents from the buffer where they are, resulting
>> it quite a big mess. Ideally we should have a way to detect that
>> parts of the buffer were moved on return and immediately send an
>> error there. But there are some cases where it's valid if called
>> using the HTTP API. So I don't know for sure how to detect such
>> anomalies. Maybe buffer contents being smaller than the size of
>> headers known by the parser would already be a good step forward.
>>
>> I remember Thierry recently had to try to strengthen a little bit
>> such use cases where tcp was used from within HTTP. We'll definitely
>> have to figure what the use cases are for this and to find a reliable
>> solution to this because by definition it will not work anymore with
>> HTX.
>>
>>> There are two ways to avoid this by changing the script:
>>>
>>> 1/ Use mode tcp
>>>
>>> 2/ Use "get" and "send" instead of "forward", this way the LUA script
>>> will send the response directly to the client, instead of HAProxy doing
>>> that.
>>
>> It should still cause the same problem which is that the HTTP parser
>> is totally bypassed and what you forward is not HTTP anymore, but bytes
>> from the wire, and that you may even expect that the HTTP parser appends
>> an error at some places and aborts if it discoveres the stream is
>> mangled.
>>
>> I don't know if we can register filters from the Lua, but ideall that's
>> what should be the best option in your case : having a Lua-based filter
>> running on the data part would allow you to intercept the data stream
>> for each chunk decoded by the HTTP parser.
>>
>
> For the record, here is my old reply on a similar issue:
> https://www.mail-archive.com/haproxy@formilux.org/msg29571.html
>
> So, to be safe, don't use get/set/forward/send in HTTP without
> terminating the transaction with txn.done().
>
> The Lua API must definitely be changed to be more restrictive in HTTP.
> When the Lua code is updated to support the HTX representation, I'll
> see with Thierry how to clarify this point.
>

Thank you Willy and Christopher for the clarifications, this is pretty
clear.

-- 
Moemen MHEDHBI




Re: OCSP stapling with multiple domains

2018-11-26 Thread Moemen MHEDHBI

On 11/14/18 1:34 AM, Igor Cicimov wrote:
> On Sun, Nov 11, 2018 at 2:48 PM Igor Cicimov
> <ig...@encompasscorporation.com> wrote:
>
> Hi,
>
> # haproxy -v
> HA-Proxy version 1.8.14-1ppa1~xenial 2018/09/23
> Copyright 2000-2018 Willy Tarreau <wi...@haproxy.org>
>
> I noticed that in case of multiple domains and OCSP setup:
>
> # ls -1 /etc/haproxy/ssl.d/*.ocsp
> /etc/haproxy/ssl.d/star_domain2_com.crt.ocsp
> /etc/haproxy/ssl.d/star_domain_com.crt.ocsp
> /etc/haproxy/ssl.d/star_domain3_com.crt.ocsp
> /etc/haproxy/ssl.d/star_domain4_com.crt.ocsp
>
> I get OCSP response from haproxy only for one of the domains
> domain.com. Tested via:
>
> $ echo | openssl s_client -connect domain[234].com:443 -tlsextdebug
> -status -servername domain[234].com
>
> Is this expected?
>
>
> Any comments/ideas regarding this? Further noticed that OCSP code
> probably does not check the certificates SANs and matches only based
> on the CN in the subject since the calls to whatever.domain.tld get
> stapled but to domain.tld do not.
>
Hi Igor,

Testing OCSP on multiple certificates with different domains (based on
the CN) works correctly for me. (a.domain.com, b.domain.com, c.domain.com)

Are you using multiple certs with same CN but different SANs ?

-- 
Moemen MHEDHBI



haproxy segfaults when clearing the input buffer via LUA

2018-11-14 Thread Moemen MHEDHBI
Hi,

I was playing with LUA, to configure a traffic mirroring behavior.
Basically I wanted HAProxy to send the http response of a request to a
3rd party before sending the response to the client.

So this is the stripped down version of the script to reproduce the
segfault with haproxy from the master branch:

function mirror(txn)
    local in_len = txn.res:get_in_len()
    while in_len > 0 do
    response = txn.res:dup()
    -- sending response to 3rd party.
    txn.res:forward(in_len)
    core.yield()
    in_len = txn.res:get_in_len()
    end
end
core.register_action("mirror", { "http-res" }, mirror)

Then I use this script via "http-response lua.mirror"


I think the problem here is that when I forward the response from the
input buffer to the output buffer and hand processing back to HAProxy,
the latter will try to send an invalid http request.

The request is invalid because HAProxy did not have the opportunity to
check the response and make sure there are valid headers, since the
input buffer is empty after the core.yield().

So I was expecting an error, with HAProxy telling me that this is an
invalid request, but not a segfault.

There are two ways to avoid this by changing the script:

1/ Use mode tcp

2/ Use "get" and "send" instead of "forward", this way the LUA script
will send the response directly to the client, instead of HAProxy doing
that.

-- 
Moemen MHEDHBI



[PATCH] DOC: Update configuration doc about the maximum number of, stick counters

2018-09-25 Thread Moemen MHEDHBI
Previous patches added support for tracking up to MAX_SESS_STKCTR stick
counters in the same connection, but without updating the doc; that is
done here.

-- 
Moemen MHEDHBI

>From 30038ba660a784202664fd4253ede15e7a9f8f91 Mon Sep 17 00:00:00 2001
From: Moemen MHEDHBI 
Date: Tue, 25 Sep 2018 17:50:53 +0200
Subject: [PATCH] DOC: Update configuration doc about the maximum number of
 stick counters.

Previous patches added support for tracking up to MAX_SESS_STKCTR stick
counters in the same connection, but without updating the doc; that is
done here.
---
 doc/configuration.txt | 41 -
 1 file changed, 24 insertions(+), 17 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 5042df95f..7bf6889e8 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -4192,9 +4192,11 @@ http-request { allow | auth [realm <realm>] | redirect <rule> | reject |
 
  - { track-sc0 | track-sc1 | track-sc2 } <key> [table <table>] :
   enables tracking of sticky counters from current request. These rules
-  do not stop evaluation and do not change default action. Three sets of
-  counters may be simultaneously tracked by the same connection. The first
-  "track-sc0" rule executed enables tracking of the counters of the
+  do not stop evaluation and do not change default action. The number of
+  counters that may be simultaneously tracked by the same connection is set
+  in MAX_SESS_STKCTR at build time (reported in haproxy -vv) which defaults
+  to 3, so the track-sc number is between 0 and (MAX_SESS_STKCTR-1). The
+  first "track-sc0" rule executed enables tracking of the counters of the
   specified table as the first set. The first "track-sc1" rule executed
   enables tracking of the counters of the specified table as the second
   set. The first "track-sc2" rule executed enables tracking of the
@@ -9274,16 +9276,18 @@ tcp-request connection  [{if | unless} ]
 
 - { track-sc0 | track-sc1 | track-sc2 } <key> [table <table>] :
 enables tracking of sticky counters from current connection. These
-rules do not stop evaluation and do not change default action. 3 sets
-of counters may be simultaneously tracked by the same connection. The
-first "track-sc0" rule executed enables tracking of the counters of the
-specified table as the first set. The first "track-sc1" rule executed
-enables tracking of the counters of the specified table as the second
-set. The first "track-sc2" rule executed enables tracking of the
-counters of the specified table as the third set. It is a recommended
-practice to use the first set of counters for the per-frontend counters
-and the second set for the per-backend ones. But this is just a
-guideline, all may be used everywhere.
+rules do not stop evaluation and do not change default action. The
+number of counters that may be simultaneously tracked by the same
+connection is set in MAX_SESS_STKCTR at build time (reported in
+haproxy -vv) whichs defaults to 3, so the track-sc number is between 0
+and (MAX_SESS_STCKTR-1). The first "track-sc0" rule executed enables
+tracking of the counters of the specified table as the first set. The
+first "track-sc1" rule executed enables tracking of the counters of the
+specified table as the second set. The first "track-sc2" rule executed
+enables tracking of the counters of the specified table as the third
+set. It is a recommended practice to use the first set of counters for
+the per-frontend counters and the second set for the per-backend ones.
+But this is just a guideline, all may be used everywhere.
 
 These actions take one or two arguments :
 <key> is mandatory, and is a sample expression rule as described
@@ -14011,10 +14015,13 @@ sets unless they require some future information. Those generally include
 TCP/IP addresses and ports, as well as elements from stick-tables related to
 the incoming connection. For retrieving a value from a sticky counters, the
 counter number can be explicitly set as 0, 1, or 2 using the pre-defined
-"sc0_", "sc1_", or "sc2_" prefix, or it can be specified as the first integer
-argument when using the "sc_" prefix. An optional table may be specified with
-the "sc*" form, in which case the currently tracked key will be looked up into
-this alternate table instead of the table currently being tracked.
+"sc0_", "sc1_", or "sc2_" prefix. These three pre-defined prefixes can only be
+used if MAX_SESS_STKCTR value does not exceed 3, otherwise the counter number
+can be specified as the first integer argument when using the "sc_" prefix.
+Starting from "sc_0" t

Re: Configuring HAProxy session limits

2018-07-24 Thread Moemen MHEDHBI
Hi Àbéjídé,


On 24/07/2018 17:59, Àbéjídé Àyodélé wrote:
> Hi Friends,
>
> I am trying to bump session limits via the maxconn in the global
> section as
> below:
>
> cat /etc/haproxy/redacted-haproxy.cfg
> global
>   maxconn 1
>   stats socket /var/run/redacted-haproxy-stats.sock user haproxy group
> haproxy
> mode 660 level operator expose-fd listeners
>
> frontend redacted-frontend
>   mode tcp
>   bind :2004
>   default_backend redacted-backend
>
> backend redacted-backend
>   mode tcp
>   balance leastconn
>   hash-type consistent
>
>   server redacted_0 redacted01.qa:8443 
> check agent-check agent-port 8080 weight 100
> send-proxy
>   server redacted-684994ccd-6rn9q 192.168.39.223:8443
>  check port 8443 weight 100
> send-proxy
>   server redacted-684994ccd-c88d9 192.168.46.66:8443
>  check port 8443 weight 100
> send-proxy
>   server redacted-canary-58ccdb7cf4-47f4m 192.168.53.47:8443
>  check port 8443
> weight 100 send-proxy
>
> NOTE: I removed some portion of the config for conciseness sake.
>
> However this did not seem to have any impact on HAProxy after a reload
> as seen
> below:
>
> echo "show stat" | socat
> unix-connect:/var/run/redacted-haproxy-stats.sock stdio
> | cut -d"," -f7
> slim
> 2000
>
>
>
>
> 200

When slim is used in a Frontend line (in your case: redacted-frontend)
it refers to the maxconn of the frontend.
By default, when maxconn is not specified it is equal to 2000:
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-maxconn

When slim is used in a Backend line (in your case: redacted-backend) it
refers to the fullconn param, because backends do not have a maxconn.
The fullconn param is a little bit more complicated to understand than
maxconn. You can find more information about it in the doc:
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-fullconn
or by searching the mailing list history, but most of the time you
don't need to use it.
To understand the 200 value, you need to consider the following
statement from the doc:
> Since it's hard to get this value right, haproxy automatically sets it
to 10% of the sum of the maxconns of all frontends that may branch to
this backend
So 10% of 2000 = 200
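
The limit shown in "slim" can be raised by setting maxconn explicitly
where you want it to apply. A minimal sketch, reusing your proxy names
with illustrative values:

    frontend redacted-frontend
        mode tcp
        bind :2004
        maxconn 10000
        default_backend redacted-backend

    backend redacted-backend
        mode tcp
        balance leastconn
        fullconn 5000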

++
- Moemen.
>
> I do not know where 2000 and 200 are coming from as I did not at any point
> configure that, the maxconn was previously 4096.
>
> A more detailed stats output is below:
>
> echo "show stat" | socat
> unix-connect:/var/run/redacted-haproxy-stats.sock stdio
> #
> pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,agent_status,agent_code,agent_duration,check_desc,agent_desc,check_rise,check_fall,check_health,agent_rise,agent_fall,agent_health,addr,cookie,mode,algo,conn_rate,conn_rate_max,conn_tot,intercepted,dcon,dses,
> redacted-frontend,FRONTEND,,,0,2,2000,3694,0,0,0,0,0,OPEN,1,2,00,3,0,9,,,0,0,0,,,0,0,0,0,tcp,,3,9,3694,,0,0,
> redacted-backend,redacted_0,0,0,0,1,,2,0,0,,0,,0,0,0,0,UP,94,1,0,0,0,1582,0,,1,3,1,,2,,2,0,,1,L4OK,,0,,,0,0,683,,via
> agent : up,0,0,0,0,L7OK,0,50,Layer4 check passed,Layer7 check
> passed,2,3,4,1,1,1,10.185.57.54:8443
> ,,tcp
> redacted-backend,redacted-684994ccd-6rn9q,0,0,0,1,,46,0,0,,0,,0,0,0,0,UP,100,1,0,0,0,1582,0,,1,3,2,,46,,2,0,,1,L4OK,,0,,,0,0,6,,,0,0,0,1Layer4
> check passed,,2,3,4192.168.39.223:8443
> ,,tcp
> redacted-backend,redacted-684994ccd-c88d9,0,0,0,1,,45,0,0,,0,,0,0,0,0,UP,100,1,0,0,0,1582,0,,1,3,3,,45,,2,0,,1,L4OK,,0,,,0,0,12,,,0,0,0,0Layer4
> check passed,,2,3,4192.168.46.66:8443
> ,,tcp
> redacted-backend,redacted-canary-58ccdb7cf4-47f4m,0,0,0,1,,45,0,0,,0,,0,0,0,0,UP,100,1,0,0,0,1582,0,,1,3,4,,45,,2,0,,1,L4OK,,0,,,0,0,10,,,0,0,0,1Layer4
> check passed,,2,3,4192.168.53.47:8443
> ,,tcp
> redacted-backend,BACKEND,0,0,0,2,200,3694,0,0,0,0,,0,0,0,0,UP,394,4,0,,0,1582,0,,1,3,0,,138,,1,3,,9,,0,0,0,0,0,0,6,,,0,0,0,1,,tcp,leastconn,,,
>
> I need guidance on what I need to do to configure session limits
> correctly and
> also make it reflect in the exported metrics.
>
> Thanks!
>
> Abejide Ayodele
> It always seems impossible until it's done. --Nelson Mandela



Question regarding haproxy backend behaviour

2018-04-22 Thread Moemen MHEDHBI
Hi


On 18/04/2018 21:46, Ayush Goyal wrote:
> Hi
>
> Thanks Igor/Moemen for your response. I hadn't considered frontend
> queuing, although I am not sure where to measure it. I have wound down
> the benchmark infrastructure for time being and it would take me some
> time to replicate it again for providing additional stats. In the
> meantime, I am attaching the sample logs of 200 lines for benchmarks
> from 1 of the haproxy server.
>

Sorry for the late reply. In order to explain the stats you were seeing
let us get back to your first question:
>  1. How the nginx_backend connections are being terminated to serve
the new
connections?

As told in the previous answer, the backend connection can be terminated
when the server decides to close the connection, due to an HAProxy
timeout, or when the client terminates the connection.
But in keep-alive mode, when the server closes the connection, HAProxy
won't close the client side connection. So unless the client asks for
closing the connection (in keep-alive the client keeps the connection
open for further requests) you will see more connections on the frontend
side than the backend side.
You can use the "option forceclose" which will ensure that HAProxy
actively closes the connection on both sides after each request and you
will see that the number of frontend and backend connections are closer.
Frontend connections may still be a little higher because, in general
(HAProxy and the servers being in the same site), the latency on the
frontend side is higher than on the backend side.
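
A rough sketch of the suggested change, assuming a defaults section like
yours (note that recent HAProxy versions replaced "option forceclose"
with "option httpclose"):

    defaults
        mode http
        option forceclose
        timeout connect 5s
        timeout client 30s
        timeout server 30s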

> Reading the logs however, I could see that both srv_queue and
> backend_queue are 0. One detail that you may notice reading the logs,
> that I had omitted earlier for sake of simplicity is that nginx_ssl_fe
> frontend is bound on 2 processes to split cpu load. So instead of this:
>>
>>> frontend nginx_ssl_fe
>>>         bind *:8443 ssl 
>>>         maxconn 10
>>>         bind-process 2
>>
> It has
> > bind-process 2 3 
>
> In these logs haproxy ssl_sess_id_router frontend is doing 21k
> frontend connections, and both processes of nginx_ssl_fe are doing
> approx 10k frontend connections for total of ~20k frontend
> connections. This is just one node there are 3 more nodes like this,
> making the frontend connections in the ssl_sess_id_router frontend
> ~63k and ~60k in all frontends for nginx_ssl_fe. The nginx is still
> handling only 32k connections from nginx_backend.
>
> Please let me know if you need more info.
>
> Thanks,
> Ayush Goyal
>    
>  
>
> On Tue, Apr 17, 2018 at 10:03 PM Moemen MHEDHBI <mmhed...@haproxy.com
> <mailto:mmhed...@haproxy.com>> wrote:
>
> Hi
>
>
> On 16/04/2018 12:04, Igor Cicimov wrote:
>>
>>
>> On Mon, 16 Apr 2018 6:09 pm Ayush Goyal <ay...@helpshift.com
>>     <mailto:ay...@helpshift.com>> wrote:
>>
>> Hi Moemen,
>>
>> Thanks for your response. But I think I need to clarify a few
>> things here. 
>>
>> On Mon, Apr 16, 2018 at 4:33 AM Moemen MHEDHBI
>> <mmhed...@haproxy.com <mailto:mmhed...@haproxy.com>> wrote:
>>
>> Hi
>>
>>
>> On 12/04/2018 19:16, Ayush Goyal wrote:
>>> Hi,
>>>
>>> I have a question regarding haproxy backend connection
>>> behaviour. We have following setup:
>>>
>>>   +-+     +---+
>>>   | haproxy |>| nginx |
>>>   +-+     +---+
>>>
>>> We use a haproxy cluster for ssl off-loading and then
>>> load balance request to
>>> nginx cluster. We are currently benchmarking this setup
>>> with 3 nodes for haproxy
>>> cluster and 1 nginx node. Each haproxy node has two
>>> frontend/backend pair. First
>>> frontend is a router for ssl connection which
>>> redistributes request to the second 
>>> frontend in the haproxy cluster. The second frontend is
>>> for ssl handshake and 
>>> routing requests to nginx servers. Our configuration is
>>> as follows:
>>>
>>> ```
>>> global
>>>     maxconn 10
>>>     user haproxy
>>>     group haproxy
>>>     nbproc 2
>>>     cpu-map 1 1
>>>     cpu-map 2 2
>>>
>>

Re: Question regarding haproxy backend behaviour

2018-04-17 Thread Moemen MHEDHBI
Hi


On 16/04/2018 12:04, Igor Cicimov wrote:
>
>
> On Mon, 16 Apr 2018 6:09 pm Ayush Goyal <ay...@helpshift.com
> <mailto:ay...@helpshift.com>> wrote:
>
> Hi Moemen,
>
> Thanks for your response. But I think I need to clarify a few
> things here. 
>
> On Mon, Apr 16, 2018 at 4:33 AM Moemen MHEDHBI
> <mmhed...@haproxy.com <mailto:mmhed...@haproxy.com>> wrote:
>
> Hi
>
>
> On 12/04/2018 19:16, Ayush Goyal wrote:
>> Hi,
>>
>> I have a question regarding haproxy backend connection
>> behaviour. We have following setup:
>>
>>   +-+     +---+
>>   | haproxy |>| nginx |
>>   +-+     +---+
>>
>> We use a haproxy cluster for ssl off-loading and then load
>> balance request to
>> nginx cluster. We are currently benchmarking this setup with
>> 3 nodes for haproxy
>> cluster and 1 nginx node. Each haproxy node has two
>> frontend/backend pair. First
>> frontend is a router for ssl connection which redistributes
>> request to the second 
>> frontend in the haproxy cluster. The second frontend is for
>> ssl handshake and 
>> routing requests to nginx servers. Our configuration is as
>> follows:
>>
>> ```
>> global
>>     maxconn 10
>>     user haproxy
>>     group haproxy
>>     nbproc 2
>>     cpu-map 1 1
>>     cpu-map 2 2
>>
>> defaults
>>     mode http
>>     option forwardfor
>>     timeout connect 5s
>>     timeout client 30s
>>     timeout server 30s
>>     timeout tunnel 30m
>>     timeout client-fin 5s
>>
>> frontend ssl_sess_id_router
>>         bind *:443
>>         bind-process 1
>>         mode tcp
>>         maxconn 10
>>         log global
>>         option tcp-smart-accept
>>         option splice-request
>>         option splice-response
>>         default_backend ssl_sess_id_router_backend
>>
>> backend ssl_sess_id_router_backend
>>         bind-process 1
>>         mode tcp
>>         fullconn 5
>>         balance roundrobin
>>         ..
>>         option tcp-smart-connect
>>         server lbtest01 :8443 weight 1 check send-proxy
>>         server lbtest02 :8443 weight 1 check send-proxy
>>         server lbtest03 :8443 weight 1 check send-proxy
>>
>> frontend nginx_ssl_fe
>>         bind *:8443 ssl 
>>         maxconn 10
>>         bind-process 2
>>         option tcp-smart-accept
>>         option splice-request
>>         option splice-response
>>         option forwardfor
>>         reqadd X-Forwarded-Proto:\ https
>>         timeout client-fin 5s
>>         timeout http-request 8s
>>         timeout http-keep-alive 30s
>>         default_backend nginx_backend
>>
>> backend nginx_backend
>>         bind-process 2
>>         balance roundrobin
>>         http-reuse safe
>>         option tcp-smart-connect
>>         option splice-request
>>         option splice-response
>>         timeout tunnel 30m
>>         timeout http-request 8s
>>         timeout http-keep-alive 30s
>>         server testnginx :80  weight 1 check
>> ```
>>
>> The nginx node has nginx with 4 workers and 8192 max clients,
>> therefore the max
>> number of connection it can accept is 32768.
>>
>> For benchmark, we are generating ~3k new connections per
>> second where each
>> connection makes 1 http request and then holds the connection
>> for next 30
>> seconds. This results in a high established connection on the
>> first frontend,
>> ssl_sess_id_router, ~25k per haproxy node (Total ~77k
>> connections on 3 haproxy
>> nodes). The second frontend (nginx_ssl_fe) receives the same
>> number of
>> connection on the frontend. 

Re: Question regarding haproxy backend behaviour

2018-04-15 Thread Moemen MHEDHBI
Hi


On 12/04/2018 19:16, Ayush Goyal wrote:
> Hi,
>
> I have a question regarding haproxy backend connection behaviour. We
> have following setup:
>
>   +-+     +---+
>   | haproxy |>| nginx |
>   +-+     +---+
>
> We use a haproxy cluster for ssl off-loading and then load balance
> request to
> nginx cluster. We are currently benchmarking this setup with 3 nodes
> for haproxy
> cluster and 1 nginx node. Each haproxy node has two frontend/backend
> pair. First
> frontend is a router for ssl connection which redistributes request to
> the second 
> frontend in the haproxy cluster. The second frontend is for ssl
> handshake and 
> routing requests to nginx servers. Our configuration is as follows:
>
> ```
> global
>     maxconn 10
>     user haproxy
>     group haproxy
>     nbproc 2
>     cpu-map 1 1
>     cpu-map 2 2
>
> defaults
>     mode http
>     option forwardfor
>     timeout connect 5s
>     timeout client 30s
>     timeout server 30s
>     timeout tunnel 30m
>     timeout client-fin 5s
>
> frontend ssl_sess_id_router
>         bind *:443
>         bind-process 1
>         mode tcp
>         maxconn 10
>         log global
>         option tcp-smart-accept
>         option splice-request
>         option splice-response
>         default_backend ssl_sess_id_router_backend
>
> backend ssl_sess_id_router_backend
>         bind-process 1
>         mode tcp
>         fullconn 5
>         balance roundrobin
>         ..
>         option tcp-smart-connect
>         server lbtest01 :8443 weight 1 check send-proxy
>         server lbtest02 :8443 weight 1 check send-proxy
>         server lbtest03 :8443 weight 1 check send-proxy
>
> frontend nginx_ssl_fe
>         bind *:8443 ssl 
>         maxconn 10
>         bind-process 2
>         option tcp-smart-accept
>         option splice-request
>         option splice-response
>         option forwardfor
>         reqadd X-Forwarded-Proto:\ https
>         timeout client-fin 5s
>         timeout http-request 8s
>         timeout http-keep-alive 30s
>         default_backend nginx_backend
>
> backend nginx_backend
>         bind-process 2
>         balance roundrobin
>         http-reuse safe
>         option tcp-smart-connect
>         option splice-request
>         option splice-response
>         timeout tunnel 30m
>         timeout http-request 8s
>         timeout http-keep-alive 30s
>         server testnginx :80  weight 1 check
> ```
>
> The nginx node has nginx with 4 workers and 8192 max clients,
> therefore the max
> number of connection it can accept is 32768.
>
> For benchmark, we are generating ~3k new connections per second where each
> connection makes 1 http request and then holds the connection for next 30
> seconds. This results in a high established connection on the first
> frontend,
> ssl_sess_id_router, ~25k per haproxy node (Total ~77k connections on 3
> haproxy
> nodes). The second frontend (nginx_ssl_fe) receives the same number of
> connection on the frontend. On nginx node, we see that active connections
> increase to ~32k.
>
> Our understanding is that haproxy should keep a 1:1 connection mapping
> for each
> new connection in frontend/backend. But there is a connection count
> mismatch
> between haproxy and nginx (Total 77k connections in all 3 haproxy for both
> frontends vs 32k connections in nginx made by nginx_backend), We are
> still not
> facing any major 5xx or connection errors. We are assuming that this is
> happening because haproxy is terminating old idle ssl connections to
> serve the
> new ones. We have following questions:
>
> 1. How the nginx_backend connections are being terminated to serve the new
> connections?
Connections are usually terminated when the client receives the whole
response. Closing the connection can be initiated by the client, the
server, or HAProxy (timeouts, etc.).

> 2. Why haproxy is not terminating connections on the frontend to keep
> it them at 32k
> for 1:1 mapping?
I think there is no 1:1 mapping between the number of connections in
haproxy and nginx. This is because you are chaining the two
frontend/backend pairs in haproxy, so when the client establishes 1
connection with haproxy you will see 2 established connections in
haproxy stats. This explains why the number of connections in haproxy is
double the one in nginx.
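
A rough sketch of the chaining, using the names from your config:

    client --(conn A)--> ssl_sess_id_router --(conn B)--> nginx_ssl_fe --(conn C)--> nginx

Each client connection shows up twice in haproxy's stats (A and B, one
per frontend/backend pair), while nginx only sees conn C.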

> Thanks
> Ayush Goyal

-- 
Moemen MHEDHBI



Re: Cookies, load balancing, stick tables.

2018-04-06 Thread Moemen MHEDHBI
Hi Franks,


On 28/03/2018 14:11, Franks Andy (IT Technical Architecture Manager) wrote:
>
> Hi all,
>
>   Hopefully an easy one, but I can’t really find the solution.
>
> We’ve come up with a control system for haproxy, where we manually can
> clear stick table entries from a GUI. We’re also using a cookie to set
> the server in a backend as we’re expecting to deal with clients behind
> a nat device.
>
>  
>
> It’s the customers (just internal IT in another dept) request that
> they should be able to close down a stick table entry and have the
> client not be able to go to that stick-table selected server AT ALL,
> even when presenting a cookie.
>
> It seems to me that HA is designed to allow these cookie selected
> server connections irrespective of the stick table entries, so there
> are two ways to continue to me:
>
>  
>
> 1)  Have the application remove the separate cookie we insert when
> the application gets logged off or times out (timeout happens at 15
> minutes of app idle time).
>
> 2)  We get HAProxy to control the expiry time of the cookie we
> send over, and refresh that expiry each time a transaction happens.
>
> 3)  Live with the imbalance of clients from NATted source ip
> addresses and ditch the cookie insertion.
>
>  
>
> We would all prefer #2, since the devs don’t want to spend time
> redeveloping, and HAProxy can seemingly do just about anything! #3
> would work, but removing entries from the stick table during testing
> or certain maintenance may well remove more than just the intended target.
>
>  
>
> Any ideas?
>
> Thanks
>
> Andy
>
For solution #2 you can use the "maxlife" param of the "cookie"
directive:
http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-cookie
When the maxlife date has expired, the cookie will be ignored, which
means haproxy will choose a different server. There is, however, no
clean way to refresh the expiry date short of rewriting the date in the
cookie with the "replace-header" action, and that won't be easy because
the date is in an internal haproxy format.

So if you don't want to spend time redeveloping the application you can
still go with solution #1, by removing the persistence cookie in haproxy
with something like:
    http-request replace-header Cookie SRV=[^;]*;? ' ' if ACL
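
For reference, a minimal sketch of solution #2, assuming a persistence
cookie named SRV and illustrative lifetimes:

    backend app
        balance roundrobin
        cookie SRV insert indirect nocache maxidle 30m maxlife 8h
        server s1 192.168.0.11:80 check cookie s1
        server s2 192.168.0.12:80 check cookie s2

With maxidle set, haproxy refreshes the cookie date as long as requests
keep coming, which is close to the requested "refresh on each
transaction" behaviour.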

-- 
Moemen MHEDHBI



Re: Rejected connections not getting counted in stats

2018-04-04 Thread Moemen MHEDHBI
Hi Errikos,


On 26/03/2018 13:03, Errikos Koen wrote:
> Hello,
>
> I have a frontend whitelisted by IP with the following rules:
>
> acl whitelist src -f /etc/haproxy/whitelist.lst
> tcp-request connection reject unless whitelist
>
> and while documentation
> <https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-tcp-request%20connection>
>  suggests
> I would be able to see the rejected connections counted in stats
> (quote: they are accounted separately for in the stats, as "denied
> connections"), those are stuck at 0.
>
> The whitelist appears to be working ok, making a request from a non
> whitelisted IP results in:
>
> $ curl -v http://hostname
> * About to connect() to hostname port 80 (#0)
> *   Trying xxx.xxx.xxx.xxx...
> * connected
> * Connected to hostname (xxx.xxx.xxx.xxx) port 80 (#0)
> > GET / HTTP/1.1
> > User-Agent: curl/7.26.0
> > Host: hostname
> > Accept: */*
> >
> * additional stuff not fine transfer.c:1037: 0 0
> * Recv failure: Connection reset by peer
> * Closing connection #0
> curl: (56) Recv failure: Connection reset by peer
>
> and whitelisted IPs work ok.
>
> I am running a self compiled haproxy 1.8.4 (with make options
> USE_PCRE=1 TARGET=linux2628 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1) on
> Debian 8 with 3.16.0-5-amd64 kernel.
>
> Any ideas?
>
> Thanks
> -- 
> Errikos Koen,
> Cloud Architect
> www.pamediakopes.gr <http://www.pamediakopes.gr>

It works for me using the same version and build options.
Maybe you are looking at the wrong counter.
The one in the stats page is about "denied requests" (i.e. about http
requests), while you should be looking for "denied connections". You can
find more about this here:
https://cbonte.github.io/haproxy-dconv/1.8/management.html#9.1
According to the doc, "denied connections" is the 81st field (counting
from 0), so the following command will help track the counter:
watch 'echo "show stat" | socat stdio <haproxy-socket-path> | cut -d
"," -f 1-2,82 | column -s, -t'

++

-- 
Moemen MHEDHBI



Re: What is the difference between session and request?

2018-02-21 Thread Moemen MHEDHBI
Hi,



On 20/02/2018 02:12, flamese...@yahoo.co.jp wrote:
> Hi all
>
> I found that there are fe_conn, fe_req_rate, fe_sess_rate, be_conn and
> be_sess_rate, but there is no be_req_rate.
>
> I understand that there might be multiple requests in one connection,
> what is a session here?

Googling your question will lead to this SO post:
https://stackoverflow.com/questions/33168469/whats-the-exact-meaning-of-session-in-haproxy
where:
- A connection is the event of connecting, which may or may not lead to
a session. A connection counter includes rejected connections, queued
connections, etc.
- A session is an end-to-end accepted connection. So it may be more
accurate to talk about requests per session rather than requests per
connection.
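
As an aside, these fetches are what you would use for rate-based rules.
A minimal sketch, with an illustrative threshold:

    frontend web
        bind :80
        # reject new connections when this frontend exceeds 1000 sessions/s
        tcp-request connection reject if { fe_sess_rate gt 1000 }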

>
> And how can I get be_req_rate?

Unfortunately, this fetch does not seem to be implemented yet.

>
> Thank you

-- 
Moemen MHEDHBI



Re: slowly move connections away from failed real server to remaining real server.

2018-02-13 Thread Moemen MHEDHBI


On 13/02/2018 15:49, Andrew Smalley wrote:
> Hi,
Hi Andrew,

>
> We have had a request and not sure if there is any way to implement this.
>
> Simply think of two real servers being loadbalanced. one fails all the
> connections are moved to the remaining server overloading it.
>
> What we want is for the traffic from the failed real server to be
> moved to the remaining real server without overloading it. IE Move a
> few connections at a time so the last server is not overloaded.
>
> Anyone know how this can be done?

Wouldn't setting the right maxconn value for the server be sufficient
here? The extra traffic due to the failed server would then be queued.
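
A minimal sketch, with illustrative limits:

    backend app
        balance leastconn
        timeout queue 30s
        server s1 192.168.0.11:80 check maxconn 500
        server s2 192.168.0.12:80 check maxconn 500

If s1 fails, s2 still serves at most 500 concurrent connections and the
excess waits in the queue (bounded by "timeout queue") instead of
overloading the remaining server.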


>
>
> Andruw Smalley
>
> Loadbalancer.org Ltd.
>
> www.loadbalancer.org
> +1 888 867 9504 / +44 (0)330 380 1064
> asmal...@loadbalancer.org
>
> Leave a Review | Deployment Guides | Blog
>

-- 
Moemen MHEDHBI





Re: Difference between variables and sample fetches?

2018-01-30 Thread Moemen MHEDHBI
You are right Tim, no, this is not a bug.

In fact you can use log format variables in actions too. After looking
at the code and docs, I think you can use log format variables whenever
you have <fmt> in the docs, e.g.:
add-header <name> <fmt>
set-query <fmt>
etc.

It seems that historically log format variables were used for logging,
then the usage was extended to actions.

To sum up:
- When you find "log format expression" in the docs: you can use log
variables like %ci; the log format also has access to sample expressions
via square brackets: %[src] or %[src,ipmask(24)].
- When you find "sample expression" in the docs: you can only use sample
fetches and converters.


On 30/01/2018 14:04, Tim Düsterhus wrote:
> Moemen,
>
> On 30.01.2018 at 10:15, Moemen MHEDHBI wrote:
>> The variables you are talking about are more precisely "log format
>> variables" that are only available for the logging part of HAProxy.
> Yes, they are documented in the "log format" section, but they seem to
> work in other places as well. That's why I asked:
>
> haproxy.cfg:
>> defaults
>>  timeout connect 5s
>>  timeout client  50s
>>  timeout server  50s
>>
>> frontend https
>>  mode http
>>  bind :8080
>>
>>  http-response set-header Test %ci
>>  http-response set-header Test2 %[src]
>>
>>  use_backend example
>>
>> backend example
>>  mode http
>>  server example example.com:80
> Example curl:
>
>> [timwolla@~]curl -I localhost:8080
>> HTTP/1.1 404 Not Found
>> Content-Type: text/html
>> Date: Tue, 30 Jan 2018 13:02:30 GMT
>> Server: ECS (lga/1395)
>> Content-Length: 345
>> Test: 127.0.0.1
>> Test2: 127.0.0.1
> Is it a bug that they work inside for example http-response set-header
> (as shown above) as well?
>
> Best regards
> Tim Düsterhus

-- 
Moemen MHEDHBI





Re: Difference between variables and sample fetches?

2018-01-30 Thread Moemen MHEDHBI
Hi Tim,


On 22/01/2018 19:29, Tim Düsterhus wrote:
> Hi
>
> what are the differences between variables and sample fetches? Some
> values can be retrieved using both. For example the src IP address can
> be retrieved using  both `%ci` as well as `%[src]`.
>
> One difference I noticed is that I don't think I am able to use
> converters (e.g. ipmask) for variables (e.g. %ci).
>
> Are there any other differences?
>
> Best regards
> Tim Düsterhus
>

The variables you are talking about are more precisely "log format
variables", which are only available in the logging part of HAProxy.
Sample fetches are used to extract data from traffic streams and use it
for content-aware routing, stickiness, etc.

++

Moemen MHEDHBI





Re: Layer 7 Routing Capabilities for Non-HTTP Protocol

2018-01-29 Thread Moemen MHEDHBI
Hi Alistair,


On 26/01/2018 12:30, Alistair Lowe wrote:
> Hello all,
>
> I'm looking to use HAProxy as part of an IOT device server. As my
> devices will be bandwidth and resource contained, I can't use HTTP
> protocol but require content aware routing and stickiness similar to
> what HAProxy offers in HTTP mode today, over a custom protocol on top
> of TCP.
>
> What, if any, options and features are available to me under these
> circumstances?
>
> Any advise offered would be greatly appreciated!
>
> Many thanks,
> Alistair

In TCP mode, HAProxy can do content-aware routing and stickiness based
on the information available at Layer 4
(http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.3) and
Layer 5
(http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.4).
If this is not sufficient and you need to look into the custom protocol,
using Lua scripts may be useful here.
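
A minimal sketch of payload-based routing in TCP mode, assuming the
first bytes of your custom protocol carry a routable marker (the hex
value and backend names are illustrative):

    frontend iot
        mode tcp
        bind :9000
        tcp-request inspect-delay 5s
        tcp-request content accept if { req.payload(0,4) -m found }
        use_backend pool_a if { req.payload(0,4) -m bin 01020304 }
        default_backend pool_b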

++

-- 
Moemen MHEDHBI



Re: Warning: upgrading to openssl master+ enable_tls1_3 (coming v1.1.1) could break handshakes for all protocol versions .

2018-01-13 Thread Moemen MHEDHBI
HI Pavlos,


On 12/01/2018 22:53, Pavlos Parissis wrote:
> On 12/01/2018 03:57 μμ, Emeric Brun wrote:
>> Hi All,
>>
>> FYI: upgrading to next openssl-1.1.1 could break your prod if you're using a 
>> forced cipher list because
>> handshake will fail regardless the tls protocol version if you don't specify 
>> a cipher valid for TLSv1.3
>> in your cipher list.
>>
>> https://github.com/openssl/openssl/issues/5057
>>
>> https://github.com/openssl/openssl/issues/5065
>>
>> Openssl's team doesn't seem to consider this as an issue and I'm just bored 
>> to discuss with them.
>>
>> R,
>> Emeric
>>
>
> So, If we enable TLSv1.3, together with TLSv1.2, on the server side, then 
> client must support
> TLSv1.3 otherwise it will get a nice SSL error. Am I right? If I am right, I 
> hope I'm not, then we
> have to wait for all clients to support TLSv1.3 before we enabled it on the 
> server side, this
> doesn't sound right and I am pretty sure I am completely wrong here.
>
> Cheers,
> Pavlos
>
>

Not exactly: the moment you force a cipher list that does not include a
TLSv1.3 cipher on the server side (which has TLSv1.3 enabled), the TLS
handshake will break regardless of what is in the Client Hello.
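
For illustration, a sketch of a bind line that forces a cipher list but
still includes TLSv1.3 suites, assuming a HAProxy build recent enough to
expose the "ciphersuites" keyword (with OpenSSL 1.1.1, TLSv1.3 suites
are configured separately from the TLSv1.2-and-below cipher list):

    frontend https
        bind :443 ssl crt /etc/haproxy/cert.pem ciphers ECDHE-RSA-AES128-GCM-SHA256 ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384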

-- 
Moemen MHEDHBI




Re: Use haproxy 1.8.x to balance web applications only reachable through Internet proxy

2017-12-11 Thread Moemen MHEDHBI


On 11/12/2017 17:21, Gbg wrote:
> Hello Moemen,
>
> unless I got this wrong this isn't the setup I search for. I don't
> need haproxy to *be* a proxy but rather *use* a proxy while serving
> content over http as a reverse proxy
>
> Perhaps I should have given the thread this name

I get your point. HAProxy does not support using an "http/socks proxy":
HAProxy is meant to be an HTTP reverse proxy retrieving resources from
backend servers, and the requests sent to a server are different from
the ones sent to a "forward proxy".
That being said, HAProxy can still "pass" proxy requests to http/socks
proxies if the client is configured to use a proxy.

++
>
> Am 11. Dezember 2017 16:56:12 MEZ schrieb Moemen MHEDHBI
> <mmhed...@haproxy.com>:
>
>
> On 11/12/2017 15:02, Gbg wrote:
>
> I need to contact applications through a socks or http proxy.
> My current setup looks like this but only works when the
> Computer haproxy runs on has direct Internet connection (which
> is not the case in our datacenter, I tried this at home)
> frontend main bind *:8000 acl is_extweb1 path_beg -i /policies
> acl is_extweb2 path_beg -i /produkte use_backend externalweb1
> if is_extweb1 use_backend externalweb2 if is_extweb2 backend
> externalweb1 server static www.google.com:80 check backend
> externalweb2 server static www.gmx.net:80 check There is an SO
> post which addresses the same question and provides some more
> details:
> 
> https://stackoverflow.com/questions/47605766/use-haproxy-as-an-reverse-proxy-with-an-application-behind-internet-proxy
>
>
>
>
>
> Hi Gbg,
>
> For this to work you need the client (browser for example) to be aware
> of the forward proxy.
> So first you need to configure the client to use HAProxy as a forward
> proxy, then in the HAProxy conf you need to use the forward proxy in the
> backend and the configuration may look like this:
>
> frontend main
> bind *:8000
> acl is_extweb path_beg -i /policies /produkte
> use_backend forward_proxy if is_extweb
> default_backend another_backend
>
> backend forward_proxy
>   server static < IP-of-the-forward-proxy > : < Port > check
>
>
> ++
>
> Moemen MHEDHBI
>
>
>
>
> -- 
> Diese Nachricht wurde von meinem Android-Gerät mit K-9 Mail gesendet. 

-- 
Moemen MHEDHBI



Re: Use haproxy 1.8.x to balance web applications only reachable through Internet proxy

2017-12-11 Thread Moemen MHEDHBI


On 11/12/2017 15:02, Gbg wrote:
> I need to contact applications through a socks or http proxy.
>
> My current setup looks like this but only works when the Computer
> haproxy runs on has direct Internet connection (which is not the case
> in our datacenter, I tried this at home)
>
> frontend main
> bind *:8000
> acl is_extweb1 path_beg -i /policies
> acl is_extweb2 path_beg -i /produkte
> use_backend externalweb1 if is_extweb1
> use_backend externalweb2 if is_extweb2
>
> backend externalweb1
> server static www.google.com:80 check
>
> backend externalweb2
> server static www.gmx.net:80 check
>
> There is an SO post which addresses the same question and provides
> some more details:
> https://stackoverflow.com/questions/47605766/use-haproxy-as-an-reverse-proxy-with-an-application-behind-internet-proxy



Hi Gbg,

For this to work you need the client (browser for example) to be aware
of the forward proxy.
So first you need to configure the client to use HAProxy as a forward
proxy, then in the HAProxy conf you need to use the forward proxy in the
backend and the configuration may look like this:

frontend main
bind *:8000
acl is_extweb path_beg -i /policies /produkte
use_backend forward_proxy if is_extweb
default_backend another_backend

backend forward_proxy
  server static < IP-of-the-forward-proxy > : < Port > check


++

Moemen MHEDHBI





Re: HLS loadbalancing

2017-11-29 Thread Moemen MHEDHBI
Hi

HLS seems to be based on standard HTTP requests, so although I don't
know the details of the HLS protocol, I think HAProxy can do the job.

You can use the different HAProxy timeouts to deal with long HTTP
sessions, and you can rely on any HTTP header, cookie, etc. to route
requests to the appropriate pool.
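
A minimal sketch of cookie-based stickiness across a pool of HLS
streamers (names, addresses and timeouts illustrative):

    frontend hls
        mode http
        bind :80
        timeout client 5m
        default_backend hls_pool

    backend hls_pool
        mode http
        balance roundrobin
        timeout server 5m
        cookie HLSSRV insert indirect nocache
        server stream1 10.0.0.1:8080 check cookie stream1
        server stream2 10.0.0.2:8080 check cookie stream2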

++


On 29/11/2017 08:09, Deon S wrote:
>
> Good day
>
> HAproxy looks like a very nice product. Was wondering if we can use it
> for HLS server load balancing. HLS is basically http file download but
> chunked. Connection times are way longer that normal and persistence
> will definitely be a key factor. Load balancer will ensure that
> connections are spread over a number of HLS streamers. Possibility we
> can enforce this with user accounts, when user connects we have some
> information and can then direct it to pool,  perhaps cookie.
>
> Search mailing list but found nothing about HLS.
>
> Regards
>
> Deon
>
> -- 
Hls

-- 
Moemen MHEDHBI



Re: HAProxy 1.7.9 Not Capturing Application Session Cookie

2017-11-28 Thread Moemen MHEDHBI
Hi Hermant,



On 27/11/2017 20:31, Coscend@HAProxy wrote:
>
> Hello Moemen,
>
>  
>
> Thank you and very thoughtful of you to educate us on how HAProxy
> handles Websockets and logs cookies.  Guidance such as these have
> helped us grow from a rank startup to offer SLA-based healthcare
> services to disadvantaged remote areas (where there are no
> hospitals/clinics) through our Web-based products.  These patients
> indirectly benefit from your guidance, besides us who benefit directly.
>
>  
>
> 
>
It is great to know that HAProxy is contributing to such projects :)

> Without the cookie in the request of the login page, our users are
> unable to login into the product.  Going by your guidance, it would be
> advisable to insert the JSESSIONID received in server response back
> into the client request.  This will help our product server
> authenticate users to login.  Are we on the right path?
>
If a server is inserting a cookie when replying to HAProxy, then HAProxy
should send that cookie when replying to the client (unless you
explicitly ask HAProxy to remove the cookie). So I am almost sure that
if no cookie is logged, it is because no cookie is actually being sent
(via Set-Cookie).

> https://www.haproxy.com/documentation/aloha/8-5/haproxy/traffic-capture/ 
> àInsert a cookie if none presented by the client
>
>  
>
> If we need to course correct, please advise alternatives.
>
>  
>
> As advised, we are using for Websockets
>
> backend subdomain_cc
>
>     timeout tunnel  3600s 
>
I am not sure this is going to help: you don't just need to insert a
cookie, you need a cookie with the right value to make this work
(unless I am mistaken about how your app works).


I think we are being confused by the whole Websocket thing while it
**shouldn't be** the case.
Sorry for the confusion but Websocket is probably not the problem here.
So I am going to get back to some of your previous questions in order to
make this clearer.
>
>  
>
>  
>
> *From:*Moemen MHEDHBI [mailto:mmhed...@haproxy.com]
> *Sent:* Monday, November 27, 2017 1:15 PM
> *To:* haproxy@formilux.org
> *Subject:* Re: HAProxy 1.7.9 Not Capturing Application Session Cookie
>
>  
>
> Hi Hemant,
>
> When using websocket, HAProxy will switch to tunnel mode whenever it
> detects the Connection: Upgrade header.
>
> Tunnel mode means that only the first request and response are
> processed and logged and everything else will be forwarded with no
> analysis, I think this is what happens with your 3.3.2 version.
> Normally you will only be able to see the cookie in the log if it is
> present in the request initiating the websocket connection.
>
> On the other hand, with your 3.3.0 version, HAProxy works in the
> default keep-alive-mode where every request is processed and logged.
>
> ++
>
>  
>
> On 24/11/2017 23:30, Coscend@Coscend wrote:
>
> Hello Moemen,
>
>  
>
> Thank you for your encouraging insights.  Below is the information
> you asked. 
>
>  
>
> >>Also you mentioned  the application extensively uses Websockets. Is it
> only 3.3.2 using websockets ? if that is the case this may be a
> good lead since HAProxy does not handle websockets traffic in the
> same way as it does for normal http traffic.
>
>  
>
> Yes, only v. 3.3.2 uses Websockets.  (v. 3.3.0 did not use
> Websockets and access via HAProxy was seamless.)
>
>  
>
> Could you please educate us on what configuration changes we need
> to do for Websockets traffic (vs. HTTP traffic)?
>
Basically, there is nothing that really has to be changed in your
HAProxy configuration for your 3.3.2 version, as long as you are sending
the cookie in the same way (cookie header).
Your new app will be talking HTTP (and that is where cookies and headers
can be processed); then, once it has switched to websocket, we no longer
have to talk about JSESSIONID or anything else related to HTTP.

>  
>
> >>In your first post you said that it is working for 3.3.0 but not
> 3.3.2, then maybe this is an application issue. Are you sure 3.3.2
> does sent the JSESSIONID.
>
>  
>
> Yes.  Please see below JSESSIONID in the login page URL loaded,
> HAProxy logs and product log.  Is there any other way to verify
> whether the v. 3.3.2 is publishing JSESSIONID?
>
I think this is the most important part: you need to know when the
JSESSIONID cookie is being sent, and if it is, you should be able to see
that in HAProxy logs.
>
>  
>
> Through HAProxy, login page URL loads with a JSESSIONID: 
> 
> https://coscend.com/CoscendCC.

Re: HAProxy 1.7.9 Not Capturing Application Session Cookie

2017-11-27 Thread Moemen MHEDHBI
>     stick store-response cookie(JSESSIONID)
>
>     stick store-response res.cook(JSESSIONID)
>
>     #stick match req.cook(JSESSIONID)
>
> stick store-request req.cook(JSESSIONID)
>
>     stick store-request cookie(JSESSIONID)
>
>     stick store-request urlp(JSESSIONID)
>
>     stick store-request urlp(jsessionid)
>
>     acl hdr_location res.hdr(Location) -m found
>
>     rspirep ^(Location:)\
> http://bk.coscend.local:6080/CoscendCC.Test/(.*)$   Location:\
> https://coscend.com/CoscendCC.Test/\2 if hdr_location
>
>  
>
>     acl hdr_set_cookie_domain res.hdr(Set-cookie) -m found sub
> Domain=bk.coscend.local
>
>     rspirep ^(Set-Cookie:.*)\ Domain=bk.coscend.local(.*) \1\
> Domain=coscend.com\2 if hdr_set_cookie_domain
>
> acl hdr_set_cookie_path_cc_test res.hdr(Set-cookie) -m found sub Path=
>
>     rspirep ^(Set-Cookie:.*)\ Path=(.*)$ \1\ Path=/CoscendCC.Test\2 if
> hdr_set_cookie_path_cc_test
>
>  
>
>    server CoscendCC.Test bk.coscend.local:6080 cookie cc-tt-d check
>
>  
>
> Sincerely,
>
>  
>
> Hemant K. Sabat
>
>  
>
> Coscend Communications Solutions
>
> www.Coscend.com <http://www.coscend.com/>
>
> --
>
> *Real-time, Interactive Video Collaboration, Tele-healthcare,
> Tele-education, Telepresence Services, on the fly…*
>
> --
>
> CONFIDENTIALITY NOTICE: See 'Confidentiality Notice Regarding E-mail
> Messages from Coscend Communications Solutions' posted
> at:http://www.Coscend.com/Anchor/Common/Terms_and_Conditions.html
>
>  
>
>  
>
>  
>
> *From:*Moemen MHEDHBI [mailto:mmhed...@haproxy.com]
> *Sent:* Thursday, November 23, 2017 10:49 AM
> *To:* haproxy@formilux.org
> *Subject:* Re: HAProxy 1.7.9 Not Capturing Application Session Cookie
>
>  
>
> Hi,
>
> Your configuration seems correct to me.
> In your first post you said that it is working for 3.3.0 but not
> 3.3.2, then maybe this is an application issue. Are you sure 3.3.2
> does sent the JSESSIONID.
>
> Also you mentioned  the application extensively uses Websockets. Is it
> only 3.3.2 using websockets ? if that is the case this may be a good
> lead since HAProxy does not handle websockets traffic in the same way
> as it does for normal http traffic.
>
> ++
>
> On 23/11/2017 08:43, Coscend@Coscend wrote:
>
> Dear HAProxy Community,
>
>  
>
> This is a follow up on a previous post after doing several
> additional configuration changes and tests.  We would appreciate
> your insights to resolve the issue we are facing with non-capture
> of application session cookie in HAProxy logs.
>
>  
>
> HAProxy 1.7.9 provides SSL termination and reverse proxy to our
> Java-based HTML5 Web application.  The application extensively
> uses WebSockets.  This application generates a session cookie that
> contains a JSESSIONID for session stickiness and authentication. 
> We would like to capture the cookie contained in the request and
> response.  With the configuration below, HAProxy fails to capture
> the session cookie as per the logs (see below).  
>
>  
>
> How could we refine our configuration?  Or, is it a known
> limitation in HAProxy regarding application session cookie?
>
>  
>
> Login page URL loads via HAProxy with a JSESSIONID: 
> 
> https://coscend.com/CoscendCC.Test/signin;jsessionid=E916C54BB7A9EA30E3EC9021AEF4CB79
>
>  
>
> HAProxy.cfg
>
> --
>
> global
>
> …
>
> defaults
>
> …
>
> frontend webapps-frontend
>
>     bind  *:80 name http 
>
> bind  *:443 name https ssl crt "$SSL_CRT_FILE"
>
>     option    forwardfor  
>
> http-request set-header X-Forwarded-Port %[dst_port] 
>
> http-request set-header X-Forwarded-Proto https if { ssl_fc
> }   
>
> option    httplog
>
> log   global  
>
> option    log-separate-errors  
>
> …
>
>     capture cookie JSESSIONID len 63   
>
>     capture request  header Host len 64
>
> …
>
>     capture response header Server len 20  
>
> …
>
>     acls…
>
>     acl host_cos

Re: HAProxy 1.7.9 Not Capturing Application Session Cookie

2017-11-23 Thread Moemen MHEDHBI
 {|coscend.com||https://coscend.com/Co}
> {|||no-cache||no-cache|chunked} "GET
> /CoscendCC.Test/wicket/bookmarkable/org.apache.coscendcc.web.pages.auth.SignInPage;jsessionid=E916C54BB7A9EA30E3EC9021AEF4CB79?1-1.0-signin&_=1511422199789=Netscape=5.0%20(Windows%20NT%2010.0%3B%20Win64%3B%20x64)%20AppleWebKit%2F537.36%20(KHTML%2C%20like%20Gecko)%20Chrome%2F62.0.3202.94%20Safari%2F537.36=Mozilla=true=false=en-US=Win32=Mozilla%2F5.0%20(Windows%20NT%2010.0%3B%20Win64%3B%20x64)%20AppleWebKit%2F537.36%20(KHTML%2C%20like%20Gecko)%20Chrome%2F62.0.3202.94%20Safari%2F537.36=1600=900=24=-6&
>
>  
>
>  
>
> Thank you.
>
>  
>
> Sincerely,
>
>  
>
> Hemant K. Sabat
>
>  
>
> Coscend Communications Solutions
>
> www.Coscend.com <http://www.coscend.com/>
>
> --
>
> *Real-time, Interactive Video Collaboration, Tele-healthcare,
> Tele-education, Telepresence Services, on the fly…*
>
> --
>
> CONFIDENTIALITY NOTICE: See 'Confidentiality Notice Regarding E-mail
> Messages from Coscend Communications Solutions' posted
> at:http://www.Coscend.com/Anchor/Common/Terms_and_Conditions.html
>
>  
>
>  
>
> *From:* Coscend@Coscend [mailto:haproxy.insig...@coscend.com]
> *Sent:* Thursday, September 21, 2017 2:27 PM
> *To:* haproxy@formilux.org
> *Subject:* HAProxy 1.7.9 Not Capturing Application Session Cookie
>
>  
>
> Dear HAProxy Community,
>
>  
>
> Your guidance on the following issue we are facing would be appreciated.
>
>  
>
> CONTEXT
>
> --
>
> We are running two versions of our application--APP-3.3.0 and
> APP-3.3.2--on the same server and same environment.  Both the APPs are
> running perfectly if we directly access them, while bypassing HAProxy.
>
>  
>
> ISSUE
>
> -
>
> While accessing these APPs through HAProxy,
>
> 1. APP-3.3.0:  HAProxy is capturing JSESSIONID in logs. 
>
> 2. APP-3.3.2: HAProxy is NOT capturing JSESSIONID in logs. 
>
>  
>
>  
>
> QUESTION
>
> 
>
> Could you please advise how HAProxy captures application session
> cookies? 
>
> Is the capture portion of our HAProxy config below incorrect? 
>
> Or, is there a problem with our APP-3.3.2? 
>
> Thank you.
>
>  
>
>  
>
> =LOGS and CONFIG PARAMETERS=
>
>  
>
> HAProxy logs
>
> -
>
> APP-3.3.0:  HAProxy captures JSESSIONID in each log line.
>
> Sep 21 13:36:07 localhost haproxy[10415]: 192.168.100.152:56085
> [21/Sep/2017:13:36:07.914] webapps-frontend~
> subdomain-backend/APP-3.3.0 0/0/0/3/10 200 86916
> JSESSIONID=66BC3A6F228503A5D39F4B8E6F1FF951 -  6/6/0/0/0 0/0
> {.com||https://.com/Co}
> {|86575|max-age=||cache|} "GET
> /APP-3.3.0/wicket/resource/org.apache.wicket.resource.JQueryResourceReference/jquery/jquery-3.2.1-ver-3B390F5614B3789CE71FFA5C856AA35E.js
> HTTP/1.1"
>
>  
>
>  
>
> APP-3.3.2:  JSESSIONID is missing in majority of the log lines.
>
> Sep 21 13:39:23 localhost proxy-server[10517]: 192.168.100.152:56391
> [21/Sep/2017:13:39:23.450] webapps-frontend~
> subdomain-backend/APP-3.3.2 0/0/1/4/8 200 86916 - -  6/6/0/0/0 0/0
> {.com||https://.com/Co}
> {|86575|max-age=||cache|} "GET
> /APP-3.3.2/wicket/resource/org.apache.wicket.resource.JQueryResourceReference/jquery/jquery-3.2.1-ver-3B390F5614B3789CE71FFA5C856AA35E.js
> HTTP/1.1"
>
>  
>
>  
>
> HAProxy 1.7.9 config (Relevant portion)
>
> ==
>
> …
>
> frontend webapps-frontend
>
> …
>
>     http-request set-header X-Forwarded-Port %[dst_port] 
>
> http-request set-header X-Forwarded-Proto https if { ssl_fc }   
>
>  
>
>     ### Logging options
>
>     option    httplog
>
> log   global  
>
> #option       logasap
>
>  
>
>     capture cookie JSESSIONID len 124  
>
>     capture request  header Host len 64
>
>     capture request  header Content-Length len 10  
>
>     capture request  header Referer len 32 
>
>     capture response header Server len 20  
>
>     capture response header Content-Length len 10  
>
>     capture response header Cache-Control len 8
>
>     capture response header Via len 20         
>
>     capture response header Location len 20
>
>     capture response header X-Backend-Server-Name len 20   
>
>    
>
> capture response header Content-Security-Policy len 128
>
>     capture response header Strict-Transport-Security len 64   
>
>     capture response header X-Frame-Options len 32 
>
>     capture response header X-XSS-Protection len 32
>
>     capture response header X-Content-Type-Options len 32  
>
>     capture response header Referrer-Policy len 32 
>
>     capture response header Pragma len 32  
>
>     capture response header Transfer-Encoding len 32   
>
>    
>
> capture response header Access-Control-Allow-Origin len 32
>
>     capture response header Access-Control-Allow-Headers len 32
>
>     capture response header Access-Control-Allow-Methods len 32
>
>     capture response header Access-Control-Allow-Credentials len 20
>
>  
>
> backend subdomain-backend
>
>     http-response set-header Strict-Transport-Security
> "max-age=31536000; includeSubDomains; preload"
>
>     http-response set-header X-Frame-Options "SAMEORIGIN" # or "DENY"
>
>     http-response set-header X-XSS-Protection "1; mode=block"
>
>     http-response set-header X-Content-Type-Options "nosniff"
>
>    http-response set-header Referrer-Policy
> "no-referrer-when-downgrade"  
>    
>
>
>     http-response set-header Pragma "no-cache" #Deprecated, only for
> backwards compatibility with HTTP/1.0 clients.
>
>     http-response set-header Cache-Control "nocache, no-store"
>   
>  
>
>
>  
>
>     http-response set-header Access-Control-Allow-Origin "*"
> #"%%{AccessControlAllowOrigin} env=AccessControlAllowOrigin"
>
>     http-response set-header Access-Control-Allow-Headers "Origin,
> X-Requested-With, Content-Type, Accept, X-CSRF-Token, X-XSRF-TOKEN"
>
>     http-response set-header Access-Control-Allow-Methods "GET, POST,
> PUT, DELETE, OPTIONS"
>
>     http-response set-header Access-Control-Allow-Credentials "true"
>
>  
>
>     http-response set-header X-Backend-Server-Name %s
>
>  
>
>  
>
>
>
>  
>

-- 
Moemen MHEDHBI



Re: No TIME-WAIT socket when using Haproxy with Redis

2017-11-09 Thread Moemen MHEDHBI
Hi,

If you are talking about TCP health checks, this is normal behaviour,
because we take care of closing the connection with an RST after the
handshake.

And when closing the connection with an RST there is no TIME_WAIT,
because RFC 793 explicitly says that on RST reception no response is to
be sent and you must go into the CLOSED state.
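
Incidentally, if you want an application-level check rather than a bare
TCP handshake, a minimal sketch using the built-in Redis check (address
illustrative):

    backend redis
        mode tcp
        option redis-check
        server r1 10.0.0.21:6379 check inter 2s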

++


On 09/11/2017 09:58, Fei Ding wrote:
> Hi:
>
> I am confused about why there is no TIME-WAIT socket when using
> Haproxy to check Redis living status. Could any one give me some hit,
> or even get me to the specific source code?
>
> Thanks a lot.
>

-- 
Moemen MHEDHBI



Re: HTTP DELETE command failing

2017-11-02 Thread Moemen MHEDHBI
HAProxy is replying 403, which means the DELETE request was explicitly
denied by your conf.

In order for us to help you, we need to have a look at your conf.
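
For reference, a typical pattern that produces exactly this symptom,
assuming a method-filtering rule somewhere in the frontend (names
illustrative):

    frontend main_ssl
        acl allowed_method method GET HEAD POST PUT
        # anything else, e.g. DELETE, gets rejected with a 403
        http-request deny unless allowed_method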

++

On 02/11/2017 17:17, Norman Branitsky wrote:
>
> In HAProxy version 1.7.5,
>
> I see GET and POST commands working correctly but DELETE fails:
>
> [01/Nov/2017:11:02:34.423] main_ssl~ ssl_training-01/training-01.
> 0/0/0/20/69 200 402587 - -  6/6/0/0/0 0/0 "GET
> /etk-training-ora1/etk-apps/rt/admin/manage-users.js HTTP/1.1"
>
> Nov  1 11:02:34 localhost haproxy[40877]: 10.20.120.220:64971
> [01/Nov/2017:11:02:34.690] main_ssl~ ssl_training-01/training-01.
> 0/0/0/150/151 200 1490 - -  6/6/0/1/0 0/0 "POST
> /etk-training-ora1/auth/oauth/token HTTP/1.1"
>
> Nov  1 11:02:34 localhost haproxy[40877]: 10.20.120.220:64971
> [01/Nov/2017:11:02:34.889] main_ssl~ ssl_training-01/training-01.
> 0/0/1/54/56 200 388 - -  6/6/1/1/0 0/0 "GET
> /etk-training-ora1/private/api/systemPreferences/maxPageSize HTTP/1.1"
>
> Nov  1 11:02:35 localhost haproxy[40877]: 10.20.120.220:64970
> [01/Nov/2017:11:02:34.890] main_ssl~ ssl_training-01/training-01.
> 0/0/1/329/331 200 19968 - -  6/6/0/0/0 0/0 "GET
> /etk-training-ora1/private/api/users?page=0=50=accountName,ASC
> HTTP/1.1"
>
> Nov  1 11:02:42 localhost haproxy[40877]: 10.20.120.220:64971
> [01/Nov/2017:11:02:42.571] main_ssl~ main_ssl/ 0/-1/-1/-1/0 403
> 188 - - PR-- 4/4/0/0/0 0/0 "DELETE
> /etk-training-ora1/private/api/users/62469 HTTP/1.1"
>
>  
>
> In the GET and POST commands, path_beg matches /etk-training-ora1.
>
> It appears that in the DELETE command path_beg returns nothing or
> something else.
> Suggestions, please?
>
>  
>
> Norman
>
> * *
>
> *Norman Branitsky
> *Cloud Architect
>
> MicroPact
>
> (o) 416.916.1752
>
> (c) 416.843.0670
>
> (t) 1-888-232-0224 x61752
>
> www.micropact.com <http://www.micropact.com/>
>
> Think it > Track it > Done
>
>  
>

-- 
Moemen MHEDHBI

Support Engineer
http://haproxy.com
Tel: +33 1 30 67 60 71



Re: In core.register_service use socket.http block?

2017-10-24 Thread Moemen MHEDHBI
Hi,

What do you mean exactly by blocking? Blocked from doing what?

The use-service directive will make HAProxy use the Lua script instead
of a backend server when "proxying" the request.

++

On 24/10/2017 10:38, aogooc xu wrote:
> hello, 
>
> I would like to achieve a register in the register_service mini http
> proxy, dynamic processing request
>
> * problem
>
> Use luasocket in lua code,will cause blocking ?
>
> Test 10 concurrent requests,example
>
> ab -c 10  -n 1000  http://127.0.0.1/test/
>
> It will block and wait for all requests to be completed, I am confused。
>
>
> haproxy cfg exmaple:
>
> http-request use-service lua.haproxy-proxy
>
>
>
>
>

-- 
Moemen MHEDHBI





Re: possible to capture custom response header for http logs?

2017-10-24 Thread Moemen MHEDHBI
Hi David,

Using "http-response set-header " and " capture response header" cannot
grantee an execution order so that the header is captured at the right time.

Can you try using this config in the following order:

1/ declare capture response len 12 id 0
2  http-response set-header X-R-ID %[res.hdr("X-Used-Params"),djb2(1),hex]
3/ http-response capture hdr(X-R-ID) id 0

++


On 24/10/2017 02:27, David Birdsong wrote:
> I'm using haproxy to create an identifier using an upstream response
> header like so:
>
> http-response set-header X-R-ID %[res.hdr("X-Used-Params"),djb2(1),hex]
>
> I'm having trouble getting haproxy to log this value with the
> additional capture header that should get routed to my custom http log:
>
> capture response header X-R-ID len 16
>
> Does http-response set-header run too late to be captured for logging?
>

-- 
Moemen MHEDHBI



Re: Question about https rewrite

2017-10-19 Thread Moemen MHEDHBI

Hi,

On 18/10/2017 20:22, Benoît Vézina wrote:
> Hi,
>
> I did spend a lot (I really mean a lot) trying to make work Odoo
> webslide behind Haproxy but I still end put an nginx cause that module
> is sending javascript that call stuff in http instead of https.
>
> In the nginx world I have to had that to my server section and all the
> rewrite is done fine:
>
>    proxy_set_header X-Forwarded-Proto $scheme;
This can be expressed in HAProxy by the following two lines:

http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }

> So do I have to installed a nginx between haproxy and odoo to do the
> rewrite or do it is a way to do it in haproxy.
>
> Here is my frontend and backend section
>
> frontend 443
> bind *:443 ssl crt /etc/haproxy/certs/current/xtremxpert.pem ssl crt
> /etc/haproxy/certs/current
> reqadd X-Forwarded-Proto:\ https
> mode http
> acl 443_xtremxpert_com__host hdr(host) -i xtremxpert.com
> acl 443_xtremxpert_com__host hdr(host) -i xtremxpert.com:443
> use_backend 443_xtremxpert_com_ if 443_xtremxpert_com__host
>
> backend 443_xtremxpert_com_
> acl forwarded_proto hdr_cnt(X-Forwarded-Proto) eq 0
> acl forwarded_port hdr_cnt(X-Forwarded-Port) eq 0
> http-request add-header X-Forwarded-Port %[dst_port] if forwarded_port
> http-request add-header X-Forwarded-Proto https if { ssl_fc }
> forwarded_proto
> mode http
> server 03bfdfc9400011968ca41e78cca5cf00dc68b773 10.42.179.224:8069
>
>

It is not clear what you want to do here. If you just want to send
X-Forwarded-Proto with the corresponding scheme, then you already have
the answer; otherwise we need more details about your problem.

++

-- 
Moemen MHEDHBI



Re: Haproxy config for sticky route

2017-10-17 Thread Moemen MHEDHBI
Hi Ruben,

You are defining only one server in your backend, so even if your
resolvers return 3 addresses, HAProxy will pick only one (probably the
first).

You need to define three servers; you can do it manually (three server
lines) or use the "server-template" keyword.
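
A minimal sketch with server-template (names illustrative), adding
consistent hashing so a given chatName keeps mapping to the same
resolved address:

    backend chat
        balance url_param chatName
        hash-type consistent
        server-template srv_chat 3 chat:80 check resolvers dns init-addr none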



On 13/10/2017 14:59, Ruben wrote:
> Well =fun should be a variable name. But the name, whatever it is,
> should always be routed to the same ip based on some consistency
> algorhithm.
>
> I've build some DNS server to correct for the randomizing of the
> server list from the dns. So the following:
>
> dig @ordered-dns-proxy chat
>
> always gives me the list:
>
> 10.0.0.11
> 10.0.0.12
> 10.0.0.13
>
> in this order.
>
> My config now looks like:
> -
> defaults
>     mode http
>     timeout client 5s
>     timeout connect 5s
>     timeout server 5s
>
> resolvers dns
>     nameserver srv_dns ordered-dns-proxy:53
>
> frontend all
>     bind :80
>     mode http
>     timeout client 120s
>     option forwardfor
>
>     default_backend chat
>
> backend chat
>     balance url_param chatName
>     timeout server 120s
>
>     server srv_chat chat:80 resolvers dns
> ---
>
> But still not working. It is always routed to a different server.
>
> What I want to accomplish is something like.
>
> dig @ordered-dns-proxy chat
>
> always gives me the list:
>
> 10.0.0.11
> 10.0.0.12
> 10.0.0.13
>
> I want to have the connection:
>
> ?chatName=
>
> crc32() % 3 is for example 2
>
> always route to the second server in the dns list.
>
> With my current config this won't happen. The balancer is always going
> to the first.
>
>
>
> 2017-10-11 1:27 GMT+02:00 Igor Cicimov <ig...@encompasscorporation.com
> <mailto:ig...@encompasscorporation.com>>:
> >
> >
> >
> > On Tue, Oct 10, 2017 at 11:25 PM, Ruben <rdoc...@gmail.com
> <mailto:rdoc...@gmail.com>> wrote:
> >>
> >> I have some stateful chat servers (SockJS) running in docker swarm
> mode. When doing dig chat I get an unordered randomized list of
> servers for example:
> >>
> >> (every time the order is different)
> >> 10.0.0.12
> >> 10.0.0.10
> >> 10.0.0.11
> >>
> >> The chat is accessed by a chatName url parameter. Now I want to be
> able to run a chat-load-balancer service in docker with multiple
> replicas using the haproxy image.
> >>
> >> The problem is that docker always resolves to a randomized list
> when doing dig chat.
> >>
> >> I want to map the chatName param from the url to a fixed server
> which always have the same ip from the list of ips of dig chat. So the
> mapping of the url_param should not be based on the position of the
> server in the list, but solely on the ip of the server.
> >>
> >> So for example ?chatName=fun should always route to ip 10.0.0.12,
> no matter what.
> >>
> >> My current haproxy.cfg is:
> >>
> >> defaults
> >>   mode http
> >>   timeout client 5s
> >>   timeout connect 5s
> >>   timeout server 5s
> >>
> >> frontend frontend_chat
> >>   bind :80
> >>   mode http
> >>   timeout client 120s
> >>   option forwardfor
> >>   option http-server-close
> >>   option http-pretend-keepalive
> >>   default_backend backend_chat
> >>
> >> backend backend_chat
> >>   balance url_param chatName
> >>   timeout server 120s
> >>   server chat chat:80
> >>
> >> At the moment it seems that only the Commercial Subscription of
> Nginx can handle these kinds of cases, using the sticky route $variable
> ...; directive in the upstream module.
> >
> >
> > Maybe try:
> >
> > http-request set-header Host 10.0.0.12 if { query -m beg -i
> chatName=fun }
> >

-- 
Moemen MHEDHBI



Re: Haproxy refuses new connections when doing a reload followed by a restart

2017-10-06 Thread Moemen MHEDHBI
Hi Lukas,


On 04/10/2017 22:01, Lukas Tribus wrote:
> I guess the problem is that when a reload happens before a restart and the
> pre-reload systemd-wrapper process is still alive, systemd gets confused
> by that old process and therefore refrains from starting up the new
> instance.
>
> Or systemd doesn't get confused, sends SIGTERM to the old systemd-wrapper
> process as well, but the wrapper doesn't handle SIGTERM after a SIGUSR1
> (a hard stop WHILE we are already gracefully stopping).
>
>
> Should the systemd-wrapper exit after distributing the graceful stop
> message to the processes? I don't think so, it sounds horrible.
>
> Should the systemd-wrapper expect a SIGTERM after a SIGUSR1 and send the
> TERM/INT to its children? I think so, but I'm not 100% sure. Is that even
> the issue?
>
>
>
> We did get rid of the systemd-wrapper in haproxy 1.8-dev, and replaced it
> with a master->worker solution, so I'd say there is a chance that this
> doesn't affect 1.8.
>

A. It appears to me that it is not the wrapper that receives the SIGUSR1
but the haproxy process.

B. Here is how I technically explain the "bug" (to be confirmed by the
devs) reported by Niels:
 - During the reload:
  1. A SIGUSR2 is sent to the systemd-wrapper.
  2. The wrapper sends SIGUSR1 to the haproxy processes listed in the pid file.
  3. A new haproxy process is listening for incoming connections, and the
pid file now contains only the pid of the new process.
 - Then when issuing a restart/stop:
  1. A SIGTERM is sent to the systemd-wrapper.
  2. The wrapper sends SIGTERM to the haproxy processes listed in the pid file.
  3. Only the new haproxy process is stopped; the old one is still there
since it did not receive the SIGTERM.
 - This is why systemd gets confused, and after the timeout systemd gets
done with this by sending a SIGTERM to all child processes
(KillMode=mixed policy).

C. I was able to verify this by doing the following:
 1. After the reload, I manually add the old process pid to the pidfile.
 2. Then when I issue a restart, all processes are stopped correctly.
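
In shell terms the check looks roughly like this (a sketch; the unit
name and pidfile path depend on your setup):

  OLD_PID=$(cat /run/haproxy.pid)      # pid(s) of the running process
  systemctl reload haproxy             # pidfile now only lists the new pid
  echo "$OLD_PID" >> /run/haproxy.pid  # re-add the old pid by hand
  systemctl restart haproxy            # now every process gets the SIGTERM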

So the question is (@William): when doing a soft stop, should we
preserve the old process pid in the pidfile until the process
terminates?

-- 
Moemen MHEDHBI




Re: Haproxy refuses new connections when doing a reload followed by a restart

2017-10-04 Thread Moemen MHEDHBI
Hi Lukas,


On 04/10/2017 18:57, Lukas Tribus wrote:
> Hello Niels,
>
>
> a restart means stopping haproxy - and after haproxy exited completely,
> starting haproxy again. When that happens, haproxy immediately stops
> listening to the sockets and then waits for existing connections to be
> closed (you can accelerate that with hard-stop-after [1], but that's not
> the point).
>
> So what you are seeing is expected behavior when RESTARTING.
I am wondering if this is actually the expected behaviour, and if maybe
restart/stop should just shut down the process and its open connections.
I have made the following tests:
1/ Keep an open connection then do a restart: it works correctly, without
waiting for existing connections to be closed.
2/ Keep an open connection then do a reload + a restart: it will wait for
existing connections to be closed.
So if restart should wait for existing connections to terminate, then 1/
should be fixed; otherwise 2/ should be fixed.

I think it makes more sense to say that restart will not wait for
established connections. Otherwise there would be no difference between
reload and restart, unless there is something else I am not aware of.
If we need to fix 2/, a possible solution would be:
- Set KillMode to "control-group" rather than "mixed" (the current
value) in the systemd unit file.
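
A sketch of such an override (the drop-in path is illustrative):

  # /etc/systemd/system/haproxy.service.d/killmode.conf
  [Service]
  KillMode=control-group

With "control-group", systemd signals every process remaining in the
unit's cgroup, including an old haproxy process missing from the pidfile.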
 
>
> Seems to me you want RELOAD behavior instead, so RELOAD is what Ansible
> should trigger when it detects a config change, no RESTART.
>
Agree

-- 
Moemen MHEDHBI





Re: [PATCH] sc_dec_gpc0?

2017-08-28 Thread Moemen MHEDHBI
Hi Mark,

Probably this wasn't needed at the time, but it makes sense to be able
to decrement gpc0 and not just increment it.
Here is a patch adding sc_dec_gpc0; since gpc0 is a counter, it cannot
be decremented below zero.
If the patch is useful and clean, we can consider merging it.
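
To illustrate the use case from Mark's mail below, usage would look
something like this (the table and ACLs are illustrative):

  backend bk_example
      stick-table type ip size 100k expire 30m store gpc0
      http-request track-sc0 src
      http-request sc-inc-gpc0(0) if { path_beg /type-a }
      http-request sc-dec-gpc0(0) if { path_beg /type-b }

After one request of each type, gpc0 for the tracked entry is back to
its previous value.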


Regards,


On 19/08/2017 14:15, Mark Staudinger wrote:
> Hi Folks,
>
> Probably a question for Willy, but perhaps others worked on this code
> so to the mailing list it goes.
>
> I was curious as to why there is no sc_dec_gpc0 implemented in the
> sample fetch / ACL code.
>
> sc_inc_gpc0 does exist of course, and it's well-documented how it can
> be used to mark an event(s) and use the value as a threshold for ACLs.
>
> I had an idea of using that value as a counterbalance of two types of
> traffic so as to use the gpc0 value as the differential between the two.
>
> Request type A -> sc0_inc_gpc0
> Request type B -> sc0_dec_gpc0
>
> after which two requests, the gpc0 value would remain unchanged from
> the original value.
>
> However I quickly determined that there was no sc_dec_gpc0 feature.
>
> Is there some architectural reason why this would be difficult or
> impractical to do?  Or is it just something that didn't seem
> necessary/useful at the time?
>
> Regards,
> Mark Staudinger
>

-- 
Moemen MHEDHBI

From 66707e9fc90fb2726c8e7dd9f060a52325b780bd Mon Sep 17 00:00:00 2001
From: Moemen MHEDHBI <mmhed...@haproxy.com>
Date: Mon, 28 Aug 2017 17:55:38 +0200
Subject: [PATCH] MINOR: add sc-dec-gpc0 to decrement gpc0 counter.

Since GPC0 is a general purpose counter, it should be possible
to decrement it with sc-dec-gpc0 besides incrementing it with
sc-inc-gpc0. Decrementing GPC0 counter won't update gpc0_rate.
---
 doc/configuration.txt |  59 +
 src/stick_table.c | 118 +-
 2 files changed, 166 insertions(+), 11 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 9f7f9ff..aef7c7a 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -3765,6 +3765,7 @@ http-request { allow | auth [realm <realm>] | redirect <rule> |
   unset-var(<var-name>) |
   { track-sc0 | track-sc1 | track-sc2 } <key> [table <table>] |
   sc-inc-gpc0(<sc-id>) |
+  sc-dec-gpc0(<sc-id>) |
   sc-set-gpt0(<sc-id>) <int> |
   silent-drop |
  }
@@ -4046,6 +4047,11 @@ http-request { allow | auth [realm <realm>] | redirect <rule> |
   designated by <sc-id>. If an error occurs, this action silently fails and
   the actions evaluation continues.
 
+- sc-dec-gpc0(<sc-id>):
+  This action decrements the GPC0 counter according with the sticky counter
+  designated by <sc-id>. If an error occurs, this action silently fails and
+  the actions evaluation continues.
+
 - set-var(<var-name>) <expr> :
   Is used to set the contents of a variable. The variable is declared
   inline.
@@ -4238,6 +4244,7 @@ http-response { allow | deny | add-header <name> <fmt> | set-nice <nice> |
 unset-var(<var-name>) |
 { track-sc0 | track-sc1 | track-sc2 } <key> [table <table>] |
 sc-inc-gpc0(<sc-id>) |
+sc-dec-gpc0(<sc-id>) |
 sc-set-gpt0(<sc-id>) <int> |
 silent-drop |
   }
@@ -4474,6 +4481,11 @@ http-response { allow | deny | add-header <name> <fmt> | set-nice <nice> |
   designated by <sc-id>. If an error occurs, this action silently fails and
   the actions evaluation continues.
 
+- sc-dec-gpc0(<sc-id>):
+  This action decrements the GPC0 counter according with the sticky counter
+  designated by <sc-id>. If an error occurs, this action silently fails and
+  the actions evaluation continues.
+
 - "silent-drop" : this stops the evaluation of the rules and makes the
   client-facing connection suddenly disappear using a system-dependant way
   that tries to prevent the client from being notified. The effect it then
@@ -9069,6 +9081,11 @@ tcp-request connection <action> [{if | unless} <condition>]
 counter designated by <sc-id>. If an error occurs, this action silently
 fails and the actions evaluation continues.
 
+- sc-dec-gpc0(<sc-id>):
+The "sc-dec-gpc0" decrements the GPC0 counter according to the sticky
+counter designated by <sc-id>. If an error occurs, this action silently
+fails and the actions evaluation continues.
+
 - sc-set-gpt0(<sc-id>) <int> :
This action sets the GPT0 tag according to the sticky counter designated
by <sc-id> and the value of <int>. The expected result is a boolean. If
@@ -9228,6 +9245,7 @@ tcp-request content <action> [{if | unless} <condition>]
 - capture <sample> len <length> : the specified sample expression is captured
 - { track-sc0 | track-sc1 | track-sc2 } <key> [table <table>]
 - sc-inc-gpc0(<sc-id>)
+- sc-dec-gpc0(<sc-id>)
 - sc-set-gpt0(<sc-id>) <int>
 - set-var(<var-name>) <expr>
 - unset-var(<var-name>)
@@ -9451,6 +9469,11 @@ tcp-response content <action> [{if | unless} <condition>]
counter designated by <sc-id>. If an error occurs, this action fails
 

Re: explanation of "Backend Limit" I see in the stats page

2017-06-27 Thread Moemen MHEDHBI


On 27/06/2017 08:58, John Cherouvim wrote:
> Thanks a lot for the explanation.
>
>> Here the numbers are almost the same as your "s" server because it is
>> the only server configured in the backend. If there was many, the
>> backend numbers will be around the sum of the server numbers (for
>> each column).
>>   
> Although this is a small detail, is there an explanation for the fact
> that they are not always exactly equal? Is it the case that they'll
> eventually be exactly equal but maybe due to implementation reasons on
> how these counters are held in memory there is a slight difference?
>
> thanks
>
>
The explanation may differ based on the number/metric in question. For
example, the total sessions of the backend may not be equal to the sum
of the total sessions of the servers, because some requests were
aborted before HAProxy chose a backend server.

-- 
Moemen MHEDHBI





Re: explanation of "Backend Limit" I see in the stats page

2017-06-26 Thread Moemen MHEDHBI
Hi John,


On 26/06/2017 13:45, John Cherouvim wrote:
> Hello
>
> I have this configuration for a backend containing a server:
>> backend static_backend
>> option forwardfor
>> server s localhost:80 maxconn 900
>
> On the stats page though I see a "Backend" row which shows almost the
> same numbers for my "s" server. What does this represent?

The Backend row shows the stats for the whole backend. Here the numbers
are almost the same as your "s" server because it is the only server
configured in the backend. If there were many, the backend numbers would
be around the sum of the server numbers (for each column).


> That "Backend" row shows a limit of 200 (marked with red in the photo
> http://i.imgur.com/YWNsStZ.png ). What exactly is that limit and how
> can I change it?
>
> thanks
>
>

The Session Limit in the Backend row refers to the fullconn parameter of
that backend.
The fullconn parameter is used to set a dynamic maxconn on the backend
servers. You can read more about it in the docs:
http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-fullconn

You need to consider the "fullconn" parameter if you have set "minconn"
on your server lines (to use a dynamic maxconn); otherwise you can
ignore it.
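
For example, a minimal sketch (numbers are illustrative):

  backend static_backend
      fullconn 200
      server s localhost:80 minconn 100 maxconn 900

Here the effective limit of "s" grows dynamically from 100 towards 900
as the number of connections on the backend approaches the fullconn of
200.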

Regards,

-- 
Moemen MHEDHBI