Old Github Issue

2021-05-25 Thread Aleksandar Lazic

Hi.

I wanted to clean up some old issues but wasn't able to, because I'm not sure
whether the bugs are still valid, especially for 1.8/1.9 and earlier versions.

https://github.com/haproxy/haproxy/issues?page=10&q=is%3Aissue+is%3Aopen

It would be nice if someone with more knowledge than me could take a look
and close the issues that are no longer relevant or already fixed.

Regards
Alex



Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-05-31 Thread Aleksandar Lazic

Hi.

While trying to get the stream ID from the SPOA, I noticed that there is no
fetch method for the stream ID.

The discussion is here.
https://github.com/criteo/haproxy-spoe-go/issues/28

That's the sid in the SPOE filter log output:
SPOE: [agent-on-http-req]  sid=88 st=0 0/0/0/0/0 1/1 0/0 10/33

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/flt_spoe.c;h=a68f7b9141025963e8f4ad79c0d1617a4c59774e;hb=HEAD#l2815

```
if (ctx->status_code || !(conf->agent_fe.options2 & PR_O2_NOLOGNORM))
        send_log(&conf->agent_fe, (!ctx->status_code ? LOG_NOTICE : LOG_WARNING),
                 "SPOE: [%s] <%s> sid=%u st=%u %ld/%ld/%ld/%ld/%ld %u/%u %u/%u %llu/%llu\n",
                 agent->id, spoe_event_str[ev], s->uniq_id, ctx->status_code,
                                                ^^
                 ctx->stats.t_request, ctx->stats.t_queue, ctx->stats.t_waiting,
                 ctx->stats.t_response, ctx->stats.t_process,
                 agent->counters.idles, agent->counters.applets,
                 agent->counters.nb_sending, agent->counters.nb_waiting,
                 agent->counters.nb_errors, agent->counters.nb_processed);
```

It looks to me like the %rt log format variable holds the stream ID, right?

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=doc/configuration.txt;h=a13a9a77f8a077a6ac798b1dccc8a0f2f3f67396;hb=HEAD#l20576

|   | %rt  | request_counter (HTTP req or TCP session) | numeric |

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/log.c;hb=c5c5bc4e36ce4a6f3bc113c8e16824fdb276c220#l3175
3175 case LOG_FMT_COUNTER: // %rt

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/log.c;hb=c5c5bc4e36ce4a6f3bc113c8e16824fdb276c220#l2202
2202 uniq_id = _HA_ATOMIC_FETCH_ADD(&global.req_count, 1);
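
For illustration, that counter can already be surfaced per request through the configurable unique ID. This is only a sketch; the frontend, backend, and header names are mine, and it assumes a HAProxy version where the "unique-id" sample fetch is available:

```
# Sketch: expose the %rt counter (the stream's uniq_id when a stream exists)
frontend fe
    bind *:8080
    unique-id-format %rt
    http-request set-header X-Stream-Id %[unique-id]
    default_backend be
```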

Regards
Alex



Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-05-31 Thread Aleksandar Lazic

Hi.

On 31.05.21 14:23, Aleksandar Lazic wrote:

Hi.

While trying to get the stream ID from the SPOA, I noticed that there is no
fetch method for the stream ID.


Attached is a patch which adds the sample fetch for the stream ID.
I assume it could be backported up to version 2.0.

Regards
Alex





From 15a2026c495e64d8165a13a3c8a4e5e19ad7e8d6 Mon Sep 17 00:00:00 2001
From: Alexandar Lazic 
Date: Mon, 31 May 2021 21:28:56 +0200
Subject: [PATCH] MINOR: sample: fetch stream_uniq_id

This sample fetch allows getting the Stream ID of the current
session.

---
 doc/configuration.txt  | 13 ++
 reg-tests/sample_fetches/stream_id.vtc | 33 ++
 src/sample.c   | 14 +++
 3 files changed, 55 insertions(+), 5 deletions(-)
 create mode 100644 reg-tests/sample_fetches/stream_id.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 11c38945c..7eb7e29cd 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -17433,11 +17433,6 @@ rand([<range>]) : integer
   needed to take some routing decisions for example, or just for debugging
   purposes. This random must not be used for security purposes.
 
-uuid([<version>]) : string
-  Returns a UUID following the RFC4122 standard. If the version is not
-  specified, a UUID version 4 (fully random) is returned.
-  Currently, only version 4 is supported.
-
 srv_conn([<backend>/]<server>) : integer
   Returns an integer value corresponding to the number of currently established
   connections on the designated server, possibly including the connection being
@@ -17514,6 +17509,9 @@ stopping : boolean
 str(<string>) : string
   Returns a string.
 
+stream_uniq_id : integer
+  Returns the uniq stream id.
+
 table_avl([<table>]) : integer
   Returns the total number of available entries in the current proxy's
   stick-table or in the designated stick-table. See also table_cnt.
@@ -17528,6 +17526,11 @@ thread : integer
   the function, between 0 and (global.nbthread-1). This is useful for logging
   and debugging purposes.
 
+uuid([<version>]) : string
+  Returns a UUID following the RFC4122 standard. If the version is not
+  specified, a UUID version 4 (fully random) is returned.
+  Currently, only version 4 is supported.
+
 var(<var-name>) : undefined
   Returns a variable with the stored type. If the variable is not set, the
   sample fetch fails. The name of the variable starts with an indication
diff --git a/reg-tests/sample_fetches/stream_id.vtc b/reg-tests/sample_fetches/stream_id.vtc
new file mode 100644
index 0..ec512b198
--- /dev/null
+++ b/reg-tests/sample_fetches/stream_id.vtc
@@ -0,0 +1,33 @@
+varnishtest "stream id sample fetch Test"
+
+#REQUIRE_VERSION=2.0
+
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+txresp
+} -start
+
+haproxy h1 -conf {
+defaults
+mode http
+timeout connect 1s
+timeout client  1s
+timeout server  1s
+
+frontend fe
+bind "fd@${fe}"
+http-response 

[PATCH] DOC/MINOR: move uuid in the configuration to the right alphabetical order

2021-05-31 Thread Aleksandar Lazic

Fix alphabetical order of uuid
From bb84a45b848b879f41ab37343b50057323a6ff19 Mon Sep 17 00:00:00 2001
From: Alexandar Lazic 
Date: Tue, 1 Jun 2021 00:27:01 +0200
Subject: [PATCH] DOC/MINOR: move uuid in the configuration to the right
 alphabetical order

This patch can be backported up to 2.1, where the uuid fetch was
introduced.

---
 doc/configuration.txt | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 11c38945c..9264f03ce 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -17433,11 +17433,6 @@ rand([<range>]) : integer
   needed to take some routing decisions for example, or just for debugging
   purposes. This random must not be used for security purposes.
 
-uuid([<version>]) : string
-  Returns a UUID following the RFC4122 standard. If the version is not
-  specified, a UUID version 4 (fully random) is returned.
-  Currently, only version 4 is supported.
-
 srv_conn([<backend>/]<server>) : integer
   Returns an integer value corresponding to the number of currently established
   connections on the designated server, possibly including the connection being
@@ -17528,6 +17523,11 @@ thread : integer
   the function, between 0 and (global.nbthread-1). This is useful for logging
   and debugging purposes.
 
+uuid([<version>]) : string
+  Returns a UUID following the RFC4122 standard. If the version is not
+  specified, a UUID version 4 (fully random) is returned.
+  Currently, only version 4 is supported.
+  
 var(<var-name>) : undefined
   Returns a variable with the stored type. If the variable is not set, the
   sample fetch fails. The name of the variable starts with an indication
-- 
2.25.1



Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-05-31 Thread Aleksandar Lazic

Tim.

On 31.05.21 23:23, Tim Düsterhus wrote:

Aleks,

On 5/31/21 9:35 PM, Aleksandar Lazic wrote:

While trying to get the stream ID from the SPOA, I noticed that there is no
fetch method for the stream ID.


Attached a patch which adds the fetch sample for the stream id.
I assume it could be back ported up to version 2.0


The backporting information should be part of the commit message. But I don't 
think it's going to be backported that far.

Further comments inline.


From 15a2026c495e64d8165a13a3c8a4e5e19ad7e8d6 Mon Sep 17 00:00:00 2001
From: Alexandar Lazic 
Date: Mon, 31 May 2021 21:28:56 +0200
Subject: [PATCH] MINOR: sample: fetch stream_uniq_id

This sample fetch allows getting the Stream ID of the current
session.

---
 doc/configuration.txt  | 13 ++
 reg-tests/sample_fetches/stream_id.vtc | 33 ++
 src/sample.c   | 14 +++
 3 files changed, 55 insertions(+), 5 deletions(-)
 create mode 100644 reg-tests/sample_fetches/stream_id.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 11c38945c..7eb7e29cd 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -17433,11 +17433,6 @@ rand([<range>]) : integer
   needed to take some routing decisions for example, or just for debugging
   purposes. This random must not be used for security purposes.

-uuid([<version>]) : string
-  Returns a UUID following the RFC4122 standard. If the version is not
-  specified, a UUID version 4 (fully random) is returned.
-  Currently, only version 4 is supported.
-


Good catch, but please split moving this around into a dedicated patch (DOC).


Done.


 srv_conn([<backend>/]<server>) : integer
   Returns an integer value corresponding to the number of currently established
   connections on the designated server, possibly including the connection being
@@ -17514,6 +17509,9 @@ stopping : boolean
 str(<string>) : string
   Returns a string.

+stream_uniq_id : integer
+  Returns the uniq stream id.
+


This explanation is not useful to the reader (even I don't understand it).


Hm. Well, it fetches the uniq_id from the stream struct.

http://git.haproxy.org/?p=haproxy.git;a=blob;f=include/haproxy/stream-t.h;h=9499e94d77feea0dad787eb3bd7b6b0375ca0148;hb=HEAD#l120
120 unsigned int uniq_id;   /* unique ID used for the traces */

This is shown in the SPOE log line as sid, and therefore I think it should be
possible to get the same ID within HAProxy via a fetch method.

```
SPOE: [agent-on-http-req]  sid=88 st=0 0/0/0/0/0 1/1 0/0 10/33
```

In the log, this is the "%rt" variable when a stream is available; when no
stream is available, it is "global.req_count".

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/log.c;h=7dabe16f8fa54631f6eab815eb73f77d058d0368;hb=HEAD#l2178

In the doc it is described as request_counter, which is only true when no
stream is available; when a stream is available, %rt is the unique stream ID.

http://git.haproxy.org/?p=haproxy.git;a=blob;f=doc/configuration.txt;h=11c38945c29d2d28c9afb13afed60b30a97069cb;hb=HEAD#l20576
20576   |   | %rt  | request_counter (HTTP req or TCP session) | numeric |

So, yes, I agree it's difficult to describe in the doc for the normal user.

How about this wording:

```
This fetch method returns the internal Stream ID, if a stream is available. The
internal Stream ID is used in several places in HAProxy to trace the Stream
inside HAProxy. It is also used in SPOE as the "sid" value.
```
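
If such a fetch existed, usage would presumably look like this. This is a hypothetical sketch: the stream_uniq_id keyword comes only from the proposed patch (which was later withdrawn in this thread), and the header name is illustrative:

```
# Hypothetical usage of the proposed sample fetch (never merged)
http-request set-header X-Stream-Id %[stream_uniq_id]
```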



 table_avl([<table>]) : integer
   Returns the total number of available entries in the current proxy's
   stick-table or in the designated stick-table. See also table_cnt.
@@ -17528,6 +17526,11 @@ thread : integer
   the function, between 0 and (global.nbthread-1). This is useful for logging
   and debugging purposes.

+uuid([<version>]) : string
+  Returns a UUID following the RFC4122 standard. If the version is not
+  specified, a UUID version 4 (fully random) is returned.
+  Currently, only version 4 is supported.
+
 var(<var-name>) : undefined
   Returns a variable with the stored type. If the variable is not set, the
   sample fetch fails. The name of the variable starts with an indication
diff --git a/src/sample.c b/src/sample.c
index 09c272c48..5d3b06b10 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -4210,6 +4210,18 @@ static int smp_fetch_uuid(const struct arg *args, struct sample *smp, const char
 return 0;
 }

+/* returns the stream uniq_id */
+static int
smp_fetch_stream_uniq_id(const struct arg *args, struct sample *smp, const char *kw, void *private)


I believe the 'static int' should go on the same line.


Well, I copied it from "smp_fetch_cpu_calls", but yes, most of the other fetches
have it on the same line, so I will put it on the same line.


+{
+    if (!smp->strm)
+        return 0;
+
+    smp->data.type = SMP_T_SINT;
+    smp->data.u.sint = smp->strm->uniq_id;
+    return 1;
+}
+
 /* Note: must no

Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-06-01 Thread Aleksandar Lazic
Tim,

Jun 1, 2021 9:50:17 AM Tim Düsterhus :

> Aleks,
>
> On 6/1/21 1:03 AM, Aleksandar Lazic wrote:
>>>>  srv_conn([<backend>/]<server>) : integer
>>>>    Returns an integer value corresponding to the number of currently 
>>>> established
>>>>    connections on the designated server, possibly including the connection 
>>>> being
>>>> @@ -17514,6 +17509,9 @@ stopping : boolean
>>>>  str(<string>) : string
>>>>    Returns a string.
>>>>
>>>> +stream_uniq_id : integer
>>>> +  Returns the uniq stream id.
>>>> +
>>>
>>> This explanation is not useful to the reader (even I don't understand it).
>> […]
>> This is shown on the SPOE log line as sid and therefore I think it should be
>> possible to get the same ID also within HAProxy as fetch method.
>> ```
>> SPOE: [agent-on-http-req]  sid=88 st=0 0/0/0/0/0 1/1 0/0 10/33
>> ```
>> […]
>> ```
>> This fetch method returns the internal Stream ID, if a stream is available. 
>> The
>> internal Stream ID is used in several places in HAProxy to trace the Stream
>> inside HAProxy. It is also used in SPOE as the "sid" value.
>> ```
>>
>
> This phrasing is understandable to me, but now I'm wondering if this is the 
> best solution. Maybe the already existing user-configurable unique request ID 
> should instead be sent to the SPOE and then logged?
>
> https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-unique-id
>
> The request_counter (%rt) you mentioned could be embedded into this unique-id.

Well, this unique-id is not sent as the Stream ID to the SPOA receiver; because
of that, you can't debug which stream is the troubled one.

> Best regards
> Tim Düsterhus

Regards
Alex


Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-06-01 Thread Aleksandar Lazic

On 01.06.21 14:23, Tim Düsterhus wrote:

Aleks,

On 6/1/21 10:30 AM, Aleksandar Lazic wrote:

This phrasing is understandable to me, but now I'm wondering if this is the 
best solution. Maybe the already existing user-configurable unique request ID 
should instead be sent to the SPOE and then logged?

https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-unique-id

The request_counter (%rt) you mentioned could be embedded into this unique-id.


Well, this unique-id is not sent as the Stream ID to the SPOA receiver; because
of that, you can't debug which stream is the troubled one.


Yes, that's why I suggested that the SPOE is extended to also include this 
specific ID somewhere (just) for logging purposes.


Yep.
Any opinions from other community members?
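
One way to sketch Tim's suggestion of embedding the request counter into the configurable unique ID (the format string follows the example given in the HAProxy documentation; the header name is illustrative):

```
# Sketch: embed the %rt request counter in a custom unique ID
unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
unique-id-header X-Unique-ID
```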


Best regards
Tim Düsterhus






Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-06-04 Thread Aleksandar Lazic

On 02.06.21 11:38, Christopher Faulet wrote:

Le 6/1/21 à 8:26 PM, Aleksandar Lazic a écrit :

On 01.06.21 14:23, Tim Düsterhus wrote:

Aleks,

On 6/1/21 10:30 AM, Aleksandar Lazic wrote:
This phrasing is understandable to me, but now I'm wondering if this is the best 
solution. Maybe the already existing user-configurable unique request ID should 
instead be sent to the SPOE and then logged?


https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-unique-id

The request_counter (%rt) you mentioned could be embedded into this unique-id.


Well, this unique-id is not sent as the Stream ID to the SPOA receiver; because
of that, you can't debug which stream is the troubled one.


Yes, that's why I suggested that the SPOE is extended to also include this 
specific ID somewhere (just) for logging purposes.


Yep.
Any opinions from other community members?



The SID provided in the SPOE log message is the one used in the SPOP frame header. 
This way it is possible to match a corresponding log message emitted by the agent.


The "unique-id-format %rt" fixes the issue for me.

Regarding the format of this log message, its original purpose was to diagnose
problems. Instead of adding custom information, I guess the best would be to have
a "log-format" directive, at least so as not to break existing tools parsing those
log messages. But to do so, all parts of the current message must be available
via log variables and/or sample fetches. And, at first glance, that will be hard
to achieve (sample fetches are probably easier, though).


Regarding the stream_uniq_id sample fetch, it is a good idea to add it. 
In fact, when it makes sense, a log variable must also be accessible via a 
sample fetch. Tim's remarks about the patch are valid. For the scope, INTRN or 
L4CLI, I don't know. I'm inclined to choose INTRN.


Let me withdraw my patch, because I use the following configs to satisfy my
requirement.


```
global
log stdout format raw daemon
# daemon
maxconn 2

defaults
log global
mode    http
option  httplog
option  dontlognull
timeout connect 5000
timeout client  5
timeout server  5

frontend haproxynode
bind *:9080
mode http

unique-id-format %rt
http-request set-var(sess.my_fe_path) path
http-request set-var(sess.my_fe_src) src
http-request set-var(sess.my_fe_referer) req.hdr(Referer)
http-request set-var(sess.my_fe_requestedhost) req.hdr(Host)

# define the spoe agents
filter spoe engine agent-on-http-req config resources/haproxy/spoe-url.conf
filter spoe engine agent-on-http-res config resources/haproxy/spoe-url.conf

# map the spoe response to acl variables
# acl authenticated var(sess.allevents.info) -m bool

http-response set-header x-spoe %[var(sess.feevents.info)]
default_backend streams

backend agent-on-http-req
mode tcp
log global

server spoe 127.0.0.1:9000 check

backend agent-on-http-res
mode tcp
log global

server spoe 127.0.0.1:9000 check

backend streams
log global

server socat 127.0.0.1:1234 check
```

```
[agent-on-http-req]
spoe-agent agent-on-http-req

log global

messages agent-on-http-req

option var-prefix feevents

timeout hello  2s
timeout idle   2m
timeout processing 1s

use-backend agent-on-http-req

spoe-message agent-on-http-req
args my_path=path my_src=src my_referer=req.hdr(Referer) my_sid=unique-id my_req_host=req.hdr(Host)
event on-frontend-http-request

[agent-on-http-res]
spoe-agent agent-on-http-res

log global

messages agent-on-http-res

option var-prefix feevents

timeout hello  2s
timeout idle   2m
timeout processing 1s

use-backend agent-on-http-res

spoe-message agent-on-http-res
args my_path=var(sess.my_fe_path) my_src=src my_referer=var(sess.my_fe_referer) my_sid=unique-id my_req_host=var(sess.my_fe_requestedhost)
event on-http-response
```



Re: Proxy Protocol - any browser proxy extensions that support ?

2021-06-04 Thread Aleksandar Lazic

On 04.06.21 21:32, Jim Freeman wrote:

https://developer.chrome.com/docs/extensions/reference/proxy/
supports SOCKS4/SOCKS5

Does anyone know of any in-browser VPN/proxy extensions that support
Willy's Proxy Protocol ?
https://www.haproxy.com/blog/haproxy/proxy-protocol/ enumerates some
of the state of support, but doesn't touch on browser VPN/proxy
extensions, and my due-diligence googling is coming up short ...


Well not a real browser but a Swedish army knife :-)

https://github.com/curl/curl/commit/6baeb6df35d24740c55239f24b5fc4ce86f375a5

`haproxy-protocol`


Thanks,
...jfree






[PATCH] DOC: use the req.ssl_sni in examples

2021-06-05 Thread Aleksandar Lazic

Hi.

This patch fixes the usage of req_ssl_sni in the doc.

Is there any plan to remove the old keyword, or to add a warning that this
keyword is deprecated?

Regards
Alex
From 84fe0fa89548c384322f47bc3eb37ea9843d0eb8 Mon Sep 17 00:00:00 2001
From: Alex 
Date: Sat, 5 Jun 2021 13:23:08 +0200
Subject: [PATCH] DOC: use the req.ssl_sni in examples

This patch should be backported to at least 2.0
---
 doc/configuration.txt | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 6b7cc2666..5b1768e89 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -13228,16 +13228,16 @@ use-server <server> unless <condition>
   The "use-server" statement works both in HTTP and TCP mode. This makes it
   suitable for use with content-based inspection. For instance, a server could
   be selected in a farm according to the TLS SNI field when using protocols with
-  implicit TLS (also see "req_ssl_sni"). And if these servers have their weight
+  implicit TLS (also see "req.ssl_sni"). And if these servers have their weight
   set to zero, they will not be used for other traffic.
 
   Example :
  # intercept incoming TLS requests based on the SNI field
- use-server www if { req_ssl_sni -i www.example.com }
+ use-server www if { req.ssl_sni -i www.example.com }
  server www 192.168.0.1:443 weight 0
- use-server mail if { req_ssl_sni -i mail.example.com }
+ use-server mail if { req.ssl_sni -i mail.example.com }
  server mail 192.168.0.1:465 weight 0
- use-server imap if { req_ssl_sni -i imap.example.com }
+ use-server imap if { req.ssl_sni -i imap.example.com }
  server imap 192.168.0.1:993 weight 0
  # all the rest is forwarded to this server
  server  default 192.168.0.2:443 check
@@ -18727,7 +18727,7 @@ ssl_fc_sni : string
   matching the HTTPS host name (253 chars or less). The SSL library must have
   been built with support for TLS extensions enabled (check haproxy -vv).
 
-  This fetch is different from "req_ssl_sni" above in that it applies to the
+  This fetch is different from "req.ssl_sni" above in that it applies to the
   connection being deciphered by HAProxy and not to SSL contents being blindly
   forwarded. See also "ssl_fc_sni_end" and "ssl_fc_sni_reg" below. This
   requires that the SSL library is built with support for TLS extensions
@@ -18998,13 +18998,13 @@ req_ssl_sni : string (deprecated)
   the example below. See also "ssl_fc_sni".
 
   ACL derivatives :
-req_ssl_sni : exact string match
+req.ssl_sni : exact string match
 
   Examples :
  # Wait for a client hello for at most 5 seconds
  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }
- use_backend bk_allow if { req_ssl_sni -f allowed_sites }
+ use_backend bk_allow if { req.ssl_sni -f allowed_sites }
  default_backend bk_sorry_page
 
 req.ssl_st_ext : integer
-- 
2.25.1
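
For reference, the patched documentation examples assemble into this complete snippet (backend names taken from the doc example itself):

```
# Wait for a client hello for at most 5 seconds, then route on SNI
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
use_backend bk_allow if { req.ssl_sni -f allowed_sites }
default_backend bk_sorry_page
```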



Weird behavior of spoe between http and https requests

2021-06-11 Thread Aleksandar Lazic

Hi.

I use HAProxy 2.4 with this frontend config.

```
global
log stdout format raw daemon
daemon
maxconn 2
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s

tune.ssl.default-dh-param 2048

# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private


# See https://ssl-config.mozilla.org/#server=haproxy&version=2.1&config=old&openssl=1.1.1d&guideline=5.4
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options no-tls-tickets ssl-min-ver TLSv1.0

ssl-default-server-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
ssl-default-server-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-server-options no-tls-tickets ssl-min-ver TLSv1.0


defaults http
  log global
  mode http
  retry-on all-retryable-errors
  option forwardfor
  option redispatch
  option http-ignore-probes
  option httplog
  option dontlognull
  option log-health-checks
  option socket-stats
  timeout connect 5s
  timeout client  50s
  timeout server  50s
  http-reuse safe
  errorfile 400 /etc/haproxy/errors/400.http
  errorfile 403 /etc/haproxy/errors/403.http
  errorfile 408 /etc/haproxy/errors/408.http
  errorfile 500 /etc/haproxy/errors/500.http
  errorfile 502 /etc/haproxy/errors/502.http
  errorfile 503 /etc/haproxy/errors/503.http
  errorfile 504 /etc/haproxy/errors/504.http

frontend http-in
  bind *:80
  mode http

  unique-id-format %rt
  http-request set-var(sess.my_fe_path) path
  http-request set-var(sess.my_fe_src) src
  http-request set-var(sess.my_fe_referer) req.hdr(Referer)
  http-request set-var(sess.my_fe_requestedhost) req.hdr(Host)

  # define the spoe agents
  filter spoe engine agent-on-http-req config /etc/haproxy/spoe-url.conf
  filter spoe engine agent-on-http-res config /etc/haproxy/spoe-url.conf

frontend https-in

bind :::443 v4v6 alpn h2,http/1.1 ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/

  unique-id-format %rt
  http-request set-var(sess.my_fe_path) path
  http-request set-var(sess.my_fe_src) src
  http-request set-var(sess.my_fe_referer) req.hdr(Referer)
  http-request set-var(sess.my_fe_requestedhost) req.hdr(Host)

  # define the spoe agents
  filter spoe engine agent-on-http-req config /etc/haproxy/spoe-url.conf
  filter spoe engine agent-on-http-res config /etc/haproxy/spoe-url.conf
```

And with this spoe config.
```
[agent-on-http-req]
spoe-agent agent-on-http-req

log global

messages agent-on-http-req

option var-prefix feevents

timeout hello  2s
timeout idle   2m
timeout processing 1s

use-backend agent-on-http-req

spoe-message agent-on-http-req
args my_path=path my_src=src my_referer=req.hdr(Referer) my_sid=unique-id my_req_host=req.hdr(Host)
event on-frontend-http-request

[agent-on-http-res]
spoe-agent agent-on-http-res

log global

messages agent-on-http-res

option var-prefix feevents

timeout hello  2s
timeout idle   2m
timeout processing 1s

use-backend agent-on-http-res

spoe-message agent-on-http-res
args my_path=var(sess.my_fe_path) my_src=src my_referer=var(sess.my_fe_referer) my_sid=unique-id my_req_host=var(sess.my_fe_requestedhost)
event on-http-response
```

Now when I make an HTTP request I get all values and args.
```
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 2021/06/11 16:01:01 Msg Name  :agent-on-http-req:
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 2021/06/11 16:01:01 Msg Count :5:
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 2021/06/11 16:01:01 Arg Name  :my_path:
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 2021/06/11 16:01:01 Arg Value :/test:
Jun 11 16:01:01 reg

Re: Weird behavior of spoe between http and https requests

2021-06-11 Thread Aleksandar Lazic

Hi.

On 11.06.21 18:07, Aleksandar Lazic wrote:

Hi.

I use haproxy 2.4 with this fe config.

```
global
     log stdout format raw daemon
     daemon
     maxconn 2
     stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd 
listeners
     stats timeout 30s

     tune.ssl.default-dh-param 2048

     # Default SSL material locations
     ca-base /etc/ssl/certs
     crt-base /etc/ssl/private


     # See 
https://ssl-config.mozilla.org/#server=haproxy&version=2.1&config=old&openssl=1.1.1d&guideline=5.4
     ssl-default-bind-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
     ssl-default-bind-ciphersuites 
TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
     ssl-default-bind-options no-tls-tickets ssl-min-ver TLSv1.0

     ssl-default-server-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
     ssl-default-server-ciphersuites 
TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
     ssl-default-server-options no-tls-tickets ssl-min-ver TLSv1.0


defaults http
   log global
   mode http
   retry-on all-retryable-errors
   option forwardfor
   option redispatch
   option http-ignore-probes
   option httplog
   option dontlognull
   option log-health-checks
   option socket-stats
   timeout connect 5s
   timeout client  50s
   timeout server  50s
   http-reuse safe
   errorfile 400 /etc/haproxy/errors/400.http
   errorfile 403 /etc/haproxy/errors/403.http
   errorfile 408 /etc/haproxy/errors/408.http
   errorfile 500 /etc/haproxy/errors/500.http
   errorfile 502 /etc/haproxy/errors/502.http
   errorfile 503 /etc/haproxy/errors/503.http
   errorfile 504 /etc/haproxy/errors/504.http

frontend http-in
   bind *:80
   mode http

   unique-id-format %rt
   http-request set-var(sess.my_fe_path) path
   http-request set-var(sess.my_fe_src) src
   http-request set-var(sess.my_fe_referer) req.hdr(Referer)
   http-request set-var(sess.my_fe_requestedhost) req.hdr(Host)

   # define the spoe agents
   filter spoe engine agent-on-http-req config /etc/haproxy/spoe-url.conf
   filter spoe engine agent-on-http-res config /etc/haproxy/spoe-url.conf

frontend https-in

   bind :::443 v4v6 alpn h2,http/1.1 ssl ca-file 
/etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/

   unique-id-format %rt
   http-request set-var(sess.my_fe_path) path
   http-request set-var(sess.my_fe_src) src
   http-request set-var(sess.my_fe_referer) req.hdr(Referer)
   http-request set-var(sess.my_fe_requestedhost) req.hdr(Host)

   # define the spoe agents
   filter spoe engine agent-on-http-req config /etc/haproxy/spoe-url.conf
   filter spoe engine agent-on-http-res config /etc/haproxy/spoe-url.conf
```

And with this spoe config.
```
[agent-on-http-req]
spoe-agent agent-on-http-req

     log global

     messages agent-on-http-req

     option var-prefix feevents

     timeout hello  2s
     timeout idle   2m
     timeout processing 1s

     use-backend agent-on-http-req

spoe-message agent-on-http-req
     args my_path=path my_src=src my_referer=req.hdr(Referer) my_sid=unique-id my_req_host=req.hdr(Host)
     event on-frontend-http-request

[agent-on-http-res]
spoe-agent agent-on-http-res

     log global

     messages agent-on-http-res

     option var-prefix feevents

     timeout hello  2s
     timeout idle   2m
     timeout processing 1s

     use-backend agent-on-http-res

spoe-message agent-on-http-res
     args my_path=var(sess.my_fe_path) my_src=src my_referer=var(sess.my_fe_referer) my_sid=unique-id my_req_host=var(sess.my_fe_requestedhost)
     event on-http-response
```

Now when I make a http request I get all values and args.
```
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 2021/06/11 16:01:01 Msg Name :agent-on-http-req:
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 2021/06/11 16:01:01 Msg Count :5:
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 202

Line 47 in src/queue.c "s * queue's lock."

2021-06-24 Thread Aleksandar Lazic

Hi.

When someone works on src/queue.c again, could this typo be fixed?

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/queue.c;h=6d3aa9a12bcd6078d1b5a76969da4104a6adb1bd;hb=HEAD#l47

```
  44  *   - a pendconn_add() is only performed by the stream which will own the
  45  * pendconn ; the pendconn is allocated at this moment and returned ; 
it is
  46  * added to either the server or the proxy's queue while holding this
  47 s * queue's lock.
  48  *
```

Regards
Alex



Re: Proposal about new default SSL log format

2021-07-03 Thread Aleksandar Lazic

Hi Remi.

On 02.07.21 16:26, Remi Tricot-Le Breton wrote:

Hello list,

Some work is ongoing to ease connection error and SSL handshake error logging.
This will rely on some new sample fetches that could be added to a custom
log-format string.
In order to ease SSL logging and debugging, we will also add a new default log
format for SSL connections. Now is the right time to find the best format
for everyone.
The proposed format looks like the HTTP one, to which the SSL-specific
information is added. But if anybody sees missing information that could be
beneficial for everybody, feel free to mention it; nothing is set in stone yet.

The format would look like this :
     >>> Jul  1 18:11:31 haproxy[143338]: 127.0.0.1:37740 
[01/Jul/2021:18:11:31.517] \
   ssl_frontend~ ssl_frontend/s2 0/0/0/7/+7 \
   0/0/0/0 2750  1/1/1/1/0 0/0 TLSv1.3 TLS_AES_256_GCM_SHA384

   Field   Format    Extract from the example above
   1   process_name '[' pid ']:'   haproxy[143338]:
   2   client_ip ':' client_port127.0.0.1:37740
   3   '[' request_date ']'  [01/Jul/2021:18:11:31.517]
   4   frontend_name  ssl_frontend~
   5   backend_name '/' server_name ssl_frontend/s2
   6   TR '/' Tw '/' Tc '/' Tr '/' Ta*   0/0/0/7/+7
   7 *conn_status '/' SSL hsk error '/' SSL vfy '/' SSL CA vfy* 0/0/0/0
   8 bytes_read*   2750
   9 termination_state 
  10   actconn '/' feconn '/' beconn '/' srv_conn '/' retries*    1/1/1/1/0
  11   srv_queue '/' backend_queue  0/0
  12 *ssl_version*  TLSv1.3
  13 *ssl_ciphers*   TLS_AES_256_GCM_SHA384


The equivalent log-format string would be the following :
     "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta \
%[conn_err_code]/%[ssl_fc_hsk_err]/%[ssl_c_err]/%[ssl_c_ca_err] \
         %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq %sslv %sslc"

The fields in bold are the SSL specific ones and the statuses ones will come
from a not yet submitted code so the names and format might slightly change.

Feel free to suggest any missing data, which could come from log-format
specific fields or already existing sample fetches.


How about combining ssl_version/ssl_ciphers into one field?

It would also be helpful to see the backend status.
Maybe add 14th and 15th fields along these lines:

*backend_name '/' conn_status '/' SSL hsk error '/' SSL vfy '/' SSL CA vfy*
*backend_name '/' ssl_version '/' ssl_ciphers*

In the past I had several issues where the backend CA wasn't in the CA file,
which was quite difficult to debug.

+1 to the suggestion from Илья Шипицин to use ISO 8601, which has been in
HAProxy since 2.1-dev2 (2019/10/01).

I haven't found a sub-second format parameter in the strftime call, therefore
I assume the ".00" in the call is a fixed value.

```
strftime(iso_time_str, sizeof(iso_time_str), "%Y-%m-%dT%H:%M:%S.00+00:00", &tm)
```

Maybe another option is to use TAI for timestamps.

https://en.wikipedia.org/wiki/International_Atomic_Time
https://cr.yp.to/proto/utctai.html
http://www.madore.org/~david/computers/unix-leap-seconds.html


Thanks

Rémi


Jm2c

Alex



Re: Proposal about new default SSL log format

2021-07-03 Thread Aleksandar Lazic

On 03.07.21 13:27, Илья Шипицин wrote:



Sat, 3 Jul 2021 at 16:22, Aleksandar Lazic wrote:

Hi Remi.

On 02.07.21 16:26, Remi Tricot-Le Breton wrote:
 > Hello list,
 >
 > Some work in ongoing to ease connection error and SSL handshake error 
logging.
 > This will rely on some new sample fetches that could be added to a custom
 > log-format string.
 > In order to ease SSL logging and debugging, we will also add a new 
default log
 > format for SSL connections. Now is then the good time to find the best 
format
 > for everyone.
 > The proposed format looks like the HTTP one to which the SSL specific
 > information is added. But if anybody sees a missing information that 
could be
 > beneficial for everybody, feel free to tell it, nothing is set in stone 
yet.
 >
 > The format would look like this :
 >      >>> Jul  1 18:11:31 haproxy[143338]: 127.0.0.1:37740 [01/Jul/2021:18:11:31.517] \
 >    ssl_frontend~ ssl_frontend/s2 0/0/0/7/+7 \
 >    0/0/0/0 2750  1/1/1/1/0 0/0 TLSv1.3 TLS_AES_256_GCM_SHA384
 >
 >    Field   Format    Extract from the example above
 >    1   process_name '[' pid ']:'   haproxy[143338]:
 >    2   client_ip ':' client_port   127.0.0.1:37740
 >    3   '[' request_date ']'   [01/Jul/2021:18:11:31.517]
 >    4   frontend_name   ssl_frontend~
 >    5   backend_name '/' server_name   ssl_frontend/s2
 >    6   TR '/' Tw '/' Tc '/' Tr '/' Ta*   0/0/0/7/+7
 >    7 *conn_status '/' SSL hsk error '/' SSL vfy '/' SSL CA vfy* 0/0/0/0
 >    8 bytes_read*   2750
 >    9 termination_state
 >   10   actconn '/' feconn '/' beconn '/' srv_conn '/' retries*    1/1/1/1/0
 >   11   srv_queue '/' backend_queue    0/0
 >   12 *ssl_version*   TLSv1.3
 >   13 *ssl_ciphers*   TLS_AES_256_GCM_SHA384
 >
 >
 > The equivalent log-format string would be the following :
 >      "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta \
 > %[conn_err_code]/%[ssl_fc_hsk_err]/%[ssl_c_err]/%[ssl_c_ca_err] \
 >          %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq %sslv %sslc"
 >
 > The fields in bold are the SSL specific ones and the statuses ones will 
come
 > from a not yet submitted code so the names and format might slightly 
change.
 >
 > Feel free to suggest any missing data, which could come from log-format
 > specific fields or already existing sample fetches.

How about to combine ssl_version/ssl_ciphers in one line.

It would be helpful to see also the backend status.
Maybe add a 14th and 15th line with following fields

*backend_name '/' conn_status '/' SSL hsk error '/' SSL vfy '/' SSL CA vfy*
*backend_name '/' ssl_version '/' ssl_ciphers*

I had in the past several issues with the backend where the backend CA 
wasn't in the CA File which
was quite difficult to debug.

+1 to the suggestion from Илья Шипицин to use iso8601 which is already in 
haproxy since
2019/10/01:2.1-dev2.

I haven't found sub second format parameter in strftime call therefore I 
assume the strftime call
have this
".00" as fix value.

```
strftime(iso_time_str, sizeof(iso_time_str), "%Y-%m-%dT%H:%M:%S.00+00:00", 
&tm)
```

Maybe another option is to use TAI for timestamps.


many analysis tools, for example Microsoft LogParser, ClickHouse, can perform 
queries right on top
of TSV files with iso8601 time.


Agree.
The output could be a TSV; and to get sub-second information, TAI could be used.

https://en.wikipedia.org/wiki/International_Atomic_Time 
https://cr.yp.to/proto/utctai.html 
http://www.madore.org/~david/computers/unix-leap-seconds.html 


 > Thanks
 >
 > Rémi

Jm2c

Alex






Re: Long broken option http_proxy: should we kill it ?

2021-07-08 Thread Aleksandar Lazic

On 08.07.21 18:33, Willy Tarreau wrote:

Hi all,

Amaury discovered that "option http_proxy" was broken. I quickly checked
when it started, and it got broken with the introduction of HTX in 1.9
three years ago. It still used to work in legacy mode in 1.9 and 2.0
but 2.0 uses HTX by default and legacy disappeared from 2.1. Thus to
summarize it, no single version emitted during the last 2.5 years saw it
working.

As such I was considering removing it from 2.5 without prior deprecation.
My opinion is that something that doesn't work for 2.5 years and that
triggers no single report is a sufficient indicator of non-use. We'll
still need to deploy reasonable efforts to see under what conditions it
can be fixed and the fix backported, of course. Does anyone object to
this ?

For a bit of background, this option was added 14 years ago to extract
an IP address and a port from an absolute URI, rewrite it to relative
and forward the request to the original IP:port, thus acting like a
non-resolving proxy. Nowadays one could probably achieve the same
by doing something such as the following:

 http-request set-dst url_ip
 http-request set-dst-port url_port
 http-request set-uri %[path]

And it could even involve the do_resolve() action to resolve names to
addresses. That's why I'm in favor of not even trying to keep this one
further.
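A sketch of the do_resolve() variant mentioned above, under the assumption of a resolvers section named mydns and a txn.dstip variable (both names are made up for illustration):

```
resolvers mydns
    nameserver dns1 192.0.2.53:53

frontend fwd-proxy
    mode http
    bind :3128
    # resolve the Host header, then rewrite and forward like the old option did
    http-request do-resolve(txn.dstip,mydns,ipv4) hdr(Host),lower
    http-request set-dst var(txn.dstip)
    http-request set-dst-port url_port
    http-request set-uri %[path]
    default_backend fwd

backend fwd
    mode http
    # 0.0.0.0:0 means "use the destination address set by set-dst"
    server clear 0.0.0.0:0
```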


+1 to remove


Thanks,
Willy






Re: Long broken option http_proxy: should we kill it ?

2021-07-10 Thread Aleksandar Lazic

On 08.07.21 19:44, Aleksandar Lazic wrote:

On 08.07.21 18:33, Willy Tarreau wrote:

Hi all,

Amaury discovered that "option http_proxy" was broken. I quickly checked
when it started, and it got broken with the introduction of HTX in 1.9
three years ago. It still used to work in legacy mode in 1.9 and 2.0
but 2.0 uses HTX by default and legacy disappeared from 2.1. Thus to
summarize it, no single version emitted during the last 2.5 years saw it
working.

As such I was considering removing it from 2.5 without prior deprecation.
My opinion is that something that doesn't work for 2.5 years and that
triggers no single report is a sufficient indicator of non-use. We'll
still need to deploy reasonable efforts to see under what conditions it
can be fixed and the fix backported, of course. Does anyone object to
this ?

For a bit of background, this option was added 14 years ago to extract
an IP address and a port from an absolute URI, rewrite it to relative
and forward the request to the original IP:port, thus acting like a
non-resolving proxy. Nowadays one could probably achieve the same
by doing something such as the following:

 http-request set-dst url_ip
 http-request set-dst-port url_port
 http-request set-uri %[path]

And it could even involve the do_resolve() action to resolve names to
addresses. That's why I'm in favor of not even trying to keep this one
further.


+1 to remove


Funny part: there was a question on SO about this topic ;-)

https://stackoverflow.com/questions/68321275/unable-to-implement-haproxy-as-forward-proxy-for-https


Thanks,
Willy









FYI: kubernetes api deprecation in 1.22

2021-07-16 Thread Aleksandar Lazic

Hi.

FYI: 1.22 has some changes which also impact Ingress and Endpoints.

https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22

Regards
Alex



Re: FYI: kubernetes api deprecation in 1.22

2021-07-16 Thread Aleksandar Lazic

On 16.07.21 10:27, Илья Шипицин wrote:

I wonder if Kubernetes has some sort of ingress compliance test, or is it up
to the ingress itself?


Yes, there is such a thing but I never used it.
https://github.com/kubernetes-sigs/ingress-controller-conformance


On Fri, Jul 16, 2021, 1:21 PM Aleksandar Lazic wrote:

Hi.

FYI: 1.22 has some changes which also impact Ingress and Endpoints.

https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22

Regards
Alex






Re: Help

2021-07-16 Thread Aleksandar Lazic

Hi.

On 16.07.21 14:34, Anilton Silva Fernandes wrote:

Hi there…

Can I get some more help?

This time, I want to receive a request and check the URL to know which backend
should be called.

This is my config:

frontend web_accounts
     mode tcp
     bind 10.15.1.12:443
     default_backend accounts_servers

frontend web_apimanager
     mode tcp
     bind 10.15.1.13:443

     use_backend apiservices if { path_beg /api/ }     # IF THERE'S API IN THE URL, SEND TO APISERVICES
     use_backend apimanager unless { path_beg /api }   # IF THERE'S NO API, SEND IT TO APIMANAGER


This is not possible with TCP mode.
You have to switch to HTTP mode.

Such an example, along with more about HAProxy ACLs, is documented in this blog post:

https://www.haproxy.com/blog/introduction-to-haproxy-acls/
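For illustration, a minimal sketch of the frontend switched to HTTP mode (the certificate path is an assumption; TLS must be terminated on the frontend so the path becomes visible):

```
frontend web_apimanager
    mode http
    bind 10.15.1.13:443 ssl crt /etc/ssl/haproxy/site.pem
    use_backend apiservices if { path_beg /api/ }
    default_backend apimanager
    # the backends referenced here must be switched to "mode http" as well
```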


backend accounts_servers
    mode tcp
    balance roundrobin
    server  accounts1 10.16.18.128:443 check

backend apimanager
    mode tcp
    balance roundrobin
    server  apimanager1 10.16.18.129:9445 check

backend apiservices
    mode tcp
    balance roundrobin
    server  apimanagerqa.cvt.cv 10.16.18.129:8245 check

Thank you

*From:*Emerson Gomes [mailto:emerson.go...@gmail.com]
*Sent:* 7 de julho de 2021 12:34
*To:* Anilton Silva Fernandes 
*Cc:* haproxy@formilux.org
*Subject:* Re: Help

Hello Anilton,

In the "bind *:443" line, do not specify a PEM file directly, but only the 
directory where your PEM file(s) resides.

Also, make sure that both the certificate and private key are contained within 
the same PEM file.

It should look like this:

-BEGIN CERTIFICATE-
    xxx
-END CERTIFICATE-
-BEGIN PRIVATE KEY-
   xxx
-END PRIVATE KEY-

BR.,

Emerson

On Wed, 7 Jul 2021 at 14:47, Anilton Silva Fernandes wrote:

Hi there.

Can I get some help from you.

I'm configuring HAProxy as a frontend on HTTPS with a certificate, and I want
clients to be redirected to the backend on HTTPS as well (443), but I want
clients to see only the HAProxy certificate, as the backend one is not valid.

Below is the schematic of my design:

So, on

This is the configuration file I’m using:




```
frontend haproxy
    mode http
    bind *:80
    bind *:443 ssl crt /etc/ssl/cvt.cv/accounts_cvt.pem
    default_backend wso2

backend wso2
    mode http
    option forwardfor
    redirect scheme https if !{ ssl_fc }
    server my-api 10.16.18.128:443 check ssl verify none
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
```



```
frontend web_accounts
    mode tcp
    bind 192.168.1.214:443
    default_backend accounts_servers

frontend web_apimanager
    mode tcp
    bind 192.168.1.215:443
    default_backend apimanager_servers

backend accounts_servers
    balance roundrobin
    server accounts1 10.16.18.128:443 check
    server accounts2 10.16.18.128:443 check

backend apimanager_servers
    balance roundrobin
    server accounts1 10.16.18.128:443 check
    server accounts2 10.16.18.128:443 check
```





The first one works, but we got SSL problems due to invalid certificates on
the backend;

The second one is what we would like, but it does not work and reports some errors:

[ALERT] 187/114337 (7823) : parsing [/etc/haproxy/haproxy.cfg:85] : 'bind *:443' 
: unable to load SSL private key from PEM file '/etc/ssl/cvt.cv/accounts_cvt.pem 
'.

[ALERT] 187/114337 (7823) : Error(s) found in configuration file : 
/etc/haproxy/haproxy.cfg

[ALERT] 187/114337 (7823) : Proxy 'haproxy': no SSL certificate specified 
for bind '*:443' at [/etc/haproxy/haproxy.cfg:85] (use 'crt').

[ALERT] 187/114337 (7823) : Fatal errors found in configuration.

Errors in configuration file, check with haproxy check.

This is on CentOS 6

Thank you

Best regards


*Anilton Fernandes | Plataformas, Sistemas e Infraestruturas*

Cabo Verde Telecom, SA

Group Cabo Verde Telecom

Rua Cabo Verde Telecom, 1, Edificio CVT

198, Praia, Santiago, República de Cabo Verde

Phone: +238 3503934 | Mobile: +238 9589123 | Email – anilton.fernan...@cvt.cv 








[WARNING] (1) : We generated two equal cookies for two different servers.

2021-08-09 Thread Aleksandar Lazic

Hi.

We use the HAProxy 2.4 image, which now contains HAProxy 2.4.2.
https://hub.docker.com/layers/haproxy/library/haproxy/2.4/images/sha256-d5e2a5261d6367c31c8ce9b2e692fe67237bdc29f37f2e153d346e8b0dc7c13b?context=explore

I get this message for dynamic cookies.

```
[WARNING]  (1) : We generated two equal cookies for two different servers.
Please change the secret key for 'my-haproxy'.
```

But from my point of view, with server-template and dynamic-cookie-key this
message makes no sense, or am I wrong?

Here the full haproxy config.

```
global
daemon
log 127.0.0.1:8514 local1 debug
maxconn 1

resolvers azure-dns
  accepted_payload_size 65535
  nameserver ocpresolver tcp@172.30.0.10:53
  resolve_retries   3
  timeout resolve   1s
  timeout retry 1s
  hold other   30s
  hold refused 30s
  hold nx  30s
  hold timeout 30s
  hold valid   10s
  hold obsolete30s

defaults
  mode http
  log global
  timeout connect 10m
  timeout client  1h
  timeout server  1h
  log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %si %sp %H %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"

frontend stats
  bind *:9000
  stats enable
  stats uri /stats
  stats refresh 10s
  stats admin if LOCALHOST

listen my-haproxy
  bind :"8080" ssl crt /mnt/haproxy/certs/default.pem
  cookie PHPSESSID insert indirect nocache dynamic
  dynamic-cookie-key testphrase
  balance roundrobin

  server-template my 20 my-cloud-service.my-namespace.svc.cluster.local:29099 resolvers azure-dns check
```

Regards
Alex



Re: [WARNING] (1) : We generated two equal cookies for two different servers.

2021-08-11 Thread Aleksandar Lazic

On 11.08.21 09:04, Willy Tarreau wrote:

Hi Aleks,

On Mon, Aug 09, 2021 at 06:40:29PM +0200, Aleksandar Lazic wrote:

Hi.

We use the HAProxy 2.4 image which have now HAProxy 2.4.2.
https://hub.docker.com/layers/haproxy/library/haproxy/2.4/images/sha256-d5e2a5261d6367c31c8ce9b2e692fe67237bdc29f37f2e153d346e8b0dc7c13b?context=explore

I get this message for dynamic cookies.

```
[WARNING]  (1) : We generated two equal cookies for two different servers.
Please change the secret key for 'my-haproxy'.
```

But from my point of view, with server-template and dynamic-cookie-key this
message makes no sense, or am I wrong?


The problem is that when using dynamic cookies, the dynamic-cookie-key,
the server's IP, and its port are hashed together to generate a fixed
cookie value that will be stable across a cluster of haproxy LBs, but
hashes are never without collisions despite being 64-bit, and here you
apparently faced one. Given how unlikely it is, I suspect that the issue
in fact is that you might have multiple servers on the same address.
Maybe just during some DNS transitions. If that's the case, maybe we
should improve the collision check to only report it if it happens for
servers with different addresses.


Well, not the same IP, but quite similar.
Your explanation could be the reason for the warning.

```
dig cloud-service.namespace.svc.cluster.local

cloud-service.namespace.svc.cluster.local. 5IN A 10.128.2.111
cloud-service.namespace.svc.cluster.local. 5IN A 10.128.2.112
cloud-service.namespace.svc.cluster.local. 5IN A 10.128.2.113
cloud-service.namespace.svc.cluster.local. 5IN A 10.128.2.114
cloud-service.namespace.svc.cluster.local. 5IN A 10.128.2.115
cloud-service.namespace.svc.cluster.local. 5IN A 10.129.9.83
cloud-service.namespace.svc.cluster.local. 5IN A 10.129.9.84
cloud-service.namespace.svc.cluster.local. 5IN A 10.129.9.85
cloud-service.namespace.svc.cluster.local. 5IN A 10.129.9.86
cloud-service.namespace.svc.cluster.local. 5IN A 10.129.9.87
cloud-service.namespace.svc.cluster.local. 5IN A 10.131.4.233
cloud-service.namespace.svc.cluster.local. 5IN A 10.131.4.234
cloud-service.namespace.svc.cluster.local. 5IN A 10.131.4.235
cloud-service.namespace.svc.cluster.local. 5IN A 10.131.4.236
cloud-service.namespace.svc.cluster.local. 5IN A 10.131.4.237
```


Willy






Clarification about http-reuse

2021-08-17 Thread Aleksandar Lazic

Hi.

In the doc is this part

http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#4-http-reuse

```
By default, a connection established between HAProxy and the backend server
which is considered safe for reuse is moved back to the server's idle
connections pool so that any other request can make use of it. This is the
"safe" strategy below.
```

and in the code this.

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/cfgparse.c;hb=2883fcf65bc09d4acf25561bcd955c6ca27c0438#l3424


```
3424 if ((curproxy->mode != PR_MODE_HTTP) && 
(curproxy->options & PR_O_REUSE_MASK) != PR_O_REUSE_NEVR)
3425 curproxy->options &= ~PR_O_REUSE_MASK;
```

Does this mean that even when no "http-reuse ..." is set, "http-reuse safe"
will be set on the proxy?

Regards
Alex



Re: Clarification about http-reuse

2021-08-18 Thread Aleksandar Lazic

On 17.08.21 16:58, Willy Tarreau wrote:

Hi Alex,

On Tue, Aug 17, 2021 at 02:19:38PM +0200, Aleksandar Lazic wrote:

```
3424 if ((curproxy->mode != PR_MODE_HTTP) && 
(curproxy->options & PR_O_REUSE_MASK) != PR_O_REUSE_NEVR)
3425 curproxy->options &= ~PR_O_REUSE_MASK;
```

Does this mean that even when no "http-reuse ..." is set, "http-reuse safe"
will be set on the proxy?


Yes, that's since 2.0. Reuse in "safe" mode is enabled by default.
You can forcefully disable it using "http-reuse never" if you want
(e.g. for debugging or if you suspect a bug in the server). But
"safe" is as safe as regular keep-alive.
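Expressed as configuration, the default and the opt-out look like this (backend and server names are placeholders):

```
backend be_app
    mode http
    # "http-reuse safe" is implicit since 2.0; listed here only for clarity
    http-reuse safe
    # to forcefully disable idle-connection reuse, e.g. while debugging:
    # http-reuse never
    server s1 192.0.2.10:8080
```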

Hoping this helps,


Yes, thanks.


Willy






Re: BoringSSL commit dddb60e breaks compilation of HAProxy

2021-09-08 Thread Aleksandar Lazic

On 08.09.21 11:07, Willy Tarreau wrote:

On Wed, Sep 08, 2021 at 01:58:00PM +0500, Илья Шипицин wrote:

Wed, 8 Sep 2021 at 13:54, Willy Tarreau:


On Wed, Sep 08, 2021 at 12:05:23PM +0500, Илья Шипицин wrote:

Hello, Bob

I tracked an issue  https://github.com/haproxy/haproxy/issues/1386


let's track activity there


Quite frankly, I'm seriously wondering how long we'll want to keep
supporting that constantly breaking library. Does it still provide



by "let us track activity" I do not mean that we are going to maintain
BoringSSL :)

people will come from time to time with BoringSSL support request. Existing
github issue is good to redirect them to.


Oh this is how I understood it as well, I just think that you and a
handful of others have already spent a lot of energy on that lib and
I was only encouraging you not to spend way more than what you find
reasonable after this issue is created :-)


Is there another library which has the QUIC stuff implemented that could be
used for QUIC development?


Willy






Re: [ANNOUNCE] haproxy-2.5-dev10

2021-10-18 Thread Aleksandar Lazic

On 16.10.21 16:22, Willy Tarreau wrote:

Hi,

HAProxy 2.5-dev10 was released on 2021/10/16. It added 75 new commits
after version 2.5-dev9.

The smoke is progressively being blown away and we're starting to see
clearer what final 2.5 will look like.

In completely random order, here are the main changes I noticed in this
release:

   - some fixes for OpenSSL 3.0.0 support from Rémi and William; regression
 tests were fixed as well and the version in the CI was upgraded from
 alpha17 to 3.0.0

   - Rémi's JWT patches were merged. Now it becomes possible to decode
 JWT tokens and check their integrity. There are still a few pending
 patches for it but they're essentially cosmetic, so the code is
 expected to be already operational. Those who've been waiting for
 this are strongly invited to give it a try so that any required
 change has a chance to be merged before 2.5. Alex ?


That's great that the JWT feature is in HAProxy :-)

Sadly I'm no longer involved in the project in which I had planned to use it,
therefore I can't test it in a real-world scenario.

Thank you Rémi for implementing it.

Regards
Alex



Re: Last-minute proposal for 2.5 about httpslog

2021-11-04 Thread Aleksandar Lazic

On 04.11.21 15:28, Willy Tarreau wrote:

Hello,

as some of you know, 2.5 will come with a new "option httpslog" to ease
logging some useful TLS info by default.

While running some tests in production with the error-log-format, I
realized that we're not logging the SNI in "httpslog", and that it's
probably a significant miss that we ought to fix before the release.
I think it could be particularly useful for those using long crt-lists
with a default domain, as it will allow to figure which ones have been
handled by the default one possibly due to a missing certificate or a
misconfiguration.

Right now the default HTTPS format is defined this way :

 log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC \
%CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r \
%[fc_conn_err]/%[ssl_fc_err,hex]/%[ssl_c_err]/\
%[ssl_c_ca_err]/%[ssl_fc_is_resumed] %sslv/%sslc"

As it is, it closely matches the httplog one so that tools configured to
process the latter should also work unmodified with the new one.

The question is, should we add "ssl_fc_sni" somewhere in this line, and
if so, where? Logging it at the end seems sensible to me so that even if
it's absent we're not missing anything. But maybe there are better options
or opinions on the subject.


A big bold +1 for adding the SNI to the log.
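As an illustration of logging it at the end, the line could become something like this (a sketch only; the final position may differ):

```
log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC \
    %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r \
    %[fc_conn_err]/%[ssl_fc_err,hex]/%[ssl_c_err]/\
    %[ssl_c_ca_err]/%[ssl_fc_is_resumed] %sslv/%sslc %[ssl_fc_sni]"
```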


Feel free to suggest so that we put something there before tomorrow and
have it in a last dev13 before the release.

Thanks,
Willy






Limit requests with peers on 2 independent HAProxies to one backend

2021-11-08 Thread Aleksandar Lazic



Hi.

I have 2 LBs which should limit the connections to one backend.

I would try to use "conn_cur" in a stick table and share it via peers.
Does anyone have such a solution already in place?

That's my assumption for the config.

```
peers be_pixel_peers
  bind 9123
  log global
  localpeer {{ ansible_nodename }}
  server lb1 lb1.domain.com:1024
  server lb2 lb2.domain.com:1024


backend be_pixel_persons
  log global

  acl port_pixel dst_port {{ dst_ports["pixel"] }}
  tcp-request content silent-drop if port_pixel !{ src -f 
/etc/haproxy/whitelist.acl }

  option httpchk GET /alive
  http-check connect ssl
  timeout check 20s
  timeout server 300s

  # limit connection to backend

  stick-table type ip size 1m expire 10m store conn_cur peers be_pixel_peers
  http-request deny if { src,table_table_conn_cur(sc_conn_cur) gt 100 }

  

  http-request capture req.fhdr(Referer) id 0
  http-request capture req.fhdr(User-Agent) id 1
  http-request capture req.hdr(host) id 2
  http-request capture var(txn.cap_alg_keysize)  id 3
  http-request capture var(txn.cap_cipher) id 4
  http-request capture var(txn.cap_protocol) id 5

  http-response set-header X-Server %s

  balance roundrobin

  server pixel_persons1 {{ hosts["pixel_persons1"] }}:8184 resolvers mydns ssl check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 2 weight 20
  server pixel_persons2 {{ hosts["pixel_persons2"] }}:8184 resolvers mydns ssl check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 2 weight 20
  server pixel_persons3 {{ hosts["pixel_persons3"] }}:8184 resolvers mydns ssl check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 8 weight 80

```
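For comparison, a minimal sketch of tracking the source address and denying above a threshold: note that an entry must first be tracked (e.g. with track-sc0) before a counter like sc0_conn_cur can be evaluated (the table layout matches the config above; the 429 status is an assumption):

```
backend be_pixel_persons
    stick-table type ip size 1m expire 10m store conn_cur peers be_pixel_peers
    # track the client address in this backend's own table ...
    http-request track-sc0 src
    # ... then deny once it holds more than 100 concurrent connections
    http-request deny deny_status 429 if { sc0_conn_cur gt 100 }
```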

Regards
Alex



Re: Limit requests with peers on 2 independent HAProxies to one backend

2021-11-10 Thread Aleksandar Lazic

Hi.

Does anybody have some hints or tips about the question?

Regards
Alex

On 08.11.21 12:26, Aleksandar Lazic wrote:


Hi.

I have 2 LB's which should limit the connection to one backend.

I would try to use "conn_cur" in a stick table and share it via peers.
Have anyone such a solution already in place?

That's my assumption for the config.

```
peers be_pixel_peers
   bind 9123
   log global
   localpeer {{ ansible_nodename }}
   server lb1 lb1.domain.com:1024
   server lb2 lb2.domain.com:1024


backend be_pixel_persons
   log global

   acl port_pixel dst_port {{ dst_ports["pixel"] }}
   tcp-request content silent-drop if port_pixel !{ src -f 
/etc/haproxy/whitelist.acl }

   option httpchk GET /alive
   http-check connect ssl
   timeout check 20s
   timeout server 300s

   # limit connection to backend

   stick-table type ip size 1m expire 10m store conn_cur peers be_pixel_peers
   http-request deny if { src,table_table_conn_cur(sc_conn_cur) gt 100 }

   

   http-request capture req.fhdr(Referer) id 0
   http-request capture req.fhdr(User-Agent) id 1
   http-request capture req.hdr(host) id 2
   http-request capture var(txn.cap_alg_keysize)  id 3
   http-request capture var(txn.cap_cipher) id 4
   http-request capture var(txn.cap_protocol) id 5

   http-response set-header X-Server %s

   balance roundrobin

   server pixel_persons1 {{ hosts["pixel_persons1"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 2 weight 20
   server pixel_persons2 {{ hosts["pixel_persons2"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 2 weight 20
   server pixel_persons3 {{ hosts["pixel_persons3"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 8 weight 80

```

Regards
Alex






Re: Limit requests with peers on 2 independent HAProxies to one backend

2021-11-10 Thread Aleksandar Lazic

Hi Joao.

Thank you very much. I will give it a try.

Regards
Alex

On 10.11.21 22:25, Joao Morais wrote:




On 8 Nov 2021, at 08:26, Aleksandar Lazic wrote:
escreveu:


Hi.

I have 2 LB's which should limit the connection to one backend.

I would try to use "conn_cur" in a stick table and share it via peers.
Have anyone such a solution already in place?


Hi Alex, I’ve already posted another question with a similar config which 
worked like a charm in my tests:

 https://www.mail-archive.com/haproxy@formilux.org/msg39753.html

~jm




That's my assumption for the config.

```
peers be_pixel_peers
  bind 9123
  log global
  localpeer {{ ansible_nodename }}
  server lb1 lb1.domain.com:1024
  server lb2 lb2.domain.com:1024


backend be_pixel_persons
  log global

  acl port_pixel dst_port {{ dst_ports["pixel"] }}
  tcp-request content silent-drop if port_pixel !{ src -f 
/etc/haproxy/whitelist.acl }

  option httpchk GET /alive
  http-check connect ssl
  timeout check 20s
  timeout server 300s

  # limit connection to backend

  stick-table type ip size 1m expire 10m store conn_cur peers be_pixel_peers
  http-request deny if { src,table_table_conn_cur(sc_conn_cur) gt 100 }

  

  http-request capture req.fhdr(Referer) id 0
  http-request capture req.fhdr(User-Agent) id 1
  http-request capture req.hdr(host) id 2
  http-request capture var(txn.cap_alg_keysize)  id 3
  http-request capture var(txn.cap_cipher) id 4
  http-request capture var(txn.cap_protocol) id 5

  http-response set-header X-Server %s

  balance roundrobin

  server pixel_persons1 {{ hosts["pixel_persons1"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 2 weight 20
  server pixel_persons2 {{ hosts["pixel_persons2"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 2 weight 20
  server pixel_persons3 {{ hosts["pixel_persons3"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 8 weight 80

```
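One detail worth double-checking in the config above (an assumption on my side, not tested here): the `conn_cur` counter in a stick table is only maintained for entries that are actively tracked, so a track rule is needed before the counter can be read back, e.g.:

```
backend be_pixel_persons
  stick-table type ip size 1m expire 10m store conn_cur peers be_pixel_peers
  # track the source address so conn_cur is actually counted for it
  http-request track-sc0 src
  # then deny once the shared counter exceeds the limit
  http-request deny deny_status 429 if { sc0_conn_cur gt 100 }
```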

Regards
Alex









Maybe stupid question but should "maxconn 0" work?

2021-12-01 Thread Aleksandar Lazic



Hi.

I try to test some limits with peers and wanted to test "maxconn 0" before I 
start with the peers.
Should "maxconn 0" work?
I expect to get connection refused or similar and a 500 in the log, but both 
curls get a 200:

```
# curl -v http://127.0.0.1:8080/; curl -v http://127.0.0.1:8080/
```

```
podman exec haproxy-dest haproxy -vv
HAProxy version 2.4.8-d1f8d41 2021/11/03 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2026.
Known bugs: http://www.haproxy.org/bugs/bugs-2.4.8.html
Running on: Linux 5.11.0-40-generic #44~20.04.2-Ubuntu SMP Tue Oct 26 18:07:44 
UTC 2021 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv 
-Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare 
-Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers 
-Wno-cast-function-type -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 
-Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1 
USE_LUA=1 USE_PROMEX=1
  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT 
+POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H 
+GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM -ZLIB +SLZ +CPU_AFFINITY 
+TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD -OBSOLETE_LINKER 
+PRCTL -PROCCTL +THREAD_DUMP -EVPORTS -OT -QUIC +PROMEX -MEMORY_PROFILING

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=12).
Built with OpenSSL version : OpenSSL 1.1.1k  25 Mar 2021
Running on OpenSSL version : OpenSSL 1.1.1k  25 Mar 2021
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with the Prometheus exporter as a service
Built with network namespace support.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE2 version : 10.36 2020-12-04
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 10.2.1 20210110

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
       h2 : mode=HTTP   side=FE|BE mux=H2   flags=HTX|CLEAN_ABRT|HOL_RISK|NO_UPG
     fcgi : mode=HTTP   side=BE    mux=FCGI flags=HTX|HOL_RISK|NO_UPG
<default> : mode=HTTP   side=FE|BE mux=H1   flags=HTX
       h1 : mode=HTTP   side=FE|BE mux=H1   flags=HTX|NO_UPG
<default> : mode=TCP    side=FE|BE mux=PASS flags=
     none : mode=TCP    side=FE|BE mux=PASS flags=NO_UPG

Available services : prometheus-exporter
Available filters :
[SPOE] spoe
[CACHE] cache
[FCGI] fcgi-app
[COMP] compression
[TRACE] trace
```

Haproxy config
```
global
log stdout format short daemon debug
maxconn 0

defaults
timeout connect 1s
timeout server 5s
timeout client 5s

frontend http
mode http
log global
log-format "[%tr] %ST %B %CC %CS %tsc %hr %hs %{+Q}r"
declare capture response len 4

bind :::8080 v4v6

default_backend nginx

listen nginx
mode http
bind :::8081

http-request return status 200 content-type text/plain string "static" hdr x-host 
"%[req.hdr(host)]"
```

Regards
Alex



Re: Maybe stupid question but should "maxconn 0" work?

2021-12-02 Thread Aleksandar Lazic

On 02.12.21 15:12, Frank Wall wrote:

On 2021-12-02 02:16, Aleksandar Lazic wrote:

I try to test some limits with peers and wanted to test "maxconn 0"
before I start with the peers.
Should "maxconn 0" work?
I expect to get connection refused or similar and a 500 in the log
but both curls get a 200


Maybe I got your question wrong, but "maxconn 0" is not supposed to block
all connections:

   The default value is "0" which means unlimited.
(http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#maxconn%20(Server%20and%20default-server%20options)


Thanks Frank for the answer.
So the answer to my question is "Yes, it's a stupid question because RTFM!" :-)
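For anyone reproducing the test: to actually observe refused or queued connections, the limit has to be non-zero. A minimal sketch (the value is chosen only for the experiment):

```
frontend http
    mode http
    bind :::8080 v4v6
    maxconn 1        # only one concurrent connection is accepted;
                     # further ones wait in the kernel backlog
    default_backend nginx
```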


Regards
- Frank


Best regards
Alex




Is it expected that "capture response" does not get headers when "http-request return" is used

2021-12-04 Thread Aleksandar Lazic



Hi.

I try to capture the response header "dst_conn" set by "http-request return", but 
the value does not show up in %hs.

```
podman logs -f haproxy-dest
[NOTICE]   (1) : New worker #1 (3) forked
<6>[04/Dec/2021:12:14:34.437] 200 58 - - LR-- {} "GET / HTTP/1.1"
<6>[04/Dec/2021:12:14:34.437] 200 58 - - LR-- {} "GET / HTTP/1.1"
<6>[04/Dec/2021:12:14:34.438] 200 58 - - LR-- {} "GET / HTTP/1.1"

```

I haven't seen any "capture" in "http-after-response".
The question is also whether a capture after "http-request return" makes sense at all, 
since the documentation says that return stops the evaluation of any other rules, 
including capture:

http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#http-response%20return
"This stops the evaluation of the rules and immediately returns a response."

My config
```
global
log stdout format short daemon debug
maxconn 1

defaults
timeout connect 1s
timeout server 1s
timeout client 1s

frontend http
mode http
log global
log-format "[%tr] %ST %B %CC %CS %tsc %hr %hs %{+Q}r"
# declare capture response len 4
capture response header dst_conn len 4

bind :::8080 v4v6

default_backend nginx

backend nginx
mode http
# bind :::8081

http-request return status 200 hdr dst_conn "%[dst_conn]"
```

Haproxy version
```
podman exec haproxy-dest haproxy -vv
HAProxy version 2.4.8-d1f8d41 2021/11/03 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2026.
Known bugs: http://www.haproxy.org/bugs/bugs-2.4.8.html
Running on: Linux 5.11.0-40-generic #44~20.04.2-Ubuntu SMP Tue Oct 26 18:07:44 
UTC 2021 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv 
-Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare 
-Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers 
-Wno-cast-function-type -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 
-Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1 
USE_LUA=1 USE_PROMEX=1
  DEBUG   =
...
```

Regards
Alex



Help with peer setup and "srv_conn(bk_customer/haproxy-dest1)"

2021-12-05 Thread Aleksandar Lazic


Hi.

I try to protect a backend server against overload within a master/master 
setup.
The test setup looks like this

lb1: 8081 \
   -hap-dest: 8080
lb2: 8082 /

When I now call lb1 with curl the "tracker/quota1" gpc is increased and the 
second request is denied.
The problem is that the peer on lb2 does not get the counter data to protect 
the backend on lb2 too.

Please can anybody help me to fix my mistake and find a proper solution.
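For reference, a minimal sketch of the intended setup (names are hypothetical and untested). One thing that stands out in the "show peers" output further down: both peers report "(remote,inactive)" and "remote_id=0", which may indicate the peers handshake never completed, often because the peer names do not match each node's hostname or its "-L" command-line argument:

```
peers tracker
  bind :20000
  # peer names must match each node's hostname or its -L argument
  server lb1 127.0.0.1:20000
  server lb2 127.0.0.1:20001
  table quota1 type string size 100 expire 10m store gpc0

backend bk_customer
  # track a fixed key in the peers-owned table so gpc0 updates are pushed
  http-request track-sc0 str(quota1) table tracker/quota1
  http-request deny if { sc0_get_gpc0 gt 0 }
  http-response sc-inc-gpc0(0) if { status 200 }
```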


```
curl -v http://127.0.0.1:8081/; curl -v http://127.0.0.1:8081
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8081
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< dest_dst_conn: 1
< content-length: 0
<
* Connection #0 to host 127.0.0.1 left intact


* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8081

< HTTP/1.1 403 Forbidden
< content-length: 93
< cache-control: no-cache
< content-type: text/html

```

``` lb1
echo "show peers;show table tracker/quota1;show table tracker/quota2"|socat - 
tcp4-connect:127.0.0.1:9990

0x55bb71554dc0: [05/Dec/2021:10:27:17] id=tracker disabled=0 flags=0x33 
resync_timeout= task_calls=5
  0x55bb71558350: id=tracker(remote,inactive) addr=127.0.0.1:20001 
last_status=NAME last_hdshk=5m36s
reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 no_hbt=0 
new_conn=1 proto_err=0 coll=0
flags=0x0
shared tables:
  0x55bb7156f1e0 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x55bb71556a50 id=tracker/quota1 update=3 localupdate=3 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x55bb7156f090 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x55bb71556c60 id=tracker/quota2 update=2 localupdate=2 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x55bb71557300: id=h1(local,inactive) addr=127.0.0.1:2 last_status=NONE 
last_hdshk=
reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=0 proto_err=0 coll=0
flags=0x0
shared tables:
  0x55bb7156f230 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x55bb71556a50 id=tracker/quota1 update=3 localupdate=3 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x55bb7156f0e0 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x55bb71556c60 id=tracker/quota2 update=2 localupdate=2 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")

# table: tracker/quota1, type: string, size:100, used:1
0x55bb71772888: key=0 use=0 exp=53297 server_id=0 gpc0=1

# table: tracker/quota2, type: string, size:100, used:1
0x55bb71772958: key=0 use=0 exp=53297 server_id=0 gpc1=0

```

``` lb2
echo "show peers;show table tracker/quota1;show table tracker/quota2"|socat - 
tcp4-connect:127.0.0.1:9991

0x5618ae836dc0: [05/Dec/2021:10:27:12] id=tracker disabled=0 flags=0x33 
resync_timeout= task_calls=5
  0x5618ae83a350: id=tracker(remote,inactive) addr=127.0.0.1:2 
last_status=NAME last_hdshk=5m31s
reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 no_hbt=0 
new_conn=2 proto_err=0 coll=0
flags=0x0
shared tables:
  0x5618ae8511e0 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x5618ae838a50 id=tracker/quota1 update=0 localupdate=0 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x5618ae851090 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x5618ae838c60 id=tracker/quota2 update=0 localupdate=0 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x5618ae839300: id=h2(local,inactive) addr=127.0.0.1:20001 last_status=NONE 
last_hdshk=
reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=0 proto_err=0 coll=0
flags=0x0
shared tables:
  0x5618ae851230 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x5618ae838a50 id=tracker/quota1 update=0 localupdate=0 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x5618ae8510e0 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x5618ae83

Re: Is it expected that "capture response" does not get headers when "http-request return" is used

2021-12-06 Thread Aleksandar Lazic

On 06.12.21 08:25, Christopher Faulet wrote:

On 12/4/21 at 13:25, Aleksandar Lazic wrote:


Hi.

I try to capture the response header "dst_conn" set by "http-request return", but 
the value does not show up in %hs.

```
podman logs -f haproxy-dest
[NOTICE]   (1) : New worker #1 (3) forked
<6>[04/Dec/2021:12:14:34.437] 200 58 - - LR-- {} "GET / HTTP/1.1"
<6>[04/Dec/2021:12:14:34.437] 200 58 - - LR-- {} "GET / HTTP/1.1"
<6>[04/Dec/2021:12:14:34.438] 200 58 - - LR-- {} "GET / HTTP/1.1"

```

I haven't seen any "capture" in "http-after-response".
The question is also whether a capture after "http-request return" makes sense at all, 
since the documentation says that return stops the evaluation of any other rules, 
including capture:

http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#http-response%20return
"This stops the evaluation of the rules and immediately returns a response."


Hi Alex,

Unfortunately, it is indeed not possible for now. First, the captures via "capture 
request" and
"capture response" directives are performed very early, and on received 
messages only. Thus it is not
possible to capture info from generated responses at this stage. However, it is 
probably possible to add
a "capture" action to the "http-after-response" ruleset. This would allow you to capture your header 
with the following config:


    declare capture response len 4
    http-after-response capture hdr(dst_conn) id 0

At first glance it seems trivial. I will check that.


Thank you Christopher.

Regards
Alex




Re: Help with peer setup and "srv_conn(bk_customer/haproxy-dest1)"

2021-12-08 Thread Aleksandar Lazic

Hi.

Can anyone help with protecting the backend using backend states?
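A hedged sketch of what the subject line hints at (untested; note that `srv_conn` is a purely local counter on each load balancer and is not shared via peers, which may be the underlying limitation here):

```
backend bk_customer
  # deny early when the destination server already holds too many
  # established connections (threshold is an example value)
  http-request deny deny_status 429 if { srv_conn(bk_customer/haproxy-dest1) ge 10 }
  server haproxy-dest1 127.0.0.1:8080 check
```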

Regards
Alex

On 05.12.21 11:42, Aleksandar Lazic wrote:


Hi.

I try to protect a backend server against overload within a master/master 
setup.
The test setup looks like this

lb1: 8081 \
    -hap-dest: 8080
lb2: 8082 /

When I now call lb1 with curl the "tracker/quota1" gpc is increased and the 
second request is denied.
The problem is that the peer on lb2 does not get the counter data to protect 
the backend on lb2 too.

Please can anybody help me to fix my mistake and find a proper solution.


```
curl -v http://127.0.0.1:8081/; curl -v http://127.0.0.1:8081
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
 > GET / HTTP/1.1
 > Host: 127.0.0.1:8081
 > User-Agent: curl/7.68.0
 > Accept: */*
 >
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< dest_dst_conn: 1
< content-length: 0
<
* Connection #0 to host 127.0.0.1 left intact


* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
 > GET / HTTP/1.1
 > Host: 127.0.0.1:8081

< HTTP/1.1 403 Forbidden
< content-length: 93
< cache-control: no-cache
< content-type: text/html

```

``` lb1
echo "show peers;show table tracker/quota1;show table tracker/quota2"|socat - 
tcp4-connect:127.0.0.1:9990

0x55bb71554dc0: [05/Dec/2021:10:27:17] id=tracker disabled=0 flags=0x33 
resync_timeout= task_calls=5
   0x55bb71558350: id=tracker(remote,inactive) addr=127.0.0.1:20001 
last_status=NAME last_hdshk=5m36s
     reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=1 proto_err=0 coll=0
     flags=0x0
     shared tables:
   0x55bb7156f1e0 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x55bb71556a50 id=tracker/quota1 update=3 localupdate=3 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x55bb7156f090 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x55bb71556c60 id=tracker/quota2 update=2 localupdate=2 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x55bb71557300: id=h1(local,inactive) addr=127.0.0.1:2 last_status=NONE 
last_hdshk=
     reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=0 proto_err=0 coll=0
     flags=0x0
     shared tables:
   0x55bb7156f230 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x55bb71556a50 id=tracker/quota1 update=3 localupdate=3 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x55bb7156f0e0 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x55bb71556c60 id=tracker/quota2 update=2 localupdate=2 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")

# table: tracker/quota1, type: string, size:100, used:1
0x55bb71772888: key=0 use=0 exp=53297 server_id=0 gpc0=1

# table: tracker/quota2, type: string, size:100, used:1
0x55bb71772958: key=0 use=0 exp=53297 server_id=0 gpc1=0

```

``` lb2
echo "show peers;show table tracker/quota1;show table tracker/quota2"|socat - 
tcp4-connect:127.0.0.1:9991

0x5618ae836dc0: [05/Dec/2021:10:27:12] id=tracker disabled=0 flags=0x33 
resync_timeout= task_calls=5
   0x5618ae83a350: id=tracker(remote,inactive) addr=127.0.0.1:2 
last_status=NAME last_hdshk=5m31s
     reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=2 proto_err=0 coll=0
     flags=0x0
     shared tables:
   0x5618ae8511e0 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x5618ae838a50 id=tracker/quota1 update=0 localupdate=0 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x5618ae851090 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x5618ae838c60 id=tracker/quota2 update=0 localupdate=0 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x5618ae839300: id=h2(local,inactive) addr=127.0.0.1:20001 last_status=NONE 
last_hdshk=
     reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=0 proto_err=0 coll=0
     flags=0x0
     shared tables:
   0x5618ae851230 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x56

Re: Is it expected that "capture response" does not get headers when "http-request return" is used

2021-12-08 Thread Aleksandar Lazic

On 08.12.21 10:20, Christopher Faulet wrote:

On 12/6/21 at 08:25, Christopher Faulet wrote:

On 12/4/21 at 13:25, Aleksandar Lazic wrote:


Hi.

I try to capture the response header "dst_conn" set by "http-request return", but 
the value does not show up in %hs.

```
podman logs -f haproxy-dest
[NOTICE]   (1) : New worker #1 (3) forked
<6>[04/Dec/2021:12:14:34.437] 200 58 - - LR-- {} "GET / HTTP/1.1"
<6>[04/Dec/2021:12:14:34.437] 200 58 - - LR-- {} "GET / HTTP/1.1"
<6>[04/Dec/2021:12:14:34.438] 200 58 - - LR-- {} "GET / HTTP/1.1"

```

I haven't seen any "capture" in "http-after-response".
The question is also whether a capture after "http-request return" makes sense at all, 
since the documentation says that return stops the evaluation of any other rules, 
including capture:

http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#http-response%20return
"This stops the evaluation of the rules and immediately returns a response."


Hi Alex,

Unfortunately, it is indeed not possible for now. First, the captures via
"capture request" and "capture response" directives are performed very early,
and on received messages only. Thus it is not possible to capture info from
generated responses at this stage. However, it is probably possible to add a
"capture" action to the "http-after-response" ruleset. This would allow you to
capture your header with the following config:

 declare capture response len 4
 http-after-response capture hdr(dst_conn) id 0

At first glance it seems trivial. I will check that.


Hi,

I added it to 2.6-DEV. The patch is small enough to be backported to 2.5.


Cool thank you.



Re: Blocking log4j CVE with HAProxy

2021-12-13 Thread Aleksandar Lazic

On 13.12.21 11:48, Olivier D wrote:

Hello there,

If you don't know yet, a CVE was published on friday about library log4j, 
allowing a remote code execution
with a crafted HTTP request.

We would like to filter these requests on HAProxy to lower the exposition. At 
peak times, 20% of our web
traffic is scanners about this bug !

The offended string is "${jndi:". It must be filtered on any fields that could 
go to log servers:
- URL
- User-Agent
- User name

What would be the easier way to do that ? If I give it a try :

http-request deny deny_status 405 if { url_sub -i "\$\{jndi:" or hdr_sub(user-agent) -i 
"\$\{jndi:" }


What do you think ?


Basically it could be any header that the application reads and passes unchecked and 
unverified to the logging library.

Filtering only those fields is valid but not enough, from my point of view.

There is a quite nice blog post, https://isc.sans.edu/diary/28120, about this 
topic.

From my point of view the key statement in the blog is this:
"as long as it reads some input from an attacker, and passes that to the log4j 
library"

There are 2 main questions here.
1. Why is input from outside the application passed unchecked to the 
logging library?
2. Is the lookup really necessary for the application, or only a lazy way to 
solve some topics?

This CVE creates a lot of noise, but I haven't seen anywhere that someone asked 
these simple questions.
The sad fact is that one of the main development rules is broken here by the 
developers, and it is a quite old rule:

Check and verify EVERY input from the "user".

From my point of view the "http-request deny" rule can be added, but which other 
headers should be included?

The "Referer" header is also a nice injection option, because some apps want to 
know from which location a
request is coming, and it is a well-known header. How about app-specific headers 
like "X-???"?
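To avoid enumerating header names at all, the whole request header block can be matched; a sketch (the pattern file is hypothetical and would contain the literal line `${jndi:`):

```
# deny if the URL or any request header contains a listed pattern
http-request deny deny_status 405 if { url_sub -i -f /etc/haproxy/bad_header.lst }
http-request deny deny_status 405 if { req.hdrs -m sub -i -f /etc/haproxy/bad_header.lst }
```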


Olivier


Jm2c

Regards
Alex




Re: Blocking log4j CVE with HAProxy

2021-12-13 Thread Aleksandar Lazic

On 13.12.21 14:03, Lukas Tribus wrote:

On Mon, 13 Dec 2021 at 13:25, Aleksandar Lazic  wrote:

1. Why is a input from out site of the application passed unchecked to the 
logging library!


Because you can't predict the future.

When you know that your backend is SQL, you escape what's necessary to
avoid SQL injection (or use prepared statements) before sending
commands against the database.
When you know your output is HTML, you escape HTML special characters,
so untrusted inputs can't inject HTML tags.

That's what input validation means.

How exactly do you verify and sanitise inputs to protect against an
unknown vulnerability with an unknown syntax in a logging library that
is supposed to handle all strings just fine? You don't, it doesn't
work this way, and that's not what input validation means.


Well, I go the other way around.

The application must know what data are allowed, verify the input, and if the 
input is not valid, discard it.
In any case, user input should never be sent directly to the database!
There are a lot of options in many different languages to quote or prepare 
queries *before* they are sent to
the database.

I know that this is a lot of work, because I do this in almost every one of my 
programs, but security and error
handling are a must in current applications, and I would say at least a third of 
an application.

We see this quite well in haproxy: there are a huge amount of checks for null, 
expected types, and a lot of other
checks, which is why haproxy is so robust and secure, imho.

But I think this is now off topic, let's mail off-list further, okay?


Lukas


Regards
Alex



Re: Blocking log4j CVE with HAProxy

2021-12-13 Thread Aleksandar Lazic



On 13.12.21 14:53, Lukas Tribus wrote:

On Mon, 13 Dec 2021 at 14:43, Aleksandar Lazic  wrote:

Well I go the other way around.

The application must know what data are allowed, verify the input and if the 
input is not valid discard it.


You clearly did not understand my point so let me try to phrase it differently:

The log4j vulnerability is about "allowed data" triggering a software
vulnerability which was impossible to predict.


Ah okay, then please accept my apologies for misunderstanding you.


Lukas



Regards
Alex



Re: Blocking log4j CVE with HAProxy

2021-12-14 Thread Aleksandar Lazic

Hi.

On 14.12.21 10:18, Olivier D wrote:

Hi,

On Mon, Dec 13, 2021 at 19:38, John Lauro wrote:

http-request deny deny_status 405 if { url_sub -i "\$\{jndi:" or hdr_sub(user-agent) 
-i "\$\{jndi:" }
was not catching the bad traffic.  I think the escapes were causing issues 
in the matching.

The following did work:
                 http-request deny deny_status 405 if { url_sub -i -f 
/etc/haproxy/bad_header.lst }
                 http-request deny deny_status 405 if { hdr_sub(user-agent) 
-i -f /etc/haproxy/bad_header.lst }
and in bad_header.lst
${jndi:


  I tried
http-request deny deny_status 405 if { url_sub -i "\$\{jndi:" or hdr_sub(user-agent) -i 
"\$\{jndi:" }
and
http-request deny deny_status 405 if { url_sub -i ${jndi: or 
hdr_sub(user-agent) -i ${jndi: }

without success. Can anyone tell what's wrong with both syntaxes ? And how to 
escape special chars
correctly ?


There is now a blog post on haproxy.com how to configure haproxy to protect the 
backend applications against
the log4j attack.

https://www.haproxy.com/blog/december-2021-log4shell-mitigation/
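As for why the inline variants fail, my assumption (not verified against the parser) is that `${...}` in an unquoted or double-quoted word is expanded as an environment variable, and backslash does not protect the `$`; strong (single) quotes should keep the string literal:

```
# single quotes prevent ${...} environment-variable expansion
http-request deny deny_status 405 if { url_sub -i '${jndi:' } or { hdr_sub(user-agent) -i '${jndi:' }
```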


Olivier


Regards
Alex



Add HAProxy to quicwg Implementations wiki

2021-12-19 Thread Aleksandar Lazic



Hi.

Do you agree that we now can add HAProxy to that list :-)

https://github.com/quicwg/base-drafts/wiki/Implementations

My suggestion; please help me to fill in the ??:

IETF QUIC Transport

HAProxy:

QUIC implementation in HAProxy

Language: C
Version: draft-29??
Roles: Server, Client
Handshake: TLS 1.3
Protocol IDs: ??
ALPN: ??
Public server:
Is there a public server?
#

HTTP/3

Implementation of QUIC and HTTP/3 support in HAProxy

Language: C
Version: draft-http-34??
Roles: Server, Client
Handshake: TLSv1.3
Protocol IDs: ??
Public server: -
###

Regards
Alex



Re: Add HAProxy to quicwg Implementations wiki

2021-12-19 Thread Aleksandar Lazic



On 19.12.21 13:52, Willy Tarreau wrote:

Hi Aleks,

On Sun, Dec 19, 2021 at 01:43:01PM +0100, Aleksandar Lazic wrote:

Do you agree that we now can add HAProxy to that list :-)

https://github.com/quicwg/base-drafts/wiki/Implementations


Ideally we should submit it once we have a public server with it. There
are still low-level issues that Fred and Amaury are working on before
this can happen, but based on the progress I'm seeing on the interop
page at https://interop.seemann.io/  I definitely expect that these
will be addressed soon and that haproxy.org will be delivered over QUIC
before 2.6 is released :-)


Cool thanks for the update :-)


Willy


Regards
Alex



Re: Getting rid of outdated haproxy apt ppa repo

2021-12-20 Thread Aleksandar Lazic



Hi.

On 20.12.21 09:40, Christoph Kukulies wrote:

Due to some recent action I took, following possibly outdated instructions for haproxy 
1.6 under Ubuntu,
I have a leftover broken haproxy repo which comes up every time I'm doing 
apt updates:

Ign:3 http://ppa.launchpad.net/vbernat/haproxy-1.6/ubuntu bionic InRelease
Hit:4 http://ppa.launchpad.net/vbernat/haproxy-1.8/ubuntu bionic InRelease
Err:5 http://ppa.launchpad.net/vbernat/haproxy-1.6/ubuntu bionic Release
   404  Not Found [IP: 91.189.95.85 80]
Hit:6 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:7 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Reading package lists... Done
E: The repository 'http://ppa.launchpad.net/vbernat/haproxy-1.6/ubuntu bionic 
Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration 
details.


Any clues how I can get rid of this?


Well 1.6 is end of life.

https://www.haproxy.org/

You should replace haproxy-1.6 with 2.4, IMHO.
https://haproxy.debian.net/#?distribution=Ubuntu&release=bionic&version=2.4

How to handle PPAs can be found on the internet; here is an example page from a 
search:
https://linuxhint.com/ppa_repositories_ubuntu/


—
Christoph


Regards
Alex



HAP 2.3.16 A bogus STREAM [0x559faa07b4f0] at "cache store filter"

2021-12-25 Thread Aleksandar Lazic



Hi.

As the message tells us that we should report this to the developers, I do so :-)


```
Dec 24 01:10:31 lb1 haproxy[20008]: A bogus STREAM [0x559faa07b4f0] is spinning 
at 204371 calls per second
and refuses to die, aborting now!
Please report this error to developers
[strm=0x559faa07b4f0,12390e src=:::79.183.184.235 fe=https-in be=be_api 
dst=api_main2
 txn=0x559faab233e0,44000 txn.req=MSG_DONE,d txn.rsp=MSG_RPBEFORE,0 
rqf=48c4e068 rqa=4
 rpf=a000a860 rpa=0 sif=CLO,2c8002 sib=CLO,1280112 af=(nil),0 
csf=0x559faa07ba10,1059a0
 ab=(nil),0 csb=0x559faad7dcf0,1a0
 cof=0x7f224212e5d0,80003300:H2(0x559faa7d7b00)/SSL(0x7f22424fc7a0)/tcpv6(2162)
 cob=0x7f2240f79fe0,8982300:H1(0x559faa0ab840)/SSL(0x7f2263517770)/tcpv4(1490)
 filters={0x559faa29c520="cache store filter"}]

Dec 24 01:10:31 lb1 haproxy[4818]: [ALERT] 357/011031 (20008) : A bogus STREAM 
[0x559faa07b4f0] is spinning
at 204371 calls per second and refuses to die, aborting now! Please report this 
error to developers
[strm=0x559faa07b4f0,12390e src=:::79.183.184.235 fe=https-in be=be_api 
dst=api_main2
 txn=0x559faab233e0,44000 txn.req=MSG_DONE,d txn.rsp=MSG_RPBEFORE,0 
rqf=48c4e068 rqa=4
 rpf=a000a860 rpa=0 sif=CLO,2c8002 sib=CLO,1280112 af=(nil),0 
csf=0x559faa07ba10,1059a0
 ab=(nil),0
 csb=0x559faad7dcf0,1a0 
cof=0x7f224212e5d0,80003300:H2(0x559faa7d7b00)/SSL(0x7f22424fc7a0)/tcpv6(2162)
 cob=0x7f2240f79fe0,8982300:H1(0x559faa0ab840)/SSL(0x7f2263517770)/tcpv4(1490)
 filters={0x559faa29c520="cache store filter"}]
```

Here is the cache config from haproxy:

```
cache default_cache
total-max-size 1024 # MB
# max-object-size 1  # bytes
max-age 300 # seconds

cache api_cache
total-max-size 1024 # MB
# max-object-size 1  # bytes
max-age 300 # seconds

backend be_default
  log global

  http-request cache-use default_cache
  http-response cache-store default_cache

backend be_api
  log global

  http-request cache-use api_cache
  http-response cache-store api_cache
```

Here is the haproxy version; we plan to update to 2.4 asap.

```
ubuntu@lb1:~$ haproxy -vv
HA-Proxy version 2.3.16-1ppa1~bionic 2021/11/25 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2022.
Known bugs: http://www.haproxy.org/bugs/bugs-2.3.16.html
Running on: Linux 4.15.0-139-generic #143-Ubuntu SMP Tue Mar 16 01:30:17 UTC 
2021 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -O2 
-fdebug-prefix-map=/build/haproxy-1kKZLK/haproxy-2.3.16=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
-D_FORTIFY_SOURCE=2 -Wall -Wextra -Wdeclaration-after-statement -fwrapv 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value 
-Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1 
USE_SYSTEMD=1
  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT 
+POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H 
+GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM +ZLIB -SLZ +CPU_AFFINITY 
+TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER 
+PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=8).
Built with OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with network namespace support.
Built with the Prometheus exporter as a service
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE2 version : 10.31 2018-02-12
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 7.5.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
        h2 : mode=HTTP   side=FE|BE mux=H2
      fcgi : mode=HTTP   side=BE    mux=FCGI
 <default> : mode=HTTP   side=FE|BE mux=H1
 <default> : mode=TCP    side=FE|BE mux=PASS

Available services : prometheus-exporter
Available filters :
[SPOE] spoe
[CACHE] cache
[FCGI] fcgi-app
```
Re: invalid request

2021-12-29 Thread Aleksandar Lazic

Hi.

On 28.12.21 19:35, brendan kearney wrote:

list members,

I am running haproxy and see some errors with requests. I am trying to
understand why the errors are being thrown; haproxy version and error
info below. I am thinking that the Host header is being exposed outside
the TLS encryption, but cannot be sure that is what is going on.

Of note, the GNOME Weather extension runs into a similar issue, as does
the Eclipse IDE when trying to call out to the download site.

Where can I find more about what is going wrong with the requests and
why haproxy is blocking them? If it matters, the calls are from apps to
an HTTP VIP in haproxy, load balancing to squid backends.

# haproxy -v
HA-Proxy version 2.1.11-9da7aab 2021/01/08 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2021.
Known bugs: http://www.haproxy.org/bugs/bugs-2.1.11.html


As you can see on this page, 108 bugs were fixed in later versions.
Maybe you should update to the latest 2.4 and see if the behavior is still the same.


Running on: Linux 5.11.22-100.fc32.x86_64 #1 SMP Wed May 19 18:58:25 UTC
2021 x86_64

[28/Dec/2021:12:17:14.412] frontend proxy (#2): invalid request
    backend <NONE> (#-1), server <NONE> (#-1), event #154, src 
192.168.1.90:44228
    buffer starts at 0 (including 0 out), 16216 free,
    len 168, wraps at 16336, error at position 52
    H1 connection flags 0x, H1 stream flags 0x0012
    H1 msg state MSG_HDR_L2_LWS(24), H1 msg flags 0x1410
    H1 chunk len 0 bytes, H1 body len 0 bytes :

    0  CONNECT admin.fedoraproject.org:443 HTTP/1.1\r\n


Do you use
http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#4-option%20http_proxy ?
It would help if you shared your haproxy config.


    00046  Host: admin.fedoraproject.org\r\n
    00077  Accept-Encoding: gzip, deflate\r\n
    00109  User-Agent: gnome-software/40.4\r\n
    00142  Connection: Keep-Alive\r\n
    00166  \r\n

[28/Dec/2021:12:48:34.023] frontend proxy (#2): invalid request
    backend <NONE> (#-1), server <NONE> (#-1), event #166, src 
192.168.1.90:44350
    buffer starts at 0 (including 0 out), 16258 free,
    len 126, wraps at 16336, error at position 49
    H1 connection flags 0x, H1 stream flags 0x0012
    H1 msg state MSG_HDR_L2_LWS(24), H1 msg flags 0x1410
    H1 chunk len 0 bytes, H1 body len 0 bytes :

    0  CONNECT download.eclipse.org:443 HTTP/1.1\r\n
    00043  Host: download.eclipse.org\r\n
    00071  User-Agent: Apache-HttpClient/4.5.10 (Java/11.0.13)\r\n
    00124  \r\n

thanks in advance,

brendan






Re: Troubles with AND in acl

2022-01-01 Thread Aleksandar Lazic

Hi.

On 01.01.22 20:56, Henning Svane wrote:

Hi

I have used it for some time in pfSense, but now I made a Linux installation and the configuration
gives me some trouble.


What have I done wrong here below?

I cannot see what I should have done differently, but sudo haproxy -c -f /etc/haproxy/haproxy01.cfg
gives the following errors:


error detected while parsing ACL 'XMail_EAS' : unknown fetch method 'if' in ACL 
expression 'if'.

error detected while parsing an 'http-request track-sc1' condition : unknown fetch method 'XMail_EAS' 
in ACL expression 'XMail_EAS'.


I have tried with { } around it, but that did not help.


"if" is not a valid keyword for "acl" line.
http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7


Configuration:

bind 10.40.61.10:443 ssl crt /etc/haproxy/crt/mail_domain_com.pem alpn 
h2,http/1.1

acl XMail hdr(host) -i mail.domain.com autodiscover.domain.com

http-request redirect scheme https code 301 if !{ ssl_fc }

acl XMail_EAS if XMail AND {url_beg -i /microsoft-server-activesync}



This works.

  acl XMail hdr(host) -i mail.domain.com autodiscover.domain.com
  acl MS_ACT url_beg -i /microsoft-server-activesync

  http-request track-sc1 src table Table_SRC_XMail_EAS_L4 if XMail MS_ACT

The AND is implicit.
http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.2


http-request track-sc1 src table Table_SRC_XMail_EAS_L4 if { XMail_EAS } { 
status 401 }  { status 403 }

http-request tarpit deny_status 429 if  { XMail_EAS} { sc_http_req_rate(1) gt 
10 }


Please can you share some more information?
haproxy -vv


Regards

Henning


Regards
Alex





Re: HAP 2.3.16 A bogus STREAM [0x559faa07b4f0] at "cache store filter"

2022-01-04 Thread Aleksandar Lazic



On 04.01.22 10:16, Christopher Faulet wrote:

Le 12/25/21 à 23:59, Aleksandar Lazic a écrit :


Hi.

as the message tells us that we should report this to the developers, I do so :-)


```
Dec 24 01:10:31 lb1 haproxy[20008]: A bogus STREAM [0x559faa07b4f0] is spinning 
at 204371 calls per second
and refuses to die, aborting now!
Please report this error to developers
[strm=0x559faa07b4f0,12390e src=:::79.183.184.235 fe=https-in be=be_api 
dst=api_main2
   txn=0x559faab233e0,44000 txn.req=MSG_DONE,d txn.rsp=MSG_RPBEFORE,0 
rqf=48c4e068 rqa=4
   rpf=a000a860 rpa=0 sif=CLO,2c8002 sib=CLO,1280112 af=(nil),0 
csf=0x559faa07ba10,1059a0
   ab=(nil),0 csb=0x559faad7dcf0,1a0
   
cof=0x7f224212e5d0,80003300:H2(0x559faa7d7b00)/SSL(0x7f22424fc7a0)/tcpv6(2162)
   cob=0x7f2240f79fe0,8982300:H1(0x559faa0ab840)/SSL(0x7f2263517770)/tcpv4(1490)
   filters={0x559faa29c520="cache store filter"}]
```


Hi Alex,

I think I found the issue. I'm unable to reproduce the spinning loop but I can 
freeze infinitely a stream.
It is probably just a matter of timing. On my side, it is related to L7 
retries. Could you confirm you have
a "retry-on" parameter in your configuration ?


Yes I can confirm.

```
defaults http
  log global
  mode http
  retry-on all-retryable-errors
  option forwardfor
  option redispatch
  option http-ignore-probes
  option httplog
  option dontlognull
  option ssl-hello-chk
  option log-health-checks
  option socket-stats
  timeout connect 5s
  timeout client  50s
  timeout server  50s
  http-reuse safe
  errorfile 400 /etc/haproxy/errors/400.http
  errorfile 403 /etc/haproxy/errors/403.http
...
```


Thanks !


Regards
Alex



Re: HAP 2.3.16 A bogus STREAM [0x559faa07b4f0] at "cache store filter"

2022-01-04 Thread Aleksandar Lazic

On 04.01.22 14:10, Christopher Faulet wrote:

Le 1/4/22 à 10:26, Aleksandar Lazic a écrit :


On 04.01.22 10:16, Christopher Faulet wrote:

Le 12/25/21 à 23:59, Aleksandar Lazic a écrit :


Hi.

as the message tells us that we should report this to the developers, I do so :-)


```
Dec 24 01:10:31 lb1 haproxy[20008]: A bogus STREAM [0x559faa07b4f0] is spinning 
at 204371 calls per second
and refuses to die, aborting now!
Please report this error to developers
[strm=0x559faa07b4f0,12390e src=:::79.183.184.235 fe=https-in be=be_api 
dst=api_main2
    txn=0x559faab233e0,44000 txn.req=MSG_DONE,d txn.rsp=MSG_RPBEFORE,0 
rqf=48c4e068 rqa=4
    rpf=a000a860 rpa=0 sif=CLO,2c8002 sib=CLO,1280112 af=(nil),0 
csf=0x559faa07ba10,1059a0
    ab=(nil),0 csb=0x559faad7dcf0,1a0
    
cof=0x7f224212e5d0,80003300:H2(0x559faa7d7b00)/SSL(0x7f22424fc7a0)/tcpv6(2162)
    
cob=0x7f2240f79fe0,8982300:H1(0x559faa0ab840)/SSL(0x7f2263517770)/tcpv4(1490)
    filters={0x559faa29c520="cache store filter"}]
```


Hi Alex,

I think I found the issue. I'm unable to reproduce the spinning loop but I can 
freeze infinitely a stream.
It is probably just a matter of timing. On my side, it is related to L7 
retries. Could you confirm you have
a "retry-on" parameter in your configuration ?


Yes I can confirm.

```
defaults http
    log global
    mode http
    retry-on all-retryable-errors
    option forwardfor
    option redispatch
    option http-ignore-probes
    option httplog
    option dontlognull
    option ssl-hello-chk
    option log-health-checks
    option socket-stats
    timeout connect 5s
    timeout client  50s
    timeout server  50s
    http-reuse safe
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
...
```



Thanks Alex, I pushed a fix. It will be backported as far as the 2.0 ASAP.


Thank you Christopher




Re: invalid request

2022-01-12 Thread Aleksandar Lazic



On 12.01.22 17:06, Andrew Anderson wrote:



On Thu, Dec 30, 2021 at 10:15 PM Willy Tarreau <w...@1wt.eu> wrote:

On Wed, Dec 29, 2021 at 12:29:11PM +0100, Aleksandar Lazic wrote:
 > >     0  CONNECT download.eclipse.org:443 HTTP/1.1\r\n
 > >     00043  Host: download.eclipse.org\r\n
 > >     00071  User-Agent: Apache-HttpClient/4.5.10 (Java/11.0.13)\r\n
 > >     00124  \r\n

It indeed looks like a recently fixed problem related to the mandatory
comparison between the authority part of the request and the Host header
field, which do not match above since only one contains a port.


I don't know how pervasive this issue is on non-Java clients, but the 
sendCONNECTRequest() method from
Java's HttpURLConnection API is responsible for the authority/host mismatch 
when using native Java HTTP
support, and has been operating this way for a very long time:

     /**
      * send a CONNECT request for establishing a tunnel to proxy server
      */
     private void sendCONNECTRequest() throws IOException {
         int port = url.getPort();

         requests.set(0, HTTP_CONNECT + " " + connectRequestURI(url)
                          + " " + httpVersion, null);
         requests.setIfNotSet("User-Agent", userAgent);

         String host = url.getHost();
         if (port != -1 && port != url.getDefaultPort()) {
             host += ":" + String.valueOf(port);
         }
         requests.setIfNotSet("Host", host);

The Apache-HttpClient library has a similar issue as well (as demonstrated 
above).

More recent versions are applying scheme-based normalization which consists
in dropping the port from the comparison when it matches the scheme
(which is implicitly https here).


Is there an option other than using "accept-invalid-http-request" available to 
modify this behavior on the
haproxy side in 2.4?  I have also run into this with Java 8, 11 and 17 clients.

Are these commits what you are referring to about scheme-based normalization 
available in more recent
versions (2.5+):

https://github.com/haproxy/haproxy/commit/89c68c8117dc18a2f25999428b4bfcef83f7069e
(MINOR: http: implement http uri parser)
https://github.com/haproxy/haproxy/commit/8ac8cbfd7219b5c8060ba6d7b5c76f0ec539e978
(MINOR: http: use http uri parser for scheme)
https://github.com/haproxy/haproxy/commit/69294b20ac03497e33c99464a0050951bdfff737
(MINOR: http: use http uri parser for authority)

If so, I can pull those into my 2.4 build and see if that works better for Java 
clients.


Well, looks like you want a forward proxy like squid not a reverse proxy like 
haproxy.
https://en.wikipedia.org/wiki/HTTP_tunnel

As you haven't shared your config, I assume you are trying to use option http_proxy, which
will be deprecated.
http://cbonte.github.io/haproxy-dconv/2.5/configuration.html#4-option%20http_proxy


Andrew


Regards Alex



Re: invalid request

2022-01-12 Thread Aleksandar Lazic



On 12.01.22 21:52, Andrew Anderson wrote:


On Wed, Jan 12, 2022 at 11:58 AM Aleksandar Lazic <al-hapr...@none.at> wrote:

Well, looks like you want a forward proxy like squid not a reverse proxy 
like haproxy.


The application being load balanced is a proxy, so http_proxy is not a good fit (and as you mention on the 
deprecation list), but haproxy as a load balancer is a much better at front-ending this environment than 
any other solution available.


We upgraded to 2.4 recently, and a Java application that uses these proxy servers is what exposed this 
issue for us.  Even if we were to use squid, we would still run into this, as I would want to ensure that 
squid was highly available for the environment, and we would hit the same code path when going through 
haproxy to connect to squid.


The only option currently available in 2.4 that I am aware of is to setup internal-only frontend/backend 
paths with accept-invalid-http-request configured on those paths exclusively for Java clients to use. This 
is effectively how we have worked around this for now:


listen proxy
     bind :8080
     mode http
     option httplog
     server proxy1 192.0.2.1:8080
     server proxy2 192.0.2.2:8080

listen proxy-internal
     bind :8081
     mode http
     option httplog
     option accept-invalid-http-request
     server proxy1 192.0.2.1:8080 track proxy/proxy1
     server proxy2 192.0.2.2:8080 track proxy/proxy2

This is a viable workaround for us in the short term, but this would not be a solution that would work for 
everyone.  If the uri parser patches I found in the 2.5/2.6 branches are the right ones to make haproxy 
more permissive on matching the authority with the host in CONNECT requests, that will remove the need for 
the parallel frontend/backends without validation enabled.  I hope to be able to have time to test a 2.4 
build with those patches included over the next few days.


By design, HAProxy is a reverse proxy in front of an origin server, not a forwarding proxy,
which is why the CONNECT method is treated as an invalid method.

Because of that, I would not use "mode http" for the squid backend/servers, given the issues
you described.
Why not use "mode tcp" with the PROXY protocol
(http://www.squid-cache.org/Doc/config/proxy_protocol_access/) if you
need the client IP?
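A minimal sketch of that setup, reusing the hypothetical addresses from the workaround above; it assumes squid is configured with proxy_protocol_access to accept the PROXY header:

```
listen proxy-tcp
    bind :8080
    mode tcp
    option tcplog
    # forward the original client address to squid via the PROXY protocol
    server proxy1 192.0.2.1:8080 send-proxy check
    server proxy2 192.0.2.2:8080 send-proxy check
```

In "mode tcp" haproxy never parses the CONNECT request, so the authority/Host comparison is skipped entirely.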


Regards
Alex



Re: Problem: Port_443_lbb1/ - Error 400 BAD REQ

2022-02-01 Thread Aleksandar Lazic

Hi.

On 31.01.22 16:51, Roberto Carna wrote:

Dear all, I have haproxy-1.5.18-3.el7.x86_64 running OK.


You should consider using a maintained version, as 1.5 is End of Life in the
community.
https://www.haproxy.org/
https://github.com/DBezemer/rpm-haproxy


The development team is reporting an error after clicking on a given URL from an internal
app. We have two backend nodes, and when DEV points to just one node, the click is OK. So
we thought it was a persistent-session problem, so we set up a cookie. The sessions are now
persistent for the clients, but when DEV tests the click with the problem, it effectively
occurs again. This is in haproxy.log for this error:

10.10.1.14:59016 [31/Jan/2022:12:33:18.649] Port_443_lbb1~ Port_443_lbb1/** 
-1/-1/-1/-1/2232 *400* 187 - - PR-- 3/3/0/0/0 0/0 {|} "<*BADREQ*>"
10.10.1.14:59019 [31/Jan/2022:12:33:15.579] Port_443_lbb1~ app/NODE1 5824/0/0/54/5878 204 
610 - - --VN 3/3/0/1/0 0/0 {|app.company.com} "POST /api/data/UpdateRecentItems 
HTTP/1.1"

The backend is the following:

backend APP

balance roundrobin
        cookie SERVERID insert
        server NODE1 10.10.18.1:443 check cookie NODE2 ssl verify none
        server NODE2 10.10.18.2:443 check cookie NODE2 ssl verify none

If I remove the "check" option from the two lines, the error appears again:

Error 400 - Bad Request - Your browser sent an invalid request.

But when I point the browser to just one node editing the hosts file, the click 
works OK.

Please what can be the problem?

Thanks a lot !!!


Well, I suggest running haproxy with "-d" to see what happens, as it's a dev environment.
You should also try the "Network" view in the browser's Developer Tools
when you click.

Please also share more of the config, as the BADREQ could also originate in the
listen or frontend part.

Regards
Alex



Re: haproxy in windows

2022-02-10 Thread Aleksandar Lazic

Hi.

On 10/02/2022 10:25, Gowri Shankar wrote:

I'm trying to install haproxy for load balancing for my servers, but I'm
not able to install it from my Windows system. Is there haproxy available
for Windows? Please help us with documentation.


Well, I don't think there is a native Windows binary.
You can try to run haproxy in Cygwin or any other Linux environment on
Windows.

You can also try to port haproxy to Windows, but I think this is a huge amount
of work :-)

Hth
Alex



[PATCH] MINOR: sample: Add srv_rtt server round trip time sample

2022-02-23 Thread Aleksandar Lazic


Hi.

Here the first patch for feature request "New Balancing algorithm (Peak) EWMA 
#1570"

regards
AlexFrom e95bf6a4bf107fdc59696c4b4a4ef7b03133b813 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Thu, 24 Feb 2022 02:56:21 +0100
Subject: [PATCH] MINOR: sample: Add srv_rtt server round trip time sample

This sample fetch get the server round trip time

Part of feature request #1570

---
 doc/configuration.txt|  8 +++
 reg-tests/sample_fetches/srv_rtt.vtc | 34 
 src/tcp_sample.c | 15 
 3 files changed, 57 insertions(+)
 create mode 100644 reg-tests/sample_fetches/srv_rtt.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 572c79d55..be6a811c8 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -18958,6 +18958,14 @@ srv_name : string
   While it's almost only used with ACLs, it may be used for logging or
   debugging. It can also be used in a tcp-check or an http-check ruleset.
 
+srv_rtt(<unit>) : integer
+  Returns the Round Trip Time (RTT) measured by the kernel for the server
+  connection. <unit> is facultative, by default the unit is milliseconds. <unit>
+  can be set to "ms" for milliseconds or "us" for microseconds. If the server
+  connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
 7.3.4. Fetching samples at Layer 5
 --
 
diff --git a/reg-tests/sample_fetches/srv_rtt.vtc b/reg-tests/sample_fetches/srv_rtt.vtc
new file mode 100644
index 0..c0ad0cbae
--- /dev/null
+++ b/reg-tests/sample_fetches/srv_rtt.vtc
@@ -0,0 +1,34 @@
+varnishtest "srv_rtt sample fetch Test"
+
+#REQUIRE_VERSION=2.6
+
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+txresp
+} -start
+
+
+haproxy h1 -conf {
+defaults
+mode http
+timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
+timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
+timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"
+
+frontend fe
+bind "fd@${fe}"
+http-response set-header srv-rtt   "%[srv_rtt]"
+default_backend be
+
+backend be
+server srv1 ${s1_addr}:${s1_port}
+} -start
+
+client c1 -connect ${h1_fe_sock} {
+txreq -url "/"
+rxresp
+expect resp.status == 200
+expect resp.http.srv-rtt ~ "[0-9]+"
+} -run
diff --git a/src/tcp_sample.c b/src/tcp_sample.c
index 19edcd243..7b8b616cb 100644
--- a/src/tcp_sample.c
+++ b/src/tcp_sample.c
@@ -446,6 +446,20 @@ smp_fetch_fc_reordering(const struct arg *args, struct sample *smp, const char *
 		return 0;
 	return 1;
 }
+
+/* get the mean rtt of a server connection */
+static int
+smp_fetch_srv_rtt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 0))
+		return 0;
+
+	/* By default or if explicitly specified, convert rtt to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+	return 1;
+}
 #endif // linux || freebsd || netbsd
 #endif // TCP_INFO
 
@@ -478,6 +492,7 @@ static struct sample_fetch_kw_list sample_fetch_keywords = {ILH, {
 #ifdef TCP_INFO
 	{ "fc_rtt",   smp_fetch_fc_rtt,   ARG1(0,STR), val_fc_time_value, SMP_T_SINT, SMP_USE_L4CLI },
 	{ "fc_rttvar",smp_fetch_fc_rttvar,ARG1(0,STR), val_fc_time_value, SMP_T_SINT, SMP_USE_L4CLI },
+	{ "srv_rtt",  smp_fetch_srv_rtt,  ARG1(0,STR), val_fc_time_value, SMP_T_SINT, SMP_USE_L4CLI },
 #if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__)
 	{ "fc_unacked",   smp_fetch_fc_unacked,   ARG1(0,STR), var_fc_counter, SMP_T_SINT, SMP_USE_L4CLI },
 	{ "fc_sacked",smp_fetch_fc_sacked,ARG1(0,STR), var_fc_counter, SMP_T_SINT, SMP_USE_L4CLI },
-- 
2.25.1



Re: [PATCH] MINOR: sample: Add srv_rtt server round trip time sample

2022-02-25 Thread Aleksandar Lazic



Hi Willy.

On 25.02.22 14:54, Willy Tarreau wrote:

Hi Alex,

On Thu, Feb 24, 2022 at 03:03:59AM +0100, Aleksandar Lazic wrote:

Hi.

Here the first patch for feature request "New Balancing algorithm (Peak) EWMA 
#1570"


Note, I don't think it is needed for this algo as long as we instead
use measured response time and/or health check time. But regardless
it's something useful to have. A few comments below:



Thank you for your very valuable feedback.
I also think that the rtt information as a fetch sample could be useful.


 From e95bf6a4bf107fdc59696c4b4a4ef7b03133b813 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Thu, 24 Feb 2022 02:56:21 +0100
Subject: [PATCH] MINOR: sample: Add srv_rtt server round trip time sample

This sample fetch get the server round trip time


You should mention "TCP round trip time" since it's measured at the TCP
level.


+srv_rtt(<unit>) : integer
+  Returns the Round Trip Time (RTT) measured by the kernel for the server
+  connection. <unit> is facultative, by default the unit is milliseconds. <unit>
+  can be set to "ms" for milliseconds or "us" for microseconds. If the server
+  connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.


I would rather call it "bc_rtt" since it's not the server but the backend
connection. Technically speaking it indeed requires a connection to be
established and will only report the value for *this* connection and not
anything stateless related to the server. That's more in line with what we
have for the frontend connection already with fc_rtt.

You mentioned "unit" but it does not appear in the keyword syntax.

Also I think it would be useful to have all other fc_* from the same
section turned to bc_* (fc_rttvar, fc_retrans, etc), as it can sometimes
explain long response times in logs.


diff --git a/reg-tests/sample_fetches/srv_rtt.vtc 
b/reg-tests/sample_fetches/srv_rtt.vtc
new file mode 100644
index 0..c0ad0cbae
--- /dev/null
+++ b/reg-tests/sample_fetches/srv_rtt.vtc
@@ -0,0 +1,34 @@
+varnishtest "srv_rtt sample fetch Test"
+
+#REQUIRE_VERSION=2.6


Note, we *might* need to add a new macro to detect support for TCP_INFO.
We still don't have the config predicates to detect support for certain
keywords or sample fetch functions so that's not easy, but it's possible
that this test will break on some OS like cygwin. If so we could work
around this temporarily using "EXCLUDE_TARGETS" and in the worst case we
could mark it broken for the time it takes to completely solve this.


Agreed. The suggestion here is to add something like USE_GETSOCKOPT to be able to use
'#REQUIRE_OPTIONS=GETSOCKOPT', similar to USE_GETADDRINFO, right?


(...)

diff --git a/src/tcp_sample.c b/src/tcp_sample.c
index 19edcd243..7b8b616cb 100644
--- a/src/tcp_sample.c
+++ b/src/tcp_sample.c
@@ -446,6 +446,20 @@ smp_fetch_fc_reordering(const struct arg *args, struct 
sample *smp, const char *
return 0;
return 1;
  }
+
+/* get the mean rtt of a server connection */
+static int
+smp_fetch_srv_rtt(const struct arg *args, struct sample *smp, const char *kw, 
void *private)
+{
+   if (!get_tcp_info(args, smp, 1, 0))
+   return 0;
+
+   /* By default or if explicitly specified, convert rtt to ms */
+   if (!args || args[0].type == ARGT_STOP || args[0].data.sint == 
TIME_UNIT_MS)
+   smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+   return 1;
+}


That's another reason for extending the existing keywords, avoiding code
duplication. You can have all your new keywords map to the fc_* equivalent
and just change this:

  - if (!get_tcp_info(args, smp, 0, 0))
  + if (!get_tcp_info(args, smp, *kw == 'b', 0))

Please update the comments on top of the functions to mention that they're
called for both "fc_*" and "bc_*" depending on the side, and that's OK.


Let me go back to the "drawing board" and will send an update as soon as there
is one :-)


thanks,
Willy


Regards
Alex



Re: Active Internet-Draft: Suppressing CA Certificates in TLS 1.3

2022-02-28 Thread Aleksandar Lazic

Hi.

On 28.02.22 13:55, Branitsky, Norman wrote:

Future requirement for HAProxy?

https://datatracker.ietf.org/doc/draft-kampanakis-tls-scas-latest/


From my point of view, this draft depends heavily on the implementation of the
underlying TLS library.


For everyone who wants to know what this is about, here is a short intro citation.

```
1.  Introduction

   The most data heavy part of a TLS handshake is authentication.  It
   usually consists of a signature, an end-entity certificate and
   Certificate Authority (CA) certificates used to authenticate the end-
   entity to a trusted root CA.  These chains can sometime add to a few
   kB of data which could be problematic for some usecases.
   [EAPTLSCERT] and [EAP-TLS13] discuss the issues big certificate
   chains in EAP authentication.  Additionally, it is known that IEEE
   802.15.4 [IEEE802154] mesh networks and Wi-SUN [WISUN] Field Area
   Networks often notice significant delays due to EAP-TLS
   authentication in constrained bandwidth mediums.

   To alleviate the data exchanged in TLS [RFC8879] shrinks certificates
   by compressing them.  [CBOR-CERTS] uses different certificate
   encodings for constrained environments.  On the other hand, [CTLS]
   proposes the use of certificate dictionaries to omit sending CA
   certificates in a Compact TLS handshake.

   In a post-quantum context
   [I-D.hoffman-c2pq][NIST_PQ][I-D.ietf-tls-hybrid-design], the TLS
   authentication data issue is exacerbated.
   [CONEXT-PQTLS13SSH][NDSS-PQTLS13] show that post-quantum certificate
   chains exceeding the initial TCP congestion window (10MSS [RFC6928])
   will slow down the handshake due to the extra round-trips they

   introduce.  [PQTLS] shows that big certificate chains (even smaller
   than the initial TCP congestion window) will slow down the handshake
   in lossy environments.  [TLS-SUPPRESS] quantifies the post-quantum
   authentication data in QUIC and TLS and shows that even the leanest
   post-quantum signature algorithms will impact QUIC and TLS.
   [CL-BLOG] also shows that 9-10 kilobyte certificate chains (even with
   30MSS initial TCP congestion window) will lead to double digit TLS
   handshake slowdowns.  What's more, it shows that some clients or
   middleboxes cannot handle chains larger than 10kB.


```


*Norman Branitsky*
Senior Cloud Architect
Tyler Technologies, Inc.

P: 416-916-1752
C: 416.843.0670
www.tylertech.com







Re: Is there some kind of program that mimics a problematic HTTP server?

2022-03-01 Thread Aleksandar Lazic



Hi Shawn.

On 01.03.22 23:09, Shawn Heisey wrote:

I was thinking about ways to help pinpoint problems a client is having 
connecting to services.  And a thought
occurred to me.

Is there any kind of software available that can stand up a broken HTTP server, 
such that it is broken in very
specific and configurable ways?

Imagine a bit of software that can listen on a port and exhibit configurable 
failure scenarios.  Including but
certainly not limited to these:

* SSL negotiation issues
* Simulate dropped packets by ignoring incoming packets or failing to send 
outgoing packets.
* Timeouts, delays, no response, or incorrect behavior at various phases:
** TCP
** SSL
** GET/HEAD/POST

Does anything like this already exist?  It would be an awesome troubleshooting 
tool.  Configure it to fail in some
way, have a client try connecting to it with their software, and if they get 
the same error that they do when trying
it with the real server, then you've possibly pinpointed what the problem on 
the real server is, without diving into
logs or packet captures.  And the client may not know anything about the software 
they're using other than "it works
fine connecting to XXX", making them an exceedingly unreliable source of 
information.

So I'm not interested in something that can analyze network traffic or logs.  I 
can already do that. I am imagining
a server that can intentionally misbehave.

And here's why I am asking my question on the haproxy mailing list:  I think 
haproxy itself would serve as the
perfect starting point for this idea.  Imagine having configuration directives 
for haproxy that tell it to
intentionally misbehave, either on the frontend or the backend.  It could run 
side by side with a production
instance, on another port or on another machine, with a nearly identical config 
to production that has misbehave
configuration directives.

Side note: I think haproxy would be a perfect fit at $DAY_JOB to replace a 
couple of problematic pieces of software,
but I until I understand better how that software is configured, I can't 
mention it as a possible solution. I really
like haproxy. Please keep up the good work.  I'm looking for ways I can 
contribute to the project's success.


I don't know of such a tool, but this sounds like an interesting project idea.

Maybe some parts could be done via Lua, but as HAProxy internally handles a lot of errors,
it could be tricky to force HAProxy to behave "weird" and not standard-compliant.
http://www.arpalert.org/src/haproxy-lua-api/2.5/index.html

As you can see in Tim's repo https://github.com/TimWolla/h-app-roxy, HAProxy and Lua can be
quite a powerful combination.
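Until such a tool exists, plain configuration already covers a few simple failure modes. A rough sketch, before reaching for Lua (the names, ports, and backend address are made up):

```
frontend misbehave
    bind :8088
    mode http
    # answer one path with a bare 500
    http-request deny deny_status 500 if { path /fail }
    # drop the connection without any response, simulating a dead peer
    http-request silent-drop if { path /drop }
    default_backend be_ok

backend be_ok
    server ok 127.0.0.1:8000
```

Harder failure modes (broken TLS handshakes, mid-response stalls, malformed framing) would indeed need dedicated code, which is where the Lua idea comes in.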


Thanks,
Shawn


Regards
Alex



Re: Rpm version 2.4.14

2022-03-15 Thread Aleksandar Lazic



On 15.03.22 05:36, Eli Bechavod wrote:

Hii guys,
I am looking for an rpm for version 2.4.14 and didn't find one.

Why did the CentOS/RHEL base images stop at 1.8? I saw that I can install
with a makefile, but that is the old way .. :(

I would like to know if you have any solutions.


You can create an rpm based on this repo:
https://github.com/DBezemer/rpm-haproxy


Thanks
Eli


Regards
Alex



Re: [ANNOUNCE] haproxy-2.6-dev4

2022-03-26 Thread Aleksandar Lazic
Hi Willy.

On Sat, 26 Mar 2022 10:22:02 +0100
Willy Tarreau  wrote:

> Hi,
> 
> HAProxy 2.6-dev4 was released on 2022/03/26. It added 80 new commits
> after version 2.6-dev3.
> 
> The activity started to calm down a bit, which is good because we're
> roughly 2 months before the release and it will become important to avoid
> introducing last-minute regressions.
> 
> This version mostly integrates fixes for various bugs in various places
> like stream-interfaces, QUIC, the HTTP client or the trace subsystem. The
> remaining patches are mostly QUIC improvements and code cleanups. In
> addition the MQTT protocol parser was extended to also support MQTTv3.1.
> 
> A change discussed around previous announce was made in the H2 mux: the
> "timeout http-keep-alive" and "timeout http-request" are now respected
> and work as documented, so that it will finally be possible to force such
> connections to be closed when no request comes even if they're seeing
> control traffic such as PING frames. This can typically happen in some
> server-to-server communications whereby the client application makes use
> of PING frames to make sure the connection is still alive. I intend to
> backport this after some time, probably to 2.5 and later 2.4, as I've
> got reports about stable versions currently posing this problem.
> 
> I'm expecting to see another batch of stream-interface code refactoring
> that Christopher is still working on. This is a very boring and tedious
> task that should significantly lower the long-term maintenance effort,
> so I'm willing to wait a little bit for such changes to be ready. What
> this means for users is a reduction of the bugs we've seen over the last
> 2-3 years alternating between truncated responses and never-dying
> connections and that result from the difficulty to propagate certain
> events across multiple layers.
> 
> Also William still has some updates to finish on the HTTP client
> (connection retries, SSL cert verification and host name resolution
> mainly). On the paper, each of them is relatively easy, but practically,
> since the HTTP client is the first one of its category, each attempt to
> progress is stopped by the discovery of a shortcoming or bug that were
> not visible before. Thus the progress takes more time than desired but
> as a side effect, the core code gets much more reliable by getting rid
> of these old issues.
> 
> One front that made impressive progress over the last few months is QUIC.
> While a few months ago we were counting the number of red boxes on the
> interop tests at https://interop.seemann.io/ to figure what to work on as
> a top priority, now we're rather counting the number of tests that report
> a full-green state, and haproxy is now on par with other servers in these
> tests. Thus the idea emerged, in order to continue to make progress on
> this front, to start to deploy QUIC on haproxy.org so that interoperability
> issues with browsers and real-world traffic can be spotted. A few attempts
> were made and already revealed issues so for now it's disabled again. Be
> prepared to possibly observe a few occasional hiccups when visiting the
> site (and if so, please do complain to us). The range of possible issues
> would likely be frozen transfers and truncated responses, but these should
> not happen.
> 
> From a technical point, the way it's done is by having a separate haproxy
> process listening to QUIC on UDP port 1443, and forwarding HTTP requests
> to the existing process. The main process constantly checks the QUIC one,
> and when it's seen as operational, it appends an Alt-Svc header that
> indicates to the client that an HTTP/3 implementation is available on port
> 1443, and that this announcement is valid for a short time (we'll leave it to
> one minute only so that issues can resolve quickly, but for now it's only
> 10s so that quick tests cause no harm):
> 
> http-response add-header alt-svc 'h3=":1443"; ma=60' if \
>{ var(txn.host) -m end haproxy.org } { nbsrv(quic) gt 0 }
> 
> As such, compatible browsers are free to try to connect there or not. Other
> tools (such as git clone) will not use it. For those impatient to test it,
> the QUIC process' status is reported at the bottom of the stats page here:
> http://stats.haproxy.org/. The "quic" socket in the frontend at the top
> reports the total traffic received from the QUIC process, so if you're
> seeing it increase while you reload the page it's likely that you're using
> QUIC to read it. In Firefox I'm having this little plugin loaded:
> 
>   https://addons.mozilla.org/en-US/firefox/addon/http2-indicator/
> 
> It displays a small flash on the URL bar with different colors depending
> on the protocol used to load the page (H1/SPDY/H2/H3). When that works it's
> green (H3), otherwise it's blue (H2).
> 
> At this point I'd still say "do not reproduce these experiments at home".
> Amaury and Fred are still watching the process' traces very closely to
> spot bugs and stop it as soon

[PATCH] DOC: remove double blanks in configuration.txt

2022-03-29 Thread Aleksandar Lazic
Hi.

This patch removes some double blanks.

Regards
Alex
>From a65450d3da357c659b00bd3ecb5a038a1f827692 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Wed, 30 Mar 2022 00:11:40 +0200
Subject: [PATCH] DOC: remove double blanks in configuration.txt

Double blanks in keywords are not good for the html documentation parser.
This commit fixes the double blanks for tcp-request content use-service

---
 doc/configuration.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 87ae43809..cb05fef91 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -3118,7 +3118,7 @@ server  [:] [param*]
   As previously mentioned, "peer" keyword may be replaced by "server" keyword
   with a support for all "server" parameters found in 5.2 paragraph.
   If the underlying peer is local, : parameters must not be present.
-  These parameters must  be provided on a "bind" line (see "bind" keyword
+  These parameters must be provided on a "bind" line (see "bind" keyword
   of this "peers" section).
   Some of these parameters are irrelevant for "peers" sections.
 
@@ -12553,7 +12553,7 @@ tcp-request content unset-var() [ { if | unless }  ]
   This is used to unset a variable. Please refer to "http-request set-var" for
   details about variables.
 
-tcp-request content  use-service   [ { if | unless }  ]
+tcp-request content use-service  [ { if | unless }  ]
 
   This action is used to executes a TCP service which will reply to the request
   and stop the evaluation of the rules. This service may choose to reply by
-- 
2.25.1



Stupid question about nbthread and maxconn

2022-04-23 Thread Aleksandar Lazic
Hi.

I'm not sure if I understand the doc properly.

https://docs.haproxy.org/2.2/configuration.html#nbthread
```
This setting is only available when support for threads was built in. It
makes haproxy run on  threads. This is exclusive with "nbproc". While
"nbproc" historically used to be the only way to use multiple processors, it
also involved a number of shortcomings related to the lack of synchronization
between processes (health-checks, peers, stick-tables, stats, ...) which do
not affect threads. As such, any modern configuration is strongly encouraged
to migrate away from "nbproc" to "nbthread". "nbthread" also works when
HAProxy is started in foreground. On some platforms supporting CPU affinity,
when nbproc is not used, the default "nbthread" value is automatically set to
the number of CPUs the process is bound to upon startup. This means that the
thread count can easily be adjusted from the calling process using commands
like "taskset" or "cpuset". Otherwise, this value defaults to 1. The default
value is reported in the output of "haproxy -vv". See also "nbproc".
```

https://docs.haproxy.org/2.2/configuration.html#3.2-maxconn
```
Sets the maximum per-process number of concurrent connections to . It
is equivalent to the command-line argument "-n". Proxies will stop accepting
connections when this limit is reached. The "ulimit-n" parameter is
automatically adjusted according to this value. See also "ulimit-n". Note:
the "select" poller cannot reliably use more than 1024 file descriptors on
some platforms. If your platform only supports select and reports "select
FAILED" on startup, you need to reduce maxconn until it works (slightly
below 500 in general). If this value is not set, it will automatically be
calculated based on the current file descriptors limit reported by the
"ulimit -n" command, possibly reduced to a lower value if a memory limit
is enforced, based on the buffer size, memory allocated to compression, SSL
cache size, and use or not of SSL and the associated maxsslconn (which can
also be automatic).

```

Let's say we have the following setup.

```
maxconn 2
nbthread 4
```

My understanding is that HAProxy will accept 2 concurrent connections, right?
Even when I increase nbthread, HAProxy will *NOT* accept more than 2
concurrent connections, right?

Increasing nbthread will "only" mean that the performance will be "better" on,
let's say, a 32-CPU machine, especially for the upcoming 2.6 :-)

https://docs.microsoft.com/en-us/azure/virtual-machines/dv3-dsv3-series#dsv3-series
=> Standard_D32s_v3: 32 CPU, 128G RAM

What confuses me is "maximum per-process" in the maxconn doc: will every
thread handle maxconn connections, or is this for the whole HAProxy instance?

More mathematically :-O.
2 * 4 = 8
or
2 * 4 = 2

Regards
Alex
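To make the question concrete, here is the kind of configuration I mean (all
values are placeholders, and the comments reflect my reading of the docs, not
a confirmed answer):

```
global
    maxconn  2   # my reading: a process-wide limit shared by all threads,
                 # so the whole instance accepts at most 2 connections
    nbthread 4   # threads only spread the work over more CPUs; they do
                 # not multiply the maxconn limit

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
```

In other words, under this reading the answer would be 2 * 4 = 2, not 2 * 4 = 8.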



Learning from Spam (was: Re: Social media marketing Plans from Scratch haproxy.org)

2022-04-26 Thread Aleksandar Lazic
Hi,

On Tue, 26 Apr 2022 03:32:16 -0700
Ivana Paul  wrote:

> Hello haproxy.org

[SPAM Content]

New idea for a spam "learning platform" :-)

I had never heard of "SMO services" before, and now I know what it stands for:
Social Media Optimization (SMO) Services.

Regards
Alex



Re: Set environment variables

2022-04-26 Thread Aleksandar Lazic
On Tue, 26 Apr 2022 15:03:51 +0200
Valerio Pachera  wrote:

> Hi, I have several backend configuration that make use of a custom script:
> 
> external-check command 'custom-script.sh'
> 
> The script read uses the environment variables such as $HAPROXY_PROXY_NAME.
> I would like to be able to set an environment variable in the backend
> declaration, before running the external check.
> This environment variable will change the behavior of custom-script.sh.
> 
> Is it possible to declare environment variables in haproxy 1.9 or later?
> 
> What I need is to make custom-script.sh aware if SSL is used or not.
> If there's another way to achieve that, please tell me.

Well, you can encode it in the name of the server, as I don't see any other
option for passing extra variables to the external check.

https://git.haproxy.org/?p=haproxy.git;a=blob;f=src/extcheck.c;hb=e50aabe443125eb94e3e7823c387125ca7e0c302#l81

```
  81 const struct extcheck_env extcheck_envs[EXTCHK_SIZE] = {
  82 [EXTCHK_PATH]   = { "PATH",   
EXTCHK_SIZE_EVAL_INIT },
  83 [EXTCHK_HAPROXY_PROXY_NAME] = { "HAPROXY_PROXY_NAME", 
EXTCHK_SIZE_EVAL_INIT },
  84 [EXTCHK_HAPROXY_PROXY_ID]   = { "HAPROXY_PROXY_ID",   
EXTCHK_SIZE_EVAL_INIT },
  85 [EXTCHK_HAPROXY_PROXY_ADDR] = { "HAPROXY_PROXY_ADDR", 
EXTCHK_SIZE_EVAL_INIT },
  86 [EXTCHK_HAPROXY_PROXY_PORT] = { "HAPROXY_PROXY_PORT", 
EXTCHK_SIZE_EVAL_INIT },
  87 [EXTCHK_HAPROXY_SERVER_NAME]= { "HAPROXY_SERVER_NAME",
EXTCHK_SIZE_EVAL_INIT },
  88 [EXTCHK_HAPROXY_SERVER_ID]  = { "HAPROXY_SERVER_ID",  
EXTCHK_SIZE_EVAL_INIT },
  89 [EXTCHK_HAPROXY_SERVER_ADDR]= { "HAPROXY_SERVER_ADDR",
EXTCHK_SIZE_ADDR },
  90 [EXTCHK_HAPROXY_SERVER_PORT]= { "HAPROXY_SERVER_PORT",
EXTCHK_SIZE_UINT },
  91 [EXTCHK_HAPROXY_SERVER_MAXCONN] = { "HAPROXY_SERVER_MAXCONN", 
EXTCHK_SIZE_EVAL_INIT },
  92 [EXTCHK_HAPROXY_SERVER_CURCONN] = { "HAPROXY_SERVER_CURCONN", 
EXTCHK_SIZE_ULONG },
  93 };
```

> Thank you.

Hth
Alex
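As a concrete sketch of that server-name workaround (the "-ssl" suffix
convention and the script name are made up for illustration):

```
backend be_app_ssl
    option external-check
    external-check command 'custom-script.sh'
    # hypothetical convention: put "-ssl" in the server name so the script
    # can test $HAPROXY_SERVER_NAME to decide whether SSL is in use
    server web1-ssl 192.0.2.10:443 check ssl verify none
```

The script could then branch on something like `case "$HAPROXY_SERVER_NAME" in
*-ssl) ... ;; esac` to detect the SSL case.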



Re: Stupid question about nbthread and maxconn

2022-04-26 Thread Aleksandar Lazic
Hi.

Anyone any Idea about the question below?

Regards
Alex

On Sat, 23 Apr 2022 11:05:36 +0200
Aleksandar Lazic  wrote:

> Hi.
> 
> I'm not sure if I understand the doc properly.
> 
> https://docs.haproxy.org/2.2/configuration.html#nbthread
> ```
> This setting is only available when support for threads was built in. It
> makes haproxy run on  threads. This is exclusive with "nbproc". While
> "nbproc" historically used to be the only way to use multiple processors, it
> also involved a number of shortcomings related to the lack of synchronization
> between processes (health-checks, peers, stick-tables, stats, ...) which do
> not affect threads. As such, any modern configuration is strongly encouraged
> to migrate away from "nbproc" to "nbthread". "nbthread" also works when
> HAProxy is started in foreground. On some platforms supporting CPU affinity,
> when nbproc is not used, the default "nbthread" value is automatically set to
> the number of CPUs the process is bound to upon startup. This means that the
> thread count can easily be adjusted from the calling process using commands
> like "taskset" or "cpuset". Otherwise, this value defaults to 1. The default
> value is reported in the output of "haproxy -vv". See also "nbproc".
> ```
> 
> https://docs.haproxy.org/2.2/configuration.html#3.2-maxconn
> ```
> Sets the maximum per-process number of concurrent connections to . It
> is equivalent to the command-line argument "-n". Proxies will stop accepting
> connections when this limit is reached. The "ulimit-n" parameter is
> automatically adjusted according to this value. See also "ulimit-n". Note:
> the "select" poller cannot reliably use more than 1024 file descriptors on
> some platforms. If your platform only supports select and reports "select
> FAILED" on startup, you need to reduce maxconn until it works (slightly
> below 500 in general). If this value is not set, it will automatically be
> calculated based on the current file descriptors limit reported by the
> "ulimit -n" command, possibly reduced to a lower value if a memory limit
> is enforced, based on the buffer size, memory allocated to compression, SSL
> cache size, and use or not of SSL and the associated maxsslconn (which can
> also be automatic).
> 
> ```
> 
> Let's say we have the following setup.
> 
> ```
> maxconn 2
> nbthread 4
> ```
> 
> My understanding is that HAProxy will accept 2 concurrent connection,
> right? Even when I increase the nbthread will HAProxy *NOT* accept more then
> 2 concurrent connection, right?
> 
> The increasing of nbthread will "only" change that the performance will be
> "better" on a let's say 32 CPU Machine, especially for the upcoming 2.6 :-)
> 
> https://docs.microsoft.com/en-us/azure/virtual-machines/dv3-dsv3-series#dsv3-series
> => Standard_D32s_v3: 32 CPU, 128G RAM
> 
> What confuses me is "maximum per-process" in the maxconn docu part, will every
> thread handle the maxconn or is this for the whole HAProxy instance.
> 
> More mathematically :-O.
> 2 * 4 = 8
> or
> 2 * 4 = 2
> 
> Regards
> Alex
> 




Re: Networking

2022-04-30 Thread Aleksandar Lazic
Hi Nick.

On Sat, 30 Apr 2022 05:44:09 +
Nick Owen  wrote:

> So I am pretty new to networking and I am not quite sure how to set up the
> config file correctly. I just want a simple reverse proxy and I have created
> a diagram to show you how’d I’d like it configured. If you have any sites or
> examples that could point me in the right direction that’d be great.

Well, first of all, please take some time, without any kind of pressure, to
dig into the topic; load balancing and HAProxy can get quite complex quite
fast.

There are very good articles on the HAProxy blog which explain some basics
about load balancing and HAProxy.
https://www.haproxy.com/blog/category/basics/

For your diagram below, this blog post is helpful and shows you a good
starting configuration.
https://www.haproxy.com/blog/haproxy-configuration-basics-load-balance-your-servers/

HAProxy has very detailed documentation which shows you how flexible HAProxy
is.

https://docs.haproxy.org/

Best regards
Alex
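For completeness, a minimal reverse-proxy configuration in the spirit of that
blog post (all names and addresses are placeholders; adapt them to your
diagram):

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend fe_web
    bind :80
    default_backend be_app

backend be_app
    balance roundrobin
    # replace with the real address(es) of your backend server(s)
    server app1 192.0.2.10:8080 check
```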



Re: Download Question

2022-05-02 Thread Aleksandar Lazic
Hi.

On Mon, 2 May 2022 14:44:45 +
Dave Swinton  wrote:

> Do you have a repository for the current releases in RPM? We are currently
> using 1.8 but would like to move to 2.5.x after some internal testing but
> don't see any direct links to an RPM from the download page.

You can build your own version based on this repo.

https://github.com/DBezemer/rpm-haproxy

Regards
Alex

> Thank you.
> 
> David Swinton
> RedIron Technologies
> Mobile: (925) 864-1783
> Email:  dave.swin...@redirontech.com
> 
> 




Re: Paid feature development: TCP stream compression

2022-05-19 Thread Aleksandar Lazic
Hi Mark.

On Thu, 19 May 2022 17:29:37 +0100
Mark Zealey  wrote:

> Hi there,
> 
> We are using HAProxy to terminate and balance TCP streams (XMPP) between
> our apps and our service infrastructure. We are currently running
> XMPP-level gzip compression but I'm interested in potentially shifting
> this to the haproxy layer - basically everything on the connection would
> be compressed with gzip, brotli or similar.
> 
> If you would be interested in doing paid development on haproxy for 
> this, please
> drop me a line with some details about roughly how much it would cost 
> and how
> long it would take. Any development work done for this would be
> contributed back to the open source haproxy edition.

That sounds really great, thank you for this offer :-)

I suggest getting in touch with cont...@haproxy.com, as that's the company
behind HAProxy.

> Thanks,
> 
> Mark

Regards
Alex



Re: Paid feature development: TCP stream compression

2022-05-20 Thread Aleksandar Lazic
On Fri, 20 May 2022 12:16:07 +0100
Mark Zealey  wrote:

> Thanks, we may use this for a very rough proof-of-concept. However we 
> are dealing with millions of concurrent connections, 10-100 million 
> connections per day, so we'd prefer to pay someone to develop (+ test!) 
> something for haproxy which will work at this scale

Well, at this scale you will for sure have more than one HAProxy instance. :-)

Do you want all the HAProxy instances together to have the same "knowledge"
about the connections? In other words, should the implementation consider
using the peers protocol?
Do you expect some XMPP protocol knowledge in the implementation?

> Mark
> 
> On 20/05/2022 10:12, Илья Шипицин wrote:
> > in theory, you can try OpenVPN with compression enabled.
> > or maybe stunnel with compression stunnel TLS Proxy 
> > 
> >
> > пт, 20 мая 2022 г. в 13:59, Mark Zealey :
> >
> > Good point, I forgot to mention that bit. We will be
> > TLS-terminating the connection on haproxy itself so
> > compress/decompress would happen after the plain stream has been
> > received, prior to being forwarded (in plain, or re-encrypted with
> > TLS) to the backends.
> >
> > So:
> >
> > app generates gzip+tls TCP stream -> haproxy: strip TLS, gunzip ->
> > forward TCP to backend servers
> >
> > We don't have any other implementation of this, at the moment it
> > is just an idea we would like to implement.
> >
> > Mark
> >
> >
> > On 20/05/2022 09:54, Илья Шипицин wrote:
> >> isn't it SSL encapsulated ? how is compression is supposed to
> >> work in details ?
> >> any other implementation to look at ?
> >>
> >> чт, 19 мая 2022 г. в 21:32, Mark Zealey :
> >>
> >> Hi there,
> >>
> >> We are using HAProxy to terminate and balance TCP streams
> >> (XMPP) between
> >> our apps and our service infrastructure. We are currently running
> >> XMPP-level gzip compression but I'm interested in potentially
> >> shifting
> >> this to the haproxy layer - basically everything on the
> >> connection would
> >> be compressed with gzip, brotli or similar.
> >>
> >> If you would be interested in doing paid development on
> >> haproxy for
> >> this, please
> >> drop me a line with some details about roughly how much it
> >> would cost
> >> and how
> >> long it would take. Any development work done for this would be
> >> contributed back to the open source haproxy edition.
> >>
> >> Thanks,
> >>
> >> Mark
> >>
> >>




Re: how to install on RHEL7 and 8

2022-05-24 Thread Aleksandar Lazic
Hi.

On Tue, 24 May 2022 20:56:14 +
"Alford, Mark"  wrote:

> Do you have instructions on the exact libraries needed to do the full
> install on RHEL 7 and RHEL 8?
> 
> I read the INSTALL doc in the tarball, then ran the make command, and it
> failed because of LUA, but lua.2.5.3 is installed

Please post the full steps you performed, together with the error.
Wild guess: the dev rpms are not installed.

Maybe this repo with the specs helps you to find the error.
https://github.com/DBezemer/rpm-haproxy


> Please help
> 
> Mark Alford
> Security+
> IT Specialist (System Administrator)
> Office of Research and Development,
> Center for Computational Toxicology and
> Exposure
> Scientific Computing and Data Curation Division Application Development Branch
> 
> e: alford.m...@epa.gov
> t: (919) 541-4177
> m: (413) 358-0407
> 
> 
> If I am not the Federal Contracting Officer or Contracting Officer
> Representative (CO/COR) on your contract please do not consider this
> technical direction (TD). Any TD will be formally identified and/or
> documented from your CO or COR.
> 




Re: how to install on RHEL7 and 8

2022-05-28 Thread Aleksandar Lazic
Hi Ryan.

On Thu, 26 May 2022 13:28:58 -0500
"Ryan O'Hara"  wrote:

> On Wed, May 25, 2022 at 11:15 AM William Lallemand 
> wrote:
> 
> > On Tue, May 24, 2022 at 08:56:14PM +, Alford, Mark wrote:
> > > Do you have instruction on the exact library needed to fo the full
> > install on RHEL 7 and RHEL 8
> > >
> > > I read the INSTALL doc in the tar ball and the did the make command and
> > it failed because of LUA but lua.2.5.3 is installed
> > >
> > > Please help
> > >
> > >
> > Hello,
> >
> > I'm using this thread to launch a call for help about the redhat
> > packaging.
> >
> 
> I am the maintainer for all the Red Hat and Fedora packages. Feel free to
> ask questions here on the mailing list or email me directly.
> 
> 
> 
> > We try to document the list of available packages here:
> > https://github.com/haproxy/wiki/wiki/Packages
> >
> > The IUS repository is know to work but only provides packages as far as
> > 2.2. no 2.3, 2.4 or 2.5 are there but I'm seeing an open ticket for
> > the 2.4 here: https://github.com/iusrepo/wishlist/issues/303
> >
> > Unfortunately nobody ever step up to maintain constantly the upstream
> > releases for redhat/centos like its done for ubuntu/debian on
> > haproxy.debian.net.
> >
> 
> I try to keep Fedora up to date with latest upstream, but once a release
> goes into a specific Fedora release (eg. haproxy-2.4 in Fedora 35) I don't
> update to haproxy-2.5 in that same release. I have in the past and I get
> angry emails about rebasing to a newer release. I've spoken to Willy about
> this in the past and we seem to be in agreement on this.
> 
> RHEL is different. We almost never rebase to a later major release for the
> lifetime of RHEL. The one exception was when we added haproxy-1.8 to RHSCL
> (software collections) in RHEL7 since the base RHEL7 had haproxy-1.5 and
> there were significant features added to the 1.8 release.
> 
> I get this complaint often for haproxy in RHEL. Keep in mind that RHEL is
> focused on consistency and stability over a long period of time. I can't
> stress this enough - it is extremely rare to rebase to a new, major release
> of haproxy (or anything else) in a major RHEL release. For example, RHEL9
> has haproxy-2.4 and will likely always have that version. I do often rebase
> to newer minor release to pick up bug fixes (eg. haproxy-2.4.8 will be
> updated to haproxy-2.4.17, but very unlikely to be anything beyond the
> latest 2.4 release). I understand this is not for everybody.

Well written, and I'm fully aware of the pros and cons of that strategy.

Let me make a suggestion.
Offer the latest HAProxy as an RPM in something like EPEL or an extra repo,
and keep the supported one in the main repo.

As far as I can see, there is already an EPEL entry for HAProxy
https://bugzilla.redhat.com/buglist.cgi?bug_status=__open__&component=haproxy
as described here:
https://docs.fedoraproject.org/en-US/epel/epel-package-request/

The issue for some users is that no RPM is available unless they build the
rpm on their own with https://github.com/DBezemer/rpm-haproxy.
Thanks David for keeping this repo up to date.

It looks like this is the source of the HAProxy builds for CentOS and RHEL,
isn't it?
https://git.centos.org/rpms/haproxy/branches?branchname=c8s

How about adding a branch there, "upstream" or similar, which uses the latest
LTS version, as even the HAProxy community only supports the LTS versions for
a long time.

Another idea is to add another repo under
https://github.com/orgs/haproxy/repositories, like "linux-distro-build-sources",
and put the RPM, deb, and other build files for various Linux distributions
there. Then, if a user wants to offer an rpm or deb, the build config can be
used from there, similar to Vincent's great work for the Debian distribution.

As I know that some enterprise companies do not allow EPEL or other
non-"official" RHEL repos in their setups, this would be an option to offer
them the latest HAProxy for their systems.

The solution to the problem "latest HAProxy on an RPM-based system" would then
be to use the upstream rpm, or to build their own rpm based on the official
"linux-distro-build-sources" repo from
https://github.com/orgs/haproxy/repositories

Well yes, the name is up for discussion :-)

jm2c

> > Maybe it could be done with IUS, its as simple as a pull request on
> > their github for each new release, but someone need to be involve.
> >
> > I'm not a redhat user, but from time to time someone is asking for a
> > redhat package and nothing is really available and maintained outside of
> > the official redhat one.
> >
> 
> As mentioned elsewhere, COPR is likely the best place for this. It had been
> awhile since I've used it, but there have been times I did special,
> unsupported builds in COPR for others to use.
> 
> Hope this helps.
> 
> Ryan




Re: [ANNOUNCE] haproxy-2.6-dev12

2022-05-28 Thread Aleksandar Lazic
Hi.

On Sat, 28 May 2022 11:42:17 +
Ajay Mahto  wrote:

> Unsubscribe me.

Feel free to do it yourself.
https://www.haproxy.org/#tact

Regards
Alex

> Regards,
> 
> Ajay Kumar Mahto,
> Lead DevOps Engineer,
> NPCI, Hyderabad
> +91 8987510264
> 
> From: Willy Tarreau 
> Sent: Friday, May 27, 2022 11:55:07 PM
> To: haproxy@formilux.org 
> Subject: [ANNOUNCE] haproxy-2.6-dev12
> 
> WARNING: This mail has come from outside. Please verify sender, attachment
> and URLs before clicking on them.
> 
> Hi,
> 
> HAProxy 2.6-dev12 was released on 2022/05/27. It added 149 new commits
> after version 2.6-dev11.
> 
> Yeah I know, I said we'll only issue -dev12 if we face some trouble. But
> stay cool, we didn't face any trouble. However we figured that it would
> help last-minute testers to have a final tagged version.
> 
> The vast majority of patches are tagged CLEANUP and MINOR. That's great.
> 
> One old github issue was finally addressed, regarding the HTTP version
> validation. In the past we used to accept any 4-letter protocol using
> letters H,P,R,S,T, which allowed us to match both HTTP and RTSP. But it
> was reported to cause trouble because it was neither possible to disable
> RTSP support nor to extend this to other protocols. The problem with having
> RTSP enabled by default is that if haproxy forwards it to a backend server
> that doesn't know it, the server may respond with an HTTP/0.9 error that
> will be blocked by haproxy which then returns a 502 error. That's no big
> deal until you're watching your load balancer's logs and counters.
> 
> So now by default only HTTP is accepted, and this can be relaxed by
> adding "accept-invalid-http-request". To be honest, I really doubt that
> there are that many people using RTSP, given that we never ever get any
> single problem report about it, so I think it will not be a big deal to
> add this option in such cases so that all other users gain in serenity.
> This will likely be backported but if so, very slowly as this will be a
> behavior change, albeit a very small one.
> 
> Some polishing was done on QUIC, to improve the behavior on closing
> connections and stopping the process, and error processing in general.
> The maintainability was also improved by refactoring certain areas.
> Ah, crap, I just noticed that we missed a few patches from Fred who
> added some doc and a few settings!
> 
> The conn_streams that were holding up the release are now gone. It took
> two of us two full days of code analysis and head scratching to figure
> the role of certain antique flags and give them a more appropriate name,
> but that was really necessary. I must admit I really like the new model
> in 2.6, it's much more consistent and logical than 2.5 and older. It's
> visible in that it's easier to document and explain. And even during the
> changes it was easier to figure the field names for parts that had to be
> changed manually.
> 
> There are a bit more patches than I initially expected because this time
> I refused to leave poorly named function arguments and local variables:
> we've suffered from this for many years where process_stream() used to
> have a "struct stream *sess" and the session was "sess->sess". I didn't
> want to experience this anymore, we need the code to be more intuitive
> and readable especially for new contributors, and given the large amount
> of changes since 2.5 that will complicate backports anyway, it was the
> perfect opportunity to pursue that quest. While these changes represent
> many patches, they're essentially renames. There's always the tiny risk
> of an undetected mistake but all of them are trivial, were reviewed
> multiple times, built and individually tested so I'm not worried (famous
> last words :-)).
> 
> Some of us will continue testing over the week-end (it's already deployed
> on haproxy.org). I think we'll add a few bits of doc, Fred's patches that
> we missed, maybe a fix or two for last minute issues, and I expect to
> release on Tuesday (because Mondays are usually too short).
> 
> Please find the usual URLs below :
>Site index   : http://www.haproxy.org/
>Documentation: http://docs.haproxy.org/
>Wiki : https://ind01.safelinks

Re: Rate Limiting with token/leaky bucket algorithm

2022-06-03 Thread Aleksandar Lazic
Hi.

On Fri, 3 Jun 2022 17:12:25 +0200
Seena Fallah  wrote:

> When using the below config to get 100req/s rate limiting, after exceeding
> 100req/s all of the reqs get denied, not only the reqs above 100req/s!
> ```
> listen test
> bind :8000
> stick-table  type ip  size 100k expire 30s store http_req_rate(1s)
> http-request track-sc0 src
> http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
> http-request return status 200 content-type "text/plain" lf-string "200
> OK"
> ```
> 
> Is there a way to deny reqs more than 100 not all of them?
> For example, if we have 1000req/s, 100reqs get "200 OK" and the rest of
> them (900reqs) gets "429"?

Yes.

Here are some examples with explanation.
https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/

Here are some search results; maybe some of those examples help you too.
https://html.duckduckgo.com/html?q=haproxy%20rate%20limiting

Regards
Alex
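One possible approach (an untested sketch, not a verified solution) is to
count only the requests that are actually served, so that the 429 responses
themselves do not keep the measured rate pinned above the limit:

```
listen test
    bind :8000
    stick-table type ip size 100k expire 30s store http_req_rate(1s)
    # only track a request while the source is still under the limit, so
    # denied requests do not inflate the measured rate; the deny rule then
    # reads the rate back from the table instead of the tracked counter
    http-request track-sc0 src if { src,table_http_req_rate(test) le 100 }
    http-request deny deny_status 429 if { src,table_http_req_rate(test) gt 100 }
    http-request return status 200 content-type "text/plain" lf-string "200 OK"
```

As the 1s window slides and the served-request rate decays, new requests are
tracked again, so roughly 100 requests per second should get "200 OK" while
the excess gets "429".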



Re: V2.3 allow use of TLSv1.0

2022-06-09 Thread Aleksandar Lazic
Hi spfma.tech.

Uff, the mail is quite hard to read, but it looks like you are on Ubuntu.

Maybe this page can help to solve your issue.

Enable TLSv1 in Ubuntu 20.04
https://ndk.sytes.net/wordpress/?p=1169

Regards
Alex
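On the HAProxy side, what that page describes boils down to something like
the sketch below (untested; it assumes OpenSSL 1.1.1 on Ubuntu 20.04, where
protocols below TLS 1.2 are refused at the default security level):

```
global
    # allow TLS 1.0 handshakes again on all bind lines
    ssl-default-bind-options ssl-min-ver TLSv1.0
    # lower OpenSSL's security level, which otherwise rejects TLS < 1.2
    ssl-default-bind-ciphers DEFAULT@SECLEVEL=1
```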

On Thu, 09 Jun 2022 09:58:10 +0200
spfma.t...@e.mail.fr wrote:

> Hi,
> Thanks for your answer. I have tried the generated config from this
> wonderful site, but no improvement. So here is the output of the haproxy
> command:
> HA-Proxy version 2.3.20-1ppa1~focal 2022/04/29 - https://haproxy.org/
> Status: End of life - please upgrade to branch 2.4.
> Known bugs: http://www.haproxy.org/bugs/bugs-2.3.20.html
> Running on: Linux 5.4.0-113-generic #127-Ubuntu SMP Wed May 18 14:30:56 UTC 2022 x86_64
> Build options : TARGET = linux-glibc CPU = generic
>  CC = cc
>  CFLAGS = -O2 -g -O2
> -fdebug-prefix-map=/build/haproxy-VMWa1u/haproxy-2.3.20=.
> -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time
> -D_FORTIFY_SOURCE=2 -Wall -Wextra -Wdeclaration-after-statement -fwrapv
> -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare
> -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers
> -Wno-cast-function-type -Wtype-limits -Wshift-negative-value
> -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference OPTIONS = USE_PCRE2=1
> USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1 USE_SYSTEMD=1 DEBUG = 
> 
> Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT
> +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE -STATIC_PCRE
> -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H
> +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM +ZLIB -SLZ
> +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD
> -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS
> 
> Default settings :
>  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
> 
> Built with multi-threading support (MAX_THREADS=64, default=2).
> Built with OpenSSL version : OpenSSL 1.1.1f 31 Mar 2020
> Running on OpenSSL version : OpenSSL 1.1.1f 31 Mar 2020
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
> Built with Lua version : Lua 5.3.3
> Built with network namespace support.
> Built with the Prometheus exporter as a service
> Built with zlib version : 1.2.11
> Running on zlib version : 1.2.11
> Compression algorithms supported : identity("identity"), deflate("deflate"),
> raw-deflate("deflate"), gzip("gzip") Built with transparent proxy support
> using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND Built with PCRE2 version :
> 10.34 2019-11-21 PCRE2 library supports JIT : yes
> Encrypted password support via crypt(3): yes
> Built with gcc compiler version 9.4.0
> 
> Available polling systems :
>  epoll : pref=300, test result OK
>  poll : pref=200, test result OK
>  select : pref=150, test result OK
> Total: 3 (3 usable), will use epoll.
> 
> Available multiplexer protocols :
> (protocols marked as <default> cannot be specified using 'proto' keyword)
>  h2 : mode=HTTP side=FE|BE mux=H2
>  fcgi : mode=HTTP side=BE mux=FCGI
>   <default> : mode=HTTP side=FE|BE mux=H1
>   <default> : mode=TCP side=FE|BE mux=PASS
> 
> Available services : prometheus-exporter
> Available filters :
>  [SPOE] spoe
>  [CACHE] cache
>  [FCGI] fcgi-app
>  [COMP] compression
>  [TRACE] trace
> 
> ---
> 
> OpenSSL 1.1.1f 31 Mar 2020
> built on: Tue May 3 17:49:36 2022 UTC
> platform: debian-amd64
> options: bn(64,64) rc4(16x,int) des(int) blowfish(ptr) 
> compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -Wa,--noexecstack
> -g -O2 -fdebug-prefix-map=/build/openssl-7zx7z2/openssl-1.1.1f=.
> -fstack-protector-strong -Wformat -Werror=format-security
> -DOPENSSL_TLS_SECURITY_LEVEL=2 -DOPENSSL_USE_NODELETE -DL_ENDIAN
> -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT
> -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM
> -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM
> -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DNDEBUG
> -Wdate-time -D_FORTIFY_SOURCE=2 OPENSSLDIR: "/usr/lib/ssl" ENGINESDIR:
> "/usr/lib/x86_64-linux-gnu/engines-1.1" Seeding source: os-specific
> 
> ---
> 
> #
> # OpenSSL example configuration file.
> # This is mostly being used for generation of certificate requests.
> #
> 
> # Note that you can include other files from the main configuration
> # file using the .include directive.
> #.include filename
> 
> # This definition stops the following lines choking if HOME isn't
> # defined.
> HOME = .
> 
> # Extra OBJECT IDENTIFIER info:
> #oid_file = $ENV::HOME/.oid
> oid_section = new_oids
> 
> # To use this configuration file with the "-extfile" option of the
> # "openssl x509" utility, name here the section containing the
> # X.509v3 extensions to use:
> # extensions =
> # (Alternatively, use a configuration file that has only
> # X.509v3 extensions in its main [= default] section.)
> 
> [ new_oids ]
> 
> # 

Re: HttpClient in Lua

2022-06-15 Thread Aleksandar Lazic
HI.

On Wed, 15 Jun 2022 23:33:27 +1000
Philip Young  wrote:

> Hi
> I am currently writing a LUA module to make authorisation decisions on
> whether a request is allowed, by calling out to another service to make the
> authorisation decision.
> In the Lua module, I am using Socket.connect_ssl() to
> connect to the authorisation service but I am struggling to work out how to
> set the path to the certificate I want to use to connect to the authorisation
> service.
> Does anybody know how to set the path to the certificate that is
> used when using Socket.connect_ssl() Is it possible to do this using the
> httpclient?

I'm not a Lua or httpclient expert, but maybe this could help:
https://docs.haproxy.org/2.6/configuration.html#httpclient.ssl.ca-file

Also check whether you maybe need to adapt this, at least for the beginning:
https://docs.haproxy.org/2.6/configuration.html#httpclient.ssl.verify
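In global-section form, that would be something like this (untested; the CA bundle path is an assumption for a Debian/Ubuntu system):

```
global
    # point the Lua httpclient at the CA bundle that signed
    # the authorisation service's certificate
    httpclient.ssl.ca-file /etc/ssl/certs/ca-certificates.crt
    # or, only while debugging, disable verification entirely:
    # httpclient.ssl.verify none
```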

> I have tried asking the Slack chat channel and on the commercial
> site but no one knows. 
> 
> Cheers Phil

Hth
Alex



Re: HttpClient in Lua

2022-06-15 Thread Aleksandar Lazic
Hi Phil,

please keep the ML in the loop.

On Thu, 16 Jun 2022 00:19:57 +1000
Philip Young  wrote:

> Hi Alex
> 
> Thanks for the reply, but unfortunately that only sets the CA certs that
> issued the server certs. I need a way to specify a client certificate that
> will be used to talk to authz service. 

Ah okay, sorry, I hadn't understood that you want to send a client certificate.
I would try to use http://docs.haproxy.org/2.6/configuration.html#5.2-crt
with the client certificate in the PEM and set it on the server line.
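As a sketch of that idea (untested; the host name and file paths are made up), the `crt` keyword on a `server` line makes HAProxy present a client certificate:

```
backend be_authz
    # "crt" holds the client certificate plus key in one PEM file;
    # "ca-file" verifies the authz service's own server certificate
    server authz1 authz.example.com:443 ssl verify required ca-file /etc/ssl/certs/authz-ca.pem crt /etc/ssl/private/client.pem
```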

That's my conclusion from this code:
https://git.haproxy.org/?p=haproxy.git;a=blob;f=src/hlua.c;hb=HEAD#l12530

Again, it's just an assumption, as I have never had the requirement to use
client certificates with HAProxy.

Regards
Alex

> Thanks anyway
> 
> Sent from my iPhone
> 
> > On 16 Jun 2022, at 12:03 am, Aleksandar Lazic  wrote:
> > 
> > HI.
> > 
> >> On Wed, 15 Jun 2022 23:33:27 +1000
> >> Philip Young  wrote:
> >> 
> >> Hi
> >> I am currently writing a LUA module to make authorisation decisions on
> >> whether a request is allowed, by calling out to another service to make the
> >> authorisation decision.
> >> In the Lua module, I am using Socket.connect_ssl() to
> >> connect to the authorisation service but I am struggling to work out how to
> >> set the path to the certificate I want to use to connect to the
> >> authorisation service.
> >> Does anybody know how to set the path to the certificate that is
> >> used when using Socket.connect_ssl() Is it possible to do this using the
> >> httpclient?
> > 
> > I'm not a Lua or httpclient expert, but maybe this could help:
> > https://docs.haproxy.org/2.6/configuration.html#httpclient.ssl.ca-file
> > 
> > Also check whether you maybe need to adapt this, at least for the beginning:
> > https://docs.haproxy.org/2.6/configuration.html#httpclient.ssl.verify
> > 
> >> I have tried asking the Slack chat channel and on the commercial
> >> site but no one knows. 
> >> 
> >> Cheers Phil
> > 
> > Hth
> > Alex




[PATCH] DOC: add info about ssl-engine for 2.6

2022-06-15 Thread Aleksandar Lazic
Hi.

Attached is a doc patch about ssl-engine and 2.6; it is related to
https://github.com/haproxy/haproxy/issues/1752


Regards
Alex
>From 85bcc5ea26d7c1f468dbbf6a10b33bc9f79da819 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Wed, 15 Jun 2022 23:52:30 +0200
Subject: [PATCH] DOC: add info about ssl-engine for 2.6

In the announcment of 2.6 is mentioned that the openssl engine
is not enabled by default.

This patch add the information to the configuration.txt.

Is related to #1752

Should be backported to 2.6
---
 doc/configuration.txt | 4 
 1 file changed, 4 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 183710c35..d0e74e0fb 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -2666,6 +2666,10 @@ ssl-engine <name> [algo <comma-separated list of algorithms>]
   openssl configuration file uses:
   https://www.openssl.org/docs/man1.0.2/apps/config.html
 
+  Since version 2.6 is the ssl-engine not enabled in the default build. In case
+  that the ssl-engine is requierd can HAProxy be rebuild with USE_ENGINE=1
+  build flag.
+
 ssl-mode-async
   Adds SSL_MODE_ASYNC mode to the SSL context. This enables asynchronous TLS
   I/O operations if asynchronous capable SSL engines are used. The current
-- 
2.25.1



Re: Segfault on 2.6.0 with TCP switching to HTTP/2

2022-06-16 Thread Aleksandar Lazic
On Thu, 16 Jun 2022 10:22:30 +0200
Christopher Faulet  wrote:

> On 6/16/22 at 05:12, David Leadbeater wrote:
> > I tried upgrading to 2.6.0 (from 2.5.6) and I'm seeing a segfault when
> > making HTTP/2 requests. I'm using a frontend in TCP mode and then
> > switching it to HTTP/2.
> > 
> > I've made a minimal config that exhibits the segfault, below. Simply
> > doing curl -vk https://ip is enough to trigger it for me.
> > 
> > Thread 1 "haproxy" received signal SIGSEGV, Segmentation fault.
> > 0x555d1d07 in h2s_close (h2s=0x55a60b70) at src/mux_h2.c:1497
> > 1497 HA_ATOMIC_DEC(&h2s->h2c->px_counters->open_streams);
> > (gdb) bt
> > #0  0x555d1d07 in h2s_close (h2s=0x55a60b70) at
> > src/mux_h2.c:1497 #1  h2s_destroy (h2s=0x55a60b70) at src/mux_h2.c:1515
> > #2  0x555d3463 in h2_detach (sd=) at
> > src/mux_h2.c:4432
> > 
> > The exact backtrace varies but always in h2s_destroy.
> > 
> > (In case you're wondering what on earth I'm doing, there's a write-up
> > of it at https://dgl.cx/2022/04/showing-you-your-actual-http-request)
> > 
> > David
> > 
> > ---
> > global
> >ssl-default-bind-options no-sslv3 no-tlsv10
> >user nobody
> > 
> > defaults
> >timeout connect 10s
> >timeout client 30s
> >timeout server 2m
> > 
> > frontend tcp-https
> >mode tcp
> >bind [::]:443 v4v6 ssl crt /etc/haproxy/ssl/bodge.cloud.pem alpn
> > h2,http/1.1 
> >acl ipwtf hdr(Host),lower,field(1,:),word(-1,.,2) ip.wtf
> >default_backend ipwtf
> >tcp-request inspect-delay 10s
> >tcp-request content switch-mode http if !ipwtf
> >use_backend cloud-regions.bodge.cloud if !ipwtf
> > 
> > backend ipwtf
> >mode tcp
> >server ipwtf localhost:8080
> > 
> > backend cloud-regions.bodge.cloud
> >mode http
> >server cr localhost:8080
> > 
> 
> Hi,
> 
> Thanks ! I'm able to reproduce the segfault. I'm on it.

But anyway, wouldn't it be better if the rule

acl ipwtf hdr(Host),lower,field(1,:),word(-1,.,2) ip.wtf

came after

> >tcp-request inspect-delay 10s
> >tcp-request content switch-mode http if !ipwtf

because it "feels somehow wrong" to make header checks in TCP mode?

Or check whether it's HTTP before the hdr check:
https://docs.haproxy.org/2.6/configuration.html#7.3.5-req.proto_http

```
tcp-request inspect-delay 10s
tcp-request content switch-mode http if HTTP

acl ipwtf hdr(Host),lower,field(1,:),word(-1,.,2) ip.wtf
```

Opinions?

Jm2c

Regards
Alex



Re: Segfault on 2.6.0 with TCP switching to HTTP/2

2022-06-16 Thread Aleksandar Lazic
On Thu, 16 Jun 2022 20:49:00 +1000
David Leadbeater  wrote:

> On Thu, 16 Jun 2022 at 20:27, Aleksandar Lazic  wrote:
> [...]
> > > Thanks ! I'm able to reproduce the segfault. I'm on it.
> 
> Thanks!
> 
> > But anyway, wouldn't it be better if the rule
> >
> > acl ipwtf hdr(Host),lower,field(1,:),word(-1,.,2) ip.wtf
> >
> > came after
> >
> > > >tcp-request inspect-delay 10s
> > > >tcp-request content switch-mode http if !ipwtf
> >
> > because it "feels somehow wrong" to make header checks in TCP mode?
> 
> There's some explanation in the configuration manual about how it
> works, and it's documented to work, at least for HTTP/1.
> 
> https://docs.haproxy.org/2.6/configuration.html#4
> "While HAProxy is able to parse HTTP/1 in-fly from tcp-request content
> rules"...
> 
> Essentially I want to keep the connection as TCP, so that I can have a
> backend that gets raw HTTP/1.1. I wrote some more about it at
> https://dgl.cx/2022/04/showing-you-your-actual-http-request

Nice service, this https://ip.wtf/, thanks for offering it.

> [...]
> > Opinions?
> 
> Clearly in nearly all cases it's better to let haproxy be the HTTP
> proxy layer, especially as it isn't possible to mix for HTTP/2, but it
> lets me do my crazy thing here :)

Thank you, David, for your patience and explanation.
I fully agree, HAProxy is a very flexible server :-)

> David

Regards
Alex



Re: [ANNOUNCE] haproxy-2.7-dev1

2022-06-25 Thread Aleksandar Lazic
Hi Willy.

On Fri, 24 Jun 2022 22:58:53 +0200
Willy Tarreau  wrote:

> Hi,
> 
> HAProxy 2.7-dev1 was released on 2022/06/24. It added 131 new commits
> after version 2.7-dev0.
> 
> There's not that much new stuff yet but plenty of small issues were
> addressed, and it's already been 3 weeks since the release thus I figured
> it was a perfect timing for a -dev1 for those who want to stay on the edge
> without taking much risks.
> 
> In addition to the fixes that went into 2.6.1 already, some HTTP/3 issues
> were addressed and a memory leak affecting QUIC was addressed as well (thanks
> to @Tristan971 for his precious help on this one). 
> 
> Aside fixes, a few improvements started already. First, and to finish on
> QUIC, the QUICv2 version negotiation was implemented. This will allow us
> to follow the progress on the QUICv2 drafts more closely.
> 
> On HTTP/2, the maintainer of the Lighttpd web server reported a nasty case
> that he observed between curl and lighttpd which is very similar to the so
> called "Silly Window Syndrom" in TCP where a difference of one byte between
> a buffer size and a window size may progressively make the transfer
> degenerate until almost all frames are 1-byte in size. It's not a bug in
> any product, just a consequence of making certain standard-compliant stacks
> interoperate. Some workarounds were placed in various components that
> allowed the issue to appear. We did careful testing on haproxy and couldn't
> produce it there, in part due to our buffer management that makes it
> difficult to read exactly the sizes that produce the issue. But there's
> nothing either that can strictly prevent it from happening (e.g. with a
> sender using smaller frames maybe). So we implemented the workaround as
> well, which will also result in sending slightly less frames during
> uploads. The goal is to backport this once it has been exposed for a
> while without trouble in 2.7.
> 
> Another noticeable improvement is the inclusion of a feature that had
> been written in the now dead ROADMAP file for 15 years: multi-criteria
> bandwidth limiting. It allows to combine multiple filters to enforce
> bandwidth limitations on arbitrary criteria by looking at their total
> rate in a stick table. Thus it's possible to have per-source, per-
> destination, per-network, per-AS, per-interface bandwidth limits in
> each direction. In addition there's a stream-specific pair of limits
> (one per direction as well) that can even be adjusted on the fly. We
> could for example imagine that a client sends a POST request to a
> server, that the server responds with a 100-Continue and a header
> indicating the max permitted upload bandwidth, and then the transfer
> will be automatically capped. Quite frankly, I've been wanting this
> for a long time to address the problem of buffer bloat on small links
> (e.g. my old ADSL line), and here there's now an opportunity to
> maintain a good quality of service without saturating links thanks to
> this. I'm pretty sure that some users will be creative and may even
> come up with ideas of improvements ;-)

WOW that's great news.
Thanks for that feature :-)

Regards
Alex



Re: Adding "Content-Type" and other needed headers in the response

2022-06-28 Thread Aleksandar Lazic
Hi.

On Tue, 28 Jun 2022 12:23:15 +0200
spfma.t...@e.mail.fr wrote:

> Hi,
> 
> I have a problem to solve: I never paid attention to the fact that HAProxy
> (2.5.1-86b093a) did not return HTTP headers in the responses, because there
> were no complaints so far. But now we got one, because of an old application
> which needs at least "Content-Type", as some tests are performed before
> generating the content. If the devs don't fix it, I will have to find a
> solution on the load balancer side.
> 
> The LB is serving content from a Tomcat server, which is returning plenty of
> headers. So is there a way to add them in the response, like some "pass thru"?
> 
> I was not able to find useful information so far, maybe because I don't know
> what concepts and directives are involved. As a very dumb and primitive test,
> I have added "http-response add-header Content-Type 'text/html'" at both FE
> and BE levels, but the header is still not shown.
> 
> Thanks for any help.
> 
> Regards
> 
> -
> FreeMail powered by mail.fr

Please can you try to update HAProxy, as there are ~150 fixes in the latest
2.5: https://www.haproxy.org/bugs/bugs-2.5.1.html. Or try the latest
shiny 2.6 :-)

Can you share your current config?

Maybe the "http-after-response" could help in your case, but it's just a guess.
https://docs.haproxy.org/2.6/configuration.html#4.2-http-after-response%20set-header
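Along those lines, a guess at a minimal backend snippet (untested; the backend name and the `res.hdr` condition are assumptions):

```
backend be_tomcat
    # only add a default Content-Type when the response carries none,
    # so Tomcat's own headers pass through untouched
    http-after-response set-header Content-Type "text/html" unless { res.hdr(content-type) -m found }
    server tomcat1 127.0.0.1:8080
```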

Regards
Alex



Re: [PATCH] DOC: add info about ssl-engine for 2.6

2022-07-27 Thread Aleksandar Lazic

Hi Tim.

Thank you for your feedback.
Attached is the new version.

regards
Alex

On 16.06.22 15:16, Tim Düsterhus wrote:

Alex,


From 85bcc5ea26d7c1f468dbbf6a10b33bc9f79da819 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Wed, 15 Jun 2022 23:52:30 +0200
Subject: [PATCH] DOC: add info about ssl-engine for 2.6

In the announcment of 2.6 is mentioned that the openssl engine


There's a typo here: announcement.


is not enabled by default.

This patch add the information to the configuration.txt.

Is related to #1752


Please explicitly mention 'GitHub issue':

This is related to GitHub Issue #1752.



Should be backported to 2.6
---
 doc/configuration.txt | 4 
 1 file changed, 4 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 183710c35..d0e74e0fb 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -2666,6 +2666,10 @@ ssl-engine <name> [algo <comma-separated list of algorithms>]

   openssl configuration file uses:
   https://www.openssl.org/docs/man1.0.2/apps/config.html

+  Since version 2.6 is the ssl-engine not enabled in the default build. In case


That first sentence sounds like a German sentence structure to me that is not
correct English grammar. Suggestion that also unifies the wording with other
places that refer to the USE_* flags:


Version 2.6 disabled the support for engines in the default build. This
option is only available when HAProxy has been compiled with USE_ENGINE.

+  that the ssl-engine is requierd can HAProxy be rebuild with USE_ENGINE=1


Typo: required


+  build flag.
+
 ssl-mode-async
   Adds SSL_MODE_ASYNC mode to the SSL context. This enables asynchronous TLS
   I/O operations if asynchronous capable SSL engines are used. The current

--
2.25.1


Best regards
Tim Düsterhus
From b0991e2f011d8fbbde3fc3a3e4fcc4a956e41064 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Wed, 27 Jul 2022 15:24:54 +0200
Subject: [PATCH] DOC: add info about ssl-engine for 2.6

In the announcement of 2.6 it is mentioned that the openssl engine
is not enabled by default.

This patch adds the information to configuration.txt.

This is related to GitHub Issue #1752.

Should be backported to 2.6
---
 doc/configuration.txt | 5 +
 1 file changed, 5 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c348a08de..35d58f29c 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -2680,6 +2680,11 @@ ssl-engine <name> [algo <comma-separated list of algorithms>]
   openssl configuration file uses:
   https://www.openssl.org/docs/man1.0.2/apps/config.html
 
+  Version 2.6 disabled the support for engines in the default build. This
+  option is only available when HAProxy has been compiled with USE_ENGINE.
+  If the ssl-engine is required, HAProxy can be rebuilt with the
+  USE_ENGINE=1 build flag.
+
 ssl-mode-async
   Adds SSL_MODE_ASYNC mode to the SSL context. This enables asynchronous TLS
   I/O operations if asynchronous capable SSL engines are used. The current
-- 
2.25.1



Re: Sending CORS headers with HAProxy-generated error responses

2022-08-12 Thread Aleksandar Lazic

Hi Eric.

On 11.08.22 21:59, Eric Johanson wrote:

When HAProxy generates an HTTP 500 error (say because our servers are down), 
then HAProxy does not send any CORS
information.  Because of this, the HTTP 500 responses do not arrive at our web 
application because they are
blocked by the browser.

To solve this, I want to add the relavent CORS headers to these 
HAProxy-generated error responses.  And I want to
add CORS headers ONLY to such error responses (the error pages specified by the 
"errorfile" directive in the
haproxy.cfg file).  Our middleware server already adds appropriate CORS headers 
for all responses that come from
our middleware, including error responses that come from our middleware.  But 
when HAProxy sends an error response
that it generates internally, there are no CORS headers.

I see there are various solutions for adding CORS headers to all HTTP traffic 
from front-ends (e.g.
https://github.com/haproxytech/haproxy-lua-cors).  But I want to add CORS 
headers ONLY to the internal
HAProxy-generated error responses.

Is there a way to do this, and if so how?


Well, I'm not sure if it's possible, but I would try to use fc_err and set an
errorfile.

https://docs.haproxy.org/2.6/configuration.html#7.3.3-fc_err
https://docs.haproxy.org/2.6/configuration.html#3.8

Example, untested:
http-response set-status 500 if { fc_err gt 0 }

Which error is the right one can be seen here
https://docs.haproxy.org/2.6/configuration.html#7.3.3-fc_err_str
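Putting the two ideas together, a guess at a frontend snippet (untested; the origin value and certificate path are made up, and `bc_err` might be the more accurate fetch for server-side failures):

```
frontend fe_main
    bind :443 ssl crt /etc/haproxy/site.pem
    # add CORS headers only when an error occurred on the connection,
    # i.e. when HAProxy itself generated the error response
    http-after-response add-header Access-Control-Allow-Origin "https://example.com" if { fc_err gt 0 }
    http-after-response add-header Access-Control-Allow-Methods "GET, POST, OPTIONS" if { fc_err gt 0 }
    default_backend be_app
```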

Jm2c
Regards
Alex


Thank you.
Confidentiality Notice: The information in this email is confidential. It is 
intended only for the use of the named recipient. Any use or disclosure of this 
email is strictly prohibited, if you are not the intended recipient. In case 
you have received this email in error, please notify us immediately and then 
delete this email. To help reduce unnecessary paper wastage, we would request 
that you do not print this email unless it is really necessary - thank you.





Re: Sending CORS headers with HAProxy-generated error responses

2022-08-12 Thread Aleksandar Lazic

Hi.

On 12.08.22 14:48, Eric Johanson wrote:

Thanks for the reply.  It sounds like for my situation, I want to add some CORS headers when 
fc_err > 0 perhaps using the "set-header" action of http-response.  (Your 
example uses the set-status action, which I don't think solves my problem of generating CORS 
headers for internal HAProxy connection errors).

https://docs.haproxy.org/2.6/configuration.html#4.2-http-response%20add-header

So maybe something like:
http-response add-header Access-Control-Allow-Origin "https://example.com" if { fc_err gt 0 }
# ... more like this for the other required CORS headers

I haven't tried this, but does it seem like it will accomplish what I described
in my original post?


I would say give it a try and see if it works.

Regards
Alex


-Original Message-
From: Aleksandar Lazic 
Sent: Friday, August 12, 2022 6:45 AM
To: Eric Johanson 
Cc: haproxy@formilux.org
Subject: Re: Sending CORS headers with HAProxy-generated error responses

Hi Eric.

On 11.08.22 21:59, Eric Johanson wrote:

When HAProxy generates an HTTP 500 error (say because our servers are
down), then HAProxy does not send any CORS information.  Because of
this, the HTTP 500 responses do not arrive at our web application because they 
are blocked by the browser.

To solve this, I want to add the relavent CORS headers to these
HAProxy-generated error responses.  And I want to add CORS headers
ONLY to such error responses (the error pages specified by the
"errorfile" directive in the haproxy.cfg file).  Our middleware server
already adds appropriate CORS headers for all responses that come from our 
middleware, including error responses that come from our middleware.  But when 
HAProxy sends an error response that it generates internally, there are no CORS 
headers.

I see there are various solutions for adding CORS headers to all HTTP traffic 
from front-ends (e.g.
https://github.com/haproxytech/haproxy-lua-cors).  But I want to add
CORS headers ONLY to the internal HAProxy-generated error responses.

Is there a way to do this, and if so how?


Well, I'm not sure if it's possible, but I would try to use fc_err and set an
errorfile.
https://docs.haproxy.org/2.6/configuration.html#7.3.3-fc_err
https://docs.haproxy.org/2.6/configuration.html#3.8

Example, untested:
http-response set-status 500 if { fc_err gt 0 }

Which error is the right one can be seen here 
https://docs.haproxy.org/2.6/configuration.html#7.3.3-fc_err_str

Jm2c
Regards
Alex


Thank you.






Re: 3rd party modules support

2022-08-18 Thread Aleksandar Lazic

Hi.

On 17.08.22 16:54, Pavel Krestovozdvizhenskiy wrote:
Does HAProxy support 3rd party modules? Not Lua scripts, but compiled
modules, something like modules in nginx. I've read the documentation
and did not find a clear answer.


Not as far as I know; a more detailed answer can be found here:
https://www.mail-archive.com/haproxy@formilux.org/msg12985.html

What you can do is add a filter addon similar to the current ones at
https://github.com/haproxy/haproxy/tree/master/addons and build HAProxy
with the filter.



Thanks. Paul


Regards
Alex



Re: Defining two FTP connections pointing to the same server

2022-08-18 Thread Aleksandar Lazic

Hi.

On 18.08.22 20:40, Roberto Carna wrote:

Dear all, I have to change my haproxy.cfg file in order to enable two
FTP connections to the same server, with these requirements:

FTP server IP: 10.10.1.10

1st FTP service:
FTP Control: port 21
FTP Data: port 11000 to 11010

2nd FTP service:
FTP Control: port 2100
FTP Data: port 11000 to 11010 (same range as the first service)

In the haproxy.cfg I tried this:

listen ftp-control-1
 bind 10.10.1.1:21
 mode tcp
 option tcplog
  server FTP 10.10.1.10:21 check 21

listen ftp-control-2
 bind 10.10.1.1:2100
 mode tcp
 option tcplog
  server FTP 10.10.1.10:21 check 2100

listen ftp-data-1-2   <--- The same config for FTP data because they
use the same port range
 bind 10.10.1.1:11000-11010
 mode tcp
 option tcplog
 server FTP 10.10.1.10 check

But it doesn't work.


What do you have in the logs, or in the output at start time?
I assume you will get something like "port already in use".


Is my config correct or not?

Is it correct if I use the same FTP data port range for both services
on the same server?


Well the "bind" implies that haproxy bind to that ports, you should see 
this with "netstat -tulpn".


I think you will need different listening (bind) ports for the second
service.


See https://docs.haproxy.org/2.6/configuration.html#4.2-bind
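One way around the clash, sketched (untested; the second range 12000-12010 is an arbitrary choice and must match what the second FTP service advertises in its passive-mode replies):

```
listen ftp-data-1
    bind 10.10.1.1:11000-11010
    mode tcp
    # no port on the server line: HAProxy connects to the same
    # port number the client used on the bind range
    server FTP 10.10.1.10

listen ftp-data-2
    bind 10.10.1.1:12000-12010
    mode tcp
    server FTP 10.10.1.10
```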


Thanks and greetings!

Robert


Regards
Alex



Re: LibreSSL 3.6.0 QUIC support with HAProxy 2.7

2022-09-14 Thread Aleksandar Lazic

Hi William.

On 14.09.22 18:50, William Lallemand wrote:

Hello List,

We've just finished the portage of HAProxy for the next libreSSL
version which implements the quicTLS API.


Wow great news.


For those interested this is how you are supposed to compile everything:

The libreSSL library:

$ git clone https://github.com/libressl-portable/portable libressl
$ cd libressl
$ ./autogen.sh

// The QUIC API is not public and not available in the shared
// library for now, you have to link with the .a
$ ./configure --prefix=/opt/libressl-quic/ --disable-shared 
CFLAGS=-DLIBRESSL_HAS_QUIC
$ make V=1
$ sudo make install

HAProxy:

$ git clone http://git.haproxy.org/git/haproxy.git/
$ cd haproxy
$ make TARGET=linux-glibc USE_OPENSSL=1 USE_QUIC=1 
SSL_INC=/opt/libressl-quic/include/ \
   SSL_LIB=/opt/libressl-quic/lib/ DEFINE='-DLIBRESSL_HAS_QUIC'


$ ./haproxy -vv
HAProxy version 2.7-dev5-7eeef9-91 2022/09/14 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Running on: Linux 5.15.0-47-generic #51-Ubuntu SMP Thu Aug 11 07:51:15 
UTC 2022 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -ggdb3 -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference 
-fwrapv -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare 
-Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers 
-Wno-cast-function-type -Wno-string-plus-int -Wno-atomic-alignment 
-DLIBRESSL_HAS_QUIC
  OPTIONS = USE_PCRE=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1 USE_SYSTEMD=1 
USE_QUIC=1
  DEBUG   = -DDEBUG_MEMORY_POOLS -DDEBUG_STRICT

Feature list : +EPOLL -KQUEUE +NETFILTER +PCRE -PCRE_JIT -PCRE2 
-PCRE2_JIT +POLL +THREAD -PTHREAD_EMULATION +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -ENGINE 
+GETADDRINFO +OPENSSL +LUA +ACCEPT4 -CLOSEFROM +ZLIB -SLZ +CPU_AFFINITY +TFO 
+NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER +PRCTL 
-PROCCTL +THREAD_DUMP -EVPORTS -OT +QUIC -PROMEX -MEMORY_PROFILING

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, 
default=8).
Built with OpenSSL version : LibreSSL 3.6.0
Running on OpenSSL version : LibreSSL 3.6.0
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3


How about changing this to something like

Built with SSL Library version
Running on SSL Library version
SSL library supports ...

Because it's confusing :-)

Built with OpenSSL version : LibreSSL 3.6.0

I thought also something like

Built with (OpenSSL|LibreSSL) version : LibreSSL 3.6.0

But this looks ugly to me.



Built with Lua version : Lua 5.4.3
Built with network namespace support.
Support for malloc_trim() is enabled.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes
Built with gcc compiler version 11.2.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
       quic : mode=HTTP  side=FE     mux=QUIC  flags=HTX|NO_UPG|FRAMED
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
 <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
 <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : none

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace

Re: http-response option in frontend section or backend section?

2022-10-03 Thread Aleksandar Lazic

Hi.

On 03.10.22 16:29, Roberto Carna wrote:

Dear, I have a HAProxy with several web applications but I have to
solve the cookie without a secure flag problem in just one web
application.

Do I have to define the "http-response replace header" option in the
frontend section or in the backend section of haproxy.cfg ? Or is it
the same ?

Because if I define the option in the frontend section, I modify the
cookie behaviour of all the applications, and this is not what I want.

Thanks a lot!!!


I would say: when the application has its own backend config, then modify
the response there.
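
A minimal sketch of such a backend-only rule (backend name, server address and the exact regex are placeholders, not the poster's config):

```
backend app_needing_secure_cookie
    mode http
    # Append the Secure flag to every Set-Cookie header emitted by this
    # one application; cookies from other backends remain untouched.
    http-response replace-header Set-Cookie (.*) "\1; Secure"
    server app1 192.0.2.10:8080 check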


Regards
Alex



Re: HA Proxy License

2022-10-07 Thread Aleksandar Lazic
Hi John.

I suggest getting in touch with the HAProxy company via this form.

https://www.haproxy.com/contact-us/

best regards
alex

07.10.2022 17:55:42 John Bowling (CE CEN) :

> Hello,
> 
> What are the costs for the license or is there a subscription for license?
> 
> *John L. Bowling (JB)*
> 
> Senior Team Leader
> 
> *IES – Network Engineering & Security (NES)*
> 
> *Network Operational Readiness (NOC)*
> 
> Whole Foods Market – Global Support (CEN)
> 
> An Amazon Company
> 
> 1011 W 5th  Street, 4th floor
> 
> Austin, Texas USA 78703
> 
> Mobile: +1-512-221-3780
> 
> Desk: +1-512.542.0797
> 
> Email: john.bowl...@wholefoods.com
> 
> www.wholefoodsmarket.com[http://www.wholefoodsmarket.com/]
> 


Re: I can't disable TLS v1.1 from Internet

2022-10-24 Thread Aleksandar Lazic

Hi Roberto.

On 24.10.22 03:21, Roberto Carna wrote:

Dear, I have this scenario:

Internet --> HAproxy Frontend --> HAproxy Backend --> Web servers

HAproxy version 1.5.8 in frontend (disabling protocols in the backend
section connected to HAProxy backend):

server HA-Backend 172.20.20.1:443 ssl verify none ciphers
EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!AES256+ECDHE:!AES256+DHE
no-tlsv11 no-tlsv10 no-sslv3

HAproxy version 1.5.8 in backend (disabling protocols in the backend
section connected to web server) -->

server WEB01 10.12.12.1:443 ssl verify none ciphers
DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES256-GCM-SHA384:AES256-SHA256:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!AES256+ECDHE:!AES256+DHE
cookie s1 no-tlsv11 no-tlsv10 no-sslv3

server WEB02 10.12.12.2:443 ssl verify none ciphers
DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES256-GCM-SHA384:AES256-SHA256:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!AES256+ECDHE:!AES256+DHE
cookie s2 no-tlsv11 no-tlsv10 no-sslv3

Web Servers IIS (supporting TLS 1.0, TLS 1.1 and TLS 1.2)

As it is impossible to disable TLS 1.0 and TLS 1.1 from the IIS web
servers for specific functionality reasons (the web administrator
doesn't let me do this), I suppose I can disable TLS 1.0 and TLS 1.1
from the HAProxy frontend and backend.

But after that, I executed a test from SSL Labs from Qualys, and it
said TLS 1.1 is still enabled.

What can be the reason because the HAProxy frontend can't disable TLS
1.1 in connections from the Internet ?

Is anything wrong?


Well, you have changed the server line, not the frontend config.

The flow is like this:

INet => HAProxy Frontend
          (frontend section)
            \
             (backend section) => HAProxy Backend

SSL Labs test the Frontend config from HAProxy Frontend.

What is the config for the frontend of the HAProxy Frontend?
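
On the HAProxy Frontend machine it would have to look roughly like this — a sketch only; the IP, certificate path and names are placeholders:

```
frontend fe_https
    # HAProxy 1.5 syntax: protocol versions are disabled per bind line,
    # on the *frontend*, not on the backend's server lines
    bind 203.0.113.10:443 ssl crt /etc/haproxy/site.pem no-sslv3 no-tlsv10 no-tlsv11
    default_backend be_ha_backend
```

On 1.8 and newer, `ssl-min-ver TLSv1.2` on the bind line is the cleaner equivalent.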

BTW: HAProxy 1.5 hasn't been maintained anymore since 2020-01-10.
https://www.haproxy.org/

You can get a more recent version from these repos.
https://github.com/iusrepo?q=hap&type=all&language=&sort=
https://github.com/DBezemer/rpm-haproxy


Thanks in advance, greetings!!!


Regards
Alex



Re: Two frontends with the same IP and Port

2022-10-25 Thread Aleksandar Lazic

Hi Roberto.

On 25.10.22 17:01, Roberto Carna wrote:

Sorry, I want two different backends with same IP/port and different
SSL options as follow, and the same SSL wildcard certificate:

# Frontend 1 with certain SSL options
frontend Web1
bind 10.10.1.1:443 ssl crt /root/ssl/ no-sslv3 no-tlsv10 no-tlsv11
no-tls-tickets force-tlsv12
acl url_web1 hdr_dom(host) -i www1.example.com
use_backend Server1  if url_web1

# Frontend 2 with any SSL options
frontend Web2
bind 10.10.1.1:443 ssl crt /root/ssl/
acl url_web2 hdr_dom(host) -i www2.example.com
use_backend Server2  if url_web2

I made the above configuration, but sometimes the web traffic doesn't
reach the second server, until a browser refresh.


I think you could use this option for your setup.
https://docs.haproxy.org/2.6/configuration.html#5.1-crt-list
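
A sketch of how a crt-list could carry the per-domain SSL options on a single bind line — file paths and option choices are assumptions, not tested config:

```
# /etc/haproxy/crt-list.txt -- per-SNI SSL settings in brackets
/etc/haproxy/wildcard.pem [ssl-min-ver TLSv1.2] www1.example.com
/etc/haproxy/wildcard.pem www2.example.com

# haproxy.cfg -- one frontend, one bind line, no duplicate binding
frontend web
    bind 10.10.1.1:443 ssl crt-list /etc/haproxy/crt-list.txt
    use_backend Server1 if { hdr_dom(host) -i www1.example.com }
    use_backend Server2 if { hdr_dom(host) -i www2.example.com }
```

This avoids two frontends competing for the same IP:port while still giving each domain its own TLS policy.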

Hth
Alex


Special thanks!

El mar, 25 oct 2022 a las 10:16, Roberto Carna
() escribió:


Dear, I have a HAproxy server with two different frontends with the
same IP and port, both pointing to different backends, as follow:

frontend Web1
bind 10.10.1.1:443 ssl crt /root/ssl/ no-sslv3 no-tlsv10 no-tlsv11
no-tls-tickets force-tlsv12
acl url_web1 hdr_dom(host) -i www1.example.com
use_backend Server1  if url_web1

frontend Web2
bind 10.10.1.1:443 ssl crt /root/ssl/ no-sslv3 no-tlsv10 no-tlsv11
no-tls-tickets force-tlsv12
acl url_web2 hdr_dom(host) -i www2.example.com
use_backend Server2  if url_web2

If somebody goes to www1.example.com he enters to the first frontend,
and if somebody goes to www2.example.com he enters to the second
frontend.

Is this configuration OK, or could it cause any errors?

Thanks a lot!






Re: dsr and haproxy

2022-11-04 Thread Aleksandar Lazic

Hi.

On 04.11.22 12:24, Szabo, Istvan (Agoda) wrote:

Hi,

Is there anybody successfully configured haproxy and dsr?


Well maybe this Blog Post is a good start point.

https://www.haproxy.com/blog/layer-4-load-balancing-direct-server-return-mode/

Regards
Alex


Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---






Re: HAPROXYU (apps) -

2022-11-07 Thread Aleksandar Lazic

Dear Carolina.

Please get in touch with the HAProxy company for an offer.
https://www.haproxy.com/contact-us/

This mailing list is for the open-source HAProxy.

Regards
Alex

On 07.11.22 13:06, Coco, Carolina wrote:

Hi team,

Could you please send us an offer for the item marked in yellow? It's for one
of our customers.


Thanks

Carolina Coco
Inside Sales
Direct:  +34 91 598 1406
Mobile:+34 649837471
Email: carolina.coco @softwareone.com
SoftwareONE Spain, S.A.
c/ Via de los Poblados 3, Edificio 4B, 1ªPlanta
28033 Madrid
España
https://www.softwareone.com/es







Re: How to return 429 Status Code instead of 503

2022-11-17 Thread Aleksandar Lazic
Hi.

But there is a 429 error code in the source.

https://git.haproxy.org/?p=haproxy.git&a=search&h=HEAD&st=grep&s=HTTP_ERR_429

As you didn't write which version you use, maybe you can use the latest 2.6
version and give error code 429 a chance :-)

regards
alex

17.11.2022 16:29:02 Chilaka Ramakrishna :

> Thanks Jarno, for the reply.
> 
> But i don't think this would work for me, I just want to change the status 
> code (return 429 instead of 503) that i can return, if queue timeout occurs 
> for a request..
> 
> Please confirm, if this is possible or this sort of provision is even exposed 
> by HAP.
> 
> On Thu, Nov 17, 2022 at 12:43 PM Jarno Huuskonen  
> wrote:
>> Hello,
>> 
>> On Tue, 2022-11-08 at 09:30 +0530, Chilaka Ramakrishna wrote:
>>> On queue timeout, currently HAProxy throws 503, But i want to return 429,
>>> I understand that 4xx means a client problem and client can't help here.
>>> But due to back compatibility reasons, I want to return 429 instead of
>>> 503. Is this possible ?
>> 
>> errorfile 503 /path/to/429.http
>> (http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#4-errorfile)
>> 
>> Or maybe it's possible with http-error
>> (http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#http-error)
>> 
>> -Jarno
>> 
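
For reference, the quoted `errorfile` trick could look like this — a sketch; the paths and the page body are placeholders:

```
# haproxy.cfg
backend be_app
    # When the queue-timeout 503 fires, serve this file instead. The file
    # contains the full raw HTTP response, so its own status line makes
    # the client see a 429 instead of a 503.
    errorfile 503 /etc/haproxy/errors/429.http

# /etc/haproxy/errors/429.http
HTTP/1.1 429 Too Many Requests
Cache-Control: no-cache
Content-Type: text/html

<html><body><h1>429 Too Many Requests</h1></body></html>
```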



Re: Rate Limit a specific HTML request

2022-11-22 Thread Aleksandar Lazic

Hi.

On 22.11.22 21:57, Branitsky, Norman wrote:
I have the following "generic" rate limit defined - 150 requests in 10s 
from the same IP address:


 stick-table  type ip size 100k expire 30s store http_req_rate(10s)
 http-request track-sc0 src unless { src -f 
/etc/CONFIG/haproxy/cidr.lst }

 http-request deny deny_status 429 if { sc_http_req_rate(0) gt 150 }

Is it possible to rate limit a specific "computationally expensive" HTML 
request from the same IP address to a much smaller number?


What do you define as a "computationally expensive" request?

Maybe you could draw a bigger picture and tell us which version of
HAProxy you use.

In the upcoming 2.7 there is also a "Bandwidth limitation"; maybe this could
help solve your issue.

https://docs.haproxy.org/dev/configuration.html#9.7

HTML is a description language, so I think what you actually want to restrict
is HTTP requests/responses, isn't it?

https://www.rfc-editor.org/rfc/rfc1866
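
If the expensive request can be identified by its path, it could get its own, stricter table alongside the generic sc0 limit — a sketch only; the path, table name and threshold are assumptions:

```
frontend fe_web
    # the generic 150-req/10s limit stays on sc0 as before;
    # sc1 tracks only the expensive path with a much lower limit
    acl expensive path_beg /datamart/searchByName.do
    http-request track-sc1 src table expensive_rates if expensive
    http-request deny deny_status 429 if expensive { sc_http_req_rate(1) gt 10 }

backend expensive_rates
    # dummy backend used only to host the stick-table
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
```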


*Norman Branitsky*
Senior Cloud Architect
Tyler Technologies, Inc.


Regards
Alex


P: 416-916-1752
C: 416.843.0670
www.tylertech.com
Tyler Technologies 





Re: Rate Limit a specific HTML request

2022-11-22 Thread Aleksandar Lazic

Hi.

On 22.11.22 23:19, Branitsky, Norman wrote:

A "computationally expensive" request is a request sent to our Public Search
service - no login required so it seems to be the target of abuse.
For example:
https:///datamart/searchByName.do?anchor=169a72e.0


Okay, let me rephrase your question.

How can an IP be blocked when it sends a request that takes
$too_much_time to answer?

Where could $too_much_time be defined?
Could it be the "timeout server ..." config parameter?

Could the "%Tr" or "%TR" be used from logformat for that?
https://docs.haproxy.org/2.6/configuration.html#8.2.6

Or whether the request got a 504 as internal state.

Idea:

backend block_bad_client
  stick-table  type ip size 100k expire 30s store http_req_rate(10s)
  http-request track-sc0 src unless { $too_much_time }

and call the table block_bad_client in the frontend config.

Is this what you would like to do?

I'm not sure if this is possible with HAProxy.

Regards
Alex


Norman Branitsky
Senior Cloud Architect
P: 416-916-1752

-Original Message-
From: Aleksandar Lazic 
Sent: Tuesday, November 22, 2022 4:27 PM
To: Branitsky, Norman 
Cc: HAProxy 
Subject: Re: Rate Limit a specific HTML request

Hi.

On 22.11.22 21:57, Branitsky, Norman wrote:

I have the following "generic" rate limit defined - 150 requests in
10s from the same IP address:

  stick-table  type ip size 100k expire 30s store
http_req_rate(10s)
  http-request track-sc0 src unless { src -f
/etc/CONFIG/haproxy/cidr.lst }
  http-request deny deny_status 429 if { sc_http_req_rate(0) gt 150
}

Is it possible to rate limit a specific "computationally expensive"
HTML request from the same IP address to a much smaller number?


What do you define as a "computationally expensive" request?

Maybe you could draw a bigger Picture and tell us what version of HAProxy do 
you use.

In the upcoming 2.7 is also a "Bandwidth limitation", maybe this could help to 
solve your issue.
https://docs.haproxy.org/dev/configuration.html#9.7

HTML is a Description Language therefore I think you want to restrict HTTP 
Request/Response, isn't it?

https://www.rfc-editor.org/rfc/rfc1866


*Norman Branitsky*
Senior Cloud Architect
Tyler Technologies, Inc.


Regards
Alex


P: 416-916-1752
C: 416.843.0670
http://www.tylertech.com
Tyler Technologies






Re: Haproxy send-proxy probes error

2022-11-23 Thread Aleksandar Lazic
Hi.

There is already a bug entry in apache bz from 2019 about that message.

https://bz.apache.org/bugzilla/show_bug.cgi?id=63893

Regards
Alex

23.11.2022 21:36:26 Marcello Lorenzi :

> Hi All,
> we use haproxy 2.2.17-dd94a25 in our development environment and we configure 
> a backend with proxy protocol v2 to permit the source IP forwarding to a TLS 
> backend server. All the configuration works fine but we notice this error 
> reported on backend Apache error logs:
> 
> AH03507: RemoteIPProxyProtocol: unsupported command 20
> 
> We configure the options check-send-proxy on backend probes but the issue 
> persists. 
> 
> Is it possible to remove this persistent error?
> 
> Thanks,
> Marcello
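
For completeness: per the Apache bug linked above, health probes announce themselves with a PROXY protocol v2 LOCAL header (version/command byte 0x20), which mod_remoteip versions without that fix log as "unsupported command 20". The relevant server-line shape — address and names are placeholders — is:

```
backend be_tls
    # send-proxy-v2: real traffic carries the client address (PROXY command)
    # check-send-proxy: health probes send a v2 LOCAL header, which Apache
    # builds without the bz#63893 fix reject with "unsupported command 20"
    server web1 192.0.2.20:443 ssl verify none send-proxy-v2 check check-send-proxy
```

The fix therefore belongs on the Apache side; HAProxy's probes are conformant PROXY v2.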



[PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2022-12-09 Thread Aleksandar Lazic

Hi.

As I still think that the balancing algorithm (Peak) EWMA
( https://github.com/haproxy/haproxy/issues/1570 ) could help to make a
"better" decision about which server a request should be sent to, here is
the beginning of the patches.


In any case it would be nice to know the RTT from the backend, IMHO.

Does anybody know how I can "delay/sleep/wait" for the server answer, to
get an RTT value that is not 0? Right now the measured rtt is 0.


Regards
Alex

From 7610bb7234bd324e06e56732a67bf8a0e65d7dbc Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Fri, 9 Dec 2022 13:05:52 +0100
Subject: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

To be able to implement "Balancing algorithm (Peak) EWMA" is it
necessary to know the round trip time to the backend.

This Patch adds the fetch sample for the backend server.

Part of GH https://github.com/haproxy/haproxy/issues/1570

---
 doc/configuration.txt                    | 16 ++++++++++++++++
 reg-tests/sample_fetches/tcpinfo_rtt.vtc | 39 +++++++++++++++++++++++++++
 src/tcp_sample.c                         | 33 +++++++++++++++++++++
 3 files changed, 88 insertions(+)
 create mode 100644 reg-tests/sample_fetches/tcpinfo_rtt.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c45f0b4b6..e8526de7f 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -18854,6 +18854,22 @@ be_server_timeout : integer
   current backend. This timeout can be overwritten by a "set-timeout" rule. See
   also the "cur_server_timeout".
 
+bc_rtt(<unit>) : integer
+  Returns the Round Trip Time (RTT) measured by the kernel for the backend
+  connection. <unit> is facultative, by default the unit is milliseconds. <unit>
+  can be set to "ms" for milliseconds or "us" for microseconds. If the server
+  connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
+bc_rttvar(<unit>) : integer
+  Returns the Round Trip Time (RTT) variance measured by the kernel for the
+  backend connection. <unit> is facultative, by default the unit is milliseconds.
+  <unit> can be set to "ms" for milliseconds or "us" for microseconds. If the
+  server connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
 be_tunnel_timeout : integer
   Returns the configuration value in millisecond for the tunnel timeout of the
   current backend. This timeout can be overwritten by a "set-timeout" rule. See
diff --git a/reg-tests/sample_fetches/tcpinfo_rtt.vtc b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
new file mode 100644
index 0..f28a2072e
--- /dev/null
+++ b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
@@ -0,0 +1,39 @@
+varnishtest "Test declaration of TCP rtt fetches"
+
+# feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(v2.8-dev1)'"
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+txresp
+}  -start
+
+haproxy h1 -conf {
+  defaults common
+  mode http
+  timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"
+
+  frontend fe from common
+  bind "fd@${feh1}"
+
+  default_backend be
+
+  backend be from common
+
+  http-response set-header x-test1 "%[fc_rtt]"
+  http-response set-header x-test2 "%[bc_rtt]"
+  http-response set-header x-test3 "%[fc_rttvar]"
+  http-response set-header x-test4 "%[bc_rttvar]"
+
+  server s1 ${s1_addr}:${s1_port}
+
+} -start
+
+client c1 -connect ${h1_feh1_sock} {
+txreq -req GET -url /
+rxresp
+expect resp.status == 200
+#expect resp.http.x-test2 ~ " ms"
+} -run
diff --git a/src/tcp_sample.c b/src/tcp_sample.c
index 925b93291..bf0d538ea 100644
--- a/src/tcp_sample.c
+++ b/src/tcp_sample.c
@@ -373,6 +373,34 @@ static inline int get_tcp_info(const struct arg *args, struct sample *smp,
 	return 1;
 }
 
+/* get the mean rtt of a backend/server connection */
+static int
+smp_fetch_bc_rtt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 0))
+		return 0;
+
+	/* By default or if explicitly specified, convert rtt to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+	return 1;
+}
+
+/* get the variance of the mean rtt of a backend/server connection */
+static int
+smp_fetch_bc_rttvar(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 1))
+		return 0;
+
+	/* By default or if explicitly specified, convert rttvar to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+	return 1;
+}
Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2022-12-14 Thread Aleksandar Lazic

Hi,

Any feedback to that patch?

On 09.12.22 13:17, Aleksandar Lazic wrote:

Hi.

As I still think that the Balancing algorithm (Peak) EWMA ( 
https://github.com/haproxy/haproxy/issues/1570 ) could help to make a 
"better" decision to which server should the request be send, here the 
beginning of the patches.


In any cases it would be nice to know the rtt from the backend, Imho.

Does anybody know how I can "delay/sleep/wait" for the server answer to 
get some rtt which are not 0 as the rtt is 0.


Regards
Alex




Re: [ANNOUNCE] haproxy-2.8-dev1

2023-01-07 Thread Aleksandar Lazic




On 07.01.23 10:38, Willy Tarreau wrote:

Hi,

HAProxy 2.8-dev1 was released on 2023/01/07. It added 206 new commits
after version 2.8-dev0.


[snipp]

Any chance to add this patch to 2.8?

[PATCH] MINOR: sample: Add bc_rtt and bc_rttvar
https://www.mail-archive.com/haproxy@formilux.org/msg42962.html

What's the plan for this feature request?

Server weight modulation based on smoothed average measurement
https://github.com/haproxy/haproxy/issues/1977

which looks a per-requirement for

New Balancing algorithm (Peak) EWMA
https://github.com/haproxy/haproxy/issues/1570

regards
alex



Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2023-01-10 Thread Aleksandar Lazic



On 09.12.22 13:17, Aleksandar Lazic wrote:

Hi.

As I still think that the Balancing algorithm (Peak) EWMA ( 
https://github.com/haproxy/haproxy/issues/1570 ) could help to make a 
"better" decision to which server should the request be send, here the 
beginning of the patches.


In any cases it would be nice to know the rtt from the backend, Imho.

Does anybody know how I can "delay/sleep/wait" for the server answer to 
get some rtt which are not 0 as the rtt is 0.


Here is the updated patch without the EWMA reference.


Regards
Alex

From 7610bb7234bd324e06e56732a67bf8a0e65d7dbc Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Fri, 9 Dec 2022 13:05:52 +0100
Subject: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

This Patch adds the fetch sample for backends round trip time.

---
 doc/configuration.txt                    | 16 ++++++++++++++++
 reg-tests/sample_fetches/tcpinfo_rtt.vtc | 39 +++++++++++++++++++++++++++
 src/tcp_sample.c                         | 33 +++++++++++++++++++++
 3 files changed, 88 insertions(+)
 create mode 100644 reg-tests/sample_fetches/tcpinfo_rtt.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c45f0b4b6..e8526de7f 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -18854,6 +18854,22 @@ be_server_timeout : integer
   current backend. This timeout can be overwritten by a "set-timeout" rule. See
   also the "cur_server_timeout".
 
+bc_rtt(<unit>) : integer
+  Returns the Round Trip Time (RTT) measured by the kernel for the backend
+  connection. <unit> is facultative, by default the unit is milliseconds. <unit>
+  can be set to "ms" for milliseconds or "us" for microseconds. If the server
+  connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
+bc_rttvar(<unit>) : integer
+  Returns the Round Trip Time (RTT) variance measured by the kernel for the
+  backend connection. <unit> is facultative, by default the unit is milliseconds.
+  <unit> can be set to "ms" for milliseconds or "us" for microseconds. If the
+  server connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
 be_tunnel_timeout : integer
   Returns the configuration value in millisecond for the tunnel timeout of the
   current backend. This timeout can be overwritten by a "set-timeout" rule. See
diff --git a/reg-tests/sample_fetches/tcpinfo_rtt.vtc b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
new file mode 100644
index 0..f28a2072e
--- /dev/null
+++ b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
@@ -0,0 +1,39 @@
+varnishtest "Test declaration of TCP rtt fetches"
+
+# feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(v2.8-dev1)'"
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+txresp
+}  -start
+
+haproxy h1 -conf {
+  defaults common
+  mode http
+  timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"
+
+  frontend fe from common
+  bind "fd@${feh1}"
+
+  default_backend be
+
+  backend be from common
+
+  http-response set-header x-test1 "%[fc_rtt]"
+  http-response set-header x-test2 "%[bc_rtt]"
+  http-response set-header x-test3 "%[fc_rttvar]"
+  http-response set-header x-test4 "%[bc_rttvar]"
+
+  server s1 ${s1_addr}:${s1_port}
+
+} -start
+
+client c1 -connect ${h1_feh1_sock} {
+txreq -req GET -url /
+rxresp
+expect resp.status == 200
+#expect resp.http.x-test2 ~ " ms"
+} -run
diff --git a/src/tcp_sample.c b/src/tcp_sample.c
index 925b93291..bf0d538ea 100644
--- a/src/tcp_sample.c
+++ b/src/tcp_sample.c
@@ -373,6 +373,34 @@ static inline int get_tcp_info(const struct arg *args, struct sample *smp,
 	return 1;
 }
 
+/* get the mean rtt of a backend/server connection */
+static int
+smp_fetch_bc_rtt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 0))
+		return 0;
+
+	/* By default or if explicitly specified, convert rtt to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+	return 1;
+}
+
+/* get the variance of the mean rtt of a backend/server connection */
+static int
+smp_fetch_bc_rttvar(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 1))
+		return 0;
+
+	/* By default or if explicitly specified, convert rttvar to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+	return 1;
+}
