Patch proposal for FEATURE/MAJOR: Add upstream-proxy-tunnel feature (was: Re: Maybe stupid question but can HAProxy now use a upstream proxy)

2024-05-27 Thread Aleksandar Lazic

Hi.

I have made some progress with the feature :-)

The test setup runs in 4 shells.

# shell1: curl -vk --connect-to www.test1.com:4433:127.0.0.1:8080 -H "Host: www.test1.com" https://www.test1.com:4433

# shell2: ./haproxy -d -f examples/upstream-proxy.cfg
# shell3: sudo podman run --rm -it --name squid -e TZ=UTC -p 3128:3128 --network host ubuntu/squid

# shell4: openssl s_server -trace -www -bugs -debug -cert reg-tests/ssl/common.pem

The request reaches the s_server, but I'm stuck on the return path in
"connection.c:conn_recv_upstream_proxy_tunnel_response()".


Does anyone have an idea what's wrong?

Maybe it's too late for 3.0 but it would be nice to have this feature in 3.1 :-)

Regards

Alex

On 2024-05-24 (Fr.) 00:08, Aleksandar Lazic wrote:

Hi.

I have seen https://github.com/haproxy/haproxy/issues/1542 which requests that 
feature.


Now I have tried to "port" the 
https://github.com/brentcetinich/haproxy/commit/bc258bff030677d855a6a84fec881398e8f1e082 
to the current dev branch and attached the patch.


I'm pretty sure that there are some issues in the patch and I'm happy to do a
few rounds to fix them :-)


One question from me, as I'm not that fluent in C and its data types anymore:
does this `0x1` still fit into 32 bits?


From the patch:

```

+++ b/include/haproxy/server-t.h
@@ -154,6 +154,7 @@ enum srv_initaddr {
 #define SRV_F_NON_PURGEABLE 0x2000   /* this server cannot be removed at runtime */

 #define SRV_F_DEFSRV_USE_SSL 0x4000  /* default-server uses SSL */
 #define SRV_F_DELETED 0x8000 /* srv is deleted but not yet purged */

+#define SRV_F_UPSTREAM_PROXY_TUNNEL 0x1  /* this server uses a upstream proxy tunnel with CONNECT method */


```

Another question that came to my mind: why are there no "TRACE(...)" entries in
src/connection.c, only DPRINTF?


By the way, a big thanks to brentcetinich for his great initial work on that
patch.


Regards

Alex

On 2024-05-23 (Do.) 22:32, Aleksandar Lazic wrote:

Hi.

I follow the development more or less closely and I must say I don't always
understand all the changes :-).


Just for my clarification: is the following setup now possible with HAProxy
with all the new shiny features? :-)


client => frontend
  |
  \-> backend server dest1 IP:port
        |
        \-> call "CONNECT IP:PORT" on upstream proxy
              |
              \-> TCP FLOW to destination IP


I know there is the http://docs.haproxy.org/2.9/configuration.html#5.2-socks4
option, but sadly not many enterprise proxy admins offer SOCKS4 nowadays.


I think the scenario is still not possible, but I would like a second opinion
on that.


Maybe somebody on the list has a working solution for the scenario and can
share it, maybe only via direct mail. ¯\_(ツ)_/¯


Regards
Alex
From 5ac8750390ef91974691c07251f6c32782573c72 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Mon, 27 May 2024 09:05:39 +0200
Subject: [PATCH 2/2] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

This enables HAProxy to reach a target server via an upstream
HTTP proxy.

This commit should close gh #1542
---
 doc/configuration.txt  |  9 +++
 examples/upstream-proxy-squid.conf | 60 +++
 examples/upstream-proxy.cfg| 23 +++
 include/haproxy/connection-t.h |  1 +
 include/haproxy/connection.h   |  1 +
 src/connection.c   | 96 +-
 src/xprt_handshake.c   | 15 -
 7 files changed, 186 insertions(+), 19 deletions(-)
 create mode 100644 examples/upstream-proxy-squid.conf
 create mode 100644 examples/upstream-proxy.cfg

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c0667af8f8..59a7460558 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -18015,6 +18015,13 @@ tls-tickets
   It may also be used as "default-server" setting to reset any previous
   "default-server" "no-tls-tickets" setting.
 
+upstream-proxy-tunnel :
+  May be used in the following contexts: tcp, http
+
+  This option enables upstream http proxy tunnel for outgoing connections to
+  the server. Using this option won't force the health check to go via upstream
+  http proxy by default.
+
 verify [none|required]
   May be used in the following contexts: tcp, http, log, peers, ring
 
@@ -21926,6 +21933,8 @@ fc_err_str : string
   | 41 | "SOCKS4 Proxy deny the request"   |
   | 42 | "SOCKS4 Proxy handshake aborted by server"|
   | 43 | "SSL fatal error" |
+  | 44 | "Error during reverse connect"|
+  | 45 | "Upstream http proxy write error during hand

Re: Maybe stupid question but can HAProxy now use a upstream proxy

2024-05-23 Thread Aleksandar Lazic

Hi.

I have seen https://github.com/haproxy/haproxy/issues/1542 which requests that 
feature.


Now I have tried to "port" the 
https://github.com/brentcetinich/haproxy/commit/bc258bff030677d855a6a84fec881398e8f1e082 
to the current dev branch and attached the patch.


I'm pretty sure that there are some issues in the patch and I'm happy to do a
few rounds to fix them :-)


One question from me, as I'm not that fluent in C and its data types anymore:
does this `0x1` still fit into 32 bits?


From the patch:

```

+++ b/include/haproxy/server-t.h
@@ -154,6 +154,7 @@ enum srv_initaddr {
 #define SRV_F_NON_PURGEABLE 0x2000   /* this server cannot be removed at runtime */

 #define SRV_F_DEFSRV_USE_SSL 0x4000  /* default-server uses SSL */
 #define SRV_F_DELETED 0x8000 /* srv is deleted but not yet purged */

+#define SRV_F_UPSTREAM_PROXY_TUNNEL 0x1  /* this server uses a upstream proxy tunnel with CONNECT method */


```

Another question that came to my mind: why are there no "TRACE(...)" entries in
src/connection.c, only DPRINTF?


By the way, a big thanks to brentcetinich for his great initial work on that
patch.


Regards

Alex

On 2024-05-23 (Do.) 22:32, Aleksandar Lazic wrote:

Hi.

I follow the development more or less closely and I must say I don't always
understand all the changes :-).


Just for my clarification: is the following setup now possible with HAProxy
with all the new shiny features? :-)


client => frontend
  |
  \-> backend server dest1 IP:port
        |
        \-> call "CONNECT IP:PORT" on upstream proxy
              |
              \-> TCP FLOW to destination IP


I know there is the http://docs.haproxy.org/2.9/configuration.html#5.2-socks4
option, but sadly not many enterprise proxy admins offer SOCKS4 nowadays.


I think the scenario is still not possible, but I would like a second opinion
on that.


Maybe somebody on the list has a working solution for the scenario and can
share it, maybe only via direct mail. ¯\_(ツ)_/¯


Regards
Alex
From bf4e7c44ed939a2a9e119ca9b13b46efe9d43ab9 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Thu, 23 May 2024 23:52:58 +0200
Subject: [PATCH] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

This commit makes it possible for HAProxy to reach a
target server via an upstream HTTP proxy.

This patch is based on the work of @brentcetinich
and refers to gh #1542
---
 include/haproxy/connection-t.h |  14 +++-
 include/haproxy/connection.h   |   3 +
 include/haproxy/server-t.h |   8 +-
 include/haproxy/tcpcheck-t.h   |   1 +
 src/backend.c  |   5 ++
 src/connection.c   |  90 +
 src/proto_quic.c   |   4 +
 src/proto_tcp.c|   2 +
 src/server.c   | 138 -
 src/sock.c |   3 +
 src/tcpcheck.c |   3 +
 src/xprt_handshake.c   |  11 +++
 12 files changed, 225 insertions(+), 57 deletions(-)

diff --git a/include/haproxy/connection-t.h b/include/haproxy/connection-t.h
index 6ee0940be4..660c7bc7ba 100644
--- a/include/haproxy/connection-t.h
+++ b/include/haproxy/connection-t.h
@@ -132,8 +132,12 @@ enum {
 	CO_FL_ACCEPT_PROXY  = 0x0200,  /* receive a valid PROXY protocol header */
 	CO_FL_ACCEPT_CIP= 0x0400,  /* receive a valid NetScaler Client IP header */
 
+	/*  STOLEN unused : 0x0040, 0x0080 */
+	CO_FL_UPSTREAM_PROXY_TUNNEL_SEND	= 0x0040,  /* handshaking with upstream http proxy, going to send the handshake */
+	CO_FL_UPSTREAM_PROXY_TUNNEL_RECV = 0x0080,  /* handshaking with upstream http proxy, going to check if handshake succeed */
+
 	/* below we have all handshake flags grouped into one */
-	CO_FL_HANDSHAKE = CO_FL_SEND_PROXY | CO_FL_ACCEPT_PROXY | CO_FL_ACCEPT_CIP | CO_FL_SOCKS4_SEND | CO_FL_SOCKS4_RECV,
+	CO_FL_HANDSHAKE = CO_FL_SEND_PROXY | CO_FL_ACCEPT_PROXY | CO_FL_ACCEPT_CIP | CO_FL_SOCKS4_SEND | CO_FL_SOCKS4_RECV | CO_FL_UPSTREAM_PROXY_TUNNEL_SEND,
 	CO_FL_WAIT_XPRT = CO_FL_WAIT_L4_CONN | CO_FL_HANDSHAKE | CO_FL_WAIT_L6_CONN,
 
 	CO_FL_SSL_WAIT_HS   = 0x0800,  /* wait for an SSL handshake to complete */
@@ -155,6 +159,10 @@ enum {
 
 	/* below we have all SOCKS handshake flags grouped into one */
 	CO_FL_SOCKS4= CO_FL_SOCKS4_SEND | CO_FL_SOCKS4_RECV,
+
+	/* below we have all upstream http proxy tunnel handshake flags grouped into one */
+	CO_FL_UPSTREAM_PROXY_TUNNEL= CO_FL_UPSTREAM_PROXY_TUNNEL_SEND | CO_FL_UPSTREAM_PROXY_TUNNEL_RECV,
+
 };
 
 /* This function is used to report flags in debugging tools. Please reflect
@@ -241,6 +249,8 @@ enum {
 	CO_ERR_SSL_FATAL,/* SSL fatal error during a SSL_read or SSL_write */
 
 	CO_ER_REVERSE,   /* Error during reverse connect */
+
+	CO_ER_PROXY_CONNECT_SEND, /* Upstream http proxy write e

Maybe stupid question but can HAProxy now use a upstream proxy

2024-05-23 Thread Aleksandar Lazic

Hi.

I follow the development more or less closely and I must say I don't always
understand all the changes :-).


Just for my clarification: is the following setup now possible with HAProxy
with all the new shiny features? :-)


client => frontend
  |
  \-> backend server dest1 IP:port
        |
        \-> call "CONNECT IP:PORT" on upstream proxy
              |
              \-> TCP FLOW to destination IP


I know there is the http://docs.haproxy.org/2.9/configuration.html#5.2-socks4
option, but sadly not many enterprise proxy admins offer SOCKS4 nowadays.


I think the scenario is still not possible, but I would like a second opinion
on that.


Maybe somebody on the list has a working solution for the scenario and can
share it, maybe only via direct mail. ¯\_(ツ)_/¯


Regards
Alex



Re: Question on deleting cookies from an HTTP request

2024-04-27 Thread Willy Tarreau
Hi,

On Sat, Apr 27, 2024 at 02:06:54AM +0200, Aleksandar Lazic wrote:
> Hi Lokesh.
> 
> On 2024-04-27 (Sa.) 01:41, Lokesh Jindal wrote:
> > Hey folks
> > 
> > I have found that there is no operator "del-cookie" in HAProxy to delete
> > cookies from the request. (HAProxy does support the operator
> > "del-header").
> > 
> > Can you explain why such an operator is not supported? Is it due to
> > complexity? Due to performance? It will be great if you can share
> > details behind this design choice.
> 
> Well, I'm pretty sure it's because nobody has added this feature to HAProxy.
> You are welcome to send a patch which adds this feature.
> 
> Maybe you could add "delete" into the
> https://docs.haproxy.org/2.9/configuration.html#4.2-cookie function.
> 
> Please take a look into
> https://github.com/haproxy/haproxy/blob/master/CONTRIBUTING file if you plan
> to contribute.
> 
> > We have use cases where we want to delete cookies from the request. Not
> > having this support in HAProxy also makes me question if one should be
> > deleting request cookies in the reverse proxy layer.
> 
> Maybe you can use some of the "*-header" functions to remove the cookie as
> shown in the example in
> https://docs.haproxy.org/2.9/configuration.html#4.4-replace-header

Lukas had already provided some fairly complete info on how to do it here:

   https://discourse.haproxy.org/t/best-way-to-delete-a-cookie/3184

Since then we've got the "replace-value" action that does the job for
comma-delimited values, but sadly there's still this bogus syntax in the
Cookie header where a semi-colon is used as the delimiter so replace-value
cannot be used there.
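
For reference, the replace-header approach mentioned above might look roughly like this; an untested sketch where `FOO` is a placeholder cookie name and the regexes have to be adapted (and validated) for real traffic:

```
# Drop cookie "FOO" when it is the first (or only) cookie in the header.
http-request replace-header Cookie ^FOO=[^;]*;?\s*(.*)$ \1
# Drop cookie "FOO" when it appears after other cookies.
http-request replace-header Cookie ^(.*?);\s*FOO=[^;]*(.*)$ \1\2
```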

Requests for cookie removal are very rare but have always been present.
I'm really wondering if we should implement a specific action for this
instead of relying on replace-header rules. If adding 2-3 rules for
these rare cases is not considered something too painful to maintain,
I'd prefer it remains that way. If it comes at a cost (e.g. regex match)
then maybe we'll need to think about it for 3.1.

Regards,
Willy



Re: Question on deleting cookies from an HTTP request

2024-04-26 Thread Aleksandar Lazic

Hi Lokesh.

On 2024-04-27 (Sa.) 01:41, Lokesh Jindal wrote:

Hey folks

I have found that there is no operator "del-cookie" in HAProxy to delete cookies 
from the request. (HAProxy does support the operator "del-header").


Can you explain why such an operator is not supported? Is it due to complexity? 
Due to performance? It will be great if you can share details behind this design 
choice.


Well, I'm pretty sure it's because nobody has added this feature to HAProxy. You
are welcome to send a patch which adds this feature.


Maybe you could add "delete" into the 
https://docs.haproxy.org/2.9/configuration.html#4.2-cookie function.


Please take a look into 
https://github.com/haproxy/haproxy/blob/master/CONTRIBUTING file if you plan to 
contribute.


We have use cases where we want to delete cookies from the request. Not having 
this support in HAProxy also makes me question if one should be deleting request 
cookies in the reverse proxy layer.


Maybe you can use some of the "*-header" functions to remove the cookie as shown 
in the example in https://docs.haproxy.org/2.9/configuration.html#4.4-replace-header



Thanks
Lokesh


Regards
Alex



Question on deleting cookies from an HTTP request

2024-04-26 Thread Lokesh Jindal
Hey folks

I have found that there is no operator "del-cookie" in HAProxy to delete
cookies from the request. (HAProxy does support the operator "del-header").

Can you explain why such an operator is not supported? Is it due to
complexity? Due to performance? It will be great if you can share details
behind this design choice.

We have use cases where we want to delete cookies from the request. Not
having this support in HAProxy also makes me question if one should be
deleting request cookies in the reverse proxy layer.

Thanks
Lokesh


Question on logging only specific hosts

2024-03-07 Thread Ing. Andrea Vettori
Hello,
I’m trying to log only specific hosts of a certain frontend.

In the frontend configuration I’m using

log 127.0.0.1:514 local3 info
option httplog

and then later

http-request set-log-level silent unless ACL

this is working, but I'm getting a lot of logs that do not match the ACL; I
think they are related to connections that are not valid, so processing never
reaches the “Host” header.
Is there a way to invert the process and start “silent” and then enable “info”
with an ACL?
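
One possible inverted form, as an untested sketch: set-log-level rules are evaluated in order, so a later matching rule can raise the level again (`want_logged` is a placeholder ACL here). Note that connections rejected before the http-request rules run will still be logged at the frontend's default level, which may be the source of the unwanted lines:

```
http-request set-log-level silent
http-request set-log-level info if want_logged
```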

Thank you!

— 
Ing. Andrea Vettori
Sistemi Informativi
B2BIres s.r.l.



Quick question - for Haproxy Technologies...

2024-02-21 Thread David Baker
Hey,



My name is David Baker from Idytocle, we help companies such as yours
answer incoming calls with our team of highly-skilled US-based
representatives.



We’ve successfully reduced answering expenses for other businesses by up to
43%, while offering a well-managed, satisfaction-raising solution their
callers love.



How about an Inbound Call Answering quote? Respond here with anticipated
call volume on a monthly basis, and we’ll connect shortly to verify your
needs.



Sincerely,

David Baker
Senior Coordinator Manager
Idytocle


Re: Question on stats socket values

2024-01-18 Thread Abhijeet Rastogi
Hi Jesse,

By "Golang client's runtime API library" I'm assuming you mean
https://github.com/haproxytech/client-native. However, whichever library you
end up using, your first source of knowledge should be the HAProxy
documentation itself; then work your way towards reading the API
documentation of a library.

>For example, I know I can use the Golang client’s runtime api to access
the “Econ” value from the stats socket. Does this value only ever increase
for a backend server, or will it be reset at some point?

In this case, as you're concerned about *econ*, refer to the official
documentation here:
https://docs.haproxy.org/dev/management.html#9.3-show%20stat

In the doc, one of the arguments is "json". I typically use "show stat
json" and then look at the output to figure out the type of stat. In this
case, I can see this.

{
  "objType": "Backend",
  "proxyId": 6,
  "id": 0,
  "field": {
"pos": 13,
"name": "econ"
  },
  "processNum": 1,
  "tags": {
"origin": "Metric",
"nature": "Counter",
"scope": "Process"
  },
  "value": {
"type": "u64",
"value": 0
  }
},

The documentation also provides this explanation:

 13. econ [..BS]: number of requests that encountered an error trying to
 connect to a backend server. The backend stat is the sum of the stat
 for all servers of that backend, plus any connection errors not
 associated with a particular server (such as the backend having no
 active servers).

You can use that workflow to figure out the answer. I hope that answered
your question.

Thanks,
Abhijeet


On Thu, Jan 18, 2024 at 3:50 PM Jesse Brink  wrote:

> Hello,
>
> I have a question re: HAProxy’s stats socket. I would like to know how
> often / if their values are reset. Are they monotonically increasing, or
> will they be reset after some period of time if a backend server is not
> taking traffic?
>
> For example, I know I can use the Golang client’s runtime api to access
> the “Econ” value from the stats socket. Does this value only ever increase
> for a backend server, or will it be reset at some point? I am guessing the
> former, but wanted to try to confirm. Thank you.
>
> -Jesse
>


-- 
Cheers,
Abhijeet (https://abhi.host)


Question on stats socket values

2024-01-18 Thread Jesse Brink
Hello,

I have a question re: HAProxy’s stats socket. I would like to know how often / 
if their values are reset. Are they monotonically increasing, or will they be 
reset after some period of time if a backend server is not taking traffic? 

For example, I know I can use the Golang client’s runtime api to access the 
“Econ” value from the stats socket. Does this value only ever increase for a 
backend server, or will it be reset at some point? I am guessing the former, 
but wanted to try to confirm. Thank you.

-Jesse


Re: Question regarding option redispatch interval

2023-12-30 Thread Froehlich, Dominik
Hello everyone,

I’ve reached out on Slack on the matter and got the info that the interval
setting for redispatch only applies to session persistence, not connection
failures, which will redispatch after every retry regardless of the interval.

If it’s true, it would confirm my observations (using regular GET requests) 
that the redispatching happened after every retry, with intervals set at -1, 1 
and even at 30.

Can someone from the dev team confirm this? If it’s true it would be nice to 
reflect that in the docs, stating that the interval is only relevant for 
session persistence.

Thanks, and happy new year to everyone!

D

From: Froehlich, Dominik 
Date: Friday, 22. December 2023 at 15:13
To: haproxy@formilux.org 
Subject: Question regarding option redispatch interval
Hello,

I’m trying to enable retries with redispatch on my HAProxy (v2.7.11)

Here is my config for testing:

defaults
  option redispatch
  retries 6
  timeout connect 500ms

frontend myfrontend
  bind :443 ssl crt /etc/cert/server.pem crt-list /crt-list

  default_backend test

backend test
  server alice localhost:8080
  server bob1 localhost:8081
  server bob2 localhost:8083
  server bob3 localhost:8084
  server bob4 localhost:8085
  server bob5 localhost:8086



So I have 6 servers in the backend, out of which only the “alice” server works. 
All of the “bob” servers don’t respond.

When I run a request against HAProxy, it always works and I can observe using 
tcpdump that HAProxy will try each server (up to 6 times) until it hits the 
working “Alice” one.

This is what I want, however the docs state otherwise:

https://docs.haproxy.org/2.7/configuration.html?q=enable_redispatch#4.2-option%20redispatch


 The optional integer value that controls how often redispatches
   occur when retrying connections. Positive value P indicates a
   redispatch is desired on every Pth retry, and negative value
   N indicate a redispatch is desired on the Nth retry prior to the
   last retry. For example, the default of -1 preserves the
   historical behavior of redispatching on the last retry, a
   positive value of 1 would indicate a redispatch on every retry,
   and a positive value of 3 would indicate a redispatch on every
   third retry. You can disable redispatches with a value of 0.

I did not provide any interval, so my assumption would be the default of -1 
applies, which should mean “redispatching on the last retry”.

So, I would expect that HAProxy would try e.g. “bob4” for 5 times, then select 
“bob5” for the 6th retry and ultimately fail and return a 503. But that’s not 
the behavior I observe.

To me, it looks like the default “redispatch” value seems to be 1 instead of -1.

Can someone provide guidance here?

BR,
Dominik


Question regarding option redispatch interval

2023-12-22 Thread Froehlich, Dominik
Hello,

I’m trying to enable retries with redispatch on my HAProxy (v2.7.11)

Here is my config for testing:

defaults
  option redispatch
  retries 6
  timeout connect 500ms

frontend myfrontend
  bind :443 ssl crt /etc/cert/server.pem crt-list /crt-list

  default_backend test

backend test
  server alice localhost:8080
  server bob1 localhost:8081
  server bob2 localhost:8083
  server bob3 localhost:8084
  server bob4 localhost:8085
  server bob5 localhost:8086



So I have 6 servers in the backend, out of which only the “alice” server works. 
All of the “bob” servers don’t respond.

When I run a request against HAProxy, it always works and I can observe using 
tcpdump that HAProxy will try each server (up to 6 times) until it hits the 
working “Alice” one.

This is what I want, however the docs state otherwise:

https://docs.haproxy.org/2.7/configuration.html?q=enable_redispatch#4.2-option%20redispatch


 The optional integer value that controls how often redispatches
   occur when retrying connections. Positive value P indicates a
   redispatch is desired on every Pth retry, and negative value
   N indicate a redispatch is desired on the Nth retry prior to the
   last retry. For example, the default of -1 preserves the
   historical behavior of redispatching on the last retry, a
   positive value of 1 would indicate a redispatch on every retry,
   and a positive value of 3 would indicate a redispatch on every
   third retry. You can disable redispatches with a value of 0.

I did not provide any interval, so my assumption would be the default of -1 
applies, which should mean “redispatching on the last retry”.

So, I would expect that HAProxy would try e.g. “bob4” for 5 times, then select 
“bob5” for the 6th retry and ultimately fail and return a 503. But that’s not 
the behavior I observe.

To me, it looks like the default “redispatch” value seems to be 1 instead of -1.

Can someone provide guidance here?

BR,
Dominik


Re: AW: [EXT] Re: AW: Re: Question about syslog forwarding with HAProxy with keeping the client IP

2023-11-01 Thread Aleksandar Lazic

Hi Sören.

On 2023-11-01 (Mi.) 18:18, Hellwig, Sören wrote:

Hello Alex,

I can compile version 2.8.3 from source and install the current release of
the 2.8 LTS branch.


Yes, you can, but this will not solve the issue.
Have you read the full mail from the first answer? There are some suggestions
in it on how to solve the issue.



Best regards,
Sören Hellwig


Regards
Alex


-Ursprüngliche Nachricht-
Von: Aleksandar Lazic 
Gesendet: Mittwoch, 1. November 2023 15:36
An: Hellwig, Sören ; haproxy@formilux.org
Betreff: [EXT] Re: AW: Re: Question about syslog forwarding with HAProxy with 
keeping the client IP



On 2023-11-01 (Mi.) 15:17, Hellwig, Sören wrote:

Hello Aleksandar,

thank you for your reply. We are using HAProxy under SLES 15 SP4 and here is
the version info:

srvkdgrllbp01:/etc/haproxy # haproxy -vv HAProxy version 2.8.0-fdd8154
2023/05/31 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2028.
Known bugs: http://www.haproxy.org/bugs/bugs-2.8.0.html


Uff, that's old. Can you update?
Have you also seen the rest of the answer in the previous mail?

Regards
Alex


Running on: Linux 5.14.21-150400.24.81-default #1 SMP PREEMPT_DYNAMIC
Tue Aug 8 14:10:43 UTC 2023 (90a74a8) x86_64 Build options :
TARGET  = linux-glibc
CPU = generic
CC  = cc
CFLAGS  = -O2 -g -Wall -Wextra -Wundef -Wdeclaration-after-statement 
-Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 
-Wduplicated-cond -Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int 
-Wno-atomic-alignment
OPTIONS = USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE=1
DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY
+CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE
-LIBATOMIC +LIBCRYPT +LINUX_SPLICE +LINUX_TPROXY +LUA +MATH
-MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL
-OPENSSL_WOLFSSL -OT +PCRE -PCRE2 -PCRE2_JIT -PCRE_JIT +POLL +PRCTL
-PROCCTL -PROMEX -PTHREAD_EMULATION -QUIC +RT +SHM_OPEN +SLZ +SSL
-STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY
-WURFL -ZLIB

Default settings :
bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, default=2).
Built with OpenSSL version : OpenSSL 1.1.1l  24 Aug 2021 SUSE release
SUSE_OPENSSL_RELEASE Running on OpenSSL version : OpenSSL 1.1.1l  24
Aug 2021 SUSE release 150400.7.53.1 OpenSSL library supports TLS
extensions : yes OpenSSL library supports SNI : yes OpenSSL library
supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3 Built with Lua version :
Lua 5.3.6 Built with network namespace support.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip") Built with
transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND Built with PCRE version : 8.45 2021-06-15 Running on PCRE
version : 8.45 2021-06-15 PCRE library supports JIT : no (USE_PCRE_JIT
not set) Encrypted password support via crypt(3): yes Built with gcc
compiler version 7.5.0

Available polling systems :
epoll : pref=300,  test result OK
 poll : pref=200,  test result OK
   select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as  cannot be specified using 'proto' keyword)
   h2 : mode=HTTP  side=FE|BE  mux=H2flags=HTX|HOL_RISK|NO_UPG
 fcgi : mode=HTTP  side=BE mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
 : mode=HTTP  side=FE|BE  mux=H1flags=HTX
   h1 : mode=HTTP  side=FE|BE  mux=H1flags=HTX|NO_UPG
 : mode=TCP   side=FE|BE  mux=PASS  flags=
 none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : none

Available filters :
  [BWLIM] bwlim-in
  [BWLIM] bwlim-out
  [CACHE] cache
  [COMP] compression
  [FCGI] fcgi-app
  [SPOE] spoe
  [TRACE] trace

Best regards,
Sören Hellwig

-Ursprüngliche Nachricht-
Von: Aleksandar Lazic 
Gesendet: Montag, 30. Oktober 2023 17:58
An: Hellwig, Sören ; haproxy@formilux.org
Betreff: [EXT] Re: Question about syslog forwarding with HAProxy with
keeping the client IP

Hi,

On 2023-10-30 (Mo.) 15:55, Hellwig, Sören wrote:

Hello Support-Team,

we are using the HAProxy as load balancer for our Graylog servers.


Which version of HAProxy?

haproxy -vv


The TCP based protocols works fine, but we have some trouble with the
syslog forwarding.

Our configuration file *haproxy.cfg* looks like this:

log-forward syslog

       # accept incomming UDP messages

       dgram-bind 10.1.2.50:514

AW: [EXT] Re: AW: Re: Question about syslog forwarding with HAProxy with keeping the client IP

2023-11-01 Thread Hellwig , Sören
Hello Alex,

I can compile version 2.8.3 from source and install the current release of
the 2.8 LTS branch.

Best regards,
Sören Hellwig

-Ursprüngliche Nachricht-
Von: Aleksandar Lazic  
Gesendet: Mittwoch, 1. November 2023 15:36
An: Hellwig, Sören ; haproxy@formilux.org
Betreff: [EXT] Re: AW: Re: Question about syslog forwarding with HAProxy with 
keeping the client IP



On 2023-11-01 (Mi.) 15:17, Hellwig, Sören wrote:
> Hello Aleksandar,
> 
> thank you for your reply. We are using HAproxy under SLES 15 SP4 and here is 
> the version info:
> 
> srvkdgrllbp01:/etc/haproxy # haproxy -vv HAProxy version 2.8.0-fdd8154 
> 2023/05/31 - https://haproxy.org/
> Status: long-term supported branch - will stop receiving fixes around Q2 2028.
> Known bugs: http://www.haproxy.org/bugs/bugs-2.8.0.html

Uff, that's old. Can you update?
Have you also seen the rest of the answer in the previous mail?

Regards
Alex

> Running on: Linux 5.14.21-150400.24.81-default #1 SMP PREEMPT_DYNAMIC 
> Tue Aug 8 14:10:43 UTC 2023 (90a74a8) x86_64 Build options :
>TARGET  = linux-glibc
>CPU = generic
>CC  = cc
>CFLAGS  = -O2 -g -Wall -Wextra -Wundef -Wdeclaration-after-statement 
> -Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 
> -Wduplicated-cond -Wnull-dereference -fwrapv -Wno-address-of-packed-member 
> -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
> -Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int 
> -Wno-atomic-alignment
>OPTIONS = USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE=1
>DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS
> 
> Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY 

Re: AW: [EXT] Re: Question about syslog forwarding with HAProxy with keeping the client IP

2023-11-01 Thread Aleksandar Lazic




On 2023-11-01 (Wed.) 15:17, Hellwig, Sören wrote:

Hello Aleksandar,

thank you for your reply. We are using HAproxy under SLES 15 SP4 and here is 
the version info:

srvkdgrllbp01:/etc/haproxy # haproxy -vv
HAProxy version 2.8.0-fdd8154 2023/05/31 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2028.
Known bugs: http://www.haproxy.org/bugs/bugs-2.8.0.html


Oof, that's old. Can you update?
Have you also seen the rest of the answer in the previous mail?

Regards
Alex


Running on: Linux 5.14.21-150400.24.81-default #1 SMP PREEMPT_DYNAMIC Tue Aug 8 
14:10:43 UTC 2023 (90a74a8) x86_64
Build options :
   TARGET  = linux-glibc
   CPU = generic
   CC  = cc
   CFLAGS  = -O2 -g -Wall -Wextra -Wundef -Wdeclaration-after-statement 
-Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 
-Wduplicated-cond -Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int 
-Wno-atomic-alignment
   OPTIONS = USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE=1
   DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY +CRYPT_H 
-DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE -LIBATOMIC 
+LIBCRYPT +LINUX_SPLICE +LINUX_TPROXY +LUA +MATH -MEMORY_PROFILING +NETFILTER 
+NS -OBSOLETE_LINKER +OPENSSL -OPENSSL_WOLFSSL -OT +PCRE -PCRE2 -PCRE2_JIT 
-PCRE_JIT +POLL +PRCTL -PROCCTL -PROMEX -PTHREAD_EMULATION -QUIC +RT +SHM_OPEN 
+SLZ +SSL -STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY 
-WURFL -ZLIB

Default settings :
   bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, default=2).
Built with OpenSSL version : OpenSSL 1.1.1l  24 Aug 2021 SUSE release 
SUSE_OPENSSL_RELEASE
Running on OpenSSL version : OpenSSL 1.1.1l  24 Aug 2021 SUSE release 
150400.7.53.1
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.6
Built with network namespace support.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE version : 8.45 2021-06-15
Running on PCRE version : 8.45 2021-06-15
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes
Built with gcc compiler version 7.5.0

Available polling systems :
   epoll : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
         h2 : mode=HTTP  side=FE|BE  mux=H2        flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI      flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1        flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1        flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS      flags=
       none : mode=TCP   side=FE|BE  mux=PASS      flags=NO_UPG

Available services : none

Available filters :
 [BWLIM] bwlim-in
 [BWLIM] bwlim-out
 [CACHE] cache
 [COMP] compression
 [FCGI] fcgi-app
 [SPOE] spoe
 [TRACE] trace

Best regards,
Sören Hellwig

-Original Message-
From: Aleksandar Lazic 
Sent: Monday, 30 October 2023 17:58
To: Hellwig, Sören ; haproxy@formilux.org
Subject: [EXT] Re: Question about syslog forwarding with HAProxy with keeping 
the client IP

Hi,

On 2023-10-30 (Mon.) 15:55, Hellwig, Sören wrote:

Hello Support-Team,

we are using the HAProxy as load balancer for our Graylog servers.


Which version of HAProxy?

haproxy -vv


The TCP based protocols works fine, but we have some trouble with the
syslog forwarding.

Our configuration file *haproxy.cfg* looks like this:

log-forward syslog

      # accept incoming UDP messages

      dgram-bind 10.1.2.50:514 transparent

      # log messages into the ring buffer

      log ring@logbuffer format rfc5424 local0

ring logbuffer

      description "buffer for syslog"

      format rfc5424

      maxlen 1200

      size 32764

      timeout connect 5s

      timeout server 10s

      # send outgoing messages via TCP

      server logserver1 10.1.2.44:1514 log-proto octet-count check

      #server logserver1 10.1.2.44:1514 log-proto octet-count check
source
0.0.0.0 usesrc clientip

The syslog messages are forwarded to the logserver1 10.1.2.44.
Unfortunately some older Cisco switches did not send the hostname or IP
address in the syslog packet.

AW: [EXT] Re: Question about syslog forwarding with HAProxy with keeping the client IP

2023-11-01 Thread Hellwig , Sören
Hello Aleksandar,

thank you for your reply. We are using HAProxy under SLES 15 SP4 and here is 
the version info:

srvkdgrllbp01:/etc/haproxy # haproxy -vv
HAProxy version 2.8.0-fdd8154 2023/05/31 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2028.
Known bugs: http://www.haproxy.org/bugs/bugs-2.8.0.html
Running on: Linux 5.14.21-150400.24.81-default #1 SMP PREEMPT_DYNAMIC Tue Aug 8 
14:10:43 UTC 2023 (90a74a8) x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -Wall -Wextra -Wundef -Wdeclaration-after-statement 
-Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 
-Wduplicated-cond -Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int 
-Wno-atomic-alignment
  OPTIONS = USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE=1
  DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY +CRYPT_H 
-DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE -LIBATOMIC 
+LIBCRYPT +LINUX_SPLICE +LINUX_TPROXY +LUA +MATH -MEMORY_PROFILING +NETFILTER 
+NS -OBSOLETE_LINKER +OPENSSL -OPENSSL_WOLFSSL -OT +PCRE -PCRE2 -PCRE2_JIT 
-PCRE_JIT +POLL +PRCTL -PROCCTL -PROMEX -PTHREAD_EMULATION -QUIC +RT +SHM_OPEN 
+SLZ +SSL -STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY 
-WURFL -ZLIB

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, default=2).
Built with OpenSSL version : OpenSSL 1.1.1l  24 Aug 2021 SUSE release 
SUSE_OPENSSL_RELEASE
Running on OpenSSL version : OpenSSL 1.1.1l  24 Aug 2021 SUSE release 
150400.7.53.1
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.6
Built with network namespace support.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE version : 8.45 2021-06-15
Running on PCRE version : 8.45 2021-06-15
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes
Built with gcc compiler version 7.5.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
         h2 : mode=HTTP  side=FE|BE  mux=H2        flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI      flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1        flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1        flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS      flags=
       none : mode=TCP   side=FE|BE  mux=PASS      flags=NO_UPG

Available services : none

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace

Best regards,
Sören Hellwig

-Original Message-
From: Aleksandar Lazic 
Sent: Monday, 30 October 2023 17:58
To: Hellwig, Sören ; haproxy@formilux.org
Subject: [EXT] Re: Question about syslog forwarding with HAProxy with keeping 
the client IP

Hi,

On 2023-10-30 (Mon.) 15:55, Hellwig, Sören wrote:
> Hello Support-Team,
> 
> we are using the HAProxy as load balancer for our Graylog servers.

Which version of HAProxy?

haproxy -vv

> The TCP based protocols work fine, but we have some trouble with the 
> syslog forwarding.
> 
> Our configuration file *haproxy.cfg* looks like this:
> 
> log-forward syslog
>      # accept incoming UDP messages
>      dgram-bind 10.1.2.50:514 transparent
>      # log messages into the ring buffer
>      log ring@logbuffer format rfc5424 local0
> 
> ring logbuffer
>      description "buffer for syslog"
>      format rfc5424
>      maxlen 1200
>      size 32764
>      timeout connect 5s
>      timeout server 10s
>      # send outgoing messages via TCP
>      server logserver1 10.1.2.44:1514 log-proto octet-count check
>      #server logserver1 10.1.2.44:1514 log-proto octet-count check 
> source
> 0.0.0.0 usesrc clientip
> 
> The syslog messages are forwarded to the logserver1 10.1.2.44. 
> Unfortunately some older Cisco switches did not send the hostname or IP 
> address in the syslog packet.

Re: Question about syslog forwarding with HAProxy with keeping the client IP

2023-10-30 Thread Aleksandar Lazic

Hi,

On 2023-10-30 (Mon.) 15:55, Hellwig, Sören wrote:

Hello Support-Team,

we are using the HAProxy as load balancer for our Graylog servers.


Which version of HAProxy?

haproxy -vv

The TCP based protocols work fine, but we have some trouble with the syslog 
forwarding.

Our configuration file *haproxy.cfg* looks like this:

log-forward syslog
     # accept incoming UDP messages
     dgram-bind 10.1.2.50:514 transparent
     # log messages into the ring buffer
     log ring@logbuffer format rfc5424 local0

ring logbuffer
     description "buffer for syslog"
     format rfc5424
     maxlen 1200
     size 32764
     timeout connect 5s
     timeout server 10s
     # send outgoing messages via TCP
     server logserver1 10.1.2.44:1514 log-proto octet-count check
     #server logserver1 10.1.2.44:1514 log-proto octet-count check source 
0.0.0.0 usesrc clientip


The syslog messages are forwarded to the logserver1 10.1.2.44. Unfortunately 
some older Cisco switches did not send the hostname or IP address in the syslog 
packet.


Is there any chance to route the client IP through the ring buffer to the 
logserver1?


As HAProxy does not handle the syslog protocol, there isn't an option to add 
this info into the syslog message. A possible solution is to use, for these 
specific devices, a syslog receiver like Fluent Bit or rsyslog, which adds the 
information and forwards the log line to HAProxy or to the destination server.


https://docs.fluentbit.io/manual/pipeline/inputs/syslog
https://docs.fluentbit.io/manual/pipeline/filters/record-modifier
https://docs.fluentbit.io/manual/pipeline/outputs

https://www.rsyslog.com/doc/v8-stable/configuration/modules/idx_input.html
https://www.rsyslog.com/doc/v8-stable/configuration/modules/idx_messagemod.html
https://www.rsyslog.com/doc/v8-stable/configuration/modules/idx_output.html

Just some ideas on how to solve the issue.
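As a rough sketch of the Fluent Bit idea above (classic-mode configuration; the exact section keys should be checked against the linked docs, and the listen port and the added `client_ip` record value are made-up placeholders for illustration):

```ini
# Receive syslog over UDP from the legacy devices
[INPUT]
    Name    syslog
    Mode    udp
    Listen  0.0.0.0
    Port    5140

# Stamp every record with an extra field before forwarding
[FILTER]
    Name    record_modifier
    Match   *
    Record  client_ip 10.1.2.60

# Forward as syslog over TCP to HAProxy or the log server
[OUTPUT]
    Name    syslog
    Match   *
    Host    10.1.2.44
    Port    1514
    Mode    tcp
```

Note the record value is static here; running one receiver per device (or per switch group) is one way to tag each source with its address.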

The command *source* is not allowed in the *ring* section.  If I uncomment the 
last line, no data is sent to the logserver1.


Best regards,

Sören Hellwig

Dipl.-Ing. (FH) technische Informatik


Best regards
Alex



Question about syslog forwarding with HAProxy with keeping the client IP

2023-10-30 Thread Hellwig , Sören
Hello Support-Team,

we are using HAProxy as a load balancer for our Graylog servers.
The TCP based protocols work fine, but we have some trouble with the syslog 
forwarding.
Our configuration file haproxy.cfg looks like this:

log-forward syslog
# accept incoming UDP messages
dgram-bind 10.1.2.50:514 transparent

# log message into ring buffer
log ring@logbuffer format rfc5424 local0

ring logbuffer
description "buffer for syslog"
format rfc5424
maxlen 1200
size 32764
timeout connect 5s
timeout server 10s

# send outgoing messages via TCP
server logserver1 10.1.2.44:1514 log-proto octet-count check
#server logserver1 10.1.2.44:1514 log-proto octet-count check source 
0.0.0.0 usesrc clientip

The syslog messages are forwarded to the logserver1 10.1.2.44. Unfortunately 
some older Cisco switches did not send the hostname or IP address in the syslog 
packet.
Is there any chance to route the client IP through the ring buffer to the 
logserver1?
The command source is not allowed in the ring section.  If I uncomment the last 
line, no data is sent to the logserver1.

Best regards,

Sören Hellwig
Dipl.-Ing. (FH) technische Informatik
Abteilung IT Basis - Team Basistechnologien


Universitätsklinikum Hamburg-Eppendorf
Geschäftsbereich Informationstechnologie
Martinistraße 52
Gebäude O36 / 1.OG / Raum 16
20246 Hamburg
Telefon: +49 (0)40 7410-57552
Mobil: +49 (0)152-22837423
s.hell...@uke.de
www.uke.de

--

_

Universitätsklinikum Hamburg-Eppendorf; Körperschaft des öffentlichen Rechts; 
Gerichtsstand: Hamburg | www.uke.de
Vorstandsmitglieder: Prof. Dr. Christian Gerloff (Vorsitzender), Joachim Prölß, 
Prof. Dr. Blanche Schwappach-Pignataro, Marya Verdel
_

SAVE PAPER - THINK BEFORE PRINTING


Quick question for haproxy.org

2023-10-16 Thread Charolette Danzig
Hi there,

I realise you're busy so I'll get straight to the point. I noticed that in your 
article www.haproxy.org/they-use-it.html you linked to vpnroom.com/.

I've got a couple of clients who are interested in getting in on the action - 
how much do you charge for a placement like that?

Best,
Charolette Danzig
ps - if you're not the right person to talk to about this, who should I be 
talking to?
pps - just hit me up with a quick 'unsubscribe' if you don't want to receive 
any more advertising opportunities from me



Re: @Wolfssl: any plans to add "ECH (Encrypted client hello) support" and question about Roadmap

2023-06-01 Thread William Lallemand
On Thu, Jun 01, 2023 at 02:15:57PM +0200, Aleksandar Lazic wrote:
> Hi,
> 
> As we have now a shiny new LTS let's take a look into the future :-)
> 
> As the Wolfssl looks like a good future alternative for OpenSSL is there 
> any plan to add ECH (Encrypted client hello) ( 
> https://github.com/haproxy/haproxy/issues/1924 ) into Wolfssl?
> 
> Is there any Idea which feature is planed to be added by HAProxy Company 
> from the feature requests 
> https://github.com/haproxy/haproxy/labels/type%3A%20feature ?
> 
> Regards
> Alex
> 

As far as I know ECH is still a draft and was not released yet; it looks
like it was already integrated in wolfSSL though:

https://www.wolfssl.com/encrypted-client-hello-ech-now-supported-wolfssl/

But since the RFC is not released yet, their implementation would
probably change.

Also, this probably won't be usable for HAProxy, since we are using the
OpenSSL compatibility layer.

If you want to discuss this, please continue on the HAProxy GitHub
ticket, or we will again split the discussion across multiple support
channels.

-- 
William Lallemand



@Wolfssl: any plans to add "ECH (Encrypted client hello) support" and question about Roadmap

2023-06-01 Thread Aleksandar Lazic

Hi,

As we have now a shiny new LTS let's take a look into the future :-)

As wolfSSL looks like a good future alternative to OpenSSL, is there any 
plan to add ECH (Encrypted Client Hello) ( 
https://github.com/haproxy/haproxy/issues/1924 ) to wolfSSL?


Is there any idea which features are planned to be added by the HAProxy 
Company from the feature requests 
https://github.com/haproxy/haproxy/labels/type%3A%20feature ?


Regards
Alex



Re: question for issue

2023-02-17 Thread Aurelien DARRAGON
Hi Massimiliano,

> i see ,
>
> https://www.haproxy.org/download/2.0/src/
> 

>
>
> but there is not the rpm version for doing
>
> a yum localinstall ?
>
>
> thanx a lot

Using http://haproxy.org/download you will only get access to haproxy
source code for manual compiling.

You might want to take a look at
https://github.com/haproxy/wiki/wiki/Packages if you're looking for
precompiled packages for your system.

Regards,
Aurelien



Re: question for issue

2023-02-17 Thread Massimiliano Toscano


sorry,

I see `make TARGET=generic`, but is there no RPM version for doing a 
`yum localinstall`?

Thanks a lot


regards


Massimiliano Toscano

Supporto Tecnico

Telefono  +39 02 82397643


From: Massimiliano Toscano
Sent: Friday, 17 February 2023 17:31:02
To: haproxy@formilux.org; w...@1wt.eu
Subject: Re: question for issue



i see ,

https://www.haproxy.org/download/2.0/src/


Massimiliano Toscano

Supporto Tecnico

Telefono  +39 02 82397643


From: Massimiliano Toscano
Sent: Friday, 17 February 2023 17:26:42
To: haproxy@formilux.org; w...@1wt.eu
Subject: I: question for issue


Hi


I had written about this issue.


Please, I'm taking a look at https://www.haproxy.org/


Could I receive a suggestion about how to get HAProxy version 2 and install 
it on Red Hat Linux 7.3?


Must I add some repo, or just do a local installation, wgetting the package 
from the website?


No worries about dependencies in case of a local installation?


Thanks a lot

regards


Max.

Massimiliano Toscano

Supporto Tecnico

Telefono  +39 02 82397643



From: Massimiliano Toscano
Sent: Friday, 17 February 2023 11:42
To: wi...@haproxy.org
Subject: question for issue



Hi Team

One favour please: an important issue in production, regarding a new 
multi-SAN wildcard certificate that we have loaded in HAProxy 1.5

on one VM with Red Hat OS 7.3

we have put a new certificate combo
/etc/haproxy/volkswagen_combo.it.pem

this combo is www.volkswagen.it.crt + key + CA

we have created it

and added it in haproxy.cfg, but maybe it is not totally good:

we see form.volkswagen.it with an insecure certificate for some minutes, and 
after some more minutes we see it "secure" with the correct expiry, 16th 
December 2023

how is it possible?

if I try to do `yum update haproxy` I always see just 1.5

but not HAProxy version 2

maybe I have to add another repo for other packages, or can I just do a 
local install of haproxy?

no worries about dependencies?


[root@xbalance prova100]# haproxy -v
HA-Proxy version 1.5.18 2016/05/10


[root@xbalance ]# yum update haproxy
BDB2053 Freeing read locks for locker 0x6d1b: 8947/139808343164800
Loaded plugins: langpacks, product-id, search-disabled-repos, 
subscription-manager

Resolving Dependencies
--> Running transaction check
---> Package haproxy.x86_64 0:1.5.18-3.el7_3.1 will be updated
---> Package haproxy.x86_64 0:1.5.18-9.el7_9.1 will be an update


--> Finished Dependency Resolution

Dependencies Resolved

=
 Package Arch   Version 
Repository  Size
=
Updating:
 haproxy x86_64 
1.5.18-9.el7_9.1rhel-7-server-rpms  
   835 k

Transaction Summary
=
Upgrade  1 Package

Total download size: 835 k



OS
[root@]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)


 CERTIFICATE WE HAVE PUT IS
/etc/haproxy/volkswagen_combo.it.pem crt

bind *:443 ssl crt /etc/haproxy/volkswagen_combo.it.pem crt

[root@pallino-xbalance-xpro prova100]# grep -i ssl /etc/haproxy/haproxy.cfg


Maybe the multi-SAN certificate uses cryptography too strong for HAProxy 1.5?

I know it is not a simple issue;

maybe it is related to the OS, but could you suggest something, please?

thanx a lot

Massimiliano .



Massimiliano Toscano

Supporto Tecnico

Telefono  +39 02 82397643


Re: question for issue

2023-02-17 Thread Massimiliano Toscano

i see ,

https://www.haproxy.org/download/2.0/src/


Massimiliano Toscano

Supporto Tecnico

Telefono  +39 02 82397643




I: question for issue

2023-02-17 Thread Massimiliano Toscano
Hi


I had written about this issue.


Please, I'm taking a look at https://www.haproxy.org/


Could I receive a suggestion about how to get HAProxy version 2 and install 
it on Red Hat Linux 7.3?


Must I add some repo, or just do a local installation, wgetting the package 
from the website?


No worries about dependencies in case of a local installation?


Thanks a lot

regards


Max.

Massimiliano Toscano

Supporto Tecnico

Telefono  +39 02 82397643





Re: Question about the "name" option for the bind line

2023-01-08 Thread Willy Tarreau
On Sun, Jan 08, 2023 at 05:54:46PM -0700, Shawn Heisey wrote:
> On 1/7/23 09:59, Willy Tarreau wrote:
> > Also if you want you can show the IP:ports by adding "stats show-legends"
> > in your stats section. However, be aware that it will also show server IP
> > addresses, configured stickiness cookies etc. Thus only do this if access
> > to your stats page is restricted to authorized users only.
> 
> I have auth configured for my stats URI.  It's open to the world other than
> that.  All the addresses are RFC1918, so they aren't useful unless somebody
> manages to exploit a vulnerability on my server.

It's a matter of choice. Most users don't want to reveal their internal
addresses. For example some might exploit some weaknesses of some of
your servers by prepending an x-forwarded-for showing one of such
addresses and being allowed to perform some operations that are normally
not allowed from outside. Just an example of course, but you get the
idea.

> I did configure show-legends, and after finally figuring out how the info
> was presented (tooltip), I noticed something:  It does not indicate whether
> the listener is TCP or UDP.  The fact that I have "quic" in the name makes
> it easy for me to figure it out, but I do think it should indicate protocol.

Interesting, you're right. When the first UDP listeners were added,
nobody thought about improving the stats output to mention this. But
it seems to me that it would be useful. Maybe we should even try to
reproduce the format as present on the "bind" line. That's something
we need to have a look at.

Willy



Re: Question about the "name" option for the bind line

2023-01-08 Thread Shawn Heisey

On 1/7/23 09:59, Willy Tarreau wrote:

Also if you want you can show the IP:ports by adding "stats show-legends"
in your stats section. However, be aware that it will also show server IP
addresses, configured stickiness cookies etc. Thus only do this if access
to your stats page is restricted to authorized users only.


I have auth configured for my stats URI.  It's open to the world other 
than that.  All the addresses are RFC1918, so they aren't useful unless 
somebody manages to exploit a vulnerability on my server.


I did configure show-legends, and after finally figuring out how the 
info was presented (tooltip), I noticed something:  It does not indicate 
whether the listener is TCP or UDP.  The fact that I have "quic" in the 
name makes it easy for me to figure it out, but I do think it should 
indicate protocol.


Thanks,
Shawn



Re: Question about the "name" option for the bind line

2023-01-07 Thread Shawn Heisey

On 1/7/23 10:35, Shawn Heisey wrote:
I made that change, and it's still not including the alt-svc header on 
the stats page.


Once again bitten by my pacemaker config!  Turns out that it had 
switched the VIP to the other server, which still had the old config.  I 
think one of my haproxy restarts was noticed by pacemaker and it 
declared the server bad.


http-after-response is working.

Thanks,
Shawn



Re: Question about the "name" option for the bind line

2023-01-07 Thread Willy Tarreau
On Sat, Jan 07, 2023 at 10:39:47AM -0700, Shawn Heisey wrote:
> On 1/7/23 09:59, Willy Tarreau wrote:
> > No, you just have one entry per "bind" line. If it's only a matter of
> > listening on multiple host:port and you want them merged, you could
> > probably put all the addresses on the same line separated by commas
> > and see if it's better:
> > 
> >bind quic4@1.1.1.1:443,quic4@2.2.2.2:443,quic4@3.3.3.3:443 ssl crt ...
> 
> That's how I had it configured and it showed three lines on stats.

Ah, then maybe the name is per-listener then, my memory might be
confused. This would mean that when declaring a port range we'd
get one line per port in this case. I stand corrected.

> I now
> have three lines, all with different names.  quic443-vip and quic443-
> for each of the real servers.

OK, thanks for the feedback!

Willy
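For reference, a minimal sketch of the per-bind naming described above (addresses, certificate path and names are placeholders following Shawn's description, not his actual config):

```
frontend web
    # collect one stats line per listening socket, each with its own name
    option socket-stats
    bind quic4@1.1.1.1:443 name quic443-vip   ssl crt /etc/haproxy/site.pem alpn h3
    bind quic4@2.2.2.2:443 name quic443-node1 ssl crt /etc/haproxy/site.pem alpn h3
```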



Re: Question about the "name" option for the bind line

2023-01-07 Thread Shawn Heisey

On 1/7/23 09:59, Willy Tarreau wrote:

No, you just have one entry per "bind" line. If it's only a matter of
listening on multiple host:port and you want them merged, you could
probably put all the addresses on the same line separated by commas
and see if it's better:

   bind quic4@1.1.1.1:443,quic4@2.2.2.2:443,quic4@3.3.3.3:443 ssl crt ...


That's how I had it configured and it showed three lines on stats.  I 
now have three lines, all with different names.  quic443-vip and 
quic443- for each of the real servers.


Thanks,
Shawn



Re: Question about the "name" option for the bind line

2023-01-07 Thread Shawn Heisey

On 1/7/23 10:04, Willy Tarreau wrote:

However there's a solution nowadays, you can use "http-after-response"
instead of "http-response". It will apply *after* the response, and will
happily overwrite stats, redirects and even error pages. I'd say that
it's the recommended way to place alt-svc now. /me just realises that I
should update the example on the haproxy.org main page by the way

I made that change, and it's still not including the alt-svc header on 
the stats page.


Initially I changed every http-response to http-after-response, but that 
didn't work for my error lines like this:


  http-response return status 404 default-errorfiles if { status 404 }

still on haproxy 2.7.1.

I love this community.  Thank you for all your efforts.

Thanks,
Shawn



Re: Question about the "name" option for the bind line

2023-01-07 Thread Willy Tarreau
On Sat, Jan 07, 2023 at 09:57:06AM -0700, Shawn Heisey wrote:
> On 1/7/23 09:41, Shawn Heisey wrote:
> > That's really cool.  But I have an oddity I'd like to share and see if
> > it needs some work.
> 
> Semi-related but separate:  I have this line in that frontend:
> 
>   stats uri /redacted
> 
> Which works well ... but it never switches to http/3.  The alt-svc header
> that the frontend specifies is not in the response, even if I move that
> config before stats.
> 
>   http-response add-header alt-svc 'h3=":443"; ma=7200'
> 
> That's not causing me any problems ... it's not like http/2 is slow. :) But
> I did want to mention it, see if you might want to change it.  I suspect
> that for the stats uri there is a lot of frontend config that is ignored.

Indeed, stats intercept the request and response processing and produce
a response directly. You may even remember that years ago it was not
even possible to support compression nor keep-alive due to this. Nowadays
we have to keep this behavior because of the numerous deployed configs
which assume that http-response apply to incoming responses from servers.

However there's a solution nowadays, you can use "http-after-response"
instead of "http-response". It will apply *after* the response, and will
happily overwrite stats, redirects and even error pages. I'd say that
it's the recommended way to place alt-svc now. /me just realises that I
should update the example on the haproxy.org main page by the way.

Willy
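
[Editorial note: a minimal sketch of the approach described above. The port
and max-age value mirror the header used elsewhere in this thread.]

```
frontend fe_https
    # applied after the response is produced, so it also covers
    # stats, redirects and error pages, unlike http-response
    http-after-response add-header alt-svc 'h3=":443"; ma=7200'
```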



Re: Question about the "name" option for the bind line

2023-01-07 Thread Willy Tarreau
On Sat, Jan 07, 2023 at 09:41:01AM -0700, Shawn Heisey wrote:
> On 1/7/23 07:46, Willy Tarreau wrote:
> > Indeed, you need "option socket-stats" in the frontend that has such
> > listeners, so that the stats are collected per-listening socket (this
> > is not the case by default).
> 
> That's really cool.  But I have an oddity I'd like to share and see if it
> needs some work.
> 
> https://www.dropbox.com/s/m54wp15wkmkrzcp/haproxy_option_socket-stats.png?dl=0
> 
> The bind config line I have for quic lists three host:port combos (the two
> real server IPs and the VIP), which is why it's there three times. I know
> you probably don't want to put info like the host:port on the stats page.
> Because all three host:port combos are part of the same bind line, I think
> it probably should have only listed the name once. Thoughts?

No, you just have one entry per "bind" line. If it's only a matter of
listening on multiple host:port and you want them merged, you could
probably put all the addresses on the same line separated by commas
and see if it's better:

  bind quic4@1.1.1.1:443,quic4@2.2.2.2:443,quic4@3.3.3.3:443 ssl crt ...

Also if you want you can show the IP:ports by adding "stats show-legends"
in your stats section. However, be aware that it will also show server IP
addresses, configured stickiness cookies etc. Thus only do this if access
to your stats page is restricted to authorized users only.

> I will be adjusting that to three bind lines with separate names and the
> same options, because I do like the idea of having them separate ... but I
> think by default it probably should have only showed one line, not three
> with the same name, because it's currently one config line.

No they are precisely 3 config lines, which is why you have 3 stats.
Internally one such line creates what we call a "bind_conf" which
groups a number of settings, and the name is given to this line.

Regards,
Willy



Re: Question about the "name" option for the bind line

2023-01-07 Thread Shawn Heisey

On 1/7/23 09:41, Shawn Heisey wrote:
That's really cool.  But I have an oddity I'd like to share and see if 
it needs some work.


Semi-related but separate:  I have this line in that frontend:

  stats uri /redacted

Which works well ... but it never switches to http/3.  The alt-svc 
header that the frontend specifies is not in the response, even if I 
move that config before stats.


  http-response add-header alt-svc 'h3=":443"; ma=7200'

That's not causing me any problems ... it's not like http/2 is slow. :) 
But I did want to mention it, see if you might want to change it.  I 
suspect that for the stats uri there is a lot of frontend config that is 
ignored.


In case it's important, haproxy -vv:

HAProxy version 2.7.1 2022/12/19 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2024.
Known bugs: http://www.haproxy.org/bugs/bugs-2.7.1.html
Running on: Linux 6.0.0-1009-oem #9-Ubuntu SMP PREEMPT_DYNAMIC Thu Dec 8 
07:13:10 UTC 2022 x86_64

Build options :
  TARGET  = linux-glibc
  CPU = native
  CC  = cc
  CFLAGS  = -O2 -march=native -g -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond 
-Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type 
-Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 
USE_QUIC=1

  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT -PCRE2 
+PCRE2_JIT +POLL +THREAD -PTHREAD_EMULATION +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H 
-ENGINE +GETADDRINFO +OPENSSL -OPENSSL_WOLFSSL -LUA +ACCEPT4 -CLOSEFROM 
+ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL 
+SYSTEMD -OBSOLETE_LINKER +PRCTL -PROCCTL +THREAD_DUMP -EVPORTS -OT 
+QUIC -PROMEX -MEMORY_PROFILING +SHM_OPEN


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, 
default=48).

Built with OpenSSL version : OpenSSL 1.1.1s+quic  1 Nov 2022
Running on OpenSSL version : OpenSSL 1.1.1s+quic  1 Nov 2022
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with network namespace support.
Support for malloc_trim() is enabled.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND

Built with PCRE2 version : 10.39 2021-10-29
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 11.3.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
      quic : mode=HTTP  side=FE     mux=QUIC  flags=HTX|NO_UPG|FRAMED
        h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
      fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
 <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
        h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
 <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
      none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : none

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace

Thanks,
Shawn



Re: Question about the "name" option for the bind line

2023-01-07 Thread Shawn Heisey

On 1/7/23 07:46, Willy Tarreau wrote:

Indeed, you need "option socket-stats" in the frontend that has such
listeners, so that the stats are collected per-listening socket (this
is not the case by default).


That's really cool.  But I have an oddity I'd like to share and see if 
it needs some work.


https://www.dropbox.com/s/m54wp15wkmkrzcp/haproxy_option_socket-stats.png?dl=0

The bind config line I have for quic lists three host:port combos (the 
two real server IPs and the VIP), which is why it's there three times. 
I know you probably don't want to put info like the host:port on the 
stats page.  Because all three host:port combos are part of the same 
bind line, I think it probably should have only listed the name once. 
Thoughts?


I will be adjusting that to three bind lines with separate names and the 
same options, because I do like the idea of having them separate ... but 
I think by default it probably should have only showed one line, not 
three with the same name, because it's currently one config line.


Thanks,
Shawn



Re: Question about the "name" option for the bind line

2023-01-07 Thread Marcel Menzel

Hello Willy,

Am 07/01/2023 um 15:46 schrieb Willy Tarreau:

Hello Marcel,
Indeed, you need "option socket-stats" in the frontend that has such
listeners, so that the stats are collected per-listening socket (this
is not the case by default).


Yes, that was my missing option I was looking for. Thank you for the 
quick response!


Regards,

Marcel



Re: Question about the "name" option for the bind line

2023-01-07 Thread Willy Tarreau
Hello Marcel,

On Sat, Jan 07, 2023 at 03:34:43PM +0100, Marcel Menzel wrote:
> Hello list,
> 
> according to the documentation [1], there is an option to set an optional
> name for sockets that's being displayed on the stats page. I was hoping to
> receive per address family statistics without having to copy & paste a whole
> new frontend block by just setting a different name on a new bind directive:
> 
>  frontend fe_https
>          bind 0.0.0.0:443 name fe_https_ipv4 tfo ssl curves
> X448:X25519:P-256 alpn h2,http/1.1 crt-list /etc/haproxy/certlist.txt
>          bind [::]:443 v6only name fe_https_ipv6 tfo ssl curves
> X448:X25519:P-256 alpn h2,http/1.1 crt-list /etc/haproxy/certlist.txt
> 
> But I am not seeing those names anywhere on the stats page, nor the builtin
> prometheus exporter:
> 
>  listen stats
>    http-request set-log-level silent
>    bind :9000
>     mode http
>    http-request use-service prometheus-exporter if { path /metrics  }
>   stats enable
>   stats admin if TRUE
>   stats realm Haproxy\ Statistics
>    stats uri /
>    stats show-legends
>   stats show-desc
> stats show-node
>    stats show-modules
> 
> Am I missing something here?

Indeed, you need "option socket-stats" in the frontend that has such
listeners, so that the stats are collected per-listening socket (this
is not the case by default).

Regards,
Willy
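
[Editorial note: applied to Marcel's configuration above, the fix is one
extra line in the frontend; the bind options are otherwise unchanged.]

```
 frontend fe_https
         option socket-stats   # collect stats per listening socket
         bind 0.0.0.0:443 name fe_https_ipv4 tfo ssl curves
 X448:X25519:P-256 alpn h2,http/1.1 crt-list /etc/haproxy/certlist.txt
         bind [::]:443 v6only name fe_https_ipv6 tfo ssl curves
 X448:X25519:P-256 alpn h2,http/1.1 crt-list /etc/haproxy/certlist.txt
```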



Question about the "name" option for the bind line

2023-01-07 Thread Marcel Menzel

Hello list,

according to the documentation [1], there is an option to set an 
optional name for sockets that's being displayed on the stats page. I 
was hoping to receive per address family statistics without having to 
copy & paste a whole new frontend block by just setting a different name 
on a new bind directive:


 frontend fe_https
         bind 0.0.0.0:443 name fe_https_ipv4 tfo ssl curves 
X448:X25519:P-256 alpn h2,http/1.1 crt-list /etc/haproxy/certlist.txt
         bind [::]:443 v6only name fe_https_ipv6 tfo ssl curves 
X448:X25519:P-256 alpn h2,http/1.1 crt-list /etc/haproxy/certlist.txt


But I am not seeing those names anywhere on the stats page, nor the 
builtin prometheus exporter:


 listen stats
   http-request set-log-level silent
   bind :9000
    mode http
   http-request use-service prometheus-exporter if { path /metrics  }
  stats enable
  stats admin if TRUE
  stats realm Haproxy\ Statistics
   stats uri /
   stats show-legends
  stats show-desc
stats show-node
   stats show-modules

Am I missing something here?


Kind regards,

Marcel Menzel


1: https://docs.haproxy.org/2.7/configuration.html#5.1-name

Help with a backend server configuration question

2022-11-16 Thread Yujun Wu
Hello,


Could someone help with a backend server question? Our servers are dual

stack machines with both IPv4/IPv6 addresses. For example, one of them has:


IPv4 address 131.225.69.84

IPv6 address 2620:6a:0:4812:f0:0:69:84


If I want the server to accept both IPv4 and IPv6 requests (some of the clients 
only have IPv6 addresses), what should I put in the [IPADDRESS]?


--

backend webdav1

balance roundrobin

mode tcp

server server1 [IPADDRESS]:8000 check

--


Or, I need have two backends, one for IPv4 (webdav1ipv4), another for 
IPv6(webdav1ipv6)?


Thanks in advance for any advice on this.


Regards,
Yujun
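
[Editorial note: this question goes unanswered in this excerpt. One way to
read it, as a sketch rather than advice from the thread: clients connect to
HAProxy's frontend, so dual-stack client support is handled by the bind
line, while the backend server line only needs whichever address HAProxy
itself should use to reach the server.]

```
frontend fe_webdav
    mode tcp
    # a single v4v6 bind accepts both IPv4 and IPv6 clients
    bind :::8000 v4v6
    default_backend webdav1

backend webdav1
    balance roundrobin
    mode tcp
    # either family works here; this is the address HAProxy connects to
    server server1 [2620:6a:0:4812:f0:0:69:84]:8000 check
```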


General application question

2022-07-26 Thread Dave Swinton
One topic I am still trying to wrap my head around in the CFG is that we are 
not binding CPUs and we aren't using nbproc configs. I believe this means we 
are only using a single processor, and that process is using one additional 
thread as a default. If we feel the need, could that processor be run on three 
threads if we use the nbthread config?


David Swinton
RedIron Technologies
Mobile: (925) 864-1783
Email:  dave.swin...@redirontech.com




Re: Download Question

2022-05-02 Thread Aleksandar Lazic
Hi.

On Mon, 2 May 2022 14:44:45 +
Dave Swinton  wrote:

> Do you have a repository for the current releases in RPM? We are currently
> using 1.8 but would like to move to 2.5.x after some internal testing but
> don't see any direct links to an RPM from the download page.

You can build your own version based on this repo.

https://github.com/DBezemer/rpm-haproxy

Regards
Alex

> Thank you.
> 
> David Swinton
> RedIron Technologies
> Mobile: (925) 864-1783
> Email:  dave.swin...@redirontech.com
> 
> 




Download Question

2022-05-02 Thread Dave Swinton
Do you have a repository for the current releases in RPM? We are currently 
using 1.8 but would like to move to 2.5.x after some internal testing but don't 
see any direct links to an RPM from the download page.

Thank you.

David Swinton
RedIron Technologies
Mobile: (925) 864-1783
Email:  dave.swin...@redirontech.com




Re: Stupid question about nbthread and maxconn

2022-04-26 Thread Lukas Tribus
Hello,


> > Let's say we have the following setup.
> >
> > ```
> > maxconn 2
> > nbthread 4
> > ```
> >
> > My understanding is that HAProxy will accept 2 concurrent connections,
> > right? Even when I increase the nbthread will HAProxy *NOT* accept more than
> > 2 concurrent connections, right?

Yes.


> > What confuses me is "maximum per-process" in the maxconn docu part, will 
> > every
> > thread handle the maxconn or is this for the whole HAProxy instance.

Per-process limits apply to processes; they do not apply to threads.

Maxconn is per process. It is NOT per thread.

Multithreading solves those issues.


Lukas
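
[Editorial note: in terms of the original example, a sketch of what this
answer means in a configuration.]

```
global
    maxconn 2    # process-wide cap: at most 2 concurrent connections in total
    nbthread 4   # all 4 threads share that single limit of 2 (not 2 * 4 = 8)
```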



Re: Stupid question about nbthread and maxconn

2022-04-26 Thread Aleksandar Lazic
Hi.

Does anyone have any idea about the question below?

Regards
Alex

On Sat, 23 Apr 2022 11:05:36 +0200
Aleksandar Lazic  wrote:

> Hi.
> 
> I'm not sure if I understand the doc properly.
> 
> https://docs.haproxy.org/2.2/configuration.html#nbthread
> ```
> This setting is only available when support for threads was built in. It
> makes haproxy run on <number> threads. This is exclusive with "nbproc". While
> "nbproc" historically used to be the only way to use multiple processors, it
> also involved a number of shortcomings related to the lack of synchronization
> between processes (health-checks, peers, stick-tables, stats, ...) which do
> not affect threads. As such, any modern configuration is strongly encouraged
> to migrate away from "nbproc" to "nbthread". "nbthread" also works when
> HAProxy is started in foreground. On some platforms supporting CPU affinity,
> when nbproc is not used, the default "nbthread" value is automatically set to
> the number of CPUs the process is bound to upon startup. This means that the
> thread count can easily be adjusted from the calling process using commands
> like "taskset" or "cpuset". Otherwise, this value defaults to 1. The default
> value is reported in the output of "haproxy -vv". See also "nbproc".
> ```
> 
> https://docs.haproxy.org/2.2/configuration.html#3.2-maxconn
> ```
> Sets the maximum per-process number of concurrent connections to <number>. It
> is equivalent to the command-line argument "-n". Proxies will stop accepting
> connections when this limit is reached. The "ulimit-n" parameter is
> automatically adjusted according to this value. See also "ulimit-n". Note:
> the "select" poller cannot reliably use more than 1024 file descriptors on
> some platforms. If your platform only supports select and reports "select
> FAILED" on startup, you need to reduce maxconn until it works (slightly
> below 500 in general). If this value is not set, it will automatically be
> calculated based on the current file descriptors limit reported by the
> "ulimit -n" command, possibly reduced to a lower value if a memory limit
> is enforced, based on the buffer size, memory allocated to compression, SSL
> cache size, and use or not of SSL and the associated maxsslconn (which can
> also be automatic).
> 
> ```
> 
> Let's say we have the following setup.
> 
> ```
> maxconn 2
> nbthread 4
> ```
> 
> My understanding is that HAProxy will accept 2 concurrent connections,
> right? Even when I increase the nbthread will HAProxy *NOT* accept more than
> 2 concurrent connections, right?
> 
> The increasing of nbthread will "only" change that the performance will be
> "better" on a let's say 32 CPU Machine, especially for the upcoming 2.6 :-)
> 
> https://docs.microsoft.com/en-us/azure/virtual-machines/dv3-dsv3-series#dsv3-series
> => Standard_D32s_v3: 32 CPU, 128G RAM
> 
> What confuses me is "maximum per-process" in the maxconn docu part, will every
> thread handle the maxconn or is this for the whole HAProxy instance.
> 
> More mathematically :-O.
> 2 * 4 = 8
> or
> 2 * 4 = 2
> 
> Regards
> Alex
> 




Stupid question about nbthread and maxconn

2022-04-23 Thread Aleksandar Lazic
Hi.

I'm not sure if I understand the doc properly.

https://docs.haproxy.org/2.2/configuration.html#nbthread
```
This setting is only available when support for threads was built in. It
makes haproxy run on <number> threads. This is exclusive with "nbproc". While
"nbproc" historically used to be the only way to use multiple processors, it
also involved a number of shortcomings related to the lack of synchronization
between processes (health-checks, peers, stick-tables, stats, ...) which do
not affect threads. As such, any modern configuration is strongly encouraged
to migrate away from "nbproc" to "nbthread". "nbthread" also works when
HAProxy is started in foreground. On some platforms supporting CPU affinity,
when nbproc is not used, the default "nbthread" value is automatically set to
the number of CPUs the process is bound to upon startup. This means that the
thread count can easily be adjusted from the calling process using commands
like "taskset" or "cpuset". Otherwise, this value defaults to 1. The default
value is reported in the output of "haproxy -vv". See also "nbproc".
```

https://docs.haproxy.org/2.2/configuration.html#3.2-maxconn
```
Sets the maximum per-process number of concurrent connections to <number>. It
is equivalent to the command-line argument "-n". Proxies will stop accepting
connections when this limit is reached. The "ulimit-n" parameter is
automatically adjusted according to this value. See also "ulimit-n". Note:
the "select" poller cannot reliably use more than 1024 file descriptors on
some platforms. If your platform only supports select and reports "select
FAILED" on startup, you need to reduce maxconn until it works (slightly
below 500 in general). If this value is not set, it will automatically be
calculated based on the current file descriptors limit reported by the
"ulimit -n" command, possibly reduced to a lower value if a memory limit
is enforced, based on the buffer size, memory allocated to compression, SSL
cache size, and use or not of SSL and the associated maxsslconn (which can
also be automatic).

```

Let's say we have the following setup.

```
maxconn 2
nbthread 4
```

My understanding is that HAProxy will accept 2 concurrent connections, right?
Even when I increase the nbthread will HAProxy *NOT* accept more than 2
concurrent connections, right?

The increasing of nbthread will "only" change that the performance will be
"better" on a let's say 32 CPU Machine, especially for the upcoming 2.6 :-)

https://docs.microsoft.com/en-us/azure/virtual-machines/dv3-dsv3-series#dsv3-series
=> Standard_D32s_v3: 32 CPU, 128G RAM

What confuses me is "maximum per-process" in the maxconn docu part, will every
thread handle the maxconn or is this for the whole HAProxy instance.

More mathematically :-O.
2 * 4 = 8
or
2 * 4 = 2

Regards
Alex



RE: Question about http compression

2022-02-21 Thread Zakharychev, Bob
>From: Emerson Gomes  
>Sent: Monday, February 21, 2022 2:46 PM
>To: Tom Browder 
>Cc: HAProxy 
>Subject: Re: Question about http compression
>
>Hi,
>
>You're mixing up the concepts of TLS compression and HTTP compression. They 
>are different things.
>Indeed TLS compression is not advised due to security concerns.
>
>However, this has nothing to do with HTTP compression, which is normally done 
>using gzip or brotli algorithms, and specified as "Content-Encoding" on the 
>HTTP header.

Emerson,

With all due respect, please read up on BREACH at the link Lukas Tribus 
provided in response to OP (https://breachattack.com) 
which attacks regular HTTP compression using techniques similar to CRIME attack 
against TLS compression.
Unfortunately it is very much a thing and appears to be only completely 
mitigated by disabling HTTP compression
of potentially vulnerable responses or, if this can't be determined, then all 
responses.

Cheers,
   Bob


Re: Question about http compression

2022-02-21 Thread Emerson Gomes
Hi,

You're mixing up the concepts of TLS compression and HTTP compression. They
are different things.
Indeed TLS compression is not advised due to security concerns.

However, this has nothing to do with HTTP compression, which is normally
done using gzip or brotli algorithms, and specified as "Content-Encoding"
on the HTTP header.

HTTP compression is generally advised when you often provide highly
compressible files (like HTMLs) but keep in mind that it has a CPU cost
noticeable for very intense traffic sites. That's why sometimes you might
want to use HAProxy to compress HTTP responses to offload the CPU cost from
your backend server.

In HAProxy you can use http://www.libslz.org/, which provides ultra-fast
compression with the gzip algorithm.

BR.,
Emerson

Em seg., 21 de fev. de 2022 às 14:26, Tom Browder 
escreveu:

> I'm getting ready to try 2.5 HAProxy on my system and see http compression
> is recommended.
>
> I am running Apache 2.4.52 and have for years tried to keep its TLS
> security as good as possible according to what advice I get from the Apache
> docs and SSL Labs. From those sources I thought https should not use
> compression because of some known exploit, so I'm not currently using it.
> My sites get an A+ rating from SSL Labs testing.
>
> So, not being at all an expert, I plan not to use the compression
> (although I've always wanted to).  Perhaps I'm not as up-to-date as I
> should be (this is a hobby, but it's an important one, although I can't
> spend the time on it I would like to).
>
> Your thoughts and advice are appreciated.
>
> -Tom
>


Re: Question about http compression

2022-02-21 Thread Tom Browder
On Mon, Feb 21, 2022 at 08:21 Lukas Tribus  wrote:

> Hello,
>
>
> On Mon, 21 Feb 2022 at 14:25, Tom Browder  wrote:
> >
> > I'm getting ready to try 2.5 HAProxy on my system
> > and see http compression is recommended.
>
> I'm not sure we are actively encouraging to enable HTTP compression.
> Where did you see this recommendation?


I think I inferred that because I saw no note or warning about the hazards
of http compression.

Thanks, Lukas.

Cheers!

-Tom


Re: Question about http compression

2022-02-21 Thread Lukas Tribus
Hello,


On Mon, 21 Feb 2022 at 14:25, Tom Browder  wrote:
>
> I'm getting ready to try 2.5 HAProxy on my system
> and see http compression is recommended.

I'm not sure we are actively encouraging to enable HTTP compression.
Where did you see this recommendation?


> From those sources I thought https should not use compression
> because of some known exploit, so I'm not currently using it.

You are talking about BREACH [1], and I'm afraid there is no magic fix
 for that. The mitigations on the BREACH website apply.


Lukas


[1] http://www.breachattack.com/#mitigations
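
[Editorial note: one common shape of the BREACH mitigations Lukas links to,
as a sketch rather than advice from this thread: restrict compression to
static content types that never reflect user input or secrets.]

```
frontend fe_https
    mode http
    # compress only static asset types; dynamic HTML that may echo
    # user input or contain secrets stays uncompressed to limit BREACH exposure
    compression algo gzip
    compression type text/css application/javascript image/svg+xml
```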



Question about http compression

2022-02-21 Thread Tom Browder
I'm getting ready to try 2.5 HAProxy on my system and see http compression
is recommended.

I am running Apache 2.4.52 and have for years tried to keep its TLS
security as good as possible according to what advice I get from the Apache
docs and SSL Labs. From those sources I thought https should not use
compression because of some known exploit, so I'm not currently using it.
My sites get an A+ rating from SSL Labs testing.

So, not being at all an expert, I plan not to use the compression (although
I've always wanted to).  Perhaps I'm not as up-to-date as I should be (this
is a hobby, but it's an important one, although I can't spend the time on
it I would like to).

Your thoughts and advice are appreciated.

-Tom


Newbie question

2022-02-19 Thread Tom Browder
I am running a single Apache httpd server (2.4.52) with multiple virtual
sites, all under TLS with individual Let's Encrypt certs using Apache's
managed domain feature. The setup has worked well for years (mostly static,
but some using CGI).

Now I want to be able to use a reverse proxy to enable the https data
received on port 443 to be:

+ decrypted using the appropriate domain's certs
+ sent to a unique port for its domain
+ have a Raku (formerly Perl 6) script take care of the backend business
+ re-encrypt the response
+ send the https response back to the client

Is that possible using HAProxy on a single server?

Thanks,

-Tom


quick question

2022-01-25 Thread Adam Carter
I noticed you shared an article from W2.org when you talked about HTTP
status codes here: http://www.formilux.org/archives/haproxy/1004/3312.html


I've read that piece from W2.org, and I thought it was pretty good. But we
recently published an article that’s much better.


There are a lot of tools to help you figure out the HTTP status codes of
any particular page or site – but, ours is the only one that really
explains how you can use these codes to improve your SEO.  For example, did
you know that you may want to consider status codes when accomplishing
these goals?

   - Migrating a Site
   - Debugging a page
   - Doing competitive research
   - Checking whether you’re incorrectly restricting Googlebot’s access to
     your webpage?


Here's the article if you want to take a look:
https://www.clickintelligence.co.uk/header-response-checker/



Would you consider sharing our article with your readers by linking to it?



Thank you, and please don’t hesitate to reach out if you have any questions.


Take care,

Adam

--
Adam Carter
5 Ross Rd
Durham, NH 03824

BTW, if you didn't like getting this email, please reply with something
like "please don't email me anymore", and I'll make sure that we don't.


Re: Maybe stupid question but should "maxconn 0" work?

2021-12-02 Thread Aleksandar Lazic

On 02.12.21 15:12, Frank Wall wrote:

On 2021-12-02 02:16, Aleksandar Lazic wrote:

I try to test some limits with peers and wanted to test "maxconn 0"
before I start with the peers.
Should "maxconn 0" work?
I expect to get connection refused or similar and a 500 in the log
but both curls get a 200


Maybe I got your question wrong, but "maxconn 0" is not supposed to block
all connections:

   The default value is "0" which means unlimited.
(http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#maxconn%20(Server%20and%20default-server%20options)


Thanks Frank for the answer.
So the answer to my question is "Yes, it's a stupid question because RTFM!" :-)


Regards
- Frank


Best regards
Alex




Re: Maybe stupid question but should "maxconn 0" work?

2021-12-02 Thread Frank Wall

On 2021-12-02 02:16, Aleksandar Lazic wrote:

I try to test some limits with peers and wanted to test "maxconn 0"
before I start with the peers.
Should "maxconn 0" work?
I expect to get connection refused or similar and a 500 in the log
but both curls get a 200


Maybe I got your question wrong, but "maxconn 0" is not supposed to 
block

all connections:

  The default value is "0" which means unlimited.
  
(http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#maxconn%20(Server%20and%20default-server%20options)



Regards
- Frank



Maybe stupid question but should "maxconn 0" work?

2021-12-01 Thread Aleksandar Lazic



Hi.

I try to test some limits with peers and wanted to test "maxconn 0" before I 
start with the peers.
Should "maxconn 0" work?
I expect to get connection refused or similar and a 500 in the log but both 
curls get a 200

```
# curl -v http://127.0.0.1:8080/; curl -v http://127.0.0.1:8080/
```

```
podman exec haproxy-dest haproxy -vv
HAProxy version 2.4.8-d1f8d41 2021/11/03 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2026.
Known bugs: http://www.haproxy.org/bugs/bugs-2.4.8.html
Running on: Linux 5.11.0-40-generic #44~20.04.2-Ubuntu SMP Tue Oct 26 18:07:44 
UTC 2021 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv 
-Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare 
-Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers 
-Wno-cast-function-type -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 
-Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1 
USE_LUA=1 USE_PROMEX=1
  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT 
+POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H 
+GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM -ZLIB +SLZ +CPU_AFFINITY 
+TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD -OBSOLETE_LINKER 
+PRCTL -PROCCTL +THREAD_DUMP -EVPORTS -OT -QUIC +PROMEX -MEMORY_PROFILING

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=12).
Built with OpenSSL version : OpenSSL 1.1.1k  25 Mar 2021
Running on OpenSSL version : OpenSSL 1.1.1k  25 Mar 2021
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with the Prometheus exporter as a service
Built with network namespace support.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE2 version : 10.36 2020-12-04
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 10.2.1 20210110

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
        h2 : mode=HTTP   side=FE|BE mux=H2    flags=HTX|CLEAN_ABRT|HOL_RISK|NO_UPG
      fcgi : mode=HTTP   side=BE    mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
 <default> : mode=HTTP   side=FE|BE mux=H1    flags=HTX
        h1 : mode=HTTP   side=FE|BE mux=H1    flags=HTX|NO_UPG
 <default> : mode=TCP    side=FE|BE mux=PASS  flags=
      none : mode=TCP    side=FE|BE mux=PASS  flags=NO_UPG

Available services : prometheus-exporter
Available filters :
[SPOE] spoe
[CACHE] cache
[FCGI] fcgi-app
[COMP] compression
[TRACE] trace
```

Haproxy config
```
global
log stdout format short daemon debug
maxconn 0

defaults
timeout connect 1s
timeout server 5s
timeout client 5s

frontend http
mode http
log global
log-format "[%tr] %ST %B %CC %CS %tsc %hr %hs %{+Q}r"
declare capture response len 4

bind :::8080 v4v6

default_backend nginx

listen nginx
mode http
bind :::8081

http-request return status 200 content-type text/plain string "static" hdr x-host "%[req.hdr(host)]"
```

Regards
Alex



Re: FW: Question regarding backend connection rates

2021-11-22 Thread Willy Tarreau
Hi Dominik,

On Mon, Nov 22, 2021 at 10:31:15AM +, Froehlich, Dominik wrote:
> For ongoing connections (not total), the stats page shows a tooltip stating
> 
> 
>   *   Current Active Connections
>   *   Current Used Connections
>   *   Current Idle Connections (broken down into safe and unsafe idle 
> connections)
> 
> What is the difference between active and used connections? Which number
> combined with idle connections reflects the current number of open
> connections on the OS level? (i.e. using resources like fds, buffers, ports)

You had me look at the code to figure the exact detail :-/

They're not computed the same way, but I think this results in them
being equivalent nowadays:
  - active is server->cur_sess, which is incremented whenever we need to
establish a connection to a server, possibly after leaving the queue;

  - used is the number of server connections in use, which is maintained
at the idle-pool layer for statistics.

Functionally speaking, even though they are not computed at the same place
or for the same reasons, I think they're the same; except for slight
timing differences in rare cases in my tests, they always matched.
Previously, when idle connections couldn't be shared between threads, it
was more complicated, as one of them would be a sum of values that were
not necessarily usable. That's probably something we need to consider
cleaning up in future versions.

> My ultimate goal is to answer the question "how loaded is this machine?" vs.
> a limit of open connections.

Got it. Then use either (or focus on active which is the historical one).
It's incremented only when under use. And use "used+idle" to know the number
of established connections at the OS level.
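To make the arithmetic concrete, here is a minimal Python sketch of how these gauges relate; the parameter names are illustrative, not HAProxy's actual stats field names:

```python
def connection_summary(active, used, idle_safe, idle_unsafe):
    """Relate the stats-page gauges as described above.

    'active' tracks in-use server sessions (srv->cur_sess), while
    'used + idle' approximates connections open at the OS level
    (fds, buffers, ports).
    """
    idle = idle_safe + idle_unsafe
    return {
        "load": active,           # how loaded the server is
        "os_level": used + idle,  # resources consumed on the machine
    }

print(connection_summary(active=120, used=118, idle_safe=30, idle_unsafe=5))
```

Under Willy's explanation, "load" answers the "how loaded is this machine?" question, while "os_level" bounds resource usage.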

> What's the difference between safe and unsafe idle connections? Is it related
> to the http-reuse directive, e.g. private vs. non-private reusable
> connections?

Yes, that's part of it. A safe connection is one which has proven that it was
reusable (it got reused at least once) and on which we're OK with sending the
first request of a connection, because it's reasonably safe. An unsafe
connection is one that has processed only 0 or 1 requests. When you use
"http-reuse always", this makes no difference; both are always used.

Willy



Re: FW: Question regarding backend connection rates

2021-11-22 Thread Froehlich, Dominik
Hi Willy,

Thanks for the response, yes I think that clarifies the rates for me.

I have another question you probably could help me with:

For ongoing connections (not total), the stats page shows a tooltip stating


  *   Current Active Connections
  *   Current Used Connections
  *   Current Idle Connections (broken down into safe and unsafe idle 
connections)

What is the difference between active and used connections? Which number 
combined with idle connections reflects the current number of open connections 
on the OS level? (i.e. using resources like fds, buffers, ports)
My ultimate goal is to answer the question “how loaded is this machine?” vs. a 
limit of open connections.

What’s the difference between safe and unsafe idle connections? Is it related 
to the http-reuse directive, e.g. private vs. non-private reusable connections?

Thank you so much,
D

From: Willy Tarreau 
Date: Saturday, 20. November 2021 at 10:01
To: Froehlich, Dominik 
Cc: haproxy@formilux.org 
Subject: Re: FW: Question regarding backend connection rates
Hi Dominik,

On Fri, Nov 19, 2021 at 08:42:40AM +, Froehlich, Dominik wrote:
> However, the number of "current sessions" at the backend is almost 0 all the
> time (between 0 and 5, the number of servers). When I look at the "total
> sessions" at the backend after the test, it tells me that 99% of connections
> have been reused. So in my book, when a connection is reused no new
> connection needs to be opened, that's why I am so stumped about the backend
> session rate. If 99% of sessions are reused, why is the rate of new sessions
> not 0?

This is because the "sessions" counter indicates the number of sessions
that used this backend. Sessions are idle in the frontend waiting for a
new request, and once the request is analysed by the frontend rules, it's
routed to a backend, at which point the counter is incremented. As such,
in a backend you'll essentially see as many sessions as requests.

The "new connections" counter, that was added after keep-alive support
was introduced many years ago will, however, indicate the real number of
new connections that had to be established to servers. And this is the
same for each "server" line by the way.
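Given these definitions, the reuse ratio can be recomputed from the two counters; a hedged Python sketch (counter names are illustrative):

```python
def reuse_rate(backend_sessions, new_connections):
    # Each request routed to the backend counts as a session, while
    # only actual TCP connects to servers count as new connections,
    # so the gap between the two is connection reuse.
    if backend_sessions == 0:
        return 0.0
    return 1.0 - new_connections / backend_sessions

# Roughly matches the ~99% reuse observed in the test above.
print(round(reuse_rate(10_000, 100), 2))  # 0.99
```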

I've sometimes wondered whether it could make sense to change the
"sessions/total" column in the stats page to show the number of new
connections instead, but it seems to me that it would not bring much
value and would only confuse existing users. Given that in both cases
one has to hover over the field to get the details, it would not help
much, I guess.

Hoping this helps,
Willy


Re: Hello HAProxy Team! I have a question about one of your posts...

2021-09-20 Thread Vanessa Diwa
Hey there!

Hope all is well, it's me again.

I sent you an email a few days ago but never heard back. Will you be
updating your post anytime soon? Would you be open to proposals, say adding
our link as an additional resource to your post?

Here is our article for your reference:
https://www.toptal.com/kubernetes/service-mesh-comparison

Hope to hear from you soon!

Cheers,
Vanessa




Re: Hello HAProxy Team! I have a question about one of your posts...

2021-09-12 Thread Vanessa Diwa
Hi HAProxy Team,

I reached out a few days back regarding your remarkable post. Just checking
in to see if you have received my previous email regarding your article
https://www.haproxy.com/blog/power-your-consul-service-mesh-with-haproxy/
since I haven't received any feedback.

I would love to know if you are interested in adding our article
https://www.toptal.com/kubernetes/service-mesh-comparison as a resource to
your post?

You see we are hoping you can add our article as an additional resource to
your article. I believe our article goes well together and that it will
benefit your readers to know more about the article topic. One of Toptal's
goals is to provide more helpful information about the topic to others by
connecting our articles to other expert's articles like yours :)

Also, one of the most interesting benefits for you when adding our link
would be the fact that Google loves freshly updated content and tend to
rank such articles better.  As a result, this could help increase the
ranking of your article for this topic -

   - Links are Important – Linking out to pages along the same subject
   matter will give an enormous boost to your Google rankings.
   - External links ARE good for SEO and it’s widely accepted that external
   links are one of the most important metrics for high-position ranking.
   - External links are the most important source of ranking power.
   - Valuable external links will also help to improve the authority of
   your website, by providing a viewer with references.


Looking forward to hearing from you!


Cheers,
Vanessa




Hello HAProxy Team! I have a question about one of your posts...

2021-09-08 Thread Vanessa Diwa
Hi HAProxy Team,

Vanessa from Toptal here! I promise this will be quick. I just read one of
your posts -
https://www.haproxy.com/blog/power-your-consul-service-mesh-with-haproxy/
and I was hoping that we could connect.

I totally agree when you mention that Kubernetes makes configuring a
service mesh easier tactically because you can run multiple containers
inside a single pod, which is often referred to as running sidecar
containers.

When discussing microservices architecture and containerization, one set of
production-proven tools has captured most of the attention in recent years:
the service mesh.

Would you be interested in adding this as an additional resource to your
article? It will be a great update especially to your readers that are
interested in the Kubernetes service.

Here is the link for your reference:
https://www.toptal.com/kubernetes/service-mesh-comparison

Looking forward to hearing your thoughts on this and stay safe always.

Regards,
Vanessa


Re: question: ExecStartPre removal from systemd unit file

2021-08-19 Thread Tim Düsterhus

William,

On 8/19/21 1:50 PM, William Lallemand wrote:

The config check should prevent HAProxy from going into wait mode when
the config is bad on a reload. If I am not mistaken it's not possible to
recover from wait mode without a full restart, no?



Well, this line is not used for the reload, but only for the start. For
the reload the first ExecReload line is used:


Ah, yes, indeed. I should have checked instead of just assuming stuff. 
Sorry! I don't see an issue with your proposed change then.


Best regards
Tim Düsterhus



Re: question: ExecStartPre removal from systemd unit file

2021-08-19 Thread William Lallemand
Hi Tim,

On Thu, Aug 19, 2021 at 12:22:25PM +0200, Tim Düsterhus wrote:
> 
> The config check should prevent HAProxy from going into wait mode when 
> the config is bad on a reload. If I am not mistaken it's not possible to 
> recover from wait mode without a full restart, no?
> 

Well, this line is not used for the reload, but only for the start. For
the reload the first ExecReload line is used:

ExecReload=@SBINDIR@/haproxy -Ws -f $CONFIG -c -q $EXTRAOPTS
ExecReload=/bin/kill -USR2 $MAINPID
 
The purpose of this reload line is only to return a status code to
systemd during a reload, because kill can't achieve that. It's not
really a problem to be in "wait" mode: if you do a reload again with a
working configuration, it will return to a normal state. The wait mode is
just a state where the master only supervises the previous workers and
cannot fork new workers.

-- 
William Lallemand



Re: question: ExecStartPre removal from systemd unit file

2021-08-19 Thread Tim Düsterhus

William,

On 8/19/21 12:04 PM, William Lallemand wrote:

I realized yesterday that we have this line in the systemd unit file:

 ExecStartPre=@SBINDIR@/haproxy -f $CONFIG -c -q $EXTRAOPTS

This does not make any sense to me, since starting HAProxy itself
checks the configuration, so it slows the start of the service for
nothing.

I'm going to remove this line.

Is there anyone against it, or did I miss a particular usecase?


The config check should prevent HAProxy from going into wait mode when 
the config is bad on a reload. If I am not mistaken it's not possible to 
recover from wait mode without a full restart, no?


Best regards
Tim Düsterhus



question: ExecStartPre removal from systemd unit file

2021-08-19 Thread William Lallemand
Hi List,

I realized yesterday that we have this line in the systemd unit file:

ExecStartPre=@SBINDIR@/haproxy -f $CONFIG -c -q $EXTRAOPTS

This does not make any sense to me, since starting HAProxy itself
checks the configuration, so it slows the start of the service for
nothing.

I'm going to remove this line.

Is there anyone against it, or did I miss a particular usecase?

Thanks,

-- 
William Lallemand



Re: [PATCH] DOC: Minor typo fix - 'question mark' -> 'exclamation mark'

2021-08-17 Thread Willy Tarreau
looks good, now applied, thank you!
Willy



[PATCH] DOC: Minor typo fix - 'question mark' -> 'exclamation mark'

2021-08-16 Thread Kunal
From: Kunal Gangakhedkar 

Signed-off-by: Kunal Gangakhedkar 
---
 doc/configuration.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 556e97731..0ee901c04 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -813,8 +813,8 @@ expression made of any combination of:
   - a non-nul integer (e.g. '1'), always returns "true".
   - a predicate optionally followed by argument(s) in parenthesis.
   - a condition placed between a pair of parenthesis '(' and ')'
-  - a question mark ('!') preceding any of the non-empty elements above, and
-which will negate its status.
+  - an exclamation mark ('!') preceding any of the non-empty elements above,
+and which will negate its status.
   - expressions combined with a logical AND ('&&'), which will be evaluated
 from left to right until one returns false
   - expressions combined with a logical OR ('||'), which will be evaluated
-- 
2.25.1
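As a hedged illustration of the configuration predicates this doc hunk describes (the frontend name and SKIP_STATS predicate are made up for the example), the corrected exclamation mark negates a predicate like this:

```
.if !defined(SKIP_STATS)
    # This block is parsed only when the SKIP_STATS environment
    # variable is NOT defined (the '!' negates the predicate).
    frontend stats_fe
        mode http
        bind :8404
        stats enable
        stats uri /stats
.endif
```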




Re: Question about available fetch-methods for http-request

2021-08-12 Thread Maya Lena Ayleen Scheu
Thank you so much Jarno, 

the word converter did exactly what I needed. Works really well :) 

- Maya

> On 12. Aug 2021, at 08:10, Jarno Huuskonen  wrote:
> 
> Hello,
> 
> On 8/12/21 8:59 AM, Maya Lena Ayleen Scheu wrote:
>> Your solution would work if I had only one static context path. The tricky 
>> thing is that I would like to have it dynamic, so that the word between the 
>> first two “/” always becomes the subdomain if a certain condition is true.
>> That's where I am stuck: I don't know how to grab that information and put it 
>> in front of my domain without being able to use the path_reg method.
> 
> Take a look at field,word and regsub:
> http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.3.1-field
> http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.3.1-regsub
> http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.3.1-word
> 
> And maybe path, variables and concat with field,word.
> 
> regsub probably can modify whole url to context_path.domain.com host header.
> 
> -Jarno
> 
>> Best Regards, Maya
>>> On 12. Aug 2021, at 04:15, Igor Cicimov <ig...@encompasscorporation.com> wrote:
>>> 
>>> Hi Maya,
>>> 
>>> Maybe try this:
>>> 
>>> http-request set-header Host context_path.ms.example.com if { path_beg /context_path } { hdr(Host) -i example.com }
>>> 
>>> From: Maya Lena Ayleen Scheu <maya.sc...@check24.de>
>>> Sent: Wednesday, August 11, 2021 9:58 PM
>>> To: haproxy@formilux.org
>>> Subject: Question about available fetch-methods for http-request
>>> Hi there,
>>> 
>>> I have some questions regarding Haproxy Configuration in Version HA-Proxy 
>>> version 2.0.23, which is not clear by reading the official documentation. I 
>>> hope you would have some ideas how this could be solved.
>>> 
>>> 
>>> *What I wish to accomplish:*
>>> 
>>> A frontend application is called by an url with a context path in it.
>>> Haproxy should set a Header in the backend section with `http-request 
>>> set-header Host` whereas the set Host contains the context_path found in 
>>> the url-path. I try to make it clear with an example:
>>> 
>>> The called url looks like: `https://example.com/context_path/abc/etc`
>>> Out of this url I would need to set the following Host Header: 
>>> `context_path.ms.example.com`, while the path remains 
>>> `/context_path/abc/etc`
>>> 
>>> While I find many fetch-examples for ACLs, I had to learn that most of them 
>>> don’t work on `http-request set-header or set-env`. I tried to use 
>>> `path_beg` or `path_reg`, which parses with errors, that the fetch method 
>>> is unknown.
>>> 
>>> So something like this doesn’t work:
>>> `http-request set-header Host %[path_reg(...)].ms.example.domain.com if host_example`
>>> 
>>> or this:
>>> `http-request set-var(req.url_context) path_beg,lower if host_example`
>>> 
>>> *Question:*
>>> 
>>> I am certain that this should somehow be possible, as I found even 
>>> solutions to set variables or Headers by urlp, cookies, etc.
>>> What would be the explanation, why fetch methods like path_beg are not 
>>> available in this context? And how to work around it?
>>> 
>>> Thank you in advance and best regards,
>>> Maya Scheu
>>> *Know Your Customer due diligence on demand, powered by intelligent process 
>>> automation*
>>> Blogs <https://www.encompasscorporation.com/blog/> | LinkedIn 
>>> <https://www.linkedin.com/company/encompass-corporation/> | Twitter 
>>> <https://twitter.com/EncompassCorp>
>>> Encompass Corporation UK Ltd | Company No. SC493055 | Address: Level 3, 33 
>>> Bothwell Street, Glasgow, UK, G2 6NL
>>> Encompass Corporation Pty Ltd | ACN 140 556 896 | Address: Level 10, 117 
>>> Clarence Street, Sydney, New South Wales, 2000
>>> This email and any attachments is intended only for the use of the 
>>> individual or entity named above and may contain confidential information
>>> If you are not the intended recipient, any dissemination, distribution or 
>>> copying of this email is prohibited.
>>> If received in error, please notify us immediately by return email and 
>>> destroy the original message.
> 
> -- 
> Jarno Huuskonen



Re: Question about available fetch-methods for http-request

2021-08-11 Thread Jarno Huuskonen

Hello,

On 8/12/21 8:59 AM, Maya Lena Ayleen Scheu wrote:
Your solution would work if I had only one static context path. The 
tricky thing is that I would like to have it dynamic, so that the word 
between the first two “/” always becomes the subdomain if a certain 
condition is true.
That's where I am stuck: I don't know how to grab that information and 
put it in front of my domain without being able to use the path_reg method.


Take a look at field,word and regsub:
http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.3.1-field
http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.3.1-regsub
http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.3.1-word

And maybe path, variables and concat with field,word.

regsub probably can modify whole url to context_path.domain.com host header.
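A hedged config sketch of the word-converter approach suggested here (the ms.example.com domain and the host condition are taken from Maya's example, not tested):

```
# Take the first "/"-separated path segment and promote it to a
# subdomain, e.g. /context_path/abc -> context_path.ms.example.com.
http-request set-header Host %[path,word(1,/)].ms.example.com if { hdr(host) -i example.com }
```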

-Jarno



Best Regards, Maya

On 12. Aug 2021, at 04:15, Igor Cicimov <ig...@encompasscorporation.com> wrote:

Hi Maya,

Maybe try this:

http-request set-header Host context_path.ms.example.com if { path_beg /context_path } { hdr(Host) -i example.com }

From: Maya Lena Ayleen Scheu <maya.sc...@check24.de>
Sent: Wednesday, August 11, 2021 9:58 PM
To: haproxy@formilux.org
Subject: Question about available fetch-methods for http-request
Hi there,

I have some questions regarding Haproxy Configuration in Version 
HA-Proxy version 2.0.23, which is not clear by reading the official 
documentation. I hope you would have some ideas how this could be solved.



*What I wish to accomplish:*

A frontend application is called by an url with a context path in it.
Haproxy should set a Header in the backend section with `http-request 
set-header Host` whereas the set Host contains the context_path found 
in the url-path. I try to make it clear with an example:


The called url looks like: `https://example.com/context_path/abc/etc`
Out of this url I would need to set the following Host Header: 
`context_path.ms.example.com`, while the path 
remains `/context_path/abc/etc`


While I find many fetch-examples for ACLs, I had to learn that most of 
them don’t work on `http-request set-header or set-env`. I tried to 
use `path_beg` or `path_reg`, which parses with errors, that the fetch 
method is unknown.


So something like this doesn’t work:
`http-request set-header Host %[path_reg(...)].ms.example.domain.com if host_example`


or this:
`http-request set-var(req.url_context) path_beg,lower if host_example`

*Question:*

I am certain that this should somehow be possible, as I found even 
solutions to set variables or Headers by urlp, cookies, etc.
What would be the explanation, why fetch methods like path_beg are not 
available in this context? And how to work around it?


Thank you in advance and best regards,
Maya Scheu




--
Jarno Huuskonen



Re: Question about available fetch-methods for http-request

2021-08-11 Thread Maya Lena Ayleen Scheu
Hi Igor,

thank you for your reply.

Your solution would work if I had only one static context path. The tricky 
thing is that I would like to have it dynamic, so that the word between the 
first two “/” always becomes the subdomain if a certain condition is true.
That's where I am stuck: I don't know how to grab that information and put it 
in front of my domain without being able to use the path_reg method.

Best Regards, Maya

On 12. Aug 2021, at 04:15, Igor Cicimov <ig...@encompasscorporation.com> wrote:

Hi Maya,

Maybe try this:

http-request set-header Host context_path.ms.example.com if { path_beg /context_path } { hdr(Host) -i example.com }

From: Maya Lena Ayleen Scheu <maya.sc...@check24.de>
Sent: Wednesday, August 11, 2021 9:58 PM
To: haproxy@formilux.org
Subject: Question about available fetch-methods for http-request

Hi there,

I have some questions regarding Haproxy Configuration in Version HA-Proxy 
version 2.0.23, which is not clear by reading the official documentation. I 
hope you would have some ideas how this could be solved.


What I wish to accomplish:

A frontend application is called by an url with a context path in it.
Haproxy should set a Header in the backend section with `http-request 
set-header Host` whereas the set Host contains the context_path found in the 
url-path. I try to make it clear with an example:

The called url looks like: `https://example.com/context_path/abc/etc`
Out of this url I would need to set the following Host Header: 
`context_path.ms.example.com`, while the path remains 
`/context_path/abc/etc`

While I find many fetch-examples for ACLs, I had to learn that most of them 
don’t work on `http-request set-header or set-env`. I tried to use `path_beg` 
or `path_reg`, which parses with errors, that the fetch method is unknown.

So something like this doesn’t work:
`http-request set-header Host %[path_reg(...)].ms.example.domain.com if host_example`

or this:
`http-request set-var(req.url_context) path_beg,lower if host_example`

Question:

I am certain that this should somehow be possible, as I found even solutions to 
set variables or Headers by urlp, cookies, etc.
What would be the explanation, why fetch methods like path_beg are not 
available in this context? And how to work around it?

Thank you in advance and best regards,
Maya Scheu



Re: Question about available fetch-methods for http-request

2021-08-11 Thread Igor Cicimov
Hi Maya,

Maybe try this:

http-request set-header Host context_path.ms.example.com if { path_beg 
/context_path } { hdr(Host) -i example.com }

From: Maya Lena Ayleen Scheu 
Sent: Wednesday, August 11, 2021 9:58 PM
To: haproxy@formilux.org 
Subject: Question about available fetch-methods for http-request

Hi there,

I have some questions regarding Haproxy Configuration in Version HA-Proxy 
version 2.0.23, which is not clear by reading the official documentation. I 
hope you would have some ideas how this could be solved.


What I wish to accomplish:

A frontend application is called by an url with a context path in it.
Haproxy should set a Header in the backend section with `http-request 
set-header Host` whereas the set Host contains the context_path found in the 
url-path. I try to make it clear with an example:

The called url looks like: `https://example.com/context_path/abc/etc`
Out of this url I would need to set the following Host Header: 
`context_path.ms.example.com`, while the path remains `/context_path/abc/etc`

While I find many fetch-examples for ACLs, I had to learn that most of them 
don’t work on `http-request set-header or set-env`. I tried to use `path_beg` 
or `path_reg`, which parses with errors, that the fetch method is unknown.

So something like this doesn’t work:
`http-request set-header Host %[path_reg(...)].ms.example.domain.com if host_example`

or this:
`http-request set-var(req.url_context) path_beg,lower if host_example`

Question:

I am certain that this should somehow be possible, as I found even solutions to 
set variables or Headers by urlp, cookies, etc.
What would be the explanation, why fetch methods like path_beg are not 
available in this context? And how to work around it?

Thank you in advance and best regards,
Maya Scheu







Question about available fetch-methods for http-request

2021-08-11 Thread Maya Lena Ayleen Scheu
Hi there,

I have some questions regarding Haproxy Configuration in Version HA-Proxy 
version 2.0.23, which is not clear by reading the official documentation. I 
hope you would have some ideas how this could be solved.


What I wish to accomplish:

A frontend application is called by a URL with a context path in it.
HAProxy should set a header in the backend section with `http-request
set-header Host`, where the set Host contains the context_path found in the
URL path. I'll try to make it clear with an example:

The called URL looks like: `https://example.com/context_path/abc/etc`
Out of this URL I would need to set the following Host header:
`context_path.ms.example.com`, while the path remains `/context_path/abc/etc`.

While I found many fetch examples for ACLs, I had to learn that most of them
don't work in `http-request set-header` or `set-env`. I tried to use `path_beg`
or `path_reg`, which fails to parse with errors saying the fetch method is unknown.

So something like this doesn’t work:
`http-request set-header Host %[path_reg(...)].ms.example.domain.com if host_example`

or this:
`http-request set-var(req.url_context) path_beg,lower if host_example`

Question:

I am certain that this should somehow be possible, as I even found solutions
that set variables or headers by urlp, cookies, etc.
What would be the explanation for why fetch methods like path_beg are not
available in this context? And how can this be worked around?
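
[Editor's note] A possible workaround, as an untested sketch: `path_beg`/`path_reg`
are ACL matching variants, not sample fetches, so they cannot appear inside
`%[...]` or `set-var`. The plain `path` fetch combined with converters such as
`field` and `lower` can extract the first path segment. Names like `host_example`
and the listen addresses are assumptions taken from the question above.

```
frontend fe
    bind :8080
    acl host_example req.hdr(Host) -i example.com
    # first path segment of /context_path/abc/etc -> "context_path"
    http-request set-var(req.url_context) path,field(2,/),lower if host_example
    # build the Host header from the extracted segment
    http-request set-header Host %[var(req.url_context)].ms.example.com if host_example
    default_backend be
```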

Thank you in advance and best regards,
Maya Scheu


Question About ev_epoll.c:_do_fork()

2021-07-14 Thread Kazuhiro Takenaka
Hello

I am reading ev_epoll.c:_do_fork() of HAProxy-1.8.30.

In its comment, the following sentence appears.

  If it fails, it disables the poller by setting
  its pref to 0.

But I can't find any such code in _do_fork().

I also read _do_fork() of HAProxy-2.4.0 and its
comment doesn't have the above sentence.

Also, there is no difference between the code of
_do_fork() in 1.8.30 and that of 2.4.0 except the
REGPRM1 declaration.

So I thought the above sentence would be
unnecessary in HAProxy-1.8.30.

Is my thought correct?
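
[Editor's note] For context, a self-contained sketch (not HAProxy's actual
code; `epoll_refork` and the size constant are assumptions) of the pattern
such a fork handler typically implements: recreate the epoll FD in the child
and report failure so the caller could disable the poller, e.g. by setting
its pref to 0.

```c
#include <sys/epoll.h>
#include <unistd.h>

/* Close the epoll FD inherited across fork() and create a fresh one.
 * Returns 1 on success, 0 on failure; on failure the caller would
 * disable this poller (the "pref to 0" mentioned in the comment). */
static int epoll_refork(int *epfd)
{
    if (*epfd >= 0)
        close(*epfd);
    *epfd = epoll_create(1024);
    if (*epfd < 0)
        return 0;
    return 1;
}
```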

Kazu


Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-06-04 Thread Aleksandar Lazic

On 02.06.21 11:38, Christopher Faulet wrote:

Le 6/1/21 à 8:26 PM, Aleksandar Lazic a écrit :

On 01.06.21 14:23, Tim Düsterhus wrote:

Aleks,

On 6/1/21 10:30 AM, Aleksandar Lazic wrote:
This phrasing is understandable to me, but now I'm wondering if this is the best 
solution. Maybe the already existing user-configurable unique request ID should 
instead be sent to the SPOE and then logged?


https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-unique-id

The request_counter (%rt) you mentioned could be embedded into this unique-id.


Well, this unique-id is not sent as Stream ID to the SPOA receiver; due to this
fact you can't debug which stream is the troubled one.


Yes, that's why I suggested that the SPOE is extended to also include this 
specific ID somewhere (just) for logging purposes.


Yep.
Any opinion from the other community Members?



The SID provided in the SPOE log message is the one used in the SPOP frame header. 
This way it is possible to match a corresponding log message emitted by the agent.


The "unique-id-format %rt" fixes the issue for me.

Regarding the format for this log message, its original purpose was to diagnose 
problems. Instead of adding custom information, I guess the best would be to have 
a "log-format" directive. At least to not break existing tools parsing those 
log messages. But to do so, all parts of the current message must be available 
via log variables and/or sample fetches. And, at first glance, it will be hard 
to achieve (sample fetches are probably easier though).


Regarding the stream_uniq_id sample fetch, it is a good idea to add it. 
In fact, when it makes sense, a log variable must also be accessible via a 
sample fetch. Tim's remarks about the patch are valid. For the scope, INTRN or 
L4CLI, I don't know. I'm inclined to choose INTRN.


Let me withdraw my patch because I use the following configs to satisfy my
requirement.


```
global
log stdout format raw daemon
# daemon
maxconn 2

defaults
log global
mode    http
option  httplog
option  dontlognull
timeout connect 5000
timeout client  5
timeout server  5

frontend haproxynode
bind *:9080
mode http

unique-id-format %rt
http-request set-var(sess.my_fe_path) path
http-request set-var(sess.my_fe_src) src
http-request set-var(sess.my_fe_referer) req.hdr(Referer)
http-request set-var(sess.my_fe_requestedhost) req.hdr(Host)

# define the spoe agents
filter spoe engine agent-on-http-req config resources/haproxy/spoe-url.conf
filter spoe engine agent-on-http-res config resources/haproxy/spoe-url.conf

# map the spoe response to acl variables
# acl authenticated var(sess.allevents.info) -m bool

http-response set-header x-spoe %[var(sess.feevents.info)]
default_backend streams

backend agent-on-http-req
mode tcp
log global

server spoe 127.0.0.1:9000 check

backend agent-on-http-res
mode tcp
log global

server spoe 127.0.0.1:9000 check

backend streams
log global

server socat 127.0.0.1:1234 check
```

```
[agent-on-http-req]
spoe-agent agent-on-http-req

log global

messages agent-on-http-req

option var-prefix feevents

timeout hello  2s
timeout idle   2m
timeout processing 1s

use-backend agent-on-http-req

spoe-message agent-on-http-req
args my_path=path my_src=src my_referer=req.hdr(Referer) my_sid=unique-id 
my_req_host=req.hdr(Host)
event on-frontend-http-request

[agent-on-http-res]
spoe-agent agent-on-http-res

log global

messages agent-on-http-res

option var-prefix feevents

timeout hello  2s
timeout idle   2m
timeout processing 1s

use-backend agent-on-http-res

spoe-message agent-on-http-res
args my_path=var(sess.my_fe_path) my_src=src 
my_referer=var(sess.my_fe_referer) my_sid=unique-id 
my_req_host=var(sess.my_fe_requestedhost)
event on-http-response
```



Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-06-02 Thread Christopher Faulet

Le 6/1/21 à 8:26 PM, Aleksandar Lazic a écrit :

On 01.06.21 14:23, Tim Düsterhus wrote:

Aleks,

On 6/1/21 10:30 AM, Aleksandar Lazic wrote:

This phrasing is understandable to me, but now I'm wondering if this is the 
best solution. Maybe the already existing user-configurable unique request ID 
should instead be sent to the SPOE and then logged?

https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-unique-id

The request_counter (%rt) you mentioned could be embedded into this unique-id.


Well, this unique-id is not sent as Stream ID to the SPOA receiver; due to this
fact you can't debug which stream is the troubled one.


Yes, that's why I suggested that the SPOE is extended to also include this 
specific ID somewhere (just) for logging purposes.


Yep.
Any opinion from the other community Members?



The SID provided in the SPOE log message is the one used in the SPOP frame 
header. This way it is possible to match a corresponding log message emitted by 
the agent.


Regarding the format for this log message, its original purpose was to diagnose 
problems. Instead of adding custom information, I guess the best would be to 
have a "log-format" directive. At least to not break existing tools parsing 
those log messages. But to do so, all parts of the current message must be 
available via log variables and/or sample fetches. And, at first glance, it will 
be hard to achieve (sample fetches are probably easier though).


Regarding the stream_uniq_id sample fetch, it is a good idea to add it. In fact, 
when it makes sense, a log variable must also be accessible via a sample fetch. 
Tim's remarks about the patch are valid. For the scope, INTRN or L4CLI, I don't 
know. I'm inclined to choose INTRN.


--
Christopher Faulet



Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-06-01 Thread Aleksandar Lazic

On 01.06.21 14:23, Tim Düsterhus wrote:

Aleks,

On 6/1/21 10:30 AM, Aleksandar Lazic wrote:

This phrasing is understandable to me, but now I'm wondering if this is the 
best solution. Maybe the already existing user-configurable unique request ID 
should instead be sent to the SPOE and then logged?

https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-unique-id

The request_counter (%rt) you mentioned could be embedded into this unique-id.


Well, this unique-id is not sent as Stream ID to the SPOA receiver; due to this
fact you can't debug which stream is the troubled one.


Yes, that's why I suggested that the SPOE is extended to also include this 
specific ID somewhere (just) for logging purposes.


Yep.
Any opinion from the other community Members?


Best regards
Tim Düsterhus






Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-06-01 Thread Tim Düsterhus

Aleks,

On 6/1/21 10:30 AM, Aleksandar Lazic wrote:

This phrasing is understandable to me, but now I'm wondering if this is the 
best solution. Maybe the already existing user-configurable unique request ID 
should instead be sent to the SPOE and then logged?

https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-unique-id

The request_counter (%rt) you mentioned could be embedded into this unique-id.


Well, this unique-id is not sent as Stream ID to the SPOA receiver; due to this
fact you can't debug which stream is the troubled one.


Yes, that's why I suggested that the SPOE is extended to also include 
this specific ID somewhere (just) for logging purposes.


Best regards
Tim Düsterhus



Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-06-01 Thread Aleksandar Lazic
Tim,

Jun 1, 2021 9:50:17 AM Tim Düsterhus :

> Aleks,
>
> On 6/1/21 1:03 AM, Aleksandar Lazic wrote:
  srv_conn([/]) : integer
    Returns an integer value corresponding to the number of currently 
 established
    connections on the designated server, possibly including the connection 
 being
 @@ -17514,6 +17509,9 @@ stopping : boolean
  str() : string
    Returns a string.

 +stream_uniq_id : integer
 +  Returns the uniq stream id.
 +
>>>
>>> This explanation is not useful to the reader (even I don't understand it).
>> […]
>> This is shown on the SPOE log line as sid and therefore I think it should be
>> possible to get the same ID also within HAProxy as fetch method.
>> ```
>> SPOE: [agent-on-http-req]  sid=88 st=0 
>> 0/0/0/0/0 1/1 0/0 10/33
>> ```
>> […]
>> ```
>> This fetch method returns the internal Stream ID, if a stream is available. 
>> The
>> internal Stream ID is used in several places in HAProxy to trace the Stream
>> inside HAProxy. It is also used in SPOE as the "sid" value.
>> ```
>>
>
> This phrasing is understandable to me, but now I'm wondering if this is the 
> best solution. Maybe the already existing user-configurable unique request ID 
> should instead be sent to the SPOE and then logged?
>
> https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-unique-id
>
> The request_counter (%rt) you mentioned could be embedded into this unique-id.

Well, this unique-id is not sent as Stream ID to the SPOA receiver; due to this
fact you can't debug which stream is the troubled one.

> Best regards
> Tim Düsterhus

Regards
Alex


Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-06-01 Thread Tim Düsterhus

Aleks,

On 6/1/21 1:03 AM, Aleksandar Lazic wrote:

 srv_conn([/]) : integer
   Returns an integer value corresponding to the number of currently 
established
   connections on the designated server, possibly including the 
connection being

@@ -17514,6 +17509,9 @@ stopping : boolean
 str() : string
   Returns a string.

+stream_uniq_id : integer
+  Returns the uniq stream id.
+


This explanation is not useful to the reader (even I don't understand 
it).


[…]

This is shown on the SPOE log line as sid and therefore I think it 
should be

possible to get the same ID also within HAProxy as fetch method.

```
SPOE: [agent-on-http-req]  sid=88 st=0 
0/0/0/0/0 1/1 0/0 10/33

```

[…]

```
This fetch method returns the internal Stream ID, if a stream is 
available. The

internal Stream ID is used in several places in HAProxy to trace the Stream
inside HAProxy. It is also used in SPOE as the "sid" value.
```




This phrasing is understandable to me, but now I'm wondering if this is 
the best solution. Maybe the already existing user-configurable unique 
request ID should instead be sent to the SPOE and then logged?


https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-unique-id

The request_counter (%rt) you mentioned could be embedded into this 
unique-id.


Best regards
Tim Düsterhus



Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-05-31 Thread Aleksandar Lazic

Tim.

On 31.05.21 23:23, Tim Düsterhus wrote:

Aleks,

On 5/31/21 9:35 PM, Aleksandar Lazic wrote:

While trying to get the stream ID from the SPOA I noticed that there is no
fetch method for the stream ID.


Attached a patch which adds the fetch sample for the stream id.
I assume it could be back ported up to version 2.0


The backporting information should be part of the commit message. But I don't 
think it's going to be backported that far.

Further comments inline.


From 15a2026c495e64d8165a13a3c8a4e5e19ad7e8d6 Mon Sep 17 00:00:00 2001
From: Alexandar Lazic 
Date: Mon, 31 May 2021 21:28:56 +0200
Subject: [PATCH] MINOR: sample: fetch stream_uniq_id

This fetch sample allows to get the current Stream ID for the
current session.

---
 doc/configuration.txt  | 13 ++
 reg-tests/sample_fetches/stream_id.vtc | 33 ++
 src/sample.c   | 14 +++
 3 files changed, 55 insertions(+), 5 deletions(-)
 create mode 100644 reg-tests/sample_fetches/stream_id.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 11c38945c..7eb7e29cd 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -17433,11 +17433,6 @@ rand([]) : integer
   needed to take some routing decisions for example, or just for debugging
   purposes. This random must not be used for security purposes.

-uuid([]) : string
-  Returns a UUID following the RFC4122 standard. If the version is not
-  specified, a UUID version 4 (fully random) is returned.
-  Currently, only version 4 is supported.
-


Good catch, but please split moving this around into a dedicated patch (DOC).


Done.


 srv_conn([/]) : integer
   Returns an integer value corresponding to the number of currently established
   connections on the designated server, possibly including the connection being
@@ -17514,6 +17509,9 @@ stopping : boolean
 str() : string
   Returns a string.

+stream_uniq_id : integer
+  Returns the uniq stream id.
+


This explanation is not useful to the reader (even I don't understand it).


Hm. Well it fetches the uniq_id from the stream struct.

http://git.haproxy.org/?p=haproxy.git;a=blob;f=include/haproxy/stream-t.h;h=9499e94d77feea0dad787eb3bd7b6b0375ca0148;hb=HEAD#l120
120 unsigned int uniq_id;   /* unique ID used for the traces */

This is shown on the SPOE log line as sid and therefore I think it should be
possible to get the same ID also within HAProxy as fetch method.

```
SPOE: [agent-on-http-req]  sid=88 st=0 
0/0/0/0/0 1/1 0/0 10/33
```

In the log, this is the variable "%rt" when a stream is available; when no
stream is available, it is the "global.req_count".

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/log.c;h=7dabe16f8fa54631f6eab815eb73f77d058d0368;hb=HEAD#l2178

In the doc it is described as request_counter, which is only true when no
stream is available; when a stream is available, %rt is the uniq id.

http://git.haproxy.org/?p=haproxy.git;a=blob;f=doc/configuration.txt;h=11c38945c29d2d28c9afb13afed60b30a97069cb;hb=HEAD#l20576
20576   |   | %rt  | request_counter (HTTP req or TCP session) | numeric |

So, yes I agree it's difficult to describe it in the doc for the normal user.

How about this wording.

```
This fetch method returns the internal Stream ID, if a stream is available. The
internal Stream ID is used in several places in HAProxy to trace the Stream
inside HAProxy. It is also used in SPOE as the "sid" value.
```



 table_avl([]) : integer
   Returns the total number of available entries in the current proxy's
   stick-table or in the designated stick-table. See also table_cnt.
@@ -17528,6 +17526,11 @@ thread : integer
   the function, between 0 and (global.nbthread-1). This is useful for logging
   and debugging purposes.

+uuid([]) : string
+  Returns a UUID following the RFC4122 standard. If the version is not
+  specified, a UUID version 4 (fully random) is returned.
+  Currently, only version 4 is supported.
+
 var() : undefined
   Returns a variable with the stored type. If the variable is not set, the
   sample fetch fails. The name of the variable starts with an indication
diff --git a/src/sample.c b/src/sample.c
index 09c272c48..5d3b06b10 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -4210,6 +4210,18 @@ static int smp_fetch_uuid(const struct arg *args, struct 
sample *smp, const char
 return 0;
 }

+/* returns the stream uniq_id */
+static int
+smp_fetch_stream_uniq_id(const struct arg *args, struct sample *smp, const 
char *kw, void *private)


I believe the 'static int' should go on the same line.


Well, I copied from "smp_fetch_cpu_calls", but yes, most of the other fetches
have it on the same line, so I will put it on the same line.


+{
+    if (!smp->strm)
+    return 0;
+
+    smp->data.type = SMP_T_SINT;
+    smp->data.u.sint = smp->strm->uniq_id;
+    return 1;
+}
+
 /* Note: must not be declared  as its list will be overwritten.
  * Note: fetches that may return multiple ty

Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-05-31 Thread Tim Düsterhus

Aleks,

On 5/31/21 9:35 PM, Aleksandar Lazic wrote:
While trying to get the stream ID from the SPOA I noticed that there is
no fetch method for the stream ID.


Attached a patch which adds the fetch sample for the stream id.
I assume it could be back ported up to version 2.0


The backporting information should be part of the commit message. But I 
don't think it's going to be backported that far.


Further comments inline.


From 15a2026c495e64d8165a13a3c8a4e5e19ad7e8d6 Mon Sep 17 00:00:00 2001
From: Alexandar Lazic 
Date: Mon, 31 May 2021 21:28:56 +0200
Subject: [PATCH] MINOR: sample: fetch stream_uniq_id

This fetch sample allows to get the current Stream ID for the
current session.

---
 doc/configuration.txt  | 13 ++
 reg-tests/sample_fetches/stream_id.vtc | 33 ++
 src/sample.c   | 14 +++
 3 files changed, 55 insertions(+), 5 deletions(-)
 create mode 100644 reg-tests/sample_fetches/stream_id.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 11c38945c..7eb7e29cd 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -17433,11 +17433,6 @@ rand([]) : integer
   needed to take some routing decisions for example, or just for debugging
   purposes. This random must not be used for security purposes.
 
-uuid([]) : string

-  Returns a UUID following the RFC4122 standard. If the version is not
-  specified, a UUID version 4 (fully random) is returned.
-  Currently, only version 4 is supported.
-


Good catch, but please split moving this around into a dedicated patch 
(DOC).



 srv_conn([/]) : integer
   Returns an integer value corresponding to the number of currently established
   connections on the designated server, possibly including the connection being
@@ -17514,6 +17509,9 @@ stopping : boolean
 str() : string
   Returns a string.
 
+stream_uniq_id : integer

+  Returns the uniq stream id.
+


This explanation is not useful to the reader (even I don't understand it).


 table_avl([]) : integer
   Returns the total number of available entries in the current proxy's
   stick-table or in the designated stick-table. See also table_cnt.
@@ -17528,6 +17526,11 @@ thread : integer
   the function, between 0 and (global.nbthread-1). This is useful for logging
   and debugging purposes.
 
+uuid([]) : string

+  Returns a UUID following the RFC4122 standard. If the version is not
+  specified, a UUID version 4 (fully random) is returned.
+  Currently, only version 4 is supported.
+
 var() : undefined
   Returns a variable with the stored type. If the variable is not set, the
   sample fetch fails. The name of the variable starts with an indication
diff --git a/src/sample.c b/src/sample.c
index 09c272c48..5d3b06b10 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -4210,6 +4210,18 @@ static int smp_fetch_uuid(const struct arg *args, struct 
sample *smp, const char
return 0;
 }
 
+/* returns the stream uniq_id */

+static int
+smp_fetch_stream_uniq_id(const struct arg *args, struct sample *smp, const 
char *kw, void *private)


I believe the 'static int' should go on the same line.


+{
+   if (!smp->strm)
+   return 0;
+
+   smp->data.type = SMP_T_SINT;
+   smp->data.u.sint = smp->strm->uniq_id;
+   return 1;
+}
+
 /* Note: must not be declared  as its list will be overwritten.
  * Note: fetches that may return multiple types must be declared as the lowest
  * common denominator, the type that can be casted into all other ones. For
@@ -4243,6 +4255,8 @@ static struct sample_fetch_kw_list smp_kws = {ILH, {
{ "bin",  smp_fetch_const_bin,  ARG1(1,STR),  smp_check_const_bin , 
SMP_T_BIN,  SMP_USE_CONST },
{ "meth", smp_fetch_const_meth, ARG1(1,STR),  smp_check_const_meth, 
SMP_T_METH, SMP_USE_CONST },
 
+	{ "stream_uniq_id", smp_fetch_stream_uniq_id, 0,  NULL, SMP_T_SINT, SMP_USE_INTRN },

+


I believe 'SMP_USE_INTRN' is not correct. I believe you need 
'SMP_SRC_L4CLI', but don't quote me on that.



{ /* END */ },
 }};
 
--

2.25.1


Best regards
Tim Düsterhus



Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-05-31 Thread Aleksandar Lazic

Hi.

On 31.05.21 14:23, Aleksandar Lazic wrote:

Hi.

While trying to get the stream ID from the SPOA I noticed that there is no
fetch method for the stream ID.


Attached a patch which adds the fetch sample for the stream id.
I assume it could be back ported up to version 2.0

Regards
Alex


The discussion is here.
https://github.com/criteo/haproxy-spoe-go/issues/28

That's the sid in filter spoa log output.
SPOE: [agent-on-http-req]  sid=88 st=0 
0/0/0/0/0 1/1 0/0 10/33

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/flt_spoe.c;h=a68f7b9141025963e8f4ad79c0d1617a4c59774e;hb=HEAD#l2815

```
2815    if (ctx->status_code || !(conf->agent_fe.options2 & PR_O2_NOLOGNORM))
2816        send_log(&conf->agent_fe, (!ctx->status_code ? LOG_NOTICE : LOG_WARNING),
2817                 "SPOE: [%s] <%s> sid=%u st=%u %ld/%ld/%ld/%ld/%ld %u/%u %u/%u %llu/%llu\n",
2818                 agent->id, spoe_event_str[ev], s->uniq_id, ctx->status_code,
                                                    ^^
2819                 ctx->stats.t_request, ctx->stats.t_queue, ctx->stats.t_waiting,
2820                 ctx->stats.t_response, ctx->stats.t_process,
2821                 agent->counters.idles, agent->counters.applets,
2822                 agent->counters.nb_sending, agent->counters.nb_waiting,
2823                 agent->counters.nb_errors, agent->counters.nb_processed);

```

It looks to me like the %rt log format has the stream id, right?

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=doc/configuration.txt;h=a13a9a77f8a077a6ac798b1dccc8a0f2f3f67396;hb=HEAD#l20576

|   | %rt  | request_counter (HTTP req or TCP session) | numeric |

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/log.c;hb=c5c5bc4e36ce4a6f3bc113c8e16824fdb276c220#l3175
3175 case LOG_FMT_COUNTER: // %rt

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/log.c;hb=c5c5bc4e36ce4a6f3bc113c8e16824fdb276c220#l2202
2202 uniq_id = _HA_ATOMIC_FETCH_ADD(&global.req_count, 1);

Regards
Alex



>From 15a2026c495e64d8165a13a3c8a4e5e19ad7e8d6 Mon Sep 17 00:00:00 2001
From: Alexandar Lazic 
Date: Mon, 31 May 2021 21:28:56 +0200
Subject: [PATCH] MINOR: sample: fetch stream_uniq_id

This fetch sample allows to get the current Stream ID for the
current session.

---
 doc/configuration.txt  | 13 ++
 reg-tests/sample_fetches/stream_id.vtc | 33 ++
 src/sample.c   | 14 +++
 3 files changed, 55 insertions(+), 5 deletions(-)
 create mode 100644 reg-tests/sample_fetches/stream_id.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 11c38945c..7eb7e29cd 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -17433,11 +17433,6 @@ rand([]) : integer
   needed to take some routing decisions for example, or just for debugging
   purposes. This random must not be used for security purposes.
 
-uuid([]) : string
-  Returns a UUID following the RFC4122 standard. If the version is not
-  specified, a UUID version 4 (fully random) is returned.
-  Currently, only version 4 is supported.
-
 srv_conn([/]) : integer
   Returns an integer value corresponding to the number of currently established
   connections on the designated server, possibly including the connection being
@@ -17514,6 +17509,9 @@ stopping : boolean
 str() : string
   Returns a string.
 
+stream_uniq_id : integer
+  Returns the uniq stream id.
+
 table_avl([]) : integer
   Returns the total number of available entries in the current proxy's
   stick-table or in the designated stick-table. See also table_cnt.
@@ -17528,6 +17526,11 @@ thread : integer
   the function, between 0 and (global.nbthread-1). This is useful for logging
   and debugging purposes.
 
+uuid([]) : string
+  Returns a UUID following the RFC4122 standard. If the version is not
+  specified, a UUID version 4 (fully random) is returned.
+  Currently, only version 4 is supported.
+
 var() : undefined
   Returns a variable with the stored type. If the variable is not set, the
   sample fetch fails. The name of the variable starts with an indication
diff --git a/reg-tests/sample_fetches/stream_id.vtc b/reg-tests/sample_fetches/stream_id.vtc
new file mode 100644
index 0..ec512b198
--- /dev/null
+++ b/reg-tests/sample_fetches/stream_id.vtc
@@ -0,0 +1,33 @@
+varnishtest "stream id sample fetch Test"
+
+#REQUIRE_VERSION=2.0
+
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+txresp
+} -start
+
+haproxy h1 -conf {
+defaults
+mode http
+timeout connect 1s
+timeout client  1s
+timeout server  1s
+
+frontend fe
+bind "fd@${fe}"
+http-response set-header stream-id   "%[stream_uniq_id]"
+default_backend be
+
+backend be
+server srv1 

Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-05-31 Thread Aleksandar Lazic

Hi.

While trying to get the stream ID from the SPOA I noticed that there is no
fetch method for the stream ID.

The discussion is here.
https://github.com/criteo/haproxy-spoe-go/issues/28

That's the sid in filter spoa log output.
SPOE: [agent-on-http-req]  sid=88 st=0 
0/0/0/0/0 1/1 0/0 10/33

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/flt_spoe.c;h=a68f7b9141025963e8f4ad79c0d1617a4c59774e;hb=HEAD#l2815

```
2815    if (ctx->status_code || !(conf->agent_fe.options2 & PR_O2_NOLOGNORM))
2816        send_log(&conf->agent_fe, (!ctx->status_code ? LOG_NOTICE : LOG_WARNING),
2817                 "SPOE: [%s] <%s> sid=%u st=%u %ld/%ld/%ld/%ld/%ld %u/%u %u/%u %llu/%llu\n",
2818                 agent->id, spoe_event_str[ev], s->uniq_id, ctx->status_code,
                                                    ^^
2819                 ctx->stats.t_request, ctx->stats.t_queue, ctx->stats.t_waiting,
2820                 ctx->stats.t_response, ctx->stats.t_process,
2821                 agent->counters.idles, agent->counters.applets,
2822                 agent->counters.nb_sending, agent->counters.nb_waiting,
2823                 agent->counters.nb_errors, agent->counters.nb_processed);

```

It looks to me like the %rt log format has the stream id, right?

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=doc/configuration.txt;h=a13a9a77f8a077a6ac798b1dccc8a0f2f3f67396;hb=HEAD#l20576

|   | %rt  | request_counter (HTTP req or TCP session) | numeric |

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/log.c;hb=c5c5bc4e36ce4a6f3bc113c8e16824fdb276c220#l3175
3175 case LOG_FMT_COUNTER: // %rt

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/log.c;hb=c5c5bc4e36ce4a6f3bc113c8e16824fdb276c220#l2202
2202 uniq_id = _HA_ATOMIC_FETCH_ADD(&global.req_count, 1);

Regards
Alex



Re: Question about rfc8441 (Bootstrapping WebSockets with HTTP/2)

2021-01-29 Thread Aleksandar Lazic

On 29.01.21 12:27, Christopher Faulet wrote:

Le 22/01/2021 à 07:08, Willy Tarreau a écrit :

On Thu, Jan 21, 2021 at 11:09:33PM +0100, Aleksandar Lazic wrote:

On 21.01.21 21:57, Christopher Faulet wrote:

Le 21/01/2021 à 21:19, Aleksandar Lazic a écrit :

Hi.

I'm not sure if I have missed something, because there are so many great 
features
now in HAProxy, therefore I just ask here.

Is the rfc8441 (Bootstrapping WebSockets with HTTP/2) possible in HAProxy now?



Hi,

It is not possible right now. But it will be very very soon. Amaury implemented 
the
H2 websocket support and it works pretty well. Unfortunately, this relies on 
some
tricky fixes on the tunnel management that must be carefully reviewed. It is a
nightmare to support all tunnel combinations. But I've almost done the review. I
must split a huge patch in 2 or 3 smaller and more manageable ones. I'm on it 
and I
will do my best to push it very soon. Anyway, it will be a feature for the 2.4.


Wow that sounds really great. Thank you for your answer.


And by the way, initially we thought we'd backport Amaury's work to 2.3,
but given the dependency on the tunnel stuff that opened this Pandora's
box, now I'm pretty sure we won't :-)

One nice point is that he managed to natively support the WS handshake,
it's not just a blind tunnel anymore, so that it's possible to have WS
using either H1 or H2 on the frontend, and either H1 or H2 on the backend.
Now we're really seeing the benefits of HTX because while at each extremity
we have a very specific WS handshake, in the middle we just have a tunnel
using a WS protocol, which allows a CONNECT on one side to become a GET on
the other side.

As Christopher said, the tunnel changes are extremely complicated because
these uncovered some old limitations at various levels, and each time we
reviewed the pending changes we could imagine a situation where an odd use
case would break if we don't recursively go into another round of refactoring
at yet another deeper level. But we're on the right track now, things start
to look good.



FYI, the HTTP/2 websockets support is now available and will be part of the 
next 2.4-dev release (2.4-dev7)


Cool thanks.



Re: Question about rfc8441 (Bootstrapping WebSockets with HTTP/2)

2021-01-29 Thread Christopher Faulet

Le 22/01/2021 à 07:08, Willy Tarreau a écrit :

On Thu, Jan 21, 2021 at 11:09:33PM +0100, Aleksandar Lazic wrote:

On 21.01.21 21:57, Christopher Faulet wrote:

Le 21/01/2021 à 21:19, Aleksandar Lazic a écrit :

Hi.

I'm not sure if I have missed something, because there are so many great 
features
now in HAProxy, therefore I just ask here.

Is the rfc8441 (Bootstrapping WebSockets with HTTP/2) possible in HAProxy now?



Hi,

It is not possible right now. But it will be very very soon. Amaury implemented 
the
H2 websocket support and it works pretty well. Unfortunately, this relies on 
some
tricky fixes on the tunnel management that must be carefully reviewed. It is a
nightmare to support all tunnel combinations. But I've almost done the review. I
must split a huge patch in 2 or 3 smaller and more manageable ones. I'm on it 
and I
will do my best to push it very soon. Anyway, it will be a feature for the 2.4.


Wow that sounds really great. Thank you for your answer.


And by the way, initially we thought we'd backport Amaury's work to 2.3,
but given the dependency on the tunnel stuff that opened this Pandora's
box, now I'm pretty sure we won't :-)

One nice point is that he managed to natively support the WS handshake,
it's not just a blind tunnel anymore, so that it's possible to have WS
using either H1 or H2 on the frontend, and either H1 or H2 on the backend.
Now we're really seeing the benefits of HTX because while at each extremity
we have a very specific WS handshake, in the middle we just have a tunnel
using a WS protocol, which allows a CONNECT on one side to become a GET on
the other side.

As Christopher said, the tunnel changes are extremely complicated because
these uncovered some old limitations at various levels, and each time we
reviewed the pending changes we could imagine a situation where an odd use
case would break if we don't recursively go into another round of refactoring
at yet another deeper level. But we're on the right track now, things start
to look good.



FYI, the HTTP/2 websockets support is now available and will be part of the next 
2.4-dev release (2.4-dev7)


--
Christopher Faulet



Re: Question about substring match (*_sub)

2021-01-23 Thread Aleksandar Lazic

On 23.01.21 07:36, Илья Шипицин wrote:

the following usually works for performance profiling.


1) setup work stand (similar to what you use in production)

2) use valgrind + callgrind for collecting traces

3) put workload

4) aggregate using kcachegrind

most probably you were going to do very similar things already :)


Thanks for the tips ;-)

The issue here is that several parameters matter for sub-string matching,
such as the pattern, the pattern length, the text, the text length and
the alphabet.

My question was focused on hearing about some "common" setups, to be able
to create valid tests for comparing the different algorithms.

I'm thinking of something like the examples below. As I haven't used _sub
in the past, it's difficult for me alone to create valid use cases that
are actually used out there. It's okay to send examples only to me, in
case of security or privacy concerns.

acl allow_from_int hdr_sub(x-forwarded-for) 192.168.4.5
acl admin_access   hdr_sub(user) admin
acl test_url       url_sub test=1

Should UTF-* be considered a valid alphabet, or only ASCII?

If _sub is a very rare case then it's okay as it is, isn't it?

Opinions?


Sat, 23 Jan 2021 at 03:18, Aleksandar Lazic :

Hi.

I would like to take a look into the substring match implementation because 
of
the comment there.


http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/pattern.c;h=8729769e5e549bcd4043ae9220ceea440445332a;hb=HEAD#l767

"NB: Suboptimal, should be rewritten using a Boyer-Moore method."

Now before I take a deeper look into the different sub-string matching
algorithms, I would like to know which patterns and lengths are a "common"
use case for users here?

There are so many different algorithms which are mostly implemented in the
Smart Tool ( https://github.com/smart-tool/smart ) therefore it would be
interesting to know some metrics about the use cases.

Thanks for sharing.
Best regards

Aleks






Re: Question about substring match (*_sub)

2021-01-22 Thread Илья Шипицин
the following usually works for performance profiling.


1) setup work stand (similar to what you use in production)

2) use valgrind + callgrind for collecting traces

3) put workload

4) aggregate using kcachegrind

most probably you were going to do very similar things already :)
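The four steps above could look roughly like this (a command sketch only; the
build options, config file name and workload tool are assumptions, not part of
the original advice):

```shell
# 1) build a test stand with debug symbols so the callgrind output is readable
#    (TARGET and options are examples; adjust to your platform)
make -j4 TARGET=linux-glibc USE_OPENSSL=1 DEBUG_CFLAGS="-O2 -g"

# 2) collect traces by running haproxy under valgrind's callgrind tool
valgrind --tool=callgrind --callgrind-out-file=callgrind.out.haproxy \
    ./haproxy -f test.cfg -d

# 3) from another shell, put some workload on the frontend, e.g.
#    ab -n 10000 -c 10 http://127.0.0.1:8080/

# 4) aggregate/inspect: kcachegrind for a GUI, callgrind_annotate in a terminal
callgrind_annotate callgrind.out.haproxy | head -40
kcachegrind callgrind.out.haproxy
```

Note that callgrind slows the proxy down considerably, so absolute numbers are
meaningless; only the relative cost of functions is.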

Sat, 23 Jan 2021 at 03:18, Aleksandar Lazic :

> Hi.
>
> I would like to take a look into the substring match implementation
> because of
> the comment there.
>
>
> http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/pattern.c;h=8729769e5e549bcd4043ae9220ceea440445332a;hb=HEAD#l767
>
> "NB: Suboptimal, should be rewritten using a Boyer-Moore method."
>
> Now before I take a deeper look into the different sub-string matching
> algorithms, I would like to know which patterns and lengths are a "common"
> use case for users here?
>
> There are so many different algorithms which are mostly implemented in the
> Smart Tool ( https://github.com/smart-tool/smart ) therefore it would be
> interesting to know some metrics about the use cases.
>
> Thanks for sharing.
> Best regards
>
> Aleks
>
>


Question about substring match (*_sub)

2021-01-22 Thread Aleksandar Lazic

Hi.

I would like to take a look into the substring match implementation because of
the comment there.

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/pattern.c;h=8729769e5e549bcd4043ae9220ceea440445332a;hb=HEAD#l767

"NB: Suboptimal, should be rewritten using a Boyer-Moore method."

Now before I take a deeper look into the different sub-string matching
algorithms, I would like to know which patterns and lengths are a "common"
use case for users here?

There are so many different algorithms which are mostly implemented in the
Smart Tool ( https://github.com/smart-tool/smart ) therefore it would be
interesting to know some metrics about the use cases.

Thanks for sharing.
Best regards

Aleks



Re: Question about rfc8441 (Bootstrapping WebSockets with HTTP/2)

2021-01-21 Thread Willy Tarreau
On Thu, Jan 21, 2021 at 11:09:33PM +0100, Aleksandar Lazic wrote:
> On 21.01.21 21:57, Christopher Faulet wrote:
> > Le 21/01/2021 à 21:19, Aleksandar Lazic a écrit :
> >> Hi.
> >>
> >> I'm not sure if I have missed something, because there are so many great 
> >> features
> >> now in HAProxy, therefore I just ask here.
> >>
> >> Is the rfc8441 (Bootstrapping WebSockets with HTTP/2) possible in HAProxy 
> >> now?
> >>
> >
> > Hi,
> >
> > It is not possible right now. But it will be very very soon. Amaury 
> > implemented the
> > H2 websocket support and it works pretty well. Unfortunately, this relies 
> > on some
> > tricky fixes on the tunnel management that must be carefully reviewed. It 
> > is a
> > nightmare to support all tunnel combinations. But I've almost done the 
> > review. I
> > must split a huge patch in 2 or 3 smaller and more manageable ones. I'm on 
> > it and I
> > will do my best to push it very soon. Anyway, it will be a feature for the 
> > 2.4.
> 
> Wow that sounds really great. Thank you for your answer.

And by the way, initially we thought we'd backport Amaury's work to 2.3,
but given the dependency on the tunnel stuff that opened this Pandora's
box, now I'm pretty sure we won't :-)

One nice point is that he managed to natively support the WS handshake,
it's not just a blind tunnel anymore, so that it's possible to have WS
using either H1 or H2 on the frontend, and either H1 or H2 on the backend.
Now we're really seeing the benefits of HTX because while at each extremity
we have a very specific WS handshake, in the middle we just have a tunnel
using a WS protocol, which allows a CONNECT on one side to become a GET on
the other side.

As Christopher said, the tunnel changes are extremely complicated because
these uncovered some old limitations at various levels, and each time we
reviewed the pending changes we could imagine a situation where an odd use
case would break if we don't recursively go into another round of refactoring
at yet another deeper level. But we're on the right track now, things start
to look good.

Cheers,
Willy



Re: Question about rfc8441 (Bootstrapping WebSockets with HTTP/2)

2021-01-21 Thread Aleksandar Lazic

On 21.01.21 21:57, Christopher Faulet wrote:
> Le 21/01/2021 à 21:19, Aleksandar Lazic a écrit :
>> Hi.
>>
>> I'm not sure if I have missed something, because there are so many great 
features
>> now in HAProxy, therefore I just ask here.
>>
>> Is the rfc8441 (Bootstrapping WebSockets with HTTP/2) possible in HAProxy 
now?
>>
>
> Hi,
>
> It is not possible right now. But it will be very very soon. Amaury 
implemented the
> H2 websocket support and it works pretty well. Unfortunately, this relies on 
some
> tricky fixes on the tunnel management that must be carefully reviewed. It is a
> nightmare to support all tunnel combinations. But I've almost done the 
review. I
> must split a huge patch in 2 or 3 smaller and more manageable ones. I'm on it 
and I
> will do my best to push it very soon. Anyway, it will be a feature for the 
2.4.

Wow that sounds really great. Thank you for your answer.

Regards
Aleks



Re: Question about rfc8441 (Bootstrapping WebSockets with HTTP/2)

2021-01-21 Thread Christopher Faulet

Le 21/01/2021 à 21:19, Aleksandar Lazic a écrit :

Hi.

I'm not sure if I have missed something, because there are so many great 
features
now in HAProxy, therefore I just ask here.

Is the rfc8441 (Bootstrapping WebSockets with HTTP/2) possible in HAProxy now?



Hi,

It is not possible right now. But it will be very very soon. Amaury implemented 
the H2 websocket support and it works pretty well. Unfortunately, this relies on 
some tricky fixes on the tunnel management that must be carefully reviewed. It 
is a nightmare to support all tunnel combinations. But I've almost done the 
review. I must split a huge patch in 2 or 3 smaller and more manageable ones. 
I'm on it and I will do my best to push it very soon. Anyway, it will be a 
feature for the 2.4.


--
Christopher Faulet



Question about rfc8441 (Bootstrapping WebSockets with HTTP/2)

2021-01-21 Thread Aleksandar Lazic

Hi.

I'm not sure if I have missed something, because there are so many great 
features
now in HAProxy, therefore I just ask here.

Is the rfc8441 (Bootstrapping WebSockets with HTTP/2) possible in HAProxy now?

Regards

Aleks



Re: Question about USE_OPENSSL build option

2021-01-05 Thread William Dauchy
Sorry, previous mail left too early :-)

On Tue, Jan 5, 2021 at 5:59 PM Willy Tarreau  wrote:
> Note, they still have to enter the target operating system so minimal
> reading is necessary. But this can be addressed in the makefile's help
> message which is their first contact, indicating to them what target to use.
> (we could even suggest what the current target looks like for some of
> them).

I also agree that's one potential thing which could be nice to address
to make it easier.
-- 
William



Re: Question about USE_OPENSSL build option

2021-01-05 Thread William Dauchy
On Tue, Jan 5, 2021 at 5:59 PM Willy Tarreau  wrote:
> > I used to think most people use `use_openssl=1` and wondered why it
> > was not the default, but I recently discovered a large setup not
> > making use of tls. The market is however strongly moving towards end
> > to end encryption so I would say it makes sense to have use_openssl=1
> > by default.
>
> At least not to have to type it anymore ?

correct.


> Note, they still have to enter the target operating system so minimal
> reading is necessary. But this can be addressed in the makefile's help
> message which is their first contact, indicating to them what target to use.
> (we could even suggest what the current target looks like for some of
> them).
>
> > To reply to Tim's comment: a developer/maintainer knows how to deactivate
> > it for test purposes, even if it is longer to type.
>
> It's true as well. Nowadays I have a myriad of build scripts which all
> build with various options combinations, for various platforms, with
> various debugging options etc, so the typing time on the developer's
> machine is not a big deal:
>
>   $ ls make-*|wc -l
>   84
>
> So I don't find myself often adding USE_OPENSSL=1 by hand. But on the
> other hand I also trapped myself into forgetting it when building by
> hand for the same reason.
>
> Maybe if we figure a nice way to print some options before building,
> it could then be nice to recap the main options used so that users
> still have a chance to press Ctrl-C and change them. This would still
> alleviate the need to read docs and provide indications about other
> possible options (PCRE, ZLIB, etc).
>
> Willy



--
William



Re: Question about USE_OPENSSL build option

2021-01-05 Thread Willy Tarreau
On Tue, Jan 05, 2021 at 05:44:27PM +0100, William Dauchy wrote:
> Hi Willy,
> 
> On Tue, Jan 5, 2021 at 5:23 PM Willy Tarreau  wrote:
> > as I suspected in issue #1020, another user got trapped not enabling
> > SSL when building from sources (probably for the first time, as it
> > happens to everyone building haproxy for the first time).
> >
> > Given that haproxy's main target is HTTP and that these days it often
> > comes with SSL (and it doesn't seem like it's going to revert soon),
> > I was wondering if it would be a good idea for 2.4 and onwards to preset
> > USE_OPENSSL=1 by default. At least users who face build errors will have
> > a glance at the README and figure how to disable it if they don't want
> > it. But providing a successful build which misses some essential features
> > doesn't sound like a very good long-term solution to me.
> >
> > I'm interested in any opinion here.
> 
> I used to think most people use `use_openssl=1` and wondered why it
> was not the default, but I recently discovered a large setup not
> making use of tls. The market is however strongly moving towards end
> to end encryption so I would say it makes sense to have use_openssl=1
> by default.

At least not to have to type it anymore ?

> People like things which work out of the box without
> reading any doc. So I'm quite a supporter of that change.

Note, they still have to enter the target operating system so minimal
reading is necessary. But this can be addressed in the makefile's help
message which is their first contact, indicating to them what target to use.
(we could even suggest what the current target looks like for some of
them).

> To reply to Tim's comment: a developer/maintainer knows how to deactivate
> it for test purposes, even if it is longer to type.

It's true as well. Nowadays I have a myriad of build scripts which all
build with various options combinations, for various platforms, with
various debugging options etc, so the typing time on the developer's
machine is not a big deal:

  $ ls make-*|wc -l
  84

So I don't find myself often adding USE_OPENSSL=1 by hand. But on the
other hand I also trapped myself into forgetting it when building by
hand for the same reason.

Maybe if we figure a nice way to print some options before building,
it could then be nice to recap the main options used so that users
still have a chance to press Ctrl-C and change them. This would still
alleviate the need to read docs and provide indications about other
possible options (PCRE, ZLIB, etc).

Willy



Re: Question about USE_OPENSSL build option

2021-01-05 Thread Willy Tarreau
On Tue, Jan 05, 2021 at 05:34:46PM +0100, Tim Düsterhus wrote:
> Willy,
> 
> Am 05.01.21 um 17:22 schrieb Willy Tarreau:
> > Given that haproxy's main target is HTTP and that these days it often
> > comes with SSL (and it doesn't seem like it's going to revert soon),
> > I was wondering if it would be a good idea for 2.4 and onwards to preset
> > USE_OPENSSL=1 by default. At least users who face build errors will have
> > a glance at the README and figure how to disable it if they don't want
> > it. But providing a successful build which misses some essential features
> > doesn't sound like a very good long-term solution to me.
> > 
> > I'm interested in any opinion here.
> > 
> 
> This would be a -1 from my side. For development and testing I usually
> build with a simple `make -j4 all TARGET=linux-glibc` to keep build
> times low.
> 
> I suspect that the vast majority of users consume distro packages
> anyway. Users that compile themselves can usually be expected to read
> the `INSTALL` file.

I tend to agree on this point. However the scenario probably is:

  $ make
  (... blabbering about all supported targets ...)
  $ make TARGET=mytarget

Thus the help message from the makefile should at least suggest that
plenty of other options exist (and give hints about common ones).

> I would be fine with a warning if `USE_OPENSSL` is not explicitly
> provided, though.

I hadn't thought about this one. It's true that it could be nice to see
something like "note: USE_OPENSSL not specified, building without SSL support".

It's not necessarily trivial to fit into the makefile for the "all"
target however, as we can't run actions before the dependencies are
met, so it would probably be a dirty hack before the target definition.
But it could be worth trying.
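For what it's worth, a minimal sketch of such a parse-time note (hypothetical,
not taken from HAProxy's real Makefile): GNU make's $(info ...) is expanded
while the Makefile is being read, i.e. before any target's dependencies are
evaluated, which sidesteps the "can't run actions before the dependencies are
met" problem:

```make
# emit a note at Makefile parse time when SSL support is not requested;
# $(info ...) runs while make reads the file, before any rule executes
ifneq ($(USE_OPENSSL),1)
$(info note: USE_OPENSSL not specified, building without SSL support)
endif
```

Placed near the top of the Makefile, this prints once per invocation without
touching the "all" target at all.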

Thanks!
Willy



Re: Question about USE_OPENSSL build option

2021-01-05 Thread William Dauchy
Hi Willy,

On Tue, Jan 5, 2021 at 5:23 PM Willy Tarreau  wrote:
> as I suspected in issue #1020, another user got trapped not enabling
> SSL when building from sources (probably for the first time, as it
> happens to everyone building haproxy for the first time).
>
> Given that haproxy's main target is HTTP and that these days it often
> comes with SSL (and it doesn't seem like it's going to revert soon),
> I was wondering if it would be a good idea for 2.4 and onwards to preset
> USE_OPENSSL=1 by default. At least users who face build errors will have
> a glance at the README and figure how to disable it if they don't want
> it. But providing a successful build which misses some essential features
> doesn't sound like a very good long-term solution to me.
>
> I'm interested in any opinion here.

I used to think most people use `use_openssl=1` and wondered why it
was not the default, but I recently discovered a large setup not
making use of tls. The market is however strongly moving towards end
to end encryption so I would say it makes sense to have use_openssl=1
by default. People like things which work out of the box without
reading any doc. So I'm quite a supporter of that change.
To reply to Tim's comment: a developer/maintainer knows how to deactivate
it for test purposes, even if it is longer to type.

-- 
William



Re: Question about USE_OPENSSL build option

2021-01-05 Thread Tim Düsterhus
Willy,

Am 05.01.21 um 17:22 schrieb Willy Tarreau:
> Given that haproxy's main target is HTTP and that these days it often
> comes with SSL (and it doesn't seem like it's going to revert soon),
> I was wondering if it would be a good idea for 2.4 and onwards to preset
> USE_OPENSSL=1 by default. At least users who face build errors will have
> a glance at the README and figure how to disable it if they don't want
> it. But providing a successful build which misses some essential features
> doesn't sound like a very good long-term solution to me.
> 
> I'm interested in any opinion here.
> 

This would be a -1 from my side. For development and testing I usually
build with a simple `make -j4 all TARGET=linux-glibc` to keep build
times low.

I suspect that the vast majority of users consume distro packages
anyway. Users that compile themselves can usually be expected to read
the `INSTALL` file.

I would be fine with a warning if `USE_OPENSSL` is not explicitly
provided, though.

Best regards
Tim Düsterhus



Question about USE_OPENSSL build option

2021-01-05 Thread Willy Tarreau
Hi all,

as I suspected in issue #1020, another user got trapped not enabling
SSL when building from sources (probably for the first time, as it
happens to everyone building haproxy for the first time).

Given that haproxy's main target is HTTP and that these days it often
comes with SSL (and it doesn't seem like it's going to revert soon),
I was wondering if it would be a good idea for 2.4 and onwards to preset
USE_OPENSSL=1 by default. At least users who face build errors will have
a glance at the README and figure how to disable it if they don't want
it. But providing a successful build which misses some essential features
doesn't sound like a very good long-term solution to me.

I'm interested in any opinion here.

Thanks,
Willy



Re: [*EXT*] Re: Quick question on atomics on ARM

2020-12-23 Thread Willy Tarreau
On Wed, Dec 23, 2020 at 03:12:38PM +0100, Ionel GARDAIS wrote:
> My bad, I wasn't up to date.

No worries, a devel version is never up to date by definition!

> Olivier's fix is OK : no more CPU hogging.

Perfect, thanks for the quick test!
Willy



Re: [*EXT*] Re: Quick question on atomics on ARM

2020-12-23 Thread Ionel GARDAIS
My bad, I wasn't up to date.
Olivier's fix is OK : no more CPU hogging.

-- 
Ionel

- Original message -
From: "Willy Tarreau" 
To: "Ionel GARDAIS" 
Cc: "David CARLIER" , "haproxy" 
Sent: Wednesday, 23 December 2020 14:52:32
Subject: Re: [*EXT*] Re: Quick question on atomics on ARM

On Wed, Dec 23, 2020 at 02:48:17PM +0100, Ionel GARDAIS wrote:
> For what it's worth, I tried to build haproxy on Apple M1.
> It builds OK, but at runtime it's stuck in the initial pool_flush, hogging
> 100% CPU.
> 
> the assembly part of __ha_cas_dw for __aarch64__ seems to be ignored.

What version did you try? Olivier just fixed an issue related to the
macOS assembler using ';' as a comment character, precisely in __ha_cas_dw :-)
Please try again with the latest master from right now.

Willy
--
232 avenue Napoleon BONAPARTE 92500 RUEIL MALMAISON
Capital EUR 219 300,00 - RCS Nanterre B 408 832 301 - TVA FR 09 408 832 301



