Re: Patch sample_conv_json_query in sample.c to return array values

2023-09-15 Thread Aleksandar Lazic

Dear Jens.

Please can you create a patch as described in
https://github.com/haproxy/haproxy/blob/master/CONTRIBUTING, as suggested
in https://github.com/haproxy/haproxy/issues/2281#issuecomment-1721014384?


Regards
Alex

On 2023-09-15 (Fr.) 14:57, Jens Popp wrote:

Hi,

currently the method sample_conv_json_query in sample.c returns an empty value if
the given JSON path leads to a JSON array. There are multiple use cases where
you need to check the content of an array, e.g. when the array contains a list
of roles and you want to check whether it contains a certain role (for
OIDC). I propose the simple fix below, which copies the complete array (including
brackets) into the result of the function:

...(Line 4162)
 case MJSON_TOK_ARRAY:
     // Copy the complete array, including the square brackets,
     // into the return buffer. The result looks like:
     // ["manage-account","manage-account-links","view-profile"]
     // Clamp to the trash buffer capacity to avoid overflowing it.
     if (token_size > trash->size)
         token_size = trash->size;
     memcpy(trash->area, token, token_size);
     trash->data = token_size;
     smp->data.u.str = *trash;
     smp->data.type = SMP_T_STR;
     return 1;
 case MJSON_TOK_NULL:

... (currently Line 4164)

If possible, I would also like to get this fixed in the current stable release 2.8.
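
For reference, a hedged sketch of how such an array result could then be consumed in a configuration; the header carrying the decoded token payload and the role name are illustrative assumptions, not part of this mail:

```
# txn.roles would hold e.g. ["manage-account","manage-account-links","view-profile"]
http-request set-var(txn.roles) req.hdr(x-token-payload),json_query('$.realm_access.roles')
# a substring match on the serialized array is enough for a simple role check
http-request deny unless { var(txn.roles) -m sub manage-account }
```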

Changes are also in my fork,

https://github.com/jenspopp/haproxy/blob/master/src/sample.c#L4162-L4171

Any comment / help is appreciated.

Best regards
Jens


An Elisa camLine Holding GmbH company - www.camline.com


camLine GmbH - Fraunhoferring 9, 85238 Petershausen, Germany
Amtsgericht München HRB 88821
Managing Directors: Frank Bölstler, Evelyn Tag, Bernhard Völker










Re: HAProxy and musl (was: Re: HAproxy Error)

2023-09-14 Thread Aleksandar Lazic

Hi.

Resuscitating this old thread with a musl libc update.

https://musl.libc.org/releases.html

```
musl-1.2.4.tar.gz (sig) - May 1, 2023

This release adds TCP fallback to the DNS stub resolver, fixing the 
longstanding inability to query large DNS records and incompatibility 
with recursive nameservers that don't give partial results in truncated 
UDP responses. It also makes a number of other bug fixes and 
improvements in DNS and related functionality, including making both the 
modern and legacy API results differentiate between NODATA and NxDomain 
conditions so that the caller can handle them differently.




```

Regards
Alex


On 2020-04-16 (Do.) 13:26, Willy Tarreau wrote:

On Thu, Apr 16, 2020 at 12:29:42PM +0200, Tim Düsterhus wrote:

FWIW musl seems to work OK here when building for linux-glibc-legacy.


Yes. HAProxy linked against Musl is smoke tested as part of the Docker
Official Images program, because the Alpine-based Docker images use Musl
as their libc. In fact you can even use TARGET=linux-glibc + USE_BACKTRACE=.


By the way, I initially thought I was the only one building with musl
for my EdgeRouter-x that I'm using as a distcc load balancer for the
build farm at work. But if there are other users, we'd rather add
a linux-musl target, as the split between OS and library was precisely
made for this purpose!

Anyone objects against something like this (+ the appropriate entries
in other places and doc) ?


diff --git a/Makefile b/Makefile
index d5841a5..a3dad36 100644
--- a/Makefile
+++ b/Makefile
@@ -341,6 +341,18 @@ ifeq ($(TARGET),linux-glibc-legacy)
  USE_ACCEPT4 USE_LINUX_SPLICE USE_PRCTL USE_THREAD_DUMP USE_GETADDRINFO)
  endif
  
+# For linux >= 2.6.28 and musl

+ifeq ($(TARGET),linux-musl)
+  set_target_defaults = $(call default_opts, \
+USE_POLL USE_TPROXY USE_LIBCRYPT USE_DL USE_RT USE_CRYPT_H USE_NETFILTER  \
+USE_CPU_AFFINITY USE_THREAD USE_EPOLL USE_FUTEX USE_LINUX_TPROXY  \
+USE_ACCEPT4 USE_LINUX_SPLICE USE_PRCTL USE_THREAD_DUMP USE_NS USE_TFO \
+USE_GETADDRINFO)
+ifneq ($(shell echo __arm__/__aarch64__ | $(CC) -E -xc - | grep '^[^\#]'),__arm__/__aarch64__)
+  TARGET_LDFLAGS=-latomic
+endif
+endif
+
  # Solaris 8 and above
  ifeq ($(TARGET),solaris)
# We also enable getaddrinfo() which works since solaris 8.

Willy




Re: HaProxy does not updating DNS cache

2023-09-13 Thread Aleksandar Lazic

Hi.

On 2023-09-13 (Mi.) 14:39, Henning Svane wrote:

Hi

I have tried using a DNS record with a TTL of 600 seconds. The record changes once
in a while, but every time I have to restart HAProxy to get the updated
DNS record to work.


Even if I wait for hours. With nslookup I can see that the server
resolves the updated DNS record correctly.


So is there a setting that makes HAProxy TTL-aware, so that HAProxy reloads
the DNS record every time the TTL expires?


Please always add the output of `haproxy -vv`, thanks.
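
In general, HAProxy only re-resolves server host names at runtime when a resolvers section is configured and referenced on the server line; otherwise the name is resolved once at startup. A hedged, untested sketch (nameserver address, host name and timings are illustrative assumptions):

```
resolvers mydns
    nameserver dns1 10.0.0.53:53
    hold valid 10s

backend be_app
    # re-resolve app.example.com at runtime via the "mydns" resolvers section
    server app1 app.example.com:443 check resolvers mydns init-addr last,libc,none
```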


Regards

Henning


Regards
Alex



Re: how to upgrade haproxy

2023-08-28 Thread Aleksandar Lazic

Hi.


On 2023-08-28 (Mo.) 22:30, Atharva Shripad Dudwadkar wrote:

Hi Haproxy team,

Can we install HAProxy from source code on Ubuntu 20.04, and how?


You can follow the INSTALL file to compile HAProxy.

https://git.haproxy.org/?p=haproxy.git;a=blob;f=INSTALL;h=8492a4f37208a6099629101466fec3378a28e73c;hb=HEAD

Regards
Alex

On Thu, 24 Aug 2023 at 4:00 PM, Aleksandar Lazic wrote:


Hi Atharva Shripad Dudwadkar.

On 2023-08-24 (Do.) 12:08, Willy Tarreau wrote:
 > Hi,
 >
 > On Thu, Aug 24, 2023 at 03:23:59PM +0530, Atharva Shripad
Dudwadkar wrote:
 >> Hi haproxy Team,
 >>
 >> Can you please help me with the upgrading process regarding
haproxy from
 >> 2.0.7 to 2.5. in RHEL. Could you please share with me upgrading
process?
 >
 > Please note that 2.5 is no longer supported, it was a short-lived
 > version. You should consider upgrading to a long term supported one
 > to replace your 2.0, these are 2.4, 2.6 or 2.8. Please look at the
 > packages here for various distros and from various maintainers:
 >
 > https://github.com/haproxy/wiki/wiki/Packages
<https://github.com/haproxy/wiki/wiki/Packages>

In addition to that site, you can also open a Red Hat case and ask the vendor
whether there is an updated package, in case you expect some support for the
RHEL package :-).

https://access.redhat.com/support/cases/
<https://access.redhat.com/support/cases/>

 > Regards,
 > Willy

Regards
Alex

--
Sahil Shripad Dudwadkar Sent from iphone




Re: [ANNOUNCE] haproxy-2.9-dev4

2023-08-25 Thread Aleksandar Lazic

Hi.

On 2023-08-25 (Fr.) 19:35, Willy Tarreau wrote:

Hi,

HAProxy 2.9-dev4 was released on 2023/08/25. It added 59 new commits
after version 2.9-dev3.

Some interesting new stuff continues to arrive in this version:



[snipp]


   - reverse HTTP: see below for a complete description. I hope it will
 answer Alex's question :-)


Thank you :-)


   - xxhash was updated to 0.8.2 (we were on 0.8.1) because it fixes a
 build issue on ppc64le.

   - various doc/regtest/CI updates as usual.

Now, regarding reverse HTTP: that's a feature that we've been repeatedly
asked for over the last decade, constantly responding "not possible yet".
But with the flexibility of the current architecture, it appeared that
there was no more big show-stopper and it was about time to respond to
this demand. What is this ? The principle is to permit a server to
establish a connection to haproxy, then to switch the connection
direction on both sides, so that haproxy can send requests to that
server. There was a trend around this 20 years ago on HTTP/1 and it
didn't work well, to be honest. And we were counting on H2 to do that
because it allows to multiplex streams over a connection and to reset
a stream without breaking a connection.


[snipp good explanation]

Looks like that "Reverse HTTP Transport" will be only possible with H2 & 
H3 for now, right. This looks then to me that quic + H3 will be 
implemented also for server as "proto h3", right?


Will HAProxy be the first one which will have this or is there anybody 
else which have also implemented this into there SW?


Regards
Alex



Please what is 'new protocol named "reverse_connect"' for?

2023-08-24 Thread Aleksandar Lazic

Hi.

I just saw some commits about a protocol for active reverse connect and
asked myself what the main use case for that protocol could be. As far as I
can see it is for now about H2 settings, but I'm not sure I understood
the commits correctly.


Regards
Alex



Re: how to upgrade haproxy

2023-08-24 Thread Aleksandar Lazic

Hi Atharva Shripad Dudwadkar.

On 2023-08-24 (Do.) 12:08, Willy Tarreau wrote:

Hi,

On Thu, Aug 24, 2023 at 03:23:59PM +0530, Atharva Shripad Dudwadkar wrote:

Hi haproxy Team,

Can you please help me with the upgrading process regarding haproxy from
2.0.7 to 2.5. in RHEL. Could you please share with me upgrading process?


Please note that 2.5 is no longer supported, it was a short-lived
version. You should consider upgrading to a long term supported one
to replace your 2.0, these are 2.4, 2.6 or 2.8. Please look at the
packages here for various distros and from various maintainers:

 https://github.com/haproxy/wiki/wiki/Packages


In addition to that site, you can also open a Red Hat case and ask the vendor
whether there is an updated package, in case you expect some support for the
RHEL package :-).


https://access.redhat.com/support/cases/


Regards,
Willy


Regards
Alex



Re: WebTransport support/roadmap

2023-08-17 Thread Aleksandar Lazic

Hi.

On 2023-08-17 (Do.) 10:14, Artur wrote:

Feature request submitted: https://github.com/haproxy/haproxy/issues/2256


Thank you. I have added a simple picture based on your e-mails; I hope I
have understood your request properly.


Regards
Alex



Re: WebTransport support/roadmap

2023-08-16 Thread Aleksandar Lazic

Hi.

On 2023-08-16 (Mi.) 17:29, Artur wrote:

Hello !

I wonder if there is a roadmap to support WebTransport protocol in haproxy.

There are some explanations/references (if needed) from socket.io dev 
team that started to support it :


https://socket.io/get-started/webtransport


Looks like that's WebSocket for UDP/QUIC, simply because the WebSocket
protocol does not work over QUIC, IMHO.


Cite from https://datatracker.ietf.org/doc/html/draft-ietf-webtrans-http2/

```
By relying only on generic HTTP semantics, this protocol might allow 
deployment using any HTTP version. However, this document only defines 
negotiation for HTTP/2 [HTTP2] as the current most common TCP-based 
fallback to HTTP/3.

```

Please can you open a feature request at
https://github.com/haproxy/haproxy/issues so that anybody, maybe you
:-), can pick it up and implement it.


Looking back at what a nightmare WebSocket was to implement across its
different versions, this variant for QUIC will not be much easier,
from my point of view.


Jm2c


--
Best regards,
Artur


Regards
Alex



Re: Problems using custom error files with HTTP/2

2023-08-07 Thread Aleksandar Lazic

Hi.

On 2023-08-07 (Mo.) 18:35, Nick Wood wrote:

Hello all,


I'm not sure if anything further happened with this, but after upgrading 
from 2.6 to 2.8.1, custom pages are now broken by default over HTTP/2.


Please can you explain in more detail what you mean by "broken by default"?

What does not work anymore?
What's your config?
Is the custom page also broken when you activate H2 on 2.6?

Has HTTP/2 support been enabled by default? If so how would one turn it 
off so we don't have to downgrade back to v2.6?


The announcement of 2.8 describes how to deactivate H2:
https://www.mail-archive.com/haproxy@formilux.org/msg43600.html

```
- HTTP/2 is advertised by default in ALPN on TLS listeners. It was about
  time, 5 years have passed since it was introduced, it's been enabled by
  default in clear text as an HTTP/1 upgrade for 4 years, yet some users
  do not know how to enable it. From now on, ALPN defaults to "h2,http/1.1"
  on TCP and "h3" on QUIC so that these protocol versions work by default.
  It's still possible to set/reset the ALPN to disable them of course. The
  old concern some users were having about window sizes was addressed by
  having a setting for each side (front vs back).
```

And here is the doc link for the alpn keyword:
http://docs.haproxy.org/2.8/configuration.html#5.1-alpn
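
For example, a hedged sketch of turning H2 off again on a TLS listener by setting the ALPN explicitly (certificate path is an illustrative assumption):

```
frontend fe_https
    # offer only HTTP/1.1 during the TLS handshake, so clients won't negotiate H2
    bind :443 ssl crt /etc/haproxy/site.pem alpn http/1.1
```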


Thanks,

Nick


Regards
Alex


On 17/04/2023 15:09, Aleksandar Lazic wrote:



On 17.04.23 15:08, Willy Tarreau wrote:

On Mon, Apr 17, 2023 at 03:04:05PM +0200, Lukas Tribus wrote:

On Sat, 15 Apr 2023 at 23:08, Willy Tarreau  wrote:


On Sat, Apr 15, 2023 at 10:59:42PM +0200, Willy Tarreau wrote:

Hi Nick,

On Sat, Apr 15, 2023 at 09:44:32PM +0100, Nick Wood wrote:
And here is my configuration - I've slimmed it down to the 
absolute minimum

to reproduce the problem:

If the back end is down, the custom 503.http page should be served.

This works on HTTP/1.1 but not over HTTP/2:


Very useful, thank you. In fact it's irrelevant to the errorfile but
it's the 503 that is not produced in this case. I suspect that it's
interpreted on the server side as only a retryable connection error
and that if the HTTP/1 client had faced it on its second request it
would have been the same (in H1 there's a special case for the first
request on a connection, that is not automatically retryable, but
after the first one we have the luxury of closing silently to force
the client to retry, something that H2 supports natively).

I'm still trying to figure when this problem appeared, and it looks
like even 2.4.0 did behave like this. I'm still digging.


And indeed, this issue appeared with this commit in 1.9-dev10, 4 years ago:

   746fb772f ("MEDIUM: mux_h2: Always set CS_FL_NOT_FIRST for new conn_streams.")

So it makes h2 behave like the second and more H1 requests which are silent
about this. We overlooked this specificity, it would need to be rethought a
little bit I guess.


Even though we had this issue for a long time and nobody noticed, we
should probably not enable H2 on a massive scale with new 2.8 defaults
before this is fixed to avoid silently breaking this error condition.


I totally agree ;-)


Well, I would prefer to keep it enabled, so that such bugs can be
found much earlier :-).


Jm2c


Willy







libcrypt may be removed completely in future Glibc releases

2023-08-02 Thread Aleksandar Lazic

Hi.

I have seen these lines in the current glibc release notes:

https://sourceware.org/glibc/wiki/Release/2.38
```
2.1. Building libcrypt is disabled by default

If you still need Glibc libcrypt, pass --enable-crypt to the configure 
script.


Note that libcrypt may be removed completely in future Glibc releases. 
Distributions are encouraged to provide libcrypt via libxcrypt[1], 
instead of relying on Glibc libcrypt.

```

The libxcrypt page mentions that it is backward compatible, but we should keep
an eye on this, IMHO.


Regards
Alex

[1] https://github.com/besser82/libxcrypt



Re: QUIC with a fcgi backend

2023-07-24 Thread Aleksandar Lazic

Yaacov.

On 2023-07-24 (Mo.) 15:08, Christopher Faulet wrote:

Le 7/24/23 à 12:24, Yaacov Akiba Slama a écrit :

Hi Christopher,

Thanks for report. It is not a known issue, but I can confirm it. When
H3 HEADERS frames are converted to the internal HTTP representation
(HTX), a flag is missing to specify a content-length was found.

I pushed a flag, it should be fixed:

commit e42241ed2b1df77beb1817eb9bcc46bab793f25c (HEAD -> master,
haproxy.org/master)
Author: Christopher Faulet 
Date:   Mon Jul 24 11:37:10 2023 +0200


Thanks for the fix. I just tested and it works but I can still see a
weird behavior when using curl (I still didn't test with a browser):
when the uploaded data is big (bigger than bufsize), the connection is
not immediately closed but only after a timeout:

curl --http3-only -d @ 



curl: (55) ngtcp2_conn_handle_expiry returned error: ERR_IDLE_CLOSE



This time, I'm unable to reproduce. I guess we need help from the QUIC guys
(Fred or Amaury).


Are HAProxy and the FCGI server on the same host/network, or is there
any firewall or anything in between?


What's the error message on HAProxy and on the FCGI server when the
timeout occurs?


Regards
Alex



Re: QUIC with a fcgi backend

2023-07-22 Thread Aleksandar Lazic

Hi.

On 2023-07-22 (Sa.) 21:48, Yaacov Akiba Slama wrote:

Hi,

It seems that there is a bug in QUIC when using a fastcgi backend:

As soon as the size of the uploaded data is more than bufsize, the 
server returns 400 Bad request and shows PH-- in the logs.


The problem occurs with both haproxy 2.8.1 and 2.9-dev2 (both built with
quictls OpenSSL_1_1_1u-quic1).


When using h2 or an http backend, everything is ok.

Is it a known problem?


Please can you share the config you use so that we are able to reproduce the
issue? I think it's not known, but it would be good to be able to
reproduce it.



Thanks,

--yas


Regards
Alex



Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-07-13 Thread Aleksandar Lazic

Hi Andrew.

Thank you for your answers.

On 2023-07-13 (Do.) 08:22, Hopkins, Andrew wrote:

Hi Alex, thanks for taking a look at this change, to answer your questions:

* Do you plan to make releases with a stable ABI that we can rely on?
Yes, we have releases on GitHub that follow semantic versioning and 
within minor versions everything is backward compatible. Internal 
details of structs may change in an API compatible way over time but 
might not be ABI. This would be signaled in the release notes and 
version number.


Okay.


* Do you plan to add QUIC (server part) faster than OpenSSL?

I have not looked into quic benchmarks but it uses the same 
cryptographic primitives as TLS so I imagine we'd be faster for a lot of 
the algorithms. It might not be useful for HAProxy which is all C, but 
AWS also launched s2n-quic [1] which does have extensive testing for 
correctness and performance. s2n-quic even uses AWS-LC's libcrypto for
all of the cryptographic operations [2] through our Rust bindings
aws-lc-rs [3].


Hm, this implies a Rust dependency, which increases the complexity of
building HAProxy. From my point of view this isn't very helpful for bringing
the library into HAProxy.


* Will there be some packages for Debian/Ubuntu/RHEL/... so that the
users of HAProxy can "just install and run" HAProxy with that SSL lib?


In the near future no. Currently AWS-LC does not support enough packages 
to fully replace libcrypto for the entire operating system, and 
balancing different programs using different library paths and libcrypto 
implementations is tricky. Eventually distributing static archives and 
shared libraries once we have more support makes sense. There is more 
context/history in this issue [4].


Uh, that's a show stopper, at least from my point of view. This implies
the same work the HAProxy team already has for wolfSSL, BoringSSL and quictls,
and that's a lot of work.


As the patch looks quite small and AWS-LC is based on BoringSSL, do you
handle the BoringSSL changes so that the API (and the less frequent ABI)
changes are absorbed by AWS-LC?


[1] https://github.com/aws/s2n-quic 
[2] https://github.com/aws/s2n-quic/pull/1840

[3] https://github.com/aws/aws-lc-rs
[4] https://github.com/aws/aws-lc/issues/804

Thanks, Andrew

----
*From:* Aleksandar Lazic 
*Sent:* Wednesday, July 12, 2023 1:14 AM
*To:* Hopkins, Andrew; haproxy@formilux.org
*Subject:* RE: [EXTERNAL][PATCH] BUILD: ssl: Build with new 
cryptographic library AWS-LC




Hi Andrew.

On 2023-07-12 (Mi.) 02:26, Hopkins, Andrew wrote:

Hello HAProxy maintainers, I work on the AWS libcrypto (AWS-LC) project [1].
Our goal is to improve the cryptography we use internally at AWS and help our
customers externally. In the spirit of helping people use good crypto we know
it’s important to make it easy to use AWS-LC everywhere they use cryptography.
This is why we are interested in integrating AWS-LC into HAProxy.

AWS-LC is a fork of BoringSSL which you already partially support. We recently
merged in several PRs (Full OCSP support [2] and custom extension support [3])
to fully support HAProxy the same as OpenSSL. To ensure we continue to support
HAProxy long term we added HAProxy built with AWS-LC to our CI [4].

In our early testing we see modest improvements in overall throughput when
compared to OpenSSL 3.1 on x86 and arm CPUs. Following a similar setup as this
blog [5] I observe a small (~2.5%) increase in requests per second for 5 kb
requests on a C6i (x86) and C6g (arm) instance using TLS 1.3 and AES 256 GCM.
For both tests I used
`taskset -c 2-47 ./h1load -e -ll -P -t 46 -s 30 -d 120 -c 500 https://[c6i or c6g ip]:[aws-lc or openssl port]/?s=5k`.

This small difference in this symmetric crypto workload comes down to AWS-LC
and OpenSSL having similar AES implementations. We observe larger performance
improvements with our micro-benchmarks for algorithms related to the TLS
handshake such as 15% reduction for ECDH with P-256, and 40% reduction for
P-521 on a C6i. This comes from our s2n-bignum library[6], a formally verified
bignum library with a focus on performance and correctness.

When built with AWS-LC all current regression tests pass. I have included a
small patch to update your documentation with AWS-LC as an option and I
attempted to add AWS-LC to your CI. I need a little help figuring out how to
test that part. Lastly from your excellent contributing guide I am not 
subscribed
so I would like to be cc’d on all responses.


Sounds quite interesting library.

I have a few questions about the future plans of the library.

* Do you plan to make releases with a stable ABI that we can rely on?
    That's one of the pain points with BoringSSL.
* Do you plan to add quic (Ser

Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-07-12 Thread Aleksandar Lazic

Hi Andrew.

On 2023-07-12 (Mi.) 02:26, Hopkins, Andrew wrote:

Hello HAProxy maintainers, I work on the AWS libcrypto (AWS-LC) project [1].
Our goal is to improve the cryptography we use internally at AWS and help our
customers externally. In the spirit of helping people use good crypto we know
it’s important to make it easy to use AWS-LC everywhere they use cryptography.
This is why we are interested in integrating AWS-LC into HAProxy.

AWS-LC is a fork of BoringSSL which you already partially support. We recently
merged in several PRs (Full OCSP support [2] and custom extension support [3])
to fully support HAProxy the same as OpenSSL. To ensure we continue to support
HAProxy long term we added HAProxy built with AWS-LC to our CI [4].

In our early testing we see modest improvements in overall throughput when
compared to OpenSSL 3.1 on x86 and arm CPUs. Following a similar setup as this
blog [5] I observe a small (~2.5%) increase in requests per second for 5 kb
requests on a C6i (x86) and C6g (arm) instance using TLS 1.3 and AES 256 GCM.
For both tests I used
`taskset -c 2-47 ./h1load -e -ll -P -t 46 -s 30 -d 120 -c 500 https://[c6i or c6g ip]:[aws-lc or openssl port]/?s=5k`.

This small difference in this symmetric crypto workload comes down to AWS-LC
and OpenSSL having similar AES implementations. We observe larger performance
improvements with our micro-benchmarks for algorithms related to the TLS 
handshake such as 15% reduction for ECDH with P-256, and 40% reduction for 
P-521 on a C6i. This comes from our s2n-bignum library[6], a formally verified

bignum library with a focus on performance and correctness.

When built with AWS-LC all current regression tests pass. I have included a
small patch to update your documentation with AWS-LC as an option and I
attempted to add AWS-LC to your CI. I need a little help figuring out how to
test that part. Lastly from your excellent contributing guide I am not 
subscribed
so I would like to be cc’d on all responses.


Sounds quite interesting library.

I have a few questions about the future plans of the library.

* Do you plan to make releases with a stable ABI that we can rely on?
  That's one of the pain points with BoringSSL.
* Do you plan to add QUIC (server part) faster than OpenSSL?
* Will there be some packages for Debian/Ubuntu/RHEL/... so that the
users of HAProxy can "just install and run" HAProxy with that SSL lib?



Thanks, Andrew


Regards
Alex


[1] https://github.com/aws/aws-lc
[2] https://github.com/aws/aws-lc/pull/1054
[3] https://github.com/aws/aws-lc/pull/1071
[4] https://github.com/aws/aws-lc/pull/1083
[5] 
https://www.haproxy.com/blog/haproxy-forwards-over-2-million-http-requests-per-second-on-a-single-aws-arm-instance
[6] https://github.com/awslabs/s2n-bignum






Re: QUIC (mostly) working on top of unpatched OpenSSL

2023-07-07 Thread Aleksandar Lazic

Hi.

Just an addendum below to my last mail.

On 2023-07-07 (Fr.) 00:33, Aleksandar Lazic wrote:

Hi Willy

On 2023-07-06 (Do.) 22:05, Willy Tarreau wrote:

Hi all,

as the subject says it, Fred managed to make QUIC mostly work on top of
a regular OpenSSL. Credit goes to the NGINX team who found a clever and
absolutely ugly way to abuse OpenSSL callbacks to intercept and inject
data from/to the TLS hello messages. It does have limitations, such as
0-RTT not being supported, and maybe other ones we're not aware of. I'm
hesitating in merging it because there are some non-negligible impacts
for the QUIC ecosystem itself in doing this, ranging from a possibly
lower performance or reliability that could disappoint some users of the
protocol, to discouraging the efforts to get a real alternative stack
working.

I've opened the discussion on the QUIC working group here to collect
various opinions and advices:

   
https://mailarchive.ietf.org/arch/browse/quic/?gbt=1=M9pkSGzTSHunNC1yeySaB3irCVo


Unsurprisingly, the perception for now is mostly aligned with my first
feelings, i.e. "OpenSSL will be happy and QUIC will be degraded, that's
a bad idea". But I also know that on the WG we exclusively speak between
implementors, who don't always have the users' perspective.

I would encourage those who really want to ease QUIC adoption to read
the thread above (possibly even share their opinion, that would be
welcome) so that we can come to a consensus regarding this (e.g. merge,
drop, merge conditioned at build time, or with an expert runtime option,
anything else, I don't know). I feel like it's a difficult stretch to
find the best approach. The "it's not possible at all with openssl,
period" excuse is no longer true, however "it's only a degraded approach"
remains true.

I wouldn't like end-users to just think "pwah, all that for this, I'm
not impressed" without realizing that they wouldn't be benefitting from
everything. But maybe it would be good enough for most of those who are
not going to rebuild QuicTLS or wolfSSL. I sincerely don't know and I
do welcome opinions.


Amazing work from the nginx team :-)

From my point of view the way to go is wolfSSL, as the path OpenSSL is on
does not look very promising for the future, at least to me. This implies
that HAProxy will have different packages per OS and creates much more work
for the nice packaging persons :-(. I don't know how big the challenge is
to run HAProxy completely with wolfSSL, if it's not already done, but to
have packages like "haproxy-openssl" and "haproxy-quic" (the latter implying
wolfSSL) would be a nice solution for the HAProxy users, imho. A nice change
would be if nginx and Apache HTTPd also moved to wolfSSL :-).
What's not clear to me is what the future of wolfSSL will be, as the
company behind the lib currently looks very open towards open source
projects, but who knows the future.


Maybe another option could be GnuTLS, as it added a QUIC API in 3.7.0,
but I think moving to GnuTLS would be an even bigger challenge than moving
from OpenSSL to wolfSSL, simply because there is not even a single line of
GnuTLS code in HAProxy today.


https://lists.gnupg.org/pipermail/gnutls-help/2020-December/004670.html
...
** libgnutls: Added a new set of API to enable QUIC implementation 
(#826, #849, #850).

...

ngtcp2 have examples with different TLS library, just fyi.
https://github.com/ngtcp2/ngtcp2/tree/main/examples

Another question is: is the TLS/SSL layer in HAProxy separated enough to
add another TLS implementation? I'm pretty sure a lot of people
know this, but just for the archive let me share how curl handles
different TLS backends.


https://github.com/curl/curl/tree/master/lib/vtls

All in all, from my point of view OpenSSL was a good library in the past,
but for the future a more modern and open (in organisation and mindset)
library should be used. Jm2c.


An interesting point, at least for me: it looks like OpenSSL has started to
implement QUIC. Is there any official info from OpenSSL about this part
this year? Is there also a statement about the performance issues with
3.x?


https://github.com/openssl/openssl/tree/master/ssl/quic


Cheers,
Willy


Regards
Alex





Re: QUIC (mostly) working on top of unpatched OpenSSL

2023-07-06 Thread Aleksandar Lazic

Hi Willy

On 2023-07-06 (Do.) 22:05, Willy Tarreau wrote:

Hi all,

as the subject says it, Fred managed to make QUIC mostly work on top of
a regular OpenSSL. Credit goes to the NGINX team who found a clever and
absolutely ugly way to abuse OpenSSL callbacks to intercept and inject
data from/to the TLS hello messages. It does have limitations, such as
0-RTT not being supported, and maybe other ones we're not aware of. I'm
hesitating in merging it because there are some non-negligible impacts
for the QUIC ecosystem itself in doing this, ranging from a possibly
lower performance or reliability that could disappoint some users of the
protocol, to discouraging the efforts to get a real alternative stack
working.

I've opened the discussion on the QUIC working group here to collect
various opinions and advices:

   
https://mailarchive.ietf.org/arch/browse/quic/?gbt=1=M9pkSGzTSHunNC1yeySaB3irCVo

Unsurprisingly, the perception for now is mostly aligned with my first
feelings, i.e. "OpenSSL will be happy and QUIC will be degraded, that's
a bad idea". But I also know that on the WG we exclusively speak between
implementors, who don't always have the users' perspective.

I would encourage those who really want to ease QUIC adoption to read
the thread above (possibly even share their opinion, that would be
welcome) so that we can come to a consensus regarding this (e.g. merge,
drop, merge conditioned at build time, or with an expert runtime option,
anything else, I don't know). I feel like it's a difficult stretch to
find the best approach. The "it's not possible at all with openssl,
period" excuse is no longer true, however "it's only a degraded approach"
remains true.

I wouldn't like end-users to just think "pwah, all that for this, I'm
not impressed" without realizing that they wouldn't be benefitting from
everything. But maybe it would be good enough for most of those who are
not going to rebuild QuicTLS or wolfSSL. I sincerely don't know and I
do welcome opinions.


Amazing work from the nginx team :-)

From my point of view the way to go is wolfSSL, as the path OpenSSL is on
does not look very promising for the future, at least to me. This implies
that HAProxy will have different packages per OS and creates much more work
for the nice packaging persons :-(. I don't know how big the challenge is
to run HAProxy completely with wolfSSL, if it's not already done, but to
have packages like "haproxy-openssl" and "haproxy-quic" (the latter implying
wolfSSL) would be a nice solution for the HAProxy users, imho. A nice change
would be if nginx and Apache HTTPd also moved to wolfSSL :-).
What's not clear to me is what the future of wolfSSL will be, as the
company behind the lib currently looks very open towards open source
projects, but who knows the future.


Maybe another option could be GnuTLS, as it added a QUIC API in 3.7.0,
but I think moving to GnuTLS would be an even bigger challenge than moving
from OpenSSL to wolfSSL, simply because there is not even a single line of
GnuTLS code in HAProxy today.


https://lists.gnupg.org/pipermail/gnutls-help/2020-December/004670.html
...
** libgnutls: Added a new set of API to enable QUIC implementation 
(#826, #849, #850).

...

ngtcp2 have examples with different TLS library, just fyi.
https://github.com/ngtcp2/ngtcp2/tree/main/examples

Another question is: is the TLS/SSL layer in HAProxy separated enough to
add another TLS implementation? I'm pretty sure a lot of people
know this, but just for the archive let me share how curl handles
different TLS backends.


https://github.com/curl/curl/tree/master/lib/vtls

All in all, from my point of view OpenSSL was a good library in the past,
but for the future a more modern and open (in organisation and mindset)
library should be used. Jm2c.




Cheers,
Willy


Regards
Alex



Re: [PATCH 1/1] MEDIUM: ssl: new sample fetch method to get curve name

2023-06-20 Thread Aleksandar Lazic

Hi.

On 2023-06-20 (Di.) 18:50, Mariam John wrote:

Adds a new sample fetch method to get the curve name used in the
key agreement to enable better observability. In OpenSSLv3, the function
`SSL_get_negotiated_group` returns the NID of the curve and from the NID,
we get the curve name by passing the NID to OBJ_nid2sn. This was not
available in v1.1.1. SSL_get_curve_name(), which returns the curve name
directly was merged into OpenSSL master branch last week but will be available
only in its next release.
---
  doc/configuration.txt|  8 +
  reg-tests/ssl/ssl_client_samples.vtc |  2 ++
  reg-tests/ssl/ssl_curves.vtc |  4 +++
  src/ssl_sample.c | 46 
  4 files changed, 60 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 8bcfc3c06..d944ac132 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -20646,6 +20646,10 @@ ssl_bc_cipher : string
over an SSL/TLS transport layer. It can be used in a tcp-check or an
http-check ruleset.
  
+ssl_bc_curve : string

+  Returns the name of the curve used in the key agreement when the outgoing
+  connection was made over an SSL/TLS transport layer.
+
  ssl_bc_client_random : binary
Returns the client random of the back connection when the incoming 
connection
was made over an SSL/TLS transport layer. It is useful to to decrypt traffic
@@ -20944,6 +20948,10 @@ ssl_fc_cipher : string
Returns the name of the used cipher when the incoming connection was made
over an SSL/TLS transport layer.
  
+ssl_fc_curve : string

+  Returns the name of the curve used in the key agreement when the incoming
+  connection was made over an SSL/TLS transport layer.
+
  ssl_fc_cipherlist_bin([]) : binary
Returns the binary form of the client hello cipher list. The maximum
returned value length is limited by the shared capture buffer size


Please can you sort the keywords in proper alphabetical order?

Please can you add "Require OpenSSL >= 3..." with the right version, similar to
https://docs.haproxy.org/2.8/configuration.html#7.3.4-ssl_fc_server_handshake_traffic_secret
.




diff --git a/reg-tests/ssl/ssl_client_samples.vtc 
b/reg-tests/ssl/ssl_client_samples.vtc
index 5a84e4b25..1f078ea98 100644
--- a/reg-tests/ssl/ssl_client_samples.vtc
+++ b/reg-tests/ssl/ssl_client_samples.vtc
@@ -46,6 +46,7 @@ haproxy h1 -conf {
  http-response add-header x-ssl-s_serial %[ssl_c_serial,hex]
  http-response add-header x-ssl-key_alg %[ssl_c_key_alg]
  http-response add-header x-ssl-version %[ssl_c_version]
+http-response add-header x-ssl-curve-name %[ssl_fc_curve]
  
  bind "${tmpdir}/ssl.sock" ssl crt ${testdir}/common.pem ca-file ${testdir}/ca-auth.crt verify optional crt-ignore-err all crl-file ${testdir}/crl-auth.pem
  
@@ -69,6 +70,7 @@ client c1 -connect ${h1_clearlst_sock} {

  expect resp.http.x-ssl-s_serial == "02"
  expect resp.http.x-ssl-key_alg == "rsaEncryption"
  expect resp.http.x-ssl-version == "1"
+expect resp.http.x-ssl-curve-name == "X25519"
  } -run
  
  
diff --git a/reg-tests/ssl/ssl_curves.vtc b/reg-tests/ssl/ssl_curves.vtc

index 5cc70df14..3dbe47c4d 100644
--- a/reg-tests/ssl/ssl_curves.vtc
+++ b/reg-tests/ssl/ssl_curves.vtc
@@ -75,6 +75,7 @@ haproxy h1 -conf {
  listen ssl1-lst
  bind "${tmpdir}/ssl1.sock" ssl crt ${testdir}/common.pem ca-file 
${testdir}/set_cafile_rootCA.crt verify optional curves P-256:P-384
  server s1 ${s1_addr}:${s1_port}
+http-response add-header x-ssl-fc-curve-name %[ssl_fc_curve]
  
  # The prime256v1 curve, which is used by default by a backend when no

  # 'curves' or 'ecdhe' option is specified, is not allowed on this listener
@@ -98,6 +99,7 @@ haproxy h1 -conf {
  
  bind "${tmpdir}/ssl-ecdhe-256.sock" ssl crt ${testdir}/common.pem ca-file ${testdir}/set_cafile_rootCA.crt verify optional ecdhe prime256v1

  server s1 ${s1_addr}:${s1_port}
+http-response add-header x-ssl-fc-curve-name %[ssl_fc_curve]
  
  } -start
  
@@ -105,6 +107,7 @@ client c1 -connect ${h1_clearlst_sock} {

txreq
rxresp
expect resp.status == 200
+  expect resp.http.x-ssl-fc-curve-name == "prime256v1"
  } -run
  
  # The backend tries to use the prime256v1 curve that is not accepted by the

@@ -129,6 +132,7 @@ client c4 -connect ${h1_clearlst_sock} {
txreq -url "/ecdhe-256"
rxresp
expect resp.status == 200
+  expect resp.http.x-ssl-fc-curve-name == "prime256v1"
  } -run
  
  syslog Slg_cust_fmt -wait


Please can you create a dedicated test file for that feature so that the
test can be excluded when the required OpenSSL is not used?
I think the "openssl_version_atleast(1.1.1)" which is in the
ssl_curves.vtc file should be "3.".




diff --git a/src/ssl_sample.c b/src/ssl_sample.c
index 5aec97fef..d7a7a09f9 100644
--- a/src/ssl_sample.c
+++ b/src/ssl_sample.c
@@ -1304,6 +1304,46 @@ 

Re: OCSP renewal with 2.8

2023-06-03 Thread Aleksandar Lazic

Hi.

On 2023-06-02 (Fr.) 22:42, Lukas Tribus wrote:

On Fri, 2 Jun 2023 at 21:55, Willy Tarreau  wrote:

Initially during the design phase we thought about having 3 states:
"off", "on", "auto", with the last one only enabling updates for certs
that already had a .ocsp file. But along discussions with some users
we were told that it was not going to be that convenient (I don't
remember why, but I think that Rémi and/or William probably remember
the reason), and it ended up dropping "auto".

Alternately maybe instead of enabling for all certs, what would be
useful would be to just change the default, because if you have 100k
certs, it's likely that 99.9k work one way and the other ones the other
way, and what you want is to indicate the default and only mention the
exception for those concerned.


I suggest we make it configurable on the bind line like other ssl
options, so it will work for the common use cases that don't involve
crt-lists, like a simple crt statement pointing to a certificate or a
directory.

It could also be a global option *as well*, but imho it does need to
be a bind line configuration option, just like strict-sni, alpn and
ciphers, so we can enable it specifically (per frontend, per bind
line) without requiring crt-list.


+1 to this suggestion.



Lukas





@Wolfssl: any plans to add "ECH (Encrypted client hello) support" and question about Roadmap

2023-06-01 Thread Aleksandar Lazic

Hi,

As we have now a shiny new LTS let's take a look into the future :-)

As wolfSSL looks like a good future alternative to OpenSSL, is there
any plan to add ECH (Encrypted Client Hello,
https://github.com/haproxy/haproxy/issues/1924 ) to wolfSSL?


Is there any idea which features are planned to be added by HAProxy Company
from the feature requests at
https://github.com/haproxy/haproxy/labels/type%3A%20feature ?


Regards
Alex



Re: Followup on openssl 3.0 note seen in another thread

2023-05-29 Thread Aleksandar Lazic

Hi Shawn.

On 2023-05-28 (So.) 05:30, Shawn Heisey wrote:

On 5/27/23 18:03, Shawn Heisey wrote:

On 5/27/23 14:56, Shawn Heisey wrote:
Yup.  It was using keepalive.  I turned keepalive off and repeated 
the tests.


I did the tests again with 200 threads.  The system running the tests 
has 12 hyperthreaded cores, so this definitely pushes its capabilities.


I had forgotten a crucial fact that means all my prior testing work was 
invalid:  Apache HttpClient 4.x defaults to a max simultaneous 
connection count of 2.  Not going to exercise concurrency with that!


I have increased that to 1024, my program's max thread count, and now 
the test is a LOT faster ... it's actually running 200 threads at the 
same time.  Two runs per branch here, one with 200 threads and one with 
24 threads.


Still no smoking gun showing 3.0 as the slowest of the bunch.  In fact, 
3.0 is giving the best results!  So my test method is still probably the 
wrong approach.


Maybe you can change the setup this way:

HAProxy FE => HAProxy BE => destination servers

where the destination servers are also HAProxy instances which just return
static content, or any other high-performance, low-latency HTTPS server.

With such a setup you can also test the client mode of OpenSSL.
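
A hedged sketch of such a static destination server implemented in HAProxy itself; the listener port, certificate path and payload are illustrative assumptions:

```
frontend static_dest
    bind :8443 ssl crt /etc/haproxy/test.pem alpn h2,http/1.1
    # answer directly with a tiny static response, no backend involved
    http-request return status 200 content-type "text/plain" string "ok"
```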

Regards
Alex


1.1.1t:
21:06:45.388 [main] INFO  o.e.t.h.MainSSLTest Count 20 234.54/s
21:06:45.388 [main] INFO  o.e.t.h.MainSSLTest 10th % 54 ms
21:06:45.388 [main] INFO  o.e.t.h.MainSSLTest 25th % 94 ms
21:06:45.389 [main] INFO  o.e.t.h.MainSSLTest Median 188 ms
21:06:45.389 [main] INFO  o.e.t.h.MainSSLTest 75th % 991 ms
21:06:45.389 [main] INFO  o.e.t.h.MainSSLTest 95th % 3698 ms
21:06:45.389 [main] INFO  o.e.t.h.MainSSLTest 99th % 6924 ms
21:06:45.390 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 11983 ms
-
21:20:35.400 [main] INFO  o.e.t.h.MainSSLTest Count 24000 355.56/s
21:20:35.400 [main] INFO  o.e.t.h.MainSSLTest 10th % 40 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 25th % 46 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest Median 57 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 75th % 71 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 95th % 126 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 99th % 168 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 721 ms

3.0.8:
20:50:12.916 [main] INFO  o.e.t.h.MainSSLTest Count 20 244.69/s
20:50:12.917 [main] INFO  o.e.t.h.MainSSLTest 10th % 56 ms
20:50:12.917 [main] INFO  o.e.t.h.MainSSLTest 25th % 93 ms
20:50:12.917 [main] INFO  o.e.t.h.MainSSLTest Median 197 ms
20:50:12.917 [main] INFO  o.e.t.h.MainSSLTest 75th % 949 ms
20:50:12.918 [main] INFO  o.e.t.h.MainSSLTest 95th % 3425 ms
20:50:12.918 [main] INFO  o.e.t.h.MainSSLTest 99th % 6679 ms
20:50:12.918 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 11582 ms
-
21:23:22.076 [main] INFO  o.e.t.h.MainSSLTest Count 24000 404.78/s
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 10th % 40 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 25th % 45 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest Median 53 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 75th % 63 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 95th % 90 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 99th % 121 ms
21:23:22.078 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 671 ms

3.1.0+locks:
20:33:32.805 [main] INFO  o.e.t.h.MainSSLTest Count 20 238.02/s
20:33:32.806 [main] INFO  o.e.t.h.MainSSLTest 10th % 58 ms
20:33:32.806 [main] INFO  o.e.t.h.MainSSLTest 25th % 95 ms
20:33:32.806 [main] INFO  o.e.t.h.MainSSLTest Median 196 ms
20:33:32.806 [main] INFO  o.e.t.h.MainSSLTest 75th % 1001 ms
20:33:32.807 [main] INFO  o.e.t.h.MainSSLTest 95th % 3475 ms
20:33:32.807 [main] INFO  o.e.t.h.MainSSLTest 99th % 6288 ms
20:33:32.807 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 10700 ms
-
21:26:24.555 [main] INFO  o.e.t.h.MainSSLTest Count 24000 402.89/s
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 10th % 39 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 25th % 45 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest Median 52 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 75th % 64 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 95th % 93 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 99th % 127 ms
21:26:24.557 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 689 ms





Re: unsubscribe

2023-05-14 Thread Aleksandar Lazic

Hi.

On 14.05.23 22:07, Roman Gelfand wrote:




Here is the unsubscribe address.
https://www.haproxy.org/#tact

Regards
Alex



Re: equivalent of url32+src for hdr_ip(x-forwarded-for)?

2023-05-11 Thread Aleksandar Lazic
 [TRACE] trace


Hope that helps

Regards
Alex

On Thu, May 11, 2023 at 11:21 PM Aleksandar Lazic wrote:


Dear Nathan.

On 11.05.23 23:59, Nathan Rixham wrote:
 > Hi All,
 >
 > I've run into an issue I can't figure out, essentially need to use
 > url32+src in stick tables, but where src is the x-forwarded-for
address
 > rather than the connecting source - any advice would be appreciated.

As this is a quite generic question, please send us the following info:

* haproxy -vv
* your config, reduced and without any sensitive data
* A more detailed explanation of what exactly you want to do and what does
not work.

 > Cheers,
 >
 > Nathan

Regards
Alex





Re: equivalent of url32+src for hdr_ip(x-forwarded-for)?

2023-05-11 Thread Aleksandar Lazic

Dear Nathan.

On 11.05.23 23:59, Nathan Rixham wrote:

Hi All,

I've run into an issue I can't figure out, essentially need to use 
url32+src in stick tables, but where src is the x-forwarded-for address 
rather than the connecting source - any advice would be appreciated.


As this is a quite generic question, please send us the following info:

* haproxy -vv
* your config, reduced and without any sensitive data
* A more detailed explanation of what exactly you want to do and what does
not work.



Cheers,

Nathan


Regards
Alex



Re: Drain L4 host that fronts a L7 cluster

2023-05-05 Thread Aleksandar Lazic
Isn't this a similar request to
https://github.com/haproxy/haproxy/issues/969, as I mentioned in
issue https://github.com/haproxy/haproxy/issues/2149 ?


On 06.05.23 01:18, Abhijeet Rastogi wrote:

Thanks for the response Tristan.

For the future reader of this thread, a feature request was created
for this. https://github.com/haproxy/haproxy/issues/2146


On Fri, May 5, 2023 at 4:09 PM Tristan  wrote:

however, our reason to migrate to HAproxy is adding gRPC
compliance to the stack, so H2 support is a must. Thanks for the
workarounds, indeed interesting, I'll check them out.

  From a cursory look at the gRPC spec it seems like you would indeed
really need the GOAWAY to get anywhere


trigger the GOAWAY H2 frames (which isn't possible at

the moment, as far as I can tell)

*Is this a valid feature request for HAProxy?*
Maybe, we can provide "setting CLO mode" via the "http-request" directive?

I can't make that call, but at least it sounds quite useful to me indeed.

And in particular, being able to set CLO mode is likely a little bit
nicer in the long run than something like a hypothetical 'http-request
send-h2-goaway', since CLO mode can account for future protocols or spec
changes transparently as those eventually get added to HAProxy.

Interesting problem either way!

Cheers,
Tristan



--
Cheers,
Abhijeet (https://abhi.host)





Any Roadmap for "Server weight modulation based on smoothed average measurement" ( https://github.com/haproxy/haproxy/issues/1977 )

2023-04-28 Thread Aleksandar Lazic

Hi.

Is there any plan for when the work on this part will start, or will this
just move smoothly forward? :-)

Regards
Alex



Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2023-04-28 Thread Aleksandar Lazic

Hi Willy.

On 28.04.23 11:14, Aleksandar Lazic wrote:

Hi Will.

On 28.04.23 11:07, Willy Tarreau wrote:


[snipp]


So from what I'm reading above, the regtest is fake and doesn't test
the presence of digits in the returned value. Could you please correct
it so that it properly verifies that your patch works, and then I'm
fine with merging it.


Okay will take a look and create a new patch.


Attached the new patch.

Regards
Alex

From 01b0561f0aad6ecf14e1bef552d9c2ad66ad1d67 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Fri, 28 Apr 2023 11:39:12 +0200
Subject: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

This Patch adds fetch samples for backends round trip time.
---
 doc/configuration.txt| 16 ++
 reg-tests/sample_fetches/tcpinfo_rtt.vtc | 39 
 src/tcp_sample.c | 32 +++
 3 files changed, 87 insertions(+)
 create mode 100644 reg-tests/sample_fetches/tcpinfo_rtt.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 32d2fec17..28f308f9d 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -19642,6 +19642,22 @@ be_name : string
   frontends with responses to check which backend processed the request. It can
   also be used in a tcp-check or an http-check ruleset.
 
+bc_rtt(<unit>) : integer
+  Returns the Round Trip Time (RTT) measured by the kernel for the backend
+  connection. <unit> is facultative, by default the unit is milliseconds. <unit>
+  can be set to "ms" for milliseconds or "us" for microseconds. If the server
+  connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
+bc_rttvar(<unit>) : integer
+  Returns the Round Trip Time (RTT) variance measured by the kernel for the
+  backend connection. <unit> is facultative, by default the unit is milliseconds.
+  <unit> can be set to "ms" for milliseconds or "us" for microseconds. If the
+  server connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
 be_server_timeout : integer
   Returns the configuration value in millisecond for the server timeout of the
   current backend. This timeout can be overwritten by a "set-timeout" rule. See
diff --git a/reg-tests/sample_fetches/tcpinfo_rtt.vtc b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
new file mode 100644
index 0..93300d528
--- /dev/null
+++ b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
@@ -0,0 +1,39 @@
+varnishtest "Test declaration of TCP rtt fetches"
+
+# feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(v2.8-dev8)'"
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+txresp
+}  -start
+
+haproxy h1 -conf {
+  defaults common
+  mode http
+  timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"
+
+  frontend fe from common
+  bind "fd@${feh1}"
+
+  default_backend be
+
+  backend be from common
+
+  http-response set-header x-test1 "%[fc_rtt]"
+  http-response set-header x-test2 "%[bc_rtt(us)]"
+  http-response set-header x-test3 "%[fc_rttvar]"
+  http-response set-header x-test4 "%[bc_rttvar]"
+
+  server s1 ${s1_addr}:${s1_port}
+
+} -start
+
+client c1 -connect ${h1_feh1_sock} {
+txreq -req GET -url /
+rxresp
+expect resp.status == 200
+expect resp.http.x-test2 ~ "[0-9]+"
+} -run
\ No newline at end of file
diff --git a/src/tcp_sample.c b/src/tcp_sample.c
index 12eb25c4e..393e39e93 100644
--- a/src/tcp_sample.c
+++ b/src/tcp_sample.c
@@ -401,6 +401,35 @@ smp_fetch_fc_rttvar(const struct arg *args, struct sample *smp, const char *kw,
 	return 1;
 }
 
+/* get the mean rtt of a backend connection */
+static int
+smp_fetch_bc_rtt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 0))
+		return 0;
+
+	/* By default or if explicitly specified, convert rtt to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+	return 1;
+}
+
+/* get the variance of the mean rtt of a backend connection */
+static int
+smp_fetch_bc_rttvar(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 1))
+		return 0;
+
+	/* By default or if explicitly specified, convert rttvar to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+	return 1;
+}
+
+
 #if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__Ope

Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2023-04-28 Thread Aleksandar Lazic

Hi Will.

On 28.04.23 11:07, Willy Tarreau wrote:

Hi Alex,

On Fri, Apr 28, 2023 at 10:59:46AM +0200, Aleksandar Lazic wrote:

Hi Willy.

On 30.03.23 06:23, Willy Tarreau wrote:

On Thu, Mar 30, 2023 at 06:16:34AM +0200, Willy Tarreau wrote:

Hi Alex,

On Wed, Mar 29, 2023 at 04:06:10PM +0200, Aleksandar Lazic wrote:

Ping?


thanks for the ping, I missed it a few times when being busy with some
painful bugs in the past. I've pushed it to a topic branch to verify
what it does on the CI for non-linux OS; we might have to add a
"feature cmd" filter in the regtest to check for linux, and I don't
think we directly have this right now (though we could rely on
LINUX_SPLICE for now as a proxy). Or even simpler, we still have
the ability to use "EXCLUDE_TARGETS=freebsd,osx,generic" so I may
adapt your regtest to that as well if it fails on the CI.


Ah so... it passes because we have TCP_INFO on macos as well, and on
Windows we don't run vtest. However the "expect" rule is only for a
status code 200 :-)  I think it would be nice to check for the presence
of digits in these 4 headers. I'll try to do it as time permits but if
you beat me to it I'll take your proposal!


I'm not sure if I get your answer.
Do you need another patch from me?


Damn, I continue to forget about this one :-(  Actually it's extremely
difficult for me to dedicate time to modify stuff that I didn't create
because it's not in my radar.

So from what I'm reading above, the regtest is fake and doesn't test
the presence of digits in the returned value. Could you please correct
it so that it properly verifies that your patch works, and then I'm
fine with merging it.


Okay will take a look and create a new patch.


Thank you!
Willy


Regards
Alex



Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2023-04-28 Thread Aleksandar Lazic

Hi Willy.

On 30.03.23 06:23, Willy Tarreau wrote:

On Thu, Mar 30, 2023 at 06:16:34AM +0200, Willy Tarreau wrote:

Hi Alex,

On Wed, Mar 29, 2023 at 04:06:10PM +0200, Aleksandar Lazic wrote:

Ping?


thanks for the ping, I missed it a few times when being busy with some
painful bugs in the past. I've pushed it to a topic branch to verify
what it does on the CI for non-linux OS; we might have to add a
"feature cmd" filter in the regtest to check for linux, and I don't
think we directly have this right now (though we could rely on
LINUX_SPLICE for now as a proxy). Or even simpler, we still have
the ability to use "EXCLUDE_TARGETS=freebsd,osx,generic" so I may
adapt your regtest to that as well if it fails on the CI.


Ah so... it passes because we have TCP_INFO on macos as well, and on
Windows we don't run vtest. However the "expect" rule is only for a
status code 200 :-)  I think it would be nice to check for the presence
of digits in these 4 headers. I'll try to do it as time permits but if
you beat me to it I'll take your proposal!


I'm not sure if I get your answer.
Do you need another patch from me?


Thanks,
Willy


Regards
Alex



Re: Reproducible ERR_QUIC_PROTOCOL_ERROR with all QUIC-enabled versions (2.6 to latest 2.8-dev)

2023-04-18 Thread Aleksandar Lazic

Hi Bob.

On 18.04.23 17:07, Zakharychev, Bob wrote:
While experimenting with enabling QUIC in HAProxy sitting in front of 
our closed-source application I stumbled upon a reproducible QUIC 
protocol failure/malfunction while accessing specific CSS resource, 
which is served via internal application proxy: accessing it over QUIC 
results either in ERR_QUIC_PROTOCOL_FAILURE in the browser and no 
mention of request in HAProxy log or incomplete resource being download 
and CD-- request termination flags in HAProxy log (and logged request 
looks a bit different from other, successful, H3 requests). Accessing 
the same resource over HTTP/2 works fine. I need help with setting up a 
proper debug session so that I could capture all necessary information 
which may help with fixing this issue: HAProxy internal 
debugging/tracing flags to enable, etc. I don’t want to open a bug on 
GitHub for this and would appreciate if anyone from HAProxy team could 
reach out to me directly so that I could share relevant information and 
attempt to debug under your direction.


In case you use HAProxy Enterprise, you can get in touch via
https://www.haproxy.com/contact-us/ or
https://my.haproxy.com/portal/cust/login


Here are the support options listed.
https://www.haproxy.com/support/support-options/

In case you use the open source version, please run `haproxy -vv` (with
two `v`s).


What is your configuration?
* Include as much configuration as possible, including global and 
default sections.

* Replace confidential data like domain names and IP addresses.
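
For reference, a hedged sketch of what a minimal QUIC-enabled frontend for reproducing such an issue could look like; addresses, certificate path and backend name are illustrative assumptions:

```
frontend fe_quic
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    bind quic4@:443 ssl crt /etc/haproxy/site.pem alpn h3
    # advertise HTTP/3 so browsers switch to QUIC on the next request
    http-response set-header alt-svc "h3=\":443\"; ma=900"
    default_backend be_app
```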



Thanks in advance,

    Vladimir “Bob” Zakharychev


Regards
Alex



Re: Puzzlement : empty field vs. ,field() -m

2023-04-17 Thread Aleksandar Lazic

Hi.

On 18.04.23 00:55, Jim Freeman wrote:

In splitting out fields from req.cook, populated fields work well, but
detecting an unset field has me befuddled:

   acl COOK_META_MISSING  req.cook(cook2hdr),field(3,\#) ! -m found -m str ''

does not detect that a cookie/field is empty ?

Running the attached 'hdrs' script against the attached haproxy.cfg sees :
===
...
cookie: cook2hdr=#
bar: bar
baz: baz
meta: ,bar,baz
foo:
===
when foo: should not be created, and meta: should only have 2 fields.

Am I just getting the idiom/incantation wrong ?

[ stock/current haproxy 2.6 from Debian/Ubuntu LTS backports ]


A `haproxy -vv` is better than guessing which version this is :-)

Looks like the doc does not mention the empty field case.

https://docs.haproxy.org/2.6/configuration.html#7.3.1-field

From the code it looks like the data is set to 0:
https://github.com/haproxy/haproxy/blob/master/src/sample.c#L2432

I would just try '! -m found', but that's untested; I'm pretty sure that some
people on this list have much more experience with testing empty return
values.
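
Another rough, untested idea for the empty-field case, based on the documented
"len" match method (the ACL name is only an example):

```
# untested: match when the 3rd '#'-separated field exists but is empty
acl COOK_META_EMPTY req.cook(cook2hdr),field(3,\#) -m len 0
```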


Regards
Alex



Re: Problems using custom error files with HTTP/2

2023-04-17 Thread Aleksandar Lazic




On 17.04.23 15:08, Willy Tarreau wrote:

On Mon, Apr 17, 2023 at 03:04:05PM +0200, Lukas Tribus wrote:

On Sat, 15 Apr 2023 at 23:08, Willy Tarreau  wrote:


On Sat, Apr 15, 2023 at 10:59:42PM +0200, Willy Tarreau wrote:

Hi Nick,

On Sat, Apr 15, 2023 at 09:44:32PM +0100, Nick Wood wrote:

And here is my configuration - I've slimmed it down to the absolute minimum
to reproduce the problem:

If the back end is down, the custom 503.http page should be served.

This works on HTTP/1.1 but not over HTTP/2:


Very useful, thank you. In fact it's irrelevant to the errorfile but
it's the 503 that is not produced in this case. I suspect that it's
interpreted on the server side as only a retryable connection error
and that if the HTTP/1 client had faced it on its second request it
would have been the same (in H1 there's a special case for the first
request on a connection, that is not automatically retryable, but
after the first one we have the luxury of closing silently to force
the client to retry, something that H2 supports natively).

I'm still trying to figure when this problem appeared, and it looks
like even 2.4.0 did behave like this. I'm still digging.


And indeed, this issue appeared with this commit in 1.9-dev10 4 years ago:

   746fb772f ("MEDIUM: mux_h2: Always set CS_FL_NOT_FIRST for new 
conn_streams.")

So it makes H2 behave like the second and subsequent H1 requests, which are silent
about this. We overlooked this specificity; it would need to be rethought a
little bit I guess.


Even though we had this issue for a long time and nobody noticed, we
should probably not enable H2 on a massive scale with new 2.8 defaults
before this is fixed to avoid silently breaking this error condition.


I totally agree ;-)


Well, I would prefer to keep it on so that such bugs can be 
found much earlier :-).


Jm2c


Willy





Re: Opinions desired on HTTP/2 config simplification

2023-04-15 Thread Aleksandar Lazic

Hi.

On 15.04.23 11:32, Willy Tarreau wrote:

Hi everyone,

I was discussing with Tristan a few hours ago about the widespread
deployment of H2 and H3, with Cloudflare showing that H1 only accounts
for less than 7% of their traffic and H3 getting close to 30% [1],
and the fact that on the opposite yesterday I heard someone say "we
still have not tried H2, so H3..." (!).

Tristan said something along the lines of "if only proxies would enable
it by default by now", which resonated to me like when we decided to
switch some defaults on (keep-alive, http-reuse, threads, etc).

And it's true that at the beginning there was not even a question about
enabling H2 by default on the edge, but nowadays it's as reliable as H1
and used by virtually everyone, yet it still requires admins to know
about this TLS-specific extension called "ALPN" and the exact syntax of
its declaration, in order to enable H2 over TLS, while it's already on
by default for clear traffic.


Is there any experience with the backends and which protocol they use?
As far as I can see, QUIC/h3 is not yet available there; what's the plan to
add QUIC/h3 as a backend protocol?


```
podman run --rm --network host --name haproy-test --entrypoint /bin/bash 
-it haproxy:latest


Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG
```

Do you see any benefit when there is a QUIC end-to-end connection?
Something like:

client - Q/H3 - HAProxy - Q/H3 - Backend


Thus you're seeing me coming with my question: does anyone have any
objection against turning "alpn h2,http/1.1" on by default for HTTP
frontends, and "alpn h3" by default for QUIC frontends, and have a new
"no-alpn" option to explicitly turn off ALPN negotiation on HTTP
frontends e.g. for debugging ? This would mean that it would no longer
be necessary to know the ALPN strings to configure these protocols. I
have not looked at the code but I think it should not be too difficult.
ALPN is always driven by the client anyway so the option states what we
do with it when it's presented, thus it will not make anything magically
fail.


A +1 from me for turning on the new defaults.

This must be highlighted in the documentation, as it could break some 
working setups which have not activated H2 on some listeners for 
specific reasons.
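
For illustration, the change on the bind line would be roughly the following
(just a sketch; the certificate path is an example and "no-alpn" is only the
proposed keyword at this point):

```
# today: H2 over TLS has to be enabled explicitly via ALPN on the bind line
frontend fe_https
    bind :443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1

# with the proposed default, the alpn argument could simply be dropped,
# and the proposed "no-alpn" keyword would restore the old behaviour
frontend fe_https_new_default
    bind :443 ssl crt /etc/haproxy/certs/site.pem
```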



And if we change this default, do you prefer that we do it for 2.8 that
will be an LTS release and most likely to be shipped with next year's
LTS distros, or do you prefer that we skip this one and start with 2.9,
hence postpone to LTS distros of 2026 ?


+1 for 2.8 .


Even if I wouldn't share my feelings, some would consider that I'm
trying to influence their opinion, so I'll share them anyway :-)  I
think that with the status change from "experimental-but-supported" to
"production" for QUIC in 2.8, having to manually and explicitly deal
with 3 HTTP versions in modern configs while the default (h1) only
corresponds to 7% of what clients prefer is probably an indicator that
it's the right moment to simplify these a little bit. But I'm open to
any argument in any direction.


As history shows, a lot of people reuse sample configs, so I would also 
consider adding an example quic+h2 setup to the examples directory, because 
the current quick-test config looks somewhat wrong.


http://git.haproxy.org/?p=haproxy.git;a=blob;f=examples/quick-test.cfg;h=f27eeff432de116132d2df36121356af0938b8a4;hb=HEAD

It would be nice if the package owners of the distributions would also adapt 
their config examples, but this is a decision which is made outside 
of haproxy :-)



It would be nice to be able to decide (and implement a change if needed)
before next week's dev8, so that it leaves some time to collect feedback
before end of May, so please voice in!

Thanks!
Willy

[1] https://radar.cloudflare.com/adoption-and-usage


Regards
Alex




Re: Problems using custom error files with HTTP/2

2023-04-15 Thread Aleksandar Lazic

Hi Nick,

On 15.04.23 19:35, Nick Wood wrote:

Hello all,


I have recently enabled HTTP/2 on our HAProxy server by adding the 
following to the bind line:



alpn h2,http/1.1


Everything appears to be working fine, apart from our custom error pages.

Rather than serving the custom page as before, browsers just report an 
error. In Chrome it's ERR_HTTP2_SERVER_REFUSED_STREAM. In Firefox it's a 
more generic response about the data being invalid.



Here is the content of /etc/haproxy/errorpages/503.http:



[snipp]



I've searched the archives but not found anyone else with this issue - 
apart from someone who didn't have the correct HTTP headers defined at 
the top of their error file - but mine look OK. I've tried using 
HTTP/1.1 instead of HTTP/1.0 and also removing the Connection: close 
header, but nothing makes a difference.



Any clues as to what I'm doing wrong would be much appreciated.


Please can you share the haproxy version `haproxy -vv`.

What is your configuration? Include as much configuration as possible, 
including global and default sections. Replace confidential data like 
domain names and IP addresses.



Thanks,


Nick


Best regards
Alex



Re: Interest in HA Proxy from Sonicwall

2023-04-05 Thread Aleksandar Lazic

Hi Kenny.

On 05.04.23 20:04, Kenny Lederman wrote:

Hi team,

Do you have an account rep assigned to Sonicwall that could help me with 
getting a POC set up?


This is the open-source mailing list; if you want to get in touch with 
the company behind HAProxy, please use this link.


https://www.haproxy.com/contact-us/

Of course, your team can set up the open-source HAProxy themselves; the 
documentation is hosted at this URL.


http://docs.haproxy.org/


Thank you,

Kenny Lederman


Best Regards
Alex


Enterprise Account Manager

(206) 455-6488 - Office

(847) 932-9771 - Cell

kenny.leder...@softchoice.com 





Softchoice 



415 1st Avenue North, Suite 300
Seattle, WA  98109










Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2023-03-29 Thread Aleksandar Lazic

Ping?

On 10.01.23 21:27, Aleksandar Lazic wrote:



On 09.12.22 13:17, Aleksandar Lazic wrote:

Hi.

As I still think that the (Peak) EWMA balancing algorithm ( 
https://github.com/haproxy/haproxy/issues/1570 ) could help to make a 
"better" decision about which server a request should be sent to, here is 
the beginning of the patches.


In any case it would be nice to know the RTT from the backend, IMHO.

Does anybody know how I can "delay/sleep/wait" for the server answer 
to get an RTT which is not 0, as the RTT is currently 0?


Here is the updated patch without the EWMA reference.


Regards
Alex




Re: RFQ HAPROXY SERVER for CTBC Bank

2023-03-29 Thread Aleksandar Lazic

Hi.

On 29.03.23 05:02, Procurement - TTSolution wrote:

Hi Sir/Madam,

Please help to provide quotation below for:

 1. *HAPROXY SERVER – QTY: 1*


As Willy has already written, this list is mainly for the open-source HAProxy.
You can get in touch about the Enterprise version on this page.

https://www.haproxy.com/contact-us/


Thanks & Best Regards,

Najihah


Best regards
Alex



Re: HAProxy CE Docker Debian and Ubuntu images with QUIC

2023-03-20 Thread Aleksandar Lazic

Hi Dinko.

On 19.03.23 19:54, Dinko Korunic wrote:

Dear community,

As previously requested, we have also started building HAProxy CE  for 
2.6, 2.7 and 2.8 branches with QUIC (based on OpenSSL 1.1.1t-quic 
Release 1) built on top of Debian 11 Bullseye and Ubuntu 22.04 Jammy 
Jellyfish base images.


Thank you for the fast build.

Images are being built for only two architectures listed below due to 
build/stability issues (as opposed to Alpine variant, which is also 
built for linux/arm/v6 and linux/arm/v7):

- linux/amd64
- linux/arm64

Images are available at the usual Docker Hub repositories:
- https://hub.docker.com/repository/docker/haproxytech/haproxy-debian-quic
- https://hub.docker.com/repository/docker/haproxytech/haproxy-ubuntu-quic


The corresponding Github repositories with update scripts, Dockerfiles, 
configurations and GA workflows are at the respective places:
- https://github.com/haproxytech/haproxy-docker-debian-quic 

- https://github.com/haproxytech/haproxy-docker-ubuntu-quic 



Let me know if you spot any issues and/or have any problems with these.

As other our haproxytech Docker images, these will auto-rebuild on:
- dataplaneapi releases
- HAProxy CE releases

including also:
- QUICTLS/OpenSSL releases


Kind regards,
D.

--
Dinko Korunic                   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha





Re: HAProxy CE Docker Alpine image with QUIC

2023-03-18 Thread Aleksandar Lazic

Hi Dinko.

On 17.03.23 20:59, Dinko Korunic wrote:

Dear community,

Upon many requests, we have started building HAProxy CE for 2.6, 2.7 and 
2.8 branches with QUIC (based on OpenSSL 1.1.1t-quic Release 1) as 
Docker Alpine 3.17 images.


That's great news :-).

What should be kept in mind is that Alpine's musl libc does not handle TCP 
DNS queries, which limits the answers for DNS queries to ~30 entries.


https://www.linkedin.com/pulse/musl-libc-alpines-greatest-weakness-rogan-lynch/
=> https://twitter.com/richfelker/status/994629795551031296?lang=en

```
My choice not to do TCP in musl's stub resolver was based on an 
interpretation that truncated results are not just acceptable but better 
ux - not only do you save major round-trip delays to DNS but you also 
get a reasonable upper bound on # of addrs in result.


-Rich Felker (via twitter)
```

Any chance to also get a glibc-based image with QUIC?

Regards
Alex


All these are being built for several architectures, namely:
- linux/amd64
- linux/arm/v6
- linux/arm/v7
- linux/arm64

As usual, Docker pull will fetch appropriate image for your architecture 
if it exists.


These images are available at Docker Hub as usual (and they have 
dataplaneapi binary as well):


hub.docker.com (Docker Hub link card)

And sources (scripts, Dockerfiles, GA workflows etc.) are available below:

github.com: haproxytech/haproxy-docker-alpine-quic (HAProxy CE Docker Alpine image 
with QUIC, quictls)




Kind regards,
D.

--
Dinko Korunic                   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha





Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2023-02-16 Thread Aleksandar Lazic

Hi.

Any chance to add this Patch?

Regards
Alex

On 10.01.23 21:27, Aleksandar Lazic wrote:



On 09.12.22 13:17, Aleksandar Lazic wrote:

Hi.

As I still think that the (Peak) EWMA balancing algorithm ( 
https://github.com/haproxy/haproxy/issues/1570 ) could help to make a 
"better" decision about which server a request should be sent to, here is 
the beginning of the patches.


In any case it would be nice to know the RTT from the backend, IMHO.

Does anybody know how I can "delay/sleep/wait" for the server answer 
to get an RTT which is not 0, as the RTT is currently 0?


Here is the updated patch without the EWMA reference.


Regards
Alex




Re: proxy

2023-01-11 Thread Aleksandar Lazic

Hi Adam.

On 12.01.23 01:30, Adam wrote:

Dear Friend
I have a service to broadcast channels and movies over the Internet
by panel iptv
And I have servers that I want to hide the real IP of in order to 
protect them from attacks

It is on the other hand a complaint of abuse
How do you help me with that
I have more than 10 Ubuntu servers
I am waiting for your reply


You can use haproxy for that and there are quite good blog posts about 
protection of backend servers.


https://www.haproxy.com/blog/category/security/

As you have also contacted cont...@haproxy.com, you could get an offer for 
the HAProxy Enterprise product 
https://www.haproxy.com/products/haproxy-enterprise/ .


Regards
Alex



Re: [ANNOUNCE] haproxy-2.8-dev1

2023-01-10 Thread Aleksandar Lazic

Hi Willy.

On 07.01.23 19:49, Willy Tarreau wrote:

Hi Alex,

On Sat, Jan 07, 2023 at 06:31:40PM +0100, Aleksandar Lazic wrote:



On 07.01.23 10:38, Willy Tarreau wrote:

Hi,

HAProxy 2.8-dev1 was released on 2023/01/07. It added 206 new commits
after version 2.8-dev0.


[snipp]

Any chance to add this patch to 2.8?

[PATCH] MINOR: sample: Add bc_rtt and bc_rttvar
https://www.mail-archive.com/haproxy@formilux.org/msg42962.html

What's the plan for this feature request?


We can merge it. I think the reason it's been left rotting is that it
seems from its commit message to be quite strongly tied to the EWMA
stuff and in my opinion it should not. As you mentioned in the message
above, it has plenty of use cases, one of which is simply logging. Some
may want it to be backported just for logging and we don't want to put
such confusing references there. So let's just adjust the commit message
to be more factual about what it does (i.e. provide bc_rtt and bc_rtt_avg
to report the RTT measured over a TCP backend connection) and be done
with it.


That's a good point. I have sent the patch without the EWMA commit message 
in the original mail thread.



Server weight modulation based on smoothed average measurement
https://github.com/haproxy/haproxy/issues/1977

which looks like a prerequisite for

New Balancing algorithm (Peak) EWMA
https://github.com/haproxy/haproxy/issues/1570


I really have no status for all this. Feature requests accumulate faster
than bug reports, and the only case where I create one is to make sure
to dump what I have in mind after a discussion so that I have somewhere
to look for the details when trying to get back to it :-/


Okay, thanks for the explanation.


Cheers,
Willy


Regards
Alex



Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2023-01-10 Thread Aleksandar Lazic



On 09.12.22 13:17, Aleksandar Lazic wrote:

Hi.

As I still think that the (Peak) EWMA balancing algorithm ( 
https://github.com/haproxy/haproxy/issues/1570 ) could help to make a 
"better" decision about which server a request should be sent to, here is 
the beginning of the patches.


In any case it would be nice to know the RTT from the backend, IMHO.

Does anybody know how I can "delay/sleep/wait" for the server answer to 
get an RTT which is not 0, as the RTT is currently 0?


Here is the updated patch without the EWMA reference.


Regards
Alex

From 7610bb7234bd324e06e56732a67bf8a0e65d7dbc Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Fri, 9 Dec 2022 13:05:52 +0100
Subject: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

This Patch adds the fetch sample for backends round trip time.

---
 doc/configuration.txt| 16 ++
 reg-tests/sample_fetches/tcpinfo_rtt.vtc | 39 
 src/tcp_sample.c | 33 
 3 files changed, 88 insertions(+)
 create mode 100644 reg-tests/sample_fetches/tcpinfo_rtt.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c45f0b4b6..e8526de7f 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -18854,6 +18854,22 @@ be_server_timeout : integer
   current backend. This timeout can be overwritten by a "set-timeout" rule. See
   also the "cur_server_timeout".
 
+bc_rtt(<unit>) : integer
+  Returns the Round Trip Time (RTT) measured by the kernel for the backend
+  connection. <unit> is facultative, by default the unit is milliseconds. <unit>
+  can be set to "ms" for milliseconds or "us" for microseconds. If the server
+  connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
+bc_rttvar(<unit>) : integer
+  Returns the Round Trip Time (RTT) variance measured by the kernel for the
+  backend connection. <unit> is facultative, by default the unit is milliseconds.
+  <unit> can be set to "ms" for milliseconds or "us" for microseconds. If the
+  server connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
 be_tunnel_timeout : integer
   Returns the configuration value in millisecond for the tunnel timeout of the
   current backend. This timeout can be overwritten by a "set-timeout" rule. See
diff --git a/reg-tests/sample_fetches/tcpinfo_rtt.vtc b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
new file mode 100644
index 0..f28a2072e
--- /dev/null
+++ b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
@@ -0,0 +1,39 @@
+varnishtest "Test declaration of TCP rtt fetches"
+
+# feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(v2.8-dev1)'"
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+txresp
+}  -start
+
+haproxy h1 -conf {
+  defaults common
+  mode http
+  timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"
+
+  frontend fe from common
+  bind "fd@${feh1}"
+
+  default_backend be
+
+  backend be from common
+
+  http-response set-header x-test1 "%[fc_rtt]"
+  http-response set-header x-test2 "%[bc_rtt]"
+  http-response set-header x-test3 "%[fc_rttvar]"
+  http-response set-header x-test4 "%[bc_rttvar]"
+
+  server s1 ${s1_addr}:${s1_port}
+
+} -start
+
+client c1 -connect ${h1_feh1_sock} {
+txreq -req GET -url /
+rxresp
+expect resp.status == 200
+#expect resp.http.x-test2 ~ " ms"
+} -run
diff --git a/src/tcp_sample.c b/src/tcp_sample.c
index 925b93291..bf0d538ea 100644
--- a/src/tcp_sample.c
+++ b/src/tcp_sample.c
@@ -373,6 +373,34 @@ static inline int get_tcp_info(const struct arg *args, struct sample *smp,
 	return 1;
 }
 
+/* get the mean rtt of a backend/server connection */
+static int
+smp_fetch_bc_rtt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 0))
+		return 0;
+
+	/* By default or if explicitly specified, convert rtt to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+	return 1;
+}
+
+/* get the variance of the mean rtt of a backend/server connection */
+static int
+smp_fetch_bc_rttvar(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 1))
+		return 0;
+
+	/* By default or if explicitly specified, convert rttvar to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) /

Re: [ANNOUNCE] haproxy-2.8-dev1

2023-01-07 Thread Aleksandar Lazic




On 07.01.23 10:38, Willy Tarreau wrote:

Hi,

HAProxy 2.8-dev1 was released on 2023/01/07. It added 206 new commits
after version 2.8-dev0.


[snipp]

Any chance to add this patch to 2.8?

[PATCH] MINOR: sample: Add bc_rtt and bc_rttvar
https://www.mail-archive.com/haproxy@formilux.org/msg42962.html

What's the plan for this feature request?

Server weight modulation based on smoothed average measurement
https://github.com/haproxy/haproxy/issues/1977

which looks like a prerequisite for

New Balancing algorithm (Peak) EWMA
https://github.com/haproxy/haproxy/issues/1570

regards
alex



Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2022-12-14 Thread Aleksandar Lazic

Hi,

Any feedback to that patch?

On 09.12.22 13:17, Aleksandar Lazic wrote:

Hi.

As I still think that the (Peak) EWMA balancing algorithm ( 
https://github.com/haproxy/haproxy/issues/1570 ) could help to make a 
"better" decision about which server a request should be sent to, here is 
the beginning of the patches.


In any case it would be nice to know the RTT from the backend, IMHO.

Does anybody know how I can "delay/sleep/wait" for the server answer to 
get an RTT which is not 0, as the RTT is currently 0?


Regards
Alex




[PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2022-12-09 Thread Aleksandar Lazic

Hi.

As I still think that the (Peak) EWMA balancing algorithm ( 
https://github.com/haproxy/haproxy/issues/1570 ) could help to make a 
"better" decision about which server a request should be sent to, here is 
the beginning of the patches.


In any case it would be nice to know the RTT from the backend, IMHO.

Does anybody know how I can "delay/sleep/wait" for the server answer to 
get an RTT which is not 0, as the RTT is currently 0?


Regards
Alex

From 7610bb7234bd324e06e56732a67bf8a0e65d7dbc Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Fri, 9 Dec 2022 13:05:52 +0100
Subject: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

To be able to implement "Balancing algorithm (Peak) EWMA" is it
necessary to know the round trip time to the backend.

This Patch adds the fetch sample for the backend server.

Part of GH https://github.com/haproxy/haproxy/issues/1570

---
 doc/configuration.txt| 16 ++
 reg-tests/sample_fetches/tcpinfo_rtt.vtc | 39 
 src/tcp_sample.c | 33 
 3 files changed, 88 insertions(+)
 create mode 100644 reg-tests/sample_fetches/tcpinfo_rtt.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c45f0b4b6..e8526de7f 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -18854,6 +18854,22 @@ be_server_timeout : integer
   current backend. This timeout can be overwritten by a "set-timeout" rule. See
   also the "cur_server_timeout".
 
+bc_rtt(<unit>) : integer
+  Returns the Round Trip Time (RTT) measured by the kernel for the backend
+  connection. <unit> is facultative, by default the unit is milliseconds. <unit>
+  can be set to "ms" for milliseconds or "us" for microseconds. If the server
+  connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
+bc_rttvar(<unit>) : integer
+  Returns the Round Trip Time (RTT) variance measured by the kernel for the
+  backend connection. <unit> is facultative, by default the unit is milliseconds.
+  <unit> can be set to "ms" for milliseconds or "us" for microseconds. If the
+  server connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
 be_tunnel_timeout : integer
   Returns the configuration value in millisecond for the tunnel timeout of the
   current backend. This timeout can be overwritten by a "set-timeout" rule. See
diff --git a/reg-tests/sample_fetches/tcpinfo_rtt.vtc b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
new file mode 100644
index 0..f28a2072e
--- /dev/null
+++ b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
@@ -0,0 +1,39 @@
+varnishtest "Test declaration of TCP rtt fetches"
+
+# feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(v2.8-dev1)'"
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+txresp
+}  -start
+
+haproxy h1 -conf {
+  defaults common
+  mode http
+  timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"
+
+  frontend fe from common
+  bind "fd@${feh1}"
+
+  default_backend be
+
+  backend be from common
+
+  http-response set-header x-test1 "%[fc_rtt]"
+  http-response set-header x-test2 "%[bc_rtt]"
+  http-response set-header x-test3 "%[fc_rttvar]"
+  http-response set-header x-test4 "%[bc_rttvar]"
+
+  server s1 ${s1_addr}:${s1_port}
+
+} -start
+
+client c1 -connect ${h1_feh1_sock} {
+txreq -req GET -url /
+rxresp
+expect resp.status == 200
+#expect resp.http.x-test2 ~ " ms"
+} -run
diff --git a/src/tcp_sample.c b/src/tcp_sample.c
index 925b93291..bf0d538ea 100644
--- a/src/tcp_sample.c
+++ b/src/tcp_sample.c
@@ -373,6 +373,34 @@ static inline int get_tcp_info(const struct arg *args, struct sample *smp,
 	return 1;
 }
 
+/* get the mean rtt of a backend/server connection */
+static int
+smp_fetch_bc_rtt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 0))
+		return 0;
+
+	/* By default or if explicitly specified, convert rtt to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+	return 1;
+}
+
+/* get the variance of the mean rtt of a backend/server connection */
+static int
+smp_fetch_bc_rttvar(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 1))
+		return 0;
+
+	/* By default or if explicitly specified, convert rttvar to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0]

Re: Haproxy send-proxy probes error

2022-11-23 Thread Aleksandar Lazic
Hi.

There is already a bug entry in the Apache Bugzilla from 2019 about that message.

https://bz.apache.org/bugzilla/show_bug.cgi?id=63893

Regards
Alex

23.11.2022 21:36:26 Marcello Lorenzi :

> Hi All,
> we use haproxy 2.2.17-dd94a25 in our development environment and we configure 
> a backend with proxy protocol v2 to permit the source IP forwarding to a TLS 
> backend server. All the configuration works fine but we notice this error 
> reported on backend Apache error logs:
> 
> AH03507: RemoteIPProxyProtocol: unsupported command 20
> 
> We configure the options check-send-proxy on backend probes but the issue 
> persists. 
> 
> Is it possible to remove this persistent error?
> 
> Thanks,
> Marcello



Re: Rate Limit a specific HTML request

2022-11-22 Thread Aleksandar Lazic

Hi.

On 22.11.22 23:19, Branitsky, Norman wrote:

A "computationally expensive" request is a request sent to our Public Search
service - no login required so it seems to be the target of abuse.
For example:
https:///datamart/searchByName.do?anchor=169a72e.0


Okay, let me rephrase your question.

How can an IP be blocked which creates a request that takes
$too_much_time to respond?

Where could $too_much_time be defined?
Could it be the "timeout server ..." config parameter?

Could "%Tr" or "%TR" from the logformat be used for that?
https://docs.haproxy.org/2.6/configuration.html#8.2.6

Or should it be based on the request getting a 504 internally?

Idea:

backend block_bad_client
  stick-table  type ip size 100k expire 30s store http_req_rate(10s)
  http-request track-sc0 src unless { $too_much_time }

and call the table block_bad_client in the frontend config.

Is this what you would like to do?

I'm not sure if this is possible with HAProxy.
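
That said, if the goal is only to apply a stricter limit to that single
expensive URL, a rough and untested sketch could look like this (it reuses the
URL from your first mail; the table name and the limit of 10 requests per 10s
are arbitrary):

```
# untested sketch: lines added to the existing frontend, tracking the
# expensive URL in its own table with a much lower limit
frontend fe_main
    acl expensive_req path_beg /datamart/searchByName.do
    http-request track-sc1 src table per_url_limit if expensive_req
    http-request deny deny_status 429 if expensive_req { sc_http_req_rate(1,per_url_limit) gt 10 }

backend per_url_limit
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
```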

Regards
Alex


Norman Branitsky
Senior Cloud Architect
P: 416-916-1752

-----Original Message-
From: Aleksandar Lazic 
Sent: Tuesday, November 22, 2022 4:27 PM
To: Branitsky, Norman 
Cc: HAProxy 
Subject: Re: Rate Limit a specific HTML request

Hi.

On 22.11.22 21:57, Branitsky, Norman wrote:

I have the following "generic" rate limit defined - 150 requests in
10s from the same IP address:

  stick-table  type ip size 100k expire 30s store
http_req_rate(10s)
  http-request track-sc0 src unless { src -f
/etc/CONFIG/haproxy/cidr.lst }
  http-request deny deny_status 429 if { sc_http_req_rate(0) gt 150
}

Is it possible to rate limit a specific "computationally expensive"
HTML request from the same IP address to a much smaller number?


What do you define as a "computationally expensive" request?

Maybe you could draw a bigger picture and tell us which version of HAProxy 
you use.

The upcoming 2.7 also has a "bandwidth limitation" feature; maybe this could 
help to solve your issue.
https://docs.haproxy.org/dev/configuration.html#9.7

HTML is a markup language, therefore I think you actually want to restrict 
HTTP requests/responses, don't you?

https://www.rfc-editor.org/rfc/rfc1866


*Norman Branitsky*
Senior Cloud Architect
Tyler Technologies, Inc.


Regards
Alex


P: 416-916-1752
C: 416.843.0670
http://www.tylertech.com
Tyler Technologies






Re: Rate Limit a specific HTML request

2022-11-22 Thread Aleksandar Lazic

Hi.

On 22.11.22 21:57, Branitsky, Norman wrote:
I have the following "generic" rate limit defined - 150 requests in 10s 
from the same IP address:


 stick-table  type ip size 100k expire 30s store http_req_rate(10s)
 http-request track-sc0 src unless { src -f 
/etc/CONFIG/haproxy/cidr.lst }

 http-request deny deny_status 429 if { sc_http_req_rate(0) gt 150 }

Is it possible to rate limit a specific "computationally expensive" HTML 
request from the same IP address to a much smaller number?


What do you define as a "computationally expensive" request?

Maybe you could draw a bigger picture and tell us which version of
HAProxy you use.

The upcoming 2.7 also has a "bandwidth limitation" feature; maybe this 
could help to solve your issue.

https://docs.haproxy.org/dev/configuration.html#9.7

HTML is a markup language, therefore I think you actually want to restrict
HTTP requests/responses, don't you?

https://www.rfc-editor.org/rfc/rfc1866


*Norman Branitsky*
Senior Cloud Architect
Tyler Technologies, Inc.


Regards
Alex


P: 416-916-1752
C: 416.843.0670
www.tylertech.com
Tyler Technologies 





Re: How to return 429 Status Code instead of 503

2022-11-17 Thread Aleksandar Lazic
hi.

but there is a 429 error code in the source.

https://git.haproxy.org/?p=haproxy.git;a=search;h=HEAD;st=grep;s=HTTP_ERR_429

As you haven't written which version you use, maybe you can use the latest 2.6 
version and give the error code 429 a chance :-)
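
Jarno's errorfile idea from the quoted mail below could be sketched like this
(untested; the backend name, paths and file body are only illustrative). The
error file is returned verbatim, so its status line can say 429 even though it
is attached to the 503 event:

```
# untested sketch: reuse the 503 error event, but serve a file whose
# status line says 429
backend be_app
    errorfile 503 /etc/haproxy/errors/429.http

# hypothetical contents of /etc/haproxy/errors/429.http:
#
#   HTTP/1.0 429 Too Many Requests
#   Cache-Control: no-cache
#   Connection: close
#   Content-Type: text/html
#
#   <html><body><h1>429 Too Many Requests</h1></body></html>
```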

regards
alex

17.11.2022 16:29:02 Chilaka Ramakrishna :

> Thanks Jarno, for the reply.
> 
> But i don't think this would work for me, I just want to change the status 
> code (return 429 instead of 503) that i can return, if queue timeout occurs 
> for a request..
> 
> Please confirm, if this is possible or this sort of provision is even exposed 
> by HAP.
> 
> On Thu, Nov 17, 2022 at 12:43 PM Jarno Huuskonen  
> wrote:
>> Hello,
>> 
>> On Tue, 2022-11-08 at 09:30 +0530, Chilaka Ramakrishna wrote:
>>> On queue timeout, currently HAProxy throws 503, But i want to return 429,
>>> I understand that 4xx means a client problem and client can't help here.
>>> But due to back compatibility reasons, I want to return 429 instead of
>>> 503. Is this possible ?
>> 
>> errorfile 503 /path/to/429.http
>> (http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#4-errorfile)
>> 
>> Or maybe it's possible with http-error
>> (http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#http-error)
>> 
>> -Jarno
>> 



Re: HAPROXYU (apps) -

2022-11-07 Thread Aleksandar Lazic

Dear Carolina.

Please get in touch with the HAProxy company for an offer.
https://www.haproxy.com/contact-us/

This mailing list is for the open-source HAProxy.

Regards
Alex

On 07.11.22 13:06, Coco, Carolina wrote:

Hi team,

Could you please send us an offer for the marked in yellow?, its for one 
customer
of us.

[embedded screenshot omitted]

Thanks

Carolina Coco
Inside Sales
Direct:  +34 91 598 1406
Mobile:+34 649837471
Email: carolina.coco @softwareone.com
SoftwareONE Spain, S.A.
c/ Via de los Poblados 3, Edificio 4B, 1ªPlanta
28033 Madrid
España
https://www.softwareone.com/es







Re: dsr and haproxy

2022-11-04 Thread Aleksandar Lazic

Hi.

On 04.11.22 12:24, Szabo, Istvan (Agoda) wrote:

Hi,

Is there anybody successfully configured haproxy and dsr?


Well, maybe this blog post is a good starting point.

https://www.haproxy.com/blog/layer-4-load-balancing-direct-server-return-mode/

Regards
Alex


Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---


This message is confidential and is for the sole use of the intended 
recipient(s).
It may also be privileged or otherwise protected by copyright or other legal 
rules.
If you have received it by mistake please let us know by reply email and delete 
it
from your system. It is prohibited to copy this message or disclose its content 
to
anyone. Any confidentiality or privilege is not waived or lost by any mistaken
delivery or unauthorized disclosure of the message. All messages sent to and 
from
Agoda may be monitored to ensure compliance with company policies, to protect 
the
company's interests and to remove potential malware. Electronic messages may be
intercepted, amended, lost or deleted, or contain viruses.




Re: Two frontends with the same IP and Port

2022-10-25 Thread Aleksandar Lazic

Hi Roberto.

On 25.10.22 17:01, Roberto Carna wrote:

Sorry, I want two different backends with same IP/port and different
SSL options as follow, and the same SSL wildcard certificate:

# Frontend 1 with certain SSL options
frontend Web1
bind 10.10.1.1:443 ssl crt /root/ssl/ no-sslv3 no-tlsv10 no-tlsv11
no-tls-tickets force-tlsv12
acl url_web1 hdr_dom(host) -i www1.example.com
use_backend Server1  if url_web1

# Frontend 2 with any SSL options
frontend Web2
bind 10.10.1.1:443 ssl crt /root/ssl/
acl url_web2 hdr_dom(host) -i www2.example.com
use_backend Server2  if url_web2

I made the above configuration, but sometimes the web traffic doesn't
reach the second server, until a browser refresh.


I think you could use this option for your setup.
https://docs.haproxy.org/2.6/configuration.html#5.1-crt-list
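
A rough, untested sketch of how that could look (the crt-list path, the
wildcard certificate name and the chosen options are only examples):

```
# untested sketch: one bind line, per-SNI TLS options via a crt-list
frontend Web
    bind 10.10.1.1:443 ssl crt-list /etc/haproxy/crt-list.txt
    use_backend Server1 if { hdr_dom(host) -i www1.example.com }
    use_backend Server2 if { hdr_dom(host) -i www2.example.com }

# hypothetical /etc/haproxy/crt-list.txt (same wildcard cert, different options):
#
#   /root/ssl/wildcard.pem [force-tlsv12 no-tls-tickets] www1.example.com
#   /root/ssl/wildcard.pem www2.example.com
```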

Hth
Alex


Special thanks!

El mar, 25 oct 2022 a las 10:16, Roberto Carna
() escribió:


Dear, I have a HAproxy server with two different frontends with the
same IP and port, both pointing to different backends, as follow:

frontend Web1
bind 10.10.1.1:443 ssl crt /root/ssl/ no-sslv3 no-tlsv10 no-tlsv11
no-tls-tickets force-tlsv12
acl url_web1 hdr_dom(host) -i www1.example.com
use_backend Server1  if url_web1

frontend Web2
bind 10.10.1.1:443 ssl crt /root/ssl/ no-sslv3 no-tlsv10 no-tlsv11
no-tls-tickets force-tlsv12
acl url_web2 hdr_dom(host) -i www2.example.com
use_backend Server2  if url_web2

If somebody goes to www1.example.com he enters to the first frontend,
and if somebody goes to www2.example.com he enters to the second
frontend.

Is this configuration OK, or does it have any errors???

Thanks a lot!






Re: I can't disable TLS v1.1 from Internet

2022-10-24 Thread Aleksandar Lazic

Hi Roberto.

On 24.10.22 03:21, Roberto Carna wrote:

Dear, I have this scenario:

Internet --> HAproxy Frontend --> HAproxy Backend --> Web servers

HAproxy version 1.5.8 in frontend (disabling protocols in the backend
section connected to HAProxy backend):

server HA-Backend 172.20.20.1:443 ssl verify none ciphers
EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!AES256+ECDHE:!AES256+DHE
no-tlsv11 no-tlsv10 no-sslv3

HAproxy version 1.5.8 in backend (disabling protocols in the backend
section connected to web server) -->

server WEB01 10.12.12.1:443 ssl verify none ciphers
DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES256-GCM-SHA384:AES256-SHA256:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!AES256+ECDHE:!AES256+DHE
cookie s1 no-tlsv11 no-tlsv10 no-sslv3

server WEB02 10.12.12.2:443 ssl verify none ciphers
DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES256-GCM-SHA384:AES256-SHA256:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!AES256+ECDHE:!AES256+DHE
cookie s2 no-tlsv11 no-tlsv10 no-sslv3

Web Servers IIS (supporting TLS 1.0, TLS 1.1 and TLS 1.2)

As it is impossible to disable TLS 1.0 and TLS 1.1 from the IIS web
servers for specific functionality reasons (the web administrator
doesn't let me do this), I suppose I can disable TLS 1.0 and TLS 1.1
from the HAProxy frontend and backend.

But after that, I executed a test from SSL Labs from Qualys, and it
said TLS 1.1 is still enabled.

What can be the reason because the HAProxy frontend can't disable TLS
1.1 in connections from the Internet ?

Is anything wrong?


Well, you have changed the server lines, not the frontend config.

The flow is like this

INet => HAProxy Frontend
 \
  Frontend
  \
   Backend => HAproxy Backend

SSL Labs tests the frontend config of the HAProxy frontend.

What is the config for the frontend of the HAProxy Frontend?
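
For reference, the protocol restrictions have to sit on the bind line of the
Internet-facing frontend, roughly like this (untested sketch; the frontend
name and the certificate path are only examples):

```
# untested sketch: disable the old protocols where SSL Labs tests them,
# i.e. on the listening side of the Internet-facing frontend
frontend fe_internet
    bind 10.10.1.1:443 ssl crt /root/ssl/site.pem no-sslv3 no-tlsv10 no-tlsv11
```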

BTW: HAProxy 1.5 isn't maintained anymore since 2020-01-10.
https://www.haproxy.org/

You can get a more recent version from this repos.
https://github.com/iusrepo?q=hap=all==
https://github.com/DBezemer/rpm-haproxy


Thanks in advance, greetings!!!


Regards
Alex



Re: HA Proxy License

2022-10-07 Thread Aleksandar Lazic
Hi John.

I suggest getting in touch with the HAProxy company via this form.

https://www.haproxy.com/contact-us/

best regards
alex

07.10.2022 17:55:42 John Bowling (CE CEN) :

> Hello,
> 
> What are the costs for the license or is there a subscription for license?
> 
> *John L. Bowling (JB)*
> 
> Senior Team Leader
> 
> *IES – Network Engineering & Security (NES)*
> 
> *Network Operational Readiness (NOC)*
> 
> Whole Foods Market – Global Support (CEN)
> 
> An Amazon Company
> 
> 1011 W 5th  Street, 4th floor
> 
> Austin, Texas USA 78703
> 
> Mobile: +1-512-221-3780
> 
> Desk: +1-512.542.0797
> 
> Email: john.bowl...@wholefoods.com
> 
> www.wholefoodsmarket.com[http://www.wholefoodsmarket.com/]
> 
> Four principles: customer obsession rather than competitor focus, passion for 
> invention, commitment to operational excellence, and long-term thinking
> 
>  
> 
> For WFM technical support please call Global Help Desk at 1-877-923-4263  
>  
> 
>  Monday-Friday 6:00am-9:00pm CST Sat & Sun: 8:00am-4:00pm
> 
> For service request, open up WFM Internal ticket in OrchardNow 
> OrchardNow[https://wfmprod.service-now.com/nav_to.do?uri=%2Fhome.do%3F]
> 
>  
> 
> This email contains proprietary and confidential material for the sole use of 
> the intended recipient. Any review, use, distribution or disclosure by others 
> without the permission of the sender is strictly prohibited.  If you are not 
> the intended recipient (or authorized to receive for the recipient), please 
> contact the sender by reply email and delete all copies of this message. 
> Thank you.
> 


Re: http-response option in frontend section or backend section?

2022-10-03 Thread Aleksandar Lazic

Hi.

On 03.10.22 16:29, Roberto Carna wrote:

Dear, I have a HAProxy with several web applications but I have to
solve the cookie without a secure flag problem in just one web
application.

Do I have to define the "http-response replace header" option in the
frontend section or in the backend section of haproxy.cfg ? Or is it
the same ?

Because if I define the option in the frontend section, I modify the
cookie behaviour of all the applications, and this is not what I want.

Thanks a lot!!!


I would say, when the application has its own backend config, then modify 
the response there.
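
A rough, untested sketch of what that could look like (the backend name and
server address are only examples):

```
# untested sketch: add the Secure flag only in the backend of that one application
backend app_with_insecure_cookie
    server app1 192.0.2.10:8080
    acl has_secure res.hdr(Set-Cookie),lower -m sub secure
    http-response replace-header Set-Cookie (.*) "\1; Secure" if !has_secure
```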


Regards
Alex



Re: LibreSSL 3.6.0 QUIC support with HAProxy 2.7

2022-09-14 Thread Aleksandar Lazic

Hi William.

On 14.09.22 18:50, William Lallemand wrote:

Hello List,

We've just finished porting HAProxy to the next LibreSSL version,
which implements the quicTLS API.


Wow great news.


For those interested this is how you are supposed to compile everything:

The libreSSL library:

$ git clone https://github.com/libressl-portable/portable libressl
$ cd libressl
$ ./autogen.sh

// The QUIC API is not public and not available in the shared
// library for now, you have to link with the .a
$ ./configure --prefix=/opt/libressl-quic/ --disable-shared 
CFLAGS=-DLIBRESSL_HAS_QUIC
$ make V=1
$ sudo make install

HAProxy:

$ git clone http://git.haproxy.org/git/haproxy.git/
$ cd haproxy
$ make TARGET=linux-glibc USE_OPENSSL=1 USE_QUIC=1 
SSL_INC=/opt/libressl-quic/include/ \
   SSL_LIB=/opt/libressl-quic/lib/ DEFINE='-DLIBRESSL_HAS_QUIC'


$ ./haproxy -vv
HAProxy version 2.7-dev5-7eeef9-91 2022/09/14 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Running on: Linux 5.15.0-47-generic #51-Ubuntu SMP Thu Aug 11 07:51:15 
UTC 2022 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -ggdb3 -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference 
-fwrapv -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare 
-Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers 
-Wno-cast-function-type -Wno-string-plus-int -Wno-atomic-alignment 
-DLIBRESSL_HAS_QUIC
  OPTIONS = USE_PCRE=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1 USE_SYSTEMD=1 
USE_QUIC=1
  DEBUG   = -DDEBUG_MEMORY_POOLS -DDEBUG_STRICT

Feature list : +EPOLL -KQUEUE +NETFILTER +PCRE -PCRE_JIT -PCRE2 
-PCRE2_JIT +POLL +THREAD -PTHREAD_EMULATION +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -ENGINE 
+GETADDRINFO +OPENSSL +LUA +ACCEPT4 -CLOSEFROM +ZLIB -SLZ +CPU_AFFINITY +TFO 
+NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER +PRCTL 
-PROCCTL +THREAD_DUMP -EVPORTS -OT +QUIC -PROMEX -MEMORY_PROFILING

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, 
default=8).
Built with OpenSSL version : LibreSSL 3.6.0
Running on OpenSSL version : LibreSSL 3.6.0
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3


How about changing this to something like

Built with SSL Library version
Running on SSL Library version
SSL library supports ...

Because it's confusing :-)

Built with OpenSSL version : LibreSSL 3.6.0

I also thought about something like

Built with (OpenSSL|LibreSSL) version : LibreSSL 3.6.0

But this looks ugly to me.



Built with Lua version : Lua 5.4.3
Built with network namespace support.
Support for malloc_trim() is enabled.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes
Built with gcc compiler version 11.2.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
       quic : mode=HTTP  side=FE     mux=QUIC  flags=HTX|NO_UPG|FRAMED
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : none

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
   

Re: Defining two FTP connections pointing to the same server

2022-08-18 Thread Aleksandar Lazic

Hi.

On 18.08.22 20:40, Roberto Carna wrote:

Dear all, I have to change my haproxy.cfg file in order to enable two
FTP connections to the same server, with these requirements:

FTP server IP: 10.10.1.10

1st FTP service:
FTP Control: port 21
FTP Data: port 11000 to 11010

2nd FTP service:
FTP Control: port 2100
FTP Data: 11000 to 11010 (same range as the first service)

In the haproxy.cfg I tried this:

listen ftp-control-1
 bind 10.10.1.1:21
 mode tcp
 option tcplog
  server FTP 10.10.1.10:21 check 21

listen ftp-control-2
 bind 10.10.1.1:2100
 mode tcp
 option tcplog
  server FTP 10.10.1.10:21 check 2100

listen ftp-data-1-2   <--- The same config for FTP data because they
use the same port range
 bind 10.10.1.1:11000-11010
 mode tcp
 option tcplog
 server FTP 10.10.1.10 check

But it doesn't work.


What do you have in the logs, or in the output at start time?
I assume you will get something similar to "port already in use".


Is my config correct or not?

Is it correct if I use the same FTP data port range for both services
on the same server?


Well the "bind" implies that haproxy bind to that ports, you should see 
this with "netstat -tulpn".


I think you will need different listening (bind) ports for the second 
server.


See https://docs.haproxy.org/2.6/configuration.html#4.2-bind


Thanks and greetings!

Robert


Regards
Alex



Re: 3rd party modules support

2022-08-18 Thread Aleksandar Lazic

Hi.

On 17.08.22 16:54, Pavel Krestovozdvizhenskiy wrote:
Does HAProxy support of 3rd party modules? Not LUA scripts but compiled 
modules. Something like modules in nginx. I've read the documentation 
and did not found clear answer.


Not as far as I know; a more detailed answer can be found here.
https://www.mail-archive.com/haproxy@formilux.org/msg12985.html

What you can do is add a filter addon similar to the current ones at
https://github.com/haproxy/haproxy/tree/master/addons and build HAProxy 
with that filter.



Thanks. Paul


Regards
Alex



Re: Sending CORS headers with HAProxy-generated error responses

2022-08-12 Thread Aleksandar Lazic

Hi.

On 12.08.22 14:48, Eric Johanson wrote:

Thanks for the reply.  It sounds like for my situation, I want to add some CORS headers when 
fc_err > 0 perhaps using the "set-header" action of http-response.  (Your 
example uses the set-status action, which I don't think solves my problem of generating CORS 
headers for internal HAProxy connection errors).

https://docs.haproxy.org/2.6/configuration.html#4.2-http-response%20add-header

So maybe something like:
http-response add-header Access-Control-Allow-Origin "https://example.com" if { fc_err gt 0 }
# ... more like this for the other required CORS headers

I haven't tried this, but does it seem like it will accomplish what I described 
in my original post?


I would say give it a try and see if it works.
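
One more untested angle, in case http-response turns out not to be evaluated
for the internally generated errors: http-after-response rules are also
applied to responses produced by HAProxy itself. Keying on the status code is
coarse (it would also touch errors relayed from the backend), and all names
and values below are only illustrative:

```
frontend fe_api
    bind :8443 ssl crt /etc/haproxy/api.pem
    default_backend be_app
    # only touch error statuses; note this also matches errors coming from the backend
    http-after-response set-header Access-Control-Allow-Origin  "https://example.com" if { status 500 502 503 504 }
    http-after-response set-header Access-Control-Allow-Methods "GET, POST, OPTIONS"   if { status 500 502 503 504 }

backend be_app
    server app1 192.0.2.10:8080
```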

Regards
Alex


-Original Message-
From: Aleksandar Lazic 
Sent: Friday, August 12, 2022 6:45 AM
To: Eric Johanson 
Cc: haproxy@formilux.org
Subject: Re: Sending CORS headers with HAProxy-generated error responses

Hi Eric.

On 11.08.22 21:59, Eric Johanson wrote:

When HAProxy generates an HTTP 500 error (say because our servers are
down), then HAProxy does not send any CORS information.  Because of
this, the HTTP 500 responses do not arrive at our web application because they 
are blocked by the browser.

To solve this, I want to add the relavent CORS headers to these
HAProxy-generated error responses.  And I want to add CORS headers
ONLY to such error responses (the error pages specified by the
"errorfile" directive in the haproxy.cfg file).  Our middleware server
already adds appropriate CORS headers for all responses that come from our 
middleware, including error responses that come from our middleware.  But when 
HAProxy sends an error response that it generates internally, there are no CORS 
headers.

I see there are various solutions for adding CORS headers to all HTTP traffic 
from front-ends (e.g.
https://github.com/haproxytech/haproxy-lua-cors).  But I want to add
CORS headers ONLY to the internal HAProxy-generated error responses.

Is there a way to do this, and if so how?


Well I'm not sure if it's possible but I would try to use fc_err and set 
errorfile.
https://docs.haproxy.org/2.6/configuration.html#7.3.3-fc_err
https://docs.haproxy.org/2.6/configuration.html#3.8

Example, untested.
http-response set-status 500 if { fc_err gt 0 }

Which error is the right one can be seen here 
https://docs.haproxy.org/2.6/configuration.html#7.3.3-fc_err_str

Jm2c
Regards
Alex


Thank you.
Confidentiality Notice: The information in this email is confidential. It is 
intended only for the use of the named recipient. Any use or disclosure of this 
email is strictly prohibited, if you are not the intended recipient. In case 
you have received this email in error, please notify us immediately and then 
delete this email. To help reduce unnecessary paper wastage, we would request 
that you do not print this email unless it is really necessary - thank you.






Re: Sending CORS headers with HAProxy-generated error responses

2022-08-12 Thread Aleksandar Lazic

Hi Eric.

On 11.08.22 21:59, Eric Johanson wrote:

When HAProxy generates an HTTP 500 error (say because our servers are down), 
then HAProxy does not send any CORS
information.  Because of this, the HTTP 500 responses do not arrive at our web 
application because they are
blocked by the browser.

To solve this, I want to add the relavent CORS headers to these 
HAProxy-generated error responses.  And I want to
add CORS headers ONLY to such error responses (the error pages specified by the 
"errorfile" directive in the
haproxy.cfg file).  Our middleware server already adds appropriate CORS headers 
for all responses that come from
our middleware, including error responses that come from our middleware.  But 
when HAProxy sends an error response
that it generates internally, there are no CORS headers.

I see there are various solutions for adding CORS headers to all HTTP traffic 
from front-ends (e.g.
https://github.com/haproxytech/haproxy-lua-cors).  But I want to add CORS 
headers ONLY to the internal
HAProxy-generated error responses.

Is there a way to do this, and if so how?


Well I'm not sure if it's possible but I would try to use fc_err and set 
errorfile.

https://docs.haproxy.org/2.6/configuration.html#7.3.3-fc_err
https://docs.haproxy.org/2.6/configuration.html#3.8

Example, untested.
http-response set-status 500 if { fc_err gt 0 }

Which error is the right one can be seen here
https://docs.haproxy.org/2.6/configuration.html#7.3.3-fc_err_str

Jm2c
Regards
Alex


Thank you.
Confidentiality Notice: The information in this email is confidential. It is 
intended only for the use of the named recipient. Any use or disclosure of this 
email is strictly prohibited, if you are not the intended recipient. In case 
you have received this email in error, please notify us immediately and then 
delete this email. To help reduce unnecessary paper wastage, we would request 
that you do not print this email unless it is really necessary - thank you.





Re: [PATCH] DOC: add info about ssl-engine for 2.6

2022-07-27 Thread Aleksandar Lazic

Hi Tim.

Thank you for your feedback.
Attached is the new version.

regards
Alex

On 16.06.22 15:16, Tim Düsterhus wrote:

Alex,


From 85bcc5ea26d7c1f468dbbf6a10b33bc9f79da819 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Wed, 15 Jun 2022 23:52:30 +0200
Subject: [PATCH] DOC: add info about ssl-engine for 2.6

In the announcment of 2.6 is mentioned that the openssl engine


There's a typo here: announcement.


is not enabled by default.

This patch add the information to the configuration.txt.

Is related to #1752


Please explicitly mention 'GitHub issue':

This is related to GitHub Issue #1752.



Should be backported to 2.6
---
 doc/configuration.txt | 4 
 1 file changed, 4 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 183710c35..d0e74e0fb 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -2666,6 +2666,10 @@ ssl-engine <name> [algo <comma-separated list of algorithms>]

   openssl configuration file uses:
   https://www.openssl.org/docs/man1.0.2/apps/config.html

+  Since version 2.6 is the ssl-engine not enabled in the default 
build. In case


That first sentence sounds like a German sentence structure to me that 
is not correct English grammar. Suggestion that also unifies the wording 
with other places that refer to the USE_* flags:

Version 2.6 disabled the support for engines in the default build. This 
option is only available when HAProxy has been compiled with USE_ENGINE.

+  that the ssl-engine is requierd can HAProxy be rebuild with 
USE_ENGINE=1


Typo: required


+  build flag.
+
 ssl-mode-async
   Adds SSL_MODE_ASYNC mode to the SSL context. This enables 
asynchronous TLS
   I/O operations if asynchronous capable SSL engines are used. The 
current

--
2.25.1


Best regards
Tim Düsterhus
From b0991e2f011d8fbbde3fc3a3e4fcc4a956e41064 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Wed, 27 Jul 2022 15:24:54 +0200
Subject: [PATCH] DOC: add info about ssl-engine for 2.6

In the announcement of 2.6 it is mentioned that the openssl engine
is not enabled by default.

This patch adds the information to configuration.txt.

This is related to GitHub Issue #1752.

Should be backported to 2.6.
---
 doc/configuration.txt | 5 +
 1 file changed, 5 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c348a08de..35d58f29c 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -2680,6 +2680,11 @@ ssl-engine <name> [algo <comma separated list of algorithms>]
   openssl configuration file uses:
   https://www.openssl.org/docs/man1.0.2/apps/config.html
 
+  Version 2.6 disabled the support for engines in the default build. This
+  option is only available when HAProxy has been compiled with USE_ENGINE.
+  If the ssl-engine is required, HAProxy can be rebuilt with the
+  USE_ENGINE=1 build flag.
+
 ssl-mode-async
   Adds SSL_MODE_ASYNC mode to the SSL context. This enables asynchronous TLS
   I/O operations if asynchronous capable SSL engines are used. The current
-- 
2.25.1
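
As an aside, a hedged illustration of the keyword this patch documents; the engine name 
and algorithm list are only examples, and the line parses only in a build made with 
USE_ENGINE=1:

```
# Illustrative only: requires a build with USE_ENGINE=1; the engine name and
# algorithm list are examples, not recommendations.
global
    ssl-engine qat algo RSA,DSA
    ssl-mode-async
```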



Re: Adding "Content-Type" and other needed headers in the response

2022-06-28 Thread Aleksandar Lazic
Hi.

On Tue, 28 Jun 2022 12:23:15 +0200
spfma.t...@e.mail.fr wrote:

> Hi,   I have a problem to solve : I never paid attention to the fact HAProxy
> (2.5.1-86b093a) did not return HTTP headers in the responses, because there
> was no complaints so far. But now we got one, because of an old application
> which needs at least "Content-Type" as some tests are performed before
> generating the content.   If the devs don't fix it, I will have to find a
> solution on the load balancer side.   The LB is serving content from a Tomcat
> server, which is returning plenty of headers.   So is there a way to add them
> in the response, like some "pass thru" ?   I was not able to find useful
> informations so far, maybe because I don't know what concepts and directives
> are involved.   As a very dumb and primitive test, I have added
> "http-response add-header Content-Type 'text/html'" at both FE and BE level,
> but the header is still not shown.   Thanks for any help.   Regards 
> 
> -
> FreeMail powered by mail.fr

Please can you try to update HAProxy, as there are ~150 fixes in the latest
2.5 (https://www.haproxy.org/bugs/bugs-2.5.1.html), or try to use the latest
shiny 2.6 :-)

Can you share your current config?

Maybe the "http-after-response" could help in your case, but it's just a guess.
https://docs.haproxy.org/2.6/configuration.html#4.2-http-after-response%20set-header
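
An untested sketch of what that could look like (backend and server names/addresses are 
placeholders):

```
# Untested sketch: add a Content-Type only when the response carries none.
backend be_tomcat
    mode http
    server tomcat1 192.168.1.10:8080 check
    http-after-response set-header Content-Type "text/html" unless { res.hdr(Content-Type) -m found }
```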

Regards
Alex



Re: [ANNOUNCE] haproxy-2.7-dev1

2022-06-25 Thread Aleksandar Lazic
Hi Willy.

On Fri, 24 Jun 2022 22:58:53 +0200
Willy Tarreau  wrote:

> Hi,
> 
> HAProxy 2.7-dev1 was released on 2022/06/24. It added 131 new commits
> after version 2.7-dev0.
> 
> There's not that much new stuff yet but plenty of small issues were
> addressed, and it's already been 3 weeks since the release thus I figured
> it was a perfect timing for a -dev1 for those who want to stay on the edge
> without taking much risks.
> 
> In addition to the fixes that went into 2.6.1 already, some HTTP/3 issues
> were addressed and a memory leak affecting QUIC was addressed as well (thanks
> to @Tristan971 for his precious help on this one). 
> 
> Aside fixes, a few improvements started already. First, and to finish on
> QUIC, the QUICv2 version negotiation was implemented. This will allow us
> to follow the progress on the QUICv2 drafts more closely.
> 
> On HTTP/2, the maintainer of the Lighttpd web server reported a nasty case
> that he observed between curl and lighttpd which is very similar to the so
> called "Silly Window Syndrom" in TCP where a difference of one byte between
> a buffer size and a window size may progressively make the transfer
> degenerate until almost all frames are 1-byte in size. It's not a bug in
> any product, just a consequence of making certain standard-compliant stacks
> interoperate. Some workarounds were placed in various components that
> allowed the issue to appear. We did careful testing on haproxy and couldn't
> produce it there, in part due to our buffer management that makes it
> difficult to read exactly the sizes that produce the issue. But there's
> nothing either that can strictly prevent it from happening (e.g. with a
> sender using smaller frames maybe). So we implemented the workaround as
> well, which will also result in sending slightly less frames during
> uploads. The goal is to backport this once it has been exposed for a
> while without trouble in 2.7.
> 
> Another noticeable improvement is the inclusion of a feature that had
> been written in the now dead ROADMAP file for 15 years: multi-criteria
> bandwidth limiting. It allows to combine multiple filters to enforce
> bandwidth limitations on arbitrary criteria by looking at their total
> rate in a stick table. Thus it's possible to have per-source, per-
> destination, per-network, per-AS, per-interface bandwidth limits in
> each direction. In addition there's a stream-specific pair of limits
> (one per direction as well) that can even be adjusted on the fly. We
> could for example imagine that a client sends a POST request to a
> server, that the server responds with a 100-Continue and a header
> indicating the max permitted upload bandwidth, and then the transfer
> will be automatically capped. Quite frankly, I've been wanting this
> for a long time to address the problem of buffer bloat on small links
> (e.g. my old ADSL line), and here there's now an opportunity to
> maintain a good quality of service without saturating links thanks to
> this. I'm pretty sure that some users will be creative and may even
> come up with ideas of improvements ;-)

WOW that's great news.
Thanks for that feature :-)

Regards
Alex



Re: Segfault on 2.6.0 with TCP switching to HTTP/2

2022-06-16 Thread Aleksandar Lazic
On Thu, 16 Jun 2022 20:49:00 +1000
David Leadbeater  wrote:

> On Thu, 16 Jun 2022 at 20:27, Aleksandar Lazic  wrote:
> [...]
> > > Thanks ! I'm able to reproduce the segfault. I'm on it.
> 
> Thanks!
> 
> > But in any way wouldn't be better that the rule
> >
> > acl ipwtf hdr(Host),lower,field(1,:),word(-1,.,2) ip.wtf
> >
> > be after
> >
> > > >tcp-request inspect-delay 10s
> > > >tcp-request content switch-mode http if !ipwtf
> >
> > because it "feels somehow wrong" to make header checks in tcp mode.
> 
> There's some explanation in the configuration manual about how it
> works, and it's documented to work, at least for HTTP/1.
> 
> https://docs.haproxy.org/2.6/configuration.html#4
> "While HAProxy is able to parse HTTP/1 in-fly from tcp-request content
> rules"...
> 
> Essentially I want to keep the connection as TCP, so that I can have a
> backend that gets raw HTTP/1.1. I wrote some more about it at
> https://dgl.cx/2022/04/showing-you-your-actual-http-request

Nice service, this https://ip.wtf/; thanks for offering it.

> [...]
> > Opinions?
> 
> Clearly in nearly all cases it's better to let haproxy be the HTTP
> proxy layer, especially as it isn't possible to mix for HTTP/2, but it
> lets me do my crazy thing here :)

Thank you, David, for your patience and explanation.
I fully agree, HAProxy is a very flexible server :-)

> David

Regards
Alex



Re: Segfault on 2.6.0 with TCP switching to HTTP/2

2022-06-16 Thread Aleksandar Lazic
On Thu, 16 Jun 2022 10:22:30 +0200
Christopher Faulet  wrote:

> Le 6/16/22 à 05:12, David Leadbeater a écrit :
> > I tried upgrading to 2.6.0 (from 2.5.6) and I'm seeing a segfault when
> > making HTTP/2 requests. I'm using a frontend in TCP mode and then
> > switching it to HTTP/2.
> > 
> > I've made a minimal config that exhibits the segfault, below. Simply
> > doing curl -vk https://ip is enough to trigger it for me.
> > 
> > Thread 1 "haproxy" received signal SIGSEGV, Segmentation fault.
> > 0x555d1d07 in h2s_close (h2s=0x55a60b70) at src/mux_h2.c:1497
> > 1497 HA_ATOMIC_DEC(>h2c->px_counters->open_streams);
> > (gdb) bt
> > #0  0x555d1d07 in h2s_close (h2s=0x55a60b70) at
> > src/mux_h2.c:1497 #1  h2s_destroy (h2s=0x55a60b70) at src/mux_h2.c:1515
> > #2  0x555d3463 in h2_detach (sd=) at
> > src/mux_h2.c:4432
> > 
> > The exact backtrace varies but always in h2s_destroy.
> > 
> > (In case you're wondering what on earth I'm doing, there's a write-up
> > of it at https://dgl.cx/2022/04/showing-you-your-actual-http-request)
> > 
> > David
> > 
> > ---
> > global
> >ssl-default-bind-options no-sslv3 no-tlsv10
> >user nobody
> > 
> > defaults
> >timeout connect 10s
> >timeout client 30s
> >timeout server 2m
> > 
> > frontend tcp-https
> >mode tcp
> >bind [::]:443 v4v6 ssl crt /etc/haproxy/ssl/bodge.cloud.pem alpn
> > h2,http/1.1 
> >acl ipwtf hdr(Host),lower,field(1,:),word(-1,.,2) ip.wtf
> >default_backend ipwtf
> >tcp-request inspect-delay 10s
> >tcp-request content switch-mode http if !ipwtf
> >use_backend cloud-regions.bodge.cloud if !ipwtf
> > 
> > backend ipwtf
> >mode tcp
> >server ipwtf localhost:8080
> > 
> > backend cloud-regions.bodge.cloud
> >mode http
> >server cr localhost:8080
> > 
> 
> Hi,
> 
> Thanks ! I'm able to reproduce the segfault. I'm on it.

But in any case, wouldn't it be better if the rule

acl ipwtf hdr(Host),lower,field(1,:),word(-1,.,2) ip.wtf

came after

> >tcp-request inspect-delay 10s
> >tcp-request content switch-mode http if !ipwtf

because it "feels somehow wrong" to make header checks in tcp mode.

Or check if it's http before the hdr check.
https://docs.haproxy.org/2.6/configuration.html#7.3.5-req.proto_http

```
tcp-request inspect-delay 10s
tcp-request content switch-mode http if HTTP

acl ipwtf hdr(Host),lower,field(1,:),word(-1,.,2) ip.wtf
```

Opinions?

Jm2c

Regards
Alex



[PATCH] DOC: add info about ssl-engine for 2.6

2022-06-15 Thread Aleksandar Lazic
Hi.

Attached is a doc patch about ssl-engine for 2.6; it is related to
https://github.com/haproxy/haproxy/issues/1752


Regards
Alex
>From 85bcc5ea26d7c1f468dbbf6a10b33bc9f79da819 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Wed, 15 Jun 2022 23:52:30 +0200
Subject: [PATCH] DOC: add info about ssl-engine for 2.6

In the announcment of 2.6 is mentioned that the openssl engine
is not enabled by default.

This patch add the information to the configuration.txt.

Is related to #1752

Should be backported to 2.6
---
 doc/configuration.txt | 4 
 1 file changed, 4 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 183710c35..d0e74e0fb 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -2666,6 +2666,10 @@ ssl-engine <name> [algo <comma separated list of algorithms>]
   openssl configuration file uses:
   https://www.openssl.org/docs/man1.0.2/apps/config.html
 
+  Since version 2.6 is the ssl-engine not enabled in the default build. In case
+  that the ssl-engine is requierd can HAProxy be rebuild with USE_ENGINE=1
+  build flag.
+
 ssl-mode-async
   Adds SSL_MODE_ASYNC mode to the SSL context. This enables asynchronous TLS
   I/O operations if asynchronous capable SSL engines are used. The current
-- 
2.25.1



Re: HttpClient in Lua

2022-06-15 Thread Aleksandar Lazic
Hi Phil,

please keep the ML in the loop.

On Thu, 16 Jun 2022 00:19:57 +1000
Philip Young  wrote:

> Hi Alex
> 
> Thanks for the reply, but unfortunately that only sets the CA certs that
> issued the server certs. I need a way to specify a client certificate that
> will be used to talk to authz service. 

Ah okay, sorry, I hadn't understood that you want to send a client certificate.
I would try to use http://docs.haproxy.org/2.6/configuration.html#5.2-crt
with the client certificate in the PEM and set it on the server line.

That's my conclusion from this code:
https://git.haproxy.org/?p=haproxy.git;a=blob;f=src/hlua.c;hb=HEAD#l12530

Again, it's just an assumption, as I have never had the requirement to use client
certificates with HAProxy.
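
An untested sketch of that idea (paths and addresses are placeholders; the PEM must 
contain the client certificate and its private key):

```
# Untested sketch: present a client certificate to the authorisation service.
backend be_authz
    mode http
    server authz1 10.0.0.20:443 ssl crt /etc/haproxy/client/authz-client.pem ca-file /etc/haproxy/client/authz-ca.pem verify required check
```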

Regards
Alex

> Thanks anyway
> 
> Sent from my iPhone
> 
> > On 16 Jun 2022, at 12:03 am, Aleksandar Lazic  wrote:
> > 
> > HI.
> > 
> >> On Wed, 15 Jun 2022 23:33:27 +1000
> >> Philip Young  wrote:
> >> 
> >> Hi
> >> I am currently writing a LUA module to make authorisation decisions on
> >> whether a request is allowed, by calling out to another service to make the
> >> authorisation decision.
> >> In the Lua module, I am using Socket.connect_ssl() to
> >> connect to the authorisation service but I am struggling to work out how to
> >> set the path to the certificate I want to use to connect to the
> >> authorisation service.
> >> Does anybody know how to set the path to the certificate that is
> >> used when using Socket.connect_ssl() Is it possible to do this using the
> >> httpclient?
> > 
> > I'm not a Lua nor an httpclient expert, but maybe this could help.
> > https://docs.haproxy.org/2.6/configuration.html#httpclient.ssl.ca-file
> > 
> > Also check whether you maybe need to adapt this, at least for the beginning.
> > https://docs.haproxy.org/2.6/configuration.html#httpclient.ssl.verify
> > 
> >> I have tried asking the Slack chat channel and on the commercial
> >> site but no one knows. 
> >> 
> >> Cheers Phil
> > 
> > Hth
> > Alex




Re: HttpClient in Lua

2022-06-15 Thread Aleksandar Lazic
HI.

On Wed, 15 Jun 2022 23:33:27 +1000
Philip Young  wrote:

> Hi
> I am currently writing a LUA module to make authorisation decisions on
> whether a request is allowed, by calling out to another service to make the
> authorisation decision.
> In the Lua module, I am using Socket.connect_ssl() to
> connect to the authorisation service but I am struggling to work out how to
> set the path to the certificate I want to use to connect to the authorisation
> service.
> Does anybody know how to set the path to the certificate that is
> used when using Socket.connect_ssl() Is it possible to do this using the
> httpclient?

I'm not a Lua nor an httpclient expert, but maybe this could help.
https://docs.haproxy.org/2.6/configuration.html#httpclient.ssl.ca-file

Also check whether you maybe need to adapt this, at least for the beginning.
https://docs.haproxy.org/2.6/configuration.html#httpclient.ssl.verify
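
A hedged sketch of those two global settings (the path is a placeholder; "verify none" 
should only ever be used for testing):

```
# Untested sketch: global httpclient TLS settings (HAProxy 2.6+).
global
    httpclient.ssl.ca-file /etc/haproxy/client/authz-ca.pem
    httpclient.ssl.verify required
```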

> I have tried asking the Slack chat channel and on the commercial
> site but no one knows. 
> 
> Cheers Phil

Hth
Alex



Re: V2.3 allow use of TLSv1.0

2022-06-09 Thread Aleksandar Lazic
Hi spfma.tech.

Uff, the mail is quite hard to read, but it looks like you are on Ubuntu.

Maybe this page can help to solve your issue.

Enable TLSv1 in Ubuntu 20.04
https://ndk.sytes.net/wordpress/?p=1169
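
On the HAProxy side, an untested sketch of explicitly allowing TLSv1.0 could look like 
this (the certificate path is a placeholder; lowering the OpenSSL security level via 
@SECLEVEL=1 is what the linked article is about):

```
# Untested sketch: allow TLSv1.0 clients on this bind line. On Ubuntu 20.04
# the OpenSSL default security level usually has to be lowered as well.
frontend fe_legacy
    mode http
    bind :443 ssl crt /etc/haproxy/site.pem ssl-min-ver TLSv1.0 ciphers DEFAULT:@SECLEVEL=1
    default_backend be_app
```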

Regards
Alex

On Thu, 09 Jun 2022 09:58:10 +0200
spfma.t...@e.mail.fr wrote:

> Hi,   Thanks for your answer.   I have tried the generated config from this
> wonderful site, but no improvement.   So here is the output of the haproxy
> command :   HA-Proxy version 2.3.20-1ppa1~focal 2022/04/29 -
> https://haproxy.org/ Status: End of life - please upgrade to branch 2.4.
> Known bugs: http://www.haproxy.org/bugs/bugs-2.3.20.html Running on: Linux
> 5.4.0-113-generic #127-Ubuntu SMP Wed May 18 14:30:56 UTC 2022 x86_64 Build
> options : TARGET = linux-glibc CPU = generic
>  CC = cc
>  CFLAGS = -O2 -g -O2
> -fdebug-prefix-map=/build/haproxy-VMWa1u/haproxy-2.3.20=.
> -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time
> -D_FORTIFY_SOURCE=2 -Wall -Wextra -Wdeclaration-after-statement -fwrapv
> -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare
> -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers
> -Wno-cast-function-type -Wtype-limits -Wshift-negative-value
> -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference OPTIONS = USE_PCRE2=1
> USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1 USE_SYSTEMD=1 DEBUG = 
> 
> Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT
> +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE -STATIC_PCRE
> -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H
> +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM +ZLIB -SLZ
> +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD
> -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS
> 
> Default settings :
>  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
> 
> Built with multi-threading support (MAX_THREADS=64, default=2).
> Built with OpenSSL version : OpenSSL 1.1.1f 31 Mar 2020
> Running on OpenSSL version : OpenSSL 1.1.1f 31 Mar 2020
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
> Built with Lua version : Lua 5.3.3
> Built with network namespace support.
> Built with the Prometheus exporter as a service
> Built with zlib version : 1.2.11
> Running on zlib version : 1.2.11
> Compression algorithms supported : identity("identity"), deflate("deflate"),
> raw-deflate("deflate"), gzip("gzip") Built with transparent proxy support
> using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND Built with PCRE2 version :
> 10.34 2019-11-21 PCRE2 library supports JIT : yes
> Encrypted password support via crypt(3): yes
> Built with gcc compiler version 9.4.0
> 
> Available polling systems :
>  epoll : pref=300, test result OK
>  poll : pref=200, test result OK
>  select : pref=150, test result OK
> Total: 3 (3 usable), will use epoll.
> 
> Available multiplexer protocols :
> (protocols marked as <default> cannot be specified using 'proto' keyword)
>  h2 : mode=HTTP side=FE|BE mux=H2
>  fcgi : mode=HTTP side=BE mux=FCGI
>   <default> : mode=HTTP side=FE|BE mux=H1
>   <default> : mode=TCP side=FE|BE mux=PASS
> 
> Available services : prometheus-exporter
> Available filters :
>  [SPOE] spoe
>  [CACHE] cache
>  [FCGI] fcgi-app
>  [COMP] compression
>  [TRACE] trace   ---   OpenSSL 1.1.1f 31 Mar 2020
> built on: Tue May 3 17:49:36 2022 UTC
> platform: debian-amd64
> options: bn(64,64) rc4(16x,int) des(int) blowfish(ptr) 
> compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -Wa,--noexecstack
> -g -O2 -fdebug-prefix-map=/build/openssl-7zx7z2/openssl-1.1.1f=.
> -fstack-protector-strong -Wformat -Werror=format-security
> -DOPENSSL_TLS_SECURITY_LEVEL=2 -DOPENSSL_USE_NODELETE -DL_ENDIAN
> -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT
> -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM
> -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM
> -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DNDEBUG
> -Wdate-time -D_FORTIFY_SOURCE=2 OPENSSLDIR: "/usr/lib/ssl" ENGINESDIR:
> "/usr/lib/x86_64-linux-gnu/engines-1.1" Seeding source: os-specific
> 
>  ---   #
> # OpenSSL example configuration file.
> # This is mostly being used for generation of certificate requests.
> #
> 
> # Note that you can include other files from the main configuration
> # file using the .include directive.
> #.include filename
> 
> # This definition stops the following lines choking if HOME isn't
> # defined.
> HOME = .
> 
> # Extra OBJECT IDENTIFIER info:
> #oid_file = $ENV::HOME/.oid
> oid_section = new_oids
> 
> # To use this configuration file with the "-extfile" option of the
> # "openssl x509" utility, name here the section containing the
> # X.509v3 extensions to use:
> # extensions =
> # (Alternatively, use a configuration file that has only
> # X.509v3 extensions in its main [= default] section.)
> 
> [ new_oids ]
> 
> # 

Re: Rate Limiting with token/leaky bucket algorithm

2022-06-03 Thread Aleksandar Lazic
Hi.

On Fri, 3 Jun 2022 17:12:25 +0200
Seena Fallah  wrote:

> When using the below config to have 100req/s rate-limiting after passing
> the 100req/s all of the reqs will deny not reqs more than 100req/s!
> ```
> listen test
> bind :8000
> stick-table  type ip  size 100k expire 30s store http_req_rate(1s)
> http-request track-sc0 src
> http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
> http-request return status 200 content-type "text/plain" lf-string "200
> OK"
> ```
> 
> Is there a way to deny reqs more than 100 not all of them?
> For example, if we have 1000req/s, 100reqs get "200 OK" and the rest of
> them (900reqs) gets "429"?

Yes.

Here are some examples with explanation.
https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/

Here are some search results; maybe some of the examples will help you too.
https://html.duckduckgo.com/html?q=haproxy%20rate%20limiting

Regards
Alex



Re: [ANNOUNCE] haproxy-2.6-dev12

2022-05-28 Thread Aleksandar Lazic
Hi.

On Sat, 28 May 2022 11:42:17 +
Ajay Mahto  wrote:

> Unsubscribe me.

Feel free to do it yourself.
https://www.haproxy.org/#tact

Regards
Alex

> Regards,
> 
> Ajay Kumar Mahto,
> Lead DevOps Engineer,
> NPCI, Hyderabad
> +91 8987510264
> 
> From: Willy Tarreau 
> Sent: Friday, May 27, 2022 11:55:07 PM
> To: haproxy@formilux.org 
> Subject: [ANNOUNCE] haproxy-2.6-dev12
> 
> 
> Hi,
> 
> HAProxy 2.6-dev12 was released on 2022/05/27. It added 149 new commits
> after version 2.6-dev11.
> 
> Yeah I know, I said we'll only issue -dev12 if we face some trouble. But
> stay cool, we didn't face any trouble. However we figured that it would
> help last-minute testers to have a final tagged version.
> 
> The vast majority of patches are tagged CLEANUP and MINOR. That's great.
> 
> One old github issue was finally addressed, regarding the HTTP version
> validation. In the past we used to accept any 4-letter protocol using
> letters H,P,R,S,T, which allowed us to match both HTTP and RTSP. But it
> was reported to cause trouble because it was neither possible to disable
> RTSP support not extend this to other protocols. The problem with having
> RTSP enabled by default is that if haproxy forwards it to a backend server
> that doesn't know it, the server may respond with an HTTP/0.9 error that
> will be blocked by haproxy which then returns a 502 error. That's no big
> deal until you're watching your load balancer's logs and counters.
> 
> So now by default only HTTP is accepted, and this can be relaxed by
> adding "accept-invalid-http-request". To be honest, I really doubt that
> there are that many people using RTSP, given that we never ever get any
> single problem report about it, so I think it will not be a big deal to
> add this option in such cases so that all other users gain in serenity.
> This will likely be backported but if so, very slowly as this will be a
> behavior change, albeit a very small one.
> 
> Some polishing was done on QUIC, to improve the behavior on closing
> connections and stopping the process, and error processing in general.
> The maintainability was also improved by refactoring certain areas.
> Ah, crap, I just noticed that we missed a few patches from Fred who
> added some doc and a few settings!
> 
> The conn_streams that were holding up the release are now gone. It took
> two of us two full days of code analysis and head scratching to figure
> the role of certain antique flags and give them a more appropriate name,
> but that was really necessary. I must admit I really like the new model
> in 2.6, it's much more consistent and logical than 2.5 and older. It's
> visible in that it's easier to document and explain. And even during the
> changes it was easier to figure the field names for parts that had to be
> changed manually.
> 
> There are a bit more patches than I initially expected because this time
> I refused to leave poorly named function arguments and local variables:
> we've suffered from this for many years where process_stream() used to
> have a "struct stream *sess" and the session was "sess->sess". I didn't
> want to experience this anymore, we need the code to be more intuitive
> and readable especially for new contributors, and given the large amount
> of changes since 2.5 that will complicate backports anyway, it was the
> perfect opportunity to pursue that quest. While these changes represent
> many patches, they're essentially renames. There's always the tiny risk
> of an undetected mistake but all of them are trivial, were reviewed
> multiple times, built and individually tested so I'm not worried (famous
> last words :-)).
> 
> Some of us will continue testing over the week-end (it's already deployed
> on haproxy.org). I think we'll add a few bits of doc, Fred's patches that
> we missed, maybe a fix or two for last minute issues, and I expect to
> release on Tuesday (because Mondays are usually too short).
> 
> Please find the usual URLs below :
>Site index   : http://www.haproxy.org/
> Documentation: http://docs.haproxy.org/
> Wiki :
> 

Re: how to install on RHEL7 and 8

2022-05-28 Thread Aleksandar Lazic
Hi Ryan.

On Thu, 26 May 2022 13:28:58 -0500
"Ryan O'Hara"  wrote:

> On Wed, May 25, 2022 at 11:15 AM William Lallemand 
> wrote:
> 
> > On Tue, May 24, 2022 at 08:56:14PM +, Alford, Mark wrote:
> > > Do you have instruction on the exact library needed to fo the full
> > install on RHEL 7 and RHEL 8
> > >
> > > I read the INSTALL doc in the tar ball and the did the make command and
> > it failed because of LUA but lua.2.5.3 is installed
> > >
> > > Please help
> > >
> > >
> > Hello,
> >
> > I'm using this thread to launch a call for help about the redhat
> > packaging.
> >
> 
> I am the maintainer for all the Red Hat and Fedora packages. Feel free to
> ask questions here on the mailing list or email me directly.
> 
> 
> 
> > We try to document the list of available packages here:
> > https://github.com/haproxy/wiki/wiki/Packages
> >
> > The IUS repository is know to work but only provides packages as far as
> > 2.2. no 2.3, 2.4 or 2.5 are there but I'm seeing an open ticket for
> > the 2.4 here: https://github.com/iusrepo/wishlist/issues/303
> >
> > Unfortunately nobody ever step up to maintain constantly the upstream
> > releases for redhat/centos like its done for ubuntu/debian on
> > haproxy.debian.net.
> >
> 
> I try to keep Fedora up to date with latest upstream, but once a release
> goes into a specific Fedora release (eg. haproxy-2.4 in Fedora 35) I don't
> update to haproxy-2.5 in that same release. I have in the past and I get
> angry emails about rebasing to a newer release. I've spoken to Willy about
> this in the past and we seem to be in agreement on this.
> 
> RHEL is different. We almost never rebase to a later major release for the
> lifetime of RHEL. The one exception was when we added haproxy-1.8 to RHSCL
> (software collections) in RHEL7 since the base RHEL7 had haproxy-1.5 and
> there were significant features added to the 1.8 release.
> 
> I get this complaint often for haproxy in RHEL. Keep in mind that RHEL is
> focused on consistency and stability over a long period of time. I can't
> stress this enough - it is extremely rare to rebase to a new, major release
> of haproxy (or anything else) in a major RHEL release. For example, RHEL9
> has haproxy-2.4 and will likely always have that version. I do often rebase
> to newer minor release to pick up bug fixes (eg. haproxy-2.4.8 will be
> updated to haproxy-2.4.17, but very unlikely to be anything beyond the
> latest 2.4 release). I understand this is not for everybody.

Well written, and I'm fully aware of the pros and cons of that strategy.

Let me make a suggestion.
Offer the latest HAProxy as an RPM in something like EPEL or an extra repo, and keep the
supported one in the main repo.

As far as I can see, there is already an EPEL entry for HAProxy
https://bugzilla.redhat.com/buglist.cgi?bug_status=__open__=haproxy
as described here.
https://docs.fedoraproject.org/en-US/epel/epel-package-request/

The issue for some users is that there is no RPM available unless the RPM is
built on their own with https://github.com/DBezemer/rpm-haproxy.
Thanks, David, for keeping this repo up to date.

Looks like this is the source of the HAProxy builds for CentOS and RHEL, isn't
it?
https://git.centos.org/rpms/haproxy/branches?branchname=c8s

How about adding there a branch "upstream" or something similar which uses the
latest LTS version, as even the HAProxy community only supports the LTS versions for
a long time.

Another idea is to add another repo under
https://github.com/orgs/haproxy/repositories like "linux-distro-build-sources"
and add there the RPM, deb and other build files for some other Linux
distributions. Then, if a user wants to offer an RPM or deb, the build config
can be used from there, similar to the great work from Vincent for the Debian
distribution.

As I know that some enterprise companies do not allow EPEL or other non-
"official" RHEL repos in their setup, this is an option to offer them the latest
HAProxy for their systems.

The solution for the problem "latest HAProxy on an RPM-based system" is then to use
the upstream RPM or build their own RPM based on the official repo
"linux-distro-build-sources" from https://github.com/orgs/haproxy/repositories

Well yes, the name is up for discussion :-)

jm2c

> > Maybe it could be done with IUS, its as simple as a pull request on
> > their github for each new release, but someone need to be involve.
> >
> > I'm not a redhat user, but from time to time someone is asking for a
> > redhat package and nothing is really available and maintained outside of
> > the official redhat one.
> >
> 
> As mentioned elsewhere, COPR is likely the best place for this. It had been
> awhile since I've used it, but there have been times I did special,
> unsupported builds in COPR for others to use.
> 
> Hope this helps.
> 
> Ryan




Re: how to install on RHEL7 and 8

2022-05-24 Thread Aleksandar Lazic
Hi.

On Tue, 24 May 2022 20:56:14 +
"Alford, Mark"  wrote:

> Do you have instructions on the exact libraries needed to do the full install on
> RHEL 7 and RHEL 8?
> 
> I read the INSTALL doc in the tarball and then did the make command, and it
> failed because of LUA, but lua.2.5.3 is installed.

Please post the full steps you have done with the error.
Wild guess, the dev rpm's are not installed.

Maybe this repo with the specs helps you to find the error.
https://github.com/DBezemer/rpm-haproxy


> Please help
> 
> Mark Alford
> Security+
> IT Specialist (System Administrator)
> Office of Research and Development,
> Center for Computational Toxicology and
> Exposure
> Scientific Computing and Data Curation Division Application Development Branch
> 
> e: alford.m...@epa.gov
> t: (919) 541-4177
> m: (413) 358-0407
> 
> 
> If I am not the Federal Contracting Officer or Contracting Officer
> Representative (CO/COR) on your contract please do not consider this
> technical direction (TD). Any TD will be formally identified and/or
> documented from your CO or COR.
> 




Re: Paid feature development: TCP stream compression

2022-05-20 Thread Aleksandar Lazic
On Fri, 20 May 2022 12:16:07 +0100
Mark Zealey  wrote:

> Thanks, we may use this for a very rough proof-of-concept. However we 
> are dealing with millions of concurrent connections, 10-100 million 
> connections per day, so we'd prefer to pay someone to develop (+ test!) 
> something for haproxy which will work at this scale

Well, at this scale you will surely have more than one HAProxy instance. :-)

Do you want all the HAProxies together to have the same "knowledge" about the
connections?
What I mean is: should the peers protocol be considered for use in the
implementation?
Do you expect some XMPP protocol knowledge in the implementation?

> Mark
> 
> On 20/05/2022 10:12, Илья Шипицин wrote:
> > in theory, you can try OpenVPN with compression enabled.
> > or maybe stunnel with compression stunnel TLS Proxy 
> > 
> >
> > пт, 20 мая 2022 г. в 13:59, Mark Zealey :
> >
> > Good point, I forgot to mention that bit. We will be
> > TLS-terminating the connection on haproxy itself so
> > compress/decompress would happen after the plain stream has been
> > received, prior to being forwarded (in plain, or re-encrypted with
> > TLS) to the backends.
> >
> > So:
> >
> > app generates gzip+tls TCP stream -> haproxy: strip TLS, gunzip ->
> > forward TCP to backend servers
> >
> > We don't have any other implementation of this, at the moment it
> > is just an idea we would like to implement.
> >
> > Mark
> >
> >
> > On 20/05/2022 09:54, Илья Шипицин wrote:
> >> isn't it SSL encapsulated ? how is compression is supposed to
> >> work in details ?
> >> any other implementation to look at ?
> >>
> >> чт, 19 мая 2022 г. в 21:32, Mark Zealey :
> >>
> >> Hi there,
> >>
> >> We are using HAProxy to terminate and balance TCP streams
> >> (XMPP) between
> >> our apps and our service infrastructure. We are currently running
> >> XMPP-level gzip compression but I'm interested in potentially
> >> shifting
> >> this to the haproxy layer - basically everything on the
> >> connection would
> >> be compressed with gzip, brotli or similar.
> >>
> >> If you would be interested in doing paid development on
> >> haproxy for
> >> this, please
> >> drop me a line with some details about roughly how much it
> >> would cost
> >> and how
> >> long it would take. Any development work done for this would be
> >> contributed back to the open source haproxy edition.
> >>
> >> Thanks,
> >>
> >> Mark
> >>
> >>




Re: Paid feature development: TCP stream compression

2022-05-19 Thread Aleksandar Lazic
Hi Mark.

On Thu, 19 May 2022 17:29:37 +0100
Mark Zealey  wrote:

> Hi there,
> 
> We are using HAProxy to terminate and balance TCP streams (XMPP) between
> our apps and our service infrastructure. We are currently running
> XMPP-level gzip compression but I'm interested in potentially shifting
> this to the haproxy layer - basically everything on the connection would
> be compressed with gzip, brotli or similar.
> 
> If you would be interested in doing paid development on haproxy for 
> this, please
> drop me a line with some details about roughly how much it would cost 
> and how
> long it would take. Any development work done for this would be
> contributed back to the open source haproxy edition.

That sounds really great, thank you for this offering :-)

I suggest getting in touch with cont...@haproxy.com, as that's the company behind
HAProxy.

> Thanks,
> 
> Mark

Regards
Alex



Re: Download Question

2022-05-02 Thread Aleksandar Lazic
Hi.

On Mon, 2 May 2022 14:44:45 +
Dave Swinton  wrote:

> Do you have a repository for the current releases in RPM? We are currently
> using 1.8 but would like to move to 2.5.x after some internal testing but
> don't see any direct links to an RPM from the download page.

You can build your own version based on this repo.

https://github.com/DBezemer/rpm-haproxy

Regards
Alex

> Thank you.
> 
> David Swinton
> RedIron Technologies
> Mobile: (925) 864-1783
> Email:  dave.swin...@redirontech.com
> 
> [519F0236]
> 




Re: Networking

2022-04-30 Thread Aleksandar Lazic
Hi Nick.

On Sat, 30 Apr 2022 05:44:09 +
Nick Owen  wrote:

> So I am pretty new to networking and I am not quite sure how to set up the
> config file correctly. I just want a simple reverse proxy and I have created
> a diagram to show you how’d I’d like it configured. If you have any sites or
> examples that could point me in the right direction that’d be great.

Well, first of all, please take some time, without any pressure of any kind, to
dig into the topic; load balancing and HAProxy can quite quickly get quite
complex.

There are very good articles on the HAProxy blog which explain some basics
about load balancing and HAProxy.
https://www.haproxy.com/blog/category/basics/

For your diagram below, this blog post is helpful and shows you a good
starting configuration.
https://www.haproxy.com/blog/haproxy-configuration-basics-load-balance-your-servers/

HAProxy has very detailed documentation which shows you how flexible HAProxy
is.

https://docs.haproxy.org/
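
Just to give a rough idea of what a minimal reverse proxy looks like, here is an
illustrative starting point in the spirit of the linked article (all names and
addresses are placeholders):

```
# Minimal illustrative reverse-proxy configuration.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend fe_http
    bind :80
    default_backend be_web

backend be_web
    balance roundrobin
    server web1 192.168.1.11:8080 check
    server web2 192.168.1.12:8080 check
```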

Best regards
Alex



Re: Stupid question about nbthread and maxconn

2022-04-26 Thread Aleksandar Lazic
Hi.

Does anyone have any idea about the question below?

Regards
Alex

On Sat, 23 Apr 2022 11:05:36 +0200
Aleksandar Lazic  wrote:

> Hi.
> 
> I'm not sure if I understand the doc properly.
> 
> https://docs.haproxy.org/2.2/configuration.html#nbthread
> ```
> This setting is only available when support for threads was built in. It
> makes haproxy run on  threads. This is exclusive with "nbproc". While
> "nbproc" historically used to be the only way to use multiple processors, it
> also involved a number of shortcomings related to the lack of synchronization
> between processes (health-checks, peers, stick-tables, stats, ...) which do
> not affect threads. As such, any modern configuration is strongly encouraged
> to migrate away from "nbproc" to "nbthread". "nbthread" also works when
> HAProxy is started in foreground. On some platforms supporting CPU affinity,
> when nbproc is not used, the default "nbthread" value is automatically set to
> the number of CPUs the process is bound to upon startup. This means that the
> thread count can easily be adjusted from the calling process using commands
> like "taskset" or "cpuset". Otherwise, this value defaults to 1. The default
> value is reported in the output of "haproxy -vv". See also "nbproc".
> ```
> 
> https://docs.haproxy.org/2.2/configuration.html#3.2-maxconn
> ```
> Sets the maximum per-process number of concurrent connections to <number>. It
> is equivalent to the command-line argument "-n". Proxies will stop accepting
> connections when this limit is reached. The "ulimit-n" parameter is
> automatically adjusted according to this value. See also "ulimit-n". Note:
> the "select" poller cannot reliably use more than 1024 file descriptors on
> some platforms. If your platform only supports select and reports "select
> FAILED" on startup, you need to reduce maxconn until it works (slightly
> below 500 in general). If this value is not set, it will automatically be
> calculated based on the current file descriptors limit reported by the
> "ulimit -n" command, possibly reduced to a lower value if a memory limit
> is enforced, based on the buffer size, memory allocated to compression, SSL
> cache size, and use or not of SSL and the associated maxsslconn (which can
> also be automatic).
> 
> ```
> 
> Let's say we have the following setup.
> 
> ```
> maxconn 2
> nbthread 4
> ```
> 
> My understanding is that HAProxy will accept 2 concurrent connections,
> right? Even when I increase nbthread, HAProxy will *NOT* accept more than
> 2 concurrent connections, right?
> 
> The increasing of nbthread will "only" change that the performance will be
> "better" on a let's say 32 CPU Machine, especially for the upcoming 2.6 :-)
> 
> https://docs.microsoft.com/en-us/azure/virtual-machines/dv3-dsv3-series#dsv3-series
> => Standard_D32s_v3: 32 CPU, 128G RAM
> 
> What confuses me is "maximum per-process" in the maxconn docu part, will every
> thread handle the maxconn or is this for the whole HAProxy instance.
> 
> More mathematically :-O.
> 2 * 4 = 8
> or
> 2 * 4 = 2
> 
> Regards
> Alex
> 




Re: Set environment variables

2022-04-26 Thread Aleksandar Lazic
On Tue, 26 Apr 2022 15:03:51 +0200
Valerio Pachera  wrote:

> Hi, I have several backend configuration that make use of a custom script:
> 
> external-check command 'custom-script.sh'
> 
> The script read uses the environment variables such as $HAPROXY_PROXY_NAME.
> I would like to be able to set and environment variable in the backend
> declaration, before running the external check.
> This environment variable will change the behavior of custom-script.sh.
> 
> Is it possible to declare environment variables in haproxy 1.9 or later?
> 
> What I need is to make custom-script.sh aware if SSL is used or not.
> If there's another way to achieve that, please tell me.

Well, you can put it in the name of the server, as I don't see any other option
to add extra variables into the external check.

https://git.haproxy.org/?p=haproxy.git;a=blob;f=src/extcheck.c;hb=e50aabe443125eb94e3e7823c387125ca7e0c302#l81

```
  81 const struct extcheck_env extcheck_envs[EXTCHK_SIZE] = {
  82 [EXTCHK_PATH]   = { "PATH",   
EXTCHK_SIZE_EVAL_INIT },
  83 [EXTCHK_HAPROXY_PROXY_NAME] = { "HAPROXY_PROXY_NAME", 
EXTCHK_SIZE_EVAL_INIT },
  84 [EXTCHK_HAPROXY_PROXY_ID]   = { "HAPROXY_PROXY_ID",   
EXTCHK_SIZE_EVAL_INIT },
  85 [EXTCHK_HAPROXY_PROXY_ADDR] = { "HAPROXY_PROXY_ADDR", 
EXTCHK_SIZE_EVAL_INIT },
  86 [EXTCHK_HAPROXY_PROXY_PORT] = { "HAPROXY_PROXY_PORT", 
EXTCHK_SIZE_EVAL_INIT },
  87 [EXTCHK_HAPROXY_SERVER_NAME]= { "HAPROXY_SERVER_NAME",
EXTCHK_SIZE_EVAL_INIT },
  88 [EXTCHK_HAPROXY_SERVER_ID]  = { "HAPROXY_SERVER_ID",  
EXTCHK_SIZE_EVAL_INIT },
  89 [EXTCHK_HAPROXY_SERVER_ADDR]= { "HAPROXY_SERVER_ADDR",
EXTCHK_SIZE_ADDR },
  90 [EXTCHK_HAPROXY_SERVER_PORT]= { "HAPROXY_SERVER_PORT",
EXTCHK_SIZE_UINT },
  91 [EXTCHK_HAPROXY_SERVER_MAXCONN] = { "HAPROXY_SERVER_MAXCONN", 
EXTCHK_SIZE_EVAL_INIT },
  92 [EXTCHK_HAPROXY_SERVER_CURCONN] = { "HAPROXY_SERVER_CURCONN", 
EXTCHK_SIZE_ULONG },
  93 };
```
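
An untested sketch of that workaround, encoding the information (here: whether SSL is
used) in the server name so the script can read it from HAPROXY_SERVER_NAME (names,
paths and addresses are placeholders; recent versions also need "external-check" in
the global section):

```
# Untested sketch: the "_ssl" suffix in the server name tells the script,
# via HAPROXY_SERVER_NAME, that this server is checked over SSL.
global
    external-check

backend be_app_ssl
    mode http
    option external-check
    external-check command 'custom-script.sh'
    server app1_ssl 10.0.0.30:443 ssl verify none check
```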

> Thank you.

Hth
Alex



Learning from Spam (was: Re: Social media marketing Plans from Scratch haproxy.org)

2022-04-26 Thread Aleksandar Lazic
Hi,

On Tue, 26 Apr 2022 03:32:16 -0700
Ivana Paul  wrote:

> Hello haproxy.org

[SPAM Content]

New Idea for spam "learning platform" :-)

I never heard anything about "SMO services" and now I know it's this.

Social Media Optimization (SMO) Services

Regards
Alex



Stupid question about nbthread and maxconn

2022-04-23 Thread Aleksandar Lazic
Hi.

I'm not sure if I understand the doc properly.

https://docs.haproxy.org/2.2/configuration.html#nbthread
```
This setting is only available when support for threads was built in. It
makes haproxy run on <number> threads. This is exclusive with "nbproc". While
"nbproc" historically used to be the only way to use multiple processors, it
also involved a number of shortcomings related to the lack of synchronization
between processes (health-checks, peers, stick-tables, stats, ...) which do
not affect threads. As such, any modern configuration is strongly encouraged
to migrate away from "nbproc" to "nbthread". "nbthread" also works when
HAProxy is started in foreground. On some platforms supporting CPU affinity,
when nbproc is not used, the default "nbthread" value is automatically set to
the number of CPUs the process is bound to upon startup. This means that the
thread count can easily be adjusted from the calling process using commands
like "taskset" or "cpuset". Otherwise, this value defaults to 1. The default
value is reported in the output of "haproxy -vv". See also "nbproc".
```

https://docs.haproxy.org/2.2/configuration.html#3.2-maxconn
```
Sets the maximum per-process number of concurrent connections to <number>. It
is equivalent to the command-line argument "-n". Proxies will stop accepting
connections when this limit is reached. The "ulimit-n" parameter is
automatically adjusted according to this value. See also "ulimit-n". Note:
the "select" poller cannot reliably use more than 1024 file descriptors on
some platforms. If your platform only supports select and reports "select
FAILED" on startup, you need to reduce maxconn until it works (slightly
below 500 in general). If this value is not set, it will automatically be
calculated based on the current file descriptors limit reported by the
"ulimit -n" command, possibly reduced to a lower value if a memory limit
is enforced, based on the buffer size, memory allocated to compression, SSL
cache size, and use or not of SSL and the associated maxsslconn (which can
also be automatic).

```

Let's say we have the following setup.

```
maxconn 2
nbthread 4
```

My understanding is that HAProxy will accept 2 concurrent connections, right?
Even when I increase nbthread, HAProxy will *NOT* accept more than 2
concurrent connections, right?

The increasing of nbthread will "only" change that the performance will be
"better" on a let's say 32 CPU Machine, especially for the upcoming 2.6 :-)

https://docs.microsoft.com/en-us/azure/virtual-machines/dv3-dsv3-series#dsv3-series
=> Standard_D32s_v3: 32 CPU, 128G RAM

What confuses me is "maximum per-process" in the maxconn docu part, will every
thread handle the maxconn or is this for the whole HAProxy instance.

More mathematically :-O.
2 * 4 = 8
or
2 * 4 = 2

Regards
Alex



[PATCH] DOC: remove double blanks in configuration.txt

2022-03-29 Thread Aleksandar Lazic
Hi.

This patch removes some double blanks.

Regards
Alex
>From a65450d3da357c659b00bd3ecb5a038a1f827692 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Wed, 30 Mar 2022 00:11:40 +0200
Subject: [PATCH] DOC: remove double blanks in configuration.txt

Double blanks in keywords are not good for the HTML documentation parser.
This commit fixes the double blanks for tcp-request content use-service

---
 doc/configuration.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 87ae43809..cb05fef91 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -3118,7 +3118,7 @@ server <peer name> [<ip>:<port>] [param*]
   As previously mentioned, "peer" keyword may be replaced by "server" keyword
   with a support for all "server" parameters found in 5.2 paragraph.
  If the underlying peer is local, <ip>:<port> parameters must not be present.
-  These parameters must  be provided on a "bind" line (see "bind" keyword
+  These parameters must be provided on a "bind" line (see "bind" keyword
   of this "peers" section).
   Some of these parameters are irrelevant for "peers" sections.
 
@@ -12553,7 +12553,7 @@ tcp-request content unset-var(<var-name>) [ { if | unless } <condition> ]
   This is used to unset a variable. Please refer to "http-request set-var" for
   details about variables.
 
-tcp-request content  use-service <service-name> [ { if | unless } <condition> ]
+tcp-request content use-service <service-name> [ { if | unless } <condition> ]
 
   This action is used to executes a TCP service which will reply to the request
   and stop the evaluation of the rules. This service may choose to reply by
-- 
2.25.1



Re: [ANNOUNCE] haproxy-2.6-dev4

2022-03-26 Thread Aleksandar Lazic
Hi Willy.

On Sat, 26 Mar 2022 10:22:02 +0100
Willy Tarreau  wrote:

> Hi,
> 
> HAProxy 2.6-dev4 was released on 2022/03/26. It added 80 new commits
> after version 2.6-dev3.
> 
> The activity started to calm down a bit, which is good because we're
> roughly 2 months before the release and it will become important to avoid
> introducing last-minute regressions.
> 
> This version mostly integrates fixes for various bugs in various places
> like stream-interfaces, QUIC, the HTTP client or the trace subsystem. The
> remaining patches are mostly QUIC improvements and code cleanups. In
> addition the MQTT protocol parser was extended to also support MQTTv3.1.
> 
> A change discussed around previous announce was made in the H2 mux: the
> "timeout http-keep-alive" and "timeout http-request" are now respected
> and work as documented, so that it will finally be possible to force such
> connections to be closed when no request comes even if they're seeing
> control traffic such as PING frames. This can typically happen in some
> server-to-server communications whereby the client application makes use
> of PING frames to make sure the connection is still alive. I intend to
> backport this after some time, probably to 2.5 and later 2.4, as I've
> got reports about stable versions currently posing this problem.
> 
> I'm expecting to see another batch of stream-interface code refactoring
> that Christopher is still working on. This is a very boring and tedious
> task that should significantly lower the long-term maintenance effort,
> so I'm willing to wait a little bit for such changes to be ready. What
> this means for users is a reduction of the bugs we've seen over the last
> 2-3 years alternating between truncated responses and never-dying
> connections and that result from the difficulty to propagate certain
> events across multiple layers.
> 
> Also William still has some updates to finish on the HTTP client
> (connection retries, SSL cert verification and host name resolution
> mainly). On the paper, each of them is relatively easy, but practically,
> since the HTTP client is the first one of its category, each attempt to
> progress is stopped by the discovery of a shortcoming or bug that were
> not visible before. Thus the progress takes more time than desired but
> as a side effect, the core code gets much more reliable by getting rid
> of these old issues.
> 
> One front that made impressive progress over the last few months is QUIC.
> While a few months ago we were counting the number of red boxes on the
> interop tests at https://interop.seemann.io/ to figure what to work on as
> a top priority, now we're rather counting the number of tests that report
> a full-green state, and haproxy is now on par with other servers in these
> tests. Thus the idea emerged, in order to continue to make progress on
> this front, to start to deploy QUIC on haproxy.org so that interoperability
> issues with browsers and real-world traffic can be spotted. A few attempts
> were made and already revealed issues so for now it's disabled again. Be
> prepared to possibly observe a few occasional hiccups when visiting the
> site (and if so, please do complain to us). The range of possible issues
> would likely be frozen transfers and truncated responses, but these should
> not happen.
> 
> From a technical point, the way it's done is by having a separate haproxy
> process listening to QUIC on UDP port 1443, and forwarding HTTP requests
> to the existing process. The main process constantly checks the QUIC one,
> and when it's seen as operational, it appends an Alt-Svc header that
> indicates the client that an HTTP/3 implementation is available on port
> 1443, and that this announce is valid for a short time (we'll leave it to
> one minute only so that issues can resolve quickly, but for now it's only
> 10s so that quick tests cause no harm):
> 
> http-response add-header alt-svc 'h3=":1443"; ma=60' if \
>{ var(txn.host) -m end haproxy.org } { nbsrv(quic) gt 0 }
> 
> As such, compatible browsers are free to try to connect there or not. Other
> tools (such as git clone) will not use it. For those impatient to test it,
> the QUIC process' status is reported at the bottom of the stats page here:
> http://stats.haproxy.org/. The "quic" socket in the frontend at the top
> reports the total traffic received from the QUIC process, so if you're
> seeing it increase while you reload the page it's likely that you're using
> QUIC to read it. In Firefox I'm having this little plugin loaded:
> 
>   https://addons.mozilla.org/en-US/firefox/addon/http2-indicator/
> 
> It displays a small flash on the URL bar with different colors depending
> on the protocol used to load the page (H1/SPDY/H2/H3). When that works it's
> green (H3), otherwise it's blue (H2).
> 
> At this point I'd still say "do not reproduce these experiments at home".
> Amaury and Fred are still watching the process' traces very closely to
> spot bugs and stop it as 

Re: Rpm version 2.4.14

2022-03-15 Thread Aleksandar Lazic



On 15.03.22 05:36, Eli Bechavod wrote:

Hii guys,
I am looking for an RPM for version 2.4.14 and didn't find one ..

Why did you stop at 1.8 on the CentOS/RHEL base images? I saw that I can install
with a makefile, but that is the old way .. :( .

I would like to hear if you have any solutions


You can create an RPM based on this repo.
https://github.com/DBezemer/rpm-haproxy


Thanks
Eli


Regards
Alex



Re: Is there some kind of program that mimics a problematic HTTP server?

2022-03-01 Thread Aleksandar Lazic



Hi Shawn.

On 01.03.22 23:09, Shawn Heisey wrote:

I was thinking about ways to help pinpoint problems a client is having 
connecting to services.  And a thought
occurred to me.

Is there any kind of software available that can stand up a broken HTTP server, 
such that it is broken in very
specific and configurable ways?

Imagine a bit of software that can listen on a port and exhibit configurable 
failure scenarios.  Including but
certainly not limited to these:

* SSL negotiation issues
* Simulate dropped packets by ignoring incoming packets or failing to send 
outgoing packets.
* Timeouts, delays, no response, or incorrect behavior at various phases:
** TCP
** SSL
** GET/HEAD/POST

Does anything like this already exist?  It would be an awesome troubleshooting 
tool.  Configure it to fail in some
way, have a client try connecting to it with their software, and if they get 
the same error that they do when trying
it with the real server, then you've possibly pinpointed what the problem on 
the real server is, without diving into
logs or packet captures.  And the client may not know anything about the software 
they're using other than "it works
fine connecting to XXX", making them an exceedingly unreliable source of 
information.

So I'm not interested in something that can analyze network traffic or logs.  I 
can already do that. I am imagining
a server that can intentionally misbehave.

And here's why I am asking my question on the haproxy mailing list:  I think 
haproxy itself would serve as the
perfect starting point for this idea.  Imagine having configuration directives 
for haproxy that tell it to
intentionally misbehave, either on the frontend or the backend.  It could run 
side by side with a production
instance, on another port or on another machine, with a nearly identical config 
to production that has misbehave
configuration directives.

Side note: I think haproxy would be a perfect fit at $DAY_JOB to replace a 
couple of problematic pieces of software,
but I until I understand better how that software is configured, I can't 
mention it as a possible solution. I really
like haproxy. Please keep up the good work.  I'm looking for ways I can 
contribute to the project's success.


I don't know of such a tool, but this sounds like an interesting project idea.

Maybe some parts could be done via Lua, but as HAProxy internally handles a lot
of errors, it could be tricky to force
HAProxy to behave "weird" and not standards-compliant.
http://www.arpalert.org/src/haproxy-lua-api/2.5/index.html
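
That said, a few failure modes can already be simulated with plain configuration; a
rough, untested sketch (paths and port are arbitrary):

```
# Rough sketch: a frontend that deliberately misbehaves, with the failure
# mode selected by request path. All directives are standard HAProxy.
frontend fe_misbehave
    mode http
    bind :8081
    timeout client 60s
    timeout tarpit 30s
    # close the connection without sending anything back
    http-request silent-drop if { path_beg /drop }
    # reject the request at the connection level
    http-request reject if { path_beg /reset }
    # hold the request for "timeout tarpit", then return an error
    http-request tarpit deny_status 500 if { path_beg /slow }
    # otherwise answer normally
    http-request return status 200 content-type text/plain string "ok"
```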

As you can see in Tim's repo https://github.com/TimWolla/h-app-roxy,
HAProxy and Lua can be a quite powerful
combination.


Thanks,
Shawn


Regards
Alex



Re: Active Internet-Draft: Suppressing CA Certificates in TLS 1.3

2022-02-28 Thread Aleksandar Lazic

Hi.

On 28.02.22 13:55, Branitsky, Norman wrote:

Future requirement for HAProxy?

https://datatracker.ietf.org/doc/draft-kampanakis-tls-scas-latest/


From my point of view, this draft depends heavily on the implementation of the
underlying TLS library.


For everyone who wants to know what this is about, here is a short quote from the introduction.

```
1.  Introduction

   The most data heavy part of a TLS handshake is authentication.  It
   usually consists of a signature, an end-entity certificate and
   Certificate Authority (CA) certificates used to authenticate the end-
   entity to a trusted root CA.  These chains can sometime add to a few
   kB of data which could be problematic for some usecases.
   [EAPTLSCERT] and [EAP-TLS13] discuss the issues big certificate
   chains in EAP authentication.  Additionally, it is known that IEEE
   802.15.4 [IEEE802154] mesh networks and Wi-SUN [WISUN] Field Area
   Networks often notice significant delays due to EAP-TLS
   authentication in constrained bandwidth mediums.

   To alleviate the data exchanged in TLS [RFC8879] shrinks certificates
   by compressing them.  [CBOR-CERTS] uses different certificate
   encodings for constrained environments.  On the other hand, [CTLS]
   proposes the use of certificate dictionaries to omit sending CA
   certificates in a Compact TLS handshake.

   In a post-quantum context
   [I-D.hoffman-c2pq][NIST_PQ][I-D.ietf-tls-hybrid-design], the TLS
   authentication data issue is exacerbated.
   [CONEXT-PQTLS13SSH][NDSS-PQTLS13] show that post-quantum certificate
   chains exceeding the initial TCP congestion window (10MSS [RFC6928])
   will slow down the handshake due to the extra round-trips they

Thomson, et al.  Expires 17 August 2022 [Page 2]
Internet-DraftSuppress CAs February 2022

   introduce.  [PQTLS] shows that big certificate chains (even smaller
   than the initial TCP congestion window) will slow down the handshake
   in lossy environments.  [TLS-SUPPRESS] quantifies the post-quantum
   authentication data in QUIC and TLS and shows that even the leanest
   post-quantum signature algorithms will impact QUIC and TLS.
   [CL-BLOG] also shows that 9-10 kilobyte certificate chains (even with
   30MSS initial TCP congestion window) will lead to double digit TLS
   handshake slowdowns.  What's more, it shows that some clients or
   middleboxes cannot handle chains larger than 10kB.


```


*Norman Branitsky*
Senior Cloud Architect
Tyler Technologies, Inc.

P: 416-916-1752
C: 416.843.0670
www.tylertech.com

Tyler Technologies 






Re: [PATCH] MINOR: sample: Add srv_rtt server round trip time sample

2022-02-25 Thread Aleksandar Lazic



Hi Willy.

On 25.02.22 14:54, Willy Tarreau wrote:

Hi Alex,

On Thu, Feb 24, 2022 at 03:03:59AM +0100, Aleksandar Lazic wrote:

Hi.

Here is the first patch for the feature request "New Balancing algorithm (Peak) EWMA #1570"


Note, I don't think it is needed for this algo as long as we instead
use measured response time and/or health check time. But regardless
it's something useful to have. A few comments below:



Thank you for your very valuable feedback.
I also think that the RTT information as a fetch sample could be useful.


 From e95bf6a4bf107fdc59696c4b4a4ef7b03133b813 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Thu, 24 Feb 2022 02:56:21 +0100
Subject: [PATCH] MINOR: sample: Add srv_rtt server round trip time sample

This sample fetch gets the server round trip time


You should mention "TCP round trip time" since it's measured at the TCP
level.


+srv_rtt : integer
+  Returns the Round Trip Time (RTT) measured by the kernel for the server
+  connection. <unit> is facultative, by default the unit is milliseconds. <unit>
+  can be set to "ms" for milliseconds or "us" for microseconds. If the server
+  connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.


I would rather call it "bc_rtt" since it's not the server but the backend
connection. Technically speaking it indeed requires a connection to be
established and will only report the value for *this* connection and not
anything stateless related to the server. That's more in line with what we
have for the frontend connection already with fc_rtt.

You mentioned "unit" but it does not appear in the keyword syntax.

Also I think it would be useful to have all other fc_* from the same
section turned to bc_* (fc_rttvar, fc_retrans, etc), as it can sometimes
explain long response times in logs.


diff --git a/reg-tests/sample_fetches/srv_rtt.vtc 
b/reg-tests/sample_fetches/srv_rtt.vtc
new file mode 100644
index 0..c0ad0cbae
--- /dev/null
+++ b/reg-tests/sample_fetches/srv_rtt.vtc
@@ -0,0 +1,34 @@
+varnishtest "srv_rtt sample fetch Test"
+
+#REQUIRE_VERSION=2.6


Note, we *might* need to add a new macro to detect support for TCP_INFO.
We still don't have the config predicates to detect support for certain
keywords or sample fetch functions so that's not easy, but it's possible
that this test will break on some OS like cygwin. If so we could work
around this temporarily using "EXCLUDE_TARGETS" and in the worst case we
could mark it broken for the time it takes to completely solve this.


Agreed. The suggestion here is to add something like USE_GETSOCKOPT so that the test can use
'#REQUIRE_OPTIONS=GETSOCKOPT', similar to USE_GETADDRINFO, right?
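
For illustration, the vtc header might then look like this (only a sketch: the
GETSOCKOPT/USE_GETSOCKOPT name is just the proposal above and does not exist yet,
so the two directives are shown as alternatives):

```
# until a dedicated build option exists, skip the test on problematic targets:
#EXCLUDE_TARGETS=cygwin

# once a USE_GETSOCKOPT build option existed, the test could instead declare:
#REQUIRE_OPTIONS=GETSOCKOPT
```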


(...)

diff --git a/src/tcp_sample.c b/src/tcp_sample.c
index 19edcd243..7b8b616cb 100644
--- a/src/tcp_sample.c
+++ b/src/tcp_sample.c
@@ -446,6 +446,20 @@ smp_fetch_fc_reordering(const struct arg *args, struct 
sample *smp, const char *
return 0;
return 1;
  }
+
+/* get the mean rtt of a client connection */
+static int
+smp_fetch_srv_rtt(const struct arg *args, struct sample *smp, const char *kw, 
void *private)
+{
+   if (!get_tcp_info(args, smp, 1, 0))
+   return 0;
+
+   /* By default or if explicitly specified, convert rtt to ms */
+   if (!args || args[0].type == ARGT_STOP || args[0].data.sint == 
TIME_UNIT_MS)
+   smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+   return 1;
+}


That's another reason for extending the existing keywords, avoiding code
duplication. You can have all your new keywords map to the fc_* equivalent
and just change this:

  - if (!get_tcp_info(args, smp, 0, 0))
  + if (!get_tcp_info(args, smp, *kw == 'b', 0))

Please update the comments on top of the functions to mention that they're
called for both "fc_*" and "bc_*" depending on the side, and that's OK.
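
For illustration, the result might look roughly like this (a sketch of the
suggestion above, not the committed patch; the "bc_rtt" registration line and the
SMP_USE_L4SRV flag are assumptions):

```
/* Sketch: one fetch function serves both sides; the backend connection is
 * selected when the keyword starts with 'b' ("bc_*" vs "fc_*"). */
static int
smp_fetch_fc_rtt(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
	if (!get_tcp_info(args, smp, *kw == 'b', 0))
		return 0;

	/* By default or if explicitly specified, convert rtt to ms */
	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;

	return 1;
}

/* both keywords then point at the same function in the keyword list: */
	{ "fc_rtt", smp_fetch_fc_rtt, ARG1(0,STR), val_fc_time_value, SMP_T_SINT, SMP_USE_L4CLI },
	{ "bc_rtt", smp_fetch_fc_rtt, ARG1(0,STR), val_fc_time_value, SMP_T_SINT, SMP_USE_L4SRV },
```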


Let me go back to the "drawing board"; I will send an update as soon as there is one :-)


thanks,
Willy


Regards
Alex



[PATCH] MINOR: sample: Add srv_rtt server round trip time sample

2022-02-23 Thread Aleksandar Lazic


Hi.

Here is the first patch for the feature request "New Balancing algorithm (Peak) EWMA #1570"

regards
Alex
From e95bf6a4bf107fdc59696c4b4a4ef7b03133b813 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Thu, 24 Feb 2022 02:56:21 +0100
Subject: [PATCH] MINOR: sample: Add srv_rtt server round trip time sample

This sample fetch gets the server round trip time

Part of feature request #1570

---
 doc/configuration.txt|  8 +++
 reg-tests/sample_fetches/srv_rtt.vtc | 34 
 src/tcp_sample.c | 15 
 3 files changed, 57 insertions(+)
 create mode 100644 reg-tests/sample_fetches/srv_rtt.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 572c79d55..be6a811c8 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -18958,6 +18958,14 @@ srv_name : string
   While it's almost only used with ACLs, it may be used for logging or
   debugging. It can also be used in a tcp-check or an http-check ruleset.
 
+srv_rtt : integer
+  Returns the Round Trip Time (RTT) measured by the kernel for the server
+  connection. <unit> is facultative, by default the unit is milliseconds. <unit>
+  can be set to "ms" for milliseconds or "us" for microseconds. If the server
+  connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
 7.3.4. Fetching samples at Layer 5
 --
 
diff --git a/reg-tests/sample_fetches/srv_rtt.vtc b/reg-tests/sample_fetches/srv_rtt.vtc
new file mode 100644
index 0..c0ad0cbae
--- /dev/null
+++ b/reg-tests/sample_fetches/srv_rtt.vtc
@@ -0,0 +1,34 @@
+varnishtest "srv_rtt sample fetch Test"
+
+#REQUIRE_VERSION=2.6
+
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+txresp
+} -start
+
+
+haproxy h1 -conf {
+defaults
+mode http
+timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
+timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
+timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"
+
+frontend fe
+bind "fd@${fe}"
+http-response set-header srv-rrt   "%[srv_rtt]"
+default_backend be
+
+backend be
+server srv1 ${s1_addr}:${s1_port}
+} -start
+
+client c1 -connect ${h1_fe_sock} {
+txreq -url "/"
+rxresp
+expect resp.status == 200
+expect resp.http.srv-rrt ~ "[0-9]+"
+} -run
diff --git a/src/tcp_sample.c b/src/tcp_sample.c
index 19edcd243..7b8b616cb 100644
--- a/src/tcp_sample.c
+++ b/src/tcp_sample.c
@@ -446,6 +446,20 @@ smp_fetch_fc_reordering(const struct arg *args, struct sample *smp, const char *
 		return 0;
 	return 1;
 }
+
+/* get the mean rtt of a client connection */
+static int
+smp_fetch_srv_rtt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 0))
+		return 0;
+
+	/* By default or if explicitly specified, convert rtt to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+	return 1;
+}
 #endif // linux || freebsd || netbsd
 #endif // TCP_INFO
 
@@ -478,6 +492,7 @@ static struct sample_fetch_kw_list sample_fetch_keywords = {ILH, {
 #ifdef TCP_INFO
 	{ "fc_rtt",   smp_fetch_fc_rtt,   ARG1(0,STR), val_fc_time_value, SMP_T_SINT, SMP_USE_L4CLI },
 	{ "fc_rttvar",smp_fetch_fc_rttvar,ARG1(0,STR), val_fc_time_value, SMP_T_SINT, SMP_USE_L4CLI },
+	{ "srv_rtt",  smp_fetch_srv_rtt,  ARG1(0,STR), val_fc_time_value, SMP_T_SINT, SMP_USE_L4CLI },
 #if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__)
 	{ "fc_unacked",   smp_fetch_fc_unacked,   ARG1(0,STR), var_fc_counter, SMP_T_SINT, SMP_USE_L4CLI },
 	{ "fc_sacked",smp_fetch_fc_sacked,ARG1(0,STR), var_fc_counter, SMP_T_SINT, SMP_USE_L4CLI },
-- 
2.25.1



Re: haproxy in windows

2022-02-10 Thread Aleksandar Lazic

Hi.

On 10/02/2022 10:25, Gowri Shankar wrote:

I'm trying to install haproxy for load balancing for my servers, but I'm
not able to install it from my Windows system. Is haproxy available
for Windows? Please help us with the documentation.


Well, I don't think that there is a native Windows binary.
You can try to run haproxy in Cygwin or any other Linux environment on
Windows.

You can also try to port haproxy to Windows, but this is a huge amount
of work :-)

Hth
Alex



Re: Problem: Port_443_lbb1/ - Error 400 BAD REQ

2022-02-01 Thread Aleksandar Lazic

Hi.

On 31.01.22 16:51, Roberto Carna wrote:

Dear all, I have haproxy-1.5.18-3.el7.x86_64 running OK.


You should consider using a maintained version, as 1.5 is end of life for the
community.
https://www.haproxy.org/
https://github.com/DBezemer/rpm-haproxy


The development team is reporting an error after clicking on a given URL from an internal app.
We have two backend nodes, and when DEV points to just one node, the click is OK. So we thought
it was a session persistence problem and set up a cookie. The sessions are now persistent for
the clients, which is OK, but when DEV tests the problematic click, the error still occurs.
This is in haproxy.log for this error:

10.10.1.14:59016 [31/Jan/2022:12:33:18.649] Port_443_lbb1~ Port_443_lbb1/** 
-1/-1/-1/-1/2232 *400* 187 - - PR-- 3/3/0/0/0 0/0 {|} "<*BADREQ*>"
10.10.1.14:59019 [31/Jan/2022:12:33:15.579] Port_443_lbb1~ app/NODE1 5824/0/0/54/5878 204 
610 - - --VN 3/3/0/1/0 0/0 {|app.company.com} "POST /api/data/UpdateRecentItems 
HTTP/1.1"

The backend is the following:

backend APP

balance roundrobin
        cookie SERVERID insert
        server NODE1 10.10.18.1:443 check cookie NODE2 ssl verify none
        server NODE2 10.10.18.2:443 check cookie NODE2 ssl verify none

If I remove the "check" option from the two lines, the error appears again:

Error 400 - Bad Request - Your browser sent a invalid request.

But when I point the browser to just one node by editing the hosts file, the click
works OK.

Please what can be the problem?

Thanks a lot !!!


Well, I suggest running haproxy with "-d" to see what happens, as it's a dev environment.
You should also try the "Network" view in the browser's developer tools
when you click.

Please also share more of the config, as the BADREQ could also originate in the
listen or frontend part.
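
For the "-d" part, something along these lines would do (the config path is just an
example):

```
# run haproxy in the foreground in debug mode; requests and responses are
# echoed to the terminal
haproxy -d -f /etc/haproxy/haproxy.cfg
```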

Regards
Alex



Re: invalid request

2022-01-12 Thread Aleksandar Lazic



On 12.01.22 21:52, Andrew Anderson wrote:


On Wed, Jan 12, 2022 at 11:58 AM Aleksandar Lazic <al-hapr...@none.at> wrote:

Well, looks like you want a forward proxy like squid not a reverse proxy 
like haproxy.


The application being load balanced is a proxy, so http_proxy is not a good fit (and as you mention on the 
deprecation list), but haproxy as a load balancer is much better at front-ending this environment than 
any other solution available.


We upgraded to 2.4 recently, and a Java application that uses these proxy servers is what exposed this 
issue for us.  Even if we were to use squid, we would still run into this, as I would want to ensure that 
squid was highly available for the environment, and we would hit the same code path when going through 
haproxy to connect to squid.


The only option currently available in 2.4 that I am aware of is to set up internal-only frontend/backend 
paths with accept-invalid-http-request configured on those paths exclusively for Java clients to use. This 
is effectively how we have worked around this for now:


listen proxy
     bind :8080
     mode http
     option httplog
     server proxy1 192.0.2.1:8080
     server proxy2 192.0.2.2:8080

listen proxy-internal
     bind :8081
     mode http
     option httplog
     option accept-invalid-http-request
     server proxy1 192.0.2.1:8080 track proxy/proxy1
     server proxy2 192.0.2.2:8080 track proxy/proxy2

This is a viable workaround for us in the short term, but this would not be a solution that would work for 
everyone.  If the uri parser patches I found in the 2.5/2.6 branches are the right ones to make haproxy 
more permissive on matching the authority with the host in CONNECT requests, that will remove the need for 
the parallel frontend/backends without validation enabled.  I hope to be able to have time to test a 2.4 
build with those patches included over the next few days.


By design, HAProxy is a reverse proxy in front of origin servers, not a forwarding
proxy, which is the reason why the
CONNECT method is treated as an invalid method.

Because of that, I would not use "mode http" for the squid backend/servers, given the issues you
described.
Why not use "mode tcp" with the PROXY protocol
(http://www.squid-cache.org/Doc/config/proxy_protocol_access/) if you
need the client IP?
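
A minimal sketch, reusing the addresses from the earlier example (squid itself must
be configured to accept the PROXY protocol header on that port, see the
proxy_protocol_access documentation linked above):

```
listen proxy
    bind :8080
    mode tcp
    option tcplog
    server proxy1 192.0.2.1:8080 send-proxy-v2 check
    server proxy2 192.0.2.2:8080 send-proxy-v2 check
```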


Regards
Alex



Re: invalid request

2022-01-12 Thread Aleksandar Lazic



On 12.01.22 17:06, Andrew Anderson wrote:



On Thu, Dec 30, 2021 at 10:15 PM Willy Tarreau <w...@1wt.eu> wrote:

On Wed, Dec 29, 2021 at 12:29:11PM +0100, Aleksandar Lazic wrote:
 > >     0  CONNECT download.eclipse.org:443 HTTP/1.1\r\n
 > >     00043  Host: download.eclipse.org\r\n
 > >     00071  User-Agent: Apache-HttpClient/4.5.10 (Java/11.0.13)\r\n
 > >     00124  \r\n

It indeed looks like a recently fixed problem related to the mandatory
comparison between the authority part of the request and the Host header
field, which do not match above since only one contains a port.


I don't know how pervasive this issue is on non-Java clients, but the 
sendCONNECTRequest() method from
Java's HttpURLConnection API is responsible for the authority/host mismatch 
when using native Java HTTP
support, and has been operating this way for a very long time:

     /**
      * send a CONNECT request for establishing a tunnel to proxy server
      */
     private void sendCONNECTRequest() throws IOException {
         int port = url.getPort();

         requests.set(0, HTTP_CONNECT + " " + connectRequestURI(url)
                          + " " + httpVersion, null);
         requests.setIfNotSet("User-Agent", userAgent);

         String host = url.getHost();
         if (port != -1 && port != url.getDefaultPort()) {
             host += ":" + String.valueOf(port);
         }
         requests.setIfNotSet("Host", host);

The Apache-HttpClient library has a similar issue as well (as demonstrated 
above).

More recent versions are applying scheme-based normalization which consists
in dropping the port from the comparison when it matches the scheme
(which is implicitly https here).


Is there an option other than using "accept-invalid-http-request" available to 
modify this behavior on the
haproxy side in 2.4?  I have also run into this with Java 8, 11 and 17 clients.

Are these commits what you are referring to about scheme-based normalization 
available in more recent
versions (2.5+):

https://github.com/haproxy/haproxy/commit/89c68c8117dc18a2f25999428b4bfcef83f7069e
(MINOR: http: implement http uri parser)
https://github.com/haproxy/haproxy/commit/8ac8cbfd7219b5c8060ba6d7b5c76f0ec539e978
(MINOR: http: use http uri parser for scheme)
https://github.com/haproxy/haproxy/commit/69294b20ac03497e33c99464a0050951bdfff737
(MINOR: http: use http uri parser for authority)

If so, I can pull those into my 2.4 build and see if that works better for Java 
clients.


Well, looks like you want a forward proxy like squid not a reverse proxy like 
haproxy.
https://en.wikipedia.org/wiki/HTTP_tunnel

As you didn't share your config, I assume you are trying to use "option http_proxy", which
will be deprecated.
http://cbonte.github.io/haproxy-dconv/2.5/configuration.html#4-option%20http_proxy


Andrew


Regards Alex



Re: HAP 2.3.16 A bogus STREAM [0x559faa07b4f0] at "cache store filter"

2022-01-04 Thread Aleksandar Lazic

On 04.01.22 14:10, Christopher Faulet wrote:

Le 1/4/22 à 10:26, Aleksandar Lazic a écrit :


On 04.01.22 10:16, Christopher Faulet wrote:

On 12/25/21 at 23:59, Aleksandar Lazic wrote:


Hi.

as the message tells us that we should report this to the developers, I do so :-)


```
Dec 24 01:10:31 lb1 haproxy[20008]: A bogus STREAM [0x559faa07b4f0] is spinning 
at 204371 calls per second
and refuses to die, aborting now!
Please report this error to developers
[strm=0x559faa07b4f0,12390e src=:::79.183.184.235 fe=https-in be=be_api 
dst=api_main2
    txn=0x559faab233e0,44000 txn.req=MSG_DONE,d txn.rsp=MSG_RPBEFORE,0 
rqf=48c4e068 rqa=4
    rpf=a000a860 rpa=0 sif=CLO,2c8002 sib=CLO,1280112 af=(nil),0 
csf=0x559faa07ba10,1059a0
    ab=(nil),0 csb=0x559faad7dcf0,1a0
    
cof=0x7f224212e5d0,80003300:H2(0x559faa7d7b00)/SSL(0x7f22424fc7a0)/tcpv6(2162)
    
cob=0x7f2240f79fe0,8982300:H1(0x559faa0ab840)/SSL(0x7f2263517770)/tcpv4(1490)
    filters={0x559faa29c520="cache store filter"}]



Hi Alex,

I think I found the issue. I'm unable to reproduce the spinning loop, but I can
freeze a stream indefinitely.
It is probably just a matter of timing. On my side, it is related to L7
retries. Could you confirm you have
a "retry-on" parameter in your configuration?


Yes I can confirm.

```
defaults http
    log global
    mode http
    retry-on all-retryable-errors
    option forwardfor
    option redispatch
    option http-ignore-probes
    option httplog
    option dontlognull
    option ssl-hello-chk
    option log-health-checks
    option socket-stats
    timeout connect 5s
    timeout client  50s
    timeout server  50s
    http-reuse safe
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
...
```



Thanks Alex, I pushed a fix. It will be backported as far as the 2.0 ASAP.


Thank you Christopher




Re: HAP 2.3.16 A bogus STREAM [0x559faa07b4f0] at "cache store filter"

2022-01-04 Thread Aleksandar Lazic



On 04.01.22 10:16, Christopher Faulet wrote:

On 12/25/21 at 23:59, Aleksandar Lazic wrote:


Hi.

as the message tells us that we should report this to the developers, I do so :-)


```
Dec 24 01:10:31 lb1 haproxy[20008]: A bogus STREAM [0x559faa07b4f0] is spinning 
at 204371 calls per second
and refuses to die, aborting now!
Please report this error to developers
[strm=0x559faa07b4f0,12390e src=:::79.183.184.235 fe=https-in be=be_api 
dst=api_main2
   txn=0x559faab233e0,44000 txn.req=MSG_DONE,d txn.rsp=MSG_RPBEFORE,0 
rqf=48c4e068 rqa=4
   rpf=a000a860 rpa=0 sif=CLO,2c8002 sib=CLO,1280112 af=(nil),0 
csf=0x559faa07ba10,1059a0
   ab=(nil),0 csb=0x559faad7dcf0,1a0
   
cof=0x7f224212e5d0,80003300:H2(0x559faa7d7b00)/SSL(0x7f22424fc7a0)/tcpv6(2162)
   cob=0x7f2240f79fe0,8982300:H1(0x559faa0ab840)/SSL(0x7f2263517770)/tcpv4(1490)
   filters={0x559faa29c520="cache store filter"}]



Hi Alex,

I think I found the issue. I'm unable to reproduce the spinning loop, but I can
freeze a stream indefinitely.
It is probably just a matter of timing. On my side, it is related to L7
retries. Could you confirm you have
a "retry-on" parameter in your configuration?


Yes I can confirm.

```
defaults http
  log global
  mode http
  retry-on all-retryable-errors
  option forwardfor
  option redispatch
  option http-ignore-probes
  option httplog
  option dontlognull
  option ssl-hello-chk
  option log-health-checks
  option socket-stats
  timeout connect 5s
  timeout client  50s
  timeout server  50s
  http-reuse safe
  errorfile 400 /etc/haproxy/errors/400.http
  errorfile 403 /etc/haproxy/errors/403.http
...
```


Thanks !


Regards
Alex



Re: Troubles with AND in acl

2022-01-01 Thread Aleksandar Lazic

Hi.

On 01.01.22 20:56, Henning Svane wrote:

Hi

I have used it for some time in pfSense, but now made a Linux installation, and the configuration
gives me some trouble.


What have I done wrong here below?

I cannot see what I should have done differently, but sudo haproxy -c -f /etc/haproxy/haproxy01.cfg
gives the following errors:


error detected while parsing ACL 'XMail_EAS' : unknown fetch method 'if' in ACL 
expression 'if'.

error detected while parsing an 'http-request track-sc1' condition : unknown fetch method 'XMail_EAS' 
in ACL expression 'XMail_EAS'.


I have tried with { } around it, but that did not help.


"if" is not a valid keyword for "acl" line.
http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7


Configuration:

bind 10.40.61.10:443 ssl crt /etc/haproxy/crt/mail_domain_com.pem alpn 
h2,http/1.1

acl XMail hdr(host) -i mail.domain.com autodiscover.domain.com

http-request redirect scheme https code 301 if !{ ssl_fc }

acl XMail_EAS if XMail AND {url_beg -i /microsoft-server-activesync}



This works.

  acl XMail hdr(host) -i mail.domain.com autodiscover.domain.com
  acl MS_ACT url_beg -i /microsoft-server-activesync

  http-request track-sc1 src table Table_SRC_XMail_EAS_L4 if XMail MS_ACT

The AND is implicit.
http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.2
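
Applied to the tarpit rule quoted below, that could look like this (just a sketch;
it assumes the stick-table behind Table_SRC_XMail_EAS_L4 stores an http_req_rate
counter):

```
  http-request track-sc1 src table Table_SRC_XMail_EAS_L4 if XMail MS_ACT
  http-request tarpit deny_status 429 if XMail MS_ACT { sc_http_req_rate(1) gt 10 }
```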


http-request track-sc1 src table Table_SRC_XMail_EAS_L4 if { XMail_EAS } { 
status 401 }  { status 403 }

http-request tarpit deny_status 429 if  { XMail_EAS} { sc_http_req_rate(1) gt 
10 }


Please can you share some more information, for example the output of:
haproxy -vv


Regards

Henning


Regards
Alex





Re: invalid request

2021-12-29 Thread Aleksandar Lazic

Hi.

On 28.12.21 19:35, brendan kearney wrote:

list members,

I am running haproxy and see some errors with requests.  I am trying to
understand why the errors are being thrown; haproxy version and error
info are below.  I think that the Host header is being exposed outside
the TLS encryption, but cannot be sure that is what is going on.

Of note, the gnome weather extension runs into a similar issue, and so does
the Eclipse IDE when trying to call out to the download site.

Where can I find more about what is going wrong with the requests and
why haproxy is blocking them?  If it matters, the calls are from apps to
an http VIP in haproxy, load balancing to squid backends.

# haproxy -v
HA-Proxy version 2.1.11-9da7aab 2021/01/08 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2021.
Known bugs: http://www.haproxy.org/bugs/bugs-2.1.11.html


As you can see on this page, there are 108 bugs fixed in the next version.
Maybe you should update to the latest 2.4 and see if the behavior is still the same.


Running on: Linux 5.11.22-100.fc32.x86_64 #1 SMP Wed May 19 18:58:25 UTC
2021 x86_64

[28/Dec/2021:12:17:14.412] frontend proxy (#2): invalid request
    backend <NONE> (#-1), server <NONE> (#-1), event #154, src 
192.168.1.90:44228
    buffer starts at 0 (including 0 out), 16216 free,
    len 168, wraps at 16336, error at position 52
    H1 connection flags 0x, H1 stream flags 0x0012
    H1 msg state MSG_HDR_L2_LWS(24), H1 msg flags 0x1410
    H1 chunk len 0 bytes, H1 body len 0 bytes :

    0  CONNECT admin.fedoraproject.org:443 HTTP/1.1\r\n


Do you use
http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#4-option%20http_proxy ?
It would help if you shared the haproxy config.


    00046  Host: admin.fedoraproject.org\r\n
    00077  Accept-Encoding: gzip, deflate\r\n
    00109  User-Agent: gnome-software/40.4\r\n
    00142  Connection: Keep-Alive\r\n
    00166  \r\n

[28/Dec/2021:12:48:34.023] frontend proxy (#2): invalid request
    backend <NONE> (#-1), server <NONE> (#-1), event #166, src 
192.168.1.90:44350
    buffer starts at 0 (including 0 out), 16258 free,
    len 126, wraps at 16336, error at position 49
    H1 connection flags 0x, H1 stream flags 0x0012
    H1 msg state MSG_HDR_L2_LWS(24), H1 msg flags 0x1410
    H1 chunk len 0 bytes, H1 body len 0 bytes :

    0  CONNECT download.eclipse.org:443 HTTP/1.1\r\n
    00043  Host: download.eclipse.org\r\n
    00071  User-Agent: Apache-HttpClient/4.5.10 (Java/11.0.13)\r\n
    00124  \r\n

thanks in advance,

brendan






HAP 2.3.16 A bogus STREAM [0x559faa07b4f0] at "cache store filter"

2021-12-25 Thread Aleksandar Lazic



Hi.

as the message tells us that we should report this to the developers, I do so :-)


```
Dec 24 01:10:31 lb1 haproxy[20008]: A bogus STREAM [0x559faa07b4f0] is spinning 
at 204371 calls per second
and refuses to die, aborting now!
Please report this error to developers
[strm=0x559faa07b4f0,12390e src=:::79.183.184.235 fe=https-in be=be_api 
dst=api_main2
 txn=0x559faab233e0,44000 txn.req=MSG_DONE,d txn.rsp=MSG_RPBEFORE,0 
rqf=48c4e068 rqa=4
 rpf=a000a860 rpa=0 sif=CLO,2c8002 sib=CLO,1280112 af=(nil),0 
csf=0x559faa07ba10,1059a0
 ab=(nil),0 csb=0x559faad7dcf0,1a0
 cof=0x7f224212e5d0,80003300:H2(0x559faa7d7b00)/SSL(0x7f22424fc7a0)/tcpv6(2162)
 cob=0x7f2240f79fe0,8982300:H1(0x559faa0ab840)/SSL(0x7f2263517770)/tcpv4(1490)
 filters={0x559faa29c520="cache store filter"}]

Dec 24 01:10:31 lb1 haproxy[4818]: [ALERT] 357/011031 (20008) : A bogus STREAM 
[0x559faa07b4f0] is spinning
at 204371 calls per second and refuses to die, aborting now! Please report this 
error to developers
[strm=0x559faa07b4f0,12390e src=:::79.183.184.235 fe=https-in be=be_api 
dst=api_main2
 txn=0x559faab233e0,44000 txn.req=MSG_DONE,d txn.rsp=MSG_RPBEFORE,0 
rqf=48c4e068 rqa=4
 rpf=a000a860 rpa=0 sif=CLO,2c8002 sib=CLO,1280112 af=(nil),0 
csf=0x559faa07ba10,1059a0
 ab=(nil),0
 csb=0x559faad7dcf0,1a0 
cof=0x7f224212e5d0,80003300:H2(0x559faa7d7b00)/SSL(0x7f22424fc7a0)/tcpv6(2162)
 cob=0x7f2240f79fe0,8982300:H1(0x559faa0ab840)/SSL(0x7f2263517770)/tcpv4(1490)
 filters={0x559faa29c520="cache store filter"}]
```

Here is the cache config from haproxy.

```
cache default_cache
total-max-size 1024 # MB
# max-object-size 1  # bytes
max-age 300 # seconds

cache api_cache
total-max-size 1024 # MB
# max-object-size 1  # bytes
max-age 300 # seconds

backend be_default
  log global

  http-request cache-use default_cache
  http-response cache-store default_cache

backend be_api
  log global

  http-request cache-use api_cache
  http-response cache-store api_cache
```

Here is the haproxy version; we plan to update to the 2.4 version asap.

```
ubuntu@lb1:~$ haproxy -vv
HA-Proxy version 2.3.16-1ppa1~bionic 2021/11/25 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2022.
Known bugs: http://www.haproxy.org/bugs/bugs-2.3.16.html
Running on: Linux 4.15.0-139-generic #143-Ubuntu SMP Tue Mar 16 01:30:17 UTC 
2021 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -O2 
-fdebug-prefix-map=/build/haproxy-1kKZLK/haproxy-2.3.16=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
-D_FORTIFY_SOURCE=2 -Wall -Wextra -Wdeclaration-after-statement -fwrapv 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value 
-Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1 
USE_SYSTEMD=1
  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT 
+POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H 
+GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM +ZLIB -SLZ +CPU_AFFINITY 
+TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER 
+PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=8).
Built with OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with network namespace support.
Built with the Prometheus exporter as a service
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE2 version : 10.31 2018-02-12
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 7.5.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
        h2 : mode=HTTP   side=FE|BE mux=H2
      fcgi : mode=HTTP   side=BE    mux=FCGI
 <default> : mode=HTTP   side=FE|BE mux=H1
 <default> : mode=TCP    side=FE|BE mux=PASS

Available services : prometheus-exporter
Available filters :
[SPOE] spoe
[CACHE] cache
[FCGI] 

Re: Getting rid of outdated haproxy apt ppa repo

2021-12-20 Thread Aleksandar Lazic



Hi.

On 20.12.21 09:40, Christoph Kukulies wrote:

Due to some recent action I took, following some maybe outdated instructions for haproxy
1.6 under Ubuntu,
I have a leftover broken haproxy repo which comes up every time I'm doing
apt updates:

Ign:3 http://ppa.launchpad.net/vbernat/haproxy-1.6/ubuntu bionic InRelease
Hit:4 http://ppa.launchpad.net/vbernat/haproxy-1.8/ubuntu bionic InRelease
Err:5 http://ppa.launchpad.net/vbernat/haproxy-1.6/ubuntu bionic Release
   404  Not Found [IP: 91.189.95.85 80]
Hit:6 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:7 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Reading package lists... Done
E: The repository 'http://ppa.launchpad.net/vbernat/haproxy-1.6/ubuntu bionic 
Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration 
details.


Any clues how I can get rid of this?


Well 1.6 is end of life.

https://www.haproxy.org/

You should replace haproxy-1.6 with 2.4, IMHO.
https://haproxy.debian.net/#?distribution=Ubuntu=bionic=2.4

How to handle PPAs can be found on the Internet; here is an example page from a search:
https://linuxhint.com/ppa_repositories_ubuntu/
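
A minimal sketch of the cleanup (the PPA names are assumptions based on the
messages above; double-check them on launchpad.net before running this):

```
# drop the dead 1.6 PPA and switch to a maintained one
sudo add-apt-repository --remove ppa:vbernat/haproxy-1.6
sudo add-apt-repository ppa:vbernat/haproxy-2.4
sudo apt-get update
sudo apt-get install haproxy
```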


—
Christoph


Regards
Alex



Re: Add HAProxy to quicwg Implementations wiki

2021-12-19 Thread Aleksandar Lazic



On 19.12.21 13:52, Willy Tarreau wrote:

Hi Aleks,

On Sun, Dec 19, 2021 at 01:43:01PM +0100, Aleksandar Lazic wrote:

Do you agree that we now can add HAProxy to that list :-)

https://github.com/quicwg/base-drafts/wiki/Implementations


Ideally we should submit it once we have a public server with it. There
are still low-level issues that Fred and Amaury are working on before
this can happen, but based on the progress I'm seeing on the interop
page at https://interop.seemann.io/  I definitely expect that these
will be addressed soon and that haproxy.org will be delivered over QUIC
before 2.6 is released :-)


Cool thanks for the update :-)


Willy


Regards
Alex


