Re: [RFC PATCH] DOC: resolvers: recommend disabling libc resolution

2025-03-22 Thread Luke Seelenbinder
Hi all,

Just wanted to add my 2¢ that the doc changes are a huge improvement from my 
perspective. Issuing a simple warning at startup could also be very useful for 
users who don't read the docs as religiously as I tend to…

Best,
Luke

—
Luke Seelenbinder
Stadia Maps | Founder & CEO
stadiamaps.com

> On Mar 17, 2025, at 14:24, Lukas Tribus  wrote:
> 
> Hi Willy,
> 
> 
> On Thu, 13 Mar 2025 at 18:33, Willy Tarreau  wrote:
>> 
>> OK, I never thought about all of this!
>> 
>>> For example case 1:
>>> An admin configures private resolvers in TCP mode to avoid issues with
>>> bigger response sizes, however unbeknownst to him TCP mode is not
>>> available/reachable for unrelated issues. The same name servers are
>>> configured in /etc/resolv.conf, so libc is able to resolve the private
>>> server IPs without issues, because libc uses UDP before falling back
>>> to TCP.
>> 
>> I generally agree (though I don't know how frequent this is). Also
>> this raises the point of the relevance of parse-resolv-conf then:
>> if discrepancies are this common, should we also discourage from
>> using this option ?
> 
> I don't dislike parse-resolv-conf, I think it's useful for cloud deployments.
> 
> It can work fine with libc resolution disabled, we just need to make
> that clear imo.
> 
> 
> 
>>> I really want to drive home the point that this is not *only* about
>>> different DNS servers, but also about different resolution behavior
>>> when using the same DNS servers, because our code and libc code is not
>>> the same.
>> 
>> Totally got it now, thank you.
>> 
>> I'm fine then with generally discouraging people from using the two at
>> once.
>> 
>> Maybe we could think about changing the default init-addr over the
>> long term when resolvers are used (not sure this is easy to do, think
>> about defaults). Or maybe we could add another variant of libc, such as
>> "opt-libc" which would be libc only if resolvers is not used, and switch
>> to that by default. What do you think ?
>> 
>> The only problem is that I'd like to be able to emit a warning before
>> changing that, and we probably don't want to force everyone to specify
>> init-addr everywhere if we'd want to change a default later. Or maybe
>> the resolution error should detect that resolvers are there and report
>> that the setting changed. It would not be super cool but it could help
>> users avoid serious traps.
> 
> Yes, ideally we would do more than just doc updates, but indeed
> considering that every backend server can have a different resolver
> config this could be non trivial.
> 
> Perhaps we could start with just a configuration warning when a
> backend server is haproxy resolver enabled but libc still enabled for
> the server?
> 
> We just have to keep in mind that this is really a per server config knob.
> 
> 
> 
>> And the other thing is to improve the doc. I'm pretty sure I'm far from
>> being the only one around not thinking about all these "details" between
>> libc and resolvers.
> 
> My attempt in this patch was sparse:
> 
> +  - disabling libc based initial name resolution with the "init-addr" server
> +setting is recommended to avoid using two different name resolution
> +strategies, as their behavior will diverge.
> 
> 
> I'm not sure if making it dense is beneficial or if it just becomes
> more confusing. If I make the FQDN / search domain example, people
> will just think that it doesn't impact them if they run a full FQDN
> config, yet it is just one of many examples and it will never be
> possible to predict / list them all.
> 
> 
> Perhaps just elaborating a little bit like:
> 
> +  - disabling libc based initial name resolution with the "init-addr" server
> +setting is recommended to avoid using two different name resolution
> +strategies, as their behavior will diverge. Due to the possible
> +behavior difference, initial libc resolution may hide problems related
> +to haproxy resolver.
> 
> 
> It would be useful to get some user feedback regarding this topic.
> 
> 
> 
> Regards,
> 
> Lukas



Re: [PATCH] MEDIUM: lb-chash: add directive hash-preserve-affinity

2025-03-21 Thread psavalle
> Isn't that the idea behind `option redispatch` 
> https://docs.haproxy.org/3.1/configuration.html#4.2-option%20redispatch
>
> How about adding the options `maxconn` and/or `maxqueue` there instead of
> introducing a new keyword?

There was an older discussion about this at 
https://haproxy.formilux.narkive.com/H9aMs0sp/how-to-redispatch-a-request-after-queue-timeout.
My understanding is that `redispatch` is more about controlling how we retry
errors than about load-balancing strategy, which may justify adding a keyword
specific to hash load balancing.


> Thank you, pretty good work here! I have two requests below:

These all sound good, here's an updated patch. I have also added the directive 
to the 'index' in 'configuration.txt',
which I had missed earlier.

Thank you for taking a look so quickly!


From ad7b183ea396dc92d5cf902f6d319c05133673b0 Mon Sep 17 00:00:00 2001
From: Pierre-Andre Savalle 
Date: Fri, 21 Mar 2025 11:27:21 +0100
Subject: [PATCH] MEDIUM: lb-chash: add directive hash-preserve-affinity

When using hash-based load balancing, requests are always assigned to the
server corresponding to the hash bucket for the balancing key, without taking
maxconn or maxqueue into account, unlike in other load balancing methods like
'first'. This adds a new backend directive that can be used to take maxconn and
possibly maxqueue into account in that context. This can be used when hashing
is desired to achieve cache locality, but sending requests to a different
server is preferable to queuing for a long time or failing requests when the
initial server is saturated.

By default, affinity is preserved as was the case previously. When
'hash-preserve-affinity' is set to 'maxqueue', servers are considered
successively in the order of the hash ring until a server that does not have a
full queue is found.

When 'maxconn' is set on a server, queueing cannot be disabled, as 'maxqueue=0'
means unlimited. To support picking a different server when a server is at
'maxconn' irrespective of the queue, 'hash-preserve-affinity' can be set to
'maxconn'.
---
 doc/configuration.txt   | 33 +-
 include/haproxy/proxy-t.h   |  8 ++-
 reg-tests/balance/balance-hash-maxconn.vtc  | 52 
 reg-tests/balance/balance-hash-maxqueue.vtc | 68 +
 src/lb_chash.c  | 13 +++-
 src/proxy.c | 32 ++
 tests/conf/test-hash-preseve-affinity.cfg   | 52 
 7 files changed, 254 insertions(+), 4 deletions(-)
 create mode 100644 reg-tests/balance/balance-hash-maxconn.vtc
 create mode 100644 reg-tests/balance/balance-hash-maxqueue.vtc
 create mode 100644 tests/conf/test-hash-preseve-affinity.cfg

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 8eb8db06f..132d32b2d 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -5969,6 +5969,7 @@ filter-  X
 X X
 fullconn  X  - X X
 guid  -  X X X
 hash-balance-factor   X  - X X
+hash-preserve-affinity   X  - X X
 hash-type X  - X X
 http-after-response   X (!)  X X X
 http-check commentX  - X X
@@ -7857,6 +7858,35 @@ hash-balance-factor 
 
   See also : "balance" and "hash-type".
 
+hash-preserve-affinity { always | maxconn | maxqueue }
+  Specify a method for assigning streams to servers with hash load balancing
+  when servers are saturated or have a full queue.
+
+  May be used in the following contexts: http
+
+  May be used in sections:   defaults | frontend | listen | backend
+                                yes   |    no    |   yes  |   yes
+
+  The following values can be specified:
+
+    - "always"   : this is the default strategy. A stream is assigned to a
+                   server based on hashing irrespective of whether the server
+                   is currently saturated.
+
+    - "maxconn"  : when selected, servers that have "maxconn" set and are
+                   currently saturated will be skipped. Another server will be
+                   picked by following the hashing ring. This has no effect on
+                   servers that do not set "maxconn". If all servers are
+                   saturated, the request is enqueued to the last server in
+                   the hash ring before the initially selected server.
+
+    - "maxqueue" : when selected, servers that have "maxconn" set, "maxqueue"
+                   set to a non-zero value (limited queue size) and currently
+                   have a full queue will be skipped. Another server will be
+                   picked by following the hashing ring. This h
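The ring-walk behavior the doc text above describes can be sketched as a toy
model. This is not HAProxy's actual implementation (see src/lb_chash.c for
that); the field names and data layout below are invented for illustration:

```python
import hashlib

def pick_server(key, ring, mode="always"):
    """Pick a server from a sorted consistent-hash ring (toy model).

    ring: list of (point, server) tuples sorted by point; each server is a
    dict with 'name', 'conns', 'maxconn', 'queue', 'maxqueue' (0 = unset).
    mode mirrors the directive's values: 'always' keeps strict affinity,
    'maxconn' skips servers at their connection limit, 'maxqueue' only skips
    servers that are saturated AND have a full bounded queue.
    """
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16) % (1 << 32)
    # First ring point at or after the hash, wrapping around to 0.
    start = next((i for i, (p, _) in enumerate(ring) if p >= h), 0)
    n = len(ring)
    for off in range(n):
        srv = ring[(start + off) % n][1]
        at_maxconn = bool(srv["maxconn"]) and srv["conns"] >= srv["maxconn"]
        queue_full = bool(srv["maxqueue"]) and srv["queue"] >= srv["maxqueue"]
        if mode == "always":
            return srv
        if mode == "maxconn" and not at_maxconn:
            return srv
        # maxqueue=0 means unlimited, so such a server is never skipped here.
        if mode == "maxqueue" and not (at_maxconn and queue_full):
            return srv
    # All servers saturated: enqueue on the last server on the ring before
    # the initially selected one, as the doc text describes.
    return ring[(start - 1) % n][1]
```

Under "always" the same key always lands on the same server; under "maxconn"
a saturated server is walked past to the next point on the ring.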

Re: [PATCH] MEDIUM: lb-chash: add directive hash-preserve-affinity

2025-03-21 Thread Willy Tarreau
On Fri, Mar 21, 2025 at 05:08:17PM +, psavalle wrote:
> > Thank you, pretty good work here! I have two requests below:
> 
> These all sound good, here's an updated patch. I have also added the
> directive to the 'index' in 'configuration.txt', which I had missed earlier.

Ah, I often miss it as well!

> Thank you for taking a look so quickly!

You're welcome, thanks to you for the quick turn around. I wish I
had reviewed it earlier, it could almost have landed in -dev8.

It all looks good to me, however I'm seeing a failure of the regtests
here:
  ...
   c2rxhdr|HTTP/1.1 503 Service Unavailable\r
   c2rxhdr|content-length: 107\r
   c2rxhdr|cache-control: no-cache\r
   c2rxhdr|content-type: text/html\r
   c2rxhdr|\r
   c2rxhdrlen = 107
   c2http[ 0] |HTTP/1.1
   c2http[ 1] |503
   c2http[ 2] |Service Unavailable
   c2http[ 3] |content-length: 107
   c2http[ 4] |cache-control: no-cache
   c2http[ 5] |content-type: text/html
   c2c-l|503 Service Unavailable
   c2c-l|No server is available to handle this request.
   c2c-l|
   c2bodylen = 107
  **   c2=== expect resp.status == 200
   c2EXPECT resp.status (503) == "200" failed
   c2b   rxhdr|HTTP/1.1 504 Gateway Time-out\r
   c2b   rxhdr|content-length: 92\r
   c2b   rxhdr|cache-control: no-cache\r
   c2b   rxhdr|content-type: text/html\r
   c2b   rxhdr|\r
   c2b   rxhdrlen = 103
   c2b   http[ 0] |HTTP/1.1
   c2b   http[ 1] |504
   c2b   http[ 2] |Gateway Time-out
   c2b   http[ 3] |content-length: 92
   c2b   http[ 4] |cache-control: no-cache
   c2b   http[ 5] |content-type: text/html
  ...

I'll have a look but need to be away from keyboard for an hour now,
I'm sharing this just in case you have an idea. I'm running with
HAPROXY_TEST_TIMEOUT=400 if that can help (the 504 above makes me
think it could be related but don't have the time to test otherwise
now).

Thanks!
Willy




Re: [PATCH] MEDIUM: lb-chash: add directive hash-preserve-affinity

2025-03-21 Thread Willy Tarreau
Hello!

On Fri, Mar 21, 2025 at 11:05:13AM +, psavalle wrote:
> Hello everyone,
> 
> This patch implements a new backend directive to control hash-based load
> balancing when servers are at the 'maxconn' limit or have a full queue. See
> https://github.com/haproxy/haproxy/issues/2893 for context.
> 
> Thank you!

Thank you, pretty good work here! I have two requests below:

> From e518ee79a56319c9781e62005da13f2c0064399b Mon Sep 17 00:00:00 2001
> From: psavalle 

Here, we'll need to have something that looks like a real person
name. If for whatever reason you don't want to leave your real
identity, maybe have someone accept to post the patch for you.

> --- a/src/cfgparse-listen.c
> +++ b/src/cfgparse-listen.c
> @@ -2533,6 +2533,34 @@ int cfg_parse_listen(const char *file, int linenum, 
> char **args, int kwm)
>   goto out;
>   }
>   }
> + else if (strcmp(args[0], "hash-preserve-affinity") == 0) {

We're trying to get rid of all those strcmp() for first-level keywords
because they prevent the keywords from being enumerated. Instead it's
possible to write the same code in a small parsing function that's
registered for the keyword. Could you please have a look at the
"retry-on" keyword which is parsed by proxy_parse_retry_on(), which
does essentially the same ? You'll have to reference it in the same
file in the cfg_kws[] array (just search for proxy_parse_retry_on,
you'll find it).

Thanks to this, you can then verify using "haproxy -dKcfg -f /dev/null"
that your new keyword is listed. This is useful for people who implement
parsers, APIs etc to detect new keywords addition.
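The registration pattern Willy describes, a keyword table instead of a chain
of strcmp() tests, can be modeled in a few lines. This is only a sketch of the
pattern; none of the names below are HAProxy's real cfg_kws[] API:

```python
# Keywords register a parser in a table, so the full keyword set can be
# enumerated (the equivalent of what `haproxy -dKcfg` dumps). All names
# here are illustrative.
CFG_KEYWORDS = {}

def register_keyword(name):
    def wrap(parser):
        CFG_KEYWORDS[name] = parser
        return parser
    return wrap

@register_keyword("hash-preserve-affinity")
def parse_hash_preserve_affinity(args, proxy):
    allowed = {"always", "maxconn", "maxqueue"}
    if len(args) != 1 or args[0] not in allowed:
        raise ValueError("expects one of: " + ", ".join(sorted(allowed)))
    proxy["hash_preserve_affinity"] = args[0]

def parse_line(line, proxy):
    kw, *args = line.split()
    if kw not in CFG_KEYWORDS:        # unknown keywords are detectable
        raise KeyError(kw)
    CFG_KEYWORDS[kw](args, proxy)

def list_keywords():                  # what a -dKcfg-style dump relies on
    return sorted(CFG_KEYWORDS)
```

With an if/strcmp chain buried in a big parse function, `list_keywords()` has
nothing to walk; with a table, tooling can discover new keywords for free.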

No other comment about this, your code looks clean and the doc and tests
look fine, so I'm definitely willing to merge it once this is addressed.

Thank you!
Willy




Re: [PATCH] MEDIUM: lb-chash: add directive hash-preserve-affinity

2025-03-21 Thread Aleksandar Lazic

Hi.

On 2025-03-21 (Fr.) 12:05, psavalle wrote:

Hello everyone,

This patch implements a new backend directive to control hash-based load 
balancing when servers are at the 'maxconn' limit or have a full queue. See 
https://github.com/haproxy/haproxy/issues/2893 for context.


Isn't that the idea behind `option redispatch` 
https://docs.haproxy.org/3.1/configuration.html#4.2-option%20redispatch


How about to add there the option `maxconn` and/or `maxqueue` instea of adding a 
new keyword?



Thank you!


Regards
Alex

[patch snipped]




Re: [RFC PATCH] DOC: resolvers: recommend disabling libc resolution

2025-03-17 Thread Lukas Tribus
Hi Willy,


On Thu, 13 Mar 2025 at 18:33, Willy Tarreau  wrote:
>
> OK, I never thought about all of this!
>
> > For example case 1:
> > An admin configures private resolvers in TCP mode to avoid issues with
> > bigger response sizes, however unbeknownst to him TCP mode is not
> > available/reachable for unrelated issues. The same name servers are
> > configured in /etc/resolv.conf, so libc is able to resolve the private
> > server IPs without issues, because libc uses UDP before falling back
> > to TCP.
>
> I generally agree (though I don't know how frequent this is). Also
> this raises the point of the relevance of parse-resolv-conf then:
> if discrepancies are this common, should we also discourage from
> using this option ?

I don't dislike parse-resolv-conf, I think it's useful for cloud deployments.

It can work fine with libc resolution disabled, we just need to make
that clear imo.



> > I really want to drive home the point that this is not *only* about
> > different DNS servers, but also about different resolution behavior
> > when using the same DNS servers, because our code and libc code is not
> > the same.
>
> Totally got it now, thank you.
>
> I'm fine then with generally discouraging people from using the two at
> once.
>
> Maybe we could think about changing the default init-addr over the
> long term when resolvers are used (not sure this is easy to do, think
> about defaults). Or maybe we could add another variant of libc, such as
> "opt-libc" which would be libc only if resolvers is not used, and switch
> to that by default. What do you think ?
>
> The only problem is that I'd like to be able to emit a warning before
> changing that, and we probably don't want to force everyone to specify
> init-addr everywhere if we'd want to change a default later. Or maybe
> the resolution error should detect that resolvers are there and report
> that the setting changed. It would not be super cool but it could help
> users avoid serious traps.

Yes, ideally we would do more than just doc updates, but indeed
considering that every backend server can have a different resolver
config this could be non trivial.

Perhaps we could start with just a configuration warning when a
backend server is haproxy resolver enabled but libc still enabled for
the server?

We just have to keep in mind that this is really a per server config knob.



> And the other thing is to improve the doc. I'm pretty sure I'm far from
> being the only one around not thinking about all these "details" between
> libc and resolvers.

My attempt in this patch was sparse:

+  - disabling libc based initial name resolution with the "init-addr" server
+setting is recommended to avoid using two different name resolution
+strategies, as their behavior will diverge.


I'm not sure if making it dense is beneficial or if it just becomes
more confusing. If I make the FQDN / search domain example, people
will just think that it doesn't impact them if they run a full FQDN
config, yet it is just one of many examples and it will never be
possible to predict / list them all.


Perhaps just elaborating a little bit like:

+  - disabling libc based initial name resolution with the "init-addr" server
+setting is recommended to avoid using two different name resolution
+strategies, as their behavior will diverge. Due to the possible
+behavior difference, initial libc resolution may hide problems related
+to haproxy resolver.


It would be useful to get some user feedback regarding this topic.



Regards,

Lukas




Re: [RFC PATCH] DOC: resolvers: recommend disabling libc resolution

2025-03-13 Thread Willy Tarreau
Hi Lukas,

On Thu, Mar 13, 2025 at 02:38:31PM +0100, Lukas Tribus wrote:
> You are thinking of a case where resolv.conf points to some recursive
> nameserver, and the haproxy configuration resolver config points to
> different ones.
> 
> However DNS is complex and there are *a lot* of behavior differences
> one can shoot himself in the foot with, other than using different
> servers.
> 
> 
> - for libc we can use gethostbyname or getaddrinfo based on how
> haproxy was built, impacting address family results
> - resolv.conf does not force udp or tcp, libc decides, and in most but
> not all libcs a UDP query automatically falls back to TCP
> - haproxy resolves explicitly either via UDP or TCP, it is the user's
> responsibility to fix issues and configure fallbacks
> - haproxy only supports FQDNs while libc may search domains (man 5
> resolv.conf has lots of options)
> - handling bigger responses is likely different
> - EDNS0 handling is likely different
> - handling of DNS flags is likely different

OK, I never thought about all of this!

> For example case 1:
> An admin configures private resolvers in TCP mode to avoid issues with
> bigger response sizes, however unbeknownst to him TCP mode is not
> available/reachable for unrelated issues. The same name servers are
> configured in /etc/resolv.conf, so libc is able to resolve the private
> server IPs without issues, because libc uses UDP before falling back
> to TCP.

I generally agree (though I don't know how frequent this is). Also
this raises the point of the relevance of parse-resolv-conf then:
if discrepancies are this common, should we also discourage from
using this option ?

> How much time and back and forth does this need in a support call, to
> find out that the haproxy internal resolver never run-time *updated*
> the server IPs because it never worked in the first place, hidden by
> the libc resolver which makes everything apparently work, when it
> would have been immediately obvious if libc resolution was disabled?

Oh yes. You know I'm for failing early!

> For example case 2:
> Lack of FQDN: same as case 1, libc searches a hostname in the local
> domain, haproxy does not. Again the internal resolver will fail to
> update server IPs and libc will hide this problem for some time.

Hmmm, interesting as well, and trivial to trigger. Even just having
an FQDN with an entry in /etc/hosts.

> Even Luke's problem in this case was not really related to the
> differing results of nameservers, but to the distinction between where
> libc resolving stops and haproxy internal resolving starts.
> 
> Every subtle difference in behavior can make the difference between a
> simple and a complex diagnosis, when two different implementations are
> involved, whether the root cause is an external factor, a local
> misconfiguration or a bug.

Based on your explanation, I agree. I do think, however, that this is
far from being obvious and needs to be mentioned somewhere. In the doc.
I always consider that a user asking for help or reporting an issue is
a failure of the doc.

> > > -carry on doing the first resolution when parsing the configuration.
> > > +keep trying to resolve names at startup during configuration parsing via 
> > > libc
> > > +for backwards compatibility.
> >
> > "keep trying" makes me think it insists, which is not true because at
> > the first error it fails to start. However, the libc resolvers are
> > generally blocking, and can be slow since serialized. Probably that
> > all of these concepts should be handled to clarify the picture.
> 
> Yeah, I didn't like "carry on", but it also works without it:
> 
> > Whether run time server name resolution has been enable or not, HAProxy will
> > do the first resolution at startup during configuration parsing via libc
> > for backwards compatibility.

OK.

> > Something along these lines maybe ?
> >
> >   Unless explicitly disabled via the server "init-addr" keyword, HAProxy
> >   will resolve server addresses on startup using the standard method
> >   provided by the operating system's C library ("libc"). It is important
> >   to understand that while this resolution generally relies on DNS, it
> >   can also involve other mechanisms that are specific to the deployment.
> >   If an address cannot be resolved, the process will stop with an error.
> >   In addition, resolutions are serialized, so that resolving addresses
> >   for 1000 servers will result in 1000 request-response cycles, which
> >   can take quite some time. Also, when DNS servers are unreachable or
> >   unresponsive, the libc can take a very long time before timing out for
> >   each and every server, rendering a startup impractical. Finally, if the
> >   servers are configured to rely on a "resolvers" section that references
> >   different DNS servers, the response from the libc might cause startup
> >   errors, or worse, long delays. For this reason it is important not to
> >   mix libc with other resolvers, and adjust the "init-addr" serve

Re: [RFC PATCH] DOC: resolvers: recommend disabling libc resolution

2025-03-13 Thread Lukas Tribus
On Thu, 13 Mar 2025 at 08:23, Willy Tarreau  wrote:
>
> Hi Lukas,
>
> On Tue, Mar 11, 2025 at 03:26:59PM +, Lukas Tribus wrote:
> > Using both libc and haproxy resolvers can lead to hard to diagnose issues
> > when their behaviour diverges; recommend using only one type of resolver.
> >
> > Should be backported to stable versions.
> > ---
> >
> > > I think the docs could be updated to reflect this.
> >
> > That's my opinion at least, so here is an RFC doc patch for this.
> >
> > I don't know if others agree; there may be corner cases I'm not thinking
> > of.
>
> I'm thinking that maybe we should soften the language a bit to explain
> that the problem is mixing libc with resolvers that do not come from
> resolv.conf.

I disagree because there are lots of possible problems with this
configuration, using different nameserver is just one of them.


> In fact originally when DNS started to be useful, we've seen a lot of
> just standard resolv.conf being used, to the point that a new option
> "parse-resolv-conf" was added to ease this.
>
> I think that over time the ecosystem has matured a bit and cleaned up
> that mess, leaving users with some servers for the service discovery
> and the servers used by the system (if at all). And in my opinion,
> the problem arises in this specific case.

You are thinking of a case where resolv.conf points to some recursive
nameserver, and the haproxy configuration resolver config points to
different ones.

However DNS is complex and there are *a lot* of behavior differences
one can shoot himself in the foot with, other than using different
servers.


- for libc we can use gethostbyname or getaddrinfo based on how
haproxy was built, impacting address family results
- resolv.conf does not force udp or tcp, libc decides, and in most but
not all libcs a UDP query automatically falls back to TCP
- haproxy resolves explicitly either via UDP or TCP, it is the user's
responsibility to fix issues and configure fallbacks
- haproxy only supports FQDNs while libc may search domains (man 5
resolv.conf has lots of options)
- handling bigger responses is likely different
- EDNS0 handling is likely different
- handling of DNS flags is likely different
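The FQDN / search-domain point in the list above is worth a concrete
illustration. Below is a simplified model of the candidate query names a
glibc-style stub resolver derives from resolv.conf "search" and "ndots"
settings; real libc behavior has many more options, so this is only a sketch:

```python
def candidate_names(name, search_domains, ndots=1):
    """Return the query names a glibc-like stub resolver would try, in order.

    A name with a trailing dot is treated as fully qualified. Otherwise, if
    the name contains fewer than `ndots` dots, the search domains are tried
    before the literal name; if it contains `ndots` or more, the literal name
    is tried first. HAProxy's internal resolver, by contrast, only ever
    queries the literal name.
    """
    if name.endswith("."):
        return [name.rstrip(".")]
    literal = [name]
    searched = [f"{name}.{d}" for d in search_domains]
    if name.count(".") < ndots:
        return searched + literal   # search list first for "short" names
    return literal + searched       # literal first for qualified-looking names
```

So for `server db1 db1:5432 ...`, libc may happily resolve
`db1.corp.example.com` via the search list while the internal resolver's query
for the bare `db1` fails, which is exactly the "libc hides the problem"
scenario of example case 2 below.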

For example case 1:
An admin configures private resolvers in TCP mode to avoid issues with
bigger response sizes, however unbeknownst to him TCP mode is not
available/reachable for unrelated issues. The same name servers are
configured in /etc/resolv.conf, so libc is able to resolve the private
server IPs without issues, because libc uses UDP before falling back
to TCP.

How much time and back and forth does this need in a support call, to
find out that the haproxy internal resolver never run-time *updated*
the server IPs because it never worked in the first place, hidden by
the libc resolver which makes everything apparently work, when it
would have been immediately obvious if libc resolution was disabled?

For example case 2:
Lack of FQDN: same as case 1, libc searches a hostname in the local
domain, haproxy does not. Again the internal resolver will fail to
update server IPs and libc will hide this problem for some time.

Even Luke's problem in this case was not really related to the
differing results of nameservers, but to the distinction between where
libc resolving stops and haproxy internal resolving starts.

Every subtle difference in behavior can make the difference between a
simple and a complex diagnosis, when two different implementations are
involved, whether the root cause is an external factor, a local
misconfiguration or a bug.



> > @@ -18242,13 +18242,19 @@ init-addr {last | libc | none | },[...]*
> >instances on the fly. This option defaults to "last,libc" indicating 
> > that the
> >previous address found in the state file (if any) is used first, 
> > otherwise
> >the libc's resolver is used. This ensures continued compatibility with 
> > the
> > -  historic behavior.
> > +  historic behavior. When using the haproxy resolvers disabling libc based
> > +  resolution is recommended, also see section 5.3.
>
> Maybe we could say something along:
>
> "When haproxy explicitly uses different resolvers than the system's
>  ones, disabling libc based resolution is highly recommended, also
>  see section 5.3"
>
> or any variant ? What do you (and Luke) think ?

This is like saying as long as you are pointing to the same name
servers dual resolution is fine, which I disagree for the reason
mentioned above.


>
> > +  Example 2:
> > +  defaults
> > +  # disable libc resolution when using resolvers
> > +  default-server init-addr last,none
>
> Then here we could say "when using different resolvers".
>
> >  inter 
> >  fastinter 
> >  downinter 
> > @@ -19281,7 +19287,8 @@ workload.
> >  This chapter describes how HAProxy can be configured to process server's 
> > name
> >  resolution at run time.
> >  Whether run time server name resolution has been enable or not, HAProxy 
> > will
> > -carry on d

Re: Question about mimalloc (pronounced "me-malloc") and HAProxy

2025-03-13 Thread Willy Tarreau
Hi Alex,

On Wed, Mar 12, 2025 at 11:55:45PM +0100, Aleksandar Lazic wrote:
> Hi Willy
> 
> On 2025-02-08 (Sa.) 14:18, Aleksandar Lazic wrote:
> > Hi Willy.
> > 
> > On 2025-02-08 (Sa.) 05:49, Willy Tarreau wrote:
> > > Hi Alex,
> > > 
> 
> [snipp]
> 
> > > I'll ping the few heavy users who experienced watchdogs in free(), in
> > > case they want to give it a try.
> > 
> > Great, I'm very curious if this library could have any impact (+,-,=) to 
> > HAP :-)
> 
> Did you get any feedback from the heavy users?

No, and I don't even remember if I passed the word, I'll have to recheck.
I'm currently busy finishing the almost 2-yr old numa stuff as well as
trying to finish some fixes before doing backports and a new stable series.

Cheers,
Willy




Re: Question about mimalloc (pronounced "me-malloc") and HAProxy

2025-03-13 Thread Aleksandar Lazic

Hi Willy.

On 2025-03-13 (Do.) 08:03, Willy Tarreau wrote:

Hi Alex,

On Wed, Mar 12, 2025 at 11:55:45PM +0100, Aleksandar Lazic wrote:

Hi Willy

On 2025-02-08 (Sa.) 14:18, Aleksandar Lazic wrote:

Hi Willy.

On 2025-02-08 (Sa.) 05:49, Willy Tarreau wrote:

Hi Alex,



[snipp]


I'll ping the few heavy users who experienced watchdogs in free(), in
case they want to give it a try.


Great, I'm very curious if this library could have any impact (+,-,=) to HAP :-)


Did you get any feedback from the heavy users?


No, and I don't even remember if I passed the word, I'll have to recheck.
I'm currently busy finishing the almost 2-yr old numa stuff as well as
trying to finish some fixes before doing backports and a new stable series.


Okay, never mind. We'll see what the future brings.


Cheers,
Willy


Regards
Alex




Re: [RFC PATCH] DOC: resolvers: recommend disabling libc resolution

2025-03-13 Thread Willy Tarreau
Hi Lukas,

On Tue, Mar 11, 2025 at 03:26:59PM +, Lukas Tribus wrote:
> Using both libc and haproxy resolvers can lead to hard to diagnose issues
> when their behaviour diverges; recommend using only one type of resolver.
> 
> Should be backported to stable versions.
> ---
> 
> > I think the docs could be updated to reflect this.
> 
> That's my opinion at least, so here is an RFC doc patch for this.
> 
> I don't know if others agree; there may be corner cases I'm not thinking
> of.

I'm thinking that maybe we should soften the language a bit to explain
that the problem is mixing libc with resolvers that do not come from
resolv.conf.

In fact originally when DNS started to be useful, we've seen a lot of
just standard resolv.conf being used, to the point that a new option
"parse-resolv-conf" was added to ease this.

I think that over time the ecosystem has matured a bit and cleaned up
that mess, leaving users with some servers for the service discovery
and the servers used by the system (if at all). And in my opinion,
the problem arises in this specific case.

> @@ -18242,13 +18242,19 @@ init-addr {last | libc | none | },[...]*
>instances on the fly. This option defaults to "last,libc" indicating that 
> the
>previous address found in the state file (if any) is used first, otherwise
>the libc's resolver is used. This ensures continued compatibility with the
> -  historic behavior.
> +  historic behavior. When using the haproxy resolvers disabling libc based
> +  resolution is recommended, also see section 5.3.

Maybe we could say something along:

"When haproxy explicitly uses different resolvers than the system's
 ones, disabling libc based resolution is highly recommended, also
 see section 5.3"

or any variant ? What do you (and Luke) think ?

> +  Example 2:
> +  defaults
> +  # disable libc resolution when using resolvers
> +  default-server init-addr last,none

Then here we could say "when using different resolvers".

>  inter 
>  fastinter 
>  downinter 
> @@ -19281,7 +19287,8 @@ workload.
>  This chapter describes how HAProxy can be configured to process server's name
>  resolution at run time.
>  Whether run time server name resolution has been enable or not, HAProxy will
> -carry on doing the first resolution when parsing the configuration.
> +keep trying to resolve names at startup during configuration parsing via libc
> +for backwards compatibility.

"keep trying" makes me think it insists, which is not true because at
the first error it fails to start. However, the libc resolvers are
generally blocking, and can be slow since serialized. Probably that
all of these concepts should be handled to clarify the picture.
Something along these lines maybe ?

  Unless explicitly disabled via the server "init-addr" keyword, HAProxy
  will resolve server addresses on startup using the standard method
  provided by the operating system's C library ("libc"). It is important
  to understand that while this resolution generally relies on DNS, it
  can also involve other mechanisms that are specific to the deployment.
  If an address cannot be resolved, the process will stop with an error.
  In addition, resolutions are serialized, so that resolving addresses
  for 1000 servers will result in 1000 request-response cycles, which
  can take quite some time. Also, when DNS servers are unreachable or
  unresponsive, the libc can take a very long time before timing out for
  each and every server, rendering startup impractical. Finally, if the
  servers are configured to rely on a "resolvers" section that references
  different DNS servers, the response from the libc might cause startup
  errors, or worse, long delays. For this reason it is important not to
  mix libc with other resolvers, and adjust the "init-addr" server setting
  according to the desired behavior.
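To make the recommendation concrete, here is a minimal configuration sketch
along those lines (the nameserver and server addresses are invented for
illustration only; this is not part of the patch itself):

```
resolvers mydns
    nameserver ns1 10.0.0.53:53

defaults
    # skip libc resolution at startup; resolve at run time only
    default-server init-addr last,none

backend app
    server app1 app1.example.com:443 resolvers mydns check
```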

I'm fine with any other proposal, I just want to be sure that these points
are clarified, because clearly the DNS part in the doc suffers quite a bit
and would deserve a refresh!

Thanks!
Willy




Re: Is `balance leastconn` still not recommend for http?

2025-03-12 Thread John Lauro
As to `balance leastconn`, I don't know if it's recommended but it's
what I normally use.  My backends are often not equal (some have more
horsepower (different model physical servers, different loads from vms
running on them, etc), some higher latency because of different
cities), etc...  and leastconn seems like it should do the best at
balancing unequal backends, automatically giving those that respond
the fastest the most load.  It would be interesting to see those
benchmarks rerun with different resource capabilities on those
backends to see how they compare.  Sometimes you can know and use
weight, but not if that's all dynamic and you don't always know...

On Wed, Mar 12, 2025 at 6:55 PM Aleksandar Lazic  wrote:
>
> Hi.
>
> In the doc is this text.
>
> https://docs.haproxy.org/3.1/configuration.html#4-balance
>
> ```
> #snip
>
>leastconn   The server with the lowest number of connections receives the
>connection. Round-robin is performed within groups of servers
>of the same load to ensure that all servers will be used. Use
>of this algorithm is recommended where very long sessions are
>expected, such as LDAP, SQL, TSE, etc... but is not very well
>suited for protocols using short sessions such as HTTP. 
> ```
>
> But when I take a look at that benchmark, I would say that leastconn
> outperforms the other algorithms.
>
> https://www.haproxy.com/blog/power-of-two-load-balancing
>
> That's now the question:
>
> Is `balance leastconn` still not recommended for http workload?
>
> Regards
> Alex
>
>




Re: Is `balance leastconn` still not recommend for http?

2025-03-12 Thread Willy Tarreau
Hi!

On Wed, Mar 12, 2025 at 10:34:04PM -0400, John Lauro wrote:
> As to `balance leastconn`, I don't know if it's recommended but it's
> what I normally use.  My backends are often not equal (some have more
> horsepower (different model physical servers, different loads from vms
> running on them, etc), some higher latency because of different
> cities), etc...  and leastconn seems like that should do the best in
> balancing unequal backends, automatically giving those that respond
> the fastest the most load.  It would be interesting to see those
> benchmarks if they set different resource capabilities on those
> backends and see how they compare.  Sometimes you can know and use
> weight, but if that's all dynamic and you don't always know

The case where it's not recommended is when assigning long user sessions
to servers using cookies, because in this case the server is chosen based
on the instant load of the server, and the user is assigned for a long
period. In this case you can easily end up with many more users on the
few slightly faster servers than on the other ones.

Another thing is that leastconn consumes much more CPU than other algos
(particularly when dealing with many threads). The reason for this is
that a server has to be moved twice: once when picked (the number of
connections increases) and once when released (the number of connections
decreases). Not so long ago we still occasionally managed to trigger
some watchdogs on it under extreme conditions.

One nice alternative to leastconn is random. Random is not just random,
it in fact picks a few servers (2 by default), compares their numbers
of connections, and chooses the least loaded one. It's also called
"power of two choices". I've found that it performs almost as well as
leastconn for long connections and sometimes even better for short
ones that lead to cookie-based stickiness. And it's lockless, so when
using threads, it can become cheaper ;-)

If your workload is really sensitive to the number of connections,
I would suggest to even try "random(3)" which will compare up to 3
servers. Leastconn will always provide the lowest max number of
connections on each server, but the likelihood of seeing it perform
better than random() decreases as the number of draws in argument
to random() increases.
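To illustrate the difference Willy describes, here is a small, self-contained
toy simulation (not HAProxy code, and a simplified model where connections are
only ever added, never released) comparing plain random, random(2) a.k.a.
"power of two choices", and leastconn:

```python
import random

def assign(n_requests, n_servers, picker, rng):
    """Distribute n_requests across n_servers using the given picker."""
    loads = [0] * n_servers
    for _ in range(n_requests):
        loads[picker(loads, rng)] += 1
    return loads

def pick_random(loads, rng):
    # plain random: a single draw, no load comparison at all
    return rng.randrange(len(loads))

def pick_p2c(loads, rng):
    # "power of two choices": draw two servers, keep the less loaded one
    a, b = rng.randrange(len(loads)), rng.randrange(len(loads))
    return a if loads[a] <= loads[b] else b

def pick_leastconn(loads, rng):
    # always take the globally least loaded server
    return loads.index(min(loads))

if __name__ == "__main__":
    for name, picker in [("random", pick_random), ("random(2)", pick_p2c),
                         ("leastconn", pick_leastconn)]:
        loads = assign(20000, 10, picker, random.Random(42))
        print(f"{name:10s} max-min spread: {max(loads) - min(loads)}")
```

In this toy model random(2) keeps the spread between servers within a handful
of connections of leastconn's, while plain random drifts far wider, which
matches the general point of the blog post linked below.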

I had performed some tests in 2019 about this:

   https://www.haproxy.com/blog/power-of-two-load-balancing

Though by then we weren't running with massive threads like the crazy
stuff we're seeing these days!

Willy




Re: Question about mimalloc (pronounced "me-malloc") and HAProxy

2025-03-12 Thread Aleksandar Lazic

Hi Willy

On 2025-02-08 (Sa.) 14:18, Aleksandar Lazic wrote:

Hi Willy.

On 2025-02-08 (Sa.) 05:49, Willy Tarreau wrote:

Hi Alex,



[snipp]


I'll ping the few heavy users who experienced watchdogs in free(), in
case they want to give it a try.


Great, I'm very curious if this library could have any impact (+,-,=) to HAP :-)


Did you get any feedback from the heavy users?


Thank you!
Willy


Regards
Alex




RE: [Request received]

2025-03-11 Thread Antifraud Response
Hello Team,

We are making a follow up email on the abuse case : 124492

We still see the reported website active.

We request your assistance in prioritizing this issue and resolve this as soon 
as possible.

Please reach out to us for any evidence needed to expedite the process.

Regards,
Vineeth Jyothi
Analyst | Global AntiFraud team
24x7 GSRT: 1 866 880 7025
24x7 GSRT Intl: 1 408 477 7706
antifraud.respo...@cscglobal.com

CSC®
251 Little Falls Drive
Wilmington, Delaware 19808-1674
USA
cscdbs.com

We are the business behind business
North America:  1 888 780 2723 or 1 902 746 5200
EMEA: +44 020 7751 0055
APAC: 1 800 CSC DBS or +61 396119519



From: Antifraud Response 
Sent: Tuesday, March 4, 2025 9:34 PM
To: 'IPRoyal support' 
Cc: network ; haproxy ; Antifraud 
Response 
Subject: RE: [Request received]

Hello Team,

We seek an update on the ticket number : 124492.

CSC Digital Brand Services is working on behalf of Invesco Group Services, Inc. 
as their authorized agent charged with bringing infringing online content into 
compliance, and we have been informed that there is currently a website hosted 
at 185.68.16.9 that is impersonating Invesco Group Services, Inc. and involved 
in an investment scam targeting its customers.


The infringing content can be found at:
https://www.invesco-properties.space/
https://www.invescope.space/
http://invesco-group.site

On behalf of Invesco Group Services, Inc., CSC Digital Brand Services requests 
that the Web site(s) listed above be deactivated immediately.

Regards,

Harish Kurapati
Analyst | Global AntiFraud Team
24x7 GSRT: 1 866 880 7025
24x7 GSRT Intl: 1 408 477 7706
antifraud.respo...@cscglobal.com

CSC®
251 Little Falls Drive
Wilmington, Delaware 19808-1674
USA
cscdbs.com




From: IPRoyal support mailto:supp...@iproyal.com>>
Sent: Sunday, March 2, 2025 9:36 AM
To: Antifraud Response 
mailto:antifraud.respo...@cscglobal.com>>
Cc: network mailto:netw...@abuse.team>>; haproxy 
mailto:haproxy@formilux.org>>
Subject: [Request received]


Your request (124492) has been received and is being reviewed by our support 
staff.

To add additional comments, reply to this email.
The content of this email is confidential and intended for the recipient 
specified in the message only. It is strictly forbidden to share any part of 
this message with any third party, without the written consent of the sender. 
If you received this message by mistake, please reply to this message and 
follow with its deletion, so that we can ensure such a mistake does not occur 
in the future.
[4W0436-PEKK4]


Re: [3.0.8] Nameserver with source param

2025-03-11 Thread Willy Tarreau
On Mon, Mar 10, 2025 at 03:17:49PM +0700, Luke Seelenbinder wrote:
> >> Our init-addr is `init-addr libc,last,none`. Due to a complex set of 
> >> factors,
> >> using libc to resolve a host can simply hang, instead of fail. When HAProxy
> >> starts up and libc hangs, the startup times out instead of failing with no 
> >> IP
> >> (i.e. `none`).
> > 
> > Ah OK makes sense. That said, if a regular server does not resolve,
> > normally it doesn't boot. You mean that here it still boots with no
> > address ?
> 
> No, that's the problem; we'd prefer HAProxy to have no IP for a server at
> boot than to fail to boot, but since libc just hangs instead of failing, it
> doesn't finish booting, and the startup times out. This ends up creating a
> service restart cycle.

Ah got it now!

> This means the docs are slightly misleading, however? Since `init-addr
> libc,last,none` results in a hang if libc just hangs vs failing.

Sure but we don't know that libc fails until it responds :-/ The fallback
here corresponds to the cases where it says "I can't resolve that one",
which is not the case in your situation.

> I don't know
> if a hard or configurable timeout on the underlying call makes sense?

I really have no idea if it's doable at all, to be honest. A function
is called and the program is blocked for all this time. I'm not sure
this is cancellable. I can imagine that those relying on a database for
example can have difficulties stopping a pending request :-/
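For what it's worth, the usual userspace trick — sketched here in Python purely
as an illustration, not as something HAProxy could simply adopt — is to run the
blocking call in a worker thread and stop waiting after a deadline. Note that
the thread itself cannot be cancelled and stays stuck until the underlying call
returns, which is exactly the limitation described above (the real blocking
call would be getaddrinfo(); a sleep stands in for it here):

```python
import concurrent.futures
import time

def blocking_resolve(name, delay=2.0):
    # Stand-in for a blocking libc call such as getaddrinfo(); it
    # sleeps for `delay` seconds to simulate an unresponsive resolver.
    time.sleep(delay)
    return "192.0.2.1"  # RFC 5737 documentation address, purely illustrative

def resolve_with_deadline(name, timeout, delay=2.0):
    """Wait at most `timeout` seconds for the blocking call; return None
    on expiry. The worker thread keeps running until the call returns."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(blocking_resolve, name, delay).result(timeout=timeout)
    except concurrent.futures.TimeoutError:
        return None
    finally:
        # wait=False: do not block on the (possibly stuck) worker thread
        pool.shutdown(wait=False)
```

The caller gets its timeout, but the process still carries the stuck thread
until the call finally returns — so this only hides the hang, it does not
remove it.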

> >> Is there a way to set the timeout for a libc address resolution? We may be
> >> able to drop `libc` in the init-addr list entirely due to a generally 
> >> better
> >> setup now, but it's useful in some cases.
> > 
> > I'm not aware of any way to tune the libc's resolver, though if there
> > is, it will be libc-specific, and even specific to the backend used by
> > the libc. What could be done, however, could indeed be to use a plain
> > IP address, but passing it via an environment variable in the global
> > section (or sourced from another file). This may be easier to handle
> > than hard-coding IP addresses. E.g:
> > 
> >global
> >setenv NS1_ADDR   tcp@10.11.12.1:5353
> >setenv NS2_ADDR   tcp@10.11.12.2:5353
> > 
> >resolvers
> >nameserver ns1 "$NS1_ADDR"
> >nameserver ns2 "$NS2_ADDR"
> 
> Fair enough. I figured that would be the answer... In that case, I think the
> best option for us is to remove `libc` entirely, and make sure our failure
> modes are good enough if it comes up without a backend for a few seconds.

OK. Initially I wanted us to support direct access to resolvers via
resolv.conf, but this would be a chicken-and-egg problem as it would
either require to have the polling loop already running, or to have
a duplicate code part just for that. Another approach could be to have
a "hosts" mode that relies on /etc/hosts that we'd parse ourselves
maybe, but I have not seen any demand for this so I doubt there's much
sympathy for this :-/
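Purely to illustrate how small such a "hosts" mode parser could be (a sketch of
the idea, not proposed HAProxy code; format per hosts(5): address, canonical
name, optional aliases, '#' comments):

```python
def parse_hosts(text):
    """Parse /etc/hosts-style content into a {name: address} table.
    The first mapping for a name wins, as with typical resolvers."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        fields = line.split()
        addr, names = fields[0], fields[1:]
        for name in names:
            table.setdefault(name.lower(), addr)
    return table

sample = """\
127.0.0.1   localhost
# backend servers
192.0.2.10  app1.example.com app1
"""
```

For instance, parse_hosts(sample) maps both "app1" and "app1.example.com"
to "192.0.2.10".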

Willy




Re: [3.0.8] Nameserver with source param

2025-03-11 Thread Lukas Tribus
On Mon, 10 Mar 2025 at 09:17, Luke Seelenbinder
 wrote:
>
> Hi Willy,
>
> > On Mar 10, 2025, at 15:12, Willy Tarreau  wrote:
> >
> > Hi Luke,
> >
> > On Mon, Mar 10, 2025 at 12:54:51PM +0700, Luke Seelenbinder wrote:
> >> Hi all,
> >>
> >> Thanks for confirming it should work--based on the feedback, we realized 
> >> the
> >> issue is actually not with `nameservers`, but on startup, so this is 
> >> actually
> >> init-addr taking precedence. We missed that in the initial analysis.
> >>
> >> Our init-addr is `init-addr libc,last,none`. Due to a complex set of 
> >> factors,
> >> using libc to resolve a host can simply hang, instead of fail. When HAProxy
> >> starts up and libc hangs, the startup times out instead of failing with no 
> >> IP
> >> (i.e. `none`).
> >
> > Ah OK makes sense. That said, if a regular server does not resolve,
> > normally it doesn't boot. You mean that here it still boots with no
> > address ?
>
> No, that's the problem; we'd prefer HAProxy to have no IP for a server at 
> boot than to fail to boot, but since libc just hangs instead of failing, it 
> doesn't finish booting, and the startup times out. This ends up creating a 
> service restart cycle.
>
> This means the docs are slightly misleading, however? Since `init-addr 
> libc,last,none` results in a hang if libc just hangs vs failing. I don't know 
> if a hard or configurable timeout on the underlying call makes sense?

It's a blocking syscall, I don't think there is anything that can be
done about it.

Your use case matches "init-addr none" (or last,none): only the
haproxy resolver will be used, haproxy will not refuse to start and
will not depend on libc name resolution.

Mixing libc and resolver resolution is imo always dangerous. libc
resolution can hide resolver issues, which then become harder to
diagnose, and vice versa.

But libc-based name resolution behavior itself is never controlled by
haproxy, only by OS/libc configuration.


Lukas




Re: [3.0.8] Nameserver with source param

2025-03-11 Thread Lukas Tribus
On Fri, 7 Mar 2025 at 21:32, Lukas Tribus  wrote:
>
> On Fri, 7 Mar 2025 at 18:42, Aurelien DARRAGON  wrote:
> >
> > Looking at the code, and testing it for TCP servers it does seem to be
> > supported. To confirm I tried to use a "bad" source address, and it
> > fails as expected:
> >
> > > [ALERT](104635) : Cannot bind to source address before connect() for 
> > > backend mybaddns. Aborting.
>
> In this case for me it does not actually abort and haproxy goes into a
> busy loop over this bind().

To reproduce this busy loop with a 5 line config:

lukas@dev:~/haproxy$ cat ../cert/dns-source-bind-short.cfg
resolvers default
 nameserver ns1 tcp4@8.8.8.8:53 source 192.168.99.99
listen listen
 mode http
 bind :8080
 server s1 www.google.com resolvers default init-addr none

lukas@dev:~/haproxy$ ./haproxy -f ../cert/dns-source-bind-short.cfg ^C
[ALERT](5227) : Cannot bind to source address before connect() for
backend default. Aborting.
[ALERT](5227) : Cannot bind to source address before connect() for
backend default. Aborting.
[ALERT](5227) : Cannot bind to source address before connect() for
backend default. Aborting.
[ALERT](5227) : Cannot bind to source address before connect() for
backend default. Aborting.
[ALERT](5227) : Cannot bind to source address before connect() for
backend default. Aborting.
[ALERT](5227) : Cannot bind to source address before connect() for
backend default. Aborting.
^C


Lukas




Re: [3.0.8] Nameserver with source param

2025-03-11 Thread Aurelien DARRAGON


On 3/10/25 18:39, Aurelien DARRAGON wrote:
> 
> resolver's code is probably trying to establish the connection attempt
> over and over without any tempo between 2 attempts.

Looks like this is the bit of code responsible for instantaneous
connection (re)attempt when a previous one just failed:

> index 14c811a10..737e48244 100644
> --- a/src/dns.c
> +++ b/src/dns.c
> @@ -912,6 +912,7 @@ static void dns_session_release(struct appctx *appctx)
>  
> /* Create a new appctx, We hope we can
>  * create from the release callback! */
> +   // THERE
> ds->appctx = dns_session_create(ds);
> if (!ds->appctx) {
> dns_session_free(ds);

The applet is unconditionally re-creating itself upon release()
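A common cure for this kind of reconnect storm — sketched here generically,
with no claim about how the actual fix in HAProxy looks — is to space the
attempts out with an exponentially growing delay instead of re-creating the
applet immediately:

```python
def backoff_delays(max_attempts, base=1.0, cap=8.0):
    """Yield an exponential backoff schedule: base, 2*base, 4*base, ...
    capped at `cap` seconds (jitter omitted to keep the sketch simple)."""
    delay = base
    for _ in range(max_attempts):
        yield delay
        delay = min(delay * 2, cap)
```

With the defaults, five attempts wait 1, 2, 4, 8 and 8 seconds respectively:
each failure backs off further, so an unreachable nameserver no longer burns
a CPU in a tight loop.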

Aurelien




Re: [3.0.8] Nameserver with source param

2025-03-10 Thread Luke Seelenbinder
Hi Lukas,

Thanks for your help!

> On Mar 10, 2025, at 21:49, Lukas Tribus  wrote:
> 
> It's a blocking syscall, I don't think there is anything that can be
> done about it.
> 
> Your use case matches "init-addr none" (or last,none): only the
> haproxy resolver will be used, haproxy will not refuse to start and
> will not depend on libc name resolution.

That makes sense. We'll make that change.

> 
> Mixing libc and resolver resolution is imo always dangerous. libc
> resolution can hide resolver issues, which than become harder to
> diagnose and vice versa.

I think the docs could be updated to reflect this. The current docs are quite 
unclear on the pitfalls of mixing the two, especially since the example contains 
the following and the docs for init-addr don't mention resolvers at all:

defaults
# never fail on address resolution
default-server init-addr last,libc,none

> 
> But libc based name resolution behavior itself is never up to haproxy,
> but OS/libc configuration.

Fair enough. :) There is a timeout option for resolv.conf, which we found, but I 
think moving to depending entirely on HAProxy resolvers is the best option 
anyway.

> Lukas

—
Luke Seelenbinder
Stadia Maps | Founder & CEO
stadiamaps.com




Re: [3.0.8] Nameserver with source param

2025-03-10 Thread Aurelien DARRAGON


On 3/10/25 15:39, Lukas Tribus wrote:
>> In this case for me it does not actually abort and haproxy goes into a
>> busy loop over this bind().
> 
> To reproduce this busy loop with a 5 line config:
> 
> lukas@dev:~/haproxy$ cat ../cert/dns-source-bind-short.cfg
> resolvers default
>  nameserver ns1 tcp4@8.8.8.8:53 source 192.168.99.99
> listen listen
>  mode http
>  bind :8080
>  server s1 www.google.com resolvers default init-addr none
> 
> lukas@dev:~/haproxy$ ./haproxy -f ../cert/dns-source-bind-short.cfg ^C
> [ALERT](5227) : Cannot bind to source address before connect() for
> backend default. Aborting.
> [ALERT](5227) : Cannot bind to source address before connect() for
> backend default. Aborting.
> [ALERT](5227) : Cannot bind to source address before connect() for
> backend default. Aborting.
> [ALERT](5227) : Cannot bind to source address before connect() for
> backend default. Aborting.
> [ALERT](5227) : Cannot bind to source address before connect() for
> backend default. Aborting.
> [ALERT](5227) : Cannot bind to source address before connect() for
> backend default. Aborting.
> ^C
> 
> 
> Lukas

Nice catch Lukas, can also reproduce without "source". Looks like
similar issue to what we recently fixed for tcp rings
(https://github.com/haproxy/haproxy/commit/9561b9fb6964af325a10e7128b563114f144a3cb).

resolver's code is probably trying to establish the connection attempt
over and over without any tempo between 2 attempts.

Aurelien





Re: [3.0.8] Nameserver with source param

2025-03-10 Thread Luke Seelenbinder
Hi Willy,

> On Mar 10, 2025, at 15:12, Willy Tarreau  wrote:
> 
> Hi Luke,
> 
> On Mon, Mar 10, 2025 at 12:54:51PM +0700, Luke Seelenbinder wrote:
>> Hi all,
>> 
>> Thanks for confirming it should work--based on the feedback, we realized the
>> issue is actually not with `nameservers`, but on startup, so this is actually
>> init-addr taking precedence. We missed that in the initial analysis.
>> 
>> Our init-addr is `init-addr libc,last,none`. Due to a complex set of factors,
>> using libc to resolve a host can simply hang, instead of fail. When HAProxy
>> starts up and libc hangs, the startup times out instead of failing with no IP
>> (i.e. `none`).
> 
> Ah OK makes sense. That said, if a regular server does not resolve,
> normally it doesn't boot. You mean that here it still boots with no
> address ?

No, that's the problem; we'd prefer HAProxy to have no IP for a server at boot 
than to fail to boot, but since libc just hangs instead of failing, it doesn't 
finish booting, and the startup times out. This ends up creating a service 
restart cycle.

This means the docs are slightly misleading, however? Since `init-addr 
libc,last,none` results in a hang if libc just hangs vs failing. I don't know 
if a hard or configurable timeout on the underlying call makes sense?

> 
>> Is there a way to set the timeout for a libc address resolution? We may be
>> able to drop `libc` in the init-addr list entirely due to a generally better
>> setup now, but it's useful in some cases.
> 
> I'm not aware of any way to tune the libc's resolver, though if there
> is, it will be libc-specific, and even specific to the backend used by
> the libc. What could be done, however, could indeed be to use a plain
> IP address, but passing it via an environment variable in the global
> section (or sourced from another file). This may be easier to handle
> than hard-coding IP addresses. E.g:
> 
>global
>setenv NS1_ADDR   tcp@10.11.12.1:5353
>setenv NS2_ADDR   tcp@10.11.12.2:5353
> 
>resolvers
>nameserver ns1 "$NS1_ADDR"
>nameserver ns2 "$NS2_ADDR"

Fair enough. I figured that would be the answer… In that case, I think the best 
option for us is to remove `libc` entirely, and make sure our failure modes are 
good enough if it comes up without a backend for a few seconds.

—
Luke Seelenbinder
Stadia Maps | Founder & CEO
stadiamaps.com





Re: [3.0.8] Nameserver with source param

2025-03-10 Thread Willy Tarreau
Hi Luke,

On Mon, Mar 10, 2025 at 12:54:51PM +0700, Luke Seelenbinder wrote:
> Hi all,
> 
> Thanks for confirming it should work--based on the feedback, we realized the
> issue is actually not with `nameservers`, but on startup, so this is actually
> init-addr taking precedence. We missed that in the initial analysis.
> 
> Our init-addr is `init-addr libc,last,none`. Due to a complex set of factors,
> using libc to resolve a host can simply hang, instead of fail. When HAProxy
> starts up and libc hangs, the startup times out instead of failing with no IP
> (i.e. `none`).

Ah OK makes sense. That said, if a regular server does not resolve,
normally it doesn't boot. You mean that here it still boots with no
address ?

> Is there a way to set the timeout for a libc address resolution? We may be
> able to drop `libc` in the init-addr list entirely due to a generally better
> setup now, but it's useful in some cases.

I'm not aware of any way to tune the libc's resolver, though if there
is, it will be libc-specific, and even specific to the backend used by
the libc. What could be done, however, could indeed be to use a plain
IP address, but passing it via an environment variable in the global
section (or sourced from another file). This may be easier to handle
than hard-coding IP addresses. E.g:

global
setenv NS1_ADDR   tcp@10.11.12.1:5353
setenv NS2_ADDR   tcp@10.11.12.2:5353

resolvers
nameserver ns1 "$NS1_ADDR"
nameserver ns2 "$NS2_ADDR"

Hoping this helps,
Willy




Re: [3.0.8] Nameserver with source param

2025-03-09 Thread Luke Seelenbinder
Hi all,

Thanks for confirming it should work—based on the feedback, we realized the 
issue is actually not with `nameservers`, but on startup, so this is actually 
init-addr taking precedence. We missed that in the initial analysis.

Our init-addr is `init-addr libc,last,none`. Due to a complex set of factors, 
using libc to resolve a host can simply hang, instead of fail. When HAProxy 
starts up and libc hangs, the startup times out instead of failing with no IP 
(i.e. `none`).

Is there a way to set the timeout for a libc address resolution? We may be able 
to drop `libc` in the init-addr list entirely due to a generally better setup 
now, but it's useful in some cases.

Best,
Luke

—
Luke Seelenbinder
Stadia Maps | Founder & CEO
stadiamaps.com

> On Mar 8, 2025, at 03:32, Lukas Tribus  wrote:
> 
> On Fri, 7 Mar 2025 at 18:42, Aurelien DARRAGON  wrote:
>> 
>> Looking at the code, and testing it for TCP servers it does seem to be
>> supported. To confirm I tried to use a "bad" source address, and it
>> fails as expected:
>> 
>>> [ALERT](104635) : Cannot bind to source address before connect() for 
>>> backend mybaddns. Aborting.
> 
> In this case for me it does not actually abort and haproxy goes into a
> busy loop over this bind().
> 
> 
> 
>> Using a correct address I see haproxy connecting using the proper
>> address (at least on the initial attempt)
> 
> Same here, it looks fine.
> 
> 
> 
>>>> In practice, we've found it may use another IPv6 (e.g., one bound for
>>>> failover), which results in resolution failures.
> 
> Not trying to nitpick the setup here, just trying to get the full
> picture, are you saying that IPv6 connectivity is broken for every
> application that does specify a specific source address?
> 
> Are you sure there is nothing else going on here and it is haproxy
> that fails source address selection? I think this is going to require
> strace 'ing the haproxy process during connection establishment of the
> DNS TCP session and more setup details.
> 
> 
> Lukas
> 
> 
> 
> 
> Lukas
> 
> 



Re: [3.0.8] Nameserver with source param

2025-03-07 Thread Lukas Tribus
On Fri, 7 Mar 2025 at 18:42, Aurelien DARRAGON  wrote:
>
> Looking at the code, and testing it for TCP servers it does seem to be
> supported. To confirm I tried to use a "bad" source address, and it
> fails as expected:
>
> > [ALERT](104635) : Cannot bind to source address before connect() for 
> > backend mybaddns. Aborting.

In this case for me it does not actually abort and haproxy goes into a
busy loop over this bind().



> Using a correct address I see haproxy connecting using the proper
> address (at least on the initial attempt)

Same here, it looks fine.



> >> In practice, we've found it may use another IPv6 (e.g., one bound for 
> >> failover), which results in resolution failures.

Not trying to nitpick the setup here, just trying to get the full
picture, are you saying that IPv6 connectivity is broken for every
application that does specify a specific source address?

Are you sure there is nothing else going on here and it is haproxy
that fails source address selection? I think this is going to require
strace 'ing the haproxy process during connection establishment of the
DNS TCP session and more setup details.


Lukas




Lukas




Re: [3.0.8] Nameserver with source param

2025-03-07 Thread Aurelien DARRAGON
Looking at the code, and testing it for TCP servers it does seem to be
supported. To confirm I tried to use a "bad" source address, and it
fails as expected:

> [ALERT](104635) : Cannot bind to source address before connect() for 
> backend mybaddns. Aborting.

Using a correct address I see haproxy connecting using the proper
address (at least on the initial attempt)

(dns over tcp properly leverages stream applet so server options
relevant to tcp should work)

So if it doesn't work consistently we're more likely hitting a bug indeed :/

Aurelien


On 3/7/25 16:47, Willy Tarreau wrote:
> Hi Luke,
> 
> On Fri, Mar 07, 2025 at 02:28:04PM +0700, Luke Seelenbinder wrote:
>> Hi list,
>>
>> We had a quick question. Does `nameserver` support the `source` parameter? 
>> It appears to in the documentation and the config validates, but it seems 
>> HAProxy may ignore it.
>>
>> Our relevant config:
>>
>> resolvers default
>>   # Note: we prefer using AWS, but we can't due to: 
>> https://github.com/haproxy/haproxy/issues/1845
>>   #nameserver aws1 tcp6@[2600:9000:5300:f500::1]:53 source [{{ public_ipv6 
>> }}]
>>   #nameserver aws2 tcp6@[2600:9000:5302:cc00::1]:53 source [{{ public_ipv6 
>> }}]
>>   nameserver g1 tcp6@[2001:4860:4860::]:53 source [{{ 
>> public_ipv6 }}]
>>   nameserver g2 tcp6@[2001:4860:4860::8844]:53 source [{{ 
>> public_ipv6 }}]
>>   nameserver opendnstcp6@[2620:0:ccc::2]:53source [{{ 
>> public_ipv6 }}]
>>   accepted_payload_size 8192
>>   resolve_retries 4
>>
>>   hold valid  60s
>>   hold obsolete   30s
>>   hold timeout300s
>>
>>   timeout resolve 20s
>>   timeout retry   1s
>>
>> In practice, we've found it may use another IPv6 (e.g., one bound for 
>> failover), which results in resolution failures.
> 
> I must confess I have no idea :-/  Normally it should work if it's
> accepted by the config, but maybe you've hit a bug. I'm CCing Emeric.
> 
> Thanks,
> Willy
> 
> 
> 





Re: [3.0.8] Nameserver with source param

2025-03-07 Thread Willy Tarreau
Hi Luke,

On Fri, Mar 07, 2025 at 02:28:04PM +0700, Luke Seelenbinder wrote:
> Hi list,
> 
> We had a quick question. Does `nameserver` support the `source` parameter? It 
> appears to in the documentation and the config validates, but it seems 
> HAProxy may ignore it.
> 
> Our relevant config:
> 
> resolvers default
>   # Note: we prefer using AWS, but we can't due to: 
> https://github.com/haproxy/haproxy/issues/1845
>   #nameserver aws1 tcp6@[2600:9000:5300:f500::1]:53 source [{{ public_ipv6 }}]
>   #nameserver aws2 tcp6@[2600:9000:5302:cc00::1]:53 source [{{ public_ipv6 }}]
>   nameserver g1 tcp6@[2001:4860:4860::]:53 source [{{ public_ipv6 
> }}]
>   nameserver g2 tcp6@[2001:4860:4860::8844]:53 source [{{ public_ipv6 
> }}]
>   nameserver opendnstcp6@[2620:0:ccc::2]:53source [{{ public_ipv6 
> }}]
>   accepted_payload_size 8192
>   resolve_retries 4
> 
>   hold valid  60s
>   hold obsolete   30s
>   hold timeout300s
> 
>   timeout resolve 20s
>   timeout retry   1s
> 
> In practice, we've found it may use another IPv6 (e.g., one bound for 
> failover), which results in resolution failures.

I must confess I have no idea :-/  Normally it should work if it's
accepted by the config, but maybe you've hit a bug. I'm CCing Emeric.

Thanks,
Willy




Re: native QUIC is supported since OpenSSL 3.5

2025-03-06 Thread Willy Tarreau
Hi Ilya,

On Thu, Mar 06, 2025 at 01:54:25PM +0100, Ilya Shipitsin wrote:
> Hello,
> 
> 
> likely you already heard the news
> 
> QUIC server post-rebase nits · openssl/openssl@b48145c
> 

Well, we'll see if anyone ends up making any use of it, 4 years after
QUIC stacks have all been developed and are still improving. Having a
transport layer implemented in a crypto library is completely upside-
down and unlikely to integrate well in many projects...

And finally we're well placed to know that you can't declare "Tada,
now we have a working QUIC stack". QUIC was designed for continuous
experimentation and improvements, not for being tied deeply into a
lib that will not evolve in the field for several years due to
security constraints.

Given that even the minimal part that was asked for 5 or 6 years now
by many (the so-called "boringssl API") was done differently, we can
easily imagine that the rest of the API is not used like others would
possibly expect (or maybe it's just another NIH reason to justify
having waited so long).

Daniel Stenberg covered some of these points there recently:

   https://daniel.haxx.se/blog/2025/02/16/openssl-does-a-quic-api/

Cheers,
Willy




Re: [PR] FEATURE: Enhance handling of non-RFC conformant syslog messages

2025-03-04 Thread Roberto Moreda
Sounds great. Thx!

Enviado desde dispositivo móvil. 
Sent from mobile device. 
---
Roberto Moreda
Allenta Consulting
http://allenta.com

> On 4 Mar 2025, at 19:14, Aurelien DARRAGON  wrote:
> 
> Hi Rober,
> 
>> On 3/3/25 22:42, Roberto Moreda wrote:
>> Hi again.
>> 
>> For completeness, I'm going to update my haproxy fork with three commits 
>> that correspond to what you suggested. See them attached and hopefully they 
>> should fit the bill 😊.
>> 
>> I added a minor fix to avoid a warning on a possible null pointer in this 
>> snippet of syslog_fd_handler():
>> 
>> struct listener *l = objt_listener(fdtab[fd].owner);
>> struct proxy *frontend;
>> int max_accept;
>> 
>> BUG_ON(!l);
>> frontend = l->bind_conf->frontend;   /* <-- moved this assignment after 
>> BUG_ON */
>> 
>> Cheers,
>> 
>>  Rober
> 
> Thanks for that!
> 
> Just to let you know that we are not forgetting you, we are currently
> discussing the patches between team members to perform some minor
> adjustments so that they can be merged :)
> 
> We'll let you know how it goes
> 
> Aurelien


Re: [PR] FEATURE: Enhance handling of non-RFC conformant syslog messages

2025-03-04 Thread Aurelien DARRAGON
Hi Rober,

On 3/3/25 22:42, Roberto Moreda wrote:
> Hi again.
> 
> For completeness, I'm going to update my haproxy fork with three commits that 
> correspond to what you suggested. See them attached and hopefully they should 
> fit the bill 😊.
> 
> I added a minor fix to avoid a warning on a possible null pointer in this 
> snippet of syslog_fd_handler():
> 
> struct listener *l = objt_listener(fdtab[fd].owner);
> struct proxy *frontend;
> int max_accept;
> 
> BUG_ON(!l);
> frontend = l->bind_conf->frontend;   /* <-- moved this assignment after 
> BUG_ON */
> 
> Cheers,
> 
>   Rober

Thanks for that!

Just to let you know that we are not forgetting you, we are currently
discussing the patches between team members to perform some minor
adjustments so that they can be merged :)

We'll let you know how it goes

Aurelien




RE: [Request received]

2025-03-04 Thread Antifraud Response
Hello Team,

We seek an update on ticket number 124492.

CSC Digital Brand Services is working on behalf of Invesco Group Services, Inc. 
as their authorized agent charged with bringing infringing online content into 
compliance. We have been informed that a website hosted at 185.68.16.9 is 
impersonating Invesco Group Services, Inc. and is involved in an investment 
scam targeting its customers.


The infringing content can be found at:
https://www.invesco-properties.space/
https://www.invescope.space/
http://invesco-group.site

On behalf of Invesco Group Services, Inc., CSC Digital Brand Services requests 
that the Web site(s) listed above be deactivated immediately.

Regards,
Harish Kurapati
Analyst | Global AntiFraud Team
24x7 GSRT: 1 866 880 7025
24x7 GSRT Intl: 1 408 477 7706
antifraud.respo...@cscglobal.com
CSC®
251 Little Falls Drive
Wilmington, Delaware 19808-1674
USA
cscdbs.com




From: IPRoyal support 
Sent: Sunday, March 2, 2025 9:36 AM
To: Antifraud Response 
Cc: network ; haproxy 
Subject: [Request received]


Your request (124492) has been received and is being reviewed by our support 
staff.

To add additional comments, reply to this email.
The content of this email is confidential and intended for the recipient 
specified in the message only. It is strictly forbidden to share any part of 
this message with any third party, without the written consent of the sender. 
If you received this message by mistake, please reply to this message and 
follow with its deletion, so that we can ensure such a mistake does not occur 
in the future.
[4W0436-PEKK4]


Re: [PR] FEATURE: Enhance handling of non-RFC conformant syslog messages

2025-03-03 Thread Roberto Moreda
Hi again.

For completeness, I'm going to update my haproxy fork with three commits that 
correspond to what you suggested. See them attached and hopefully they should 
fit the bill 😊.

I added a minor fix to avoid a warning on a possible null pointer in this 
snippet of syslog_fd_handler():

struct listener *l = objt_listener(fdtab[fd].owner);
struct proxy *frontend;
int max_accept;

BUG_ON(!l);
frontend = l->bind_conf->frontend;   /* <-- moved this assignment after BUG_ON 
*/

Cheers,

  Rober




---
Roberto Moreda
Allenta Consulting (+34 881922600)
Privacidad / Privacy

On Mar 3, 2025, at 19:08, Roberto Moreda  wrote:

If you don't mind splitting the patches (or even editing details), please go 
ahead. Credit is always shared 😉.

I meant that my original idea was slightly different and it was through 
dialogue that we found the good one 😊. Thx a lot.

Enviado desde dispositivo móvil.
Sent from mobile device.
---
Roberto Moreda
Allenta Consulting
http://allenta.com

On 3 Mar 2025, at 18:56, Roberto Moreda  wrote:


+
+#define PR_O2_DONTPARSELOG   0x0200 /* don't parse log messages */
+#define PR_O2_ASSUME_RFC6587_NTF 0x0400 /* assume that we are going to 
receive just non-transparent framing messages */
/* unused : 0x000..0x8000 */

I would add a small note in the comments to mention that they are
log-proxy specific. Also the 0x0100 may be used instead of
0x0400. Plus the "unused" comment below them should be updated.

Maybe I'm wrong, but note that the previous lines in the file are:

#define PR_O2_RSTRICT_REQ_HDR_NAMES_BLK  0x0040 /* reject request with 
header names containing chars outside of [0-9a-zA-Z-] charset */
#define PR_O2_RSTRICT_REQ_HDR_NAMES_DEL  0x0080 /* remove request header 
names containing chars outside of [0-9a-zA-Z-] charset */
#define PR_O2_RSTRICT_REQ_HDR_NAMES_NOOP 0x0100 /* preserve request header 
names containing chars outside of [0-9a-zA-Z-] charset */
#define PR_O2_RSTRICT_REQ_HDR_NAMES_MASK 0x01c0 /* mask for 
restrict-http-header-names option */

I used 0x0200 and 0x0400 to avoid collisions: we don't want a future 
where PR_O2_RSTRICT_REQ_HDR_NAMES_NOOP (0x0100) could end up side by side 
with PR_O2_ASSUME_RFC6587_NTF at the same value.

Also, the "unused" comment seems not to have been updated originally; I guess 
it is just a hint that 0x8000 and beyond are "reserved".
Just let me know about this and I'll patch the patch 😊.

If you don't mind splitting the patches (or even editing details), please go 
ahead. Credit is always shared 😉.

Thx!

  Rober

---
Roberto Moreda
Allenta Consulting (+34 881922600)
Privacidad / Privacy

On Mar 3, 2025, at 18:41, Aurelien DARRAGON  wrote:



On 3/3/25 17:37, Roberto Moreda wrote:
Thank you for sharing your thoughts, I really appreciate it.

I do really like the idea of having a regular frontend section with "mode log" 
in the future. Considering this, I fully agree on the approach that you suggest.

Notes:

* The two new options take two bits in proxy->options2.
* I replicated the options-reading loop from cfgparse-listen.c (only over 
options2), and added an explicit initialization px->options2 = 0.



Great, thanks for the quick turnaround!

It looks good to me. Just a few minor things:

+
+#define PR_O2_DONTPARSELOG   0x0200 /* don't parse log messages */
+#define PR_O2_ASSUME_RFC6587_NTF 0x0400 /* assume that we are going to 
receive just non-transparent framing messages */
/* unused : 0x000..0x8000 */

I would add a small note in the comments to mention that they are
log-proxy specific. Also the 0x0100 may be used instead of
0x0400. Plus the "unused" comment below them should be updated.


I'm attaching the new patch. If you prefer me to split it (or any other 
change), just let me know.

If you can split the patches (one for the options eval in
cfg_parse_log_forward(), one for prepare_log_message(), and the last one
with the actual options), it would be super! I can do that on your behalf
if you want though :)

Thanks!





0001-MEDIUM-log-add-options-eval-for-log-forward.patch
Description: 0001-MEDIUM-log-add-options-eval-for-log-forward.patch


0002-MINOR-log-detach-prepare-from-parse-message.patch
Description: 0002-MINOR-log-detach-prepare-from-parse-message.patch


0003-MEDIUM-log-add-dontparselog-and-assume-rfc6587-ntf-o.patch
Description:  0003-MEDIUM-log-add-dontparselog-and-assume-rfc6587-ntf-o.patch


Re: [PR] FEATURE: Enhance handling of non-RFC conformant syslog messages

2025-03-03 Thread Roberto Moreda
If you don't mind splitting the patches (or even editing details), please go 
ahead. Credit is always shared 😉.

I meant that my original idea was slightly different and it was through 
dialogue that we found the good one 😊. Thx a lot.

Enviado desde dispositivo móvil.
Sent from mobile device.
---
Roberto Moreda
Allenta Consulting
http://allenta.com

On 3 Mar 2025, at 18:56, Roberto Moreda  wrote:


+
+#define PR_O2_DONTPARSELOG   0x0200 /* don't parse log messages */
+#define PR_O2_ASSUME_RFC6587_NTF 0x0400 /* assume that we are going to 
receive just non-transparent framing messages */
/* unused : 0x000..0x8000 */

I would add a small note in the comments to mention that they are
log-proxy specific. Also the 0x0100 may be used instead of
0x0400. Plus the "unused" comment below them should be updated.

Maybe I'm wrong, but note that the previous lines in the file are:

#define PR_O2_RSTRICT_REQ_HDR_NAMES_BLK  0x0040 /* reject request with 
header names containing chars outside of [0-9a-zA-Z-] charset */
#define PR_O2_RSTRICT_REQ_HDR_NAMES_DEL  0x0080 /* remove request header 
names containing chars outside of [0-9a-zA-Z-] charset */
#define PR_O2_RSTRICT_REQ_HDR_NAMES_NOOP 0x0100 /* preserve request header 
names containing chars outside of [0-9a-zA-Z-] charset */
#define PR_O2_RSTRICT_REQ_HDR_NAMES_MASK 0x01c0 /* mask for 
restrict-http-header-names option */

I used 0x0200 and 0x0400 to avoid collisions: we don't want a future 
where PR_O2_RSTRICT_REQ_HDR_NAMES_NOOP (0x0100) could end up side by side 
with PR_O2_ASSUME_RFC6587_NTF at the same value.

Also, the "unused" comment seems not to have been updated originally; I guess 
it is just a hint that 0x8000 and beyond are "reserved".
Just let me know about this and I'll patch the patch 😊.

If you don't mind splitting the patches (or even editing details), please go 
ahead. Credit is always shared 😉.

Thx!

  Rober

---
Roberto Moreda
Allenta Consulting (+34 881922600)
Privacidad / Privacy

On Mar 3, 2025, at 18:41, Aurelien DARRAGON  wrote:



On 3/3/25 17:37, Roberto Moreda wrote:
Thank you for sharing your thoughts, I really appreciate it.

I do really like the idea of having a regular frontend section with "mode log" 
in the future. Considering this, I fully agree on the approach that you suggest.

Notes:

* The two new options take two bits in proxy->options2.
* I replicated the options-reading loop from cfgparse-listen.c (only over 
options2), and added an explicit initialization px->options2 = 0.



Great, thanks for the quick turnaround!

It looks good to me. Just a few minor things:

+
+#define PR_O2_DONTPARSELOG   0x0200 /* don't parse log messages */
+#define PR_O2_ASSUME_RFC6587_NTF 0x0400 /* assume that we are going to 
receive just non-transparent framing messages */
/* unused : 0x000..0x8000 */

I would add a small note in the comments to mention that they are
log-proxy specific. Also the 0x0100 may be used instead of
0x0400. Plus the "unused" comment below them should be updated.


I'm attaching the new patch. If you prefer me to split it (or any other 
change), just let me know.

If you can split the patches (one for the options eval in
cfg_parse_log_forward(), one for prepare_log_message(), and the last one
with the actual options), it would be super! I can do that on your behalf
if you want though :)

Thanks!




Re: [PR] FEATURE: Enhance handling of non-RFC conformant syslog messages

2025-03-03 Thread Roberto Moreda
+
+#define PR_O2_DONTPARSELOG   0x0200 /* don't parse log messages */
+#define PR_O2_ASSUME_RFC6587_NTF 0x0400 /* assume that we are going to 
receive just non-transparent framing messages */
/* unused : 0x000..0x8000 */

I would add a small note in the comments to mention that they are
log-proxy specific. Also the 0x0100 may be used instead of
0x0400. Plus the "unused" comment below them should be updated.

Maybe I'm wrong, but note that the previous lines in the file are:

#define PR_O2_RSTRICT_REQ_HDR_NAMES_BLK  0x0040 /* reject request with 
header names containing chars outside of [0-9a-zA-Z-] charset */
#define PR_O2_RSTRICT_REQ_HDR_NAMES_DEL  0x0080 /* remove request header 
names containing chars outside of [0-9a-zA-Z-] charset */
#define PR_O2_RSTRICT_REQ_HDR_NAMES_NOOP 0x0100 /* preserve request header 
names containing chars outside of [0-9a-zA-Z-] charset */
#define PR_O2_RSTRICT_REQ_HDR_NAMES_MASK 0x01c0 /* mask for 
restrict-http-header-names option */

I used 0x0200 and 0x0400 to avoid collisions: we don't want a future 
where PR_O2_RSTRICT_REQ_HDR_NAMES_NOOP (0x0100) could end up side by side 
with PR_O2_ASSUME_RFC6587_NTF at the same value.

Also, the "unused" comment seems not to have been updated originally; I guess 
it is just a hint that 0x8000 and beyond are "reserved".
Just let me know about this and I'll patch the patch 😊.

If you don't mind splitting the patches (or even editing details), please go 
ahead. Credit is always shared 😉.

Thx!

  Rober

---
Roberto Moreda
Allenta Consulting (+34 881922600)
Privacidad / Privacy

On Mar 3, 2025, at 18:41, Aurelien DARRAGON  wrote:



On 3/3/25 17:37, Roberto Moreda wrote:
Thank you for sharing your thoughts, I really appreciate it.

I do really like the idea of having a regular frontend section with "mode log" 
in the future. Considering this, I fully agree on the approach that you suggest.

Notes:

* The two new options take two bits in proxy->options2.
* I replicated the options-reading loop from cfgparse-listen.c (only over 
options2), and added an explicit initialization px->options2 = 0.



Great, thanks for the quick turnaround!

It looks good to me. Just a few minor things:

+
+#define PR_O2_DONTPARSELOG   0x0200 /* don't parse log messages */
+#define PR_O2_ASSUME_RFC6587_NTF 0x0400 /* assume that we are going to 
receive just non-transparent framing messages */
/* unused : 0x000..0x8000 */

I would add a small note in the comments to mention that they are
log-proxy specific. Also the 0x0100 may be used instead of
0x0400. Plus the "unused" comment below them should be updated.


I'm attaching the new patch. If you prefer me to split it (or any other 
change), just let me know.

If you can split the patches (one for the options eval in
cfg_parse_log_forward(), one for prepare_log_message(), and the last one
with the actual options), it would be super! I can do that on your behalf
if you want though :)

Thanks!




Re: [PR] FEATURE: Enhance handling of non-RFC conformant syslog messages

2025-03-03 Thread Aurelien DARRAGON



On 3/3/25 17:37, Roberto Moreda wrote:
> Thank you for sharing your thoughts, I really appreciate it.
> 
> I do really like the idea of having a regular frontend section with "mode 
> log" in the future. Considering this, I fully agree on the approach that you 
> suggest.
> 
> Notes:
> 
> * The two new options take two bits in proxy->options2.
> * I replicated the options-reading loop from cfgparse-listen.c (only over 
> options2), and added an explicit initialization px->options2 = 0.
> 


Great, thanks for the quick turnaround!

It looks good to me. Just a few minor things:

> +
> +#define PR_O2_DONTPARSELOG   0x0200 /* don't parse log messages */
> +#define PR_O2_ASSUME_RFC6587_NTF 0x0400 /* assume that we are going to 
> receive just non-transparent framing messages */
>  /* unused : 0x000..0x8000 */

I would add a small note in the comments to mention that they are
log-proxy specific. Also the 0x0100 may be used instead of
0x0400. Plus the "unused" comment below them should be updated.


> I'm attaching the new patch. If you prefer me to split it (or any other 
> change), just let me know.

If you can split the patches (one for the options eval in
cfg_parse_log_forward(), one for prepare_log_message(), and the last one
with the actual options), it would be super! I can do that on your behalf
if you want though :)

Thanks!





Re: [PR] FEATURE: Enhance handling of non-RFC conformant syslog messages

2025-03-03 Thread Roberto Moreda
Thank you for sharing your thoughts, I really appreciate it.

I do really like the idea of having a regular frontend section with "mode log" 
in the future. Considering this, I fully agree on the approach that you suggest.

Notes:

* The two new options take two bits in proxy->options2.
* I replicated the options-reading loop from cfgparse-listen.c (only over 
options2), and added an explicit initialization px->options2 = 0.

I'm attaching the new patch. If you prefer me to split it (or any other 
change), just let me know.
Thx!

  Rober



---
Roberto Moreda
Allenta Consulting (+34 881922600)
Privacidad / Privacy

On Mar 3, 2025, at 16:28, Aurelien DARRAGON  wrote:


My rationale was to clearly separate the sets of options. As I saw already two 
cluttered sets of flags in options and options2, I guessed it would be clearer 
to create a new one and avoid unions or context-dependent meanings to save a 
few bytes per config. Also I saw in struct proxy that there are two 
declarations (no_options and no_options2) marked specifically as "used only 
during configuration parsing", so I guessed that it shouldn't be a big deal 
to add a new one.


Well, that's a good point. I was trying to avoid eating those additional
bytes for non-log proxies, but if it ends up being more complicated I'm
not sure it's worth the trouble; you're right.

As I thought about this a bit more, I think it doesn't really make sense
to add a dedicated log_options field if we expect such options to be
relevant for both frontend log proxies (log-forward) and backend log
proxies (backend with "mode log").

While frontend log-proxies (log-forward) have a dedicated config section
for historical reasons, which makes it obvious that the proxy type is
different during option parsing, it isn't the case for backend log
proxies (backend with "mode log"). Indeed, at parsing time we don't know
the type of the proxy yet, so we can only parse the config on the fly,
storing all possible options even if they are not compatible with the
final proxy type, and try to resolve options after all the config was
parsed to detect potential configuration errors.

What I'm trying to say is that either we go the simplest route, which
is to add a way to configure options for log-forward section
specifically, and we don't try to anticipate the future nor consider
that the options may be used for log backends or regular proxies.

Or we try to integrate log-oriented options in the existing "generic"
API (options+options2). While doing so is not very elegant due to the
log oriented options being mixed with http/tcp oriented options
(although we can always add comments to specify that some options are
only relevant under a certain context in addition to the PR_MODE_SYSLOG
which indicates that they are to be used with log proxies), the main
advantage is that some log-oriented options could be exposed for log
frontends and log backends at the same time. We even thought of the
possibility to configure log frontends (log-forward section) in a more
generic way in the future: a regular frontend section with "mode log"
set, like we already have for log backends, and this latter approach
would remain compatible with this, while the first one is not.

So all things considered I'm tempted to say that we should stick to the
"ugly" option/option2 flags, having the flag declared as usual for the
PR_MODE_SYSLOG with a note or name that suggests that they are only
relevant for log oriented proxies. For now since we cannot configure a
log-forward section with a simple frontend, we would still require the
"for" loop you implemented in cfg_parse_log_forward() to iterate over
options with PR_MODE_SYSLOG mode + FRONTEND capability to evaluate
available options, at least as a transition solution.

Ideally it should be done in a preliminary patch (just have the options
evaluating logic under cfg_parse_log_forward() for options with
PR_MODE_SYSLOG + FRONTEND cap in a dedicated patch, before the patch
that adds prepare_log_message() and before the one that effectively adds
the 2 options + their handling in syslog functions)

Once this is done I think we're ready for a merge, please let me know if
it looks ok to you :)

Thanks!



add-dontparselog-and-assume-rfc6587-ntf.patch
Description: add-dontparselog-and-assume-rfc6587-ntf.patch


Re: [PR] FEATURE: Enhance handling of non-RFC conformant syslog messages

2025-03-03 Thread Aurelien DARRAGON
> 
> My rationale was to clearly separate the sets of options. As I saw already two 
> cluttered sets of flags in options and options2, I guessed it would be clearer 
> to create a new one and avoid unions or context-dependent meanings to save a 
> few bytes per config. Also I saw in struct proxy that there are two 
> declarations (no_options and no_options2) marked specifically as "used only 
> during configuration parsing", so I guessed that it shouldn't be a big deal 
> to add a new one.


Well, that's a good point. I was trying to avoid eating those additional
bytes for non-log proxies, but if it ends up being more complicated I'm
not sure it's worth the trouble; you're right.

As I thought about this a bit more, I think it doesn't really make sense
to add a dedicated log_options field if we expect such options to be
relevant for both frontend log proxies (log-forward) and backend log
proxies (backend with "mode log").

While frontend log-proxies (log-forward) have a dedicated config section
for historical reasons, which makes it obvious that the proxy type is
different during option parsing, it isn't the case for backend log
proxies (backend with "mode log"). Indeed, at parsing time we don't know
the type of the proxy yet, so we can only parse the config on the fly,
storing all possible options even if they are not compatible with the
final proxy type, and try to resolve options after all the config was
parsed to detect potential configuration errors.

What I'm trying to say is that either we go the simplest route, which
is to add a way to configure options for log-forward section
specifically, and we don't try to anticipate the future nor consider
that the options may be used for log backends or regular proxies.

Or we try to integrate log-oriented options in the existing "generic"
API (options+options2). While doing so is not very elegant due to the
log oriented options being mixed with http/tcp oriented options
(although we can always add comments to specify that some options are
only relevant under a certain context in addition to the PR_MODE_SYSLOG
which indicates that they are to be used with log proxies), the main
advantage is that some log-oriented options could be exposed for log
frontends and log backends at the same time. We even thought of the
possibility to configure log frontends (log-forward section) in a more
generic way in the future: a regular frontend section with "mode log"
set, like we already have for log backends, and this latter approach
would remain compatible with this, while the first one is not.

So all things considered I'm tempted to say that we should stick to the
"ugly" option/option2 flags, having the flag declared as usual for the
PR_MODE_SYSLOG with a note or name that suggests that they are only
relevant for log oriented proxies. For now since we cannot configure a
log-forward section with a simple frontend, we would still require the
"for" loop you implemented in cfg_parse_log_forward() to iterate over
options with PR_MODE_SYSLOG mode + FRONTEND capability to evaluate
available options, at least as a transition solution.

Ideally it should be done in a preliminary patch (just have the options
evaluating logic under cfg_parse_log_forward() for options with
PR_MODE_SYSLOG + FRONTEND cap in a dedicated patch, before the patch
that adds prepare_log_message() and before the one that effectively adds
the 2 options + their handling in syslog functions)

Once this is done I think we're ready for a merge, please let me know if
it looks ok to you :)

Thanks!




Re: [PR] FEATURE: Enhance handling of non-RFC conformant syslog messages

2025-03-03 Thread Roberto Moreda
Ouch, my mail client messed up the quotations... :-(
I hope the context helps clarify my answer. Sorry.

---
Roberto Moreda
Allenta Consulting<http://www.allenta.com> (+34 881922600)
Privacidad / Privacy<http://allenta.com/mail-privacy>

On Mar 3, 2025, at 13:33, Roberto Moreda  wrote:

Hi, Aurelien.

Sorry for the indentation issues. See the corrected patch attached. Also I 
declared a pointer to frontend at the top of syslog_fd_handler() to have the 
same access pattern as in syslog_io_handler().

Regarding your extra notes:

I don't have an opinion regarding the "proxy->options_log" options
dedicated to log-forward proxies. Since log-forward section only
supports keywords explicitly handled in cfg_parse_log_forward() (so
proxy->options and proxy->options2 are currently unused), I'm wondering
if we could re-use the same memory area, either have PR_O_* flags that
have a different meaning when set on log-forward proxy, or share the
same memory area by leveraging an union for options_log member.

My rationale was to clearly separate the sets of options. As I saw already two 
cluttered sets of flags in options and options2, I guessed it would be clearer 
to create a new one and avoid unions or context-dependent meanings to save a 
few bytes per config. Also I saw in struct proxy that there are two 
declarations (no_options and no_options2) marked specifically as "used only 
during configuration parsing", so I guessed that it shouldn't be a big deal 
to add a new one.

Truth is that I can't imagine right now a situation in the future where 
proxy->options and proxy->options2 could be useful in log-forward section 
(let's go crazy; transforming log messages into payloads to be sent using 
HTTP?), but it would be a PITA if that happens.

Also it
looks like cfg_opts_log[] is a bit overkill because only the name->flag
value association is used. I'll let others give their opinion on that point.

My approach is that this way we can use the same scheme of things as with 
proxy->options and proxy->options2, including the possibility of using 
PR_CAP_NONE to disable options at build time. Again, I can't see the future, 
but what about the possibility of options that could cross boundaries from 
log-forward to other type of proxies? Some type of stickiness at log message 
level? IDK, but I think that following a well established pattern is a good 
thing (like declaring a frontend pointer for syslog_fd_handler as in 
syslog_io_handler).

Anyway, I'm more than willing to try other approaches if that makes it easier 
for this to be included as a feature.
Thanks a lot in advance.

  Rober



---
Roberto Moreda
Allenta Consulting<http://www.allenta.com/> (+34 881922600)
Privacidad / Privacy<http://allenta.com/mail-privacy>

On Mar 3, 2025, at 10:35, Aurelien DARRAGON  wrote:


  Hi!

  As discussed a few weeks ago in issue #2856, this is a PR that adds two
  useful options to the log-forward section.

  ```
  option dontparselog
    Enables HAProxy to relay syslog messages without attempting to parse
    and restructure them, useful for forwarding messages that may not
    conform to traditional formats. This option should be used with the
    "format raw" setting on destination log targets to ensure the original
    message content is preserved.

  option assume-rfc6587-ntf
    Directs HAProxy to always treat incoming TCP log streams as using
    non-transparent framing. This option simplifies the framing logic and
    ensures consistent handling of messages, particularly useful when
    dealing with improperly formed starting characters.
  ```

  Thanks a lot in advance for taking this into consideration.
  Best,
  Rober

Hi Rober,

Thanks for your contribution.
Here are some remarks:

diff --git a/src/log.c b/src/log.c
index 90b5645cb2e6..234f83e2426f 100644
--- a/src/log.c
+++ b/src/log.c
@@ -5376,6 +5376,24 @@ void app_log(struct list *loggers, struct buffer *tag, 
int level, const char *fo

   __send_log(NULL, loggers, tag, level, logline, data_len, 
default_rfc5424_sd_log_format, 2);
}
+
+/*
+ * This function sets up the initial state for a log message by preparing
+ * the buffer, setting default values for the log level and facility, and
+ * initializing metadata fields. It is used before parsing or constructing
+ * a log message to ensure all fields are in a known state.
+ */
+void prepare_log_message(char *buf, size_t buflen, int *level, int *facility,

indentation issue below

+   
 struct ist *metadata, char **message, size_t *size)
+{
+   *level = *facility = -1;
+
+   *message = buf;
+   *size = buflen;
+
+   memset(metadata, 0, LOG_META_FIELDS*sizeof(struct ist));
+}



@@ -6203,6 +6206,34 @@ int cfg_parse_log_forward(const char *file, int linenum, 
char **args, int kwm)
   }
  

Re: [PR] FEATURE: Enhance handling of non-RFC conformant syslog messages

2025-03-03 Thread Roberto Moreda
Hi, Aurelien.

Sorry for the indentation issues. See the corrected patch attached. Also I 
declared a pointer to frontend at the top of syslog_fd_handler() to have the 
same access pattern as in syslog_io_handler().

Regarding your extra notes:

I don't have an opinion regarding the "proxy->options_log" options
dedicated to log-forward proxies. Since log-forward section only
supports keywords explicitly handled in cfg_parse_log_forward() (so
proxy->options and proxy->options2 are currently unused), I'm wondering
if we could re-use the same memory area, either have PR_O_* flags that
have a different meaning when set on log-forward proxy, or share the
same memory area by leveraging an union for options_log member.

My rationale was to clearly separate the sets of options. As I saw already two 
cluttered sets of flags in options and options2, I guessed it would be clearer 
to create a new one and avoid unions or context-dependent meanings to save a 
few bytes per config. Also I saw in struct proxy that there are two 
declarations (no_options and no_options2) marked specifically as "used only 
during configuration parsing", so I guessed that it shouldn't be a big deal 
to add a new one.

Truth is that I can't imagine right now a situation in the future where 
proxy->options and proxy->options2 could be useful in log-forward section 
(let's go crazy; transforming log messages into payloads to be sent using 
HTTP?), but it would be a PITA if that happens.

Also it
looks like cfg_opts_log[] is a bit overkill because only the name->flag
value association is used. I'll let others give their opinion on that point.

My approach is that this way we can use the same scheme of things as with 
proxy->options and proxy->options2, including the possibility of using 
PR_CAP_NONE to disable options at build time. Again, I can't see the future, 
but what about the possibility of options that could cross boundaries from 
log-forward to other type of proxies? Some type of stickiness at log message 
level? IDK, but I think that following a well established pattern is a good 
thing (like declaring a frontend pointer for syslog_fd_handler as in 
syslog_io_handler).

Anyway, I'm more than willing to try other approaches if that makes it easier 
for this to be included as a feature.
Thanks a lot in advance.

  Rober



---
Roberto Moreda
Allenta Consulting<http://www.allenta.com> (+34 881922600)
Privacidad / Privacy<http://allenta.com/mail-privacy>

On Mar 3, 2025, at 10:35, Aurelien DARRAGON  wrote:


  Hi!

  As discussed a few weeks ago in issue #2856, this is a PR that adds two
  useful options to the log-forward section.

  ```
  option dontparselog
    Enables HAProxy to relay syslog messages without attempting to parse
    and restructure them, useful for forwarding messages that may not
    conform to traditional formats. This option should be used with the
    "format raw" setting on destination log targets to ensure the original
    message content is preserved.

  option assume-rfc6587-ntf
    Directs HAProxy to always treat incoming TCP log streams as using
    non-transparent framing. This option simplifies the framing logic and
    ensures consistent handling of messages, particularly useful when
    dealing with improperly formed starting characters.
  ```

  Thanks a lot
  in advance for taking this into consideration.
  Best,
  Rober

Hi Rober,

Thanks for your contribution.
Here are some remarks:

diff --git a/src/log.c b/src/log.c
index 90b5645cb2e6..234f83e2426f 100644
--- a/src/log.c
+++ b/src/log.c
@@ -5376,6 +5376,24 @@ void app_log(struct list *loggers, struct buffer *tag, 
int level, const char *fo

   __send_log(NULL, loggers, tag, level, logline, data_len, 
default_rfc5424_sd_log_format, 2);
}
+
+/*
+ * This function sets up the initial state for a log message by preparing
+ * the buffer, setting default values for the log level and facility, and
+ * initializing metadata fields. It is used before parsing or constructing
+ * a log message to ensure all fields are in a known state.
+ */
+void prepare_log_message(char *buf, size_t buflen, int *level, int *facility,

indentation issue below

+   
 struct ist *metadata, char **message, size_t *size)
+{
+   *level = *facility = -1;
+
+   *message = buf;
+   *size = buflen;
+
+   memset(metadata, 0, LOG_META_FIELDS*sizeof(struct ist));
+}



@@ -6203,6 +6206,34 @@ int cfg_parse_log_forward(const char *file, int linenum, 
char **args, int kwm)
   }
   cfg_log_forward->timeout.client = MS_TO_TICKS(timeout);
   }
+   else if (strcmp(args[0], "option") == 0) {
+   int optnum;
+
+   if (*(args[1]) == '\0') {
+   ha_alert("parsing [%s:%d]: '%s' expe

Re: [PR] FEATURE: Enhance handling of non-RFC conformant syslog messages

2025-03-03 Thread Aurelien DARRAGON
>> diff --git a/src/log.c b/src/log.c
>> index 234f83e2426f..af9c4c704e94 100644
>> --- a/src/log.c
>> +++ b/src/log.c
>> @@ -5759,7 +5759,8 @@ void syslog_fd_handler(int fd)
>>  
>> prepare_log_message(buf->area, buf->data, &level, 
>> &facility, metadata, &message, &size);
>>  
>> -   parse_log_message(buf->area, buf->data, &level, 
>> &facility, metadata, &message, &size);
>> +   if (!(l->bind_conf->frontend->options_log & 
>> PR_OL_DONTPARSELOG))
> 
> I would rather have "struct proxy *frontend = strm_fe(s);" and use that
> frontend pointer to access options instead of l->bind_conf->frontend.
> 

Forget about that remark, there is no stream from the UDP handler
context... my bad. However it would be cleaner to retrieve the frontend
at the top of the function so that it may be used similarly to the
syslog_io_handler() function, using a "frontend" pointer later in the function.




Re: [PR] FEATURE: Enhance handling of non-RFC conformant syslog messages

2025-03-03 Thread Aurelien DARRAGON
sticking rule after config parsing.
>   * Returns 1 for success and 0 for failure
>   */



> diff --git a/src/log.c b/src/log.c
> index 234f83e2426f..af9c4c704e94 100644
> --- a/src/log.c
> +++ b/src/log.c
> @@ -5759,7 +5759,8 @@ void syslog_fd_handler(int fd)
>  
> prepare_log_message(buf->area, buf->data, &level, 
> &facility, metadata, &message, &size);
>  
> -   parse_log_message(buf->area, buf->data, &level, 
> &facility, metadata, &message, &size);
> +   if (!(l->bind_conf->frontend->options_log & 
> PR_OL_DONTPARSELOG))

I would rather have "struct proxy *frontend = strm_fe(s);" and use that
frontend pointer to access options instead of l->bind_conf->frontend.

> +   parse_log_message(buf->area, buf->data, 
> &level, &facility, metadata, &message, &size);



Well, aside from indentation issues, I'm fine with the first patch.


Regarding the second one, the doc and behavior look good to me. However
I don't have an opinion regarding the "proxy->options_log" options
dedicated to log-forward proxies. Since the log-forward section only
supports keywords explicitly handled in cfg_parse_log_forward() (so
proxy->options and proxy->options2 are currently unused), I'm wondering
if we could re-use the same memory area: either have PR_O_* flags that
have a different meaning when set on a log-forward proxy, or share the
same memory area by leveraging a union for the options_log member. Also it
looks like cfg_opts_log[] is a bit overkill because only the name->flag
value association is used. I'll let others give their opinion on that point.

Thanks :)

Aurelien




Re: Unsubscribe

2025-02-25 Thread Willy Tarreau
Hi Danijel,

On Tue, Feb 25, 2025 at 07:15:46PM +0100, Danijel Starman wrote:
> Hi Willy,
> 
> I've also tried to unsubscribe several times in the past, there is no mail,
> even in spam.
> Can you unsubscribe me too?

Done!

For future requests, there's no need to spam the list, just send to me directly!

Regards,
Willy




Re: Unsubscribe

2025-02-25 Thread Danijel Starman
Hi Willy,

I've also tried to unsubscribe several times in the past, there is no mail,
even in spam.
Can you unsubscribe me too?

Best Regards,
Danijel


On Tue, 25 Feb 2025 at 10:26, Willy Tarreau  wrote:

> On Tue, Feb 25, 2025 at 09:19:54AM +, Alistair Lowe wrote:
> > Same issue here.
> > 
> > From: Rémi Lapeyre 
> > Sent: Tuesday, 25 February 2025 08:58:21
> > To: Aleksandar Lazic ; Büchter, Sven <
> s.buech...@lvm.de>
> > Cc: haproxy@formilux.org 
> > Subject: Re: Unsubscribe
> >
> > I don't know if Sven has the same issue but I'm not sure
> > haproxy+unsubscr...@formilux.org<mailto:haproxy+unsubscr...@formilux.org
> >
> > works, I have tried to unsubscribe a few times using this email address
> and
> > still receive messages from the list.
>
> Maybe the confirmation message arrived in your spam box ?
> Anyway, I've manually unsubscribed both of you now (in addition
> to Sven which I did privately).
>
> Best regards,
> Willy
>
>
>


Re: Unsubscribe

2025-02-25 Thread Willy Tarreau
On Tue, Feb 25, 2025 at 09:19:54AM +, Alistair Lowe wrote:
> Same issue here.
> 
> From: Rémi Lapeyre 
> Sent: Tuesday, 25 February 2025 08:58:21
> To: Aleksandar Lazic ; Büchter, Sven 
> Cc: haproxy@formilux.org 
> Subject: Re: Unsubscribe
> 
> I don't know if Sven has the same issue but I'm not sure
> haproxy+unsubscr...@formilux.org<mailto:haproxy+unsubscr...@formilux.org>
> works, I have tried to unsubscribe a few times using this email address and
> still receive messages from the list.

Maybe the confirmation message arrived in your spam box ?
Anyway, I've manually unsubscribed both of you now (in addition
to Sven which I did privately).

Best regards,
Willy




Re: [PATCH] BUILD: add possibility to use different QuicTLS variants

2025-02-25 Thread Willy Tarreau
On Tue, Feb 25, 2025 at 07:53:34AM +0100, Ilia Shipitsin wrote:
> initially QuicTLS started as a patchset on top of OpenSSL; currently
> the project has started its own journey as QuicTLS
> 
> somehow we need both
> 
> ML: https://www.mail-archive.com/haproxy@formilux.org/msg45574.html
> GH: https://github.com/quictls/quictls/issues/244

Applied, thanks Ilya!
Willy




Re: Unsubscribe

2025-02-25 Thread Alistair Lowe
Same issue here.

From: Rémi Lapeyre 
Sent: Tuesday, 25 February 2025 08:58:21
To: Aleksandar Lazic ; Büchter, Sven 
Cc: haproxy@formilux.org 
Subject: Re: Unsubscribe

I don't know if Sven has the same issue but I'm not sure 
haproxy+unsubscr...@formilux.org<mailto:haproxy+unsubscr...@formilux.org> 
works, I have tried to unsubscribe a few times using this email address and 
still receive messages from the list.

Best,
Rémi

De : Aleksandar Lazic 
Date : mardi, 25 février 2025 à 09:26
À : Büchter, Sven 
Cc : haproxy@formilux.org 
Objet : Re: Unsubscribe

Hi Sven.

On 2025-02-25 (Di.) 08:28, Büchter, Sven wrote:
> Hi! Please Unsubscribe s.buech...@lvm.de <mailto:s.buech...@lvm.de> from
> Mailinglist!

It's  the same answer to the subscribe mail from 2025-02-06 :-)
https://www.mail-archive.com/haproxy@formilux.org/msg45581.html

This should be done by yourself.
https://www.haproxy.org/#tact

Regards
Alex




Re: Unsubscribe

2025-02-25 Thread Rémi Lapeyre
I don't know if Sven has the same issue but I'm not sure 
haproxy+unsubscr...@formilux.org<mailto:haproxy+unsubscr...@formilux.org> 
works, I have tried to unsubscribe a few times using this email address and 
still receive messages from the list.

Best,
Rémi

De : Aleksandar Lazic 
Date : mardi, 25 février 2025 à 09:26
À : Büchter, Sven 
Cc : haproxy@formilux.org 
Objet : Re: Unsubscribe

Hi Sven.

On 2025-02-25 (Di.) 08:28, Büchter, Sven wrote:
> Hi! Please Unsubscribe s.buech...@lvm.de <mailto:s.buech...@lvm.de> from
> Mailinglist!

It's  the same answer to the subscribe mail from 2025-02-06 :-)
https://www.mail-archive.com/haproxy@formilux.org/msg45581.html

This should be done by yourself.
https://www.haproxy.org/#tact

Regards
Alex




Re: Unsubscribe

2025-02-25 Thread Aleksandar Lazic

Hi Sven.

On 2025-02-25 (Di.) 08:28, Büchter, Sven wrote:
Hi! Please Unsubscribe s.buech...@lvm.de  from 
Mailinglist!


It's  the same answer to the subscribe mail from 2025-02-06 :-)
https://www.mail-archive.com/haproxy@formilux.org/msg45581.html

This should be done by yourself.
https://www.haproxy.org/#tact

Regards
Alex




Re: HAProxy Runtime-API CLI

2025-02-22 Thread Willy Tarreau
Hi Robert,

On Thu, Feb 20, 2025 at 09:11:15PM +0100, Robert Schönthal wrote:
> 
> 
> Hey Folks,
> 
> I want to introduce a new CLI interacting with the Runtime-API and would like
> to gather feedback on whether it's useful at all and what could be improved. It
> was more or less a weekend project because I was tired of using socat and
> always looking up the commands from the official documentation.
> 
> So i combined everything into one small CLI utility written in GO with the
> help of the lovely https://charm.sh library.
> 
> What does it do:
> 
>- present a nice (hopefully) table about all backends and servers
>(parsed from the show servers state command)
>- present a filterable list of all commands available (parsed from the
>help command)
>- make it easy to choose a command from this list, and execute it (with
>a help about available arguments)
> 
> here is the github repo: https://github.com/digitalkaoz/haproxy-runtime-cli
> under the latest release you will find a dedicated binary for your OS.
> 
> Thanks for feedback in advance [image: :slight_smile:]

Cool, looks great, thanks for sharing! I like the fact that it presents
the command's help message while typing. However I'm missing a way to
enter the beginning of a command I know instead of having to search for
it in the list. E.g. I'd like to be able to issue "show info" or "debug
dev counters" directly from the prompt. And your auto-completion mechanism
would likely allow the command to be refined automatically while I'm typing.
This would also possibly permit access to protected commands (those requiring expert
mode for example).

Cheers,
Willy




Re: [PATCH] MINOR: compression: Introduce minimum size

2025-02-22 Thread Willy Tarreau
On Fri, Feb 21, 2025 at 10:37:57PM +0100, Vincent Dechenaux wrote:
> This is the introduction of "minsize-req" and "minsize-res".
> These two options allow you to set the minimum payload size required for
> compression to be applied.
> This helps save CPU on both server and client sides when the payload does
> not need to be compressed.

Perfect, thank you very much. I didn't find anything to comment on, even
the defaults are properly handled. Great work! Now merged, thank you,
and do not hesitate to contribute more patches of the same quality ;-)

Willy




Re: [PATCH] MINOR: compression: tune.comp.minsize

2025-02-21 Thread Willy Tarreau
On Fri, Feb 21, 2025 at 12:50:51PM +0100, Vincent Dechenaux wrote:
> Hello,
> 
> Oh thanks I'm glad to hear that!

You're welcome!

> I hesitated between doing it globally and doing it per proxy, and I ended
> up doing it globally to mimic "tune.comp.maxlevel".

In fact the maxlevel is more about preserving the process' resources
when using zlib, that's mostly why it's global. Same for the
max-comp-cpu-usage or I don't exactly remember how it's called.

> For my personal case I don't need this level of customisation,
> but that may be a good thing, yes.
> 
> How would it look like?
> Something like "compression minsize-req" and "compression minsize-res" ?
> If so, what about "compression minsize" (not sure due to "algo" and
> "type" being marked as legacy).

That could work, yes. Indeed maybe prefer minsize-req and minsize-res
so that we can choose based on the direction since we can even compress
requests (some users in cloud environments paying for inter-area traffic
are doing this to avoid paying for traffic twice for large uploads).

> I could definitely send a new patch in that way.

That would be great then!

Do not hesitate to ask if you have doubts or questions.

Thanks,
Willy




Re: [PATCH] MINOR: compression: tune.comp.minsize

2025-02-21 Thread Vincent Dechenaux
Hello,

Oh thanks I'm glad to hear that!

I hesitated between doing it globally and doing it per proxy, and I ended
up doing it globally to mimic "tune.comp.maxlevel".
For my personal case I don't need this level of customisation,
but that may be a good thing, yes.

How would it look like?
Something like "compression minsize-req" and "compression minsize-res" ?
If so, what about "compression minsize" (not sure due to "algo" and
"type" being marked as legacy).

I could definitely send a new patch in that way.

Thanks,
Vincent

Le ven. 21 févr. 2025 à 04:01, Willy Tarreau  a écrit :
>
> Hello Vincent,
>
> On Thu, Feb 20, 2025 at 11:26:04PM +0100, Vincent Dechenaux wrote:
> > This option allows you to set the minimum payload size required for
> > compression to be applied.
> > This helps save CPU on both server and client sides when the payload does
> > not need to be compressed.
>
> Thank you. I think that this is an excellent idea. Your patch looks super
> clean and I'm fine with taking it as is. I'm just having a question before
> doing so: shouldn't we do this in the proxy where compression is enabled
> instead of fixing this globally ? I don't know the precise use case where
> you faced this situation, but I'm wondering if there could be situations
> where users would want to compress any size on some frontends, and only
> large objects on high-speed links. I don't know if you have an opinion on
> this.
>
> Thanks!
> Willy




Re: [PATCH] CI: QUIC Interop: clean old docker images

2025-02-21 Thread Willy Tarreau
Hi Ilya,

On Fri, Feb 14, 2025 at 09:51:04PM +0100, Ilia Shipitsin wrote:
> currently temporary docker images are kept forever. let's delete
> outdated ones

Sorry for the delay, now merged!

Thank you,
Willy




Re: [PATCH] MINOR: compression: tune.comp.minsize

2025-02-20 Thread Willy Tarreau
Hello Vincent,

On Thu, Feb 20, 2025 at 11:26:04PM +0100, Vincent Dechenaux wrote:
> This option allows you to set the minimum payload size required for
> compression to be applied.
> This helps save CPU on both server and client sides when the payload does
> not need to be compressed.

Thank you. I think that this is an excellent idea. Your patch looks super
clean and I'm fine with taking it as is. I'm just having a question before
doing so: shouldn't we do this in the proxy where compression is enabled
instead of fixing this globally ? I don't know the precise use case where
you faced this situation, but I'm wondering if there could be situations
where users would want to compress any size on some frontends, and only
large objects on high-speed links. I don't know if you have an opinion on
this.

Thanks!
Willy




Re: [PATCH] CI: QUIC Interop: clean old docker images

2025-02-20 Thread Илья Шипицин
gentle ping

пт, 14 февр. 2025 г. в 21:51, Ilia Shipitsin :

> currently temporary docker images are kept forever. let's delete
> outdated ones
> ---
>  .github/workflows/quic-interop-aws-lc.yml   | 9 +
>  .github/workflows/quic-interop-libressl.yml | 9 +
>  2 files changed, 18 insertions(+)
>
> diff --git a/.github/workflows/quic-interop-aws-lc.yml
> b/.github/workflows/quic-interop-aws-lc.yml
> index f14f496e8..28fb8fff1 100644
> --- a/.github/workflows/quic-interop-aws-lc.yml
> +++ b/.github/workflows/quic-interop-aws-lc.yml
> @@ -38,6 +38,15 @@ jobs:
>  SSLLIB: AWS-LC
>tags: ghcr.io/${{  github.repository
> }}:aws-lc
>
> +  - name: Cleanup registry
> +uses: actions/delete-package-versions@v5
> +with:
> +  owner: ${{ github.repository_owner }}
> +  package-name: 'haproxy'
> +  package-type: container
> +  min-versions-to-keep: 1
> +  delete-only-untagged-versions: 'true'
> +
>run:
>  needs: build
>  strategy:
> diff --git a/.github/workflows/quic-interop-libressl.yml
> b/.github/workflows/quic-interop-libressl.yml
> index 4a7313ec9..166069bca 100644
> --- a/.github/workflows/quic-interop-libressl.yml
> +++ b/.github/workflows/quic-interop-libressl.yml
> @@ -38,6 +38,15 @@ jobs:
>  SSLLIB: LibreSSL
>tags: ghcr.io/${{  github.repository
> }}:libressl
>
> +  - name: Cleanup registry
> +uses: actions/delete-package-versions@v5
> +with:
> +  owner: ${{ github.repository_owner }}
> +  package-name: 'haproxy'
> +  package-type: container
> +  min-versions-to-keep: 1
> +  delete-only-untagged-versions: 'true'
> +
>run:
>  needs: build
>  strategy:
> --
> 2.46.0.windows.1
>
>


Re: Is there a converter from type IP to binary?

2025-02-15 Thread Willy Tarreau
Hi Andreas,

On Sat, Feb 15, 2025 at 02:30:48PM +, Andreas Mock wrote:
> Hi all,
> 
> the fetch method 'src' is returning an IP address with type 'ip'.
> As soon as I use it with %[src] I get a human readable string
> representation of that ip address. That means I have something
> like ip2str converter.
> 
> Is there a converter in HAProxy to get the binary representation
> of that ip address?
> 
> IPv4 is 4 bytes. IPv6 is 16 bytes.
> 
> Something like src,ip2bin which would be a 4 byte binary for IPv4
> and a 16 byte binary for IPv6?

Interesting. We indeed don't have anything explicit to do this. Actually
the IP address (IPv4 or IPv6) *is* internally in binary format and there's
an implicit cast from address to binary that does nothing more than a
memcpy(). If you pass the address to something that consumes a binary
type on input, it will be automatically cast to this type. For example,
if used with a binary stick-table it should automatically cast. Could you
please provide an example of how you're trying to use it where you'd like
to get the binary format that it does not propose ? That might help to
figure out whether we could for example insert a dummy conversion in between
to force the cast, or some such thing.
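
In the meantime, two places where this implicit addr->bin cast can already be observed (a sketch; the header name is illustrative, and the behavior is assumed to match the cast Willy describes):

```
# "hex" consumes a binary input, so the address is implicitly cast
# to binary and you get its raw bytes hex-encoded
# (8 hex chars for IPv4, 32 for IPv6):
http-request set-header X-Src-Bin %[src,hex]

# a binary stick-table triggers the same implicit cast on track:
backend st_src
    stick-table type binary len 16 size 64k expire 10m
```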

Willy




Re: AWS-LC : Incompatibilities and suggested config

2025-02-13 Thread Artur

Hello William,

Le 13/02/2025 à 11:31, William Lallemand a écrit :

Go is in fact not required, you only need it if you want to activate FIPS.
You can compile like this:

cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=1 -DDISABLE_GO=1 
-DDISABLE_PERL=1 \
   -DBUILD_TESTING=0 -DCMAKE_INSTALL_PREFIX=${BUILDSSL_DESTDIR} ..


Thanks for the tip. It's simpler this way. Actually, I'm currently using a
static aws-lc library while compiling haproxy.



You could remove all DHE-* ciphers: as they are not implemented in AWS-LC,
they are ignored. Regarding the TLSv1.3
ciphersuites, only 3 are implemented so you could keep the default values.
Sorry for the DHE-* ciphers, there were none in my config, but I copy/pasted
strings from the Mozilla Generator instead and didn't notice the DHE-*
ciphers listed at the end. My bad. :/

TLSv1.2 is already the minimum on bind lines in recent HAProxy versions.

Regarding no-tls-tickets, it depends if you want to avoid entirely resuming a 
previous TLS session, or if you want to
use "stateful tickets" instead of "stateless" ones that uses the HAProxy cache.

If you want to disable completely TLS resume on bind lines, you need in 
addition to no-tls-tickets:
'tune.ssl.cachesize 0' in the global section.
Note that stateful resumption is not implemented for TLSv1.3 in AWS-LC.


I have to investigate more about resuming TLS sessions and security 
concerns. And also 0-RTT for the same reason, I didn't activate it yet.


Otherwise, I didn't notice any other problem with haproxy + aws-lc. The
only one I had was related to the DH params file option.


Thanks a lot for your time and tips. I appreciate it.

--
Best regards,
Artur





Re: AWS-LC : Incompatibilities and suggested config

2025-02-13 Thread William Lallemand
Hello Artur,

On Thu, Feb 13, 2025 at 12:19:40AM +0100, Artur wrote:
> Subject: Re: AWS-LC : Incompatibilities and suggested config
> Hello Willy and William,
> 
> Thank you for your explanations and suggestions.
> 
> I've checked the ciphers supported by aws-lc and with help of Mozilla SSL
> Configuration Generator I have now a reasonable configuration for haproxy.
> As it may be of some interest, I post it here. I'm currently running haproxy
> 3.1.3.
> There was no problem compiling haproxy+aws-lc on Debian 12 and Debian 11.
> However on Debian 11, one has to enable backports to get an up-to-date
> golang package (and cmake if you want).
> The dependencies for aws-lc compilation are cmake/golang/libunwind-dev
> (other than build-essentials).

Go is in fact not required, you only need it if you want to activate FIPS.
You can compile like this:

cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=1 -DDISABLE_GO=1 
-DDISABLE_PERL=1 \
  -DBUILD_TESTING=0 -DCMAKE_INSTALL_PREFIX=${BUILDSSL_DESTDIR} ..


> The compilation process is exactly as described in haproxy INSTALL file.
> 
> haproxy has been configured/built with something like this (distribution
> INSTALL file was helpful here):
> 
> make -j $(nproc) ARCH_FLAGS=-s TARGET=linux-glibc CPU_CFLAGS=-march=native
> USE_OPENSSL_AWSLC=1 SSL_INC=/opt/aws-lc/include SSL_LIB=/opt/aws-lc/lib
> USE_QUIC=1 [...] all
> 
> I can't see LDFLAGS in INSTALL examples. In previous haproxy versions with
> quictls it was set to : LDFLAGS="-L/opt/quictls/lib
> -Wl,-rpath,/opt/quictls/lib". I suppose it's no longer necessary or it's not
> necessary with aws-lc.
> 

In fact it only depends on where you installed your library and whether it is a
static or a shared library. If you didn't
specify -DBUILD_SHARED_LIBS=1 it would build the library statically, so you
won't depend on a .so but would include the
.a in HAProxy.


> haproxy ciphers setup :
> 
> # generated 2025-02-12, Mozilla Guideline v5.7, HAProxy 3.0, OpenSSL 3.4.0, intermediate config, no HSTS
> # https://ssl-config.mozilla.org/#server=haproxy&version=3.0&config=intermediate&openssl=3.4.0&hsts=false&guideline=5.7
> global
>     # intermediate configuration
>     ssl-default-bind-curves X25519:prime256v1:secp384r1
>     ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305
>     ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
>     ssl-default-bind-options prefer-client-ciphers ssl-min-ver TLSv1.2 no-tls-tickets
>     ssl-default-server-curves X25519:prime256v1:secp384r1
>     ssl-default-server-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305
>     ssl-default-server-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
>     ssl-default-server-options ssl-min-ver TLSv1.2 no-tls-tickets
> 
> Please comment if you have some suggestions or enhancements to this config.
> 

You could remove all DHE-* ciphers as they are not implemented in AWS-LC they 
are ignored. Regarding the TLSv1.3
ciphersuites, only 3 are implemented so you could keep the default values.

TLSv1.2 is already the minimum on bind lines in recent HAProxy versions.

Regarding no-tls-tickets, it depends whether you want to avoid entirely resuming a
previous TLS session, or if you want to
use "stateful tickets" instead of "stateless" ones that use the HAProxy cache.

If you want to disable completely TLS resume on bind lines, you need in 
addition to no-tls-tickets:
'tune.ssl.cachesize 0' in the global section.
Note that stateful resumption is not implemented for TLSv1.3 in AWS-LC.
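
Putting these points together, fully disabling TLS session resumption would look something like this (a sketch; the certificate path and frontend name are illustrative):

```
global
    tune.ssl.cachesize 0            # disable the stateful session cache

frontend fe_main
    # no-tls-tickets disables stateless (ticket-based) resumption
    bind :443 ssl crt /etc/haproxy/ssl/default.pem no-tls-tickets
```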

Regards,

-- 
William Lallemand




Re: Can't find a solution for a config problem

2025-02-13 Thread Ciprian Dorin Craciun

On 2/12/25 17:25, Andreas Mock wrote:

http-request track-sc0 capture.req.hdr(0),lower,crc32c(1) => does work

So, my question: How can I construct a key consisting of both parts
in any way concatenated? Is is possible as a oneliner on

http-request track-sc0 

or do I have to do some intermediate variable assignment?

Any hint (or solution) would be very appreciated.




I don't think you can easily do it in one go, but you could do something 
along these lines:


* use `set-var-fmt` to set a string variable to the concatenation of both 
the hostname and the source, something like:


http-request set-var-fmt(txn.track_key) "%[src] %[req.fhdr(host)]"

* then use your original idea to convert this string into a number with 
something like the `xxh3` converter and use that as the key of your table;



Note that `crc32c`, and even `xxh3` could lead to collisions, and if you 
use this for security purposes, then perhaps you should use `hmac` (see 
the documentation), or at least use the `xxh3(seed)` variant.


Also note that by default track tables of type `integer` are 32 bits in 
size, thus if collisions are an issue, perhaps you should switch to 
`type binary len 16` (for 128 bits) and use the appropriate converter.


Finally, don't forget to pay special attention to canonicalization of 
the concatenation, so that (if this is used for security purposes), the 
attacker can't spoof something via the `Host` header.  (My proposal of 
`%[src] ...` should be OK.)
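
Put together, the two steps could look like this (a sketch; table and variable names are illustrative, and `xxh3` accepts an optional seed):

```
frontend fe_main
    bind *:80
    # canonical key: source address first, then the Host header
    http-request set-var-fmt(txn.track_key) "%[src] %[req.fhdr(host)]"
    # hash the string key and track it in the table below
    http-request track-sc0 var(txn.track_key),xxh3 table per_client

backend per_client
    # integer keys are 32-bit, so the 64-bit hash gets truncated;
    # collisions are possible, hence the binary/hmac caveats above
    stick-table type integer size 64k expire 1m store conn_cur
```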






Re: AWS-LC : Incompatibilities and suggested config

2025-02-12 Thread Artur

Hello Willy and William,

Thank you for your explanations and suggestions.

I've checked the ciphers supported by aws-lc and with help of Mozilla 
SSL Configuration Generator I have now a reasonable configuration for 
haproxy.
As it may be of some interest, I post it here. I'm currently running 
haproxy 3.1.3.
There was no problem compiling haproxy+aws-lc on Debian 12 and Debian 
11. However on Debian 11, one has to enable backports to get an 
up-to-date golang package (and cmake if you want).
The dependencies for aws-lc compilation are cmake/golang/libunwind-dev 
(other than build-essentials).

The compilation process is exactly as described in haproxy INSTALL file.

haproxy has been configured/built with something like this (distribution 
INSTALL file was helpful here):


make -j $(nproc) ARCH_FLAGS=-s TARGET=linux-glibc 
CPU_CFLAGS=-march=native USE_OPENSSL_AWSLC=1 SSL_INC=/opt/aws-lc/include 
SSL_LIB=/opt/aws-lc/lib USE_QUIC=1 [...] all


I can't see LDFLAGS in INSTALL examples. In previous haproxy versions 
with quictls it was set to : LDFLAGS="-L/opt/quictls/lib 
-Wl,-rpath,/opt/quictls/lib". I suppose it's no longer necessary or it's 
not necessary with aws-lc.


haproxy ciphers setup :

# generated 2025-02-12, Mozilla Guideline v5.7, HAProxy 3.0, OpenSSL 3.4.0, intermediate config, no HSTS
# https://ssl-config.mozilla.org/#server=haproxy&version=3.0&config=intermediate&openssl=3.4.0&hsts=false&guideline=5.7
global
    # intermediate configuration
    ssl-default-bind-curves X25519:prime256v1:secp384r1
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options prefer-client-ciphers ssl-min-ver TLSv1.2 no-tls-tickets
    ssl-default-server-curves X25519:prime256v1:secp384r1
    ssl-default-server-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305
    ssl-default-server-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-server-options ssl-min-ver TLSv1.2 no-tls-tickets


Please comment if you have some suggestions or enhancements to this config.

--
Best regards,
Artur


Re: Count Inflight Requests in a stick-table

2025-02-11 Thread Willy Tarreau
On Tue, Feb 11, 2025 at 09:50:33AM +0100, Ansgar Jazdzewski wrote:
> Hi,
> 
> I make use of track-sc, however it did not work as I'd like:
> 
> so this is my Layer3 code for SRC-IP
> ```
> frontend https
> maxconn 10
> bind ipv4@:443,ipv6@:443 mss 1280 ssl crt
> /etc/haproxy/ssl/default.pem crt /etc/haproxy/ssl/ verify none alpn
> h2,http/1.1
> 
> # Track concurrent connections per IP in the defined table
> tcp-request connection track-sc2 src table limit_src
> 
> # Define an ACL limiting concurrent connections (10 per IP)
> acl conn_rate_exceeded sc_conn_cur(2,limit_src) gt 10
> 
> # Reject only excessive connections (above 10 concurrent), allow others
> tcp-request connection reject if conn_rate_exceeded
> 
> 
> backend limit_src
>stick-table type ipv6 size 64k expire 1m store conn_cur
> 
> ```
> 
> My goal is to allow a continuous request flow toward // so
> that users can access public profiles, but prevent excessive requests
> from multiple source IPs overwhelming a single profile.

But did you actually *look* at sc_trackers() that I suggested in my
response ? I'm not seeing it in your configuration so I doubt you've
tested it. Also please don't top-post, as that makes it very inconvenient
to comment (and usually it encourages skipping important information).

Thanks,
Willy




Re: Count Inflight Requests in a stick-table

2025-02-11 Thread Ansgar Jazdzewski
Hi,

I make use of track-sc, however it did not work as I'd like:

so this is my Layer3 code for SRC-IP
```
frontend https
maxconn 10
bind ipv4@:443,ipv6@:443 mss 1280 ssl crt
/etc/haproxy/ssl/default.pem crt /etc/haproxy/ssl/ verify none alpn
h2,http/1.1

# Track concurrent connections per IP in the defined table
tcp-request connection track-sc2 src table limit_src

# Define an ACL limiting concurrent connections (10 per IP)
acl conn_rate_exceeded sc_conn_cur(2,limit_src) gt 10

# Reject only excessive connections (above 10 concurrent), allow others
tcp-request connection reject if conn_rate_exceeded


backend limit_src
   stick-table type ipv6 size 64k expire 1m store conn_cur

```

My goal is to allow a continuous request flow toward // so
that users can access public profiles, but prevent excessive requests
from multiple source IPs overwhelming a single profile.

Thanks,
Ansgar

Am Di., 11. Feb. 2025 um 09:08 Uhr schrieb Willy Tarreau :
>
> Hi Ansgar,
>
> On Tue, Feb 11, 2025 at 08:49:29AM +0100, Ansgar Jazdzewski wrote:
> > Hi Folks,
> >
> > I'm looking for a way to count the number of in-flight operations per
> > user (extracted from the URL path) and store that value in a variable.
> > My goal is to track and enforce a per-user concurrency limit using
> > HAProxy's stick tables and GPC.
> >
> > My approach is to use a GPC counter, incrementing it on request and
> > decrementing it when the response is sent.
> >
> > Draft Configuration;
> > ```
> > frontend http-in
> > bind *:80
> >
> > stick-table type string size 1m expire 10m store gpc0
> > http-request set-var(txn.user) path,regsub(^/([^/]+)/.*$,\1)
> > http-request track-sc0 var(txn.user)
> >
> > # Increase in-flight counter
> > http-request set-var(txn.gpc0) sc_inc_gpc0()
> >
> > # Limit concurrent requests per user to 5
> > acl user_over_limit sc_get_gpc0() gt 5
> > http-request deny if user_over_limit
> >
> > # Decrease in-flight counter when response is sent
> > http-response set-var(txn.gpc0) sc_dec_gpc0()
> > ...
> > ```
> >
> > However, sc_dec_gpc0() does not seem to be implemented yet. Do you
> > think such a function is needed, or is there another approach I could
> > take to track in-flight operations per user effectively?
>
> There is a much simpler way. Please have a look at sc_trackers(). It returns
> the number of active "track-sc" trackers on a given entry. I think it does
> exactly what you're looking for, without requiring you to increment or
> decrement a counter.
>
> Regards,
> Willy




Re: Count Inflight Requests in a stick-table

2025-02-11 Thread Willy Tarreau
Hi Ansgar,

On Tue, Feb 11, 2025 at 08:49:29AM +0100, Ansgar Jazdzewski wrote:
> Hi Folks,
> 
> I'm looking for a way to count the number of in-flight operations per
> user (extracted from the URL path) and store that value in a variable.
> My goal is to track and enforce a per-user concurrency limit using
> HAProxy's stick tables and GPC.
> 
> My approach is to use a GPC counter, incrementing it on request and
> decrementing it when the response is sent.
> 
> Draft Configuration:
> ```
> frontend http-in
>     bind *:80
>
>     stick-table type string size 1m expire 10m store gpc0
>     http-request set-var(txn.user) path,regsub(^/([^/]+)/.*$,\1)
>     http-request track-sc0 var(txn.user)
>
>     # Increase in-flight counter
>     http-request set-var(txn.gpc0) sc_inc_gpc0()
>
>     # Limit concurrent requests per user to 5
>     acl user_over_limit sc_get_gpc0() gt 5
>     http-request deny if user_over_limit
>
>     # Decrease in-flight counter when response is sent
>     http-response set-var(txn.gpc0) sc_dec_gpc0()
> ...
> ```
> 
> However, sc_dec_gpc0() does not seem to be implemented yet. Do you
> think such a function is needed, or is there another approach I could
> take to track in-flight operations per user effectively?

There is a much simpler approach. Please have a look at sc_trackers(). It
returns the number of active "track-sc" references on a given entry. I
think it does exactly what you're looking for, without having to increment
or decrement a counter.

Regards,
Willy
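[Editor's note] To make the suggestion concrete, a sketch of the sc_trackers()-based variant of Ansgar's draft might look like the following. This is untested; the deny_status and the exact table layout are illustrative assumptions, not part of the original mail:

```
frontend http-in
    bind *:80

    # no gpc0 needed: sc_trackers() counts the "track-sc" references
    # currently active on the entry, i.e. the in-flight tracked requests
    stick-table type string size 1m expire 10m
    http-request set-var(txn.user) path,regsub(^/([^/]+)/.*$,\1)
    http-request track-sc0 var(txn.user)

    # reject when more than 5 requests are already in flight for this user;
    # the counter drops automatically when each request completes
    http-request deny deny_status 429 if { sc_trackers(0) gt 5 }
```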




Re: Want to Publish Quality Content on Your Blog (haproxy)

2025-02-09 Thread Nextage Solution
Hi,

I hope this email finds you well.

I understand that you may be currently occupied with several important
matters. However, I wanted to kindly request that you take a moment to
consider a potential business opportunity that could be beneficial for both
of us.

I would greatly appreciate it if you could spare some time to discuss this
opportunity further.

Thank you for your consideration, and I look forward to hearing back from
you soon.

Best regards,
Shoaib Mughal

On Mon, Feb 3, 2025 at 8:39 AM Nextage Solution 
wrote:

> Hi,
>
> I hope you're doing well!
>
> My name is Shoaib Mughal, and I’m an SEO Link Builder and Blogger Outreach
> Manager at DesignsValley.com. I came across your blog, and I must say,
> I’m impressed with the quality of content on your website, [haproxy]. I’d
> love to discuss a potential collaboration with you.
>
> We’re looking to publish high-quality content on your website on behalf of
> our clients, and we’re happy to compensate you for the same. To proceed
> smoothly, I’d appreciate it if you could share the following details:
>
> ✅ What are your pricing details for:
>
>-
>
>General Posts
>-
>
>CBD-related Posts
>-
>
>Link Insertions in Existing Posts
>
> ✅ Guest Post Guidelines:
>
>-
>
>How many dofollow backlinks do you allow per post?
>-
>
>Do you use a Sponsored Tag?
>-
>
>What’s the required word count for guest articles?
>
> ✅ Additional Details:
>
>-
>
>Preferred payment methods?
>-
>
>Turnaround time (TAT) for publishing?
>
> We value long-term collaborations and are looking for trusted partners
> like you. Let me know your thoughts, and I’d be happy to discuss this
> further. I’m looking forward to your positive response.
>
> Best regards,
> Shoaib Mughal
> SEO & Blogger Outreach Manager
>


Re: [PATCH] BUG/MINOR: 51 degree: handle a possible strdup() failure

2025-02-08 Thread Илья Шипицин
please feel free to either apply or discard the patch.

Sat, Feb 8, 2025 at 07:15, Willy Tarreau wrote:

> Hi Ilya,
>
> > On Tue, Feb 04, 2025 at 10:50:24PM +0100, Илья Шипицин wrote:
> > well, there are ~10 unhandled "malloc" and ~3 unhandled "calloc" in 51d.c
> > I'm against reviewing/fixing them altogether
> >
> > can we fix it step by step instead ?
>
> If you really need to fix them step by step, that can be OK, but you
> just need to keep in mind that changing a function's construct to
> handle bugs in a way that leaves bugs is always a pain to review and
> even debug later. I think that there might be some cases where bugs
> are sufficiently unrelated to be fixed separately, but for example
> when you need to revisit the unrolling of a function's free() calls
> at the end, you often need to do it all at once in the function, and
> in this case I'd rather not try to reintroduce existing bugs for the
> purpose of sticking to step-by-step, but rather commit a single
> "revisit error paths in function foo() to properly release all
> allocations".
>
> > as for missing space, I can fix (or this can be fixed when applying)
>
> OK noted, thanks!
> Willy
>


Re: Question about mimalloc (pronounced "me-malloc") and HAProxy

2025-02-08 Thread Aleksandar Lazic

Hi Willy.

On 2025-02-08 (Sa.) 05:49, Willy Tarreau wrote:

Hi Alex,

On Sat, Feb 08, 2025 at 01:41:28AM +0100, Aleksandar Lazic wrote:

Hi.

Has anybody tried to use this malloc library with HAProxy?
https://github.com/microsoft/mimalloc


Oh, thank you for this link! No I never tested it but I've read their
README and it looks super appealing, precisely because their main focus
is worst case latency. While jemalloc performs super fast (which is also
reflected in their benchmarks), we've got a few reports of some extremely
long free() operations trying to purge too many objects and taking more
than one second, which is absolutely not acceptable.

The MIT license even allows easily embedding it into existing projects,
though contrary to the venerable old dlmalloc, it's no longer a single
file at all, and the code is a bit dirty with inconsistent coding styles
between functions clearly coming from different contributors, so I
suspect it could easily trigger warnings if built with the project's
main files and build options. I'm also seeing that it requires special
build options for valgrind and ASAN.

I'll ping the few heavy users who experienced watchdogs in free(), in
case they want to give it a try.


Great, I'm very curious whether this library could have any impact (+,-,=) on HAProxy :-)


Thank you!
Willy


Regards
Alex




Re: [PATCH] BUG/MINOR: 51 degree: handle a possible strdup() failure

2025-02-07 Thread Willy Tarreau
Hi Ilya,

On Tue, Feb 04, 2025 at 10:50:24PM +0100, Илья Шипицин wrote:
> well, there are ~10 unhandled "malloc" and ~3 unhandled "calloc" in 51d.c
> I'm against reviewing/fixing them altogether
> 
> can we fix it step by step instead ?

If you really need to fix them step by step, that can be OK, but you
just need to keep in mind that changing a function's construct to
handle bugs in a way that leaves bugs is always a pain to review and
even debug later. I think that there might be some cases where bugs
are sufficiently unrelated to be fixed separately, but for example
when you need to revisit the unrolling of a function's free() calls
at the end, you often need to do it all at once in the function, and
in this case I'd rather not try to reintroduce existing bugs for the
purpose of sticking to step-by-step, but rather commit a single
"revisit error paths in function foo() to properly release all
allocations".

> as for missing space, I can fix (or this can be fixed when applying)

OK noted, thanks!
Willy




Re: Question about mimalloc (pronounced "me-malloc") and HAProxy

2025-02-07 Thread Willy Tarreau
Hi Alex,

On Sat, Feb 08, 2025 at 01:41:28AM +0100, Aleksandar Lazic wrote:
> Hi.
> 
> Has anybody tried to use this malloc library with HAProxy?
> https://github.com/microsoft/mimalloc

Oh, thank you for this link! No I never tested it but I've read their
README and it looks super appealing, precisely because their main focus
is worst case latency. While jemalloc performs super fast (which is also
reflected in their benchmarks), we've got a few reports of some extremely
long free() operations trying to purge too many objects and taking more
than one second, which is absolutely not acceptable.

The MIT license even allows easily embedding it into existing projects,
though contrary to the venerable old dlmalloc, it's no longer a single
file at all, and the code is a bit dirty with inconsistent coding styles
between functions clearly coming from different contributors, so I
suspect it could easily trigger warnings if built with the project's
main files and build options. I'm also seeing that it requires special
build options for valgrind and ASAN.

I'll ping the few heavy users who experienced watchdogs in free(), in
case they want to give it a try.

Thank you!
Willy




Re: [PATCH] DOC: option redispatch should mention persist options

2025-02-06 Thread Willy Tarreau
On Wed, Feb 05, 2025 at 07:42:15AM +, Lukas Tribus wrote:
> "option redispatch" remains vague in which cases a session would persist;
> let's mention "option persist" and "force-persist" as an example so folks
> don't draw the conclusion that this may be default.
> 
> Should be backported to stable branches.

Good idea indeed, thank you Lukas for pointing that one out and fixing it!
Now merged.
Willy
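[Editor's note] For readers coming from the archives, a minimal sketch of the combination the doc patch describes: with "option persist", a session may keep targeting a server that is marked down, and "option redispatch" is what allows breaking that persistence on connection failure. Server addresses and names are placeholders:

```
backend app
    # keep sending clients to their persisted server, even when it is
    # marked down...
    option persist
    # ...but allow breaking persistence and retrying another server
    # when the connection to the persisted one actually fails
    option redispatch
    server s1 192.0.2.10:8080 check
    server s2 192.0.2.11:8080 check
```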




Re: Add me to mailing list

2025-02-06 Thread Aleksandar Lazic

Hi.

On 2025-02-06 (Do.) 07:43, Büchter, Sven wrote:

Sven Büchter


You can do this yourself:
https://www.haproxy.org/#tact

BR
Alex




Re: AWS-LC : Incompatibilities and suggested config

2025-02-05 Thread William Lallemand
Sending this back, looks like I got blocked by the RBL again.

On Wed, Feb 05, 2025 at 06:07:39PM +0100, Artur wrote:
> Hello !
> 
> I'm testing aws-lc library with haproxy (3.1) and I was surprised to get a
> start failure after migration from quictls to aws-lc :
> 
> [ALERT] : config : parsing [/etc/haproxy/haproxy.cfg:19] : unknown keyword
> 'ssl-dh-param-file' in 'global' section; did you mean
> 'tune.ssl.default-dh-param' maybe ?
> 
> I removed 'ssl-dh-param-file' and haproxy started. However it made me wonder
> if there are some other differences/limitations related to aws-lc.
> I've already seen that some ciphers are not available in aws-lc.
> 
> So, I'm currently looking for a suggested (basic/secure) config for use with
> aws-lc. Maybe some articles are available to explain haproxy and aws-lc
> interactions from admin point of view ?

Hello Artur,

Indeed there are some differences with OpenSSL. We do have a page which talks
about AWS-LC, but unfortunately it is not complete:
https://github.com/haproxy/wiki/wiki/SSL-Libraries-Support-Status

However, AWS-LC maintains good documentation about its differences from
OpenSSL:

https://github.com/aws/aws-lc/blob/main/docs/porting/functionality-differences.md


The biggest difference is indeed the old ciphers that were removed from the
library; that's why the dh-param parameter does not work: the DHE ciphers do
not exist in AWS-LC, and ECDHE is recommended instead.

Renegotiation is also not completely implemented, but that's not a required
feature nowadays.
Stateful session resumption, meaning the SSL session cache, is only
implemented with TLSv1.2; with TLSv1.3, only ticket resumption is available.

Basically AWS-LC focuses more on modern features, and does not try to
implement the old ones that should disappear from the ecosystem.
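[Editor's note] A hedged sketch of what the migration could look like in the global section. The cipher strings and minimum version are illustrative assumptions; check AWS-LC's documentation for the exact supported set:

```
global
    # AWS-LC has no DHE support, so drop ssl-dh-param-file and rely
    # on ECDHE key exchange instead
    ssl-default-bind-options ssl-min-ver TLSv1.2
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
```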

Regards,

-- 
William Lallemand




Re: AWS-LC : Incompatibilities and suggested config

2025-02-05 Thread Willy Tarreau
Hello Artur,

On Wed, Feb 05, 2025 at 06:07:39PM +0100, Artur wrote:
> Hello !
> 
> I'm testing aws-lc library with haproxy (3.1) and I was surprised to get a
> start failure after migration from quictls to aws-lc :
> 
> [ALERT] : config : parsing [/etc/haproxy/haproxy.cfg:19] : unknown keyword
> 'ssl-dh-param-file' in 'global' section; did you mean
> 'tune.ssl.default-dh-param' maybe ?

Hmmm bad, we need to improve this. We *know* that some keywords (very few
actually) may depend on certain libs, and we'd need to make special cases
for a few of them and emit a warning instead, such as:

  Warning: the ssl-dh-param-file file is not supported by aws-lc and will
  be ignored. If you don't use dh algos (you shouldn't), you can safely
  remove that line to get rid of this warning.

> I removed 'ssl-dh-param-file' and haproxy started. However it made me wonder
> if there are some other differences/limitations related to aws-lc.
> I've already seen that some ciphers are not available in aws-lc.

That's what I understood, yes. I don't know the exact extent of differences
though they are very thin since aws-lc has ancestry from boringSSL which
itself is derived from openssl. I'd be tempted to suggest that the
differences between openssl versions and aws-lc should be in the same
range as the difference between major openssl versions built with default
settings (i.e. some algos are sometimes deprecated etc). Maybe William
has other differences in mind.

> So, I'm currently looking for a suggested (basic/secure) config for use with
> aws-lc. Maybe some articles are available to explain haproxy and aws-lc
> interactions from admin point of view ?

I'd say that for now it's the closest working alternative to openssl we've
seen to date and it's getting a growing importance in our tests and usages.
That's the one I'm using by default when I build haproxy tens to hundreds
of times a day during development for example. We've started to work on an
article comparing the various libs but it takes an amazing amount of time
so it's still not ready. But at this point I would say that I'm confident
this lib is prod-ready, and that we'll write more about it soon.

Regarding the visible differences, we definitely need to do a bit better
and at least take care of a few keywords that make configs break.

Thanks for this very useful feedback!
Willy




Re: [PATCH] BUG/MINOR: 51 degree: handle a possible strdup() failure

2025-02-04 Thread Илья Шипицин
well, there are ~10 unhandled "malloc" and ~3 unhandled "calloc" in 51d.c
I'm against reviewing/fixing them altogether

can we fix it step by step instead ?

as for missing space, I can fix (or this can be fixed when applying)

Fri, Jan 3, 2025 at 05:14, Willy Tarreau wrote:

> Hi Ilya,
>
> > On Thu, Jan 02, 2025 at 10:02:01PM +0100, Илья Шипицин wrote:
> > > Thu, Jan 2, 2025 at 21:46, Miroslav Zagorac wrote:
> > >
> > > > On 02. 01. 2025. 21:40, Илья Шипицин wrote:
> > > > Honestly, I think those elements must be deallocated on program exit,
> > > > not only if something failed during allocation.
> > > >
> > > > but I did not check that
> > > >
> > >
> > > That is correct.  However, the calloc() result is not checked before
> > > strdup()
> > > either, so the patch is not good.
> > >
> >
> > I did not pretend to add "calloc" check for this patch.
> > we have dedicated *dev/coccinelle/unchecked-calloc.cocci *script which
> > allows us to detect unchecked "calloc". no worry, it won't be forgotten
>
> In general, when reworking some functions' memory allocation and checks,
> it's better to fix it at once when you find that multiple checks are
> missing, rather than attempting incremental fixes that remain partially
> incorrect.
>
> One reason is that very often, dealing with allocations unrolling requires
> an exit label where allocations are unrolled in reverse order, and doing
> them one at a time tends to stay away from that approach. Or sometimes
> you'll figure that fixing certain unchecked allocations require to
> completely change the approach that was used for previous fixes.
>
> Thus if you think you've figured out how to completely fix that function, do
> not hesitate, please just fix it all at once, indicating in the commit
> message what you fixed. If you think you can fix it incrementally without
> having to change your fix later, then it's fine to do it that way as well
> of course.
>
> > > >>> + if (name->name == NULL) {
> > > >>> + memprintf(err,"Out of memory.");
>   
> BTW, beware of the missing space here.
>
> > > >>> + goto fail_free_name;
> > > >>> + }
>
> Cheers,
> Willy
>


Re: state of QuicTLS

2025-01-31 Thread William Lallemand
Hello Ilia,

On Thu, Jan 30, 2025 at 09:54:29PM +0100, Илья Шипицин wrote:
> Hello,
> 
> currently HAProxy is built against https://github.com/quictls/openssl as
> reference QUIC implementation.
> 
> however, recently QuicTLS started new wave in
> https://github.com/quictls/quictls
> 
> shall we switch to it ?
> 
> Ilia

I don't know, what we currently support is the +quic patchset on top of 
openssl, because that's also what we support in
our enterprise version with a 1.1.1 LTS.

I don't know how the library will diverge and if we will support it. Our 
current effort is around AWS-LC, but quictls
could be interesting in the future.

Maybe we should have a weekly job, but I don't think a push job is interesting 
for now, we can keep the current
openssl+quic one for that.

Regards,

-- 
William Lallemand




Re: [PATCH 0/1] Add 4 new sample fetches to get information from ClientHello message

2025-01-30 Thread William Lallemand
Hello Mariam,

On Wed, Jan 29, 2025 at 05:59:37AM -0600, Mariam John wrote:
> Hello William,
> 
>   Thank you for your feedback on this PR and apologies for the delay. I 
> addressed all the comments from
> your last review:
> - included the SSL bind equivalent for each fetch and fixed a typo in the 
> name of a fetch. 
> - Undid the changes to buf-t.h 
> - Updated test to support AWS-LC
> - Added a new function to do generic clienthello parsing that you can be used 
> in every fetch in payload.c
> 
> Thanks,
> Mariam.
 
Thanks Mariam, I made comments inline, in addition to the following comments.

Note that there are a lot of spaces/tabs problems in your patch; you should
enable a way to differentiate them in
your editor. We are using tabs for indentation, and spaces for alignment, you 
could look for examples here
https://www.haproxy.org/coding-style.html

Try to do a `git log -p` of your patches before submitting, it should show you 
the trailing spaces and tabs in red.
If not, you must enable 'color.ui = auto' in your git configuration.

I didn't mention all the spaces/tabs problems in my review, but I could fix
the remaining ones in your next iteration if that's fine with you.

The patch is becoming quite big because of the changes around your new
parsing function. You should split it into multiple parts to make it more
readable; it's currently quite difficult to follow the changes because the
chunks are mixed up in the patch.

You could proceed in this order:

1. a patch with your new fetches like you've done previously, without the 
smp_client_hello_parse()
2. a patch with your new reg-test
3. a patch which introduces the new smp_client_hello_parse() function and
uses it in every fetch

On Wed, Jan 29, 2025 at 05:59:38AM -0600, Mariam John wrote:
> Subject: [PATCH 1/1] MINOR: sample: Add sample fetches for enhanced 
> observability for TLS ClientHello
> Add new sample fetches to get the ciphers, supported groups, key shares and 
> signature algorithms
> that the client supports during a TLS handshake as part of the contents of a 
> TLS ClientHello.
> Currently we can get the following contents of the ClientHello message: 
> SNI(req_ssl_sni) and
> TLS protocol version(req_ssl_ver). The following new sample fetches will be 
> added to get the
> following contents of the ClientHello message exchanged during the TLS 
> handshake (supported by
> the client):
> - req.ssl_cipherlist: Returns the binary form of the list of symmetric cipher 
> options
> - req.ssl_sigalgs: Returns the binary form of the list of signature algorithms
> - req.ssl_keyshare_groups: Return the binary format of the list of key share 
> group
> - req.ssl_supported_groups: Returns the binary form of the list of supported 
> groups used in key exchange
> 
> This added functionality would allow routing with fine granularity pending 
> the capabilities the client
> indicates in the ClientHello. For example, this would allow the ability to 
> enable TLS passthrough or
> TLS termination based on the supported groups detected in the ClientHello 
> message. Another usage is to
> take client key shares into consideration when deciding which of the client 
> supported groups should be
> used for groups considered to have 'equal security level' as well as enabling 
> fine grain selection of
> certificate types(beyond the RSA vs ECC distinction). All of that is relevant 
> in the context of rapidly
> upcoming PQC operation modes.
> 
> Fixes: #2532
> ---
>  doc/configuration.txt   |  66 ++
>  reg-tests/checks/tcp-check-client-hello.vtc |  82 +++
>  src/payload.c   | 697 ++--
>  3 files changed, 479 insertions(+), 366 deletions(-)
>  create mode 100644 reg-tests/checks/tcp-check-client-hello.vtc
> 
> diff --git a/doc/configuration.txt b/doc/configuration.txt
> index da9d8b540..ab39da283 100644
> --- a/doc/configuration.txt
> +++ b/doc/configuration.txt
>  [...]
> +
> +req.ssl_supported_groups binary
> +  Returns the binary form of the list of supported groups supported by the 
> client
> +  as reported in the TLS ClientHello and used for key exchange which can 
> include
> +  both elliptic curve and non-EC key exchange. Note that this only applies 
> to raw
> +  contents found in the request buffer and not to contents deciphered via an 
> SSL
> +  data layer, so this will not  work with "bind" lines having the "ssl" 
> option.
> +  Refer to "ssl_fc_eclist_bin" hich is the SSL bind equivalent that can be 
> used

  ^ small typo there "which"

> +  when the "ssl" option is specified.
> +
> +  Examples :
> + # Wait for a client hello for at most 5 seconds
> + tcp-request inspect-delay 5s
> + tcp-request content accept if { req.ssl_hello_type 1 }
> + use-server fe3 if { req.ssl_supported_groups, be2hex(:,2),lower -m sub 
> 0017 }
> + server fe3  ${htst_fe3_addr}:${htst_fe3_port}
> +
>  req.ssl_st_ext : integer
>Returns 0 if the clie

Re: Authentication/authorization implementation in haproxy, possibly with Redis

2025-01-13 Thread Christopher Faulet

Hi Lucas,

Le 10/01/2025 à 2:20 PM, Lucas Rolff a écrit :

I ended up trying to give SPOE a try, since it seemed like a possible contender 
for this use case.

I tried to give the LUA, Python, Rust and Go examples a go, and it seems like I 
need as many workers/threads on the agents, as I have concurrently processed 
requests in haproxy.


I cannot say for the Go and Rust SPOAs, but I've checked the LUA/Python SPOA
(https://github.com/haproxy/spoa-server) and indeed, a worker is only able to
process one message at a time. In fact, there is exactly one connection per
worker, processing one message at a time. So it does not scale at all. It is
more a proof of concept than a production-ready agent.


Anyway, to make it work, you must set a maxconn on your SPOA servers so as not
to open more connections than the agent can accept. In fact, I suggest setting
a slightly lower maxconn, for instance a maxconn of 5 for 8 or 10 workers,
because maxconn is unfortunately not as strict a value as it should be.
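[Editor's note] Expressed as configuration, that advice might look like this sketch. The backend name, address, and timeouts are illustrative placeholders:

```
backend spoe-agents
    mode tcp
    timeout connect 5s
    timeout server 1m
    # the agent runs 8-10 workers, but maxconn is not perfectly strict,
    # so cap connections slightly below the worker count
    server agent1 127.0.0.1:12345 check maxconn 5
```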


Note that the SPOE was rewritten in 3.1. On older versions, the SPOP connection
management is far from perfect. So if you really want to go with the SPOE, I
suggest using 3.1.




Is the expectation of SPOE, that the agents are fully event-driven as well, to 
be able to handle the AGENT-HELLO frames coming from haproxy, and then get them 
processed (within the processing timeout).
If that's the case, that obviously complicates the usage of SPOE a lot more


The SPOE does not expect anything special about the agents. But they must be 
sized to handle the load. In this case, it is obvious the spoa-server will never 
be able to handle many concurrent connections. And because it does not support 
the "pipelining" mode, there is no way to multiplex the requests.


--
Christopher Faulet





Re: Authentication/authorization implementation in haproxy, possibly with Redis

2025-01-11 Thread Tim Düsterhus

Hi

On 1/10/25 00:11, Lucas Rolff wrote:

- I can do it the ugly way, and proxy the request from haproxy to a small 
Golang app or similar, and then let the Go application talk to S3 backend 
directly


This is what I do with my haproxy-auth-request Lua script, it's working 
well for my use case: https://github.com/TimWolla/haproxy-auth-request


Best regards
Tim Düsterhus
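[Editor's note] For reference, usage of that script is roughly as follows, based on its README at the time of writing. Paths and backend names are placeholders; check the repository for exact, current instructions:

```
global
    lua-prepend-path /usr/share/haproxy/?/http.lua
    lua-load /usr/share/haproxy/auth-request.lua

frontend app
    # send a subrequest to the auth backend; allow only on success
    http-request lua.auth-request be_auth /auth/validate
    http-request deny unless { var(txn.auth_response_successful) -m bool }

backend be_auth
    server auth 127.0.0.1:8081
```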




Re: Authentication/authorization implementation in haproxy, possibly with Redis

2025-01-10 Thread Lucas Rolff
I ended up trying to give SPOE a try, since it seemed like a possible contender 
for this use case.

I tried to give the LUA, Python, Rust and Go examples a go, and it seems like I 
need as many workers/threads on the agents, as I have concurrently processed 
requests in haproxy.

Effectively I spawned the ps_lua.lua example:
===
./spoa -f ps_lua.lua
SPOA is listening on port 12345
SPOA worker 01 started
SPOA worker 02 started
SPOA worker 03 started
SPOA worker 04 started
SPOA worker 05 started
(string) "Load lua message processors"
(string) "Load lua message processors"
(string) "Load lua message processors"
(string) "Load lua message processors"
(string) "Load lua message processors"
===

If I do an "h2load -t 1 -n 10 -c 5 http://localhost/test.txt", all requests
succeed. The average response time is 262us:
===
finished in 3.83ms, 2610.97 req/s, 6.87MB/s
requests: 10 total, 10 started, 10 done, 10 succeeded, 0 failed, 0 errored, 0 
timeout
status codes: 10 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 26.96KB (27610) total, 7.07KB (7240) headers (space savings 9.16%), 
19.54KB (20010) data
 min max mean sd+/- sd
time for request:  307us   699us   470us   137us80.00%
time for connect: 1.08ms  2.34ms  1.77ms   480us60.00%
time to 1st byte: 3.00ms  3.14ms  3.05ms57us80.00%
req/s   : 554.62  586.64  571.08   11.4660.00%
===

Now, the issue starts if I increase clients from 5 to 10. Exactly half the 
requests will reach the 1000ms processing timeout configured.

I'm assuming this is because when haproxy tries to call the SPOE filter with
the requests, the agents (be it the LUA, Python or Go examples) are not able
to accept the request the millisecond haproxy tries to send it to them.
On spoe-agent, I have `timeout hello 5s` configured.

e.g. if I simply make the ps_lua.lua output "Hello World" in the 
spoa.register_message function, I get exactly 5 "hello world", instead of 10, 
and eventually after the 5 second timeout has been reached for `hello`, I get 
the various "failed to read/write" frames:
===
/spoa -f ps_lua.lua
SPOA is listening on port 12345
SPOA worker 01 started
SPOA worker 02 started
SPOA worker 03 started
SPOA worker 04 started
SPOA worker 05 started
(string) "Load lua message processors"
(string) "Load lua message processors"
(string) "Load lua message processors"
(string) "Load lua message processors"
(string) "Load lua message processors"
hello world
hello world
hello world
hello world
hello world


1736514434.980989 [01] Failed to read Haproxy NOTIFY frame
1736514434.980992 [03] Failed to read Haproxy NOTIFY frame
1736514434.980992 [04] Failed to read Haproxy NOTIFY frame
1736514434.980992 [02] Failed to read Haproxy NOTIFY frame
1736514434.980991 [05] Failed to read Haproxy NOTIFY frame
1736514434.981003 [01] Close the client socket because of I/O errors
1736514434.981004 [03] Close the client socket because of I/O errors
1736514434.981011 [04] Close the client socket because of I/O errors
1736514434.981015 [02] Close the client socket because of I/O errors
1736514434.981016 [05] Close the client socket because of I/O errors
1736514434.981045 [01] Failed to write Agent frame
1736514434.981046 [03] Failed to write Agent frame
1736514434.981049 [01] Close the client socket because of I/O errors
1736514434.981049 [03] Close the client socket because of I/O errors
1736514434.981058 [04] Failed to write Agent frame
1736514434.981061 [02] Failed to write Agent frame
1736514434.981061 [04] Close the client socket because of I/O errors
1736514434.981061 [05] Failed to write Agent frame
1736514434.981062 [02] Close the client socket because of I/O errors
1736514434.981066 [05] Close the client socket because of I/O errors
===

Is the expectation of SPOE, that the agents are fully event-driven as well, to 
be able to handle the AGENT-HELLO frames coming from haproxy, and then get them 
processed (within the processing timeout).
If that's the case, that obviously complicates the usage of SPOE a lot more

Best Regards,
Lucas Rolff


> On 10 Jan 2025, at 00:11, Lucas Rolff  wrote:
> 
> Hello,
> 
> I'm looking into possibilities to implement some slightly more complex logic 
> into haproxy that's being used when talking to origins.
> Looking through the documentation, I see I can obviously use Lua, which 
> offers great flexibility.
> 
> I'm looking to implement a few additional checks and balances before I 
> forward requests to the origin/backend for being processed.
> 
> Two of these things are:
> - validation of bearer tokens (API tokens) on haproxy level
> - S3 signing
> 
> Bearer tokens are stored in a Redis instance, and I can obviously write Lua 
> that's ran for a given http-request, but from what I understand, the way the 
> Lua is ran, means I obviously have to establish the whole Redis 
> instance/connection every time the code is executed, and I ca

Re: [RFC] MEDIUM: lua,http,spoe: add PATCH method to lua httpclient

2025-01-09 Thread Miroslav Zagorac
On 09. 01. 2025. 13:22, Chris Hibbert wrote:
> Well my first contribution is off to an embarrassing start, here is the
> proposed patch attached!
> 

Hello Chris,

I would just like to comment on the part of your patch related to
addons/ot/src/util.c:

I think that instead of HTTP_METH_PATCH, the string content "PATCH" (that is,
HTTP_METH_STR_PATCH which is defined as "PATCH") should be used, and not an
enum as you have.

An example of that part of the patch is in the attachment.


Best regards,

-- 
Miroslav Zagoracdiff --git a/addons/ot/include/util.h b/addons/ot/include/util.h
index 776ddd203..f5739b2a8 100644
--- a/addons/ot/include/util.h
+++ b/addons/ot/include/util.h
@@ -23,6 +23,7 @@
 #define HTTP_METH_STR_OPTIONS   "OPTIONS"
 #define HTTP_METH_STR_GET   "GET"
 #define HTTP_METH_STR_HEAD  "HEAD"
+#define HTTP_METH_STR_PATCH "PATCH"
 #define HTTP_METH_STR_POST  "POST"
 #define HTTP_METH_STR_PUT   "PUT"
 #define HTTP_METH_STR_DELETE"DELETE"
diff --git a/addons/ot/src/util.c b/addons/ot/src/util.c
index fd040164d..4689fbf03 100644
--- a/addons/ot/src/util.c
+++ b/addons/ot/src/util.c
@@ -585,6 +585,11 @@ int flt_ot_sample_to_str(const struct sample_data *data, char *value, size_t siz
 
 		(void)memcpy(value, HTTP_METH_STR_HEAD, retval + 1);
 	}
+	else if (data->u.meth.meth == HTTP_METH_PATCH) {
+		retval = FLT_OT_STR_SIZE(HTTP_METH_STR_PATCH);
+
+		(void)memcpy(value, HTTP_METH_STR_PATCH, retval + 1);
+	}
 	else if (data->u.meth.meth == HTTP_METH_POST) {
 		retval = FLT_OT_STR_SIZE(HTTP_METH_STR_POST);
 


Re: [RFC] MEDIUM: lua,http,spoe: add PATCH method to lua httpclient

2025-01-09 Thread Chris Hibbert
Well my first contribution is off to an embarrassing start, here is the
proposed patch attached!

On Thu, 9 Jan 2025 at 12:19, Chris Hibbert  wrote:

> Currently the httpclient supports these methods: get, head, put, post,
> delete
>
> https://www.arpalert.org/src/haproxy-lua-api/2.9/index.html#httpclient-class
>
> I need patch support, and this has also been requested by others here:
> https://github.com/haproxy/haproxy/issues/2489#issuecomment-2003414422
>
> I have made what I believe are the correct code changes, and have tested
> that I can now send PATCH via the httpclient - this is working fine.
>
> There are some changes I needed to make to add PATCH into the list of
> methods, and a few surrounding changes which I'm not sure how are used/to
> test (e.g. open telemetry, spoe code), I'm very happy to remove any
> unnecessary changes which those with more experience say are not required.
>
> TIA.
>


0001-Add-PATCH-method-to-httpclient.patch
Description: Binary data


Re: [PATCH 1/1] MINOR: sample: Add sample fetches for enhanced observability for TLS ClientHello

2025-01-09 Thread William Lallemand
Hello Mariam,

On Thu, Jan 09, 2025 at 02:36:09AM -0600, Mariam John wrote:
> Subject: [PATCH 1/1] MINOR: sample: Add sample fetches for enhanced 
> observability for TLS ClientHello
> Add new sample fetches to get the ciphers, supported groups, key shares and 
> signature algorithms
> that the client supports during a TLS handshake as part of the contents of a 
> TLS ClientHello.
> Currently we can get the following contents of the ClientHello message: 
> SNI(req_ssl_sni) and
> TLS protocol version(req_ssl_ver). The following new sample fetches will be 
> added to get the
> following contents of the ClientHello message exchanged during the TLS 
> handshake (supported by
> the client):
> - req.ssl_cipherlist: Returns the binary form of the list of symmetric cipher 
> options
> - req.ssl_sigalgs: Returns the binary form of the list of signature algorithms
> - req.ssl_keyshare_groups: Return the binary format of the list of key share 
> group
> - req.ssl_supported_groups: Returns the binary form of the list of supported 
> groups used in key exchange
> 
> This added functionality would allow routing with fine granularity pending 
> the capabilities the client
> indicates in the ClientHello. For example, this would allow the ability to 
> enable TLS passthrough or
> TLS termination based on the supported groups detected in the ClientHello 
> message. Another usage is to
> take client key shares into consideration when deciding which of the client 
> supported groups should be
> used for groups considered to have 'equal security level' as well as enabling 
> fine grain selection of
> certificate types(beyond the RSA vs ECC distinction). All of that is relevant 
> in the context of rapidly
> upcoming PQC operation modes.
> 
> Fixes: #2532

That's better regarding the CI, but I think you missed my inline comments in my 
previous mail!

See below for my inline comments:

> ---
>  doc/configuration.txt   |  61 ++
>  include/haproxy/buf-t.h |   2 +
>  reg-tests/checks/tcp-check-client-hello.vtc |  84 +++
>  src/payload.c   | 629 +++-
>  4 files changed, 775 insertions(+), 1 deletion(-)
>  create mode 100644 reg-tests/checks/tcp-check-client-hello.vtc
> 
> diff --git a/doc/configuration.txt b/doc/configuration.txt
> index 76b622bce..6d5ea4a7d 100644
> --- a/doc/configuration.txt
> +++ b/doc/configuration.txt
> @@ -25030,6 +25030,10 @@ req_ssl_sni  
>  string
>  req.ssl_st_extinteger
>  req.ssl_ver   integer
>  req_ssl_ver   integer
> +req.ssl_cipherlistbinary
> +req.ssl_sigalgs   binary
> +req.ssl_keyshare_groups   binary
> +req.ssl_supported_groups  binary
>  res.len   integer
>  res.payload(,)binary
>  res.payload_lv(,[,])binary
> @@ -25234,6 +25238,63 @@ req_ssl_sni : string (deprecated)
>   use_backend bk_allow if { req.ssl_sni -f allowed_sites }
>   default_backend bk_sorry_page
>  
> +req.ssl_cipherlist binary
> +  Returns the binary form of the list of symmetric cipher options supported 
> by
> +  the client as reported in the contents of a TLS ClientHello. Note that this
> +  only applies to raw contents found in the request buffer and not to 
> contents
> +  deciphered via an SSL data layer, so this will not work with "bind" lines
> +  having the "ssl" option.
> +

I think you should mention the SSL bind equivalent keyword for each fetch when
it exists; that would help users find how to do the same thing when they use
the "ssl" option.


> +  Examples :
> + # Wait for a client hello for at most 5 seconds
> + tcp-request inspect-delay 5s
> + tcp-request content accept if { req.ssl_hello_type 1 }
> + use-server fe3 if { req.ssl_cipherlist,be2hex(:,2),lower -m sub 
> 1302:009f }
> + server fe3  ${htst_fe3_addr}:${htst_fe3_port}
> +
> +req.ssl_ssl_sigalgs binary

Typo there, "ssl_ssl".

> +  Returns the binary form of the list of signature algorithms supported by 
> the
> +  client as reported in the TLS ClientHello. This is available as a client 
> hello
> +  extension. Note that this only applies to raw contents found in the request
> +  buffer and not to contents deciphered via an SSL data layer, so this will 
> not
> +  work with "bind" lines having the "ssl" option.
> +
> +  Examples :
> + # Wait for a client hello for at most 5 seconds
> + tcp-request inspect-delay 5s
> + tcp-request content accept if { req.ssl_hello_type 1 }
> + use-server fe4 if { req.ssl_sigalgs,be2hex(:,2),lower -m sub 0403:0805 }
> + server fe4  ${htst_fe4_addr}:${htst_fe4_port}
> +
> [...]
> diff --git a/include/haproxy/buf-t.h b/include/haproxy/buf-t.h

Re: RSA & ECC certificates bundling on the same ip with aws-lc

2025-01-08 Thread Andrii Ustymenko

Hello William,

Thanks for the prompt reply.

So, as 3.1 is not an LTS version, that means we will need to wait for the
release of 3.2, which is hopefully soon.


Thanks again!

On 08/01/2025 16:31, William Lallemand wrote:

Hello Andrii,

On Wed, Jan 08, 2025 at 04:23:56PM +0100, Andrii Ustymenko wrote:

Dear list,

As of now haproxy supports hosting different types of certificates on the
same ip with certificates bundling:
https://docs.haproxy.org/3.0/configuration.html#ssl-load-extra-files

That works fine with Openssl library, but doesn't seem to work with aws-lc
ssl library.

When haproxy is built with aws-lc ssl haproxy is able to use only one
certificate per endpoint.

I have tried the following configurations with aws-lc ssl:

1) Multiple crt and ciphers in bind:

/bind 0.0.0.0:443 ssl crt example-rsa.pem crt example-esdsa.pem/

In this case the first declared certificate is used. Depending on the order
it can be ecc or rsa

2) Bundling as described in
https://docs.haproxy.org/3.0/configuration.html#ssl-load-extra-files:

/bind 0.0.0.0:443 ssl crt example.pem/

And two files with certificate extensions:

/example.pem.ecdsa
example.pem.rsa/

In this case always ecc (ecdsa) certificate is being used.

Both examples above work fine with openssl

Are there any other options to try?

Thanks!

We are still working on improving the AWS-LC support in HAProxy, and some of
the features require an up-to-date version.
We try to detail our progress on this page:
https://github.com/haproxy/wiki/wiki/SSL-Libraries-Support-Status

The ECDSA+RSA selection requires HAProxy 3.1 and an up-to-date AWS-LC version,
so you won't be able to make it work with haproxy 3.0.

Regards,



--

Best regards,

Andrii Ustymenko





Re: RSA & ECC certificates bundling on the same ip with aws-lc

2025-01-08 Thread William Lallemand
Hello Andrii,

On Wed, Jan 08, 2025 at 04:23:56PM +0100, Andrii Ustymenko wrote:
> Dear list,
> 
> As of now haproxy supports hosting different types of certificates on the
> same ip with certificates bundling:
> https://docs.haproxy.org/3.0/configuration.html#ssl-load-extra-files
> 
> That works fine with Openssl library, but doesn't seem to work with aws-lc
> ssl library.
> 
> When haproxy is built with aws-lc ssl haproxy is able to use only one
> certificate per endpoint.
> 
> I have tried the following configurations with aws-lc ssl:
> 
> 1) Multiple crt and ciphers in bind:
> 
> /bind 0.0.0.0:443 ssl crt example-rsa.pem crt example-esdsa.pem/
> 
> In this case the first declared certificate is used. Depending on the order
> it can be ecc or rsa
> 
> 2) Bundling as described in
> https://docs.haproxy.org/3.0/configuration.html#ssl-load-extra-files:
> 
> /bind 0.0.0.0:443 ssl crt example.pem/
> 
> And two files with certificate extensions:
> 
> /example.pem.ecdsa
> example.pem.rsa/
> 
> In this case always ecc (ecdsa) certificate is being used.
> 
> Both examples above work fine with openssl
> 
> Are there any other options to try?
> 
> Thanks!

We are still working on improving the AWS-LC support in HAProxy, and some of
the features require an up-to-date version.
We try to detail our progress on this page:
https://github.com/haproxy/wiki/wiki/SSL-Libraries-Support-Status

The ECDSA+RSA selection requires HAProxy 3.1 and an up-to-date AWS-LC version,
so you won't be able to make it work with haproxy 3.0.

Regards,

-- 
William Lallemand




Re: HAProxy 3.1 with OpenSSL 3.0 vs AWS-LC v1.42

2025-01-08 Thread Илья Шипицин
Please note that the PPA is built using USE_QUIC_OPENSSL_COMPAT=1, which is
not full QUIC but a simulated QUIC on top of OpenSSL; it misses QUIC features
like 0-RTT (see the "SSL Libraries Support Status" page on the haproxy wiki).

OpenSSL 3.0 is known for degradation on highly concurrent systems, i.e. with
many simultaneous threads running, especially when establishing new
connections; maybe you won't observe it in long-running keepalive mode.

On Wed, 8 Jan 2025 at 12:09, Lucas Rolff  wrote:

> Hello,
>
> Sorry for my lengthy post, but I wanted to give as much info upfront as
> possible, since it takes a bunch of guesswork out of it!
>
> I've recently started testing a combo of HAProxy 3.1 and Varnish 7.6 for
> some content delivery / offloading, and I'm a bit curious if people have
> any data/suggestions/optimizations that can be done to push things further
> in terms of performance.
>
> I tried to use the HAProxy PPA for Ubuntu 24.04 (noble) (
> https://launchpad.net/~vbernat/+archive/ubuntu/haproxy-3.1 ), thanks
> Vincent for providing these!
> The build from the PPA uses the OS distributed OpenSSL, which is 3.0.13 on
> Ubuntu 24.04.
> I also have a custom build where I compiled in AWS-LC version 1.42.
>
> PPA distribution:
> Build options :
>   TARGET  = linux-glibc
>   CC  = x86_64-linux-gnu-gcc
>   CFLAGS  = -O2 -g -fwrapv -g -O2 -fno-omit-frame-pointer
> -mno-omit-leaf-frame-pointer -flto=auto -ffat-lto-objects
> -fstack-protector-strong -fstack-clash-protection -Wformat
> -Werror=format-security -fcf-protection
> -fdebug-prefix-map=/build/haproxy-ScKxv0/haproxy-3.1.1=/usr/src/haproxy-3.1.1-1ppa1~noble
> -Wdate-time -D_FORTIFY_SOURCE=3
>   OPTIONS = USE_OPENSSL=1 USE_LUA=1 USE_SLZ=1 USE_OT=1 USE_QUIC=1
> USE_PROMEX=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_QUIC_OPENSSL_COMPAT=1
>   DEBUG   =
>
> Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY
> +CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE
> -LIBATOMIC +LIBCRYPT +LINUX_CAP +LINUX_SPLICE +LINUX_TPROXY +LUA +MATH
> -MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL -OPENSSL_AWSLC
> -OPENSSL_WOLFSSL +OT -PCRE +PCRE2 +PCRE2_JIT -PCRE_JIT +POLL +PRCTL
> -PROCCTL +PROMEX -PTHREAD_EMULATION +QUIC +QUIC_OPENSSL_COMPAT +RT
> +SHM_OPEN +SLZ +SSL -STATIC_PCRE -STATIC_PCRE2 +TFO +THREAD +THREAD_DUMP
> +TPROXY -WURFL -ZLIB
>
> My own build:
> Build options :
>   TARGET  = linux-glibc
>   CC  = cc
>   CFLAGS  = -O2 -g -fwrapv
>   OPTIONS = USE_OPENSSL_AWSLC=1 USE_SLZ=1 USE_QUIC=1 USE_PROMEX=1
> USE_PCRE2=1 USE_PCRE2_JIT=1
>   DEBUG   =
>
> Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY
> +CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE
> -LIBATOMIC +LIBCRYPT +LINUX_CAP +LINUX_SPLICE +LINUX_TPROXY -LUA -MATH
> -MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL +OPENSSL_AWSLC
> -OPENSSL_WOLFSSL -OT -PCRE +PCRE2 +PCRE2_JIT -PCRE_JIT +POLL +PRCTL
> -PROCCTL +PROMEX -PTHREAD_EMULATION +QUIC -QUIC_OPENSSL_COMPAT +RT
> +SHM_OPEN +SLZ +SSL -STATIC_PCRE -STATIC_PCRE2 +TFO +THREAD +THREAD_DUMP
> +TPROXY -WURFL -ZLIB
>
> I know there's quite a few differences in the CFlags, but I went with it
> anyway!
>
> Test System:
> - E5-2698v4 (20 cores, 40 threads)
> - 128GB of 2133MHz DDR4 RAM (16GB DIMMs, in the correct banks)
> - Ubuntu 24.04 with generic kernel
>
> Haproxy config:
> global
> log /dev/log local0
> log /dev/log local1 notice
> chroot /var/lib/haproxy
> stats socket /run/haproxy/admin.sock mode 660 level admin
> stats timeout 30s
> user haproxy
> group haproxy
> maxconn 5
> daemon
>
> # Default SSL material locations
> ca-base /etc/ssl/certs
> crt-base /etc/ssl/private
>
> # See:
> https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
> ssl-default-bind-ciphers
> ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
> ssl-default-bind-ciphersuites
> TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
> ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
>
> defaults
> log global
> mode http
> option httplog
> option dontlognull
> timeout connect 5000
> timeout client  5
> timeout server  5
> errorfile 400 /etc/haproxy/errors/400.http
> errorfile 403 /etc/haproxy/errors/403.http
> errorfile 408 /etc/haproxy/errors/408.http
> errorfile 500 /etc/haproxy/errors/500.http
> errorfile 502 /etc/haproxy/errors/502.http
> errorfile 503 /etc/haproxy/errors/503.http
> errorfile 504 /etc/haproxy/errors/504.http
>
> frontend ft_web
> bind *:80,:::80 v6only
> bind *:443,:::443 v6only ssl crt /etc/haproxy/ssl/
> bind quic4@:443 ssl crt /etc/haproxy/ssl/ alpn h3
> bind quic6@:443 ssl crt /etc/haproxy/ssl/ alpn h3
>
>  

Re: 3.1.x /dev/shm files?

2025-01-07 Thread William Lallemand
On Tue, Jan 07, 2025 at 03:57:15PM +0100, Christian Ruppert wrote:
> It was restarted, not reloaded:
> 
> zsh 10053 # lsof | grep haproxy | grep DEL
> haproxy   10493  root DEL   REG
> 0,21319 /dev/shm/haproxy_startup_logs_10491
> haproxy   10494   haproxy DEL   REG
> 0,1  66990 /dev/zero
> haproxy   10494 10495 haproxy haproxy DEL   REG
> 0,1  66990 /dev/zero
> haproxy   10494 10496 haproxy haproxy DEL   REG
> 0,1  66990 /dev/zero
> haproxy   10494 10497 haproxy haproxy DEL   REG
> 0,1  66990 /dev/zero
> haproxy   10494 10498 haproxy haproxy DEL   REG
> 0,1  66990 /dev/zero
> haproxy   10494 10499 haproxy haproxy DEL   REG
> 0,1  66990 /dev/zero
> haproxy   10494 10500 haproxy haproxy DEL   REG
> 0,1  66990 /dev/zero
> haproxy   10494 10501 haproxy haproxy DEL   REG
> 0,1  66990 /dev/zero
> 
> zsh 10054 # ps aux|grep hapro
> root 10493  0.0  0.0  92416  8384 ?Ss   12:22   0:00
> /usr/sbin/haproxy -D -W -p /run/haproxy.pid -f /etc/haproxy/haproxy.cfg -S
> /run/haproxy-master.sock
> haproxy  10494  0.0  0.0 547340 28852 ?Sl   12:22   0:08
> /usr/sbin/haproxy -D -W -p /run/haproxy.pid -f /etc/haproxy/haproxy.cfg -S
> /run/haproxy-master.sock
> root 15127  0.0  0.0   6600  2176 pts/6S+   15:55   0:00 grep
> --color=auto hapro
> 
> 3.1.1 stable. I'm not sure if 3.1.0 was affected as well but before 3.1.x I
> haven't seen that for sure.
> 

In this case it's your version of lsof that's not behaving like mine; just
check /proc/10493/fd, you shouldn't see any FD attached to the shm. It's just
the shm mapping which is not munmap()'d.

My version of lsof is showing a "mem" type with a description, not the original 
file:

haproxy   222416  root  mem   REG   
0,2949606 [anon_shmem:errors:startup_logs] (stat: No such file 
or directory)

The fd is really closed just after opening the shm:
https://git.haproxy.org/?p=haproxy-3.1.git;a=blob;f=src/errors.c;h=8c508e7af2c5b2cf48f5c7e87529bce8afcfcfde;hb=HEAD#l122
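The open-then-delete pattern described above is the classic anonymous-shared-memory idiom; a minimal stand-alone sketch of it (in Python, not HAProxy's actual C code) shows why lsof reports the file as deleted while the memory stays usable:

```python
import mmap
import os
import tempfile

# HAProxy's startup-logs shm uses the "open, map, delete" idiom:
# the filesystem name disappears, but the mapping stays usable,
# which is why lsof shows the file as DEL/deleted.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)
mem = mmap.mmap(fd, 4096)

os.unlink(path)   # the name is gone from the filesystem...
os.close(fd)      # ...and the fd is closed right after mapping

mem[:5] = b"hello"           # ...yet the shared memory is still writable
print(mem[:5].decode())      # prints: hello
print(os.path.exists(path))  # prints: False
```

The same life cycle applies whether the region comes from shm_open() or a regular file: the kernel keeps the pages alive as long as a mapping or an fd references them, and everything is reclaimed once the process exits.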

-- 
William Lallemand




Re: 3.1.x /dev/shm files?

2025-01-07 Thread Christian Ruppert

On 2025-01-07 15:51, William Lallemand wrote:

On Tue, Jan 07, 2025 at 02:59:44PM +0100, William Lallemand wrote:

Subject: Re: 3.1.x /dev/shm files?
On Tue, Jan 07, 2025 at 02:47:48PM +0100, Christian Ruppert wrote:
> Subject: Re: 3.1.x /dev/shm files?
> On 2025-01-07 14:39, William Lallemand wrote:
> > On Tue, Jan 07, 2025 at 12:35:43PM +0100, Christian Ruppert wrote:
> > > Subject: 3.1.x /dev/shm files?
> > > Hey,
> > >
> > > nothing major but something I noticed:
> > > It looks like in 3.1.x, at least 3.1.1, HAProxy opens and keeps open a
> > > /dev/shm file, like here:
> > > `haproxy   10493  root DEL   REG
> > > 0,21319 /dev/shm/haproxy_startup_logs_10491`
> > > This behavior seems new. Is it intended that way or a bug?
> > > The FD should closed and/or that file should exist and not being
> > > deleted
> > > IMO.
> > >
> > > 1. Start HAProxy
> > > 2. lsof | grep haproxy | grep DEL
> > >
> >
> > Hello Christian,
> >
> > That's the normal behavior, it was introduced with USE_SHM_OPEN in 2.7,
> > and is activated by default on linux-glibc,
> > linux-musl and freebsd.
> >
> > It's a mecanism that opens a shm to store the startup-logs between the
> > old processes and the new one during reload.
> > Allowing to show logs on the "reload" command from the master CLI for
> > example. The shm file is opened and then deleted
> > so we don't pollute the filesystem, the FD is then kept so we can still
> > access to the shared memory.
> >
> > The mecanism changed a little bit in 3.1, in previous versions the FD
> > was closed after the reload and then reopened for
> > a new reload, but now we keep the same SHM between reloads. But once
> > haproxy is stopped it won't leak anywhere.
> >
> > Regards,
>
> Hi William,
>
> alright. Thanks! :)
>

Note that with a recent kernel and lsof you have additionnal details:

% sudo lsof | grep shm | grep haproxy
haproxy   201470  root  mem   REG  
 0,2945682 [anon_shmem:errors:startup_logs] 
(stat: No such file or directory)


We set an ID on the shared memory zones so we are able to identify 
them.


I checked again because I had some doubt, but my explanation was not 
exact. In 3.1 we only keep the shmem between the
worker and the master but we close the FD and the shmem is not kept 
during reload.


So if you still have an open FD it could result from an old bug, when 
upgrading from 3.0 to 3.1 for example. You could

check if the HAPROXY_STARTUPLOGS_FD still exist in the master.


It was restarted, not reloaded:

zsh 10053 # lsof | grep haproxy | grep DEL
haproxy   10493  root DEL   REG  
 0,21319 /dev/shm/haproxy_startup_logs_10491
haproxy   10494   haproxy DEL   REG  
  0,1  66990 /dev/zero
haproxy   10494 10495 haproxy haproxy DEL   REG  
  0,1  66990 /dev/zero
haproxy   10494 10496 haproxy haproxy DEL   REG  
  0,1  66990 /dev/zero
haproxy   10494 10497 haproxy haproxy DEL   REG  
  0,1  66990 /dev/zero
haproxy   10494 10498 haproxy haproxy DEL   REG  
  0,1  66990 /dev/zero
haproxy   10494 10499 haproxy haproxy DEL   REG  
  0,1  66990 /dev/zero
haproxy   10494 10500 haproxy haproxy DEL   REG  
  0,1  66990 /dev/zero
haproxy   10494 10501 haproxy haproxy DEL   REG  
  0,1  66990 /dev/zero


zsh 10054 # ps aux|grep hapro
root 10493  0.0  0.0  92416  8384 ?Ss   12:22   0:00 
/usr/sbin/haproxy -D -W -p /run/haproxy.pid -f /etc/haproxy/haproxy.cfg 
-S /run/haproxy-master.sock
haproxy  10494  0.0  0.0 547340 28852 ?Sl   12:22   0:08 
/usr/sbin/haproxy -D -W -p /run/haproxy.pid -f /etc/haproxy/haproxy.cfg 
-S /run/haproxy-master.sock
root 15127  0.0  0.0   6600  2176 pts/6S+   15:55   0:00 grep 
--color=auto hapro


3.1.1 stable. I'm not sure whether 3.1.0 was affected as well, but before 
3.1.x I definitely hadn't seen that.


--
Regards,
Christian Ruppert




Re: 3.1.x /dev/shm files?

2025-01-07 Thread William Lallemand
On Tue, Jan 07, 2025 at 02:59:44PM +0100, William Lallemand wrote:
> Subject: Re: 3.1.x /dev/shm files?
> On Tue, Jan 07, 2025 at 02:47:48PM +0100, Christian Ruppert wrote:
> > Subject: Re: 3.1.x /dev/shm files?
> > On 2025-01-07 14:39, William Lallemand wrote:
> > > On Tue, Jan 07, 2025 at 12:35:43PM +0100, Christian Ruppert wrote:
> > > > Subject: 3.1.x /dev/shm files?
> > > > Hey,
> > > > 
> > > > nothing major but something I noticed:
> > > > It looks like in 3.1.x, at least 3.1.1, HAProxy opens and keeps open a
> > > > /dev/shm file, like here:
> > > > `haproxy   10493  root DEL   REG
> > > > 0,21319 /dev/shm/haproxy_startup_logs_10491`
> > > > This behavior seems new. Is it intended that way or a bug?
> > > > The FD should closed and/or that file should exist and not being
> > > > deleted
> > > > IMO.
> > > > 
> > > > 1. Start HAProxy
> > > > 2. lsof | grep haproxy | grep DEL
> > > > 
> > > 
> > > Hello Christian,
> > > 
> > > That's the normal behavior, it was introduced with USE_SHM_OPEN in 2.7,
> > > and is activated by default on linux-glibc,
> > > linux-musl and freebsd.
> > > 
> > > It's a mecanism that opens a shm to store the startup-logs between the
> > > old processes and the new one during reload.
> > > Allowing to show logs on the "reload" command from the master CLI for
> > > example. The shm file is opened and then deleted
> > > so we don't pollute the filesystem, the FD is then kept so we can still
> > > access to the shared memory.
> > > 
> > > The mecanism changed a little bit in 3.1, in previous versions the FD
> > > was closed after the reload and then reopened for
> > > a new reload, but now we keep the same SHM between reloads. But once
> > > haproxy is stopped it won't leak anywhere.
> > > 
> > > Regards,
> > 
> > Hi William,
> > 
> > alright. Thanks! :)
> > 
> 
> Note that with a recent kernel and lsof you have additionnal details:
> 
> % sudo lsof | grep shm | grep haproxy
> haproxy   201470  root  mem   REG 
>   0,2945682 [anon_shmem:errors:startup_logs] (stat: No such 
> file or directory)
> 
> We set an ID on the shared memory zones so we are able to identify them.

I checked again because I had some doubts, and my explanation was not exact.
In 3.1 we only keep the shmem between the worker and the master, but we close
the FD and the shmem is not kept during reload.

So if you still have an open FD it could result from an old bug, when
upgrading from 3.0 to 3.1 for example. You could check if the
HAPROXY_STARTUPLOGS_FD still exists in the master.
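A quick way to do that check on a live system (assuming a Linux /proc; 10493 was the master pid in the ps output earlier in the thread, substitute your own):

```shell
# Dump the master process environment and look for the startup-logs fd
pid=10493   # haproxy master pid, taken from "ps aux | grep haproxy"
tr '\0' '\n' < "/proc/$pid/environ" | grep HAPROXY_STARTUPLOGS_FD \
    || echo "HAPROXY_STARTUPLOGS_FD not set"

# If it is set to e.g. 3, check whether that fd is still open:
# ls -l "/proc/$pid/fd/3"
```

Note that /proc/<pid>/environ shows the environment the process was started with, which is exactly what a master inherits across an upgrade-by-reload.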

-- 
William Lallemand




Re: 3.1.x /dev/shm files?

2025-01-07 Thread William Lallemand
On Tue, Jan 07, 2025 at 02:47:48PM +0100, Christian Ruppert wrote:
> Subject: Re: 3.1.x /dev/shm files?
> On 2025-01-07 14:39, William Lallemand wrote:
> > On Tue, Jan 07, 2025 at 12:35:43PM +0100, Christian Ruppert wrote:
> > > Subject: 3.1.x /dev/shm files?
> > > Hey,
> > > 
> > > nothing major but something I noticed:
> > > It looks like in 3.1.x, at least 3.1.1, HAProxy opens and keeps open a
> > > /dev/shm file, like here:
> > > `haproxy   10493  root DEL   REG
> > > 0,21319 /dev/shm/haproxy_startup_logs_10491`
> > > This behavior seems new. Is it intended that way or a bug?
> > > The FD should closed and/or that file should exist and not being
> > > deleted
> > > IMO.
> > > 
> > > 1. Start HAProxy
> > > 2. lsof | grep haproxy | grep DEL
> > > 
> > 
> > Hello Christian,
> > 
> > That's the normal behavior, it was introduced with USE_SHM_OPEN in 2.7,
> > and is activated by default on linux-glibc,
> > linux-musl and freebsd.
> > 
> > It's a mecanism that opens a shm to store the startup-logs between the
> > old processes and the new one during reload.
> > Allowing to show logs on the "reload" command from the master CLI for
> > example. The shm file is opened and then deleted
> > so we don't pollute the filesystem, the FD is then kept so we can still
> > access to the shared memory.
> > 
> > The mecanism changed a little bit in 3.1, in previous versions the FD
> > was closed after the reload and then reopened for
> > a new reload, but now we keep the same SHM between reloads. But once
> > haproxy is stopped it won't leak anywhere.
> > 
> > Regards,
> 
> Hi William,
> 
> alright. Thanks! :)
> 

Note that with a recent kernel and lsof you have additional details:

% sudo lsof | grep shm | grep haproxy
haproxy   201470  root  mem   REG   
0,2945682 [anon_shmem:errors:startup_logs] (stat: No such file 
or directory)

We set an ID on the shared memory zones so we are able to identify them.


-- 
William Lallemand




Re: 3.1.x /dev/shm files?

2025-01-07 Thread Christian Ruppert

On 2025-01-07 14:39, William Lallemand wrote:

On Tue, Jan 07, 2025 at 12:35:43PM +0100, Christian Ruppert wrote:

Subject: 3.1.x /dev/shm files?
Hey,

nothing major but something I noticed:
It looks like in 3.1.x, at least 3.1.1, HAProxy opens and keeps open a
/dev/shm file, like here:
`haproxy   10493  root DEL   REG
0,21319 /dev/shm/haproxy_startup_logs_10491`
This behavior seems new. Is it intended that way or a bug?
The FD should closed and/or that file should exist and not being 
deleted

IMO.

1. Start HAProxy
2. lsof | grep haproxy | grep DEL



Hello Christian,

That's the normal behavior, it was introduced with USE_SHM_OPEN in 2.7, 
and is activated by default on linux-glibc,

linux-musl and freebsd.

It's a mecanism that opens a shm to store the startup-logs between the 
old processes and the new one during reload.
Allowing to show logs on the "reload" command from the master CLI for 
example. The shm file is opened and then deleted
so we don't pollute the filesystem, the FD is then kept so we can still 
access to the shared memory.


The mecanism changed a little bit in 3.1, in previous versions the FD 
was closed after the reload and then reopened for
a new reload, but now we keep the same SHM between reloads. But once 
haproxy is stopped it won't leak anywhere.


Regards,


Hi William,

alright. Thanks! :)

--
Regards,
Christian Ruppert




Re: 3.1.x /dev/shm files?

2025-01-07 Thread William Lallemand
On Tue, Jan 07, 2025 at 12:35:43PM +0100, Christian Ruppert wrote:
> Subject: 3.1.x /dev/shm files?
> Hey,
> 
> nothing major but something I noticed:
> It looks like in 3.1.x, at least 3.1.1, HAProxy opens and keeps open a
> /dev/shm file, like here:
> `haproxy   10493  root DEL   REG
> 0,21319 /dev/shm/haproxy_startup_logs_10491`
> This behavior seems new. Is it intended that way or a bug?
> The FD should closed and/or that file should exist and not being deleted
> IMO.
> 
> 1. Start HAProxy
> 2. lsof | grep haproxy | grep DEL
> 

Hello Christian,

That's the normal behavior, it was introduced with USE_SHM_OPEN in 2.7, and is 
activated by default on linux-glibc,
linux-musl and freebsd.

It's a mechanism that opens a shm to store the startup logs between the old
processes and the new one during reload, allowing, for example, the "reload"
command on the master CLI to show logs. The shm file is opened and then
deleted so we don't pollute the filesystem; the FD is then kept so we can
still access the shared memory.

The mechanism changed a little bit in 3.1: in previous versions the FD was
closed after the reload and then reopened for a new reload, but now we keep
the same SHM between reloads. But once haproxy is stopped it won't leak
anywhere.

Regards,

-- 
William Lallemand




Re: [PATCH 1/1] MINOR: sample: Add sample fetches for enhanced observability for TLS ClientHello

2025-01-06 Thread William Lallemand
Hello Mariam,

On Thu, Jan 02, 2025 at 06:16:28PM -0600, Mariam John wrote:
> Subject: [PATCH 1/1] MINOR: sample: Add sample fetches for enhanced 
> observability for TLS ClientHello
> Add new sample fetches to get the ciphers, supported groups, key shares and 
> signature algorithms
> that the client supports during a TLS handshake as part of the contents of a 
> TLS ClientHello.
> Currently we can get the following contents of the ClientHello message: 
> SNI(req_ssl_sni) and
> TLS protocol version(req_ssl_ver). The following new sample fetches will be 
> added to get the
> following contents of the ClientHello message exchanged during the TLS 
> handshake (supported by
> the client):
> - req.ssl_ciphers: Returns the binary form of the list of symmetric cipher 
> options
> - req.ssl_sigalgs: Returns the binary form of the list of signature algorithms
> - req.ssl_keyshare_groups: Return the binary format of the list of key share 
> groups
> - req.ssl_supported_groups: Returns the binary form of the list of supported 
> groups used in key exchange
>
> This added functionality would allow routing with fine granularity pending 
> the capabilities the client
> indicates in the ClientHello. For example, this would allow the ability to 
> enable TLS passthrough or
> TLS termination based on the supported groups detected in the ClientHello 
> message. Another usage is to
> take client key shares into consideration when deciding which of the client 
> supported groups should be
> used for groups considered to have 'equal security level' as well as enabling 
> fine grain selection of
> certificate types(beyond the RSA vs ECC distinction). All of that is relevant 
> in the context of rapidly
> upcoming PQC operation modes.
> 
> Fixes: #2532

Indeed this is really useful; it totally makes sense from a feature point of
view. I think we can improve your patch a little bit before integration, but
I'd be happy to have these fetches!

We already have some clienthello parsing when called with the SSL binds: some
of the ssl_fc_* fetches use a capture system which stores some extensions in
the struct ssl_capture. This is done in src/ssl_sock.c in the
ssl_sock_parse_clienthello() function. It's done with an SSL context, but
only because the callback system provides it; the clienthello is also parsed
manually in there. In the future we could maybe do the clienthello parsing in
the same function rather than implementing two different methods for TCP
binds and SSL binds, but don't worry about that for now.

What's interesting in there is how we store the data, and we should try to 
provide the same output and stay consistent
with the naming:

- req.ssl_ciphers has its SSL bind equivalent which is ssl_fc_cipherlist_bin;
  maybe we could call the new fetch req.ssl_cipherlist. Both provide the raw
  binary ciphers as stored in the clienthello, and are limited by the
  cipherlist length in the clienthello.

- req.ssl_sigalgs has its ssl_fc_sigalgs_bin equivalent, so we are good with 
the naming, it also uses the sigalgs len
  and everything is raw, should be good!

- req.ssl_keyshare_groups doesn't have any equivalent; the data is processed
  and is not raw, only the group field of each KeyShareEntry. The output
  generated looks like ssl_fc_eclist_bin.

- req.ssl_supported_groups has an SSL bind equivalent, which is
  ssl_fc_eclist_bin; maybe we can call your new fetch req.ssl_fc_eclist?
  Well, that might be confusing, and using the TLS extension name is probably
  better...
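To illustrate the pairing being discussed, here is a hedged configuration sketch; the req.ssl_* names follow the proposal above (not yet merged), and the hex pattern, certificate path and backend names are made-up placeholders:

```
# TCP bind: inspect the raw ClientHello from the request buffer
frontend tls_front
    mode tcp
    bind :443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    # route on a (placeholder) group code point seen in the ClientHello
    use_backend bk_pqc if { req.ssl_supported_groups,be2hex(:,2),lower -m sub 11ec }
    default_backend bk_plain

# SSL bind: the equivalent data is exposed by ssl_fc_* after the handshake
frontend tls_term
    mode http
    bind :8443 ssl crt /etc/haproxy/site.pem
    http-request set-header X-Client-Groups %[ssl_fc_eclist_bin,be2hex(:,2),lower]
```

The point is that each raw req.ssl_* fetch on a plain TCP bind should have a documented ssl_fc_* counterpart on an SSL bind, so users can switch between passthrough and termination without relearning the names.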


I pushed your patch on our CI and there are some failures; you should clone
the HAProxy repository to your own GitHub account in order to check whether
your reg-test works correctly:

https://github.com/haproxy/haproxy/actions/runs/12635483567/job/35205484779


> diff --git a/doc/configuration.txt b/doc/configuration.txt
> index 76b622bce..c1ec84a67 100644
> --- a/doc/configuration.txt
> +++ b/doc/configuration.txt
> [...]
> +req.ssl_ssl_sigalgs binary
 ^ typo there

> +  Returns the binary form of the list of signature algorithms supported by 
> the
> +  client as reported in the TLS ClientHello. This is available as a client 
> hello
> +  extension. Note that this only applies to raw contents found in the request
> +  buffer and not to contents deciphered via an SSL data layer, so this will 
> not
> +  work with "bind" lines having the "ssl" option.


I think you should mention the SSL bind equivalent keyword for each fetch;
that would help users find how to do it.

> diff --git a/include/haproxy/buf-t.h b/include/haproxy/buf-t.h
> index 5c59b0aaf..59472da84 100644
> --- a/include/haproxy/buf-t.h
> +++ b/include/haproxy/buf-t.h
> @@ -31,11 +31,13 @@
>  #include 
>  
>  /* Structure defining a buffer's head */
> +#define buffer_bin_size 16  /* Space to hold up to 8 code points, e.g 
> for key share groups */
>  struct buffer {
>   size_t size;/* buffer size in bytes */
>   char  *area;   

Re: HAproxy load balancing query

2025-01-03 Thread Dev Ops
Hi Willy,

Thank you. If you are able, please pass this query to that team; it will be
really helpful. Further, we are also trying to test with LVS.

Thank you


From: Willy Tarreau 
Sent: Friday, January 3, 2025 11:57:37 PM
To: Shehan Jayawardane 
Cc: Aleksandar Lazic ; Joshua Turnbull 
; haproxy@formilux.org ; Dev Ops 
; Sathiska Udayanga ; Tharka 
Karunanayake 
Subject: Re: HAproxy load balancing query

Hi Shehan,

On Fri, Jan 03, 2025 at 06:05:57PM +, Shehan Jayawardane wrote:
> Hi Aleksandar,
>
> Thanks for the information.
> We have gone through what you have shared. And yes, with HAproxy it is not
> possible. But is it the same for HAproxy enterprise edition? Can't we load
> balance Radius traffic?

Due to the UDP module I suspect it could be possible since Radius is
expected to remain relatively simple, though I can't promise anything.
You may want to contact someone there to know more about it (or if you
want I can pass your e-mail along so that someone recontacts you).

Have you considered LVS as a first approach though? I understand that
there are situations where it may not be easy to use, but generally it's
quite straightforward and less intrusive than a proxy for UDP services
since it works at the packet or connection level. It only needs to be
placed on a machine serving as the default gateway, generally in
combination with keepalived for health checks (and optionally VRRP).

Cheers,
Willy
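For reference, the LVS/keepalived approach Willy describes typically boils down to a virtual_server block like the following. This is a hedged sketch: addresses, weights and the health-check script are placeholders, and since keepalived ships no RADIUS-specific checker, MISC_CHECK with your own probe script is the usual route:

```
# /etc/keepalived/keepalived.conf (fragment)
virtual_server 192.0.2.10 1812 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT          # or DR when real servers carry the VIP on a loopback
    protocol UDP

    real_server 10.0.0.11 1812 {
        weight 1
        MISC_CHECK {
            # placeholder probe, e.g. a radclient-based Status-Server check
            misc_path "/usr/local/bin/check_radius.sh 10.0.0.11"
            misc_timeout 5
        }
    }
    real_server 10.0.0.12 1812 {
        weight 1
        MISC_CHECK {
            misc_path "/usr/local/bin/check_radius.sh 10.0.0.12"
            misc_timeout 5
        }
    }
}
```

A second, near-identical virtual_server block is usually added for the accounting port 1813.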


Re: HAproxy load balancing query

2025-01-03 Thread Илья Шипицин
I would suggest using RADIUS as a dedicated proxy

config/Proxy <https://wiki.freeradius.org/config/Proxy>

as for HAProxy Enterprise edition, here's open source mailing list.
probably, you should reach HAProxy Enterprise sales and talk to them.

пт, 3 янв. 2025 г. в 19:24, Shehan Jayawardane :

> Hi Aleksandar,
>
> Thanks for the information.
> We have gone through what you have shared. And yes, with HAproxy it is not
> possible. But is it the same for HAproxy enterprise edition? Can't we load
> balance Radius traffic?
>
>
> Best Regards,
> Shehan Jayawardane
> *Head of Engineering*
> *sheh...@nvision.lk *
> www.thryvz.com
>
>
> --
> *From:* Aleksandar Lazic 
> *Sent:* 02 January 2025 18:05
> *To:* Shehan Jayawardane 
> *Cc:* Willy Tarreau ; Joshua Turnbull ;
> haproxy@formilux.org ; Dev Ops ;
> Sathiska Udayanga ; Tharka Karunanayake <
> thar...@nvision.lk>
> *Subject:* Re: HAproxy load balancing query
>
> Hi Shehan.
>
> On 2025-01-02 (Do.) 13:13, Shehan Jayawardane wrote:
> > Hi Willy,
> >
> > Thanks for the information.
> > Actually, our UDP traffic are radius requests to be forwarded to 1812,
> and 1813
> > UDP ports. we need to load balance these requests. Can we load balance
> these UDP
> > requests?
>
> Looks like you have not read the GH thread which Willy suggested.
>
> https://github.com/haproxy/haproxy/issues/62
>
> Quoting the last comment:
>  > ... For radius, since we're not using it, it's unlikely to appear any
> time
> soon. ...
>
> Please be so kind as to read the answers and follow the links suggested
> in the replies.
>
> In short: no, HAProxy can't be used to load-balance RADIUS!
>
> Regards
> Alex
>
> > Best Regards,
> > Shehan Jayawardane
> > *Head of Engineering*
> > _shehanj@nvision.lk_
> > www.thryvz.com <http://www.thryvz.com/>
> >
> >
> >
> 
> > *From:* Willy Tarreau 
> > *Sent:* 02 January 2025 17:27
> > *To:* Shehan Jayawardane 
> > *Cc:* Joshua Turnbull ; haproxy@formilux.org
> > ; Dev Ops ; Sathiska Udayanga
> > 
> > *Subject:* Re: HAproxy load balancing query
> > Hi Shehan,
> >
> > On Thu, Jan 02, 2025 at 11:25:25AM +, Shehan Jayawardane wrote:
> >> Hi Willy,
> >>
> >> Good day to you.
> >> After long time I'm requesting this query as well.
> >> with HA proxy are we able to load balance udp traffic?
> >
> > There isn't such a thing as "udp traffic" but there are udp-based
> services
> > and that makes a huge difference. Haproxy supports load-balancing syslog
> > messages that it can receive over udp/tcp and send to udp/tcp, possibly
> > even completing them (e.g. prepend the source address).
> >
> > For the rest, you'll almost always need service-specific handling since
> > some services are message-based, others connection-based, uni- or bi-
> > directional. Most services depend on the source address being joinable
> > by the server, or may alter the behavior based on the source IP address
> > (e.g. syslog, dns if using zones/geoip). There are already a number of
> > dedicated proxies for various UDP-based services, which will always do
> > the job better than a generic proxy. For example, for syslog, a tool
> > such as rsyslog will be able to use the RELP protocol for reliable
> > delivery and even to queue messages to avoid losses. For DNS there's
> > dnsdist etc.
> >
> > For completeness, there's a module dealing with all these particularities
> > in as much a generic way as possible in the haproxy enterprise packages,
> > and I'm personally glad I don't have to maintain that because it's very
> > far away from haproxy's scope.
> >
> > For more details and background on all of this, and some of the technical
> > challenges, I invite you to read the discussion in this issue:
> >
> > https://github.com/haproxy/haproxy/issues/62 <
> https://github.com/haproxy/
> > haproxy/issues/62>
> >
> > Hoping this helps,
> > Willy
>
>


Re: HAproxy load balancing query

2025-01-03 Thread Willy Tarreau
Hi Shehan,

On Fri, Jan 03, 2025 at 06:05:57PM +, Shehan Jayawardane wrote:
> Hi Aleksandar,
> 
> Thanks for the information.
> We have gone through what you have shared. And yes, with HAproxy it is not
> possible. But is it the same for HAproxy enterprise edition? Can't we load
> balance Radius traffic?

Due to the UDP module I suspect it could be possible since Radius is
expected to remain relatively simple, though I can't promise anything.
You may want to contact someone there to know more about it (or if you
want I can pass your e-mail along so that someone recontacts you).

Have you considered LVS as a first approach though ? I understand that
there are situations where it may not be easy to use but generally it's
quite straightforward and less intrusive than a proxy for UDP services
since it works at the packet or connection level. It only needs to be
placed on a machine serving as the default gateway, generally in
combination with keepalived for health checks (and optionally VRRP).

Cheers,
Willy




Re: HAproxy load balancing query

2025-01-03 Thread Shehan Jayawardane
Hi Aleksandar,

Thanks for the information.
We have gone through what you have shared. And yes, with HAproxy it is not 
possible. But is it the same for HAproxy enterprise edition? Can't we load 
balance Radius traffic?


Best Regards,
Shehan Jayawardane
Head of Engineering
sheh...@nvision.lk
www.thryvz.com<http://www.thryvz.com/>



From: Aleksandar Lazic 
Sent: 02 January 2025 18:05
To: Shehan Jayawardane 
Cc: Willy Tarreau ; Joshua Turnbull ; 
haproxy@formilux.org ; Dev Ops ; 
Sathiska Udayanga ; Tharka Karunanayake 

Subject: Re: HAproxy load balancing query

Hi Shehan.

On 2025-01-02 (Do.) 13:13, Shehan Jayawardane wrote:
> Hi Willy,
>
> Thanks for the information.
> Actually, our UDP traffic are radius requests to be forwarded to 1812, and 
> 1813
> UDP ports. we need to load balance these requests. Can we load balance these 
> UDP
> requests?

Looks like you have not read the GH thread which Willy suggested.

https://github.com/haproxy/haproxy/issues/62

Quoting the last comment:
 > ... For radius, since we're not using it, it's unlikely to appear any time
soon. ...

Please be so kind as to read the answers and follow the links suggested
in the replies.

In short: no, HAProxy can't be used to load-balance RADIUS!

Regards
Alex

> Best Regards,
> Shehan Jayawardane
> *Head of Engineering*
> _shehanj@nvision.lk_
> www.thryvz.com<http://www.thryvz.com> <http://www.thryvz.com/>
>
>
> 
> *From:* Willy Tarreau 
> *Sent:* 02 January 2025 17:27
> *To:* Shehan Jayawardane 
> *Cc:* Joshua Turnbull ; haproxy@formilux.org
> ; Dev Ops ; Sathiska Udayanga
> 
> *Subject:* Re: HAproxy load balancing query
> Hi Shehan,
>
> On Thu, Jan 02, 2025 at 11:25:25AM +, Shehan Jayawardane wrote:
>> Hi Willy,
>>
>> Good day to you.
>> After long time I'm requesting this query as well.
>> with HA proxy are we able to load balance udp traffic?
>
> There isn't such a thing as "udp traffic" but there are udp-based services
> and that makes a huge difference. Haproxy supports load-balancing syslog
> messages that it can receive over udp/tcp and send to udp/tcp, possibly
> even completing them (e.g. prepend the source address).
>
> For the rest, you'll almost always need service-specific handling since
> some services are message-based, others connection-based, uni- or bi-
> directional. Most services depend on the source address being joinable
> by the server, or may alter the behavior based on the source IP address
> (e.g. syslog, dns if using zones/geoip). There are already a number of
> dedicated proxies for various UDP-based services, which will always do
> the job better than a generic proxy. For example, for syslog, a tool
> such as rsyslog will be able to use the RELP protocol for reliable
> delivery and even to queue messages to avoid losses. For DNS there's
> dnsdist etc.
>
> For completeness, there's a module dealing with all these particularities
> in as much a generic way as possible in the haproxy enterprise packages,
> and I'm personally glad I don't have to maintain that because it's very
> far away from haproxy's scope.
>
> For more details and background on all of this, and some of the technical
> challenges, I invite you to read the discussion in this issue:
>
> https://github.com/haproxy/haproxy/issues/62 <https://github.com/haproxy/
> haproxy/issues/62>
>
> Hoping this helps,
> Willy



Re: [PATCH] BUG/MINOR: 51 degree: handle a possible strdup() failure

2025-01-02 Thread Willy Tarreau
Hi Ilya,

On Thu, Jan 02, 2025 at 10:02:01PM +0100, Илья Шипицин wrote:
> Thu, 2 Jan 2025 at 21:46, Miroslav Zagorac wrote:
> 
> > On 02. 01. 2025. 21:40,  ??? wrote:
> > > Honestly, I think those elements must be deallocated on program exit,
> > > not only if something failed during allocation.
> > >
> > > but I did not check that
> > >
> >
> > That is correct.  However, the calloc() result is not checked before
> > strdup()
> > either, so the patch is not good.
> >
> 
> I did not intend to add a "calloc" check in this patch.
> We have a dedicated dev/coccinelle/unchecked-calloc.cocci script which
> lets us detect unchecked calloc() calls; no worries, it won't be forgotten.

In general, when reworking some functions' memory allocation and checks,
it's better to fix it at once when you find that multiple checks are
missing, rather than attempting incremental fixes that remain partially
incorrect.

One reason is that very often, dealing with allocations unrolling requires
an exit label where allocations are unrolled in reverse order, and doing
them one at a time tends to stay away from that approach. Or sometimes
you'll figure that fixing certain unchecked allocations require to
completely change the approach that was used for previous fixes.

Thus if you think you've figured out how to completely fix that function, do
not hesitate, please just fix it all at once, indicating in the commit
message what you fixed. If you think you can fix it incrementally without
having to change your fix later, then it's fine to do it that way as well
of course.

> > >>> + if (name->name == NULL) {
> > >>> + memprintf(err,"Out of memory.");
  
BTW, beware of the missing space here.

> > >>> + goto fail_free_name;
> > >>> + }

Cheers,
Willy




Re: [PATCH] BUG/MINOR: 51 degree: handle a possible strdup() failure

2025-01-02 Thread Miroslav Zagorac
On 02. 01. 2025. 22:02, Илья Шипицин wrote:
> I did not intend to add a "calloc" check in this patch.
> We have a dedicated dev/coccinelle/unchecked-calloc.cocci script which
> lets us detect unchecked calloc() calls; no worries, it won't be forgotten.
> 

OK then;  thank you for your help.

-- 
Miroslav Zagorac

What can change the nature of a man?




Re: [PATCH] BUG/MINOR: 51 degree: handle a possible strdup() failure

2025-01-02 Thread Илья Шипицин
Thu, 2 Jan 2025 at 21:46, Miroslav Zagorac wrote:

> On 02. 01. 2025. 21:40, Илья Шипицин wrote:
> > Honestly, I think those elements must be deallocated on program exit,
> > not only if something failed during allocation.
> >
> > but I did not check that
> >
>
> That is correct.  However, the calloc() result is not checked before
> strdup()
> either, so the patch is not good.
>

I did not intend to add a "calloc" check in this patch.
We have a dedicated dev/coccinelle/unchecked-calloc.cocci script which
lets us detect unchecked calloc() calls; no worries, it won't be forgotten.



>
> >>>   while (*(args[cur_arg])) {
> >>>   name = calloc(1, sizeof(*name));
> >>>   name->name = strdup(args[cur_arg]);
> >>> + if (name->name == NULL) {
> >>> + memprintf(err,"Out of memory.");
> >>> + goto fail_free_name;
> >>> + }
> >>>   LIST_APPEND(&global_51degrees.property_names,
> &name->list);
> >>>   ++cur_arg;
> >>>   }
> >>>
> >>>   return 0;
> >>> +
> >>> +fail_free_name:
> >>> + free(name);
> >>> +fail:
> >>> + return -1;
>
> --
> Miroslav Zagorac
>
> What can change the nature of a man?
>


Re: [PATCH] BUG/MINOR: 51 degree: handle a possible strdup() failure

2025-01-02 Thread Miroslav Zagorac
On 02. 01. 2025. 21:40, Илья Шипицин wrote:
> Honestly, I think those elements must be deallocated on program exit,
> not only if something failed during allocation.
> 
> but I did not check that
> 

That is correct.  However, the calloc() result is not checked before strdup()
either, so the patch is not good.

>>>   while (*(args[cur_arg])) {
>>>   name = calloc(1, sizeof(*name));
>>>   name->name = strdup(args[cur_arg]);
>>> + if (name->name == NULL) {
>>> + memprintf(err,"Out of memory.");
>>> + goto fail_free_name;
>>> + }
>>>   LIST_APPEND(&global_51degrees.property_names, &name->list);
>>>   ++cur_arg;
>>>   }
>>>
>>>   return 0;
>>> +
>>> +fail_free_name:
>>> + free(name);
>>> +fail:
>>> + return -1;

-- 
Miroslav Zagorac

What can change the nature of a man?




Re: [PATCH] BUG/MINOR: 51 degree: handle a possible strdup() failure

2025-01-02 Thread Илья Шипицин
Honestly, I think those elements must be deallocated on program exit,
not only if something failed during allocation.

but I did not check that

Thu, 2 Jan 2025 at 20:13, Miroslav Zagorac wrote:

> On 02. 01. 2025. 15:18, Ilia Shipitsin wrote:
> > This defect was found by the coccinelle script "unchecked-strdup.cocci".
> > It can be backported to all supported branches.
>
> Hello Ilia,
>
> Since memory is allocated for each list element, when an allocation fails,
> the previously added list elements should be removed and the memory they
> occupy should be freed.
>
> Because of that I think the second part of the patch is not good enough and
> I'm not sure that it contributes to the quality of the code.
>
> Of course, the code checks if the memory is allocated, so it might prevent
> a
> segmentation fault at some point, but it won't prevent a memory leak.
>
> >   while (*(args[cur_arg])) {
> >   name = calloc(1, sizeof(*name));
> >   name->name = strdup(args[cur_arg]);
> > + if (name->name == NULL) {
> > + memprintf(err,"Out of memory.");
> > + goto fail_free_name;
> > + }
> >   LIST_APPEND(&global_51degrees.property_names, &name->list);
> >   ++cur_arg;
> >   }
> >
> >   return 0;
> > +
> > +fail_free_name:
> > + free(name);
> > +fail:
> > + return -1;
>
>
> Best regards,
>
> --
> Miroslav Zagorac
>
> What can change the nature of a man?
>

