Re: haproxy.org bug pages broken (missing html headers and footer?)

2023-09-27 Thread Artur

Hello,

And https://www.haproxy.org/bugs/index.html is an empty document.
There is a link to it on the haproxy.org home page (as "Known bugs").

Le 27/09/2023 à 23:49, Lukas Tribus a écrit :

Hello,

looks like the bug pages are broken; they contain the table of bugs
but no formatting is applied, and it appears the entire HTML header
and footer are missing:

Example:
http://www.haproxy.org/bugs/bugs-2.4.html
http://www.haproxy.org/bugs/bugs-2.6.2.html


BR,

Lukas


--
Best regards,
Artur




Re: WebTransport support/roadmap

2023-08-17 Thread Artur

Le 17/08/2023 à 11:46, Aleksandar Lazic a écrit :

On 2023-08-17 (Do.) 10:14, Artur wrote:
Feature request submitted: 
https://github.com/haproxy/haproxy/issues/2256


Thank you. I have added a simple picture based on your e-mails; I hope I
have understood your request properly.


Sorry, I was not accurate enough. The primary idea was to get UDP-based
WebTransport working end-to-end, as it provides some specific capabilities
not available with TCP WebSocket transport.


--
Best regards,
Artur




Re: WebTransport support/roadmap

2023-08-17 Thread Artur

Feature request submitted: https://github.com/haproxy/haproxy/issues/2256

--
Best regards,
Artur




Re: WebTransport support/roadmap

2023-08-17 Thread Artur

Hello,

Thank you for your answers.

Le 16/08/2023 à 20:01, Aleksandar Lazic a écrit :
Please can you open a Feature request on 
https://github.com/haproxy/haproxy/issues so that anybody, maybe you 
:-), can pick it and implement it.


I'll do it. Unfortunately, my dev skills are limited, so I'm not ready to
work on such a complex project. But I'll help where I can.


--
Best regards,
Artur




WebTransport support/roadmap

2023-08-16 Thread Artur

Hello !

I wonder if there is a roadmap to support WebTransport protocol in haproxy.

There are some explanations/references (if needed) from the socket.io dev
team, which has started to support it:


https://socket.io/get-started/webtransport

--
Best regards,
Artur


Re: [PATCH] DOC: quic: fix misspelled tune.quic.socket-owner

2023-06-07 Thread Artur

Hello Willy,

I understand, thank you for the explanation.

Have a nice holiday! ;)

Le 07/06/2023 à 14:55, Willy Tarreau a écrit :

Hello Artur,

On Tue, Jun 06, 2023 at 03:18:31PM +0200, Artur wrote:

About the backporting instructions I was not sure how far it should be
backported. I preferred to skip it instead of giving an erroneous
instruction.
Maybe someone can explain if this backport instruction is really required
and what to do if one is unsure about how to backport.

You should see them as a time saver for the person doing the backports,
that's why we like patch authors to provide as much useful information
as they can. Sometimes even just adding "this patch probably needs to be
backported" or "the feature was already there in 2.7 and maybe before, so
the patch may need to be backported at least there" will be a hint to the
person that they should really check twice if they don't find it the
first time.

As a rule of thumb, just keep in mind that the commit message part of
a patch is the one where humans talk to humans, and that anything that
crosses your mind and can help decide whether a patch has to be
backported, could be responsible for a regression, or needs to be either
fixed or reverted, etc., is welcome.
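To make this concrete, here is a sketch of how the patch discussed in this thread could phrase such a hint (the wording below is illustrative, not the actual commit):

```
DOC: quic: fix misspelled tune.quic.socket-owner

Commit 511ddd5785266c149dfa593582512239480e1688 ("MINOR: quic: define
config option for socket per conn") introduced tune.quic.socket-owner,
but configuration.txt spells it tune.quic.conn-owner in the 'bind'
section.

This should be backported together with that commit (2.8, and possibly
2.7 if the commit was backported there).
```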

Thanks,
Willy


--
Best regards,
Artur


Re: [PATCH] DOC: quic: fix misspelled tune.quic.socket-owner

2023-06-06 Thread Artur

Le 06/06/2023 à 14:52, Amaury Denoyelle a écrit :

Do not hesitate to give us feedback if you test QUIC support :)


Yes, I will. I deployed haproxy+quic (the recommended setup) on one site
with some traffic, so I need a few days to get visitors' feedback, if any.


At this point I can't see anything wrong in any of the tests I've made.

--
Best regards,
Artur




Re: [PATCH] DOC: quic: fix misspelled tune.quic.socket-owner

2023-06-06 Thread Artur

Hello Tim,

Thank you for your help. I forgot to include the patch description in the
body. My bad. Luckily Amaury was there. :)


About the backporting instructions, I was not sure how far the patch
should be backported, so I preferred to skip them rather than give an
erroneous instruction.
Maybe someone can explain whether these backport instructions are really
required and what to do when one is unsure how far to backport.


Le 06/06/2023 à 14:54, Tim Düsterhus a écrit :

Hi Artur,

On 6/6/23 14:42, Artur wrote:

DOC: quic: fix misspelled tune.quic.socket-owner

Commit 511ddd5 introduced tune.quic.socket-owner parameter
related to QUIC socket behaviour.
However it was misspelled in configuration.txt in 'bind' section as
tune.quic.conn-owner.



I'm not a committer, but a regular contributor. I had a look: The 
patch looks pretty good, but the commit message is lacking a body, 
which is required as per:


https://github.com/haproxy/haproxy/blob/a475448161b406b0b81f5b551336417b05426492/CONTRIBUTING#L562-L567 



As you already found the commit that introduced the issue, it would be 
a good opportunity to add a reference to the message body, something 
like:


The typo was introduced in commit
511ddd5785266c149dfa593582512239480e1688 ("MINOR: quic: define config
option for socket per conn") and needs to be backported together with
that commit (2.8 and possibly 2.7).


Best regards
Tim Düsterhus



--
Best regards,
Artur




[PATCH] DOC: quic: fix misspelled tune.quic.socket-owner

2023-06-06 Thread Artur

DOC: quic: fix misspelled tune.quic.socket-owner

Commit 511ddd5 introduced tune.quic.socket-owner parameter
related to QUIC socket behaviour.
However it was misspelled in configuration.txt in 'bind' section as
tune.quic.conn-owner.
From a446bfc3cf50f58cde0bdc36e93154099771bf9e Mon Sep 17 00:00:00 2001
From: Artur Pydo 
Date: Tue, 6 Jun 2023 11:49:59 +0200
Subject: [PATCH] DOC: quic: fix misspelled tune.quic.socket-owner

---
 doc/configuration.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index adcd00414..b147b501c 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -4936,7 +4936,7 @@ bind / [, ...] [param*]
   was the FD of an accept(). Should be used carefully.
 - 'quic4@' -> address is resolved as IPv4 and protocol UDP
   is used. Note that to achieve the best performance with a
-  large traffic you should keep "tune.quic.conn-owner" on
+  large traffic you should keep "tune.quic.socket-owner" on
   connection. Else QUIC connections will be multiplexed
   over the listener socket. Another alternative would be to
   duplicate QUIC listener instances over several threads,
-- 
2.30.2



Re: tune.quic.socket-owner misspelled in configuration.txt (bind section)

2023-06-06 Thread Artur

Hello Amaury,

Le 06/06/2023 à 09:30, Amaury Denoyelle a écrit :

On Mon, Jun 05, 2023 at 07:21:37PM +0200, Artur wrote:

Hello,
In the following commit tune.quic.socket-owner parameter is introduced.
However, in configuration.txt, line 4629, it's misspelled as
tune.quic.conn-owner.
https://github.com/haproxy/haproxy/commit/511ddd5785266c149dfa593582512239480e1688
I can file a "bug" report on github if necessary.

Indeed good catch.

If you have the time, can you provide us with a proper patch for this?
There is some guidance in the CONTRIBUTING file. If not, I will submit
the change myself.


Let me check the process for providing a patch in CONTRIBUTING. I'll be
very happy to make even a very small contribution to haproxy. :)

If I have any problem I'll keep you informed.

--
Best regards,
Artur




tune.quic.socket-owner misspelled in configuration.txt (bind section)

2023-06-05 Thread Artur

Hello,

In the following commit the tune.quic.socket-owner parameter is
introduced. However, in configuration.txt, line 4629, it's misspelled as
tune.quic.conn-owner.


https://github.com/haproxy/haproxy/commit/511ddd5785266c149dfa593582512239480e1688

I can file a "bug" report on github if necessary.

--
Best regards,
Artur


Re: Debian + QUIC / HTTP/3

2023-06-05 Thread Artur

Thank you Илья and Dinko.

What I can see is that the haproxy doc suggests using the QuicTLS library.
The build process is well explained in the Dockerfile. That's perfect.

I've also seen some information about the haproxy 2.6 configuration for
HTTP/3 over QUIC in the following article. I imagine it may be suitable
for 2.8 as well...


https://www.haproxy.com/blog/announcing-haproxy-2-6
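For what it's worth, the configuration described in that article boils down to adding a quic4@ bind line next to the TCP one and advertising HTTP/3 via alt-svc. A minimal sketch, with placeholder certificate path and backend name:

```
frontend https
  mode http
  bind :443 ssl crt /etc/haproxy/ssl/server.pem alpn h2,http/1.1
  # QUIC listener: UDP port 443, HTTP/3 negotiated via ALPN "h3"
  bind quic4@:443 ssl crt /etc/haproxy/ssl/server.pem alpn h3
  # tell browsers connected over h1/h2 that HTTP/3 is available
  http-response set-header alt-svc "h3=\":443\"; ma=3600"
  default_backend web
```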

--
Best regards,
Artur




Debian + QUIC / HTTP/3

2023-06-05 Thread Artur

Hello,

What is the suggested/recommended way to get QUIC / HTTP/3 working with
haproxy on Debian?


--
Best regards,
Artur




Re: [ANNOUNCE] haproxy-2.6.10

2023-03-13 Thread Artur

Hello,

There is something unclear to me.

10/03/2023 20:43, Willy Tarreau wrote:

HAProxy 2.6.10 was released on 2023/03/10. It added 78 new commits
after version 2.6.9.

A bit more than half of the commits are HTTP3/QUIC fixes. However, as
indicated in the 2.8-dev5 announce, a concurrency bug introduced in 2.5
was fixed in this version, that may cause freezes and crashes when some
HTTP/1 backend connections are closed by the server exactly at the same
time they're going to be reused by another thread. Another different bug
also affecting idle connections since 2.2 was fixed, possibly causing an
occasional crash. One possible work-around if you've faced such issues
recently is to disable inter-thread connection reuse with this directive
in the global section:

tune.idle-pool.shared off

But beware that this may increase the total number of connections kept
established with your backend servers, depending on the reuse frequency
and the number of threads.


Is this a workaround for previous versions only, or does one also need to
apply this setting to the current version?

Thank you for your clarification.

--

Best regards,
Artur




Re: TCP connections resets during backend transfers

2022-10-28 Thread Artur

Hello,

OK, these lost connections during transfers on the server side were
related to a firewall or some hardware timing out long-lived TCP
connections.
To solve this problem I added 'option tcpka' to the haproxy defaults section.
Moreover, one can also adjust the following kernel variables:

net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200

In my situation I had to change these to something like :

net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_time = 120
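To make these values survive a reboot they would typically go into a sysctl drop-in file; a sketch (the file name is illustrative):

```
# /etc/sysctl.d/90-tcp-keepalive.conf
# start probing idle connections after 2 minutes instead of 2 hours
net.ipv4.tcp_keepalive_time = 120
# retry every 30 seconds and give up after 3 failed probes, so a dead
# peer is detected in roughly 120 + 3*30 seconds
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
```

They can be applied without rebooting with `sysctl --system`.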

Le 18/10/2022 à 11:15, Artur wrote :

Hello,

While renewing node.js servers and a galera cluster (mariadb), I'm
seeing unexpected behaviour on the TCP connections between the node.js
application and mariadb.

There are a lot of connection resets during transfers on the backend side.

My previous (working) setup was based on Debian 10, mariadb 10.5, 
node.js 16 (and some dependencies) and haproxy 2.6.
I had a server running several node.js processes and a 3-node galera 
mariadb cluster.
To provide some HA, I configured haproxy as a TCP proxy for mariadb 
connections.

The usual setup is :
node.js -> haproxy -> mariadb
The node.js application uses a connection pool to maintain several open
connections to the database server, and these may be idle for a long time.
The timeouts are adjusted in haproxy to avoid disconnecting idle
connections.

This setup worked just fine on old servers.

Then I've setup new servers on Debian 11: a new mariadb galera cluster 
(10.6), a new node.js application server (no real changes in node.js 
software versions there) and haproxy (2.6.6 currently).
The global setup is quite the same as before, but not exactly the same.
I tried, however, to stay as close as possible to the old setup.
Now, once I start the node.js application, the database connections are
established, and after about 20 minutes I start to see application
warnings about lost connections to the database.
On the haproxy stats page I can see a lot of 'connection resets during
transfers' on the backend side.
On the database side I can see idle processes that stay there even if I
close the node.js application or restart haproxy. These have to time out
or be killed to disappear, as if there were no communication any more
between haproxy and mariadb (on these TCP connections).
At the same moment, other database connections are established or
continue to function. Maybe something related to idle connections?


If it may help: all these servers are VMs in the OVH public cloud, and
communication between servers goes through a private vlan in the same
datacenter.


If I remove haproxy from the workflow (node.js -> mariadb) I don't see
any errors anymore. But I don't understand why it worked fine before and
is behaving this way now...

Any help is welcome.

My current haproxy setup :

global
  log /dev/log  local0
  log /dev/log  local1 notice
  chroot /var/lib/haproxy
  stats socket /run/haproxy/admin.sock mode 660 level admin
  stats timeout 30s
  user haproxy
  group haproxy
  daemon

  # Default SSL material locations
  ca-base /etc/ssl/certs
  crt-base /etc/ssl/private

  # See:
https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
  ssl-default-bind-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
  ssl-default-bind-ciphersuites 
TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256

  ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

  ssl-dh-param-file /etc/haproxy/ssl/dhparams.pem
  tune.ssl.default-dh-param 2048

  maxconn 5

  #nosplice

defaults
  log global
  option dontlognull
  option dontlog-normal
  timeout connect 5000
  timeout client  5
  timeout server  5

  #option tcpka

  errorfile 400 /etc/haproxy/errors/400.http
  errorfile 403 /etc/haproxy/errors/403.http
  errorfile 408 /etc/haproxy/errors/408.http
  errorfile 500 /etc/haproxy/errors/500.http
  errorfile 502 /etc/haproxy/errors/502.http
  errorfile 503 /etc/haproxy/errors/503.http
  errorfile 504 /etc/haproxy/errors/504.http

  option splice-auto
  option splice-request
  option splice-response

frontend db3_front
  bind 127.0.1.1:3306
  mode tcp
  # haproxy client connection timeout is 1 second longer than the
  # default mariadb wait_timeout, which is 28800 seconds
  # this prevents haproxy from closing an idle connection for no reason
  timeout client 28801s
  maxconn 1
  no log
  default_backend db3_back

backend db3_back
  mode tcp
  # haproxy server connection timeout is 1 second longer than the
  # default mariadb wait_timeout, which is 28800 seconds
  # this prevents haproxy from closing an idle connection for no reason
  timeout server 28801s
  option mysql-check user hacheck post-41
  fullconn 1
  timeout check 10s
  server db3sbg5 10.140.154.94:3306 maxc

Re: TCP connections resets during backend transfers

2022-10-21 Thread Artur

Hello John,

Thank you for your answer.

The timeout for client and server is set to 28801s in the frontend and
backend sections (which should override the values set in the defaults
section).
28801s is one second more than the wait_timeout and interactive_timeout
set in Mariadb.
Moreover, the retries I can see in the application log appear after only
about 20 minutes.

Le 20/10/2022 à 20:52, John Lauro wrote :
That's what, 50s? You are probably doing pooling and it's using LRU
instead of actually cycling through connections. At least that is what
I have seen node typically do.


Instead of 50 seconds, try:
    timeout client          12h
    timeout server          12h

You might want to enable logging on haproxy and general logging on
maria. If you see what I have seen in the past, you will notice that
most of the SQL requests come through one connection, then the next
highest through a second, and so on until you get to a connection that
is mostly idle.


*From:* Artur 
*Sent:* Tuesday, October 18, 2022 5:15 AM
*To:* haproxy 
*Subject:* TCP connections resets during backend transfers
Hello,

While renewing node.js servers and a galera cluster (mariadb) I'm
seeing unexpected behaviour on TCP connections between the node.js
application and mariadb.
There are a lot of connection resets during transfers on the backend side.

My previous (working) setup was based on Debian 10, mariadb 10.5,
node.js 16 (and some dependencies) and haproxy 2.6.
I had a server running several node.js processes and a 3-node galera
mariadb cluster.
To provide some HA, I configured haproxy as a TCP proxy for mariadb
connections.
The usual setup is :
node.js -> haproxy -> mariadb
node.js application uses a connection pool to maintain several open
connections to database server that may be idle for a long time.
The timeouts are adjusted in haproxy to avoid disconnecting idle
connections.
This setup worked just fine on old servers.

Then I've setup new servers on Debian 11: a new mariadb galera cluster
(10.6), a new node.js application server (no real changes in node.js
software versions there) and haproxy (2.6.6 currently).
The global setup of all of this is quite the same as before but not
exactly the same. I tried however to be as close as possible to the old
setup.
Now, once I started the node.js application, the database connections
are established and after about 20 minutes I start to see application
warnings about lost connections to database.
On the haproxy stats page I can see a lot of 'connection resets during
transfers' on the backend side.
On database side I can see idle processes that stay there even if I
close node.js application or restart haproxy. These have to timeout or
be killed to disappear. As if there was no communication any more
between haproxy and mariadb (on these tcp connections).
At the same moment other database connections are established or
continue to function. Maybe something related to idle connections ?

If it may help : all these servers are VMs in OVH public cloud and
communications between servers are established through a private vlan in
the same datacenter.

If I remove haproxy from workflow (node.js -> mariadb) I cannot see any
error anymore. But I don't understand why it worked fine before and is
working this way right now...
Any help is welcome.

My current haproxy setup :

global
   log /dev/log  local0
   log /dev/log  local1 notice
   chroot /var/lib/haproxy
   stats socket /run/haproxy/admin.sock mode 660 level admin
   stats timeout 30s
   user haproxy
   group haproxy
   daemon

   # Default SSL material locations
   ca-base /etc/ssl/certs
   crt-base /etc/ssl/private

   # See:
https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate

   ssl-default-bind-ciphers
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
   ssl-default-bind-ciphersuites
TLS_AES_128_GCM_SHA256:TLS_AES

TCP connections resets during backend transfers

2022-10-18 Thread Artur
 or tcpka) but 
errors are still there.
I also tried previous haproxy versions (2.6.5, 2.6.4) but that doesn't
solve the problem.


--
Best regards,
Artur




server cookie value uniqueness

2022-09-06 Thread Artur

Hello !

I'm adding two servers, s01 and s02, to the current config, setting the
same cookie values as for the existing s1 and s2.

These cookies are there to enable sticky sessions.
What would haproxy's behaviour be in this situation?
'haproxy -c' on the configuration file does not show any warning or error.

backend one
    ...
    option redispatch
    balance roundrobin
    server s1 10.100.0.93:2000 cookie s1
    server s2 10.100.0.93:2001 cookie s2
    server s01 10.100.3.101:2000 cookie s1
    server s02 10.100.3.101:2001 cookie s2
    option allbackups
    server sb1 10.100.131.33:2000 cookie s1 backup
    server sb2 10.100.131.33:2001 cookie s2 backup

--
Best regards,
Artur


Re: [ANNOUNCE] haproxy-2.6.0

2022-06-14 Thread Artur

Hello Vincent,

No plans to prepare 2.6 packages for Debian 10?

If you can, I'm interested. Thank you.

Le 03/06/2022 à 23:43, Vincent Bernat a écrit :

  ❦ 31 May 2022 17:56 +02, Willy Tarreau:


HAProxy 2.6.0 was released on 2022/05/31. It added 57 new commits
after version 2.6-dev12, essentially small bug fixes, QUIC counters
and doc updates.

It's available on haproxy.debian.net. No QUIC support as neither Debian
nor Ubuntu has the appropriate library.


--
Best regards,
Artur




Re: server check inter and timeout check relation

2022-03-14 Thread Artur

Le 14/03/2022 à 11:40, Christopher Faulet a écrit :

Le 3/14/22 à 10:53, Artur a écrit :

Hello,

I'd like to know how checks behave depending on the "inter" and
"timeout check" settings.

Let's try this simplified setup :

backend back
   mode tcp
   timeout check 5s
   server s1 1.2.3.4:80 check inter 2s
   server s2 1.2.3.5:80 check inter 2s

"inter 2s" is the default setup. We should have one check every 2s there
if everything is optimal.
"timeout check 5s" specifies that the server check can take up to 5s (once
the connection is established).

In this configuration, what happens if the check takes more than 2
seconds?

Does haproxy wait (up to 5 seconds) for this check to finish before
launching another one, or does it still launch checks every 2s anyway?



Hi,

For a given server, the inter/fastinter/downinter timeouts define the
delay between the end of a health check and the beginning of the
following one. This is independent of the evaluation time. Thus, in your
example, a health check will still start 2s after the end of the
previous one, regardless of its duration.



OK, I got it. One check at a time and 2s between each check.

However, as "timeout check" is set to 5 seconds, each check cannot run
longer than 5 seconds. It means that if the backend server does not send
data before the 5 seconds have elapsed, the check fails.

Am I right?

--
Best regards,
Artur




server check inter and timeout check relation

2022-03-14 Thread Artur

Hello,

I'd like to know how checks behave depending on the "inter" and
"timeout check" settings.


Let's try this simplified setup :

backend back
 mode tcp
 timeout check 5s
 server s1 1.2.3.4:80 check inter 2s
 server s2 1.2.3.5:80 check inter 2s

"inter 2s" is the default setup. We should have one check every 2s there
if everything is optimal.
"timeout check 5s" specifies that the server check can take up to 5s (once
the connection is established).


In this configuration, what happens if the check takes more than 2 seconds?
Does haproxy wait (up to 5 seconds) for this check to finish before
launching another one, or does it still launch checks every 2s anyway?


--
Best regards,
Artur




Re: Bad backend selected

2021-06-07 Thread Artur
Hello Tim,

Le 07/06/2021 à 19:13, t...@bastelstu.be a écrit :
> Artur,
>
> [cc'ing Amaury]
>
> Am 2021-06-07 16:46, schrieb Artur:
>> However the only difference is the 443 port explicitly specified in the
>> later request.
>> I am not sure it's something specific to 2.4.0, but I've never seen it
>> before.
>> Is it an expected behaviour ? If so, how can I change my acls to correct
>> it ?
>
> I encountered the same issue (incidentally also with socket.io). It's
> happening for WebSockets via HTTP/2. These are newly supported
> starting with HAProxy 2.4. The "broken" requests are most likely
> Firefox, while the working ones are not Firefox. I already have a
> private email thread with a few developers regarding this behavior.
>
> Best regards
> Tim Düsterhus

My problem was mainly caused by the port appearing in the url and an
incorrect use of hdr(host), leading to a bad backend choice.
Once the websocket connection request was correctly forwarded to the
right backend, I couldn't see any problem (the request ended with a 101
websocket upgrade HTTP status code).
However, I have no idea how the 443 port appeared in the request; I was
unable to reproduce this kind of request myself with Firefox or other
browsers. Quite strange.
FYI, our application uses the socket.io(-client) node.js/javascript modules.

-- 
Best regards,
Artur




Re: Bad backend selected

2021-06-07 Thread Artur
Le 07/06/2021 à 17:22, Jarno Huuskonen a écrit :
> Hello,
>
> On Mon, 2021-06-07 at 16:46 +0200, Artur wrote:
>> Hello,
>>
>> I'm currently running haproxy 2.4.0 and I can see something strange in
>> the way haproxy selects a backend for processing some requests.
>>
>> This is simplified frontend configuration that should select between
>> static and dynamic (websocket) content URIs based on path_beg.
>>
>> frontend wwws
>>     bind 0.0.0.0:443 ssl crt /etc/haproxy/ssl/server.pem alpn
>> h2,http/1.1
>>     mode http
>>
>>     acl is_static_prod31    path_beg /p31/
>>     acl is_dynamic_prod31   path_beg /n/p31/
>>     acl is_domain_name hdr(host) -i domain.name
>>
>>     use_backend ws_be_prod31 if is_dynamic_prod31 is_domain_name
>>     use_backend www_be_prod  if is_static_prod31 is_domain_name
>>
>>     default_backend www_be_prod
>>
>> What I can see in logs is that some requests are correctly processed and
>> redirected to dynamic backends (websockets servers) for processing :
>>
>> Jun  7 15:44:41 host haproxy[9384]: 1.2.3.4:56952
>> [07/Jun/2021:15:43:31.926] wwws~ ws_be_prod31/s1 5/0/1/3/70015 101 421 -
>> - --VN 34/34/27/8/0 0/0 "GET https://domain.name/n/p31/socket.io/...
>> HTTP/2.0"
>>
>> While others are wrongly processed by the static web server :
>>
>> Jun  7 15:50:06 host haproxy[9384]: 1.2.3.4:61037
>> [07/Jun/2021:15:50:06.157] wwws~ www_be_prod/web1 6/0/1/1/7 404 9318 - -
>>  34/34/0/0/0 0/0 "GET https://domain.name:443/n/p31/socket.io/...
>> HTTP/2.0"
>>
>> However the only difference is the 443 port explicitly specified in the
>> later request.
>> I am not sure it's something specific to 2.4.0, but I've never seen it
>> before.
>> Is it an expected behaviour ? If so, how can I change my acls to correct
>> it ?
> Does it work if you use hdr_dom
> (https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-req.hdr)
> for the host header acl:
> (acl is_domain_name hdr_dom(host) -i domain.name)
> (or some other match that ignores the port in the Host header).
>
> -Jarno

Yes, it seems to work fine now. Thank you. I realized the port number is
part of the Host: header if explicitly specified in the request.

However, as in my setup (removed part) I also have to check for dev*
hostnames, I would like to know the exact hdr_dom(host) behaviour.
With this example: acl acl1 hdr_dom(host) -i domain.name
1) Host: domain.name:443 -> acl1 matches
2) Host: domain.name -> acl1 matches
3) Host: dev.domain.name:443 -> acl1 does not match
4) Host: dev.domain.name -> acl1 does not match

Am I right? (I suppose I can also use hdr_beg(host) to check the
beginning of the hostname)
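A small sketch combining the two matchers for this kind of setup (domain.name and dev.domain.name are the placeholders used in this thread; dev_be is a hypothetical backend name):

```
# hdr_dom(host) ignored the explicit :443 suffix in this thread, so both
# "domain.name" and "domain.name:443" match this acl
acl is_domain     hdr_dom(host) -i domain.name
# hdr_beg(host) anchors at the start of the header, which can be used
# to single out dev* hostnames
acl is_dev_domain hdr_beg(host) -i dev.domain.name
use_backend dev_be if is_dev_domain
```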

-- 
Best regards,
Artur




Bad backend selected

2021-06-07 Thread Artur
Hello,

I'm currently running haproxy 2.4.0 and I can see something strange in
the way haproxy selects a backend for some requests.

This is a simplified frontend configuration that should select between
static and dynamic (websocket) content URIs based on path_beg.

frontend wwws
    bind 0.0.0.0:443 ssl crt /etc/haproxy/ssl/server.pem alpn
h2,http/1.1
    mode http

    acl is_static_prod31    path_beg /p31/
    acl is_dynamic_prod31   path_beg /n/p31/
    acl is_domain_name hdr(host) -i domain.name

    use_backend ws_be_prod31 if is_dynamic_prod31 is_domain_name
    use_backend www_be_prod  if is_static_prod31 is_domain_name

    default_backend www_be_prod

What I can see in logs is that some requests are correctly processed and
redirected to dynamic backends (websockets servers) for processing :

Jun  7 15:44:41 host haproxy[9384]: 1.2.3.4:56952
[07/Jun/2021:15:43:31.926] wwws~ ws_be_prod31/s1 5/0/1/3/70015 101 421 -
- --VN 34/34/27/8/0 0/0 "GET https://domain.name/n/p31/socket.io/...
HTTP/2.0"

While others are wrongly processed by the static web server :

Jun  7 15:50:06 host haproxy[9384]: 1.2.3.4:61037
[07/Jun/2021:15:50:06.157] wwws~ www_be_prod/web1 6/0/1/1/7 404 9318 - -
 34/34/0/0/0 0/0 "GET https://domain.name:443/n/p31/socket.io/...
HTTP/2.0"

However, the only difference is the 443 port explicitly specified in the
latter request.
I'm not sure this is specific to 2.4.0, but I've never seen it before.
Is this expected behaviour? If so, how can I change my acls to handle
it?

-- 
Best regards,
Artur




Re: [ANNOUNCE] haproxy-2.4.0

2021-05-17 Thread Artur
Hello,

Thank you for this new release.

When can we expect prebuilt packages for Debian on haproxy.debian.net ?

Le 14/05/2021 à 11:56, Willy Tarreau a écrit :
> HAProxy 2.4.0 was released on 2021/05/14.

-- 
Best regards,
Artur




Re: Backend servers backup setup

2020-09-08 Thread Artur
Hello,

I didn't see any answer or comment on my inquiry.
I supposed someone would say it's not possible, or that there is a
miracle solution, or that it could become a new feature. :)
Could you please tell me which hypothesis is right?

Le 01/09/2020 à 11:08, Artur a écrit :
> Hello,
>
> I need your help on configuring servers backup in a backend.
> This is my current (simplified) backend setup :
>
> backend ws_be
>     mode http
>     option redispatch
>     cookie c insert indirect nocache attr "SameSite=Lax"
>     balance roundrobin
>     server s1 1.2.3.3:1234 cookie s1 check
>     server s2 1.2.3.3:2345 cookie s2 check
>     option allbackups
>     server sb1 2.3.4.5:3456 cookie s1 check backup
>     server sb2 2.3.4.5:4567 cookie s2 check backup
>
> FYI, the servers of this backend are node.js processes (dynamic content
> and websockets).
> Case 1 : If s1 or s2 is DOWN, all the connections are redispatched to
> the remaining UP server (s1 or s2).
> Case 2 : If s1 AND s2 are DOWN, all the connections are redispatched to
> the sb1 and sb2 backup servers.
>
> In the second case, the global application performance is similar to the
> normal situation where all main servers are UP.
> However, in the case one, the application performance can be degraded
> because there is only one server serving requests instead of two (and
> backup servers are inactive).
> I would like to modify the current setup so that if a main server goes
> down, it is at once replaced by a backup server and all the connections
> redispatched from the DOWN server to a backup server.
> Of course, there may be variations :
> - 1 main server DOWN -> Corresponding backup server activated
> - 1 main server DOWN -> all backup servers activated
> - 1 main server DOWN -> some backup servers activated
>
> Any idea on how to achieve this ?
>
-- 
Best regards,
Artur




Backend servers backup setup

2020-09-01 Thread Artur
Hello,

I need your help configuring backup servers in a backend.
This is my current (simplified) backend setup:

backend ws_be
    mode http
    option redispatch
    cookie c insert indirect nocache attr "SameSite=Lax"
    balance roundrobin
    server s1 1.2.3.3:1234 cookie s1 check
    server s2 1.2.3.3:2345 cookie s2 check
    option allbackups
    server sb1 2.3.4.5:3456 cookie s1 check backup
    server sb2 2.3.4.5:4567 cookie s2 check backup

FYI, the servers of this backend are node.js processes (dynamic content
and websockets).
Case 1 : If s1 or s2 is DOWN, all the connections are redispatched to the
remaining UP server (s1 or s2).
Case 2 : If s1 AND s2 are DOWN, all the connections are redispatched to
the sb1 and sb2 backup servers.

In the second case, the global application performance is similar to the
normal situation where all main servers are UP.
However, in the first case, the application performance can be degraded,
because there is only one server serving requests instead of two (and
the backup servers are inactive).
I would like to modify the current setup so that if a main server goes
down, it is immediately replaced by a backup server and all the
connections are redispatched from the DOWN server to a backup server.
Of course, there may be variations :
- 1 main server DOWN -> Corresponding backup server activated
- 1 main server DOWN -> all backup servers activated
- 1 main server DOWN -> some backup servers activated

Any idea on how to achieve this ?
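One approach that might approximate the "1 main server DOWN -> all backup servers activated" variation (an untested sketch; the frontend name `wwws` and backend name `ws_be_all` are invented for illustration) is to keep ws_be as the default backend and switch to a backend where the backup servers are always active as soon as fewer than two main servers are up, using the nbsrv fetch:

```haproxy
frontend wwws
    mode http
    # switch to the degraded backend as soon as fewer than 2 main servers are up
    acl ws_degraded nbsrv(ws_be) lt 2
    use_backend ws_be_all if ws_degraded
    default_backend ws_be

backend ws_be_all
    mode http
    option redispatch
    cookie c insert indirect nocache attr "SameSite=Lax"
    balance roundrobin
    server s1 1.2.3.3:1234 cookie s1 check
    server s2 1.2.3.3:2345 cookie s2 check
    # distinct cookie values: within a single backend, duplicated cookie
    # values would make persistence ambiguous
    server sb1 2.3.4.5:3456 cookie sb1 check
    server sb2 2.3.4.5:4567 cookie sb2 check
```

Per-server pairing (one specific backup activated for one specific down main server) has no direct equivalent within a single backend.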

-- 
Best regards,
Artur




Re: Haproxy 2.2 LTS package for Debian Stretch oldstable

2020-08-04 Thread Artur
Hello,

I've seen the package is available now. Thank you.

Le 03/08/2020 à 22:49, Vincent Bernat a écrit :
>
> Well, you are the second person asking this in a short time, so I will
> provide one. My rationale is that 2.2 is quite new and Stretch is
> already maintained as LTS. When maintained as LTS, we lose the ability
> to use backports (there is no LTS for official backports), so it may
> make the maintenance more difficult. However, it's unlikely the build
> system will change much during the lifecycle of the 2.2 and since it
> doesn't require backports, we should be fine.

-- 
Best regards,
Artur




Haproxy 2.2 LTS package for Debian Stretch oldstable

2020-08-03 Thread Artur
Hello,

It would be nice to have a Debian Stretch package for the current LTS
2.2 branch in backports. It seems it's not available for now.

-- 
Best regards,
Artur




Re: [ANNOUNCE] haproxy-2.1.5

2020-06-03 Thread Artur
Hello,

What is the risk of triggering this bug in real life ? How can it be
avoided before the next haproxy release ?
In my setup haproxy 2.1.5 has threads enabled by default (Debian Buster
backports).

Le 02/06/2020 à 19:28, William Dauchy a écrit :
> On Tue, Jun 2, 2020 at 12:13 PM William Dauchy  wrote:
>> it seems like I broke something with this commit, but I did not have
>> it in v2.2
> small followup:
> Sorry for that one, the backport was not exactly as I thought, and so
> no tests were done before the release outside of the 2.2 branch:
> - a small mistake in an index within a loop
> - more importantly, srv->idle_conns and srv->safe_conns are not
> thread-safe lists in 2.0 and 2.1
>
> I chose to revert the changes < 2.2

-- 
Best regards,
Artur




Re: haproxy 2.1 package for Debian 9 Stretch oldstable

2019-12-17 Thread Artur
Hi,

Le 17/12/2019 à 17:10, Vincent Bernat a écrit :
>
> I have pushed HAProxy 2.1.1 for Stretch. Tell me if everything is OK for
> you.

I tested also. Everything seems to be OK. Thank you.

-- 
Best regards,
Artur




Re: [2.1.1] http-request replace-uri does not work

2019-12-17 Thread Artur
Hello,

Le 17/12/2019 à 08:41, Julien Pivotto a écrit :
> On 17 Dec 06:58, Willy Tarreau wrote:
>> On Tue, Dec 17, 2019 at 06:08:56AM +0100, Willy Tarreau wrote:
>>> But now I'm starting to suspect that most of the problem comes from the
>>> fact that people who used to rely on regex in the past will not as easily
>>> perform their rewrites using set-path as they would using a replace rule
>>> which is very similar to the old set. So probably we'd need to introduce
>>> a "replace-path" action and suggest it in the warning emitted for reqrep.
>>>
>>> I think it is important that we properly address such needs and am
>>> willing to backport anything like this to 2.1 to ease the transition if
>>> that's the best solution.
>> What about this ? It does exactly what's needed for me. It's self-contained
>> enough that we could get it backported to 2.1 and maybe even to 2.0 (though
>> it would require some adaptations to legacy mode there).
>>
>> Willy
> I am in favour of replace-path.

Yes, it will be nice to have it.

For my current setup I tested set-path as suggested by Willy and it
works fine with the following setup :

    acl is_dev_qd hdr(host) -i dev.q.d dev.qs.d
    acl is_qd hdr(host) -i q.d qs.d www.q.d www.qs.d
    acl is_ppds path_beg -i /PPDSlide/
    http-request set-path /d3%[path] if is_ppds is_dev_qd
    http-request set-path /p3%[path] if is_ppds is_qd

While using replace-uri, my mistake was to blindly follow the official
documentation examples, none of which use the absolute URI form.
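For the record, once a replace-path action exists it should make the original rules portable without switching to set-path; a sketch based on the replace-uri syntax, not verified against a specific release:

```haproxy
http-request replace-path ^/PPDSlide/(.*) /d3/PPDSlide/\1 if is_ppds is_dev_qd
http-request replace-path ^/PPDSlide/(.*) /p3/PPDSlide/\1 if is_ppds is_qd
```

Unlike replace-uri, replace-path matches only the path component, so it behaves the same for HTTP/1.1 origin-form and HTTP/2 absolute-form requests.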

-- 
Best regards,
Artur




Re: haproxy 2.1 package for Debian 9 Stretch oldstable

2019-12-17 Thread Artur
Hi,

Le 17/12/2019 à 10:28, Vincent Bernat a écrit :
>  ❦ 16 décembre 2019 22:15 +01, Artur :
>
>> While checking for haproxy 2.1 package for Debian Stretch on
>> https://haproxy.debian.net/, I saw it wasn't available (yet ?).
>>
>> Do you plan to build haproxy deb packages for this version of Debian,
>> it's still supported as oldstable for now ?
> Hello,
>
> I didn't plan to do uploads for Stretch for this version of HAProxy.
> This is a non-LTS version of HAProxy, so I am only targeting recent
> distributions. If you find another people interested in this version as
> well, I'll add it.

Good to know, not a real problem for me. Thank you for the quick answer.

-- 
Best regards,
Artur




haproxy 2.1 package for Debian 9 Stretch oldstable

2019-12-16 Thread Artur
Hello,

While checking for haproxy 2.1 package for Debian Stretch on
https://haproxy.debian.net/, I saw it wasn't available (yet ?).

Do you plan to build haproxy deb packages for this version of Debian,
it's still supported as oldstable for now ?

-- 
Best regards,
Artur




Re: [2.1.1] http-request replace-uri does not work

2019-12-16 Thread Artur
Hello Cyril,

Thanks a lot for the confirmation.

Le 16/12/2019 à 20:20, Cyril Bonté a écrit :
> Hi Artur,
>
> Le 16/12/2019 à 19:06, Artur a écrit :
>> Hello,
>>
>> This is an extract of my frontend configuration working perfectly on
>> 2.0.11.
>>
>> frontend wwws
>>  bind 0.0.0.0:443 ssl crt /etc/haproxy/ssl/server.pem alpn
>> h2,http/1.1
>>  mode http
>>  acl is_dev_qd hdr(host) -i dev.q.d dev.qs.d
>>  acl is_qd hdr(host) -i q.d qs.d www.q.d www.qs.d
>>  http-request replace-uri ^/PPDSlide/(.*) /d3/PPDSlide/\1 if
>> is_dev_qd
>>  http-request replace-uri ^/PPDSlide/(.*) /p3/PPDSlide/\1 if
>> is_qd
>>  
>>
>> URLs like https://q.d/PPDSlide/testfile are correctly rewritten to
>> https://q.d/p3/PPDSlide/testfile and forwarded to the backend.
>>
>> Once I switched to 2.1.1, haproxy no longer rewrites the URI and the
>> URIs remains unchanged while forwarded to the backend. I had to
>> downgrade to have the usual behaviour.
>>
>> Is it a bug or something changed in normal haproxy behaviour with 2.1
>> release ?
>
> I can confirm the issue.
>
> It seems to happen with h2 requests only, since commit #30ee1efe67.
> haproxy normalizes the URI but replace-uri doesn't take into account
> this information. The fix should be easy for replace-uri (If someone
> wants to work on it, I won't have time this week).
>
> http://git.haproxy.org/?p=haproxy-2.1.git;a=commit;h=30ee1efe67
>
>
-- 
Best regards,
Artur




[2.1.1] http-request replace-uri does not work

2019-12-16 Thread Artur
Hello,

This is an extract of my frontend configuration working perfectly on 2.0.11.

frontend wwws
    bind 0.0.0.0:443 ssl crt /etc/haproxy/ssl/server.pem alpn
h2,http/1.1
    mode http
    acl is_dev_qd hdr(host) -i dev.q.d dev.qs.d
    acl is_qd hdr(host) -i q.d qs.d www.q.d www.qs.d
    http-request replace-uri ^/PPDSlide/(.*) /d3/PPDSlide/\1 if
is_dev_qd
    http-request replace-uri ^/PPDSlide/(.*) /p3/PPDSlide/\1 if is_qd
    

URLs like https://q.d/PPDSlide/testfile are correctly rewritten to
https://q.d/p3/PPDSlide/testfile and forwarded to the backend.

Once I switched to 2.1.1, haproxy no longer rewrote the URI: it remained
unchanged when forwarded to the backend. I had to downgrade to restore
the usual behaviour.

Is it a bug or something changed in normal haproxy behaviour with 2.1
release ?

-- 
Best regards,
Artur




Get rid of TCP "Connect from..." logs

2019-09-11 Thread Artur
Hello,

My current 2.0.5 haproxy logs a lot of "useless" messages such as :

Sep 11 13:10:08 server haproxy[28163]: Connect from 127.0.0.1:39951 to
127.0.0.1:6379 (r1_front/TCP)

My configuration is something like (I removed lines not related to
logging) :

global
    log /dev/log    local0
    log /dev/log    local1 notice
defaults
    log global
    option  dontlognull
    option dontlog-normal
frontend r1_front
    bind 127.0.0.1:6379
    mode tcp
    option dontlog-normal
    option dontlognull
    default_backend r1_back
backend r1_back
    mode tcp
...

I don't want to remove all logs; however, logging "normal" connection
information is not needed.
So far I have been unable to disable those "Connect from" lines.

Any idea on what I'm doing wrong here ?
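For what it's worth: those "Connect from" lines are the message haproxy emits at connection time for a TCP frontend that has a log target but no session log format; dontlog-normal only affects the per-session logs. A possible fix (a sketch, not verified on this exact setup) is to enable the standard TCP log format, which is emitted at session end and honours dontlog-normal:

```haproxy
frontend r1_front
    bind 127.0.0.1:6379
    mode tcp
    option tcplog          # log completed sessions instead of raw connects
    option dontlog-normal  # ...and drop the successful ones
    default_backend r1_back
```

Alternatively, 'no log' in the frontend silences its logging entirely.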

By the way, any pointers to articles on how to well manage logging with
haproxy are welcome. I always had a problem to correctly understand
logging in haproxy.

Thanks a lot for your help.

-- 
Best regards,
Artur




Replace deprecated reqrep

2019-07-08 Thread Artur
Hello,

Could you please suggest how to rewrite the following rules, written with
the deprecated 'reqrep', using 'http-request replace-uri' :

frontend www
 reqrep ^([^\ ]*)\ /p3/js/(.*) \1\ /p3/js-min/\2

The idea is to rewrite something similar to "GET /p3/js/file.js
HTTP/1.1" as "GET /p3/js-min/file.js HTTP/1.1".
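A possible translation (a sketch, not verified against a specific version): since the reqrep rule above only touches the path part of the request line, matching the path in an http-request rule should be enough:

```haproxy
frontend www
    http-request replace-uri ^/p3/js/(.*) /p3/js-min/\1
```

On versions that have it, replace-path is the safer equivalent, because replace-uri matches the full URI, which is in absolute form for HTTP/2 requests.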

-- 
Best regards,
Artur




Re: Timeout tuning for websocket proxy

2018-02-23 Thread Artur

  
  
Hello,

Le 16/02/2018 à 22:47, Aleksandar Lazic a écrit :
> The timeout for Websockets is this one:
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20tunnel
>
> What's the setting for this timeout in your config?
>
> Please also share which version you use; maybe the doc link is wrong
> for your version.

Le 16/02/2018 à 21:42, Baptiste a écrit :
> This blog article should answer your questions:
> https://www.haproxy.com/fr/blog/websockets-load-balancing-with-haproxy/
>
> That said, since the time of this article, we released "tcp-ut", which
> you may want to set as well to detect faster when a client has
> disconnected.
>
> Note that a best practice is to implement an application-layer "ping"
> every 1 minute and set the timeout tunnel to 61s.

Thank you all for the links, and sorry for the late feedback.
Haproxy is the current stable version 1.8.4.
My current 'timeout tunnel' is 1h.
However, in the node.js websocket engine the ping interval is currently
set at 15 seconds and the ping timeout at 30 seconds.
So a 1h tunnel timeout seems quite too long; something like the
suggested 61 seconds may be much more appropriate (or even less than
that?).
I'm trying to be cautious about setting this timeout too low, because
there are mobile clients and mobile networks may experience some
important lags sometimes.

I am not sure tcp-ut would help here, as there is an internal websocket
keepalive feature enabled, so there is always some traffic passing
through the TCP connection.

-- 
Best regards,
Artur
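Putting the suggestions from this thread together, the relevant parts of the configuration might look like this (a sketch; the bind line and timeout values are illustrative, assuming the application-level ping stays at 15s/30s):

```haproxy
defaults
    timeout tunnel 61s   # just above the 15s ping + 30s ping-timeout window

frontend wwws
    # tcp-ut sets TCP_USER_TIMEOUT on the listening socket, so dead
    # clients are detected faster than with default kernel retransmits
    bind 0.0.0.0:443 ssl crt /etc/haproxy/ssl/server.pem tcp-ut 30s
```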
  




Timeout tuning for websocket proxy

2018-02-16 Thread Artur
Hello,

I have a haproxy setup with node.js application in backend.
Clients connect websockets to node.js application through haproxy and
these connections are usually established for a long time, one or more
hours.

I wondered if there is any need to adjust the default haproxy.cfg
timeouts at different levels?
While I'm at it, maybe you have some other suggestions on
websocket-specific setup/option tuning?

-- 
Best regards,
Artur




Re: How to password protect a subdirectory

2017-02-02 Thread Artur
Hi Jarno,

Thanks a lot. Very helpful. Works very well.

Jarno Huuskonen wrote :
> Hi,
>
> On Tue, Jan 31, Artur wrote:
>> Hello,
>>
>> I'm currently serving public static content from a webserver behind haproxy.
>> What could be the right way to password protect only a single
>> subdirectory (and all its content) with haproxy ?
>> / -> Public
>> /directory/private -> Password protected
> You could try with something like this:
>
> userlist testlist
> user test insecure-password test
>
> your listen/frontend:
>   acl need_auth path_beg /directory/private
>   acl auth_ok http_auth(testlist)
>
>   http-request auth if need_auth !auth_ok
>
> For more information check userlists / auth docs:
> https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#3.4
> https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.3.6-http_auth
> https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-http-request
>
> -Jarno
>
-- 
Best regards,
Artur




How to password protect a subdirectory

2017-01-31 Thread Artur
Hello,

I'm currently serving public static content from a webserver behind haproxy.
What could be the right way to password protect only a single
subdirectory (and all its content) with haproxy ?
/ -> Public
/directory/private -> Password protected

-- 
Best regards,
Artur




Limiting max connections on backends and warn user

2016-07-11 Thread Artur
Hello,

I'm currently testing a quite simple setup. Haproxy in front of 1 nginx
and 3 backends running a node.js application. Nginx serves static files
only and I have no issue with it.

There are 2 frontends, one for http and one for https.

In a standard scenario, the browser first requests a dynamic page over
http, then, once the page is loaded, establishes an https websocket
connection for a long time. This also works fine.

However, to avoid any overload on the node.js backend servers, I would
like to limit the number of simultaneous connections to them. It's
possible to define maxconn on the backend servers and that's fine, but
if a backend server is overloaded, the connecting user is not warned,
as the rejection may happen while establishing the websocket
connection. The user cannot see the 503 error in this case.

So I would like to warn the connecting user that there is some kind of
overload on the server when the backend limits are (nearly) reached.
Right now it cannot be done inside the application, so I wonder if there
is a way to do that in the frontends before any request is sent to the
backends. A kind of dynamic 503 page may be a solution here. It should
be triggered only when a threshold is reached on the backends.

There is an extract of the current test setup :

frontend www
bind 0.0.0.0:80
mode http

acl is_static_dev   path_beg /application
acl is_dynamic_dev  path_beg /nodejs/application

use_backend websocket_backend_dev if is_dynamic_dev
use_backend www_backend_dev if is_static_dev

default_backend www_backend

frontend wwws
bind 0.0.0.0:443 ssl crt /etc/haproxy/ssl/server.pem
mode http

acl is_static_dev   path_beg /application
acl is_dynamic_dev  path_beg /nodejs/application

use_backend websocket_backend_dev if is_dynamic_dev
use_backend www_backend_dev if is_static_dev

default_backend www_backend

backend www_backend_dev
mode http
option forwardfor
server web1 127.0.0.1:81 weight 1 maxconn 8192

backend www_backend
mode http
option forwardfor
server web1 127.0.0.1:81 weight 1 maxconn 8192

backend websocket_backend_dev
mode http
option forwardfor
option http-server-close
option forceclose
no option httpclose
option persist
option redispatch
fullconn 2

cookie rd insert indirect nocache
balance roundrobin
server s1 127.0.0.1:3026 weight  95 maxconn 100 cookie s1
server s2 127.0.0.1:3030 weight 100 maxconn 100 cookie s2
server s3 127.0.0.1:3031 weight 100 maxconn 100 cookie s3

BTW: Thanks a lot for HAProxy, it's a very good piece of software.

-- 
Best regards,
Artur