Rpm version 2.4.14

2022-03-14 Thread Eli Bechavod
Hi guys,
I am looking for an RPM for version 2.4.14 and didn't find one.

Why did the CentOS/RHEL-based images stop at 1.8? I saw that I can
install from source with a makefile, but that's the old way .. :( .

I would like to know if you have any solutions.

Thanks
Eli


[ANNOUNCE] haproxy-2.0.28

2022-03-14 Thread Christopher Faulet

Hi,

HAProxy 2.0.28 was released on 2022/03/14. It added 37 new commits
after version 2.0.27.

It was announced a few weeks ago. Now it is released! Sorry for the delay. The
main issues fixed in this version are:

  * A tiny race condition in the scheduler affecting the rare multi-threaded
tasks. In some cases, a task could finish running on one thread while
expiring on another one, just as it was being requeued to a position
still being calculated by the thread finishing with it. The most likely
case was the peers task disabling the expiration while waiting for other
peers to be locked, causing such a non-expirable task to be queued and
to block all other timers from expiring (typically health checks, peers
and resolvers, but others were affected). This could only happen at a
high peers traffic rate, but it definitely did. When built with suitable
options such as DEBUG_STRICT, it would immediately crash (which is how
it was detected). This bug has been present since 2.0.

  * A bug in the Set-Cookie2 response parser could result in an infinite loop
triggering the watchdog if a server sent this header while it belonged to
a backend configured with cookie persistence. Usually cookie-based
persistence is not used with untrusted servers, but if that were the
case, the following rule would be usable as a workaround for the time it
takes to upgrade:

http-response del-header Set-Cookie2

It reminded us that 2.5 years ago we were discussing completely dropping
Set-Cookie2, which never succeeded in the field; Tim has opened an issue
so that we don't forget to remove it after 2.6. This issue was
diagnosed, reported and fixed by Andrew McDermott and Grant Spence.
This bug has been present since 1.9.
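
For illustration only, here is a minimal sketch of where such a workaround
could sit, in a hypothetical backend using cookie persistence (the backend
name, cookie name and server addresses are made up):

backend app
   mode http
   cookie SRVID insert indirect nocache
   # workaround: strip Set-Cookie2 from server responses until upgraded
   http-response del-header Set-Cookie2
   server s1 192.0.2.10:8080 check cookie s1
   server s2 192.0.2.11:8080 check cookie s2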

  * A bug in the H2 multiplexer. An error during the response processing,
after the HEADERS frame parsing, led to a wakeup loop consuming all the
CPU because the error was not properly reported to the upper layer. For
instance, this happened if an invalid header value, an invalid status
code or a forbidden header was found in the response. Note that only
HAProxy versions >= 2.4 are affected by this issue.

  * A bug in the SPOE error handling. When a connection to an agent dies,
there may still be requests pending that are tied to this connection.
The list of such requests is scanned so that they can be aborted, except
that the condition to scan the list was incorrect, and when these
requests were finally aborted upon processing timeout, they were
updating the memory area they used to point to, which could have been
reused for anything, causing random crashes very commonly seen in libc's
malloc/free, in OpenSSL, or in HAProxy pools with corrupted pointers. In
short, anyone using SPOE must absolutely update to apply the fix;
otherwise any bug they face cannot be trusted, as we know there's a rare
but real case of memory corruption there. This bug has been present
since 1.8.

  * An issue in the pass-through multiplexer leading to a connection leak on
the server side when a timeout occurred during connection establishment.
In this case, the server connection was detached from the application
stream but not closed. At this stage the connection could only be closed
by the server, if it was finally accepted, or by the kernel, after all
SYN retries. All versions as far back as 2.3 are affected by this bug.

  * An FD leak on reload failures. When the master process is reloaded on a
new config, it tries to connect to the previous process' socket to
retrieve all known listening FDs to be reused by the new listeners. If
listeners were removed, their unused FDs are simply closed. However,
there's a catch: if a socket fails to bind, the master cancels its
startup and switches to wait mode until a new operation happens. In this
case it didn't close the possibly remaining FDs that were left unused.

  * An FD leak of a socketpair upon a failed reload. When starting HAProxy in
master-worker mode, the master pre-allocates a struct mworker_proc and
performs a socketpair() before the configuration parsing. If the
configuration loading failed, the FDs were never closed because they are
not part of a listener; they are not even in the fdtab.

  * It was possible to temporarily lose the stats sockets upon reloads in
master-worker mode in case of early error (e.g. missing config file), in
which case the socket transfer from the older process couldn't happen.

  * Some issues with errors on buffer allocation. First, in the H1
multiplexer: if we failed to send data because we failed to allocate the
H1 output buffer, the H1 stream was erroneously woken up. This led to a
wakeup loop trying to send more data while it was not possible because
there was no output buffer. Then, in process_stream(), if we failed to
allocate the

[ANNOUNCE] haproxy-2.2.22

2022-03-14 Thread Christopher Faulet

Hi,

HAProxy 2.2.22 was released on 2022/03/14. It added 13 new commits
after version 2.2.21.

This one contains the same fixes as 2.3.19, so I'm not going to be
very original:

  * An issue in the pass-through multiplexer leading to a connection leak on
the server side when a timeout occurred during connection establishment.
In this case, the server connection was detached from the application
stream but not closed. At this stage the connection could only be closed
by the server, if it was finally accepted, or by the kernel, after all
SYN retries. All versions as far back as 2.3 are affected by this bug.

  * An issue in the master CLI. When a command was sent to a worker, the
errors, especially write errors, during the response processing were not
properly handled. The session could remain stuck if a client quickly
closed the connection before the response was fully sent. The maxconn
value of the master CLI is set to 10. Thus, it could quickly become
unresponsive if this happened several times.

  * An issue with all HTX applets. The end of a message was only reported at
the HTX level. The channel's flags were not updated accordingly. The
only known visible effect of this bug was some server aborts erroneously
reported in the stats counters.

  * Proxy mode (tcp, http, cli...) was not properly reported when
displayed. The missing "syslog" and "peers" modes can now be reported.

  * The anti-loop protection in process_stream() was improved to only count
the no-progress calls.

This release cycle was performed to be able to finally release
2.0.28. It was announced a few weeks ago. Twice. But it was delayed for
lack of time. This time, it must be released tomorrow morning!

Thanks everyone for your help and your contributions!

Please find the usual URLs below :
   Site index       : http://www.haproxy.org/
   Discourse        : http://discourse.haproxy.org/
   Slack channel    : https://slack.haproxy.org/
   Issue tracker    : https://github.com/haproxy/haproxy/issues
   Wiki             : https://github.com/haproxy/wiki/wiki
   Sources          : http://www.haproxy.org/download/2.2/src/
   Git repository   : http://git.haproxy.org/git/haproxy-2.2.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-2.2.git
   Changelog        : http://www.haproxy.org/download/2.2/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/


---
Complete changelog :
Christian Ruppert (1):
  DOC: Fix usage/examples of deprecated ACLs

Christopher Faulet (9):
  BUG/MINOR: hlua: Set conn-stream/channel EOI flags at the end of request
  BUG/MINOR: stats: Set conn-stream/channel EOI flags at the end of request
  BUG/MINOR: cache: Set conn-stream/channel EOI flags at the end of request
  BUG/MINOR: promex: Set conn-stream/channel EOI flags at the end of request
  DEBUG: cache: Update underlying buffer when loading HTX message in cache applet
  BUG/MEDIUM: mcli: Properly handle errors and timeouts during reponse processing
  DEBUG: stream: Add the missing descriptions for stream trace events
  DEBUG: stream: Fix stream trace message to print response buffer state
  BUG/MAJOR: mux-pt: Always destroy the backend connection on detach

William Lallemand (1):
  BUG/MINOR: cli: shows correct mode in "show sess"

Willy Tarreau (2):
  BUG/MINOR: stream: make the call_rate only count the no-progress calls
  BUILD: tree-wide: mark a few numeric constants as explicitly long long

--
Christopher Faulet



[ANNOUNCE] haproxy-2.3.19

2022-03-14 Thread Christopher Faulet

Hi,

HAProxy 2.3.19 was released on 2022/03/14. It added 14 new commits
after version 2.3.18.

All fixes included in this release were already described in the 2.4.14
announcement. Here is a copy-paste of the relevant parts:

  * An issue in the pass-through multiplexer leading to a connection leak on
the server side when a timeout occurred during connection establishment.
In this case, the server connection was detached from the application
stream but not closed. At this stage the connection could only be closed
by the server, if it was finally accepted, or by the kernel, after all
SYN retries. All versions as far back as 2.3 are affected by this bug.

  * An issue in the master CLI. When a command was sent to a worker, the
errors, especially write errors, during the response processing were not
properly handled. The session could remain stuck if a client quickly
closed the connection before the response was fully sent. The maxconn
value of the master CLI is set to 10. Thus, it could quickly become
unresponsive if this happened several times.

  * An issue with all HTX applets. The end of a message was only reported at
the HTX level. The channel's flags were not updated accordingly. The
only known visible effect of this bug was some server aborts erroneously
reported in the stats counters.

  * Proxy mode (tcp, http, cli...) was not properly reported when
displayed. The missing "syslog" and "peers" modes can now be reported.

  * The anti-loop protection in process_stream() was improved to only count
the no-progress calls.

Thanks everyone for your help and your contributions!

Please find the usual URLs below :
   Site index       : http://www.haproxy.org/
   Discourse        : http://discourse.haproxy.org/
   Slack channel    : https://slack.haproxy.org/
   Issue tracker    : https://github.com/haproxy/haproxy/issues
   Wiki             : https://github.com/haproxy/wiki/wiki
   Sources          : http://www.haproxy.org/download/2.3/src/
   Git repository   : http://git.haproxy.org/git/haproxy-2.3.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-2.3.git
   Changelog        : http://www.haproxy.org/download/2.3/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/


---
Complete changelog :
Christian Ruppert (1):
  DOC: Fix usage/examples of deprecated ACLs

Christopher Faulet (9):
  BUG/MINOR: hlua: Set conn-stream/channel EOI flags at the end of request
  BUG/MINOR: stats: Set conn-stream/channel EOI flags at the end of request
  BUG/MINOR: cache: Set conn-stream/channel EOI flags at the end of request
  BUG/MINOR: promex: Set conn-stream/channel EOI flags at the end of request
  DEBUG: cache: Update underlying buffer when loading HTX message in cache applet
  BUG/MEDIUM: mcli: Properly handle errors and timeouts during reponse processing
  DEBUG: stream: Add the missing descriptions for stream trace events
  DEBUG: stream: Fix stream trace message to print response buffer state
  BUG/MAJOR: mux-pt: Always destroy the backend connection on detach

William Lallemand (2):
  BUG/MINOR: add missing modes in proxy_mode_str()
  BUG/MINOR: cli: shows correct mode in "show sess"

Willy Tarreau (2):
  BUG/MINOR: stream: make the call_rate only count the no-progress calls
  BUILD: tree-wide: mark a few numeric constants as explicitly long long

--
Christopher Faulet



[ANNOUNCE] haproxy-2.4.15

2022-03-14 Thread Christopher Faulet

Hi,

HAProxy 2.4.15 was released on 2022/03/14. It added 26 new commits
after version 2.4.14.

This one contains more or less the same fixes as 2.5.5, except the
2.5-specific ones:

  * An issue in the pass-through multiplexer leading to a connection leak on
the server side when a timeout occurred during connection establishment.
In this case, the server connection was detached from the application
stream but not closed. At this stage the connection could only be closed
by the server, if it was finally accepted, or by the kernel, after all
SYN retries. All versions as far back as 2.3 are affected by this bug.

  * An issue in the master CLI. When a command was sent to a worker, the
errors, especially write errors, during the response processing were not
properly handled. The session could remain stuck if a client quickly
closed the connection before the response was fully sent. The maxconn
value of the master CLI is set to 10. Thus, it could quickly become
unresponsive if this happened several times.

  * A possible null deref in the htx_xfer_blks() function, when headers or
trailers were partially transferred. Concretely, it was only possible
when H2 trailers were copied from the mux to the channel buffer.

  * An issue with all HTX applets. The end of a message was only reported at
the HTX level. The channel's flags were not updated accordingly. The
only known visible effect of this bug was some server aborts erroneously
reported in the stats counters.

  * A theoretical risk of memleak in session_accept_fd() because of a wrong
goto label on the error path.

  * An alignment issue with the pool_head structure.

  * Proxy mode (tcp, http, cli...) was not properly reported when
displayed. The missing "syslog" and "peers" modes can now be reported.

  * "no-memory-trimming" global option was added to disable call to
malloc_trim(). Some users with very large numbers of connections have
been facing extremely long malloc_trim() calls on reload that managed to
trigger the watchdog! That's a bit counter-productive. It's even
possible that some implementations are not perfectly reliable or that
their trimming time grows quadratically with the memory used. With this
option, it is possible to disable this mechanism.

  * The anti-loop protection in process_stream() was improved to only count
the no-progress calls.
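
As a quick illustration of the new global option mentioned above, here is a
minimal sketch of where it goes:

global
   # disable the malloc_trim() calls, which could be extremely slow on
   # reload for setups handling very large numbers of connections
   no-memory-trimming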

Thanks everyone for your help and your contributions!

Please find the usual URLs below :
   Site index       : http://www.haproxy.org/
   Discourse        : http://discourse.haproxy.org/
   Slack channel    : https://slack.haproxy.org/
   Issue tracker    : https://github.com/haproxy/haproxy/issues
   Wiki             : https://github.com/haproxy/wiki/wiki
   Sources          : http://www.haproxy.org/download/2.4/src/
   Git repository   : http://git.haproxy.org/git/haproxy-2.4.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-2.4.git
   Changelog        : http://www.haproxy.org/download/2.4/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/


---
Complete changelog :
Christian Ruppert (1):
  DOC: Fix usage/examples of deprecated ACLs

Christopher Faulet (12):
  BUG/MEDIUM: htx: Fix a possible null derefs in htx_xfer_blks()
  REGTESTS: fix the race conditions in normalize_uri.vtc
  REGTESTS: fix the race conditions in secure_memcmp.vtc
  BUG/MINOR: hlua: Set conn-stream/channel EOI flags at the end of request
  BUG/MINOR: stats: Set conn-stream/channel EOI flags at the end of request
  BUG/MINOR: cache: Set conn-stream/channel EOI flags at the end of request
  BUG/MINOR: promex: Set conn-stream/channel EOI flags at the end of request
  DEBUG: cache: Update underlying buffer when loading HTX message in cache applet
  BUG/MEDIUM: mcli: Properly handle errors and timeouts during reponse processing
  DEBUG: stream: Add the missing descriptions for stream trace events
  DEBUG: stream: Fix stream trace message to print response buffer state
  BUG/MAJOR: mux-pt: Always destroy the backend connection on detach

Ilya Shipitsin (3):
  CI: github actions: add OpenTracing builds
  CI: github actions: use cache for OpenTracing
  CI: github actions: use cache for SSL libs

William Lallemand (2):
  BUG/MINOR: add missing modes in proxy_mode_str()
  BUG/MINOR: cli: shows correct mode in "show sess"

Willy Tarreau (8):
  CI: github actions: add the output of $CC -dM -E-
  BUG/MINOR: pool: always align pool_heads to 64 bytes
  BUG/MEDIUM: pools: fix ha_free() on area in the process of being freed
  MINOR: pools: add a new global option "no-memory-trimming"
  BUILD: pools: fix backport of no-memory-trimming on non-linux OS
  BUG/MINOR: session: fix theoretical risk of memleak in session_accept_fd()
  BUG/MINOR: stream: make the call_rate only count the no-progress calls
  BUILD: tree-wide: mark a few numeric constants as explicitly long long

[ANNOUNCE] haproxy-2.5.5

2022-03-14 Thread Christopher Faulet

Hi,

HAProxy 2.5.5 was released on 2022/03/14. It added 39 new commits
after version 2.5.4.

The main issues fixed in this version are:

  * An issue in the pass-through multiplexer leading to a connection leak on
the server side when a timeout occurred during connection establishment.
In this case, the server connection was detached from the application
stream but not closed. At this stage the connection could only be closed
by the server, if it was finally accepted, or by the kernel, after all
SYN retries. All versions as far back as 2.3 are affected by this bug.

  * Two issues in the HTTP client applet. First, it was possible to trigger
an infinite loop when the same HTTP client Lua instance was used to send
several POST requests, because a counter was not reset between the
requests. Then, the applet was unexpectedly able to consume the response
before its analysis by the application stream. To hit the bug, the
applet's I/O handler had to be scheduled before the stream's one. The
result was a crash because of a NULL pointer dereference.

  * An issue in the master CLI. When a command was sent to a worker, the
errors, especially write errors, during the response processing were not
properly handled. The session could remain stuck if a client quickly
closed the connection before the response was fully sent. The maxconn
value of the master CLI is set to 10. Thus, it could quickly become
unresponsive if this happened several times.

  * A possible null deref in the htx_xfer_blks() function, when headers or
trailers were partially transferred. Concretely, it was only possible
when H2 trailers were copied from the mux to the channel buffer.

  * A crash with the FCGI health-checks. When the multi-level source and
destination addresses were introduced, a bug was also introduced. The
FCGI multiplexer was relying on the server stream-interface to set some
parameters (REMOTE_ADDR/REMOTE_PORT and SERVER_NAME/SERVER_PORT). But
there is no stream-interface with the health-check because there is no
stream. Now, the server connection is used instead of the
stream-interface when the origin is a health-check.

  * A design issue for listener-less streams. When a stream was created from
a session without a listener, the request analyzers were not properly
set. Concretely, it is only an issue for client applets, more
specifically the HTTP ones. Thus only the HTTP client was affected by
this bug. However, there was no visible effect.

  * An issue with all HTX applets. The end of a message was only reported at
the HTX level. The channel's flags were not updated accordingly. The
only known visible effect of this bug was some server aborts erroneously
reported in the stats counters.

  * A theoretical risk of memleak in session_accept_fd() because of a wrong
goto label on the error path.

  * An alignment issue with the pool_head structure.

  * Some build issues were fixed: kFreeBSD is now a distinct target, the old
HA_ATOMIC_LOAD() macro now supports const pointers, and a few numeric
constants are explicitly marked as long long, etc.

In addition, it adds some improvements:

  * Proxy mode (tcp, http, cli...) was not properly reported when
displayed. The missing "syslog" and "peers" modes can now be reported.

  * "no-memory-trimming" global option was added to disable call to
malloc_trim(). Some users with very large numbers of connections have
been facing extremely long malloc_trim() calls on reload that managed to
trigger the watchdog! That's a bit counter-productive. It's even
possible that some implementations are not perfectly reliable or that
their trimming time grows quadratically with the memory used. With this
option, it is possible to disable this mechanism.

  * The dark mode support of the stats page was updated so that it also
applies to socket rows.

As usual, people using the 2.5 branch are encouraged to migrate to this
version. Thanks everyone for your help and your contributions!

Please find the usual URLs below :
   Site index       : http://www.haproxy.org/
   Discourse        : http://discourse.haproxy.org/
   Slack channel    : https://slack.haproxy.org/
   Issue tracker    : https://github.com/haproxy/haproxy/issues
   Wiki             : https://github.com/haproxy/wiki/wiki
   Sources          : http://www.haproxy.org/download/2.5/src/
   Git repository   : http://git.haproxy.org/git/haproxy-2.5.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-2.5.git
   Changelog        : http://www.haproxy.org/download/2.5/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/


---
Complete changelog :
Christopher Faulet (16):
  BUG/MEDIUM: mux-fcgi: Don't rely on SI src/dst addresses for FCGI health-checks
  BUG/MEDIUM: htx: Fix a possible null derefs in htx_xfer_blks()
  REGTESTS: fix the race conditions in normalize_uri.vtc
  REGTESTS: fix the 

Re: server check inter and timeout check relation

2022-03-14 Thread Artur

On 14/03/2022 at 11:40, Christopher Faulet wrote:

On 3/14/22 at 10:53, Artur wrote:

Hello,

I'd like to know how checks behave depending on the "inter" and
"timeout check" settings.

Let's try this simplified setup :

backend back
   mode tcp
   timeout check 5s
   server s1 1.2.3.4:80 check inter 2s
   server s2 1.2.3.5:80 check inter 2s

"inter 2s" is the default setup. We should have there one check every 2s
if everything is optimal.
"timeout check 5s" specify that the server check can take up to 5s (once
the connection established).

In this configuration, what happens if the check takes more than 2
seconds?

Does haproxy wait (up to 5 seconds) for this check to finish before
launching another check, or does it still launch checks every 2s anyway?



Hi,

For a given server, the inter/fastinter/downinter timeouts are used to
define the delay between the end of a health-check and the beginning
of the following one. This is independent of the evaluation time. Thus
in your example, a health-check will still run 2s after the end of the
previous one, regardless of its duration.



OK, I got it. One check at a time and 2s between each check.

However, as "timeout check" is set to 5 seconds, each check cannot run
longer than 5 seconds. It means that if the backend server does not send
data before the 5 seconds have elapsed, the check fails.

Am I right?

--
Best regards,
Artur




Re: server check inter and timeout check relation

2022-03-14 Thread Christopher Faulet

On 3/14/22 at 10:53, Artur wrote:

Hello,

I'd like to know how checks behave depending on the "inter" and
"timeout check" settings.

Let's try this simplified setup :

backend back
   mode tcp
   timeout check 5s
   server s1 1.2.3.4:80 check inter 2s
   server s2 1.2.3.5:80 check inter 2s

"inter 2s" is the default setup. We should have there one check every 2s
if everything is optimal.
"timeout check 5s" specify that the server check can take up to 5s (once
the connection established).

In this configuration, what happens if the check takes more than 2 seconds?
Does haproxy wait (up to 5 seconds) for this check to finish before
launching another check, or does it still launch checks every 2s anyway?



Hi,

For a given server, the inter/fastinter/downinter timeouts are used to define
the delay between the end of a health-check and the beginning of the following
one. This is independent of the evaluation time. Thus in your example, a
health-check will still run 2s after the end of the previous one, regardless of
its duration.
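
For illustration, here is a minimal sketch extending your example with the
related keywords (the values are arbitrary). The next check always starts
the configured delay after the previous one ends: 2s (inter) in the nominal
state, 1s (fastinter) while a server is transitioning, 10s (downinter)
while it is down, and each check itself may last up to 5s once the
connection is established:

backend back
   mode tcp
   timeout check 5s
   default-server inter 2s fastinter 1s downinter 10s rise 2 fall 3
   server s1 1.2.3.4:80 check
   server s2 1.2.3.5:80 check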


--
Christopher Faulet



server check inter and timeout check relation

2022-03-14 Thread Artur

Hello,

I'd like to know how checks behave depending on the "inter" and
"timeout check" settings.


Let's try this simplified setup :

backend back
 mode tcp
 timeout check 5s
 server s1 1.2.3.4:80 check inter 2s
 server s2 1.2.3.5:80 check inter 2s

"inter 2s" is the default setup. We should have there one check every 2s 
if everything is optimal.
"timeout check 5s" specify that the server check can take up to 5s (once 
the connection established).


In this configuration, what happens if the check takes more than 2 seconds?
Does haproxy wait (up to 5 seconds) for this check to finish before
launching another check, or does it still launch checks every 2s anyway?


--
Best regards,
Artur