Re: HAProxy 2.5 performs 302 redirect before responding with a 503 when httpchk fails

2022-01-25 Thread Bruno Henc
Hello Norman,

Usually checking with nbsrv() [1] does the trick:

backend ssl_backend-logi
    acl path_root path /
    acl backend_down nbsrv(ssl_backend-logi) eq 0
    redirect location /TXDLR_RA if path_root !backend_down

[1] https://cbonte.github.io/haproxy-dconv/2.5/configuration.html#7.3.2-nbsrv

Regards,

Bruno Henc

Re: TLS handshake error

2020-09-17 Thread Bruno Henc
Move ../test/recipes/80-test_ssl_new.t outside of the build root; that is what 
I meant by "throw out". rm -f ../test/recipes/80-test_ssl_new.t also works.




‐‐‐ Original Message ‐‐‐
On Tuesday, September 15, 2020 8:28 PM, vcjouni  
wrote:

> Hi,
>
> I tested for openssl-1.1.1g.tar.gz from openssl.org in Linux Mint 19.3:
>
> $ patch -p1 < reorder-sigalgs.patch
> patching file ssl/t1_lib.c
>
> ./config
>
> make
>
> make test
>
> Test Summary Report
>
> -
>
> ../test/recipes/80-test_ssl_new.t    (Wstat: 256 Tests: 29
> Failed: 1)
>   Failed test:  20
>   Non-zero exit status: 1
> Files=155, Tests=1466, 76 wallclock secs ( 1.74 usr  0.10 sys + 75.96
> cusr  9.72 csys = 87.52 CPU)
> Result: FAIL
> Makefile:207: recipe for target '_tests' failed
>
> What did you mean by throwing that test out? Now that test failed.
>
> Br,
>
> Jouni
>
> On 9/15/20 4:36 PM, Bruno Henc wrote:
>
> > Hi,
> > Last time I saw this error it involved TLS decryption by firewalls that 
> > didn't support RSA-PSS. Why they blow up
> > when the new, more secure RSA-PSS signature algorithms are used beats me, 
> > but it's principally on them for not supporting the latest IETF standards.
> > Attached is a patch that reorders the signature algorithms
> > in openssl 1.1.1 so that the pkcs1 ones are first. Also,
> > the test/recipes/80-test_ssl_new.t test needs to be thrown out for this to 
> > work. I would recommend trying to get this to work without docker and 
> > kubernetes first, and using the -d haproxy option to get more detailed 
> > OpenSSL logging.
> > It is also likely you will need to set ssl-default-bind-curves since the 
> > firewalls in question do not support curve x25519 either. This is a HAProxy 
> > 2.2 option (https://www.haproxy.com/blog/announcing-haproxy-2-2/). 
> > ssl-default-bind-curves P-256 should do the trick, although tuning this 
> > to include all supported curves should be done for production traffic.
> > As for implementing this in HAProxy, SSL_CTX_set1_sigalgs_list could be 
> > used in the part of the code that initializes the TLS connection, but it 
> > seems that somewhere in the handshake code the signature algorithms get 
> > renegotiated.
> > I haven't had any luck in identifying where exactly this renegotiation 
> > happens. Someone else might have more luck in writing up a patch that adds 
> > a "sigalgs" or similarly named option to adjust the signature algorithms.
> > Regards,
> > Bruno
> > ‐‐‐ Original Message ‐‐‐
> > On Tuesday, September 15, 2020 1:45 PM, vcjouni 
> > jouni.rosen...@valuecode.com wrote:
> >
> > > Hi!
> > > We can not get haproxy-ingress to work with TLS authentication. Only
> > > option to get this work is by using force-tlsv12 and then only Chrome
> > > works. Problem is TLS handshake decrypt error when using RSA-PSS
> > > signature algorithm, handshake fails every time. When we use
> > > force-tlsv12, only Chrome will change signature back to pkcs1 and then
> > > session is ok. Safari, Edge or IE does not work with any option and they
> > > keep offering RSA-PSS signature.
> > > This same CA bundle and cards has been used before with citrix netscaler
> > > ingress controller without problems (TLSv1.2). Smart cards are
> > > government provided FINEID cards, so we can't test them command line,
> > > only by using those cards with browser.
> > > I already discussed with haproxy-ingress builder jcmoraisjr and he
> > > suggested us to ask from haproxy mailing list.
> > > Docker image: image: quay.io/jcmoraisjr/haproxy-ingress
> > > haproxy -vv
> > > 
> > > HA-Proxy version 2.0.17 2020/07/31 - https://haproxy.org/
> > > Build options :
> > >   TARGET  = linux-glibc
> > >   CPU = generic
> > >   CC  = gcc
> > >   CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
> > > -fwrapv -Wno-address-of-packed-member -Wno-unused-label
> > > -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration
> > > -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers
> > > -Wno-implicit-fallthrough -Wno-stringop-overflow -Wno-cast-function-type
> > > -Wtype-limits -Wshift-negative-value -Wshift-overflow=2
> > > -Wduplicated-cond -Wnull-dereference
> 

Re: TLS handshake error

2020-09-15 Thread Bruno Henc
Hi,

Last time I saw this error it involved TLS decryption by firewalls that didn't 
support RSA-PSS. Why they blow up
when the new, more secure RSA-PSS signature algorithms are used beats me, but 
it's principally _on them_ for not supporting the latest IETF standards.

Attached is a patch that reorders the signature algorithms
in openssl 1.1.1 so that the pkcs1 ones are first. Also,
the test/recipes/80-test_ssl_new.t test needs to be thrown out for this to 
work. I would recommend trying to get this to work without docker and 
kubernetes first, and using the -d haproxy option to get more detailed OpenSSL 
logging.

It is also likely you will need to set ssl-default-bind-curves since the 
firewalls in question do not support curve x25519 either. This is a HAProxy 2.2 
option (https://www.haproxy.com/blog/announcing-haproxy-2-2/). 
ssl-default-bind-curves P-256 should do the trick, although tuning this to 
include all supported curves should be done for production traffic.
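
For reference, a minimal sketch of the global section (the curve list is 
illustrative and should be tuned to the clients you need to support):

global
    # prefer P-256 for middleboxes that cannot handle x25519, but keep
    # the stronger curves available for modern clients
    ssl-default-bind-curves P-256:P-384:X25519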

As for implementing this in HAProxy, SSL_CTX_set1_sigalgs_list could be used in 
the part of the code that initializes the TLS connection, but it seems that 
somewhere in the handshake code the signature algorithms get renegotiated.
I haven't had any luck in identifying where exactly this renegotiation happens. 
Someone else might have more luck in writing up a patch that adds a "sigalgs" 
or similarly named option to adjust the signature algorithms.

Regards,

Bruno

‐‐‐ Original Message ‐‐‐
On Tuesday, September 15, 2020 1:45 PM, vcjouni  
wrote:

> Hi!
>
> We can not get haproxy-ingress to work with TLS authentication. Only
> option to get this work is by using force-tlsv12 and then only Chrome
> works. Problem is TLS handshake decrypt error when using RSA-PSS
> signature algorithm, handshake fails every time. When we use
> force-tlsv12, only Chrome will change signature back to pkcs1 and then
> session is ok. Safari, Edge or IE does not work with any option and they
> keep offering RSA-PSS signature.
>
> This same CA bundle and cards has been used before with citrix netscaler
> ingress controller without problems (TLSv1.2). Smart cards are
> government provided FINEID cards, so we can't test them command line,
> only by using those cards with browser.
>
> I already discussed with haproxy-ingress builder jcmoraisjr and he
> suggested us to ask from haproxy mailing list.
>
> Docker image: image: quay.io/jcmoraisjr/haproxy-ingress
>
> haproxy -vv
>
> 
>
> HA-Proxy version 2.0.17 2020/07/31 - https://haproxy.org/
> Build options :
>   TARGET  = linux-glibc
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
> -fwrapv -Wno-address-of-packed-member -Wno-unused-label
> -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration
> -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers
> -Wno-implicit-fallthrough -Wno-stringop-overflow -Wno-cast-function-type
> -Wtype-limits -Wshift-negative-value -Wshift-overflow=2
> -Wduplicated-cond -Wnull-dereference
>   OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1
> USE_LUA=1 USE_ZLIB=1
>
> Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER -PCRE
> -PCRE_JIT +PCRE2 +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD
> -PTHREAD_PSHARED -REGPARM -STATIC_PCRE -STATIC_PCRE2 +TPROXY
> +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -VSYSCALL +GETADDRINFO
> +OPENSSL +LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY +TFO
> +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD -OBSOLETE_LINKER
> +PRCTL +THREAD_DUMP -EVPORTS
>
> Default settings :
>   bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
>
> Built with multi-threading support (MAX_THREADS=64, default=2).
> Built with OpenSSL version : OpenSSL 1.1.1g  21 Apr 2020
> Running on OpenSSL version : OpenSSL 1.1.1g  21 Apr 2020
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
> Built with Lua version : Lua 5.3.5
> Built with network namespace support.
> Built with transparent proxy support using: IP_TRANSPARENT
> IPV6_TRANSPARENT IP_FREEBIND
> Built with zlib version : 1.2.11
> Running on zlib version : 1.2.11
> Compression algorithms supported : identity("identity"),
> deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
> Built with PCRE2 version : 10.35 2020-05-09
> PCRE2 library supports JIT : yes
> Encrypted password support via crypt(3): yes
> Built with the Prometheus exporter as a service
>
> Available polling systems :
>   epoll : pref=300,  test result OK
>    poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
>
> Available multiplexer protocols :
> (protocols marked as  cannot be specified using 'proto' keyword)
> h2 : mode=HTX    side=FE|BE mux=H2
>   h2 : mode=HTTP   side=FE    mux=H2
>     : mode=HTX 

Re: Debugging ssl handshake failures

2020-09-09 Thread Bruno Henc
Corrected build instructions attached: openssl-2.2.2.2 should read haproxy-2.2.3.
Regards,
Bruno

apt-debuild
Description: Binary data


Re: Debugging ssl handshake failures

2020-09-09 Thread Bruno Henc
Hi,

> I take it that means there's no internal debug logging for the tls errors that 
> we can just expose via logfile?

Proof of concept patches are attached with build instructions. You may wish to 
edit the haproxy-2.2.3/debian/rules file to increase the -j setting to your 
current number of cores.
The "disambiguate-ssl-handshake-errors-1.patch" only adds additional error 
messages for the initial ClientHello processing - realistically, it's only 
useful to see if there is no SNI being sent (bots, healthchecks are the usual 
offenders).
The "disambiguate-ssl-handshake-errors-2.patch" implements everything the first 
patch implements, adds a trash chunk for logging additional error data to the 
conn structure, and reuses the SSL error logging logic from ssl_sock_dump_errors

Practically, this means that memory usage is higher - if I recall correctly 
(and it's way too late/early at this point) it's a 16KB overhead per connection 
(echo "show pools" | socat stdio /var/run/haproxy.sock will have a more 
detailed breakdown). Watching the output of show pools is recommended - while I 
haven't noticed a memory leak, keeping an eye on the trash pool is a good idea.

The fcgi protocol is also affected by the addition of the extra_err_data. I 
still have to smoke-test whether "proto fcgi" behaves as expected, or whether 
there's a potential segfault.

The patch works, but it requires more extensive testing. Sharing it as-is since 
I might not be able to pursue this further in a significant way for some time.

The mapping between the error messages and the potential causes can be a bit 
obscure, but it's still useful.
E.g. an invalid SNI when using strict-sni maps to:
tls_post_process_client_hello: no shared cipher
If there's a cipher mismatch, this also maps to the above error message.
A protocol version mismatch (e.g. trying TLS1 when only TLS1.2 is supported) 
results in:
tls_early_post_process_client_hello: unsupported protocol.

The list of error codes is available upstream at 
https://github.com/openssl/openssl/blob/OpenSSL_1_1_1-stable/crypto/err/openssl.txt#L2774 .

Regarding the packet capture question - exporting libpcap data via SPOE might 
be possible. It's an ongoing topic.

Regards,

Bruno

diff -ur a/include/haproxy/connection.h b/include/haproxy/connection.h
--- a/include/haproxy/connection.h  2020-09-10 05:49:02.705917730 +0200
+++ b/include/haproxy/connection.h  2020-09-10 05:12:30.287797636 +0200
@@ -382,8 +382,12 @@
struct connection *conn;
 
conn = pool_alloc(pool_head_connection);
-   if (likely(conn != NULL))
-   conn_init(conn);
+   if (likely(conn != NULL)) {
+   /* allocate only after the NULL check so a failed pool_alloc() is not dereferenced */
+   conn->extra_err_data = alloc_trash_chunk();
+   if (likely(conn->extra_err_data != NULL))
+   conn_init(conn);
+   }
return conn;
 }
 
@@ -458,7 +460,7 @@
sess->idle_conns--;
session_unown_conn(sess, conn);
}
-
+   free_trash_chunk(conn->extra_err_data);
sockaddr_free(&conn->src);
sockaddr_free(&conn->dst);
 
@@ -697,8 +699,12 @@
case CO_ER_SSL_CRT_FAIL:  return "SSL client certificate not trusted";
case CO_ER_SSL_MISMATCH:  return "Server presented an SSL certificate different from the configured one";
case CO_ER_SSL_MISMATCH_SNI: return "Server presented an SSL certificate different from the expected one";
-   case CO_ER_SSL_HANDSHAKE: return "SSL handshake failure";
+   case CO_ER_SSL_HANDSHAKE:return "SSL handshake failure";
case CO_ER_SSL_HANDSHAKE_HB: return "SSL handshake failure after heartbeat";
+   case CO_ER_SSL_HSHK_CL_SNI_GBRSH: return "SSL handshake failure: ClientHello server name (SNI) null, too long, or invalid";
+   case CO_ER_SSL_HSHK_CL_S_SNI: return "SSL handshake failure: ClientHello server name (SNI) missing. SNI is required when strict-sni is used";
+   case CO_ER_SSL_HSHK_CL_CIPHERS:   return "SSL handshake failure: ClientHello ciphers are invalid";
+   case CO_ER_SSL_HSHK_CL_SIGALGS:   return "SSL handshake failure: ClientHello signature algorithms are invalid";
case CO_ER_SSL_KILLED_HB: return "Stopped a TLSv1 heartbeat attack (CVE-2014-0160)";
case CO_ER_SSL_NO_TARGET: return "Attempt to use SSL on an unknown target (internal error)";
 
diff -ur a/include/haproxy/connection-t.h b/include/haproxy/connection-t.h
--- a/include/haproxy/connection-t.h2020-09-10 05:49:02.705917730 +0200
+++ b/include/haproxy/connection-t.h2020-09-10 05:13:00.007799264 +0200
@@ -233,6 +233,10 @@
CO_ER_SSL_MISMATCH_SNI, /* Server presented an SSL certificate different from the expected one */
CO_ER_SSL_HANDSHAKE,/* SSL error during handshake */
CO_ER_SSL_HANDSHAKE_HB, /* SSL error during handshake with heartbeat present */
+   CO_ER_SSL_HSHK_CL_SNI_GBRSH, /* SSL handshake failure: ClientHello server name (SNI) null, too long, or invalid */
+   CO_ER_SSL_HSHK_CL_S_SNI, /* SSL handshake 

Re: Debugging ssl handshake failures

2020-09-01 Thread Bruno Henc
‐‐‐ Original Message ‐‐‐
On Tuesday, September 1, 2020 6:57 PM, Kevin McArthur  
wrote:

> Hi haproxy
>
> I'm wondering if there is any way to debug the error message "www-https/1: 
> SSL handshake failure"? I've tried increasing log levels to debug etc, but 
> nothing seems to log about why the failure occurred

My first step would be to set up a custom log format that uses log converters 
with the appropriate fetches [1]:
log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r %[ssl_fc_protocol] %[ssl_fc_cipher]"
However, there is no substitute for a tcpdump that captures the TLS handshake. 
Wireshark is your best friend, especially if there are firewalls in front of 
HAProxy - in which case I would recommend looking at the ssl_fc_session_key 
fetch [3] to be able to decrypt the session in Wireshark. Preferably, limit 
which clients the key is logged for (to protect regular clients). Something 
along these lines may do the trick:

http-request set-var(txn.session_key) ssl_fc_session_key if { src 10.10.10.10 }

Then add %[var(txn.session_key)] to the log format.
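
Putting those pieces together, a minimal sketch (the bind line, client IP and 
log format are illustrative):

frontend fe_tls
    mode http
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    # only log the TLS session key for the client being debugged
    http-request set-var(txn.session_key) ssl_fc_session_key if { src 10.10.10.10 }
    log-format "%ci:%cp [%tr] %ft %b/%s %ST %B %[ssl_fc_protocol] %[ssl_fc_cipher] %[var(txn.session_key)]"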

> We've had a strange regression when upgrading from the 1.x series that 
> presented as very long 'Establishing SSL Connection' times in Chrome, but the 
> connections would eventually go through and load the page with an expected 
> cipher etc.

May I suggest trying out req.ssl_sni instead of ssl_fc_sni while 
troubleshooting the issue [4]? The former usually requires a tcp-request 
inspect-delay 5s or similar line (in tcp mode) to be used reliably. The latter 
is set after the session has been decrypted, so it's 100% available once you 
hit the backend rule processing.
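
For example, in a tcp-mode frontend (names and addresses are placeholders):

frontend fe_passthrough
    mode tcp
    bind :443
    # give haproxy time to buffer the full ClientHello before matching on it
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend bk_app if { req.ssl_sni -i app.example.com }
    default_backend bk_default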

Chrome has some strange probing logic that causes false positives and clogs the 
logs [5] and has some interesting side-effects [6]. I'd check the wireshark 
traffic to see which requests are triggering the errors. Also, is Chrome the 
only browser that is affected by this?

Just my 2 cents.
Best regards,

Bruno H.

[1] 
https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.4-ssl_fc_cipher
[2] https://www.haproxy.com/blog/introduction-to-haproxy-logging/
[3] https://www.haproxy.com/blog/announcing-haproxy-2-2/
[4] 
https://www.haproxy.com/documentation/hapee/latest/deployment-guides/tls-infrastructure/
[5] 
https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#4-option%20http-ignore-probes
[6] https://www.theregister.com/2020/08/21/chromiums_dns_network/

Re: max header size

2020-08-12 Thread Bruno Henc
Hi Ionel,

Could you please try setting tune.bufsize 32768 in the global section?
(See the associated configuration manual entry for tune.bufsize for
a possible answer to both of your questions and for the memory-usage 
implications).
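
For reference, the change would look like this (keep in mind that haproxy 
allocates roughly two buffers of this size per active stream, so memory usage 
grows accordingly):

global
    tune.bufsize 32768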

Regards,

Bruno

‐‐‐ Original Message ‐‐‐
On Wednesday, August 12, 2020 6:25 PM, Ionel GARDAIS 
 wrote:

> Hi list,
>
> I've upgraded to 2.2.2 and now haproxy sends HTTP 500 (logged as PH--).
> 'show errors' displays no error.
>
> After further digging, it looks like the content of "http-response 
> set-header" is too big.
>
> What is the limit enforced by haproxy as a header size ? Is the limit the 
> same when piling a set-header followed by multiple add-header ?
>
> Thanks,
> Ionel

HAProxy 2.0 "stick on src table mypeers/mytable" does not result in peers binding to socket address

2019-08-30 Thread Bruno Henc
Greetings,

Using "stick on src table mypeers/stickysrc" in a backend results in HAProxy 
deciding not to bind to the appropriate peers address for the local host (i.e. 
HAProxy thinks there are no stick tables in use). However using a http-request 
track-sc0 line will result in haproxy listening on the peers address. Also, 
defining the stick table in the backend itself or in a dummy backend also works.

The configuration below illustrates the issue:
peers mypeers
    bind 159.65.21.107:1024
    #peer hpx01 159.65.21.142:1024
    #peer hpx02 159.65.21.107:1024
    server hpx01 159.65.21.142:1024
    server hpx02
    table src_tracking type string size 10m store http_req_rate(10s),http_req_cnt
    table stickysrc type ip size 1m expire 1h store gpc0

listen ft_main
    mode http
    bind *:80
    stick on src table mypeers/stickysrc #peers mypeers #DOES NOT WORK
    #stick-table type ip size 1m expire 1h store gpc0 peers mypeers
    #stick on src #WORKS
    #stick on src table track_src #WORKS
    #http-request track-sc0 src table mypeers/src_tracking #WORKS
    #http-request track-sc0 src table mypeers/stickysrc #WORKS
    server local 127.0.0.1:81 check

#backend track_src
#    stick-table type ip size 1m expire 1h store gpc0 peers mypeers

The issue affects both the old (peer) and new (server/bind) peers section 
syntax. It only appears when there are only stick tables defined in the peers 
section - defining a dummy backend results in HAProxy binding to the peers 
socket address.

Limited testing shows that the mypeers/stickysrc table isn't being populated on 
new connections either.

Issue reported by duggles on freenode.
The new syntax was introduced in 
https://github.com/haproxy/haproxy/commit/1b8e68e89a

Regards,

Bruno Henc

Re: HA Proxy Support for RedHat 8 Enquiries

2019-08-21 Thread Bruno Henc
The RHEL7 package for HAProxy Enterprise is fully compatible with RHEL8, 
and there's also a build against openssl 1.1.1, so for all intents and 
purposes one can start using it on RHEL8.

Direct RHEL8 support should arrive with the release of HAProxy 
Enterprise 2.0, expected at the end of Q3 or the start of Q4. We can 
expedite the process if needed.



If you have any further questions regarding the enterprise version, feel 
free to reach out at supp...@haproxy.com or sa...@haproxy.com; the 
mailing list is oriented towards questions regarding open source 
development of the community edition.


On 8/21/19 9:42 AM, Eng, Lijwee wrote:


Hi HA Proxy Team,

Would like to check whether HAProxy is compatible with RHEL 8; based on 
the current documentation, 1-9r1 supports up to RHEL 7.


Will RHEL 8 be supported as well ?

https://www.haproxy.com/documentation/hapee/1-9r1/getting-started/os-hardware/

Please advise, thank you!

Regards

LiJwee Eng

Systems Engineer

Dell Technologies | Data Protection Solutions

Mobile +65 97516931

lijwee@dell.com


--
Bruno Henc
Support Engineer
HAProxy Technologies - Powering your uptime!
375 Totten Pond Road, Suite 302 | Waltham, MA 02451, US
+1 (844) 222-4340 | www.haproxy.com <https://www.haproxy.com/>


Re: FW: HAProxy??

2019-07-11 Thread Bruno Henc
Hello Austin, for any sales inquiries regarding HAProxy Enterprise 
Edition please contact sales @ haproxy . com or use the webform at 
https://www.haproxy.com/contact-us/ .

The mailing list is for the discussion of HAProxy Community Edition.

I have forwarded your email to the sales team, which will reach out to 
you with further information.


Regards,

On 7/11/19 3:15 PM, Austin Getz wrote:


Hello Team,

Can you please provide two quotes for the below for ETS?




--
Bruno Henc
Support Engineer
HAProxy Technologies - Powering your uptime!
375 Totten Pond Road, Suite 302 | Waltham, MA 02451, US
+1 (844) 222-4340 | www.haproxy.com <https://www.haproxy.com/>


Re: httplog clf missing values

2019-05-20 Thread Bruno Henc
Hello Aleksandar,

The Common Log Format is defined as:

 log-format "%{+Q}o %{-Q}ci - - [%trg] %r %ST %B \"\" \"\" %cp \
             %ms %ft %b %s %TR %Tw %Tc %Tr %Ta %tsc %ac %fc \
             %bc %sc %rc %sq %bq %CC %CS %hrl %hsl"

The empty fields are expected if you haven't configured http request and 
response header captures. Therefore, %hrl and %hsl are empty.
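
For example, something along these lines would populate them (header names 
and lengths are illustrative):

frontend https-in
    # captured request/response headers show up in %hrl / %hsl
    capture request header Host len 64
    capture response header Content-Type len 32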


Best regards,

Bruno

‐‐‐ Original Message ‐‐‐
On Saturday, May 18, 2019 7:33 PM, Aleksandar Lazic  wrote:

> Hi.
>
> I tried today this settings and miss some values in the log.
>
> frontend https-in
>   option httplog clf
>   option http-use-htx
> ...
>
>
> :::Client - - [18/May/2019:17:27:56 +] "GET
> /ocs/v2.php/apps/notifications/api/v2/notifications HTTP/2.0" 200 691 "" ""
> 51818 485 "https-in~" "nextcloud-backend" "server-cloud" 0 0 0 84 84  27 
> 4 0
> 0 0 0 0 "" ""
>
> Do I make something wrong or should I open a ticket because it's a bug?
>
> Regards
> Aleks





Re: Clarification needed on memory use behavior when using cache & nbthread

2019-03-27 Thread Bruno Henc
Hello Robin,

If there is production traffic on the node, it is possible that multiple old 
haproxy processes are still handling requests; until they finish serving those 
requests and exit, their memory cannot be freed.

To avoid this problem, I would highly recommend setting an appropriate 
hard-stop-after value in the global section of your configuration, tuned to 
the frequency of the reloads (the higher the reload frequency, the lower the 
hard-stop-after value). However, too low a value should be avoided, as it will 
cause abnormal termination of long-running sessions. See 
https://www.haproxy.com/documentation/hapee/1-8r2/onepage/#hard-stop-after for 
more information.
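
For example (the value is illustrative and should be tuned to your reload 
frequency and session lifetimes):

global
    # old processes are killed at most 30 minutes after a reload
    hard-stop-after 30m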

Can you check whether more than two haproxy processes are running on the 
instance you are having issues with?

The expected memory usage will depend heavily on how long the old instances 
keep running - budgeting north of 2x the size of the cache is prudent, as that 
should absorb the behavior above. Other HAProxy internal structures consume 
memory as well: stick tables, the SSL session cache, and the HTTP buffers 
associated with each session.
I would highly recommend checking out the performance section of the 
configuration manual to help with capacity planning.
https://www.haproxy.com/documentation/hapee/1-8r2/onepage/#3.2

Hope this helps.

Regards,

Bruno Henc

‐‐‐ Original Message ‐‐‐
On Wednesday, March 27, 2019 8:39 PM, Robin Björklin 
 wrote:

> Hi,
>
> I've tried using the new haproxy cache with "total-max-size 4095" and 
> "nbthread 4" on a machine with 12GB of RAM and I'm getting hit with "[ALERT] 
> 084/21 (1) : Unable to allocate cache." after a couple of reloads.
>
> HAProxy is started by running: /usr/bin/podman run --rm --name haproxy 
> --network=host -v /opt/haproxy/conf:/usr/local/etc/haproxy:ro 
> haproxy:1.9-alpine -W -db -f /usr/local/etc/haproxy
>
> The configuration is being reloaded by running: /usr/bin/podman kill --signal 
> USR2 haproxy
>
> What's the expected memory usage when reloading? 2x the size of the cache or 
> even more?
>
> Best regards,
> Robin Bjorklin

Re: DNS Resolver Issues

2019-03-21 Thread Bruno Henc
Hello Daniel,


You might be missing the "hold valid" directive in your resolvers section: 
https://www.haproxy.com/documentation/hapee/1-9r1/onepage/#5.3.2-timeout

This should force HAProxy to periodically re-fetch the DNS record values from 
the resolver.

A reload of the HAProxy instance also forces it to query all records from the 
resolver.
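
For reference, a minimal sketch of the resolvers section (values are 
illustrative):

resolvers default
    nameserver local 127.0.0.1:53
    # how long the last valid answer is kept before re-querying
    hold valid 10s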

Can you please retest with the updated configuration and report back the 
results?


Best regards,

Bruno Henc

‐‐‐ Original Message ‐‐‐
On Thursday, March 21, 2019 12:09 PM, Daniel Schneller 
 wrote:

> Hello!
>
> Friendly bump :)
> I'd be willing to amend the documentation once I understand what's going on :D
>
> Cheers,
> Daniel
>
> > On 18. Mar 2019, at 20:28, Daniel Schneller 
> > daniel.schnel...@centerdevice.com wrote:
> > Hi everyone!
> > I assume I am misunderstanding something, but I cannot figure out what it 
> > is.
> > We are using haproxy in AWS, in this case as sidecars to applications so 
> > they need not
> > know about changing backend addresses at all, but can always talk to 
> > localhost.
> > Haproxy listens on localhost and then forwards traffic to an ELB instance.
> > This works great, but there have been two occasions now, where due to a 
> > change in the
> > ELB's IP addresses, our services went down, because the backends could not 
> > be reached
> > anymore. I don't understand why haproxy sticks to the old IP address 
> > instead of going
> > to one of the updated ones.
> > There is a resolvers section which points to the local dnsmasq instance 
> > (there to send
> > some requests to consul, but that's not used here). All other traffic is 
> > forwarded on
> > to the AWS DNS server set via DHCP.
> > I managed to get timely updates and updated backend servers when using 
> > server-template,
> > but from what I understand this should not really be necessary for this.
> > This is the trimmed down sidecar config. I have not made any changes to dns 
> > timeouts etc.
> > resolvers default
> >
> > dnsmasq
> >
> > 
> >
> > nameserver local 127.0.0.1:53
> > listen regular
> > bind 127.0.0.1:9300
> > option dontlog-normal
> > server lb-internal loadbalancer-internal.xxx.yyy:9300 resolvers default 
> > check addr loadbalancer-internal.xxx.yyy port 9300
> > listen templated
> > bind 127.0.0.1:9200
> > option dontlog-normal
> > option httpchk /haproxy-simple-healthcheck
> > server-template lb-internal 2 loadbalancer-internal.xxx.yyy:9200 resolvers 
> > default check port 9299
> > To simulate changing ELB adresses, I added entries for 
> > loadbalancer-internal.xxx.yyy in /etc/hosts
> > and to be able to control them via dnsmasq.
> > I tried different scenarios, but could not reliably predict what would 
> > happen in all cases.
> > The address ending in 52 (marked as "valid" below) is a currently (as of 
> > the time of testing)
> > valid IP for the ELB. The one ending in 199 (marked "invalid") is an unused 
> > private IP address
> > in my VPC.
> > Starting with /etc/hosts:
> > 10.205.100.52 loadbalancer-internal.xxx.yyy # valid
> > 10.205.100.199 loadbalancer-internal.xxx.yyy # invalid
> > haproxy starts and reports:
> > regular: lb-internal UP/L7OK
> > templated: lb-internal1 DOWN/L4TOUT
> > lb-internal2 UP/L7OK
> > That's expected. Now when I edit /etc/hosts to only contain the invalid 
> > address
> > and restart dnsmasq, I would expect both proxies to go fully down. But only 
> > the templated
> > proxy behaves like that:
> > regular: lb-internal UP/L7OK
> > templated: lb-internal1 DOWN/L4TOUT
> > lb-internal2 MAINT (resolution)
> > Reloading haproxy in this state leads to:
> > regular: lb-internal DOWN/L4TOUT
> > templated: lb-internal1 MAINT (resolution)
> > lb-internal2 DOWN/L4TOUT
> > After fixing /etc/hosts to include the valid server again and restarting 
> > dnsmasq:
> > regular: lb-internal DOWN/L4TOUT
> > templated: lb-internal1 UP/L7OK
> > lb-internal2 DOWN/L4TOUT
> > Shouldn't the regular proxy also recognize the change and bring the backend 
> > up or down
> > depending on the DNS change? I have waited for several health check rounds 
> > (seeing
> > "* L4TOUT" and "L4TOUT") toggle, but it still never updates.
> > I also tried to have only the invalid address in /etc/hosts, then 
> > restarting haproxy.
> > The regular backends will never recognize it when I add the valid one back 
> > in.
> > T

Re: High CPU with Haproxy 1.9.4 (and 1.9.2)

2019-03-13 Thread Bruno Henc

Hello Nick,


Haproxy-1.9 is acting strange under certain conditions; I'll get back to 
you once I run some tests.



I would recommend following the usual procedure when dealing with such 
bugs: bisecting.


Ideally, you should start with HAProxy 1.7 and work your way up to HAProxy 
1.9. This is a bit tedious, especially because of the configuration 
changes, but is usually a good way to trace down the issue.



https://tech-blog.cv-library.co.uk/2014/10/08/debugging-haproxy-via-git-bisect/ 
describes the general approach.
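
In short, something along these lines (the good/bad tags are illustrative):

git bisect start
git bisect bad v1.9.4    # first version known to misbehave
git bisect good v1.6.9   # last version known to behave
# at each step, build the checked-out commit, try to reproduce the
# CPU spike, then mark the result:
git bisect good          # or: git bisect bad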



If you can share your configuration (excluding any sensitive details 
like IPs, passwords etc.) and operating system version here, and the 
estimated number of requests per second, I'll see if I can reproduce the 
issue in a lab setting (and do the aforementioned bisecting).



Best regards,

Bruno Henc

On 3/13/19 2:08 PM, Mark Janssen wrote:

Hi,

I've recently switched a system over from 1.6.9, which has been running 
fine for years, to 1.9.4.
I've updated the configuration to use nbthread instead of nbproc, and 
cleaned up the config a lot.


A few times now, however, i've seen haproxy using all available CPU on 
the system, even when traffic is mostly idle (or when the loadbalancer 
isn't even active anymore after a failover to the 2nd node).


There is some output on the udp syslog, and the proxy still seems to 
work fine, but i'm only seeing a small subset of the requests.


Is there any thing that can point me in the right direction?

echo "show info" | nc 127.0.0.1 14567
Name: HAProxy
Version: 1.9.4
Release_date: 2019/02/06
Nbthread: 8
Nbproc: 1
Process_num: 1
Pid: 20931
Uptime: 0d 2h31m20s
Uptime_sec: 9080
Memmax_MB: 0
PoolAlloc_MB: 27
PoolUsed_MB: 27
PoolFailed: 0
Ulimit-n: 223153
Maxsock: 223153
Maxconn: 10
Hard_maxconn: 10
CurrConns: 36
CumConns: 390500
CumReq: 5700138
MaxSslConns: 0
CurrSslConns: 36
CumSslConns: 182620
Maxpipes: 11500
PipesUsed: 0
PipesFree: 2
ConnRate: 0
ConnRateLimit: 0
MaxConnRate: 689
SessRate: 0
SessRateLimit: 0
MaxSessRate: 689
SslRate: 0
SslRateLimit: 0
MaxSslRate: 168
SslFrontendKeyRate: 0
SslFrontendMaxKeyRate: 145
SslFrontendSessionReuse_pct: 0
SslBackendKeyRate: 0
SslBackendMaxKeyRate: 0
SslCacheLookups: 48562
SslCacheMisses: 4962
CompressBpsIn: 0
CompressBpsOut: 0
CompressBpsRateLim: 0
ZlibMemUsage: 0
MaxZlibMemUsage: 0
Tasks: 657
Run_queue: 4294967285
Idle_pct: 51
node: grtzlb2
Stopping: 0
Jobs: 96
Unstoppable Jobs: 0
Listeners: 59
ActivePeers: 0
ConnectedPeers: 0
DroppedLogs: 0
BusyPolling: 0



--
Mark Janssen  -- m...@sig-io.nl <mailto:m...@sig-io.nl>
Unix / Linux Open-Source and Internet Consultant


Re: Adding Configuration parts via File

2019-03-08 Thread Bruno Henc

Hello Philipp,


I don't think there is a capability to include a list of ACLs. However, 
you can load the IP addresses once from a file via the acl -f flag:



acl is_admin src -f /etc/haproxy/admin_ip_list.txt


You would have to define an acl in each section, but the IP list would 
be the same for all rules.
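
For example, in each section (the deny action is just an illustration; use 
whatever rule you need, and put one IP or CIDR per line in the file):

listen service1
    bind 10.1.0.10:80
    acl is_admin src -f /etc/haproxy/admin_ip_list.txt
    http-request deny if !is_admin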



For a more detailed overview of ACLs, check out this blog post:

https://www.haproxy.com/blog/introduction-to-haproxy-acls/


I do have to admit that including ACLs is a neat idea. Alternatively, 
global ACLs would be nice.



Does this workaround solve your use case?


Best regards,


Bruno Henc


On 3/8/19 2:17 PM, Philipp Kolmann wrote:

Hi,

I have ACLs for Source-IPs for Admins for several services. These ACLs 
are identical for multiple listener-sections.


Would it be possible to have a file with several acl snipplets and 
source that at the proper section of the config file multiple times?

I haven't found anything in the docs that would make this possible.

My wished Setup:

admin_acl.conf:

acl is_admin src 10.0.0.1
acl is_admin src 10.0.0.2
acl is_admin src 10.0.0.3
acl is_admin src 10.0.0.4


haproxy.cfg:

listen service1
    bind 10.1.0.10:80
    include admin_acl.conf

     more parameters ...


listen service2
    bind 10.1.0.20:80
    include admin_acl.conf

     more parameters ...


listen service3
    bind 10.1.0.30:80
    include admin_acl.conf

     more parameters ...


The admin_acl needs to be maintained only once and can be used 
multiple times.


Is this already possible? Could such an include option be made for the 
config files?


thanks
Philipp





Re: %[] in use-server directives

2019-02-19 Thread Bruno Henc

Hi,


The following links should be able to help you out:

https://www.haproxy.com/blog/dynamic-configuration-haproxy-runtime-api/#dynamically-scaling-backend-servers

https://www.haproxy.com/blog/dynamic-scaling-for-microservices-with-runtime-api/#runtime-api

You might need to build a development version of HAProxy to take 
advantage of the latest features.
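
The gist of the approach in those posts: reserve disabled server slots with 
server-template, then fill and enable them through the runtime API. A rough 
sketch (names, addresses and the socket path are placeholders):

backend bk_app
    # reserve 10 empty, disabled slots that can be filled at runtime
    server-template srv 10 0.0.0.0:0 check disabled

echo "set server bk_app/srv1 addr 10.0.0.5 port 8080" | socat stdio /var/run/haproxy.sock
echo "set server bk_app/srv1 state ready" | socat stdio /var/run/haproxy.sock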



Let me know if you get stuck.


Best regards,


Bruno Henc

On 2/19/19 9:45 PM, Joao Morais wrote:



Em 19 de fev de 2019, à(s) 05:57, Willy Tarreau  escreveu:

In the past it was not possible
to dynamically create servers

I think I misunderstood something, but... how does one dynamically create a new 
server?






Re: haproxy reverse proxy to https streaming backend

2019-02-16 Thread Bruno Henc

Hello Thomas,


This looks like an interesting problem. If I have any spare time I'll 
take a more detailed look, although it's sad that the original author 
hasn't added https support yet. That would probably solve all of your 
woes and avoid needing haproxy just to downgrade https to http.



If I have any more information, I will try to share it as soon as possible.


Thank you for your patience and the detailed report on the issue you are 
experiencing.



Best regards,


Bruno Henc



On 2/16/19 4:06 PM, Thomas Schmiedl wrote:

Hello,

I use the xupnpd2 mediaserver (https://github.com/clark15b/xupnpd2) on
my router to display some hls-streams on my TV. xupnpd2 doesn't support
https. The author doesn't want add https support.

My idea is to use haproxy in this scenario on the router:
xupnpd2 (client) <---> http-traffic <---> haproxy <---> https-traffic
<---> https://www.mall.tv/zive (server)

xupnpd2 should receive the playlist (.m3u8) and the media-chunks (.ts)
locally via haproxy over http.

I use haproxy-1.9.4, my config:
frontend myfrontend
bind :8080
default_backend mybackend

backend mybackend
server node1 skenderbeu.gjirafa.com:443 ssl verify none

When using vlc-player directly with haproxy, it works without problems.
When using vlc-player with xupnpd2 and via haproxy, the displayed stream
(from this site: https://www.mall.tv/planespotting) is 4 hours behind
the actual time.

I hope someone could help me.

Best regards,
Thomas





Re: Weighted Backend's

2019-02-11 Thread Bruno Henc
on.



Let me know if you have any questions.

Best regards,

Bruno Henc

On 2/11/19 10:15 PM, James Root wrote:

Hey Aleks,

Thank you for the reply, I should have included my version. I am 
currently using HAProxy 1.8, but moving up a version is a possibility. 
I understand what your example is doing, but it has the same issue my 
original example has I think, that I have to have one unix socket per 
cluster. In my case, "cluster" is just a small collection of servers 
with the same service, but there could be dozens of these clusters.


In our setup, an inbound request gets routed to the correct backend 
based on the host header (in my original example, this backend would 
be "haproxy-test"). But then I effectively want to A/B test between 
the two clusters that can serve this backend. I could put every server 
in this one backend, with the proper weights, but that isn't exactly 
what I am looking for. Ideally, I would like if one cluster goes out 
completely that 503s get returned for any requests that would normally 
get round robined to that cluster. The only way I could find to 
actually enforce weighting between two clusters was to forward the 
request through a socket to a new "frontend" (functionally this is the 
same as running a proxy instance per cluster). This seems to work, but 
I am looking for a way to do it without opening up a large amount of 
unix sockets.



On Wed, Feb 6, 2019 at 11:43 AM Aleksandar Lazic  wrote:


Hi James.

Am 06.02.2019 um 16:16 schrieb James Root:
> Hi All,
>
> I am doing some research and have not really found a great way
to configure
> HAProxy to get the desired results. The problem I face is that I have
a service
> backed by two separate collections of servers. I would like to
split traffic
> between these two clusters (either using percentages or
weights). Normally, I
> would configure a single backend and calculate my weights to get
the desired
> effect. However, for my use case, the list of servers can be
update dynamically
> through the API. To maintain correct weighting, I would then have to
> re-calculate the weights of every entry to maintain a correct
balance.
>
> An alternative I found was to do the following in my
configuration file:
>
> backend haproxy-test
> balance roundrobin
> server cluster1 u...@cluster1.sock weight 90
> server cluster2 u...@cluster2.sock weight 10
>
> listen cluster1
>     bind u...@cluster1.sock
>     balance roundrobin
> >     server s1 127.0.0.1:8081
>
> listen cluster2
>     bind u...@cluster2.sock
>     balance roundrobin
> >     server s1 127.0.0.1:8082
> >     server s2 127.0.0.1:8083
>
> This works, but is a bit nasty because it has to take another
round trip through
> the kernel. Ideally, there would be a way to accomplish this
without having to
> open unix sockets, but I couldn't find any examples or any leads
in the haproxy
> docs.
>
> I was wondering if anyone on this list had any ideas to
accomplish this without
> using extra unix sockets? Or an entirely different way to get
the same effect?

Well as we don't know which version of HAProxy do you use I will
suggest you a
solution based on 1.9.

I would try to use the set-priority-* feature


https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.2-http-request%20set-priority-class

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.2-http-request%20set-priority-offset


https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.3.2-prio_class

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.3.2-prio_offset

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.3.3-src

I would try the following, untested but I think you get the idea.

frontend clusters

  bind u...@cluster1.sock
  bind u...@cluster2.sock

  balance roundrobin

  # I'm not sure if src works with unix sockets like this
  # maybe you need to remove the unix@ part.
  acl src-cl1 src u...@cluster1.sock
  acl src-cl2 src u...@cluster2.sock

  http-request set-priority-class -10s if src-cl1
  http-request set-priority-class +10s if src-cl2

#  http-request set-priority-offset 5s if LOGO
#  http-request set-priority-offset 5s if LOGO

  use_backend cluster1 if priority-class < 5
  use_backend cluster2 if priority-class > 5


backend cluster1
    server s1 127.0.0.1:8081 <h

Re: Anyone heard about DPDK?

2019-02-10 Thread Bruno Henc

Hi,


Another good explanation on what DPDK does is available here:

https://learning.oreilly.com/videos/oscon-2017/9781491976227/9781491976227-video306685

https://wiki.fd.io/images/1/1d/40_Gbps_IPsec_on_commodity_hardware.pdf



On 2/10/19 12:21 PM, Aleksandar Lazic wrote:

Am 10.02.2019 um 12:06 schrieb Lukas Tribus:

On Sun, 10 Feb 2019 at 10:48, Aleksandar Lazic  wrote:

Hi.

I have seen this in some twitter posts and asked myself if it's something 
usable for a loadbalancer like HAProxy?

https://www.dpdk.org/

To be honest it looks like a virtual NIC, but I'm not sure.

See:
https://www.mail-archive.com/haproxy@formilux.org/msg26748.html

8-O Sorry I have forgotten that Question.
Sorry the noise and thanks for your patience.


lukas

Greetings
Aleks





Re: Automatic Redirect transformations using regex?

2019-01-22 Thread Bruno Henc

Hello Joao Guimaraes,


The following lines should accomplish what you described in your email:


    acl is_main_site hdr(Host) -i www.mysite.com mysite.com
    http-request set-var(req.scheme) str(https) if { ssl_fc }
    http-request set-var(req.scheme) str(http) if !{ ssl_fc }

    http-request redirect code 301 location %[var(req.scheme)]://%[req.hdr(Host),regsub(www.,,i),lower,field(2,'.')].mysite.com%[capture.req.uri] if !is_main_site



Explained line by line, the ACL is_main_site prevents a redirect loop 
(www.mysite.com redirected to mysite.www or some other terrible 
monstrosity often found when dealing with redirects). I highly recommend 
thoroughly testing any redirect before deploying to production, as 
redirect loops are quite nasty to debug.



The second and third line define a variable req.scheme that is used to 
redirect either to http or https versions of a site. If you're doing 
HTTPS only, you can drop these two lines and hardcode the following line 
to redirect directly to HTTPS:



    acl is_main_site hdr(Host) -i www.mysite.com mysite.com
    http-request redirect code 301 location https://%[req.hdr(Host),regsub(www.,,i),lower,field(2,'.')].mysite.com%[capture.req.uri] if !is_main_site


Please note that set-var requires haproxy 1.7 or any later version.

Also, if you are not performing SSL termination on the HAProxy instance 
doing the redirect, you will probably need to read a header value (most 
likely X-Forwarded-Proto) instead of using { ssl_fc } to correctly set 
the req.scheme variable (alternatively, you can use the header value 
directly by starting the redirect location with 
%[hdr(X-Forwarded-Proto)]:// ).



Finally, the redirect itself can be explained:

    http-request redirect code 301 location %[var(req.scheme)]://%[req.hdr(Host),regsub(www.,,i),lower,field(2,'.')].mysite.com%[capture.req.uri] if !is_main_site



As explained above, the %[var(req.scheme)]:// part sets the HTTP scheme to 
either http:// or https://


The regsub(www.,,i) part strips the www. prefix (if present) from e.g. 
www.mysite.fr to leave only mysite.fr. The i flag in regsub means that a 
case-insensitive match is performed. If you need to match multiple 
patterns (e.g. pictures.mysite.fr), chain multiple regsub statements.


Lower simply turns everything lowercase.

Field does the magic in this redirect and splits the prepared header 
string by the separator '.' into a list (starting with index 1). We are 
only interested in the 2nd part, that is, the TLD. Please note that any 
insanity with ccTLDs 
<https://en.wikipedia.org/wiki/Country_code_top-level_domain> 
(mysite.co.uk), multilevel subdomains (my.pictures.mysite.fr) or similar 
won't work with this redirect. If you need a redirect with general 
support for those, I recommend using reqirep. Alternatively, if you need 
to cover just one ccTLD, you can use regsub to replace .co.uk with .uk. 
Also, as Aleksandar Lazic mentioned in his reply, haproxy map files are 
an option. Map files might be more pleasant than reqirep if you need to 
handle something exotic.



capture.req.uri saves the whole URI (path + query string), so if you 
accessed mysite.fr/cute.php?cat the redirect would go to 
fr.mysite.com/cute.php?cat. If you just used path, you would lose the 
?cat query parameter at the end.



Hope this helps. My apologies for the longer email, but covering the 
general case of the problem requires mentioning the major caveats you 
might experience. It turns out that rewriting URLs is a non-trivial (and 
rather not-fun) exercise.


Let me know if you have any questions.

Best regards,

Bruno Henc


On 1/21/19 11:40 PM, Joao Guimaraes wrote:

Hi Haproxy team!

I've been trying to figure out how to perform automatic redirects 
based on source URL transformations.


*Basically I need the following redirect: *

mysite.*abc* redirected to *abc*.mysite.com <http://mysite.com>.


Note that mysite.abc is not fixed, must apply to whatever abc wants to be.

*Other examples:*
*
*

mysite.fr <http://mysite.fr> TO fr.mysite.com <http://fr.mysite.com>
mysite.es <http://mysite.es> TO es.mysite.com <http://es.mysite.com>
mysite.us <http://mysite.us> TO us.mysite.com <http://us.mysite.com>
mysite.de <http://mysite.de> TO de.mysite.com <http://de.mysite.com>
mysite.uk <http://mysite.uk> TO uk.mysite.com <http://uk.mysite.com>


Thanks in advance!
Joao Guimaraes




Re: SOAP service healthcheck

2018-12-05 Thread Bruno Henc

Hello,


One option is to implement a sidecar/watchdog service which queries the 
SOAP service directly and exposes a /check URI that HAProxy can use for 
the http-check.
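
A minimal sketch of the haproxy side, assuming the sidecar listens on port 
8081 next to the SOAP service on port 8080 (addresses are placeholders):

backend bk_soap
    option httpchk GET /check
    http-check expect string OK
    # health checks go to the sidecar, real traffic to the SOAP service
    server app1 10.0.0.10:8080 check port 8081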



I'm not sure if there's a direct way to POST data to the http check; I 
will let you know if I find one.



Hope this helps.


Best regards,


Bruno Henc


On 12/6/18 8:28 AM, Māra Grīnberga wrote:

Hi,

I'm new to Haproxy and I've a task for which I can't seem to find a 
solution online. Probably, I'm not looking in the right places.
I need to check if a SOAP service responds before sending requests to 
the server. I've read about this option:

       option httpchk GET /check
        http-check expect string OK
I think, it's what I need. But is there a way to pass SOAP envelope to 
this "/check" service?


Any suggestions and help would be appreciated!

Best regards,
Mara


Re: Regarding Client IP

2018-11-15 Thread Bruno Henc

Hello,


To get the client IP information on the smtp server you will need to 
configure haproxy to send proxy protocol data and the smtp server to 
receive it. Postfix supports proxy protocol and you can see at 
https://www.haproxy.com/blog/efficient-smtp-relay-infrastructure-with-postfix-and-load-balancers/ 
how it can be implemented. For more information about proxy protocol 
support in different software, see also 
https://www.haproxy.com/blog/haproxy/proxy-protocol/ .
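
A minimal sketch of the haproxy side (addresses are placeholders; the 
receiving side is covered in the blog posts above):

listen smtp
    mode tcp
    bind :25
    # send-proxy prepends the PROXY protocol header carrying the client IP
    server postfix1 10.0.0.20:10025 send-proxy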


Which smtp server are you using and which operating system are you 
running it on?



Best regards,


Bruno Henc


On 11/16/18 7:13 AM, Ram Chandra wrote:


Dear Team,

I have configured haproxy for an smtp server and it is working fine,
but I am getting the haproxy IP instead of the client IP.

Please suggest, it's urgent.








Thanks & Regards:

R.C. भाकर
MO - 9001092999 | E-mail - ram.chan...@dil.in




Re: Balance based on network/cpu load

2018-11-13 Thread Bruno Henc

Hello,


Not sure if there is a direct way to do this, but you can always create 
a monitoring process that uses the haproxy runtime API to put a server 
into MAINT or DRAIN state until the CPU / network load drops. You then 
have a simple watchdog process which reads the output from your 
monitoring tools to decide if a server needs to be disabled or 
re-enabled.
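
For example, the watchdog could issue commands like these (backend/server 
names and the socket path are placeholders):

# stop sending new connections to a busy server, let existing ones finish
echo "set server bk_app/web1 state drain" | socat stdio /var/run/haproxy.sock
# put it back in rotation once the load drops
echo "set server bk_app/web1 state ready" | socat stdio /var/run/haproxy.sock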



Hope this helps.


Best regards,


Bruno Henc

On 11/13/18 9:27 AM, Jessy van Baal wrote:


Hi there!

Is there a way that HAProxy 1.8 can balance based on the network or 
CPU load on the backend servers?
Let's say a backend server has 90% CPU usage; it gets taken out of the load 
balancing pool for a while until it stabilizes.


Thanks in advance.

Yours sincerely,

Jessy van Baal