Re: [PATCH 1/4] CLEANUP: cfgparse: Remove unused label end

2018-02-19 Thread Willy Tarreau
On Tue, Feb 20, 2018 at 12:49:43AM +0100, Tim Duesterhus wrote:
> This removes the end label from parse_process_number() which
> is unused since 5ab51775e736511b7e54f42e080dcef76a284da9, which
> first was released in haproxy 1.8.0.
(...)

Thanks Tim, all 4 patches applied.

Willy

What is the difference between session and request?

2018-02-19 Thread flamesea12
Hi all, I found that there are fe_conn, fe_req_rate, fe_sess_rate, be_conn and be_sess_rate, but there is no be_req_rate. I understand that there might be multiple requests in one connection, but what is a session here? And how can I get be_req_rate? Thank you
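
There is no built-in be_req_rate sample fetch in haproxy 1.8, but a per-backend request rate can be approximated with a stick-table. The sketch below is only illustrative (the backend name be_app, the server address and the 10s window are made up): every request is tracked under a constant key, so the table's single entry counts the backend's overall HTTP request rate.

    backend be_app
        # One-entry table storing an HTTP request rate over a 10s window.
        stick-table type string size 1 expire 1m store http_req_rate(10s)
        # Track every request under the same constant key ("be_app").
        http-request track-sc0 str(be_app)
        # The rate is then readable as sc0_http_req_rate in this backend,
        # or from elsewhere via str(be_app),table_http_req_rate(be_app).
        server srv1 192.168.0.10:80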

[PATCH 4/4] CLEANUP: pools: Remove unused end label in memory.h

2018-02-19 Thread Tim Duesterhus
This removes the end label from memory.h. The labels are unused as of cf975d46bca2515056a4f55e55fedbbc7b4eda59, which is unreleased (and is incidentally the first commit containing those labels, so they have never been used).
---
 include/common/memory.h | 4 ++--
 1 file changed, 2 insertions(+), 2

[PATCH 1/4] CLEANUP: cfgparse: Remove unused label end

2018-02-19 Thread Tim Duesterhus
This removes the end label from parse_process_number() which is unused since 5ab51775e736511b7e54f42e080dcef76a284da9, which first was released in haproxy 1.8.0.
---
 src/cfgparse.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 40facd5da..27d7eee7b 100644

[PATCH 2/4] CLEANUP: spoe: Remove unused label retry

2018-02-19 Thread Tim Duesterhus
This removes the retry labels from spoe_send_frame and spoe_recv_frame which are unused since d5216d474d69856a282e4443f180af2093a80d6c, which is unreleased, but was backported to haproxy 1.8 as f13f3a4babdb1ce23a7e982c765704bca728111a.
---
 src/flt_spoe.c | 2 --
 1 file changed, 2 deletions(-)

[PATCH 3/4] CLEANUP: h2: Remove unused labels from mux_h2.c

2018-02-19 Thread Tim Duesterhus
This removes the unused next_header_block and try_again labels from mux_h2.c. try_again is unused as of a76e4c21839cafd036fbe755416569206502c1d9, which first appeared in haproxy 1.8.0. next_header_block is unused as of 872855998bd03d5224e0e5cd6aef9b91e2a6de1d, which was backported to haproxy

Re: BUG/MINOR: dns: false positive downgrade of accepted_payload_size

2018-02-19 Thread Lukas Tribus
Hello Baptiste,

On 19 February 2018 at 18:59, Baptiste wrote:
> Hi guys,
>
> While working with consul, I discovered a "false positive" corner case which
> triggers a downgrade of the accepted_payload_size.

Is this downgrade a good thing in the first place? Doesn't it hide
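
For readers less familiar with the setup being discussed: accepted_payload_size is configured per resolvers section. A minimal sketch, with a made-up section name, illustrative addresses and an illustrative 8192-byte value:

    resolvers consul
        nameserver consul1 127.0.0.1:8600
        # Advertise a larger EDNS0 payload so big consul responses fit.
        accepted_payload_size 8192
        resolve_retries 3
        timeout retry   1s
        hold valid      10s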

Re: HAPROXY + keepalived + NFSv4 (NFS Ganesha)

2018-02-19 Thread Shawn Heisey
On 2/19/2018 10:08 AM, TomK wrote:
> Wondering if there is a way to set up an HA NFSv4 server using HAPROXY
> and keepalived, or if anyone has tried a setup that doesn't result in the client
> disconnecting with this error even when using the VIP through a basic
> HAPROXY + keepalived config:
>

HAPROXY + keepalived + NFSv4 (NFS Ganesha)

2018-02-19 Thread TomK
Hey Guys,

Wondering if there is a way to set up an HA NFSv4 server using HAPROXY and keepalived, or if anyone has tried a setup that doesn't result in the client disconnecting with this error even when using the VIP through a basic HAPROXY + keepalived config:

[root@ipaclient01 ~]# cd /n
-bash: cd: /n:
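
For reference, such setups are usually attempted as plain TCP passthrough of port 2049 behind the keepalived VIP, with long timeouts so idle mounts are not cut. The following is only a sketch with made-up addresses and server names, not a statement of what TomK is running:

    listen nfsv4
        bind 192.168.0.100:2049          # the keepalived VIP
        mode tcp
        option tcplog
        # NFS connections are long-lived; short timeouts cause disconnects.
        timeout client 1h
        timeout server 1h
        server ganesha1 192.168.0.11:2049 check
        server ganesha2 192.168.0.12:2049 check backup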

Re: [PATCH]; BUILD/(VERY) MINOR

2018-02-19 Thread David CARLIER
Oh right, makes a lot of sense, let's drop it then :-)

On 19 February 2018 at 07:19, Willy Tarreau wrote:
> Hi David,
>
> On Mon, Feb 12, 2018 at 02:14:28PM +, David CARLIER wrote:
> > Hi
> >
> > I had this patch locally for a couple of weeks just having the proper
> > current
> >

Re: haproxy 1.8 ssl backend server leads to server session aborts

2018-02-19 Thread Willy Tarreau
Hi Christopher,

On Mon, Feb 19, 2018 at 03:24:09PM +0100, Christopher Faulet wrote:
> Someone on discourse reports a problem with this patch:
>
> https://discourse.haproxy.org/t/random-sa-errors-with-haproxy-1-8-3/2116/6
>
> I asked him to test the attached patch. But it could be cool to have

Re: haproxy 1.8 ssl backend server leads to server session aborts

2018-02-19 Thread Christopher Faulet
On 14/02/2018 at 18:53, Willy Tarreau wrote:
On Wed, Feb 14, 2018 at 06:20:42PM +0100, Mateusz Malek wrote:
Hi,

On 14.02.2018 17:53, Willy Tarreau wrote:
On Wed, Feb 14, 2018 at 05:29:57PM +0100, Olivier Houchard wrote:
What about what's attached, instead? I think it should work.

connection limit per server, scattered to several backends

2018-02-19 Thread muellste
Hey list, I am looking for a way to limit HTTP connections for a server which is part of several backends. I have a configuration like this:

backend be1
    server srv1 192.168.0.1 maxconn 100
    server srv2 192.168.0.2 maxconn 100

backend be2
    balance uri whole
    server srv1 192.168.0.1
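
With this configuration each backend applies its own maxconn of 100 to srv1, so the server can still receive up to 200 connections in total. One workaround that is often suggested is to funnel all srv1 traffic through a single internal proxy so that one maxconn applies across both backends. The sketch below is only an illustration (the abns socket name, the ports and the send-proxy usage are assumptions, not part of the original configuration):

    backend be1
        server srv1 abns@srv1-limit send-proxy
        server srv2 192.168.0.2:80 maxconn 100

    backend be2
        balance uri whole
        server srv1 abns@srv1-limit send-proxy

    # Single funnel for the physical server: this maxconn now covers
    # traffic coming from both be1 and be2.
    listen srv1-limit
        bind abns@srv1-limit accept-proxy
        mode http
        server srv1 192.168.0.1:80 maxconn 100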

Re: What is a nice way to bypass the maintenance mode for certain IPs?

2018-02-19 Thread Willy Tarreau
Hi,

On Mon, Feb 19, 2018 at 12:18:36PM +, Pieter Vogelaar wrote:
> Hi,
>
> At the moment if we set backends in maintenance mode, the servers can't be
> reached by anyone.
> Is it possible to still allow traffic from certain IPs (of the office
> network) so that testing can be done, before

Re: [PATCH] DOC: cfgparse: Warn on option (tcp|http)log in backend

2018-02-19 Thread Willy Tarreau
Hi Tim,

On Mon, Feb 19, 2018 at 12:55:53PM +0100, Tim Düsterhus wrote:
> Willy,
>
> On 05.02.2018 at 20:52, Tim Duesterhus wrote:
> > The option does not seem to have any effect since at least haproxy
> > 1.3. Also the `log-format` directive already warns when being used
> > in a backend.
>

What is a nice way to bypass the maintenance mode for certain IPs?

2018-02-19 Thread Pieter Vogelaar
Hi,

At the moment, if we set backends in maintenance mode, the servers can't be reached by anyone. Is it possible to still allow traffic from certain IPs (of the office network) so that testing can be done before the backend is made available to the general public again?

Best regards,
Pieter
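
One approach that avoids touching the maintenance state at all relies on the fact that the disabled state is tracked per server line, per backend: route the office network to a duplicate backend whose server lines stay enabled. The following is only a sketch with made-up names and the documentation subnet 203.0.113.0/24 standing in for the office range:

    frontend fe_main
        bind :80
        acl office_net src 203.0.113.0/24
        use_backend be_app_testing if office_net
        default_backend be_app

    backend be_app
        # Put this server into maintenance as usual; only the public path is affected.
        server srv1 192.168.0.1:80 check

    backend be_app_testing
        # Same physical server, separate server state, so office traffic still works.
        server srv1 192.168.0.1:80 check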

Re: [PATCH] DOC: cfgparse: Warn on option (tcp|http)log in backend

2018-02-19 Thread Tim Düsterhus
Willy,

On 05.02.2018 at 20:52, Tim Duesterhus wrote:
> The option does not seem to have any effect since at least haproxy
> 1.3. Also the `log-format` directive already warns when being used
> in a backend.
> *snip*

Once again I did not receive a reply for two weeks [1] and you handled later
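
For readers following the thread: the log format is decided by the proxy that does the logging, i.e. a defaults, frontend or listen section, which is why the directive is a no-op in a backend. A minimal illustration of the intended placement (section and server names are made up):

    defaults
        mode http
        log global
        option httplog      # effective here (or in a frontend/listen section)

    frontend fe_main
        bind :80
        default_backend be_app

    backend be_app
        # "option httplog" here would have no effect, which is what the
        # patch under discussion warns about.
        server srv1 192.168.0.1:80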

[PATCH] Add a testcase for my multi-port + multi-server listener

2018-02-19 Thread Philipp Kolmann
Hi,

I had a patch adding a test case for my multi-port + multi-server listener issue, which was fixed in 1.7.10, but it seems to have been forgotten. I am adding this patch again. Thanks for considering adding it to the test cases.

Thanks,
Philipp
--