Re: BUG/MINOR: dns: false positive downgrade of accepted_payload_size

2018-02-21 Thread Lukas Tribus
Hello Baptiste,



On 21 February 2018 at 19:59, Lukas Tribus  wrote:
> Baptiste, I don't think you'd find the symptoms I have in mind
> acceptable on a load-balancer, so there has to be a misunderstanding
> here. I would like to do some tests, maybe I can come up with a simple
> testcase that shows the behavior and then we can review the situation
> based on that testcase; I will probably need a few days for this
> though.

So this is what I did: I pulled current haproxy master (5e64286bab)
and applied your patch on top of it. I also added "hold obsolete 30s"
to the configuration in all those tests.


Two things that I noticed:
- Google DNS and recent Bind instances (and probably many others) don't
actually truncate the response; they don't add any A records to the
response when they set TC, so the TC response is not incomplete but
actually completely empty (repro: use the testcase against 8.8.8.8 with
max payload 1280)
- OpenDNS (208.67.222.222) actually truncates the response (just like
old Bind instances); however, haproxy is unable to parse that response,
so a TC response from OpenDNS is always rejected (repro: use the testcase
against 208.67.222.222 with max payload 1280)

So, surprisingly enough, in both of these cases the "auto-downgrade"
does not reduce the number of servers in the backend; instead it kills
the backend completely (with your patch applied and with "hold obsolete
30s", of course).

What I was actually looking for is a testcase that reduces the number
of servers in the backend, but I guess that would require a DNS server
that truncates the reply "old-style" and at the same time does not
cause haproxy to reject the response; I don't know what haproxy
dislikes about the OpenDNS TC response.


Back to the original testcase though:
- testcase config attached (a sketch of the relevant parts follows below)
- "100_pointing_to.localhost.ltri.eu" returns 100 A records in the
localhost range; it requires approx. 1600 bytes of payload size
- we can trigger the "auto-downgrade" very easily by briefly
interrupting DNS traffic via an iptables rule (iptables -A INPUT -i
eth0 -s 8.8.8.8 -j DROP && sleep 10 && iptables -D INPUT -i eth0 -s
8.8.8.8 -j DROP)
- after we have triggered the auto-downgrade, haproxy does not recover and
no backend servers will be alive until we reload
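
For illustration, the setup looks roughly like this (only a sketch; the
names and the exact payload value are illustrative, the attached testcase
file is authoritative):

    resolvers mydns
        nameserver google 8.8.8.8:53
        accepted_payload_size 8192   # explicitly raised above 1280
        hold obsolete 30s

    backend be_many_servers
        # expands to up to 100 servers from the A records returned for the name
        server-template srv 100 100_pointing_to.localhost.ltri.eu:80 check resolvers mydns init-addr none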


Auto-downgrade behaves exactly as I expected based on our previous
conversation. The exact end result depends on the behavior of the DNS
server, but none of those cases is desirable:

Case 1 (testcase against Google DNS, recent Bind):
- when the auto-downgrade fires, the response will be TC without any
records; haproxy will disable all servers and the entire backend will
be down (fix: restart haproxy)

Case 2 (testcase against OpenDNS):
- when the auto-downgrade fires, the response will be TC, which haproxy is
unable to parse; haproxy will disable all servers and the entire
backend will be down (fix: restart haproxy)

Case 3 (assumption based on what ASA on discourse reports, likely old Bind):
- when the auto-downgrade fires and the response is TC, the TC flag is
ignored, which means the reply is still considered, downgrading the number
of servers in the backend to a lower number (whatever fit in the 1280-byte
reply), which will most likely overload the remaining backend servers
(after all, there is probably a reason a certain number of servers is in
the DNS)


"hold obsolete" can only help if haproxy is able to recover; but the
auto-downgrade makes sure no future DNS requests works as expected so
whatever value "hold obsolete" is set to, once "hold obsolete" is
over, the problem will show up.


Let's talk about the likelihood of an admin configuring a payload size
above 1280: I think it's safe to assume that this is configured based
on actual needs, so an admin would hit one of the 3 cases above,
unless I'm missing something. I completely fail to understand the
benefit of this feature in haproxy.


So based on these tests and cases, I would ask you again to consider
removing this feature altogether.



cheers,
lukas


auto-payload-size-downgrade-testcase.cfg
Description: Binary data


Re: BUG/MINOR: dns: false positive downgrade of accepted_payload_size

2018-02-21 Thread Lukas Tribus
Hello Baptiste,



I'm sorry if my comments are blunt, but I think this discussion is
important and I do not want my messages to be ambiguous. I do
appreciate all the work you are doing in the DNS subsystem.



On 21 February 2018 at 18:05, Baptiste  wrote:
>> However in Haproxy the administrator *explicitly* configures a higher
>> payload size, because this higher payload size is probably actually
>> *required* to get all backend servers. Silently downgrading the
>> payload size is harmful in my opinion here.
>>
>
> Maybe there is a misunderstanding here.
> HAProxy already downgrades the accepted payload size "silently". I mean, this
> "feature" already exists.
> This patch simply ensures the downgrade happens in timeout cases only.

I am aware that this patch merely addresses a corner case of this
already existing feature, and I'm not saying the patch doesn't fix that
corner case. I am saying that I have a strong opinion about this
feature in the first place, and I responded to this thread merely
because I was not aware of this feature previously.



>> Ok, but you can see how ignoring the TC flag, combined with automatic
>> payload size downgrade will make the backend servers number fluctuate
>> with a little bit of packet loss? So with 40 servers in DNS and the
>> loss of a single packet we will downgrade the entire backend to
>> whatever fitted in the 1280 byte downgraded response.
>>
>> I would call this behavior highly undesirable.
>>
>
> You can play with "hold obsolete <period>" to ensure that unseen records are
> kept for <period> time.

I don't see how this is supposed to address the problem. The payload size
is downgraded permanently (as I've found out below), not only for the next
retry request, so we will use 1280 forever, which will not contain the
complete server set (after all, that's why the admin raised the
payload size in the first place). So we will be missing - possibly a
lot of - backend servers until we reload haproxy, which will only work
until the next DNS packet is lost, at which point haproxy degrades again.



>> Failing over to TCP on TC responses is the proper solution to this
>> all, but in the meantime I would argue that we should make the
>> behavior as predictable and stable as we can.
>>
>>
>> Let me know what you think,
>>
>
> I think that until we can do DNS over TCP, it is safer to downgrade the
> announced accepted payload size in case of a timeout and ensure we'll still
> receive answers than to never downgrade and stop receiving answers.

I disagree (strongly). We can't do anything about TC and DNS over TCP
in haproxy 1.8. But it is my opinion that auto-downgrading the accepted
payload size while ignoring TC is problematic in *a lot* of
situations, with a very questionable benefit.

When is the auto-downgrade supposed to help? When we have fluctuating
PathMTU issues in a datacenter, I think we should fail hard and fast,
not hide the problem. Other than that (hiding PathMTU problems), it
will kneecap the load balancer when a single IP fragment of a DNS
response is lost, by reducing the number of available servers.


Sure, if we were writing lookup code for a recursive DNS server, we
should implement this fallback in every case. But we would also have
proper TC handling in that case, so we would not work with truncated
responses and we would certainly not permanently downgrade our ability
to get large responses.



> As I explained above, this "feature" (downgrade) already exists in HAProxy,
> but is applied to too many cases.
> This patch simply ensures that it is applied to timeouts only.
> So I don't see what's wrong with this patch.

I don't disagree with the patch, I disagree with the feature; I am
fully aware that it already exists, I was just not aware of the feature
before you sent the patch.


You mentioned an RFC suggestion, and I believe RFC6891 #6.2.5 may be
what you are talking about:
https://tools.ietf.org/html/rfc6891#section-6.2.5


And it indeed suggests falling back to a lower payload size, however:

- it is a MAY: "A requestor MAY choose to implement a fallback"
- it is kind of implied (when reading the entire paragraph) that this
is only relevant when the *default* payload size is above 1280 (so not
relevant to haproxy: we don't default above 1280, we don't even enable
EDNS0 by default); imo it's irrelevant when the admin is explicitly
configuring the payload size
- it's obvious that the TCP fallback on TC responses has to work
- imo it also implies a fallback for the next retry request, not a
fallback for the entire runtime of the application



> - create a new command on the CLI to set the accepted payload to a value
> decided by the admin (so he can perform an upgrade in case a downgrade
> happened)

I don't understand; are you saying the feature currently downgrades
the payload size for the resolver *permanently*, not just for the next
retry? Therefore, when haproxy downgrades *currently*, we have to
actually reload haproxy to restore the accepted payload size?

Re: BUG/MINOR: dns: false positive downgrade of accepted_payload_size

2018-02-21 Thread Baptiste
On Wed, Feb 21, 2018 at 11:07 AM, Lukas Tribus  wrote:

> Hello Baptiste,
>
>
> On 21 February 2018 at 08:45, Baptiste  wrote:
> >> Is this downgrade a good thing in the first place? Doesn't it hide
> >> configuration and network issues, make troubleshooting more complex
> >> and the haproxy behavior less predictable?
> >
> >
> > It is an rfc recommendation (the rfc number is commented somewhere in the
> > source code, but I am on a mobile and can't access it).
> > Its purpose is to hide networking issues when responses have to cross the
> > internet and behavior is not predictable.
>
> And I can see how this would be useful in end-user situations,
> browser, smartphone apps, etc.
>
> However in Haproxy the administrator *explicitly* configures a higher
> payload size, because this higher payload size is probably actually
> *required* to get all backend servers. Silently downgrading the
> payload size is harmful in my opinion here.
>
>
Maybe there is a misunderstanding here.
HAProxy already downgrades the accepted payload size "silently". I mean,
this "feature" already exists.
This patch simply ensures the downgrade happens in timeout cases only.



> >> When we see a response with the TC flag set, do we drop it or do we
> >> still consider the DNS response?
> >
> > Haproxy ignores tc flag for now.
>
> Ok, but you can see how ignoring the TC flag, combined with automatic
> payload size downgrade, will make the number of backend servers fluctuate
> with a little bit of packet loss? So with 40 servers in DNS and the
> loss of a single packet we will downgrade the entire backend to
> whatever fit in the 1280-byte downgraded response.
>
> I would call this behavior highly undesirable.
>
>
You can play with "hold obsolete "  to ensure that unseen records
are kept for  time.



>
> Note: networks/firewalls/routers are more likely to drop fragmented IP
> packets than "normal" IP packets (as they may try to reassemble them
> to actually apply layer 4+ ACLs, which requires buffers, which can
> overflow) and this makes it even worse.
>
>
>
> > Later, it will have to trigger a failover to tcp.
>
> Failing over to TCP on TC responses is the proper solution to this
> all, but in the meantime I would argue that we should make the
> behavior as predictable and stable as we can.
>

> Let me know what you think,
>
>
I think that until we can do DNS over TCP, it is safer to downgrade the
announced accepted payload size in case of a timeout and ensure we'll still
receive answers than to never downgrade and stop receiving answers.

As I explained above, this "feature" (downgrade) already exists in HAProxy,
but is applied to too many cases.
This patch simply ensures that it is applied to timeouts only.
So I don't see what's wrong with this patch.

Now, I agree the "silent" application of the downgrade could be problematic
and we could imagine some solutions to fix this:
- emit a WARNING message / log when it happens
- create a new command on the CLI to display current accepted payload
- create a new command on the CLI to set the accepted payload to a value
decided by the admin (so he can perform an upgrade in case a downgrade
happened)

Any other suggestion is welcome.

Baptiste


Re: Haproxy 1.8.4 400's with http/2

2018-02-21 Thread Lukas Tribus
Hello Sander,

make sure you use "option http-keep-alive" as the http mode; httpclose
specifically will cause issues with H2.
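
For example, a frontend along these lines works fine with H2 (just a
sketch; the certificate path and the names are placeholders):

    frontend fe_https
        bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
        mode http
        option http-keep-alive      # do NOT use "option httpclose" here
        default_backend be_nginx

    backend be_nginx
        mode http
        server nginx1 127.0.0.1:8080 check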

If that's not it, please share the configuration; also you may want to
try enabling proxy_ignore_client_abort in the nginx backend [1].



cheers,
lukas


[1] 
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_client_abort

On 21 February 2018 at 15:29, Sander Klein  wrote:
> Hi All,
>
> Today I tried enabling http/2 on haproxy 1.8.4. After enabling it, all requests
> to a certain backend started to give 400s, while requests to other backends
> worked as expected. I get the following in haproxy.log:
>
> Feb 21 14:31:35 localhost haproxy[22867]:
> 2001:bad:coff:ee:cd97:5710:4515:7c73:52553 [21/Feb/2018:14:31:30.690]
> backend-name/backend-04 1/0/1/-1/4758 400 1932 - - CH-- 518/215/0/0/0 0/0
> {host.name.tld|Mozilla/5.0
> (Mac||https://referred.name.tld/some/string?f=%7B%22la_la_la%22:%7B%22v%22:%22thingy%22%7D%7D}
> {} "GET /some/path/here/filename.jpg HTTP/1.1"
>
> The backend server is nginx which proxies to a nodejs application. When
> looking at the request on nginx it gives an HTTP 499 error.
>
> Is this a known issue? Or, is this a new H2 related issue?
>
> Any way I can do some more troubleshooting?
>
> Greets,
>
> Sander



Haproxy 1.8.4 400's with http/2

2018-02-21 Thread Sander Klein

Hi All,

Today I tried enabling http/2 on haproxy 1.8.4. After enabling it, all 
requests to a certain backend started to give 400s, while requests to 
other backends worked as expected. I get the following in haproxy.log:


Feb 21 14:31:35 localhost haproxy[22867]: 
2001:bad:coff:ee:cd97:5710:4515:7c73:52553 [21/Feb/2018:14:31:30.690] 
backend-name/backend-04 1/0/1/-1/4758 400 1932 - - CH-- 518/215/0/0/0 
0/0 {host.name.tld|Mozilla/5.0 
(Mac||https://referred.name.tld/some/string?f=%7B%22la_la_la%22:%7B%22v%22:%22thingy%22%7D%7D} 
{} "GET /some/path/here/filename.jpg HTTP/1.1"


The backend server is nginx which proxies to a nodejs application. When 
looking at the request on nginx it gives an HTTP 499 error.


Is this a known issue? Or, is this a new H2 related issue?

Any way I can do some more troubleshooting?

Greets,

Sander



Re: haproxy-1.8 build failure on FreeBSD/i386 (clang)

2018-02-21 Thread David CARLIER
Well, Haproxy uses a handmade Makefile directly, and it might be possible to
write a short C code test to detect such features (just thinking aloud).

On 21 February 2018 at 14:15, Dmitry Sivachenko  wrote:

>
> > On 21 Feb 2018, at 16:33, David CARLIER  wrote:
> >
> > Might be an irrelevant idea, but is it not possible to detect it via a simple
> > code test in the Makefile eventually?
>
>
> Did you mean configure?  :)


Re: haproxy-1.8 build failure on FreeBSD/i386 (clang)

2018-02-21 Thread Dmitry Sivachenko

> On 21 Feb 2018, at 16:33, David CARLIER  wrote:
> 
> Might be an irrelevant idea, but is it not possible to detect it via a simple
> code test in the Makefile eventually?


Did you mean configure?  :)


Re: haproxy-1.8 build failure on FreeBSD/i386 (clang)

2018-02-21 Thread David CARLIER
Might be an irrelevant idea, but is it not possible to detect it via a simple
code test in the Makefile eventually?


Re: Fwd: Plans for 1.9

2018-02-21 Thread David CARLIER
Here is a work-in-progress diff. Here I simply build both the static and
shared libraries; maybe the Haproxy folks would prefer only one of those,
but I wanted to give the choice.
The main entry point from haproxy.c then becomes hap_main, which is the
default, but from a fuzzer's perspective it might be preferable to just
implement what needs to be tested (I just started to play with LLVM/Fuzzer
and haproxy and I am not sure yet whether it will belong in contrib or not
be committed at all).

Let me know if I'm taking a relatively correct direction for the specific
Haproxy changes.

Thanks.

On 19 February 2018 at 07:46, David CARLIER  wrote:

> Yes, in the case of LLVM/fuzzer, it defines the main entry point (thus in your
> tests you need to define a function entry point to receive the data), hence
> it is better if haproxy were a library. Now, since haproxy has always been
> "monolithic", I was not sure it would appeal :-)
>
> On 19 February 2018 at 07:26, Willy Tarreau  wrote:
>
>> Hi David,
>>
>> On Mon, Feb 12, 2018 at 03:38:15PM +, David CARLIER wrote:
>> > -- Forwarded message --
>> > From: David CARLIER 
>> > Date: 12 February 2018 at 15:37
>> > Subject: Plans for 1.9
>> > To: w...@1wt.eu
>> >
>> >
>> > Was thinking, as a contrib work, of making haproxy more fuzzer "compliant"
>> > (AFL and LLVM/fuzzer for example), which would mean turning haproxy into a
>> > shared library with a separate exe, but not sure it would ever be accepted :-).
>>
>> To be honest, I have absolutely no idea how it works, so I guess you'll
>> have to give a bit more details here. Making a shared lib out of haproxy
>> probably isn't a big deal. Sometimes it's just a matter of renaming main()
>> and linking with -shared. I don't know if that would be compatible with
>> what you need however.
>>
>> Willy
>>
>
>
From 65d5fcd73fcc084df6d779c932176e859d1a9269 Mon Sep 17 00:00:00 2001
From: David Carlier 
Date: Wed, 21 Feb 2018 13:07:48 +
Subject: [PATCH] BUILD/MEDIUM: haproxy : build as libraries

The goal here is simply to build Haproxy as libraries
and a separated executable
---
 Makefile  | 23 ++-
 include/common/main.h | 25 +
 src/haproxy.c |  2 +-
 src/main.c| 43 +++
 4 files changed, 87 insertions(+), 6 deletions(-)
 create mode 100644 include/common/main.h
 create mode 100644 src/main.c

diff --git a/Makefile b/Makefile
index 2acf5028..8e6a3082 100644
--- a/Makefile
+++ b/Makefile
@@ -433,6 +433,8 @@ ifeq ($(VERDATE),)
 VERDATE := $(shell (grep -v '^\$$Format' VERDATE 2>/dev/null || touch VERDATE) | head -n 1 | cut -f1 -d' ' | tr '-' '/')
 endif
 
+VERSIONNUM = $(shell cat VERSION 2>/dev/null | tr -dc '0-9\.')
+
  Build options
 # Do not change these ones, enable USE_* variables instead.
 OPTIONS_CFLAGS  =
@@ -803,7 +805,7 @@ EBTREE_DIR := ebtree
  Global compile options
 VERBOSE_CFLAGS = $(CFLAGS) $(TARGET_CFLAGS) $(SMALL_OPTS) $(DEFINE)
 COPTS  = -Iinclude -I$(EBTREE_DIR) -Wall
-COPTS += $(CFLAGS) $(TARGET_CFLAGS) $(SMALL_OPTS) $(DEFINE) $(SILENT_DEFINE)
+COPTS += $(CFLAGS) -fPIC $(TARGET_CFLAGS) $(SMALL_OPTS) $(DEFINE) $(SILENT_DEFINE)
 COPTS += $(DEBUG) $(OPTIONS_CFLAGS) $(ADDINC)
 
 ifneq ($(VERSION)$(SUBVERS),)
@@ -894,8 +896,19 @@ DEP = $(INCLUDES) .build_opts
 # Used only to force a rebuild if some build options change
 .build_opts: $(shell rm -f .build_opts.new; echo \'$(TARGET) $(BUILD_OPTIONS) $(VERBOSE_CFLAGS)\' > .build_opts.new; if cmp -s .build_opts .build_opts.new; then rm -f .build_opts.new; else mv -f .build_opts.new .build_opts; fi)
 
-haproxy: $(OPTIONS_OBJS) $(EBTREE_OBJS) $(OBJS)
-	$(LD) $(LDFLAGS) -o $@ $^ $(LDOPTS)
+libhaproxy.so.$(VERSIONNUM): $(OPTIONS_OBJS) $(EBTREE_OBJS) $(OBJS)
+	$(CC) $(COPTS) -Wl,-soname,libhaproxy.so -shared -o libhaproxy.so.$(VERSIONNUM) \
+	$(OPTIONS_OBJS) $(EBTREE_OBJS) $(OBJS) $(LDOPTS) $(LDFLAGS)
+	ln -s libhaproxy.so.$(VERSIONNUM) libhaproxy.so
+
+libhaproxy.a: libhaproxy.so.$(VERSIONNUM) $(OPTIONS_OBJS) $(EBTREE_OBJS) $(OBJS)
+	$(AR) rv $@ $^
+
+#haproxy: $(OPTIONS_OBJS) $(EBTREE_OBJS) $(OBJS)
+#	$(LD) $(LDFLAGS) -o $@ $^ $(LDOPTS)
+
+haproxy: libhaproxy.a
+	$(CC) $(COPTS) -o $@ src/main.c -L. -Wl,-rpath,. -lhaproxy $(LDFLAGS) $(LDOPTS)
 
 $(LIB_EBTREE): $(EBTREE_OBJS)
 	$(AR) rv $@ $^
@@ -910,7 +923,7 @@ src/trace.o: src/trace.c $(DEP)
 	$(CC) $(TRACE_COPTS) -c -o $@ $<
 
 src/haproxy.o:	src/haproxy.c $(DEP)
-	$(CC) $(COPTS) \
+	$(CC) $(COPTS) -fPIC \
 	  -DBUILD_TARGET='"$(strip $(TARGET))"' \
 	  -DBUILD_ARCH='"$(strip $(ARCH))"' \
 	  -DBUILD_CPU='"$(strip $(CPU))"' \
@@ -956,7 +969,7 @@ uninstall:
 	rm -f "$(DESTDIR)$(SBINDIR)"/haproxy
 
 clean:
-	rm -f *.[oas] src/*.[oas] ebtree/*.[oas] haproxy test .build_opts .build_opts.new
+	rm -f *.[oas] src/*.[oas] ebtree/*.[oas] haproxy libhaproxy.* test .build_opts .build_opts.new
 	for dir in . src include/* doc ebtree; do 

Re: What is the difference between session and request?

2018-02-21 Thread Moemen MHEDHBI
Hi,



On 20/02/2018 02:12, flamese...@yahoo.co.jp wrote:
> Hi all
>
> I found that there are fe_conn, fe_req_rate, fe_sess_rate, be_conn and
> be_sess_rate, but there is no be_req_rate.
>
> I understand that there might be multiple requests in one connection,
> what is a session here?

Googling your question will lead to this SO post:
https://stackoverflow.com/questions/33168469/whats-the-exact-meaning-of-session-in-haproxy
where:
- A connection is the event of connecting, which may or may not lead to a
session. A connection counter includes rejected connections, queued
connections, etc.
- A session is an end-to-end accepted connection. So maybe it is more
accurate to talk about requests per session rather than requests per
connection.

>
> And how can I get be_req_rate?

Unfortunately, this fetch does not seem to be implemented yet.
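
As a rough workaround you can lean on the fetches that do exist; for
example (just a sketch, the names are made up), the backend's session
rate can serve as an approximation of its request load:

    frontend fe_main
        bind :80
        mode http
        # be_req_rate does not exist, so approximate the backend load with
        # its session rate instead
        acl app_busy be_sess_rate(be_app) gt 100
        http-request deny if app_busy
        default_backend be_app

    backend be_app
        mode http
        server app1 192.0.2.10:80 check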

>
> Thank you

-- 
Moemen MHEDHBI



Re: BUG/MINOR: dns: false positive downgrade of accepted_payload_size

2018-02-21 Thread Lukas Tribus
Hello Baptiste,


On 21 February 2018 at 08:45, Baptiste  wrote:
> Is this downgrade a good thing in the first place? Doesn't it hide
>> configuration and network issues, make troubleshooting more complex
>> and the haproxy behavior less predictable?
>
>
> It is an rfc recommendation (the rfc number is commented somewhere in the source
> code, but I am on a mobile and can't access it).
> Its purpose is to hide networking issues when responses have to cross the
> internet and behavior is not predictable.

And I can see how this would be useful in end-user situations,
browser, smartphone apps, etc.

However in Haproxy the administrator *explicitly* configures a higher
payload size, because this higher payload size is probably actually
*required* to get all backend servers. Silently downgrading the
payload size is harmful in my opinion here.



>> When we see a response with the TC flag set, do we drop it or do we
>> still consider the DNS response?
>
> Haproxy ignores tc flag for now.

Ok, but you can see how ignoring the TC flag, combined with automatic
payload size downgrade, will make the number of backend servers fluctuate
with a little bit of packet loss? So with 40 servers in DNS and the
loss of a single packet we will downgrade the entire backend to
whatever fit in the 1280-byte downgraded response.

I would call this behavior highly undesirable.


Note: networks/firewalls/routers are more likely to drop fragmented IP
packets than "normal" IP packets (as they may try to reassemble them
to actually apply layer 4+ ACLs, which requires buffers, which can
overflow) and this makes it even worse.



> Later, it will have to trigger a failover to tcp.

Failing over to TCP on TC responses is the proper solution to this
all, but in the meantime I would argue that we should make the
behavior as predictable and stable as we can.



Let me know what you think,

Lukas



Issue when connecting to backend server

2018-02-21 Thread Erwan Loaëc | AT Internet
Hello everyone,

I'm writing to the list to talk about an issue I cannot solve.

I would like to use HAProxy on AWS, and I'm now facing an issue.
The setup is basic: one frontend (with HTTP and HTTPS bindings) and a backend 
with a single backend server.

Frequently, the time to connect to the backend server is equal to the "timeout 
connect" defined in the configuration file.
The connection always succeeds, but the time to connect is exactly the same as 
the "timeout connect".

I first thought it could be related to the backend server, but after a deeper 
exploration using a network trace from the HAProxy server, I've found that the 
connection only really starts being established at "time of the frontend 
request + timeout connect".

The behavior has been reproduced with HAProxy 1.8.3, and more recently with 
HAProxy 1.8.4 (Debian 9, OpenSSL 1.0.2l), and it can be reproduced with no load.

Have you ever seen this issue? Does anyone have an idea where this behavior can 
come from?

Example of the logs I see:

2018-02-21 09:41:05.357 haproxy[1445]: A.B.C.D:45992 [21/Feb/2018:09:41:05.355] 
frontend1~ backend1/instance1 0/0/1/1/2 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:05.493 haproxy[1445]: A.B.C.D:45993 [21/Feb/2018:09:41:05.491] 
frontend1~ backend1/instance1 0/0/1/1/2 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:05.631 haproxy[1445]: A.B.C.D:45994 [21/Feb/2018:09:41:05.629] 
frontend1~ backend1/instance1 0/0/1/0/1 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:05.769 haproxy[1445]: A.B.C.D:45995 [21/Feb/2018:09:41:05.767] 
frontend1~ backend1/instance1 0/0/1/0/1 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:05.901 haproxy[1445]: A.B.C.D:45996 [21/Feb/2018:09:41:05.900] 
frontend1~ backend1/instance1 0/0/1/0/1 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:06.032 haproxy[1445]: A.B.C.D:45997 [21/Feb/2018:09:41:06.031] 
frontend1~ backend1/instance1 0/0/1/0/1 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:06.166 haproxy[1445]: A.B.C.D:45998 [21/Feb/2018:09:41:06.164] 
frontend1~ backend1/instance1 0/0/1/1/2 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:06.300 haproxy[1445]: A.B.C.D:45999 [21/Feb/2018:09:41:06.298] 
frontend1~ backend1/instance1 0/0/0/1/1 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:06.432 haproxy[1445]: A.B.C.D:46000 [21/Feb/2018:09:41:06.430] 
frontend1~ backend1/instance1 0/0/0/1/1 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:11.568 haproxy[1445]: A.B.C.D:46001 [21/Feb/2018:09:41:06.565] 
frontend1~ backend1/instance1 0/0/5002/1/5003 302 301 - -  1/1/0/0/1 0/0 
"GET /toto HTTP/1.1"
2018-02-21 09:41:11.700 haproxy[1445]: A.B.C.D:46003 [21/Feb/2018:09:41:11.699] 
frontend1~ backend1/instance1 0/0/0/1/1 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:11.829 haproxy[1445]: A.B.C.D:46004 [21/Feb/2018:09:41:11.827] 
frontend1~ backend1/instance1 0/0/0/1/1 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:11.958 haproxy[1445]: A.B.C.D:46005 [21/Feb/2018:09:41:11.956] 
frontend1~ backend1/instance1 0/0/0/1/1 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:12.089 haproxy[1445]: A.B.C.D:46006 [21/Feb/2018:09:41:12.087] 
frontend1~ backend1/instance1 0/0/0/1/1 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:12.219 haproxy[1445]: A.B.C.D:46007 [21/Feb/2018:09:41:12.218] 
frontend1~ backend1/instance1 0/0/0/1/1 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:12.350 haproxy[1445]: A.B.C.D:46008 [21/Feb/2018:09:41:12.348] 
frontend1~ backend1/instance1 0/0/1/1/2 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"
2018-02-21 09:41:12.485 haproxy[1445]: A.B.C.D:46009 [21/Feb/2018:09:41:12.483] 
frontend1~ backend1/instance1 0/0/1/0/1 302 301 - -  1/1/0/0/0 0/0 "GET 
/toto HTTP/1.1"

The request at "09:41:06.565" takes 5002ms to connect to backend server. In a 
tcpdump capture I will see that TCP SYN packet sent to backend server will be 
sent at T+5s, (with a timeout connect set to 5s)
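
For reference, the 5002 in the "0/0/5002/1/5003" timer field is the connect
time (Tc), which here matches the configured "timeout connect". The relevant
part of the configuration looks roughly like this (a sketch; the server
address is a placeholder):

    backend backend1
        mode http
        timeout connect 5s     # the observed Tc of 5002 ms equals this value
        retries 3
        server instance1 10.0.0.1:80 check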

Thanks,

--
Erwan