Re: TCP mode and ultra short lived connection

2021-02-12 Thread Willy Tarreau
Hi Maksim,

On Thu, Feb 11, 2021 at 01:20:04PM +0300, Максим Куприянов wrote:
> Thank you very much, Willy!
> 
> Turning off abortonclose (it was enabled globally) for this particular
> session really helped :)

Fantastic, one less bug to chase :-)

Cheers,
Willy



Re: TCP mode and ultra short lived connection

2021-02-11 Thread Максим Куприянов
Thank you very much, Willy!

Turning off abortonclose (it was enabled globally) for this particular
session really helped :)

--
Best regards,
Maksim

On Tue, Feb 9, 2021 at 17:46, Willy Tarreau wrote:

> Hi guys,
>
> > > I faced a problem dealing with l4 (tcp mode) haproxy-based proxy over
> > > Graphite's component receiving metrics from clients and clients who are
> > > connecting just to send one or two Graphite-metrics and disconnecting
> right
> > > after.
> > >
> > > It looks like this
> > > 1. Client connects to haproxy (SYN/SYN-ACK/ACK)
> > > 2. Client sends one line of metric
> > > 3. Haproxy acknowledges receiving this line (ACK to client)
> > > 4. Client disconnects (FIN, FIN-ACK)
> > > 5. Haproxy writes 1/-1/0/0 CC-termination state to log without even
> trying to connect to a backend and send client's data to it.
> > > 6. Metric is lost :(
> > >
> > > If the client is slow enough between steps 1 and 2 or it sends a bunch
> of metrics so haproxy has time to connect to a backend - everything works
> like a charm.
> >
> > The issue though is the client disconnect. If we delay the client
> > disconnect, it could work. Try playing with tc by delaying the
> > incoming FIN packets for a few hundred milliseconds (make sure you
> > only apply this to this particular traffic, for example this
> > particular destination port).
> >
>
> In fact it's not that black-or-white. A client disconnecting first
> in TCP is *always* a protocol design issue, because it leaves the
> source port in TIME_WAIT on the client side for 1 minute (even 4 on
> certain legacy stacks), and once all source ports are blocked like
> this, the client cannot establish new connections anymore.
>
> However, this is a situation we *normally* deal with in haproxy:
>
>   - in TCP, we're *supposed* to respect exactly this sequence, and
> do the same on the other side since it might be the only way to
> pass the protocol from end-to-end ; there's even a series of
> tests for this one in the old test-fsm.cfg ;
>
>   - in HTTP, we normally pass the request as-is, and prepare for
> closing after delivering the response (since some clients are
> just netcat scripts).
>
> But it's well known that in HTTP, a FIN from a client after the request
> and before the response usually corresponds to a browser closing by the
> user clicking "stop" or closing a tab. For this reason there's an
> option "abortonclose" which is used to abort the request before passing
> it to the other side, or while it's still waiting for a connection to
> establish.
>
> It turns out that this "abortonclose" option also works for TCP and
> totally makes sense there for a number of protocols. Thus, one
> possible explanation is that this option is present in the original
> config (maybe even inherited from the defaults section), in which case
> this is the desired behavior. It would also correspond to the CC log
> output (client closed during connect).
>
> But it's also possible that we broke something again. This half-closed
> client situation was broken a few times in the past because it doesn't
> get enough love. It essentially corresponds to a denial-of-service
> attempt and rarely to a normal behavior, and is rarely tested from this
> last perspective. In addition, the idea of leaving blocked source ports
> behind doesn't sound appealing to anyone for a reg-test :-/
>
> > In TCP mode, we need to propagate the close from one side to the
> > other, as we are not aware of the protocol. Not sure if it is possible
> > (or a good idea) to keep sending buffer contents to the backend server
> > when the client is already gone.
>
> It's expected to work and is indeed not a good idea at the same time,
> because this forces haproxy to consume all of its source ports very
> quickly and makes it trivial for a client to block all of its outgoing
> communications by maintaining a load of only ~500 connections per second.
> Once this is assumed however, it must be possible (barring any bug, again).
>
> > "[no] option abortonclose" only affects HTTP, according to the docs.
>
> I'm pretty sure it's not limited to HTTP because I've met PR_O_ABRT_CLOSE
> or something like this quite a few times in the connection setup code.
> However it's very possible that the doc isn't clear about this or only
> focuses on HTTP since it's where this usually matters.
>
> Hoping this helps,
> Willy
>


Re: TCP mode and ultra short lived connection

2021-02-09 Thread Willy Tarreau
Hi guys,

> > I faced a problem dealing with l4 (tcp mode) haproxy-based proxy over
> > Graphite's component receiving metrics from clients and clients who are
> > connecting just to send one or two Graphite-metrics and disconnecting right
> > after.
> >
> > It looks like this
> > 1. Client connects to haproxy (SYN/SYN-ACK/ACK)
> > 2. Client sends one line of metric
> > 3. Haproxy acknowledges receiving this line (ACK to client)
> > 4. Client disconnects (FIN, FIN-ACK)
> > 5. Haproxy writes 1/-1/0/0 CC-termination state to log without even trying 
> > to connect to a backend and send client's data to it.
> > 6. Metric is lost :(
> >
> > If the client is slow enough between steps 1 and 2 or it sends a bunch of 
> > metrics so haproxy has time to connect to a backend - everything works like 
> > a charm.
> 
> The issue though is the client disconnect. If we delay the client
> disconnect, it could work. Try playing with tc by delaying the
> incoming FIN packets for a few hundred milliseconds (make sure you
> only apply this to this particular traffic, for example this
> particular destination port).
> 

In fact it's not that black-or-white. A client disconnecting first
in TCP is *always* a protocol design issue, because it leaves the
source port in TIME_WAIT on the client side for 1 minute (even 4 on
certain legacy stacks), and once all source ports are blocked like
this, the client cannot establish new connections anymore.

However, this is a situation we *normally* deal with in haproxy:

  - in TCP, we're *supposed* to respect exactly this sequence, and
do the same on the other side since it might be the only way to
pass the protocol from end-to-end ; there's even a series of
tests for this one in the old test-fsm.cfg ;

  - in HTTP, we normally pass the request as-is, and prepare for
closing after delivering the response (since some clients are
just netcat scripts).

But it's well known that in HTTP, a FIN from a client after the request
and before the response usually corresponds to a browser closing by the
user clicking "stop" or closing a tab. For this reason there's an
option "abortonclose" which is used to abort the request before passing
it to the other side, or while it's still waiting for a connection to
establish.

It turns out that this "abortonclose" option also works for TCP and
totally makes sense there for a number of protocols. Thus, one
possible explanation is that this option is present in the original
config (maybe even inherited from the defaults section), in which case
this is the desired behavior. It would also correspond to the CC log
output (client closed during connect).
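For reference, the per-proxy override that Maksim ended up applying can be
sketched as a config fragment (the listener name, port and server address
here are hypothetical, not from the thread):

```
# Hypothetical TCP listener for Graphite metrics. "option abortonclose"
# was enabled in the defaults section, so it is explicitly disabled for
# this one proxy only.
listen graphite-metrics
    mode tcp
    bind :2024
    no option abortonclose
    server graphite1 192.0.2.10:2003
```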

But it's also possible that we broke something again. This half-closed
client situation was broken a few times in the past because it doesn't
get enough love. It essentially corresponds to a denial-of-service
attempt and rarely to a normal behavior, and is rarely tested from this
last perspective. In addition, the idea of leaving blocked source ports
behind doesn't sound appealing to anyone for a reg-test :-/

> In TCP mode, we need to propagate the close from one side to the
> other, as we are not aware of the protocol. Not sure if it is possible
> (or a good idea) to keep sending buffer contents to the backend server
> when the client is already gone.

It's expected to work and is indeed not a good idea at the same time,
because this forces haproxy to consume all of its source ports very
quickly and makes it trivial for a client to block all of its outgoing
communications by maintaining a load of only ~500 connections per second.
Once this is assumed however, it must be possible (barring any bug, again).

> "[no] option abortonclose" only affects HTTP, according to the docs.

I'm pretty sure it's not limited to HTTP because I've met PR_O_ABRT_CLOSE
or something like this quite a few times in the connection setup code.
However it's very possible that the doc isn't clear about this or only
focuses on HTTP since it's where this usually matters.

Hoping this helps,
Willy



Re: TCP mode and ultra short lived connection

2021-02-08 Thread Максим Куприянов
Hi, Lukas!

I didn't attach a dump of the haproxy-to-backend packets because there
were no such packets in this particular case. :( This haproxy installation
is heavily loaded with traffic, which could be the reason haproxy didn't
even start connecting to a backend in time. If I add a small delay on the
client right after connecting, or after sending the data, everything works
as expected. The clients differ, so tc may well be the only option, but
maybe there is a better way.
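The client-side delay described above can be sketched like this (the host,
port and 200 ms value are assumptions, not from the thread):

```python
import socket
import time

def send_metric(line, host="graphite.example", port=2024):
    """Send one Graphite metric line, then pause briefly before closing,
    giving the proxy time to establish its backend connection before it
    sees our FIN."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(line.encode())
        time.sleep(0.2)  # assumed delay before the FIN

# Example call (hypothetical endpoint):
# send_metric("my.metric 1 1612800000\n")
```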


On Tue, Feb 9, 2021 at 02:12, Lukas Tribus wrote:

> Hello,
>
> On Mon, 8 Feb 2021 at 18:14, Максим Куприянов
>  wrote:
> >
> > Hi!
> >
> > I faced a problem dealing with l4 (tcp mode) haproxy-based proxy over
> Graphite's component receiving metrics from clients and clients who are
> connecting just to send one or two Graphite-metrics and disconnecting right
> after.
> >
> > It looks like this
> > 1. Client connects to haproxy (SYN/SYN-ACK/ACK)
> > 2. Client sends one line of metric
> > 3. Haproxy acknowledges receiving this line (ACK to client)
> > 4. Client disconnects (FIN, FIN-ACK)
> > 5. Haproxy writes 1/-1/0/0 CC-termination state to log without even
> trying to connect to a backend and send client's data to it.
> > 6. Metric is lost :(
> >
> > If the client is slow enough between steps 1 and 2 or it sends a bunch
> of metrics so haproxy has time to connect to a backend – everything works
> like a charm.
>
> The issue though is the client disconnect. If we delay the client
> disconnect, it could work. Try playing with tc by delaying the
> incoming FIN packets for a few hundred milliseconds (make sure you
> only apply this to this particular traffic, for example this
> particular destination port).
>
> > Example. First column is a time delta in seconds between packets
>
> It would be useful to have both front and backend tcp connections in
> the same output (and absolute time stamps - delta from the first
> packet, not the previous).
>
>
> You may also want to accelerate the server connect with options like:
>
> option tcp-smart-connect
>
> https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#4-option%20tcp-smart-connect
>
> tfo (needs server support)
>
> https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#tfo%20%28Server%20and%20default-server%20options%29
>
>
>
> > How can I deal with these send-and-forget clients?
>
> In TCP mode, we need to propagate the close from one side to the
> other, as we are not aware of the protocol. Not sure if it is possible
> (or a good idea) to keep sending buffer contents to the backend server
> when the client is already gone. "[no] option abortonclose" only
> affects HTTP, according to the docs.
>
> Maybe Willy can confirm/deny this.
>
>
> Lukas
>


Re: TCP mode and ultra short lived connection

2021-02-08 Thread Lukas Tribus
Hello,

On Mon, 8 Feb 2021 at 18:14, Максим Куприянов
 wrote:
>
> Hi!
>
> I faced a problem dealing with l4 (tcp mode) haproxy-based proxy over 
> Graphite's component receiving metrics from clients and clients who are 
> connecting just to send one or two Graphite-metrics and disconnecting right 
> after.
>
> It looks like this
> 1. Client connects to haproxy (SYN/SYN-ACK/ACK)
> 2. Client sends one line of metric
> 3. Haproxy acknowledges receiving this line (ACK to client)
> 4. Client disconnects (FIN, FIN-ACK)
> 5. Haproxy writes 1/-1/0/0 CC-termination state to log without even trying to 
> connect to a backend and send client's data to it.
> 6. Metric is lost :(
>
> If the client is slow enough between steps 1 and 2 or it sends a bunch of 
> metrics so haproxy has time to connect to a backend – everything works like a 
> charm.

The issue though is the client disconnect. If we delay the client
disconnect, it could work. Try playing with tc by delaying the
incoming FIN packets for a few hundred milliseconds (make sure you
only apply this to this particular traffic, for example this
particular destination port).
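A rough sketch of that tc approach (interface name, port and delay value
are assumptions; note that netem is an egress qdisc, so inbound traffic
has to be redirected through an ifb device, and this delays all inbound
packets for the port rather than just the FIN segments):

```
# Redirect inbound traffic for port 2024 through ifb0, delayed ~200 ms.
modprobe ifb numifbs=1
ip link set ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match ip dport 2024 0xffff \
    action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root netem delay 200ms
```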

> Example. First column is a time delta in seconds between packets

It would be useful to have both front and backend tcp connections in
the same output (and absolute time stamps - delta from the first
packet, not the previous).


You may also want to accelerate the server connect with options like:

option tcp-smart-connect
https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#4-option%20tcp-smart-connect

tfo (needs server support)
https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#tfo%20%28Server%20and%20default-server%20options%29
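Put together, those two options might look like this in the backend
(backend name and server address are hypothetical; tfo also requires
kernel and server-side support):

```
backend graphite-metrics
    mode tcp
    option tcp-smart-connect
    server graphite1 192.0.2.10:2003 tfo
```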



> How can I deal with these send-and-forget clients?

In TCP mode, we need to propagate the close from one side to the
other, as we are not aware of the protocol. Not sure if it is possible
(or a good idea) to keep sending buffer contents to the backend server
when the client is already gone. "[no] option abortonclose" only
affects HTTP, according to the docs.

Maybe Willy can confirm/deny this.


Lukas



Re: TCP mode and ultra short lived connection

2021-02-08 Thread Илья Шипицин
I have to go to sleep :)

For some reason I thought you were running out of ephemeral ports due to
rapid connection reopening (i.e. ephemeral port exhaustion).

On Tue, Feb 9, 2021 at 01:04, Максим Куприянов wrote:

> Илья, thanks for your answer!
>
> Sorry, but it seems I didn't make it clear: the problem is that the data
> received from these fast clients never reaches the backends. But it should
> be delivered in order to be saved.
>
> Maybe there is some way to delay acknowledging the received data until
> some backend is selected and connected to the session?
>
>
> On Mon, Feb 8, 2021 at 22:56, Илья Шипицин wrote:
>
>> I think it is "4. Client disconnects (FIN, FIN-ACK)".
>> If the client sent RST instead of FIN, the port would be released
>> immediately.
>>
>>
>> https://stackoverflow.com/questions/13049828/fin-vs-rst-in-tcp-connections
>>
>> RST is much better for short-lived connections.
>>
>> On Mon, Feb 8, 2021 at 22:17, Максим Куприянов wrote:
>>
>>> Hi!
>>>
>>> I faced a problem dealing with l4 (tcp mode) haproxy-based proxy over
>>> Graphite's component receiving metrics from clients and clients who are
>>> connecting just to send one or two Graphite-metrics and disconnecting right
>>> after.
>>>
>>> It looks like this
>>> 1. Client connects to haproxy (SYN/SYN-ACK/ACK)
>>> 2. Client sends one line of metric
>>> 3. Haproxy acknowledges receiving this line (ACK to client)
>>> 4. Client disconnects (FIN, FIN-ACK)
>>> 5. Haproxy writes 1/-1/0/0 CC-termination state to log without even
>>> trying to connect to a backend and send client's data to it.
>>> 6. Metric is lost :(
>>>
>>> If the client is slow enough between steps 1 and 2 or it sends a bunch
>>> of metrics so haproxy has time to connect to a backend – everything works
>>> like a charm.
>>>
>>> How can I deal with these send-and-forget clients?
>>>
>>> Example. First column is a time delta in seconds between packets
>>> 0.00 client haproxy TCP 100 58664 → 2024 [SYN] Seq=0 Win=65535
>>> Len=0 MSS=1220 WS=64 TSval=904701415 TSecr=0 SACK_PERM=1
>>> 0.15 haproxy client TCP 96 2024 → 58664 [SYN, ACK] Seq=0 Ack=1
>>> Win=65535 Len=0 MSS=8840 SACK_PERM=1 TSval=276942420 TSecr=904701415 WS=2048
>>> 0.019105 client haproxy TCP 88 58664 → 2024 [ACK] Seq=1 Ack=1
>>> Win=131264 Len=0 TSval=904701434 TSecr=276942420
>>> 0.90 client haproxy TCP 151 58664 → 2024 [PSH, ACK] Seq=1 Ack=1
>>> Win=131264 Len=63 TSval=904701434 TSecr=276942420
>>> 0.12 haproxy client TCP 88 2024 → 58664 [ACK] Seq=1 Ack=64
>>> Win=65536 Len=0 TSval=276942439 TSecr=904701434
>>> 0.000150 client haproxy TCP 88 58664 → 2024 [FIN, ACK] Seq=64 Ack=1
>>> Win=131264 Len=0 TSval=904701434 TSecr=276942420
>>> 0.58 haproxy client TCP 88 2024 → 58664 [FIN, ACK] Seq=1 Ack=65
>>> Win=65536 Len=0 TSval=276942439 TSecr=904701434
>>>
>>> haproxy -vv
>>> HA-Proxy version 2.2.8-1 2021/01/28 - https://haproxy.org/
>>> Status: long-term supported branch - will stop receiving fixes around Q2
>>> 2025.
>>> Known bugs: http://www.haproxy.org/bugs/bugs-2.2.8.html
>>> Running on: Linux 4.19.91-22 #1 SMP Wed Dec 25 14:25:55 UTC 2019 x86_64
>>> Build options :
>>>   TARGET  = linux-glibc
>>>   CPU = generic
>>>   CC  = gcc
>>>   CFLAGS  = -O2 -g -O2 -fPIE -fstack-protector-strong -Wformat
>>> -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wall -Wextra
>>> -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare
>>> -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers
>>> -Wtype-limits
>>>   OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1
>>> USE_LUA=1 USE_ZLIB=1 USE_TFO=1 USE_SYSTEMD=1
>>>   DEBUG   =
>>>
>>> Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2
>>> +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE
>>> -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT
>>> +CRYPT_H +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM +ZLIB -SLZ
>>> +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD
>>> -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS
>>>
>>> Default settings :
>>>   bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
>>>
>>> Built with multi-threading support (MAX_THREADS=64, default=32).
>>> Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
>>> Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
>>> OpenSSL library supports TLS extensions : yes
>>> OpenSSL library supports SNI : yes
>>> OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
>>> Built with Lua version : Lua 5.3.1
>>> Built with network namespace support.
>>> Built with zlib version : 1.2.8
>>> Running on zlib version : 1.2.8
>>> Compression algorithms supported : identity("identity"),
>>> deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
>>> Built with transparent proxy support using: IP_TRANSPARENT
>>> IPV6_TRANSPARENT IP_FREEBIND
>>> Built with PCRE2 version : 10.21 2016-01-12
>>> PCRE2 library supports JIT : yes
>>> Encrypted password support via crypt(3): 

Re: TCP mode and ultra short lived connection

2021-02-08 Thread Максим Куприянов
Илья, thanks for your answer!

Sorry, but it seems I didn't make it clear: the problem is that the data
received from these fast clients never reaches the backends. But it should
be delivered in order to be saved.

Maybe there is some way to delay acknowledging the received data until
some backend is selected and connected to the session?


On Mon, Feb 8, 2021 at 22:56, Илья Шипицин wrote:

> I think it is "4. Client disconnects (FIN, FIN-ACK)".
> If the client sent RST instead of FIN, the port would be released
> immediately.
>
>
> https://stackoverflow.com/questions/13049828/fin-vs-rst-in-tcp-connections
>
> RST is much better for short-lived connections.
>
> On Mon, Feb 8, 2021 at 22:17, Максим Куприянов wrote:
>
>> Hi!
>>
>> I faced a problem dealing with l4 (tcp mode) haproxy-based proxy over
>> Graphite's component receiving metrics from clients and clients who are
>> connecting just to send one or two Graphite-metrics and disconnecting right
>> after.
>>
>> It looks like this
>> 1. Client connects to haproxy (SYN/SYN-ACK/ACK)
>> 2. Client sends one line of metric
>> 3. Haproxy acknowledges receiving this line (ACK to client)
>> 4. Client disconnects (FIN, FIN-ACK)
>> 5. Haproxy writes 1/-1/0/0 CC-termination state to log without even
>> trying to connect to a backend and send client's data to it.
>> 6. Metric is lost :(
>>
>> If the client is slow enough between steps 1 and 2 or it sends a bunch of
>> metrics so haproxy has time to connect to a backend – everything works like
>> a charm.
>>
>> How can I deal with these send-and-forget clients?
>>
>> Example. First column is a time delta in seconds between packets
>> 0.00 client haproxy TCP 100 58664 → 2024 [SYN] Seq=0 Win=65535 Len=0
>> MSS=1220 WS=64 TSval=904701415 TSecr=0 SACK_PERM=1
>> 0.15 haproxy client TCP 96 2024 → 58664 [SYN, ACK] Seq=0 Ack=1
>> Win=65535 Len=0 MSS=8840 SACK_PERM=1 TSval=276942420 TSecr=904701415 WS=2048
>> 0.019105 client haproxy TCP 88 58664 → 2024 [ACK] Seq=1 Ack=1 Win=131264
>> Len=0 TSval=904701434 TSecr=276942420
>> 0.90 client haproxy TCP 151 58664 → 2024 [PSH, ACK] Seq=1 Ack=1
>> Win=131264 Len=63 TSval=904701434 TSecr=276942420
>> 0.12 haproxy client TCP 88 2024 → 58664 [ACK] Seq=1 Ack=64 Win=65536
>> Len=0 TSval=276942439 TSecr=904701434
>> 0.000150 client haproxy TCP 88 58664 → 2024 [FIN, ACK] Seq=64 Ack=1
>> Win=131264 Len=0 TSval=904701434 TSecr=276942420
>> 0.58 haproxy client TCP 88 2024 → 58664 [FIN, ACK] Seq=1 Ack=65
>> Win=65536 Len=0 TSval=276942439 TSecr=904701434
>>
>> haproxy -vv
>> HA-Proxy version 2.2.8-1 2021/01/28 - https://haproxy.org/
>> Status: long-term supported branch - will stop receiving fixes around Q2
>> 2025.
>> Known bugs: http://www.haproxy.org/bugs/bugs-2.2.8.html
>> Running on: Linux 4.19.91-22 #1 SMP Wed Dec 25 14:25:55 UTC 2019 x86_64
>> Build options :
>>   TARGET  = linux-glibc
>>   CPU = generic
>>   CC  = gcc
>>   CFLAGS  = -O2 -g -O2 -fPIE -fstack-protector-strong -Wformat
>> -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wall -Wextra
>> -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare
>> -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers
>> -Wtype-limits
>>   OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1
>> USE_LUA=1 USE_ZLIB=1 USE_TFO=1 USE_SYSTEMD=1
>>   DEBUG   =
>>
>> Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2
>> +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE
>> -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT
>> +CRYPT_H +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM +ZLIB -SLZ
>> +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD
>> -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS
>>
>> Default settings :
>>   bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
>>
>> Built with multi-threading support (MAX_THREADS=64, default=32).
>> Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
>> Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
>> OpenSSL library supports TLS extensions : yes
>> OpenSSL library supports SNI : yes
>> OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
>> Built with Lua version : Lua 5.3.1
>> Built with network namespace support.
>> Built with zlib version : 1.2.8
>> Running on zlib version : 1.2.8
>> Compression algorithms supported : identity("identity"),
>> deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
>> Built with transparent proxy support using: IP_TRANSPARENT
>> IPV6_TRANSPARENT IP_FREEBIND
>> Built with PCRE2 version : 10.21 2016-01-12
>> PCRE2 library supports JIT : yes
>> Encrypted password support via crypt(3): yes
>> Built with gcc compiler version 5.4.0 20160609
>> Built with the Prometheus exporter as a service
>>
>> Available polling systems :
>>   epoll : pref=300,  test result OK
>>poll : pref=200,  test result OK
>>  select : pref=150,  test result OK
>> Total: 3 (3 usable), will use epoll.
>>
>> Available 

Re: TCP mode and ultra short lived connection

2021-02-08 Thread Илья Шипицин
I think it is "4. Client disconnects (FIN, FIN-ACK)".
If the client sent RST instead of FIN, the port would be released
immediately.


https://stackoverflow.com/questions/13049828/fin-vs-rst-in-tcp-connections

RST is much better for short-lived connections.
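The abortive close described here can be demonstrated with SO_LINGER set
to (on, 0), which makes close() emit an RST instead of a FIN. This is a
generic loopback sketch, not tied to haproxy; the metric line is made up:

```python
import socket
import struct
import time

# Loopback server/client pair to show an abortive (RST) close.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()

cli.sendall(b"my.metric 1 1612800000\n")
data = conn.recv(1024)  # server reads the metric before the close

# l_onoff=1, l_linger=0: close() now sends RST rather than FIN, so the
# client socket skips TIME_WAIT and its port is released immediately.
cli.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
cli.close()

time.sleep(0.1)  # let the RST arrive before the server reads again
try:
    conn.recv(1024)
    reset = False
except ConnectionResetError:
    reset = True
print("reset by peer:", reset)
conn.close()
srv.close()
```

The flip side (as the thread notes for HTTP) is that an RST is an abort:
any data the peer has not yet read may be discarded, so it only suits
cases where delivery has already been confirmed at the application level.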

On Mon, Feb 8, 2021 at 22:17, Максим Куприянов wrote:

> Hi!
>
> I faced a problem dealing with l4 (tcp mode) haproxy-based proxy over
> Graphite's component receiving metrics from clients and clients who are
> connecting just to send one or two Graphite-metrics and disconnecting right
> after.
>
> It looks like this
> 1. Client connects to haproxy (SYN/SYN-ACK/ACK)
> 2. Client sends one line of metric
> 3. Haproxy acknowledges receiving this line (ACK to client)
> 4. Client disconnects (FIN, FIN-ACK)
> 5. Haproxy writes 1/-1/0/0 CC-termination state to log without even trying
> to connect to a backend and send client's data to it.
> 6. Metric is lost :(
>
> If the client is slow enough between steps 1 and 2 or it sends a bunch of
> metrics so haproxy has time to connect to a backend – everything works like
> a charm.
>
> How can I deal with these send-and-forget clients?
>
> Example. First column is a time delta in seconds between packets
> 0.00 client haproxy TCP 100 58664 → 2024 [SYN] Seq=0 Win=65535 Len=0
> MSS=1220 WS=64 TSval=904701415 TSecr=0 SACK_PERM=1
> 0.15 haproxy client TCP 96 2024 → 58664 [SYN, ACK] Seq=0 Ack=1
> Win=65535 Len=0 MSS=8840 SACK_PERM=1 TSval=276942420 TSecr=904701415 WS=2048
> 0.019105 client haproxy TCP 88 58664 → 2024 [ACK] Seq=1 Ack=1 Win=131264
> Len=0 TSval=904701434 TSecr=276942420
> 0.90 client haproxy TCP 151 58664 → 2024 [PSH, ACK] Seq=1 Ack=1
> Win=131264 Len=63 TSval=904701434 TSecr=276942420
> 0.12 haproxy client TCP 88 2024 → 58664 [ACK] Seq=1 Ack=64 Win=65536
> Len=0 TSval=276942439 TSecr=904701434
> 0.000150 client haproxy TCP 88 58664 → 2024 [FIN, ACK] Seq=64 Ack=1
> Win=131264 Len=0 TSval=904701434 TSecr=276942420
> 0.58 haproxy client TCP 88 2024 → 58664 [FIN, ACK] Seq=1 Ack=65
> Win=65536 Len=0 TSval=276942439 TSecr=904701434
>
> haproxy -vv
> HA-Proxy version 2.2.8-1 2021/01/28 - https://haproxy.org/
> Status: long-term supported branch - will stop receiving fixes around Q2
> 2025.
> Known bugs: http://www.haproxy.org/bugs/bugs-2.2.8.html
> Running on: Linux 4.19.91-22 #1 SMP Wed Dec 25 14:25:55 UTC 2019 x86_64
> Build options :
>   TARGET  = linux-glibc
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -O2 -g -O2 -fPIE -fstack-protector-strong -Wformat
> -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wall -Wextra
> -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare
> -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers
> -Wtype-limits
>   OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1
> USE_LUA=1 USE_ZLIB=1 USE_TFO=1 USE_SYSTEMD=1
>   DEBUG   =
>
> Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT
> +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE -STATIC_PCRE
> -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H
> +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM +ZLIB -SLZ
> +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD
> -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS
>
> Default settings :
>   bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
>
> Built with multi-threading support (MAX_THREADS=64, default=32).
> Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
> Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
> Built with Lua version : Lua 5.3.1
> Built with network namespace support.
> Built with zlib version : 1.2.8
> Running on zlib version : 1.2.8
> Compression algorithms supported : identity("identity"),
> deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
> Built with transparent proxy support using: IP_TRANSPARENT
> IPV6_TRANSPARENT IP_FREEBIND
> Built with PCRE2 version : 10.21 2016-01-12
> PCRE2 library supports JIT : yes
> Encrypted password support via crypt(3): yes
> Built with gcc compiler version 5.4.0 20160609
> Built with the Prometheus exporter as a service
>
> Available polling systems :
>   epoll : pref=300,  test result OK
>poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
>
> Available multiplexer protocols :
> (protocols marked as  cannot be specified using 'proto' keyword)
> fcgi : mode=HTTP   side=BEmux=FCGI
> : mode=HTTP   side=FE|BE mux=H1
>   h2 : mode=HTTP   side=FE|BE mux=H2
> : mode=TCPside=FE|BE mux=PASS
>
> Available services : prometheus-exporter
> Available filters :
> [SPOE] spoe
> [COMP] compression
> [TRACE] trace
> [CACHE] cache
> [FCGI] fcgi-app
>
> --
> Best regards,
> Maksim