Re: Long Running TCP Connections and Reloads

2017-09-14 Thread Krishna Kumar (Engineering)
Regarding #1, I think this was fixed some time back. Maybe you are running
an old version of haproxy?

commit e39683c4d4c527d1b561c3ba3983d26cc3e7f42d
Author: Hongbo Long 
Date:   Fri Mar 10 18:41:51 2017 +0100

BUG/MEDIUM: stream: fix client-fin/server-fin handling

A tcp half connection can cause 100% CPU on expiration.



On Thu, Sep 14, 2017 at 6:59 PM, Pean, David S. 
wrote:

> Hello!
>
> I am using a TCP front-end that potentially keeps connections open for
> several hours, while also frequently issuing reloads due to an id-to-server
> mapping that is changing constantly. This causes many processes to be
> running at any given time, which generally works as expected. However,
> after some time I see some strange behavior with the processes and stats
> that doesn’t appear to have any pattern to it.
>
> Here is the setup in general:
>
> Every two minutes, there is a process that checks if HAProxy should be
> reloaded. If that is the case, this command is run:
>
> /usr/local/sbin/haproxy -D -f -sf PID
>
> The PID is the current HAProxy process. If there are TCP connections to
> that process, it will stay running until those connections drop, then
> generally it will get killed.
>
> 1. Sometimes a process will appear to not get killed, and have no
> connections. It will be running for several hours at 99% CPU. When
> straced, it doesn't appear to be actually doing anything -- just clock and
> poll calls, very frequently. Is there some sort of timeout for the graceful
> shutdown of the old processes?
>
> 2. Is it possible for the old processes to accept new connections? Even
> though a PID has been sent the shutdown signal, I have seen requests
> reference old server mappings that would have been in an earlier process.
>
> 3. Often the stats page will become out of whack over time. The number of
> requests per second will become drastically different from what is actually
> occurring. It looks like the old stuck processes might be sending more data
> that is maybe not getting cleared?
>
> Are there any considerations for starting up or reloading when dealing
> with long running connections?
>
> Thanks!
>
> David Pean
>
>
>


cppcheck finding

2017-09-14 Thread Илья Шипицин
hello,

[src/flt_http_comp.c:926] -> [src/flt_http_comp.c:926]: (warning) Either
the condition 'txn' is redundant or there is possible null pointer
dereference: txn.

should there be && instead of || ?
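
(For reference: cppcheck is saying that either txn can never be NULL, in
which case testing it is redundant, or it can be NULL, in which case the
|| form dereferences it. The following is a hypothetical illustration of
the reported pattern, not the actual code at src/flt_http_comp.c:926.)

struct http_txn { int status; };    /* hypothetical stand-in */

int suspicious(const struct http_txn *txn)
{
    /* with ||, a NULL txn makes the left operand false, so the right
     * operand is evaluated and dereferences the NULL pointer */
    return (txn || txn->status == 302);
}

int safe(const struct http_txn *txn)
{
    /* with &&, a NULL txn short-circuits before the dereference,
     * hence the question whether && was intended */
    return (txn && txn->status == 302);
}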

Cheers,
Ilya Shipitsin


Re: Kernel TLS for http/2

2017-09-14 Thread Lukas Tribus
Hello,


Am 05.09.2017 um 10:00 schrieb Willy Tarreau:
> Hi Aleks,
>
> On Mon, Sep 04, 2017 at 09:34:07AM +0200, Aleksandar Lazic wrote:
>> Hi,
>>
>> Have anyone seen KTLS also?
>>
>> https://lwn.net/Articles/666509/
>>
>> https://netdevconf.org/1.2/papers/ktls.pdf
>>
>> looks pretty interesting.
> As I already mentioned (I don't remember to whom), I really don't see *any*
> benefit in this approach and, in fact, only problems. By the way, others have
> attempted it in the past and failed.

I agree, when we are talking about the haproxy use case (which is
always network to network).

I do find the combination of sendfile and kTLS very interesting though,
for web servers that are waiting on the disk, especially event-loop
based software like nginx.
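
To illustrate why (a minimal sketch against the Linux 4.13+ kTLS API;
the key material would really come out of a userspace TLS handshake,
and sock, file_fd and the key buffers are placeholders):

#include <linux/tls.h>
#include <netinet/tcp.h>
#include <string.h>
#include <sys/sendfile.h>
#include <sys/socket.h>
#include <sys/types.h>

#ifndef SOL_TLS
#define SOL_TLS 282    /* not yet present in every libc header today */
#endif

static int send_file_ktls(int sock, int file_fd, off_t len,
                          const unsigned char key[16],
                          const unsigned char iv[8],
                          const unsigned char salt[4],
                          const unsigned char seq[8])
{
    struct tls12_crypto_info_aes_gcm_128 ci;

    /* attach the "tls" upper layer protocol to the TCP socket */
    if (setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
        return -1;

    memset(&ci, 0, sizeof(ci));
    ci.info.version = TLS_1_2_VERSION;
    ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
    memcpy(ci.key, key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
    memcpy(ci.iv, iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
    memcpy(ci.salt, salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
    memcpy(ci.rec_seq, seq, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

    /* hand the negotiated write keys to the kernel */
    if (setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci)) < 0)
        return -1;

    /* the kernel encrypts as it transmits: the file contents never
     * cross into userspace, which is what makes sendfile+kTLS cheap */
    return sendfile(sock, file_fd, NULL, len) < 0 ? -1 : 0;
}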


For haproxy, on the other hand, symmetric crypto performance is not the
problem; asymmetric crypto performance (the handshake) is, because it
blocks the event loop.

Pushing the handshake to worker thread(s) is a possible solution to this,
and I guess it would probably eliminate the main reason people have to use
nbproc > 1 today.
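
As a purely illustrative sketch of that idea (the job struct, the done()
callback and the one-thread-per-handshake model are all assumptions, not
haproxy internals):

#include <openssl/ssl.h>
#include <pthread.h>
#include <stdlib.h>

struct handshake_job {
    SSL *ssl;                        /* session set up by the event loop */
    void (*done)(SSL *ssl, int ok);  /* hands the fd back to the loop */
};

static void *handshake_worker(void *arg)
{
    struct handshake_job *job = arg;

    /* SSL_accept() runs the full server-side handshake, i.e. the
     * expensive asymmetric crypto, off the event loop; on a
     * non-blocking fd it would have to loop on WANT_READ/WANT_WRITE */
    int ok = (SSL_accept(job->ssl) == 1);

    job->done(job->ssl, ok);         /* callback must be thread-safe */
    free(job);
    return NULL;
}

A real design would use a fixed worker pool and a job queue rather than a
thread per connection, which is exactly where the OpenSSL threading story
below starts to matter.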

I believe this was discussed before and is indeed something Willy has
on his mind.

How difficult the OpenSSL API makes this, I'm not sure. The documentation
certainly leaves "room for improvement" with regard to threading:

https://www.openssl.org/blog/blog/2017/02/21/threads/
https://github.com/openssl/openssl/issues/2165



cheers,

lukas




Long Running TCP Connections and Reloads

2017-09-14 Thread Pean, David S.
Hello!

I am using a TCP front-end that potentially keeps connections open for several 
hours, while also frequently issuing reloads due to an id-to-server mapping 
that is changing constantly. This causes many processes to be running at any 
given time, which generally works as expected. However, after some time I see 
some strange behavior with the processes and stats that doesn’t appear to have 
any pattern to it.

Here is the setup in general:

Every two minutes, there is a process that checks if HAProxy should be 
reloaded. If that is the case, this command is run:

/usr/local/sbin/haproxy -D -f -sf PID

The PID is the current HAProxy process. If there are TCP connections to that 
process, it will stay running until those connections drop, then generally it 
will get killed.

1. Sometimes a process will appear to not get killed, and have no connections. 
It will be running for several hours at 99% CPU. When straced, it doesn't 
appear to be actually doing anything -- just clock and poll calls, very 
frequently. Is there some sort of timeout for the graceful shutdown of the old 
processes?

2. Is it possible for the old processes to accept new connections? Even though 
a PID has been sent the shutdown signal, I have seen requests reference old 
server mappings that would have been in an earlier process.

3. Often the stats page will become out of whack over time. The number of 
requests per second will become drastically different from what is actually 
occurring. It looks like the old stuck processes might be sending more data that 
is maybe not getting cleared?

Are there any considerations for starting up or reloading when dealing with 
long running connections?

Thanks!

David Pean