RE: tcpdump and Haproxy SSL Offloading

2016-06-08 Thread mlist
Hi,

thanks very much; I dug a little deeper into cipher suites.

I have now changed the cipher list supported by our HAProxy LBs and
increased the security level (while trying to keep ciphers that support old
clients that still use our services, such as XP/IE8).



I successfully decrypted a trace file taken with tcpdump in Wireshark, after
setting non-ephemeral ciphers on the HAProxy LBs (temporarily disabling the
DHE/EDH ciphers). For the most part the trace file was decrypted and shown
correctly, but we see some parts that appear as TCP, TLSv1.2 or SSL entries
of type "XXX Segment of a reassembled PDU".






These apparently contain only a few bytes, but if we select the SSL/TLS-layer
field "Encrypted Application Data" we see a lot of data.




We can also see three tabs for that line: Frame, Reassembled TCP, and Decrypted
SSL Data, so we can see the decrypted data, and it also shows as HTTP if we use
Follow TCP Stream.








I discovered that the Wireshark TCP protocol preference "Allow subdissector to
reassemble TCP streams" lets me see the decrypted traffic. Do you know of any
other Wireshark settings concerning these SSL/TCP reassembled PDUs, so that
they can be displayed differently in Wireshark?
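For what it's worth, the same preferences can also be set from the tshark
command line. This is a sketch, assuming the legacy "ssl" preference names
used by Wireshark of that era and a server RSA key file at
/path/to/server.key (both are assumptions about your setup):

```shell
# Decrypt a capture with the server's RSA private key (works only with
# non-ephemeral ciphers) and let the dissectors reassemble PDUs that
# span several TCP segments / TLS records.
tshark -r trace.pcap \
  -o "tcp.desegment_tcp_streams:TRUE" \
  -o "ssl.desegment_ssl_records:TRUE" \
  -o "ssl.desegment_ssl_application_data:TRUE" \
  -o "ssl.keys_list:0.0.0.0,443,http,/path/to/server.key" \
  -Y http
```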





As soon as possible I'll try the ephemeral-case decryption using client
session keys (as Igor suggested), even though it is more difficult: as far as
I can see there are different SSL handshakes, so it is not clear whether the
browser appends to or overwrites the session key files, and whether it is
simple to correlate and correctly analyze these different SSL/HTTP streams
with Wireshark.
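On the append-vs-overwrite question: as far as I know, the NSS key log that
Firefox and Chrome write via the SSLKEYLOGFILE environment variable is
appended to, one line per session, so a single file can cover many handshakes.
A sketch of the workflow (interface name and paths are placeholders):

```shell
# 1. Start the browser with key logging enabled (keys are appended).
export SSLKEYLOGFILE=/tmp/sslkeys.log
firefox &

# 2. Capture the traffic as usual.
tcpdump -i eth0 -w trace.pcap port 443

# 3. Point Wireshark/tshark at the logged session keys; unlike the
#    RSA-private-key method, this also decrypts DHE/ECDHE handshakes.
tshark -r trace.pcap -o "ssl.keylog_file:/tmp/sslkeys.log" -Y http
```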



Perhaps HAProxy could implement a kind of virtual network interface mechanism
that tcpdump could connect to in order to retrieve the decrypted traffic,
achieving less dependency on external tools and less "contamination /
manipulation" of the original environment.



As for HAProxy's "option http-ignore-probes", I think this is a solution to
test first, to evaluate what the client sees.
In my experience, for browsers that have problems correctly managing
pre-connect and graceful TCP session closing, not emitting the 408 can totally
hide problems that need to be analyzed and explained to avoid strange, hidden
behavior. Even if the 408 is not an infrastructure problem, the customer's
perception can be different... I also know that these 408/400s can distort the
statistics, so it is not a simple choice.



Roberto



-Original Message-
From: Lukas Tribus [mailto:lu...@gmx.net]
Sent: domenica 5 giugno 2016 12.16
To: Igor Cicimov ; mlist 
Cc: HAProxy 
Subject: Re: tcpdump and Haproxy SSL Offloading



Hi,





Am 05.06.2016 um 02:19 schrieb Igor Cicimov:

>

> > In haproxy.cfg I used these cipher I found recommended:

> > ciphers ECDHE-RSA-AES256-SHA:RC4-SHA:RC4:HIGH:!MD5:!aNULL:!EDH:!AESGCM

>



I would not recommend this. Check [1] and [2] for some up-to-date
recommendations.



Yes, removing ECDHE-RSA-AES256-SHA will force the server to use the

non-FS RC4 cipher.
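For illustration, a Mozilla-"intermediate"-style HAProxy global section might
look like the sketch below; treat the cipher string as abbreviated and
illustrative, and take the authoritative list from [2]:

```
global
    # Forward-secrecy-first cipher list (abbreviated, illustrative only).
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:!aNULL:!MD5
    ssl-default-bind-options no-sslv3
```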



Regarding the 408 problem, please have a look at the http-ignore-probes

option [3].







Regards,



Lukas







[1] https://wiki.mozilla.org/Security/Server_Side_TLS

[2]

https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy-1.6=1.0.2=no=intermediate

[3]

http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#4-option%20http-ignore-probes











Re: HTTP Keep Alive : Limit number of sessions in a connection

2016-06-08 Thread CJ Ess
Nginx, for instance, allows you to limit the number of keep-alive requests
that a client can send on an existing connection, after which the client
connection is closed:
http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests
Apache has something similar:
http://httpd.apache.org/docs/2.4/mod/core.html#maxkeepaliverequests
Just pointing these out so you can see that it's a common feature.
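Concretely, the two settings being referred to look like this (100 is each
server's documented default, shown here only for illustration):

```
# nginx (http, server or location context)
keepalive_requests 100;

# Apache httpd
MaxKeepAliveRequests 100
```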

I'm terminating connections with nginx, then I have a pool of upstream
connections from nginx to haproxy where I allow unlimited keep-alive
requests between nginx and haproxy per connection. The only time the
connections close is when haproxy sends an error response, because it
always closes the connection (I don't know why; just because I get a
non-2xx/3xx response doesn't mean the connection as a whole is bad). If
I had haproxy terminating the connections directly then I would like a
graceful way to bring those conversations to an end, even if it's just
waiting for the existing connections to time out or max out the number of
requests.





On Tue, Jun 7, 2016 at 3:45 PM, Lukas Tribus  wrote:

> Am 07.06.2016 um 21:32 schrieb Manas Gupta:
>
>> Hi Lukas,
>> My understanding was that soft-stop will cater to new connections.
>>
>
> That would mean soft stopping doesn't have any effect at all, basically.
>
> No, that's not the case, but either way your hardware load balancer
> would've already stopped sending you new connections, isn't that correct?
>
>
>
> I am looking for a way to gracefully close current/established
>> keep-alive connections after a certain number of sessions have been
>> served by issuing a FIN or HTTP Header Connection:close
>>
>
> You mean after a certain number of *requests* have been served; no, that
> is not supported and it would be a lot less reliable than the proposed
> solution.
>
>
>
> Lukas
>
>


Re: Graceful restart of Haproxy with SystemD

2016-06-08 Thread Maxime de Roucy
Le mercredi 08 juin 2016 à 21:21 +0200, Vincent Bernat a écrit :
> Just add ExecReload=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c
> -q
> before the existing ExecReload.

Indeed:

[root@arch64-f5ff6f8ea5472b3f ~]# rm /tmp/*
rm: cannot remove '/tmp/*': No such file or directory
[root@arch64-f5ff6f8ea5472b3f ~]# systemctl cat test.service
# /etc/systemd/system/test.service
[Unit]
Description=test

[Service]
Type=simple
ExecStart=/usr/bin/sleep 3600
ExecReload=/usr/bin/touch /tmp/ExecReload1
ExecReload=/usr/bin/touch /tmp/ExecReload2
[root@arch64-f5ff6f8ea5472b3f ~]# systemctl start test.service
[root@arch64-f5ff6f8ea5472b3f ~]# systemctl reload test.service
[root@arch64-f5ff6f8ea5472b3f ~]# ls /tmp/
ExecReload1  ExecReload2

Also note that you will not be able to change the command-line arguments
(e.g. adding a file with `-f …`) of the haproxy process with `systemctl
reload`.
But you can now load a directory with `-f`, add files in it and load
them (without service interruption) with a reload.
http://git.haproxy.org/?p=haproxy.git;a=commit;h=379d9c7c14e684ab1dcdb6467a6bf189153c2b1d

Regards
Maxime de Roucy



Re: Graceful restart of Haproxy with SystemD

2016-06-08 Thread Vincent Bernat
 ❦  8 juin 2016 12:42 CEST, Andrew Kroenert  :

> Im having issues with haproxy’s systemd service under puppet control.
>
> Ive implemented the systemd service from haproxy contrib folder, which
> has the ExecStartPre command to check the config.
>
> This works for Starts, but not restarts, and while afaik reload does
> not reload if the config file is invalid, it doesn’t seem to inform
> you that it hasn’t actually reloaded.
>
> My current thoughts are to possibly run another wrapper (bash?) which
> does this check and passes to haproxy-systemd-wrapper OR implement the
> check command directly in haproxy-systemd-wrapper itself.
>
> Anyone have any other thoughts about how to do this in systemd?

Just add ExecReload=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
before the existing ExecReload.
-- 
Don't just echo the code with comments - make every comment count.
- The Elements of Programming Style (Kernighan & Plauger)
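For what it's worth, combining that check with the stock contrib unit gives a
reload path like the following sketch (paths and the USR2 reload signal follow
the 1.6 contrib unit; adjust them to your installation):

```
# /etc/systemd/system/haproxy.service (sketch)
[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
# Validate first; if this fails, systemd aborts the reload and reports it,
# so an invalid config no longer "reloads" silently.
ExecReload=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecReload=/bin/kill -USR2 $MAINPID
```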



Re: HTTP Keep Alive : Limit number of sessions in a connection

2016-06-08 Thread Lukas Tribus

Hi,


Am 08.06.2016 um 20:51 schrieb CJ Ess:
I'm terminating connections with nginx, then I have a pool of upstream 
connections from nginx to haproxy where I allow unlimited keep-alive 
requests between nginx and haproxy per connection. The only times the 
connections close is when haproxy sends an error response, because it 
always closes the connection (I don't know why, just because I get a 
non-2xx/3xxx response it doesn't mean that connection in whole is bad).


Does this happen in haproxy 1.6.3+ or 1.5.16+ as well?


If I had haproxy terminating the connections directly then I would 
like a graceful way to bring those conversations to an end, even if 
its just waiting for the existing connections to time out or max out 
the number of requests.


Why would a graceful stop not work for that use case? It covers this
exact use case and is way more reliable than some maximum amount of time or
number of requests.




Lukas



Re: HTTP Keep Alive : Limit number of sessions in a connection

2016-06-08 Thread Willy Tarreau
On Wed, Jun 08, 2016 at 02:51:06PM -0400, CJ Ess wrote:
> Nginx for instance allows you to limit the number of keep-alive requests
> that a client can send on an existing connection - afterwhich the client
> connection is closed.
> http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests
> Apache has something similar
> http://httpd.apache.org/docs/2.4/mod/core.html#maxkeepaliverequests.
> Just pointing these out so you can see that its a common feature.

Yes, these ones are originally servers and the feature was initially
implemented for a very specific reason : limit memory leak in bogus
local applications (namely CGI) and occasionally modules. I used to
see that configured many times in the past for this exact reason.

> I'm terminating connections with nginx, then I have a pool of upstream
> connections from nginx to haproxy where I allow unlimited keep-alive
> requests between nginx and haproxy per connection. The only times the
> connections close is when haproxy sends an error response, because it
> always closes the connection (I don't know why, just because I get a
> non-2xx/3xxx response it doesn't mean that connection in whole is bad).

It's not because you relay 2xx/3xx that the connection is bad but because
the sender of a 4xx or 5xx generally knows that the connection is bad and
wants to close it. There's no way to do anything of a connection on which
you sent a 400 bad request because by definition you couldn't properly
parse what you received on it, so it's a protocol error that you faced.

> If
> I had haproxy terminating the connections directly then I would like a
> graceful way to bring those conversations to an end, even if its just
> waiting for the existing connections to time out or max out the number of
> requests.

There's something I just don't understand in your use case (which joins
Lukas' question in fact) : for what purpose would you want haproxy to
terminate connections in situations other than the ones where the process
is going away ? There could be a very valid use case that's absolutely not
obvious to me and that could be implemented, but in order to implement
something correctly, it must be understood and make sense, and I'd confess
that for now it's not the case, at least for me :-/

Thanks,
Willy



Re: HTTP Keep Alive : Limit number of sessions in a connection

2016-06-08 Thread CJ Ess
I personally don't have a need to limit requests on the haproxy side at the
moment; I just thought I'd try to help Manas make his case. He's basically
saying that he wants the option to close the client connection after the
nth request, and that seems pretty reasonable to me. Maybe it would help him
with DDoS, or to manage the number of ports used by the server: if one
server becomes particularly loaded then forcing the clients to reconnect
gives his load balancer an opportunity to move load around to less utilized
servers. I'm just speculating.


Re: HTTP Keep Alive : Limit number of sessions in a connection

2016-06-08 Thread CJ Ess
I can only speak for 1.5.x but when haproxy issues an error (not to be
confused with passing through an error from the upstream, but haproxy
itself issuing the error due to acl rules or whatever) it just sends the
error file (or the built-in error text) as a blob and closes the
connection. In my case I have nginx in front of haproxy and it rewrites the
error response adding a content-length header and changing the connection:
close header to connection: keep-alive so that the client doesn't have its
connection closed, but the next request from nginx to haproxy will either
be routed though another idle connection or a new connection to haproxy
will be made.

With the graceful stop I believe we're waiting for the clients to stop
sending us traffic and go away - which most of the time they do in
seconds-minutes. I have a lot of bot activity so I generally only get ~50
requests per connection before I deny something and that closes the
connection. Though I have some servers that emit continuous streams of
data, and doing a graceful restart or shutdown of them basically never
really succeeds because it can take months for the clients to be
interrupted and close their connections. For those servers I have to
specifically chop the connections to force the old haproxy processes to die
off and the clients to reconnect to the new ones.

On Wed, Jun 8, 2016 at 3:13 PM, Lukas Tribus  wrote:

> Hi,
>
>
> Am 08.06.2016 um 20:51 schrieb CJ Ess:
>
>> I'm terminating connections with nginx, then I have a pool of upstream
>> connections from nginx to haproxy where I allow unlimited keep-alive
>> requests between nginx and haproxy per connection. The only times the
>> connections close is when haproxy sends an error response, because it
>> always closes the connection (I don't know why, just because I get a
>> non-2xx/3xxx response it doesn't mean that connection in whole is bad).
>>
>
> Does this happen in haproxy 1.6.3+ or 1.5.16+ as well?
>
>
> If I had haproxy terminating the connections directly then I would like a
>> graceful way to bring those conversations to an end, even if its just
>> waiting for the existing connections to time out or max out the number of
>> requests.
>>
>
> Why would a graceful stop not work for that use case? It covers this exact
> use case and is way more reliable than some max amount of time or number of
> request.
>
>
>
> Lukas
>


possible minor memory leak in ssl_get_dh_1024

2016-06-08 Thread Roberto Guimaraes
seems like set_tmp_dh() performs its own allocation. So, it should be 
OK to dh_free immediately after calling the setter.
Not sure the intention was to reuse the allocated local_dh_1024, 
but that's not being done either.

index 5200069..7c17c9a 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -1639,6 +1639,8 @@ int ssl_sock_load_dh_params(SSL_CTX *ctx, const char *file)
goto end;

SSL_CTX_set_tmp_dh(ctx, local_dh_1024);
+   free(local_dh_1024);
+   local_dh_1024 = NULL;
}
else {
SSL_CTX_set_tmp_dh_callback(ctx, ssl_get_tmp_dh);

thanks,
roberto




Re: possible minor memory leak in ssl_get_dh_1024

2016-06-08 Thread Roberto Guimaraes
Roberto Guimaraes  writes:

> 
> seems like set_tmp_dh() performs its own allocation. So, it should be 
> OK to dh_free immediately after calling the setter.
> Not sure the intention was to reuse the allocated local_dh_1024, 
> but that's not being done either.
> 
> index 5200069..7c17c9a 100644
> --- a/src/ssl_sock.c
> +++ b/src/ssl_sock.c
> @@ -1639,6 +1639,8 @@ int ssl_sock_load_dh_params(SSL_CTX *ctx, const char *file)
>   goto end;
> 
>   SSL_CTX_set_tmp_dh(ctx, local_dh_1024);
> + free(local_dh_1024);
> + local_dh_1024 = NULL;
>   }
>   else {
>   SSL_CTX_set_tmp_dh_callback(ctx, ssl_get_tmp_dh);
> 
> thanks,
> roberto
> 
> 

darn, make it DH_free()...

index 5200069..37471b6 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -1639,6 +1639,8 @@ int ssl_sock_load_dh_params(SSL_CTX *ctx, const char *file)
goto end;

SSL_CTX_set_tmp_dh(ctx, local_dh_1024);
+   DH_free(local_dh_1024);
+   local_dh_1024 = NULL;
}
else {
SSL_CTX_set_tmp_dh_callback(ctx, ssl_get_tmp_dh);






Re: HTTP Keep Alive : Limit number of sessions in a connection

2016-06-08 Thread Willy Tarreau
On Wed, Jun 08, 2016 at 04:17:58PM -0400, CJ Ess wrote:
> I personally don't have a need to limit requests the haproxy side at the
> moment, I'm just thought I'd try to help Manas make his case. Hes basically
> saying that he wants the option to close the client connection after the
> nth request and that seems pretty reasonable to me.

But quite frankly it's not a use case. Believe me, we're quite used to
get requests from people *thinking* they need a certain feature while
in fact they need something totally different, just because they think
that it's the last missing piece of their puzzle. Implementing something
specific like this which will definitely cause new issues and bug reports
here is not going to happen without a really good reason, and for now all
I can read is "I'd like to close after N requests". We could also implement
just a sample fetch to return the request number so that it's possible to
add an ACL to block past some point but all of this seems very fishy for
now :-/

> Maybe it would help him
> with DDOS or to manage the number of ports used by the server, If one
> server becomes particularly loaded then forcing the clients to reconnect
> gives his load balancer an opportunity to move load around to less utilized
> servers. I'm just speculating.

That's exactly why I'm interested in the *real* use case, because very likely
this alone will not be enough and will just spray some paint on the real issue.

Cheers,
Willy




Re: HTTP Keep Alive : Limit number of sessions in a connection

2016-06-08 Thread Manas Gupta
Thank you everyone for pitching in.

I will take another stab at explaining my case/context.

So I have a component which issues a lot of requests over a keep-alive
connection to HAProxy. In the middle there is a TCP Load Balancer
(hardware) which only intercepts new tcp connection requests. Once the
tcp connection is established, the client can send as many HTTP
requests as it wants. For lack of a better term, it becomes sticky.

Usually clients (browsers) would send a bunch of requests and then go
away.. effectively closing the TCP connection. In my case, the
component keeps going.

If I force HAProxy to close the keep-alive connection, even when it's
being actively used, the client issues a new TCP connection. And the
load balancer can send it somewhere else (to another HAProxy
instance).
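A tiny sketch of the behaviour being described — hypothetical, not an HAProxy
feature, with names of my own invention — using Python's http.server: the
server volunteers "Connection: close" on the Nth request of a connection, so
the client's next request opens a fresh TCP connection that the hardware LB
can route to another instance.

```python
# Hypothetical sketch: an HTTP/1.1 server that forces "Connection: close"
# after N requests on the same TCP connection.
import http.server

MAX_REQUESTS_PER_CONN = 3

class LimitedKeepAliveHandler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive is the default in 1.1

    def do_GET(self):
        # One handler instance serves one TCP connection, so an instance
        # attribute can count the requests made on that connection.
        self.req_count = getattr(self, "req_count", 0) + 1
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        if self.req_count >= MAX_REQUESTS_PER_CONN:
            # Nth request: tell the client to reconnect, and actually
            # close our side once this response is flushed.
            self.send_header("Connection", "close")
            self.close_connection = True
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass
```

The first N-1 responses carry no Connection header (implicit keep-alive); the
Nth carries "Connection: close" and drops the socket.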

In this case, I am not stopping HAproxy for maintenance. I just want
TCP connections to be _not_ established for a long time.

Thanks



On Wed, Jun 8, 2016 at 1:37 PM, Willy Tarreau  wrote:
> On Wed, Jun 08, 2016 at 04:17:58PM -0400, CJ Ess wrote:
>> I personally don't have a need to limit requests the haproxy side at the
>> moment, I'm just thought I'd try to help Manas make his case. Hes basically
>> saying that he wants the option to close the client connection after the
>> nth request and that seems pretty reasonable to me.
>
> But quite frankly it's not a use case. Believe me, we're quite used to
> get requests from people *thinking* they need a certain feature while
> in fact they need something totally different, just because they think
> that it's the last missing piece of their puzzle. Implementing something
> specific like this which will definitely cause new issues and bug reports
> here is not going to happen without a really good reason, and for now all
> I can read is "I'd like to close after N request". We could also implement
> just a sample fetch to return the request number so that it's possible to
> add an ACL to block past some point but all of this seems very fishy for
> now :-/
>
>> Maybe it would help him
>> with DDOS or to manage the number of ports used by the server, If one
>> server becomes particularly loaded then forcing the clients to reconnect
>> gives his load balancer an opportunity to move load around to less utilized
>> servers. I'm just speculating.
>
> That's exactly why I'm interested in the *real* use case, because very likely
> this alone will not be enough and will just spray some paint on the real 
> issue.
>
> Cheers,
> Willy
>



Re: Lua converter not working in 1.6.5 with Lua 5.3.2

2016-06-08 Thread Willy Tarreau
On Wed, Jun 01, 2016 at 02:28:02PM +0200, Thierry FOURNIER wrote:
> I forgot the patches in attachment ;)

Just merged right now.

Thanks Thierry!
Willy



Graceful restart of Haproxy with SystemD

2016-06-08 Thread Andrew Kroenert
Hey All

I'm having issues with haproxy's systemd service under Puppet control.

I've implemented the systemd service from the haproxy contrib folder, which has
the ExecStartPre command to check the config.

This works for starts, but not restarts; and while, as far as I know, reload
does not reload if the config file is invalid, it doesn't seem to inform you
that it hasn't actually reloaded.

My current thoughts are to possibly run another wrapper (bash?) which does this
check and passes through to haproxy-systemd-wrapper, OR to implement the check
command directly in haproxy-systemd-wrapper itself.

Does anyone have any other thoughts about how to do this in systemd?

Thanks

Andrew
Viator Inc

Re: HTTP Keep Alive : Limit number of sessions in a connection

2016-06-08 Thread Manas Gupta
On Wed, Jun 8, 2016 at 8:45 PM, Willy Tarreau  wrote:
> On Wed, Jun 08, 2016 at 05:07:29PM -0700, Manas Gupta wrote:
>> Thank you everyone for pitching in.
>>
>> I will take another stab at explaining my case/context.
>>
>> So I have a component which issues a lot of requests over a keep-alive
>> connection to HAProxy. In the middle there is a TCP Load Balancer
>> (hardware) which only intercepts new tcp connection requests. Once the
>> tcp connection is established, the client can send as many HTTP
>> requests as it wants. For lack of a better term, it becomes sticky.
>
> What do you mean by "it becomes sticky" ? Just the fact that it sticks
> to *this* haproxy server ? This seems logical if it works at the TCP
> level. I'm seeing that you want to "fix" this, but how is it a problem
> at all ? Most users would instead find this normal, and even desired.
>

You are correct, it's not a problem.

I am simply trying to figure out the best way for this :-
Say I have an HAProxy server with several long running http-keep-alive
connections. I want to send traffic away from this HAProxy server, but
without dropping any connections.

As per Lukas' suggestion, I tried soft-stop and it has the desired
results. But it does _stop_ HAProxy.

>> Usually clients (browsers) would send a bunch of requests and then go
>> away.. effectively closing the TCP connection. In my case, the
>> component keeps going.
>>
>> If I force HAProxy to close the keep-alive connection, even when its
>> being actively used, the client issues a new TCP connection. And the
>> load balancer can send it somewhere else (to another HAProxy
>> instance).
>>
>> In this case, I am not stopping HAproxy for maintenance. I just want
>> TCP connections to be _not_ established for a long time.
>
> Then as Lukas explained, there's a difference between a request count
> and time, even if there's a relation between the two (request rate). If
> we implement it based on an http-request action (or maybe http-response
> action, we need to check), maybe we could consider the connection's age
> instead of the count, to decide to disable client-side keep-alive. Just
> my two cents...
>

I can live with what's there for now. I was just wondering whether a
keep-alive max-requests HTTP directive could be implemented (if not already
done) using a combination of settings or some stick-table fudgery.


> Regards,
> Willy
>



Re: HTTP Keep Alive : Limit number of sessions in a connection

2016-06-08 Thread Willy Tarreau
On Wed, Jun 08, 2016 at 05:07:29PM -0700, Manas Gupta wrote:
> Thank you everyone for pitching in.
> 
> I will take another stab at explaining my case/context.
> 
> So I have a component which issues a lot of requests over a keep-alive
> connection to HAProxy. In the middle there is a TCP Load Balancer
> (hardware) which only intercepts new tcp connection requests. Once the
> tcp connection is established, the client can send as many HTTP
> requests as it wants. For lack of a better term, it becomes sticky.

What do you mean by "it becomes sticky" ? Just the fact that it sticks
to *this* haproxy server ? This seems logical if it works at the TCP
level. I'm seeing that you want to "fix" this, but how is it a problem
at all ? Most users would instead find this normal, and even desired.

> Usually clients (browsers) would send a bunch of requests and then go
> away.. effectively closing the TCP connection. In my case, the
> component keeps going.
> 
> If I force HAProxy to close the keep-alive connection, even when its
> being actively used, the client issues a new TCP connection. And the
> load balancer can send it somewhere else (to another HAProxy
> instance).
> 
> In this case, I am not stopping HAproxy for maintenance. I just want
> TCP connections to be _not_ established for a long time.

Then as Lukas explained, there's a difference between a request count
and time, even if there's a relation between the two (request rate). If
we implement it based on an http-request action (or maybe http-response
action, we need to check), maybe we could consider the connection's age
instead of the count, to decide to disable client-side keep-alive. Just
my two cents...

Regards,
Willy