Re: please check

2014-05-02 Thread Willy Tarreau
On Thu, May 01, 2014 at 03:44:46PM -0400, Rachel Chavez wrote:
 The problem is:
 
 when client sends a request with incomplete body (it has content-length but
 no body) then haproxy returns a 5XX error when it should be a client issue.

It's a bit more complicated than that. When the request body flows from the
client to the server, at any moment the server is free to respond (either
with an error, a redirect, a timeout or whatever). So as soon as we start
to forward a request body from the client to the server, we're *really*
waiting for the server to send a verdict about that request.

 In the session.c file starting in 2404 i make sure that if I haven't
 received the entire body of the request I continue to wait for it by
 keeping AN_REQ_WAIT_HTTP as part of the request analyzers list as long as
 the client read timeout hasn't fired yet.

It's unrelated unfortunately and it cannot work. AN_REQ_WAIT_HTTP is meant
to wait for a *new* request. So if the client doesn't send a complete
request, it's both wrong and dangerous to expect a new request inside the
body. When the body is being forwarded, the request flows through
http_request_forward_body(). This one already tests for the client timeout
as you can see. I'm not seeing any error processing there though, maybe
we'd need to set some error codes there to avoid them getting the default
ones.

 In the proto_http.c file what I tried to do is avoid getting a server
 timeout when the client had timed-out already.

I agree that it's always the *first* timeout which strikes that should
indicate the faulty side, because even though they're generally set to the
same value, people who want to enforce a specific processing can set them
apart.

Regards,
Willy




redirect question

2014-05-02 Thread bjun...@gmail.com
Hi,

I'm trying a basic redirect with HAProxy:


frontend http


 acl is_domain hdr_dom(host) -i abc.example.com

 acl root path_reg ^$|^/$


 redirect location http://abc.example.com/?code=1234 code 301 if is_domain root


Unfortunately this ends up in a redirect loop.

I suspect the '?' character; when I escape this character with \ the
loop problem is fixed, but now HAProxy redirects to
http://abc.example.com/\?code=1234


Thanks,

Bjoern


Re: Patch with some small memory usage fixes

2014-05-02 Thread Willy Tarreau
Hi Dirkjan,

On Mon, Apr 28, 2014 at 04:00:18PM -0700, Dirkjan Bussink wrote:
 Hi all,
 
 When building HAProxy using the Clang Static Analyzer, it found a few cases
 of invalid memory usage and leaks. I've attached a patch to fix these cases.

I think there are 3 types of errors fixed by your patch :
  - bugs that occur at runtime (eg: pat_ref_delete()).

  - use-after-free in the error path, which can cause hard-to-diagnose
crashes when config errors are detected ;

  - leaks in the error path which are harmless since the process is exiting
anyway. However I agree to take them as cleanups.

That said, one of your fixes introduces a bug here :

diff --git a/src/haproxy.c b/src/haproxy.c
index ed2ff21..c1ec783 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -1607,6 +1607,7 @@ int main(int argc, char **argv)
exit(0); /* parent must leave */
}
 
+   free(children);
	/* if we're NOT in QUIET mode, we should now close the 3 first FDs to ensure
	 * that we can detach from the TTY. We MUST NOT do it in other cases since
	 * it would have already be done, and 0-2 would have been affected to listening

Indeed, children is used a few lines below :

if (proc == global.nbproc) {
        if (global.mode & MODE_SYSTEMD) {
                for (proc = 0; proc < global.nbproc; proc++)
                        while (waitpid(children[proc], NULL, 0) == -1 && errno == EINTR);
        }
        exit(0); /* parent must leave */
}

In order to avoid getting trapped by such risks of use-after-free, I strongly
suggest that you assign a pointer to NULL after freeing it whenever relevant.
It ensures that such cases are detected very early.

Last, as Dmitri mentioned it, please do not add a test for the pointer to be
freed. free(foo) is fine even if foo is NULL.

Would you mind retransmitting the fixed patch or do you prefer me to fix it
while applying it ?

Thanks,
Willy




[PATCH] Add a configurable support of standardized DH parameters >= 1024 bits, disabled by default

2014-05-02 Thread Remi Gacogne
Hi,

This is an updated version of the previous patch. This version adds a
new configuration option named tune.ssl.max-dh-param-size, which sets
the maximum size of the ephemeral DH key used for DHE key exchange, if
no static DH parameters are found in the certificate file.

The default value for max-dh-param-size is set to 1024, thus keeping
the current behavior by default. Setting a higher value (for example
2048 with a 2048 bits RSA/DSA server key) allows an easy upgrade
to stronger ephemeral DH keys (and back if needed).

Regarding the recent discussions on this matter, I agree with the fact
that ECDHE should be preferred over DHE whenever it is available, but
I think we may want to keep offering a decent forward secrecy option to
clients not supporting elliptic curves yet, for example older versions
of Android or clients using OpenSSL 0.9.8.

This patch is a proposal based on the feedback I got from the previous
one, please feel free to criticize anything from the core idea (an easy
way to use stronger DHE key size) to the new parameter's name, I will
gladly welcome any remarks :)


Regards,

-- 
Rémi Gacogne

Aqua Ray
SAS au capital de 105.720 Euros
RCS Créteil 447 997 099
www.aquaray.fr

14, rue Jules Vanzuppe
94854 IVRY-SUR-SEINE CEDEX (France)
Tel : (+33) (0)1 84 04 04 05
Fax : (+33) (0)1 77 65 60 42
From f03b547984c513855383b11dc76aaecbbbc65838 Mon Sep 17 00:00:00 2001
From: Remi Gacogne rgacogne[at]aquaray[dot]fr
Date: Fri, 2 May 2014 15:41:13 +0200
Subject: [PATCH] Add a configurable support of standardized DH parameters >=
 1024 bits, disabled by default

When no static DH parameters are specified, this patch makes haproxy
use standardized (RFC 2409 / RFC 3526) DH parameters with prime lengths
of 1024, 2048, 4096 and 8192 bits for DHE key exchange. The size of the
temporary/ephemeral DH key is computed as the minimum of the RSA/DSA server
key size and the value of a new option named tune.ssl.max-dh-param-size.
---
 doc/configuration.txt |  11 
 include/common/defaults.h |   5 ++
 include/types/global.h|   1 +
 src/cfgparse.c|   8 +++
 src/haproxy.c |   1 +
 src/ssl_sock.c| 154 ++
 6 files changed, 142 insertions(+), 38 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 8207067..1c9e4e6 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -496,6 +496,7 @@ The following keywords are supported in the global section :
- tune.ssl.cachesize
- tune.ssl.lifetime
- tune.ssl.maxrecord
+   - tune.ssl.max-dh-param-size
- tune.zlib.memlevel
- tune.zlib.windowsize
 
@@ -1006,6 +1007,16 @@ tune.ssl.maxrecord <number>
   best value. Haproxy will automatically switch to this setting after an idle
   stream has been detected (see tune.idletimer above).
 
+tune.ssl.max-dh-param-size <number>
+ Sets the maximum size of the Diffie-Hellman parameters used for generating
+ the ephemeral/temporary Diffie-Hellman key in case of DHE key exchange. The
+ final size will try to match the size of the server's RSA (or DSA) key (e.g.,
+ a 2048-bit temporary DH key for a 2048-bit RSA key), but will not exceed
+ this maximum value. The default value is 1024. Higher values will increase
+ the CPU load, and values greater than 1024 bits are not supported by Java 7
+ and earlier clients. This value is not used if static Diffie-Hellman
+ parameters are supplied via the certificate file.
+
 tune.zlib.memlevel <number>
   Sets the memLevel parameter in zlib initialization for each session. It
   defines how much memory should be allocated for the internal compression
diff --git a/include/common/defaults.h b/include/common/defaults.h
index f765e90..944f3aa 100644
--- a/include/common/defaults.h
+++ b/include/common/defaults.h
@@ -214,4 +214,9 @@
 #define SSLCACHESIZE 2
 #endif
 
+/* ssl max dh param size */
+#ifndef SSL_MAX_DH_PARAM
+#define SSL_MAX_DH_PARAM 1024
+#endif
+
 #endif /* _COMMON_DEFAULTS_H */
diff --git a/include/types/global.h b/include/types/global.h
index 241afe9..2fba7ca 100644
--- a/include/types/global.h
+++ b/include/types/global.h
@@ -131,6 +131,7 @@ struct global {
 		int sslcachesize;  /* SSL cache size in session, defaults to 2 */
 		unsigned int ssllifetime;   /* SSL session lifetime in seconds */
 		unsigned int ssl_max_record; /* SSL max record size */
+		unsigned int ssl_max_dh_param; /* SSL maximum DH parameter size */
 #endif
 #ifdef USE_ZLIB
 		int zlibmemlevel;/* zlib memlevel */
diff --git a/src/cfgparse.c b/src/cfgparse.c
index c4f092f..9a976ba 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -630,6 +630,14 @@ int cfg_parse_global(const char *file, int linenum, char **args, int kwm)
 		}
 		global.tune.ssl_max_record = atol(args[1]);
 	}
+	else if (!strcmp(args[0], "tune.ssl.max-dh-param-size")) {
+		if (*(args[1]) == 0) {
+			Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+			err_code |= ERR_ALERT | 

RE: [PATCH] Add a configurable support of standardized DH parameters >= 1024 bits, disabled by default

2014-05-02 Thread Lukas Tribus
Hi Remi,



 The default value for max-dh-param-size is set to 1024, thus keeping
 the current behavior by default. Setting a higher value (for example
 2048 with a 2048 bits RSA/DSA server key) allows an easy upgrade
 to stronger ephemeral DH keys (and back if needed).


Please note that Sander used 4096-bit - which is why he saw huge CPU load.

Imho we can default max-dh-param-size to 2048bit.


Best thing would be if Sander could test in his environment with a 2048bit
dhparam manually (in the cert file).




Regards,

Lukas

  


Re: please check

2014-05-02 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-05-02 02:02:11 E
*To: *Rachel Chavez rachel.chave...@gmail.com
*CC: *haproxy@formilux.org
*Subject: *Re: please check

 On Thu, May 01, 2014 at 03:44:46PM -0400, Rachel Chavez wrote:
 The problem is:

 when client sends a request with incomplete body (it has content-length but
 no body) then haproxy returns a 5XX error when it should be a client issue.
 It's a bit more complicated than that. When the request body flows from the
 client to the server, at any moment the server is free to respond (either
 with an error, a redirect, a timeout or whatever). So as soon as we start
 to forward a request body from the client to the server, we're *really*
 waiting for the server to send a verdict about that request.
"At any moment the server is free to respond", yes, but the server cannot
respond *properly* until it gets the complete request.
If the response depends on the request payload, the server doesn't know
whether to respond with a 200 or with a 400.

RFC2616 covers this behavior in depth. See 8.2.3 "Use of the 100
(Continue) Status". This section indicates that it should not be
expected for the server to respond without a request body unless the
client explicitly sends an "Expect: 100-continue" header.



 In the session.c file starting in 2404 i make sure that if I haven't
 received the entire body of the request I continue to wait for it by
 keeping AN_REQ_WAIT_HTTP as part of the request analyzers list as long as
 the client read timeout hasn't fired yet.
 It's unrelated unfortunately and it cannot work. AN_REQ_WAIT_HTTP is meant
 to wait for a *new* request. So if the client doesn't send a complete
 request, it's both wrong and dangerous to expect a new request inside the
 body. When the body is being forwarded, the request flows through
 http_request_forward_body(). This one already tests for the client timeout
 as you can see. I'm not seeing any error processing there though, maybe
 we'd need to set some error codes there to avoid them getting the default
 ones.

 In the proto_http.c file what I tried to do is avoid getting a server
 timeout when the client had timed-out already.
 I agree that it's always the *first* timeout which strikes which should
 indicate the faulty side, because eventhough they're generally set to the
 same value, people who want to enforce a specific processing can set them
 apart.

 Regards,
 Willy



Feature Request: Extract IP from TCP Options Header

2014-05-02 Thread Jim Rippon
 

Hi all, 

As mentioned on the IRC channel today, I have a
requirement to extract an end user's IP address from the TCP Options
header (in my case with key 34 or 0x22, but there are other similar
implementations using 28 or 0x1C). This header is being added by some
Application Delivery Optimisation solutions from providers such as Akamai
(with their IPA product line) and CDNetworks (with their DNA product),
though there are likely others out there hijacking the TCP headers this
way.

Because the options headers won't be forwarded by haproxy to the
back-end servers, the most useful way to deal with this for our http
services would be to extract the IP address encoded and place it into
either the X-Forwarded-For or X-Real-IP headers, so that it can be
understood and handled by the upstream servers. 

Sample implementations
can be found in documentation from F5 [1] and Citrix [2] below. In the
TCP SYN packet (and some later packets, but always in the initial SYN)
we see the option at the end of the options header field like so in our
packet capture: 

22 06 ac 10 05 0a

Broken down, we have:

22 = TCP Options header key (34 in this case with CDNetworks)

06 = field size - this appears to include the key, this size field and the option value

ac 10 05 0a = the IP address of the end-user - faked in this example to private address 172.16.5.10

This would be hugely useful
functionality - it would allow us to avoid the expense of high-end load
balancer devices and licenses to support testing of our CDN
implementations before going into production. 

Regards, 

Jim Rippon


1:
https://devcentral.f5.com/articles/accessing-tcp-options-from-irules


2:
http://blogs.citrix.com/2012/08/31/using-tcp-options-for-client-ip-insertion/


 

Re: please check

2014-05-02 Thread Willy Tarreau
Hi Patrick,

On Fri, May 02, 2014 at 10:57:38AM -0400, Patrick Hemmer wrote:
 *From: *Willy Tarreau w...@1wt.eu
 *Sent: * 2014-05-02 02:02:11 E
 *To: *Rachel Chavez rachel.chave...@gmail.com
 *CC: *haproxy@formilux.org
 *Subject: *Re: please check
 
  On Thu, May 01, 2014 at 03:44:46PM -0400, Rachel Chavez wrote:
  The problem is:
 
  when client sends a request with incomplete body (it has content-length but
  no body) then haproxy returns a 5XX error when it should be a client issue.
  It's a bit more complicated than that. When the request body flows from the
  client to the server, at any moment the server is free to respond (either
  with an error, a redirect, a timeout or whatever). So as soon as we start
  to forward a request body from the client to the server, we're *really*
  waiting for the server to send a verdict about that request.
 "At any moment the server is free to respond", yes, but the server cannot
 respond *properly* until it gets the complete request.

Yes it can, redirects are the most common anticipated response, as the
result of a POST to a page with an expired cookie. And the 302 is a
clean response, it's not even an error.

 If the response depends on the request payload, the server doesn't know
 whether to respond with 200 or with a 400.

With WAFs deployed massively on server infrastructures, 403s are quite
common long before the whole data is received. 413 (request entity too large)
appears quite commonly as well. 401 and 407 can also happen when
authentication is needed.

 RFC2616 covers this behavior in depth. See 8.2.3 "Use of the 100
 (Continue) Status". This section indicates that it should not be
 expected for the server to respond without a request body unless the
 client explicitly sends an "Expect: 100-continue"

Well, 2616 is 15 years old now and pretty obsolete, which is why the
HTTP-bis WG is working on refreshing this. New wording is clearer about
how a request body is used :

   o  A server MAY omit sending a 100 (Continue) response if it has
  already received some or all of the message body for the
  corresponding request, or if the framing indicates that there is
  no message body.

Note the "some or all".

It's very tricky to find which side is responsible for a stalled upload.
I've very commonly found that frozen servers, or those with deep request
queues will stall during body transfers because they still didn't start
to consume the part of the request that's queued into network buffers.

All I mean is that it's unfortunately not *that* black and white. We
*really* need to make a careful difference between what happens on the
two sides. The (hard) goal I'm generally seeking is to do my best so
that a misbehaving user doesn't make us believe that a server is going
badly. That's not easy, considering for example the fact that the 501
message could be understood as a server error while it's triggered by
the client.

In general (unless there's something wrong with the way client timeouts
are reported in http_request_forward_body), client timeouts should be
reported as such, and same for server timeouts. It's possible that there
are corner cases, but we need to be extremely careful about them and not
try to generalize.

Best regards,
Willy




Re: [PATCH] Add a configurable support of standardized DH parameters >= 1024 bits, disabled by default

2014-05-02 Thread Kobus Bensch

Hi

My 2 cents re high CPU and large key sizes: I loaded a package called
haveged on my HAProxy servers and saw significantly faster response
times. The reason for this is that it doubled the entropy on my servers. As a
test I created a 4096-bit GPG key on a system without haveged and it took
nearly 2 hours to generate. I then installed haveged on the same system
and the time came down to 4 minutes.


HTH

Kobus


On 02/05/2014 15:52, Lukas Tribus wrote:

Hi Remi,




The default value for max-dh-param-size is set to 1024, thus keeping
the current behavior by default. Setting a higher value (for example
2048 with a 2048 bits RSA/DSA server key) allows an easy upgrade
to stronger ephemeral DH keys (and back if needed).


Please note that Sander used 4096-bit - which is why he saw huge CPU load.

Imho we can default max-dh-param-size to 2048bit.


Best thing would be if Sander could test in his environment with a 2048bit
dhparam manually (in the cert file).




Regards,

Lukas





--


Trustpay Global Limited is an authorised Electronic Money Institution 
regulated by the Financial Conduct Authority registration number 900043. 
Company No 07427913 Registered in England and Wales with registered address 
130 Wood Street, London, EC2V 6DL, United Kingdom.


For further details please visit our website at www.trustpayglobal.com.

The information in this email and any attachments are confidential and 
remain the property of Trustpay Global Ltd unless agreed by contract. It is 
intended solely for the person to whom or the entity to which it is 
addressed. If you are not the intended recipient you may not use, disclose, 
copy, distribute, print or rely on the content of this email or its 
attachments. If this email has been received by you in error please advise 
the sender and delete the email from your system. Trustpay Global Ltd does 
not accept any liability for any personal view expressed in this message.




Re: [PATCH] Add a configurable support of standardized DH parameters >= 1024 bits, disabled by default

2014-05-02 Thread Remi Gacogne
Hi Lukas,

 The default value for max-dh-param-size is set to 1024, thus keeping
 the current behavior by default. Setting a higher value (for example
 2048 with a 2048 bits RSA/DSA server key) allows an easy upgrade
 to stronger ephemeral DH keys (and back if needed).
 
 
 Please note that Sander used 4096-bit - which is why he saw huge CPU load.
 
 Imho we can default max-dh-param-size to 2048bit.

I am afraid upgrading the DH key size from 1024 bits to 2048 bits can divide
performance by 2 for CPU-bound installations doing mostly DHE key
exchanges, based on some quick benchmarks I ran. Of course it depends on
the ratio of new SSL/TLS connections using DHE (without resumption) you
get, but I think it may be too big of an impact to change the default
without warnings.

-- 
Rémi Gacogne






Re: ha pool haproxy

2014-05-02 Thread Rafaela
Thanks for the replies.

I opted to use round robin, although it is not the best solution.

tks


2014-04-09 2:58 GMT-03:00 Jarno Huuskonen jarno.huusko...@uef.fi:

 Hello,

 On Tue, Apr 08, Rafaela wrote:
  Tks Lukas!
 
  From the threads on the list, it is not possible to scale HAProxy
  horizontally; the way to maintain availability is to use VRRP (master and
  slave) or DNS round robin (despite losing part of the traffic if you do
  not have a health check). Correct?
  My traffic is high and I am working with virtual machines; one haproxy
  working in isolation will not support all the traffic. Any other
  suggestions?

 What about using keepalived to share multiple ip addresses between
 multiple machines (and using dns round robin between these addresses).

 Something like this in keepalived.conf:

 vrrp_instance VI_1 {
 ...
 state BACKUP
 priority 100
 virtual_ipaddress {
 ip.addr.1
 }
 track_script { chk_haproxy }
 }
 vrrp_instance VI_2 {
 ...
 state BACKUP
 priority 99
 virtual_ipaddress {
 ip.addr.2
 }
 track_script { chk_haproxy }
 ...
 }
 vrrp_instance VI_3 {
 ...
 state BACKUP
 priority 98
 virtual_ipaddress {
 ip.addr.3
 }
 track_script { chk_haproxy }
 ...
 }

 and if you change the priorities on different servers then
 VI_1 goes to server1, VI_2 to server2 and so forth. If a
 server fails then one of the remaining servers will get the failed
 ip address.

 You'd need to have enough haproxy/keepalived servers that if some fail
 then the remaining ones can handle the load.
 (And dns roundrobin probably won't balance traffic perfectly between
 the haproxy/keepalived servers).

 -Jarno

 
  2014-04-08 17:33 GMT-03:00 Lukas Tribus luky...@hotmail.com:
 
   Hi,
  
  
How can I have high availability and load balancing in my HAProxy?
Using keepalived only guarantees me an online machine and does not load
balance between HAProxy nodes.

Haproxy load balances traffic and guarantees high availability for your
backends. Haproxy cannot load balance its own incoming traffic, if that's
what you are referring to.
  
   Is your question how to balance load on two haproxy instances?
  
   Take a look at the this thread:
   http://thread.gmane.org/gmane.comp.web.haproxy/14320



SSL, peered sticky tables + nbproc 1?

2014-05-02 Thread Jeff Zellner
Hey all,

We'd like to start terminating SSL (so that we can balance on url
parameters, primarily) on one of our busiest load balancer clusters.
Unfortunately, running with nbproc=1 our peak traffic causes us to
just-barely max out a CPU core -- just enough to severely degrade
latency/performance.

There are a good deal of warnings in the docs about setting nbproc > 1
-- unfortunately it seems to be our best option to handle the
increased load of SSL termination (we have plenty of unused cores, and
not much hope of getting better CPUs).

We're using a sticky table that's peered between all the servers in
the cluster. I *think* that will still work with multiple processes...
So, aside from some extensive testing -- is there anything we should
be worried about with going nbproc > 1 in this configuration?
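For reference, a multi-process SSL setup along those lines might look like the sketch below. This is a hypothetical, untested example for the 1.5-era configuration syntax; the exact directives, and especially how peers synchronization behaves with multiple processes, should be verified against the documentation for your version, since peers have historically had restrictions with nbproc > 1:

```
# hypothetical sketch, not a tested configuration
global
    nbproc 4

frontend fe_ssl
    # one bind line per process so each core gets its own listener
    bind :443 ssl crt /etc/haproxy/site.pem process 1
    bind :443 ssl crt /etc/haproxy/site.pem process 2
    bind :443 ssl crt /etc/haproxy/site.pem process 3
    bind :443 ssl crt /etc/haproxy/site.pem process 4
    default_backend servers
```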

Thanks!

JZ



Re: redirect question

2014-05-02 Thread Bryan Talbot
On Fri, May 2, 2014 at 2:05 AM, bjun...@gmail.com bjun...@gmail.com wrote:

 Hi,

 i'm trying a basic redirect with HAProxy:


 frontend http


  acl is_domain hdr_dom(host) -i abc.example.com

  acl root path_reg ^$|^/$


  redirect location http://abc.example.com/?code=1234 code 301 if
 is_domain root


 Unfortunately this ends up in a redirect loop.



All paths begin with /. I suspect that you don't want path_beg but just
path for an exact match.

  acl root path /


-Bryan


RE: [PATCH] Add a configurable support of standardized DH parameters >= 1024 bits, disabled by default

2014-05-02 Thread Sander Klein

On 02.05.2014 16:52, Lukas Tribus wrote:

Hi Remi,




The default value for max-dh-param-size is set to 1024, thus keeping
the current behavior by default. Setting a higher value (for example
2048 with a 2048 bits RSA/DSA server key) allows an easy upgrade
to stronger ephemeral DH keys (and back if needed).



Please note that Sander used 4096-bit - which is why he saw huge CPU
load.


Imho we can default max-dh-param-size to 2048bit.


Best thing would be if Sander could test in his environment with a 
2048bit

dhparam manually (in the cert file).


I'll try to test around a bit this weekend.

Sander



Re: please check

2014-05-02 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-05-02 11:15:07 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *Rachel Chavez rachel.chave...@gmail.com, haproxy@formilux.org
*Subject: *Re: please check

 Hi Patrick,

 On Fri, May 02, 2014 at 10:57:38AM -0400, Patrick Hemmer wrote:
 *From: *Willy Tarreau w...@1wt.eu
 *Sent: * 2014-05-02 02:02:11 E
 *To: *Rachel Chavez rachel.chave...@gmail.com
 *CC: *haproxy@formilux.org
 *Subject: *Re: please check

 On Thu, May 01, 2014 at 03:44:46PM -0400, Rachel Chavez wrote:
 The problem is:

 when client sends a request with incomplete body (it has content-length but
 no body) then haproxy returns a 5XX error when it should be a client issue.
 It's a bit more complicated than that. When the request body flows from the
 client to the server, at any moment the server is free to respond (either
 with an error, a redirect, a timeout or whatever). So as soon as we start
 to forward a request body from the client to the server, we're *really*
 waiting for the server to send a verdict about that request.
 "At any moment the server is free to respond", yes, but the server cannot
 respond *properly* until it gets the complete request.
 Yes it can, redirects are the most common anticipated response, as the
 result of a POST to a page with an expired cookie. And the 302 is a
 clean response, it's not even an error.
I should have clarified what I meant by "properly". I didn't mean
that the server can't respond at all, as there are many cases it can,
some of which you point out. I meant that if the server is expecting a
request body, it can't respond with a 200 until it verifies that request
body.

 If the response depends on the request payload, the server doesn't know
 whether to respond with 200 or with a 400.
 With WAFs deployed massively on server infrastructures, 403 are quite
 common long before the whole data. 413 request entity too large appears
 quite commonly as well. 401 and 407 can also happen when authentication
 is needed.

 RFC2616 covers this behavior in depth. See 8.2.3 "Use of the 100
 (Continue) Status". This section indicates that it should not be
 expected for the server to respond without a request body unless the
 client explicitly sends an "Expect: 100-continue"
 Well, 2616 is 15 years old now and pretty obsolete, which is why the
 HTTP-bis WG is working on refreshing this. New wording is clearer about
 how a request body is used :

o  A server MAY omit sending a 100 (Continue) response if it has
   already received some or all of the message body for the
   corresponding request, or if the framing indicates that there is
   no message body.

 Note the "some or all".
I'm assuming you're quoting from:
http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-26#section-5.1.1

This only applies if the Expect: 100-continue was sent. Expect:
100-continue was meant to solve the issue where the client has a large
body, and wants to make sure that the server will accept the body before
sending it (and wasting bandwidth). Meaning that without sending
Expect: 100-continue, it is expected that the server will not send a
response until the body has been sent.


 It's very tricky to find which side is responsible for a stalled upload.
 I've very commonly found that frozen servers, or those with deep request
 queues will stall during body transfers because they still didn't start
 to consume the part of the request that's queued into network buffers.

 All I mean is that it's unfortunately not *that* black and white. We
 *really* need to make a careful difference between what happens on the
 two sides. The (hard) goal I'm generally seeking is to do my best so
 that a misbehaving user doesn't make us believe that a server is going
 badly. That's not easy, considering for example the fact that the 501
 message could be understood as a server error while it's triggered by
 the client.

 In general (unless there's something wrong with the way client timeouts
 are reported in http_request_forward_body), client timeouts should be
 reported as such, and same for server timeouts. It's possible that there
 are corner cases, but we need to be extremely careful about them and not
 try to generalize.
I agree, a client timeout should be reported as such, and that's what
this is all about. If the client sends half the body (or no body), and
then freezes, the client timeout should kick in and send back a 408, not
the server timeout resulting in a 504.

I think in this regard it is very clear:
* The server may respond with the HTTP response status code any time it
feels like it.
* Enable the server timeout and disable the client timeout upon any of
the following:
* The client sent "Expect: 100-continue" and has completed all headers
* The complete client request has been sent, including body if
Content-Length > 0
* Writing to the server socket would result in a blocking write
(indicating that the remote end is not processing).
* Enable the client timeout and 

Re: redirect question

2014-05-02 Thread Bryan Talbot
On Fri, May 2, 2014 at 9:13 AM, bjun...@gmail.com bjun...@gmail.com wrote:

 Hi Bryan,

 same problem with your acl.


 I think the acl isn't the problem here, i suspect the redirect line.




You are redirecting requests for abc.example.com/ to abc.example.com/ (the
query string is not part of the path, so the path is still / and matches
your acl again), which is why you have a loop. Options would be to change
the host or path, or check for your query string, or a cookie being set or
something to stop the loop.
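One way to break the loop could be sketched as below, under the assumption that the query string is what distinguishes the already-redirected request. The acl names come from the original post; the `has_code` acl and the `url_param` fetch are illustrative and should be verified against your haproxy version's documentation:

```
acl is_domain hdr_dom(host) -i abc.example.com
acl root      path /
acl has_code  url_param(code) -m found

# only redirect when the code parameter is not already present
redirect location http://abc.example.com/?code=1234 code 301 if is_domain root !has_code
```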



-Bryan


frontend http


  acl is_domain hdr_dom(host) -i abc.example.com

  acl root path_reg ^$|^/$


  redirect location http://abc.example.com/?code=1234  code 301  if
 is_domain  root


 Unfortunately this ends up in a redirect loop.



 All paths begin with /. I suspect that you don't want path_beg but just
 path for an exact match.

   acl root path /





Re: please check

2014-05-02 Thread Willy Tarreau
On Fri, May 02, 2014 at 12:18:43PM -0400, Patrick Hemmer wrote:
  "At any moment the server is free to respond", yes, but the server cannot
  respond *properly* until it gets the complete request.
  Yes it can, redirects are the most common anticipated response, as the
  result of a POST to a page with an expired cookie. And the 302 is a
  clean response, it's not even an error.
 I should have clarified what I meant by properly more. I didn't mean
 that the server can't respond at all, as there are many cases it can,
 some of which you point out. I meant that if the server is expecting a
 request body, it can't respond with a 200 until it verifies that request
 body.

OK, but from a reverse-proxy point of view, all of them are equally valid,
and there's even no way to know if the server is interested in receiving
these data at all. The only differences are that some of them are considered
precious (i.e. those returning 200) and other ones less so, since they're
possibly ephemeral.

  If the response depends on the request payload, the server doesn't know
  whether to respond with 200 or with a 400.
  With WAFs deployed massively on server infrastructures, 403 are quite
  common long before the whole data. 413 request entity too large appears
  quite commonly as well. 401 and 407 can also happen when authentication
  is needed.
 
  RFC2616 covers this behavior in depth. See 8.2.3 Use of the 100
  (Continue) Status. This section indicates that it should not be
  expected for the server to respond without a request body unless the
  client explicitly sends an Expect: 100-continue
  Well, 2616 is 15 years old now and pretty obsolete, which is why the
  HTTP-bis WG is working on refreshing this. New wording is clearer about
  how a request body is used :
 
 o  A server MAY omit sending a 100 (Continue) response if it has
already received some or all of the message body for the
corresponding request, or if the framing indicates that there is
no message body.
 
  Note the some or all.
 I'm assuming you're quoting from:
 http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-26#section-5.1.1

Yes indeed. Ah in fact I found the exact part I was looking for, it's in
the same block, two points below :

   o  A server that responds with a final status code before reading the
  entire message body SHOULD indicate in that response whether it
  intends to close the connection or continue reading and discarding
  the request message (see Section 6.6 of [Part1]).

 This only applies if the Expect: 100-continue was sent. Expect:
 100-continue was meant to solve the issue where the client has a large
 body, and wants to make sure that the server will accept the body before
 sending it (and wasting bandwidth). Meaning that without sending
 Expect: 100-continue, it is expected that the server will not send a
 response until the body has been sent.

No, it is expected that it will need to consume all the data before the
connection may be reused for sending another request. That is the point
of 100. And the problem is that if the server closes the connection when
responding early (typically a 302) and doesn't drain the client's data,
there's a high risk that the TCP stack will send an RST that can arrive
before the actual response, making the client unaware of the response.
That's why the server must consume the data even if it responds before
the end.

(...)
  In general (unless there's something wrong with the way client timeouts
  are reported in http_request_forward_body), client timeouts should be
  reported as such, and same for server timeouts. It's possible that there
  are corner cases, but we need to be extremely careful about them and not
  try to generalize.
 I agree, a client timeout should be reported as such, and that's what
 this is all about. If the client sends half the body (or no body), and
 then freezes, the client timeout should kick in and send back a 408, not
 the server timeout resulting in a 504.

Yes, I agree with this description.

 I think in this regard it is very clear.
 * The server may respond with the HTTP response status code any time it
 feels like it.

OK

 * Enable the server timeout and disable the client timeout upon any of
 the following:
 * The client sent Expect: 100-continue and has completed all headers

No, this one is wrong as well, as the client is expected to start sending
if it does not see the 100-continue, for compatibility with 1.0 and pre-2616
servers, because this header was invented very late. So both sides are
responsible for acting here, and the client timeout must not be cleared.

 * The complete client request has been sent, including body if
 Content-Length  0

Yes, or chunked encoding is used and all the request, body and trailers
have been received. This is already done exactly this way (unless there's
a bug of course).

 * Writing to the server socket would result in a blocking write
 (indicating that the remote end is not processing).

Re: redirect question

2014-05-02 Thread Bryan Talbot
On Fri, May 2, 2014 at 9:48 AM, bjun...@gmail.com bjun...@gmail.com wrote:

 Maybe I don't understand you correctly.


 When I change http://abc.example.com/?code=1234 to
 http://abc.example.com/code=1234 everything works.

 The logic is:

 Only redirect to the redirect location if abc.example.com is called
 directly (or with appended slash). Don't redirect if anything is present
 behind the slash (to avoid a redirect loop).




The part after the slash is the problem. The query string isn't considered
part of the *path*, so in your case the path of / and /?code=1234 is
equal to / in both cases.

Your second case above with the path of /code=1234 works because it's not
an exact string match of /.
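
A quick way to see the difference (illustrative only, not tested):

```
acl root_path path /   # matches "/" and "/?code=1234": the query string is not part of the path
acl root_url  url  /   # matches only a bare "/", since url includes the query string
```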

There are many sources to describe the various parts of a URL including the
haproxy documentation:
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#1.2.1

-Bryan




 P.S.: using HAProxy 1.4.24



 2014-05-02 18:27 GMT+02:00 Bryan Talbot bryan.tal...@playnext.com:

 On Fri, May 2, 2014 at 9:13 AM, bjun...@gmail.com bjun...@gmail.comwrote:

 Hi Bryan,

 same problem with your acl.


  I think the acl isn't the problem here, I suspect the redirect line.




  You are redirecting requests for abc.example.com/ to abc.example.com/ which
 is why you have a loop. Options would be to change the host or path,
 or check for your query string, or a cookie being set or something to stop
 the loop.



 -Bryan


  frontend http


  acl is_domain hdr_dom(host) -i abc.example.com

  acl root path_reg ^$|^/$


  redirect location http://abc.example.com/?code=1234  code 301
 if  is_domain  root


 Unfortunately this ends up in a redirect loop.



 All paths begin with /. I suspect that you don't want path_beg but
 just path for an exact match.

   acl root path /








Re: SSL, peered sticky tables + nbproc > 1?

2014-05-02 Thread Willy Tarreau
hi,

On Fri, May 02, 2014 at 11:11:39AM -0600, Jeff Zellner wrote:
 Well, I thought wrong -- I see that peered sticky tables absolutely
 don't work with multiple processes, and sticky rules give a warning.
 
 Would that be a feature on the roadmap? I can see that it's probably
 pretty non-trivial -- but would be super useful, at least for us.

Yes that's clearly on the roadmap. In order of fixing/improvements,
here's what I'd like to see :
  - peers work fine when only one process uses them
  - have the ability to run with explicit peers per process : if you
just have to declare as many peers sections as processes, it's
better than nothing.
  - have stick-table (and peers) work in multi-process mode with a
shared memory system like we do with SSL contexts.
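
If the second option materializes, the configuration might look roughly like
this (purely illustrative; nothing below exists yet, and peer names/ports are
invented):

```
# one peers section per process
peers mypeers_p1
    peer lb1_p1 10.0.0.1:1024
    peer lb2_p1 10.0.0.2:1024

peers mypeers_p2
    peer lb1_p2 10.0.0.1:1025
    peer lb2_p2 10.0.0.2:1025
```

Each process would then reference only its own section from its stick-table
declarations.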

Currently the issue is that all processes try to connect to the remote
and present the same peer name, resulting in the previous connection to
be dropped. And incoming connections will only feed one process and not
the other ones.

I'd like to be able to do at least #1 for the release, I do think it's
doable, because I attempted it 18 months ago and ended up in a complex
corner case of inter-proxy dependence calculation, only to realize that
we didn't need to have haproxy automatically deduce everything, just let
it do what the user wants, and document the limits.

Regards,
Willy




Re: ha pool haproxy

2014-05-02 Thread Willy Tarreau
On Fri, May 02, 2014 at 12:46:50PM -0300, Rafaela wrote:
 Thanks for the replies.
 
 I opted to use round robin, although it's not the best solution.

RR DNS with multiple VIPs as Jarno showed is quite common, as it provides
both load balancing and fault tolerance. Other people use ECMP and load
balance on the front switch. It's a little trickier as you need a way to
tell the switch that you're there or not (typically via a dynamic routing
daemon). But that scales better.
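
As an illustration of the RR DNS setup (an invented zone fragment; names and
addresses are examples only):

```
; two A records for the same name: resolvers rotate between the VIPs
www.example.com.  60  IN  A  192.0.2.10   ; VIP held by lb1
www.example.com.  60  IN  A  192.0.2.11   ; VIP held by lb2
```

With a short TTL, a failed VIP can be pulled from the zone quickly, which is
what gives the fault tolerance on top of the load spreading.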

Also, since you say that you're using virtual machines and a single instance
will not stand the load, you can also deploy a real machine in front of your
VMs, then get rid of the useless VMs once you migrate their whole configuration
into the single real machine.

Willy




Re: please check

2014-05-02 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-05-02 12:56:16 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *Rachel Chavez rachel.chave...@gmail.com, haproxy@formilux.org
*Subject: *Re: please check

 On Fri, May 02, 2014 at 12:18:43PM -0400, Patrick Hemmer wrote:
 At any moment the server is free to respond yes, but the server cannot
 respond *properly* until it gets the complete request.
 Yes it can, redirects are the most common anticipated response, as the
 result of a POST to a page with an expired cookie. And the 302 is a
 clean response, it's not even an error.
 I should have clarified what I meant by properly more. I didn't mean
 that the server can't respond at all, as there are many cases it can,
 some of which you point out. I meant that if the server is expecting a
 request body, it can't respond with a 200 until it verifies that request
 body.
 OK, but from a reverse-proxy point of view, all of them are equally valid,
 and there's even no way to know if the server is interested in receiving
 these data at all. The only differences are that some of them are considered
 precious (ie those returning 200) and other ones less since they're
 possibly ephemeral.

 If the response depends on the request payload, the server doesn't know
 whether to respond with 200 or with a 400.
 With WAFs deployed massively on server infrastructures, 403 are quite
 common long before the whole data. 413 request entity too large appears
 quite commonly as well. 401 and 407 can also happen when authentication
 is needed.

 RFC2616 covers this behavior in depth. See 8.2.3 Use of the 100
 (Continue) Status. This section indicates that it should not be
 expected for the server to respond without a request body unless the
  client explicitly sends an Expect: 100-continue
  Well, 2616 is 15 years old now and pretty obsolete, which is why the
 HTTP-bis WG is working on refreshing this. New wording is clearer about
 how a request body is used :

o  A server MAY omit sending a 100 (Continue) response if it has
   already received some or all of the message body for the
   corresponding request, or if the framing indicates that there is
   no message body.

 Note the some or all.
 I'm assuming you're quoting from:
 http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-26#section-5.1.1
 Yes indeed. Ah in fact I found the exact part I was looking for, it's in
 the same block, two points below :

o  A server that responds with a final status code before reading the
   entire message body SHOULD indicate in that response whether it
   intends to close the connection or continue reading and discarding
   the request message (see Section 6.6 of [Part1]).

 This only applies if the Expect: 100-continue was sent. Expect:
 100-continue was meant to solve the issue where the client has a large
 body, and wants to make sure that the server will accept the body before
 sending it (and wasting bandwidth). Meaning that without sending
 Expect: 100-continue, it is expected that the server will not send a
 response until the body has been sent.
 No, it is expected that it will need to consume all the data before the
 connection may be reused for sending another request. That is the point
 of 100. And the problem is that if the server closes the connection when
 responding early (typically a 302) and doesn't drain the client's data,
 there's a high risk that the TCP stack will send an RST that can arrive
 before the actual response, making the client unaware of the response.
 That's why the server must consume the data even if it responds before
 the end.
 A 100-continue expectation informs recipients that the client is
   about to send a (presumably large) message body in this request and
   wishes to receive a 100 (Continue) interim response if the request-
   line and header fields are not sufficient to cause an immediate
   success, redirect, or error response.  This allows the client to wait
   for an indication that it is worthwhile to send the message body
   before actually doing so, which can improve efficiency when the
   message body is huge or when the client anticipates that an error is
   likely


 (...)
 In general (unless there's something wrong with the way client timeouts
 are reported in http_request_forward_body), client timeouts should be
 reported as such, and same for server timeouts. It's possible that there
 are corner cases, but we need to be extremely careful about them and not
 try to generalize.
 I agree, a client timeout should be reported as such, and that's what
 this is all about. If the client sends half the body (or no body), and
 then freezes, the client timeout should kick in and send back a 408, not
 the server timeout resulting in a 504.
 Yes, I agree with this description.

  I think in this regard it is very clear.
 * The server may respond with the HTTP response status code any time it
 feels like it.
 OK

 * Enable the server timeout and disable the client 

Re: SSL, peered sticky tables + nbproc > 1?

2014-05-02 Thread Bryan Talbot
It sounds like Jeff ran out of CPU for SSL termination and that could
be addressed as described by Willy here

https://www.mail-archive.com/haproxy@formilux.org/msg13104.html

and allow him to stay with a single-process stick table for the actual load
balancing.

-Bryan
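
For reference, the scheme from the linked post boils down to something like
the following (a rough, untested sketch assuming a 1.5-dev build with
nbproc/bind-process; all names, paths and ports are invented):

```
global
    nbproc 4

# processes 2-4 only terminate SSL and pass the connection on
frontend ssl_tier
    bind-process 2-4
    bind :443 ssl crt /etc/haproxy/cert.pem
    default_backend to_clear

backend to_clear
    server clear unix@/var/run/haproxy-clear.sock send-proxy

# process 1 does the actual balancing, so the stick table stays coherent
frontend clear_tier
    bind-process 1
    bind unix@/var/run/haproxy-clear.sock accept-proxy
    default_backend app
```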




On Fri, May 2, 2014 at 10:23 AM, Willy Tarreau w...@1wt.eu wrote:

 hi,

 On Fri, May 02, 2014 at 11:11:39AM -0600, Jeff Zellner wrote:
  Well, I thought wrong -- I see that peered sticky tables absolutely
  don't work with multiple processes, and sticky rules give a warning.
 
  Would that be a feature on the roadmap? I can see that it's probably
  pretty non-trivial -- but would be super useful, at least for us.

 Yes that's clearly on the roadmap. In order of fixing/improvements,
 here's what I'd like to see :
   - peers work fine when only one process uses them
   - have the ability to run with explicit peers per process : if you
 just have to declare as many peers sections as processes, it's
 better than nothing.
   - have stick-table (and peers) work in multi-process mode with a
 shared memory system like we do with SSL contexts.

 Currently the issue is that all processes try to connect to the remote
 and present the same peer name, resulting in the previous connection to
 be dropped. And incoming connections will only feed one process and not
 the other ones.

 I'd like to be able to do at least #1 for the release, I do think it's
 doable, because I attempted it 18 months ago and ended up in a complex
  corner case of inter-proxy dependence calculation, only to realize that
 we didn't need to have haproxy automatically deduce everything, just let
 it do what the user wants, and document the limits.

 Regards,
 Willy





Re: please check

2014-05-02 Thread Willy Tarreau
On Fri, May 02, 2014 at 01:32:30PM -0400, Patrick Hemmer wrote:
  This only applies if the Expect: 100-continue was sent. Expect:
  100-continue was meant to solve the issue where the client has a large
  body, and wants to make sure that the server will accept the body before
  sending it (and wasting bandwidth). Meaning that without sending
  Expect: 100-continue, it is expected that the server will not send a
  response until the body has been sent.
  No, it is expected that it will need to consume all the data before the
  connection may be reused for sending another request. That is the point
  of 100. And the problem is that if the server closes the connection when
  responding early (typically a 302) and doesn't drain the client's data,
  there's a high risk that the TCP stack will send an RST that can arrive
  before the actual response, making the client unaware of the response.
  That's why the server must consume the data even if it responds before
  the end.
  A 100-continue expectation informs recipients that the client is
about to send a (presumably large) message body in this request and
wishes to receive a 100 (Continue) interim response if the request-
line and header fields are not sufficient to cause an immediate
success, redirect, or error response.  This allows the client to wait
for an indication that it is worthwhile to send the message body
before actually doing so, which can improve efficiency when the
message body is huge or when the client anticipates that an error is
likely

Yes exactly. Since there's no way to stop in the middle of a sent body,
when you start you need to complete and the other side needs to drain.
I think we're saying the same thing from two different angles :-)

 While I strongly disagree with your interpretation of Expect:
 100-continue, I also don't much care about 100-continue. Hardly anyone
 uses it.

100% of web services I've seen use it in order to maintain connection pools :-)
And that's stupid BTW, because they keep the connections open in order to save
a connect round trip, which is replaced with a longer roundtrip involving half
of the request in the first packet, and keeping large amounts of memory in use!

 I was just using it as documentation that the server should not
 be expected to respond before the entire request has been sent.

I know that you used it for this but I disagree with your conclusion,
based on reality in field and even on what the spec says.

 The main thing I care about is not responding with 504 if the client
 freezes while sending the body. This has been a thorn in our side for
 quite some time now, and why I am interested in this patch.

I easily understand. I've seen a place where webservices were used a lot,
and in these environments, they use 500 to return not found! Quite a
mess when you want to set up some monitoring and alerts to report servers
going sick!!!

 I've set up a test scenario, and the only time haproxy responds with 408
 is if the client times out in the middle of request headers. If the
 client has sent all headers, but no body, or partial body, it times out
 after the configured 'timeout server' value, and responds with 504.

OK that's really useful. I'll try to reproduce that case. Could you please
test again with a shorter client timeout than server timeout, just to ensure
that it's not just a sequencing issue ?

 Applying the patch solves this behavior. But my test scenario is very
 simple, and I'm not sure if it has any other consequences.

It definitely has, which is why I'm trying to find the *exact* problem in
order to fix it.

Thanks,
Willy




Re: SSL, peered sticky tables + nbproc > 1?

2014-05-02 Thread Willy Tarreau
On Fri, May 02, 2014 at 10:59:00AM -0700, Bryan Talbot wrote:
 It sounds like Jeff ran out of CPU for SSL termination and that could
 be addressed as described by Willy here
 
 https://www.mail-archive.com/haproxy@formilux.org/msg13104.html
 
 and allow him to stay with a single-process stick table for the actual load
 balancing.

Yes that's perfectly possible. And when we have proxy proto v2 with SSL info,
it'll be even better :-)

Willy




Re: please check

2014-05-02 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-05-02 14:00:24 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *Rachel Chavez rachel.chave...@gmail.com, haproxy@formilux.org
*Subject: *Re: please check

 On Fri, May 02, 2014 at 01:32:30PM -0400, Patrick Hemmer wrote:
 I've set up a test scenario, and the only time haproxy responds with 408
 is if the client times out in the middle of request headers. If the
 client has sent all headers, but no body, or partial body, it times out
 after the configured 'timeout server' value, and responds with 504.
 OK that's really useful. I'll try to reproduce that case. Could you please
 test again with a shorter client timeout than server timeout, just to ensure
 that it's not just a sequencing issue ?
I have. In my test setup, timeout client 1000 and timeout server 2000.

With incomplete headers I get:
haproxy[8893]: 127.0.0.1:41438 [02/May/2014:14:11:26.373] f1 f1/NOSRV
-1/-1/-1/-1/1001 408 212 - - cR-- 0/0/0/0/0 0/0 BADREQ

With no body I get:
haproxy[8893]: 127.0.0.1:41439 [02/May/2014:14:11:29.576] f1 b1/s1
0/0/0/-1/2002 504 194 - - sH-- 1/1/1/1/0 0/0 GET / HTTP/1.1

With incomplete body I get:
haproxy[8893]: 127.0.0.1:41441 [02/May/2014:14:11:29.779] f1 b1/s1
0/0/0/-1/2002 504 194 - - sH-- 0/0/0/0/0 0/0 GET / HTTP/1.1
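
A minimal config that should reproduce the same three cases (a sketch, not
necessarily the actual test setup; the bind and server addresses are invented):

```
defaults
    mode http
    timeout connect 1000
    timeout client  1000
    timeout server  2000

frontend f1
    bind :8080
    default_backend b1

backend b1
    server s1 127.0.0.1:8000
```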




 Applying the patch solves this behavior. But my test scenario is very
 simple, and I'm not sure if it has any other consequences.
 It definitely has, which is why I'm trying to find the *exact* problem in
 order to fix it.

 Thanks,
 Willy



-Patrick


Re: SSL, peered sticky tables + nbproc > 1?

2014-05-02 Thread James Hogarth
On 2 May 2014 19:02, Willy Tarreau w...@1wt.eu wrote:

 On Fri, May 02, 2014 at 10:59:00AM -0700, Bryan Talbot wrote:
  It sounds like Jeff ran out of CPU for SSL termination and that could
  be addressed as described by Willy here
 
  https://www.mail-archive.com/haproxy@formilux.org/msg13104.html
 
  and allow him to stay with a single-process stick table for the actual load
  balancing.

 Yes that's perfectly possible. And when we have proxy proto v2 with SSL info,
 it'll be even better :-)

 Willy



We've done quite a bit of work on this internally recently to provide SSL
multiprocess with sane load balancing.

There's a couple of small edge cases we've got left then we were intending
to push it up for comments...

I've literally just got home but I'll follow up in the office next week to
see how close we are.

James


Re: SSL, peered sticky tables + nbproc > 1?

2014-05-02 Thread Jeff Zellner
Great, we'd love to see that.

And thanks for the other SSL performance trick. We might be able to
make that and some SSL cache tuning work for us, as well.

On Fri, May 2, 2014 at 12:23 PM, James Hogarth james.hoga...@gmail.com wrote:

 On 2 May 2014 19:02, Willy Tarreau w...@1wt.eu wrote:

 On Fri, May 02, 2014 at 10:59:00AM -0700, Bryan Talbot wrote:
  It sounds like Jeff ran out of CPU for SSL termination and that could
  be addressed as described by Willy here
 
  https://www.mail-archive.com/haproxy@formilux.org/msg13104.html
 
  and allow him to stay with a single-process stick table for the actual load
  balancing.

  Yes that's perfectly possible. And when we have proxy proto v2 with SSL info,
  it'll be even better :-)

 Willy



 We've done quite a bit of work on this internally recently to provide SSL
 multiprocess with sane load balancing.

 There's a couple of small edge cases we've got left then we were intending
 to push it up for comments...

 I've literally just got home but I'll follow up in the office next week to
 see how close we are.

 James



Re: please check

2014-05-02 Thread Willy Tarreau
On Fri, May 02, 2014 at 02:14:41PM -0400, Patrick Hemmer wrote:
 *From: *Willy Tarreau w...@1wt.eu
 *Sent: * 2014-05-02 14:00:24 E
 *To: *Patrick Hemmer hapr...@stormcloud9.net
 *CC: *Rachel Chavez rachel.chave...@gmail.com, haproxy@formilux.org
 *Subject: *Re: please check
 
  On Fri, May 02, 2014 at 01:32:30PM -0400, Patrick Hemmer wrote:
  I've set up a test scenario, and the only time haproxy responds with 408
  is if the client times out in the middle of request headers. If the
  client has sent all headers, but no body, or partial body, it times out
  after the configured 'timeout server' value, and responds with 504.
  OK that's really useful. I'll try to reproduce that case. Could you please
  test again with a shorter client timeout than server timeout, just to ensure
  that it's not just a sequencing issue ?
 I have. In my test setup, timeout client 1000 and timeout server 2000.
 
 With incomplete headers I get:
 haproxy[8893]: 127.0.0.1:41438 [02/May/2014:14:11:26.373] f1 f1/NOSRV
 -1/-1/-1/-1/1001 408 212 - - cR-- 0/0/0/0/0 0/0 BADREQ
 
 With no body I get:
 haproxy[8893]: 127.0.0.1:41439 [02/May/2014:14:11:29.576] f1 b1/s1
 0/0/0/-1/2002 504 194 - - sH-- 1/1/1/1/0 0/0 GET / HTTP/1.1
 
 With incomplete body I get:
 haproxy[8893]: 127.0.0.1:41441 [02/May/2014:14:11:29.779] f1 b1/s1
 0/0/0/-1/2002 504 194 - - sH-- 0/0/0/0/0 0/0 GET / HTTP/1.1

Great, thank you. I think that it tends to fuel the theory that the
response error is not set where it should be in the forwarding path.

I'll check this ASAP. BTW, it would be nice if you could check this
as well with 1.4.25, I guess it does the same.

Best regards,
Willy




Re: SSL, peered sticky tables + nbproc > 1?

2014-05-02 Thread Willy Tarreau
Hi James,

On Fri, May 02, 2014 at 07:23:21PM +0100, James Hogarth wrote:
 We've done quite a bit of work on this internally recently to provide SSL
 multiprocess with sane load balancing.
 
 There's a couple of small edge cases we've got left then we were intending
 to push it up for comments...
 
 I've literally just got home but I'll follow up in the office next week to
 see how close we are.

You're welcome. I really want to release 1.5-final ASAP, but at least
with everything in place so that we can safely fix the minor remaining
annoyances. So if we identify quickly that things are still done wrong
and need to be addressed before the release (eg: because we'll be forced
to change the way some config settings are used), better do it ASAP.
Otherwise if we're sure that a given config behaviour will not change,
such fixes can happen in -stable because they won't affect users which
do not rely on them.

Best regards,
Willy




Re: please check

2014-05-02 Thread Patrick Hemmer
 



*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-05-02 15:06:13 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *Rachel Chavez rachel.chave...@gmail.com, haproxy@formilux.org
*Subject: *Re: please check

 On Fri, May 02, 2014 at 02:14:41PM -0400, Patrick Hemmer wrote:
 *From: *Willy Tarreau w...@1wt.eu
 *Sent: * 2014-05-02 14:00:24 E
 *To: *Patrick Hemmer hapr...@stormcloud9.net
 *CC: *Rachel Chavez rachel.chave...@gmail.com, haproxy@formilux.org
 *Subject: *Re: please check

 On Fri, May 02, 2014 at 01:32:30PM -0400, Patrick Hemmer wrote:
 I've set up a test scenario, and the only time haproxy responds with 408
 is if the client times out in the middle of request headers. If the
 client has sent all headers, but no body, or partial body, it times out
 after the configured 'timeout server' value, and responds with 504.
 OK that's really useful. I'll try to reproduce that case. Could you please
 test again with a shorter client timeout than server timeout, just to ensure
 that it's not just a sequencing issue ?
 I have. In my test setup, timeout client 1000 and timeout server 2000.

 With incomplete headers I get:
 haproxy[8893]: 127.0.0.1:41438 [02/May/2014:14:11:26.373] f1 f1/NOSRV
 -1/-1/-1/-1/1001 408 212 - - cR-- 0/0/0/0/0 0/0 BADREQ

 With no body I get:
 haproxy[8893]: 127.0.0.1:41439 [02/May/2014:14:11:29.576] f1 b1/s1
 0/0/0/-1/2002 504 194 - - sH-- 1/1/1/1/0 0/0 GET / HTTP/1.1

 With incomplete body I get:
 haproxy[8893]: 127.0.0.1:41441 [02/May/2014:14:11:29.779] f1 b1/s1
 0/0/0/-1/2002 504 194 - - sH-- 0/0/0/0/0 0/0 GET / HTTP/1.1
 Great, thank you. I think that it tends to fuel the theory that the
 response error is not set where it should be in the forwarding path.

 I'll check this ASAP. BTW, it would be nice if you could check this
 as well with 1.4.25, I guess it does the same.

 Best regards,
 Willy

Confirmed. Exact same behavior with 1.4.25

-Patrick



Re: please check

2014-05-02 Thread Willy Tarreau
On Fri, May 02, 2014 at 03:22:45PM -0400, Patrick Hemmer wrote:
  
 
 
 
 *From: *Willy Tarreau w...@1wt.eu
 *Sent: * 2014-05-02 15:06:13 E
 *To: *Patrick Hemmer hapr...@stormcloud9.net
 *CC: *Rachel Chavez rachel.chave...@gmail.com, haproxy@formilux.org
 *Subject: *Re: please check
 
  On Fri, May 02, 2014 at 02:14:41PM -0400, Patrick Hemmer wrote:
  *From: *Willy Tarreau w...@1wt.eu
  *Sent: * 2014-05-02 14:00:24 E
  *To: *Patrick Hemmer hapr...@stormcloud9.net
  *CC: *Rachel Chavez rachel.chave...@gmail.com, haproxy@formilux.org
  *Subject: *Re: please check
 
  On Fri, May 02, 2014 at 01:32:30PM -0400, Patrick Hemmer wrote:
  I've set up a test scenario, and the only time haproxy responds with 408
  is if the client times out in the middle of request headers. If the
  client has sent all headers, but no body, or partial body, it times out
  after the configured 'timeout server' value, and responds with 504.
  OK that's really useful. I'll try to reproduce that case. Could you please
  test again with a shorter client timeout than server timeout, just to 
  ensure
  that it's not just a sequencing issue ?
  I have. In my test setup, timeout client 1000 and timeout server 2000.
 
  With incomplete headers I get:
  haproxy[8893]: 127.0.0.1:41438 [02/May/2014:14:11:26.373] f1 f1/NOSRV
  -1/-1/-1/-1/1001 408 212 - - cR-- 0/0/0/0/0 0/0 BADREQ
 
  With no body I get:
  haproxy[8893]: 127.0.0.1:41439 [02/May/2014:14:11:29.576] f1 b1/s1
  0/0/0/-1/2002 504 194 - - sH-- 1/1/1/1/0 0/0 GET / HTTP/1.1
 
  With incomplete body I get:
  haproxy[8893]: 127.0.0.1:41441 [02/May/2014:14:11:29.779] f1 b1/s1
  0/0/0/-1/2002 504 194 - - sH-- 0/0/0/0/0 0/0 GET / HTTP/1.1
  Great, thank you. I think that it tends to fuel the theory that the
  response error is not set where it should be in the forwarding path.
 
  I'll check this ASAP. BTW, it would be nice if you could check this
  as well with 1.4.25, I guess it does the same.
 
  Best regards,
  Willy
 
 Confirmed. Exact same behavior with 1.4.25

Thank you!

Willy




[PATCH] Extending Proxy Protocol

2014-05-02 Thread David S
Hi Willy--
   Here's my latest on extending Proxy Protocol V2.
   I'm still testing this, but I would like to solicit any feedback that
you may have.
   I believe I have incorporated all of your comments to date.
   So far, I have implemented CN as a first sub-vector.  I'm willing to
write a couple others, if you would like to suggest any.
   Thanks,
--Dave
diff --git a/include/proto/connection.h b/include/proto/connection.h
index 8609f17..0db677e 100644
--- a/include/proto/connection.h
+++ b/include/proto/connection.h
@@ -41,7 +41,9 @@ int conn_fd_handler(int fd);
 
 /* receive a PROXY protocol header over a connection */
 int conn_recv_proxy(struct connection *conn, int flag);
-int make_proxy_line(char *buf, int buf_len, struct sockaddr_storage *src, struct sockaddr_storage *dst);
+int make_proxy_line(char *buf, int buf_len, struct server *srv, struct connection *remote);
+int make_proxy_line_v1(char *buf, int buf_len, struct sockaddr_storage *src, struct sockaddr_storage *dst);
+int make_proxy_line_v2(char *buf, int buf_len, struct server *srv, struct connection *remote);
 
 /* returns true is the transport layer is ready */
 static inline int conn_xprt_ready(const struct connection *conn)
diff --git a/include/proto/ssl_sock.h b/include/proto/ssl_sock.h
index 9d891d9..454edd5 100644
--- a/include/proto/ssl_sock.h
+++ b/include/proto/ssl_sock.h
@@ -40,6 +40,10 @@ int ssl_sock_prepare_srv_ctx(struct server *srv, struct proxy *px);
 void ssl_sock_free_all_ctx(struct bind_conf *bind_conf);
 const char *ssl_sock_get_cipher_name(struct connection *conn);
 const char *ssl_sock_get_proto_version(struct connection *conn);
+int ssl_sock_is_ssl(struct connection *conn);
+int ssl_sock_get_cert_used(struct connection *conn);
+char *ssl_sock_get_common_name(struct connection *conn);
+unsigned int ssl_sock_get_verify_result(struct connection *conn);
 
 #endif /* _PROTO_SSL_SOCK_H */
 
diff --git a/include/types/connection.h b/include/types/connection.h
index 5341a86..953cb16 100644
--- a/include/types/connection.h
+++ b/include/types/connection.h
@@ -265,6 +265,87 @@ struct connection {
 } addr; /* addresses of the remote side, client for producer and server for consumer */
 };
 
+/* proxy protocol v2 definitions */
+#define PP2_SIGNATURE_LEN 12
+#define PP2_HEADER_LEN    16
+#define PP2_VERSION       0x20
+#define PP2_CMD_LOCAL  0x00
+#define PP2_CMD_PROXY  0x01
+#define PP2_FAM_UNSPEC 0x00
+#define PP2_FAM_INET   0x10
+#define PP2_FAM_INET6  0x20
+#define PP2_FAM_UNIX   0x30
+#define PP2_TRANS_UNSPEC   0x00
+#define PP2_TRANS_STREAM   0x01
+#define PP2_TRANS_DGRAM    0x02
+
+#define PP2_ADDR_LEN_UNSPEC   0
+#define PP2_ADDR_LEN_INET    12
+#define PP2_ADDR_LEN_INET6   36
+#define PP2_ADDR_LEN_UNIX   216
+
+#define PP2_HDR_LEN_UNSPEC  (PP2_HEADER_LEN + PP2_ADDR_LEN_UNSPEC)
+#define PP2_HDR_LEN_INET    (PP2_HEADER_LEN + PP2_ADDR_LEN_INET)
+#define PP2_HDR_LEN_INET6   (PP2_HEADER_LEN + PP2_ADDR_LEN_INET6)
+#define PP2_HDR_LEN_UNIX    (PP2_HEADER_LEN + PP2_ADDR_LEN_UNIX)
+
+struct proxy_hdr_v2 {
+   uint8_t sig[12];   /* hex 0D 0A 0D 0A 00 0D 0A 51 55 49 54 0A */
+   uint8_t cmd;   /* protocol version and command */
+   uint8_t fam;   /* protocol family and transport */
+   uint16_t len;  /* number of following bytes part of the header */
+};
+
+union proxy_addr {
+   struct {/* for TCP/UDP over IPv4, len = 12 */
+   uint32_t src_addr;
+   uint32_t dst_addr;
+   uint16_t src_port;
+   uint16_t dst_port;
+   } ipv4_addr;
+   struct {/* for TCP/UDP over IPv6, len = 36 */
+   uint8_t  src_addr[16];
+   uint8_t  dst_addr[16];
+   uint16_t src_port;
+   uint16_t dst_port;
+   } ipv6_addr;
+   struct {/* for AF_UNIX sockets, len = 216 */
+   uint8_t src_addr[108];
+   uint8_t dst_addr[108];
+   } unix_addr;
+};
+
+#define PP2_TYPE_SSL    0x20
+#define PP2_TYPE_SSL_CN 0x21
+#define PP2_TYPE_SSL_DN 0x22
+
+struct tlv {
+   uint16_t length;
+   uint8_t type;
+   uint8_t value[0];
+}__attribute__((packed));
+
+struct tlv_ssl {
+   struct tlv tlv;
+   uint32_t version;
+   uint32_t client;
+   uint32_t verify;
+   uint8_t sub_tlv[0];
+}__attribute__((packed));
+
+#define PP2_CLIENT_SSL  0x0001
+#define PP2_CLIENT_CERT 0x0002
+
+struct tlv_cn {
+   struct tlv tlv;
+   uint8_t cn[0];
+}__attribute__((packed));
+
+struct tlv_dn {
+   struct tlv tlv;
+   uint8_t dn[0];
+}__attribute__((packed));
+
 #endif /* _TYPES_CONNECTION_H */
 
 /*
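As a side note on the wire format the hunk above defines: a PROXY v2 header is the 12-byte signature, a version/command byte, a family/transport byte, a big-endian length of the address block, then the addresses themselves. The following standalone sketch (the `build_ppv2_header()` helper is hypothetical, not part of this patch; the constants are copied from the hunk) serializes a TCP-over-IPv4 header the way `make_proxy_line_v2()` would have to:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Constants copied from the patch's types/connection.h additions. */
#define PP2_HEADER_LEN     16
#define PP2_VERSION      0x20
#define PP2_CMD_PROXY    0x01
#define PP2_FAM_INET     0x10
#define PP2_TRANS_STREAM 0x01
#define PP2_ADDR_LEN_INET  12
#define PP2_HDR_LEN_INET (PP2_HEADER_LEN + PP2_ADDR_LEN_INET)

/* Fixed 12-byte PROXY v2 signature, per the proxy_hdr_v2 comment above. */
static const uint8_t pp2_sig[12] =
	"\x0D\x0A\x0D\x0A\x00\x0D\x0A\x51\x55\x49\x54\x0A";

/* build_ppv2_header() is a hypothetical illustration helper: it writes a
 * PROXY v2 header for a proxied TCP/IPv4 connection into out[] and returns
 * the number of bytes written. Addresses and ports must already be in
 * network byte order, as they would be in a sockaddr_in. */
size_t build_ppv2_header(uint8_t *out,
                         uint32_t src_addr, uint32_t dst_addr,
                         uint16_t src_port, uint16_t dst_port)
{
	uint16_t len = htons(PP2_ADDR_LEN_INET);   /* bytes after the 16-byte header */

	memcpy(out, pp2_sig, 12);                  /* fixed signature              */
	out[12] = PP2_VERSION | PP2_CMD_PROXY;     /* 0x21: version 2, cmd PROXY   */
	out[13] = PP2_FAM_INET | PP2_TRANS_STREAM; /* 0x11: IPv4 over TCP          */
	memcpy(out + 14, &len, 2);                 /* big-endian address block len */
	memcpy(out + 16, &src_addr, 4);            /* layout matches ipv4_addr     */
	memcpy(out + 20, &dst_addr, 4);
	memcpy(out + 24, &src_port, 2);
	memcpy(out + 26, &dst_port, 2);
	return PP2_HDR_LEN_INET;
}
```

With `len` set to `PP2_ADDR_LEN_INET` a receiver can skip the whole header without understanding it, which is also why the TLV extensions (`struct tlv`, `PP2_TYPE_SSL`, ...) only have to bump that length field rather than change the fixed part.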
diff --git a/include/types/server.h b/include/types/server.h
index 54ab813..8c4c784 100644
--- a/include/types/server.h
+++ b/include/types/server.h
@@ -57,6 +57,12 @@
 #define SRV_SEND_PROXY 0x0800  /* this server talks the